
What we can learn about AI from Jewish myth

It was a scorching day in June 1965, and Israel’s pioneering Weizmann Institute of Science in the southern hamlet of Rehovoth was unveiling only the second computer the fledgling Jewish state had ever seen. But in selecting a speaker for the ceremony, the institute’s chairman didn’t invite a mathematician, a scientist, or an engineer. Instead, he asked Gershom Scholem, a decorated Jewish historian, to address the gathering. Why? Because, weeks earlier, Scholem had captivated the chairman by labeling the new machine “the Golem of Rehovoth.”

In his remarks, Scholem recounted the legend of the golem, a superhuman being fashioned from clay and designed by humans with the godlike power of creation to protect embattled Jewish communities in various shtetls in Europe.


In the historian’s telling, the origin stories of the computer and the golem bear numerous similarities: Both were created from material substances (silicon and clay), both were “invested” with a spark of intelligence, and both were made in the image of their creators. Scholem called the golem’s creation “an affirmation of the productive and creative power of Man,” a paean to the immense abilities with which we have been endowed. He name-checked two brilliant engineers widely regarded as the midwives of the modern computer, praising “John von Neumann and Norbert Wiener, who contributed more than anyone else to the magic that has produced the modern Golem.”

But golem stories usually ended unhappily, with the creature slipping the bonds of its human creators and threatening to go berserk. The golem, Scholem cautioned, was a creature "controlled by its creator … but which at the same time may have a dangerous tendency to outgrow that control and develop destructive potentialities."

Thus did a Jewish historian 60 years ago pinpoint the critical issue bound up in modern technology: How can we nurture the human urge to create and improve life for all of humanity while ensuring our machines benefit the many, not the few, and remain under our supervision? Simply replace “computer” with “artificial intelligence” to encounter the urgent, divisive debate occupying today’s technologists, lawyers, philosophers, and policymakers.

Whether computers will soon perform most tasks at least as well as humans, whether our robot overlords will exterminate or enslave us, and whether humans will exhaust our natural desire and capacity for artistic and practical creation: These questions have bedeviled observers for decades, but the emergence in late 2022 of ChatGPT turbocharged the discussion. Almost overnight, OpenAI’s shiny new toy was helping write wedding toasts, obituaries, news summaries, and even term papers. The chatbot’s emergence inspired myriad think pieces about the future of writing, research, and the creative act itself. Individuals, companies, governments, and groups the world over fiercely contested AI’s potential costs and benefits.

Many cheered the transformations these tools had already begun to effect. Others strenuously decried their fearsome capabilities. Some downplayed the breakthroughs and continued to view our machines as extensions of ourselves, though they embraced AI's potential. And there were those who minimized and even ridiculed the machines' achievements. These reactions fall into four distinct schools of thought, differing along two axes: one measuring how independent machines are from their creators, the other assessing the ethical, societal, and practical value of machine breakthroughs.

Positive Autonomists, including OpenAI's Sam Altman and Marc Andreessen, a leading AI-focused venture capitalist, regard AI's recent advances as truly revolutionary, a difference in kind, not just degree, from previous computing technology. They wholeheartedly applaud these breakthroughs, hailing their capacity to enhance and extend life in fundamental ways.

Meanwhile, Negative Autonomists, including leading AI scientist Geoffrey Hinton, self-described “rationalist” Erik Hoel, and, at times, Elon Musk, also regard generative AI as transformative technology that will deeply alter human existence, but for the worse — some go so far as to urge its immediate and permanent deactivation.

Then, too, the Positive Automatoners, including Orly Lobel of the University of San Diego School of Law and Yasuo Kuniyoshi of the University of Tokyo’s Next Generation AI Research Lab, regard advanced machines as no more than an extension of human capabilities, a force multiplier that reflects and implements its programmers’ own abilities, assumptions, and biases and that can benefit humanity if programmed appropriately.

And finally, Negative Automatoners such as the decorated linguist Noam Chomsky and AI pioneer Gary Marcus downplay the significance of generative AI, view it as a mere mechanical prosthetic, and fret that it will harm civilization by fundamentally cheapening the human experience.

How can we possibly reconcile these vastly divergent views? How do we simultaneously boost and block AI’s advances or subsidize and throttle its evolution? How may we facilitate machine breakthroughs while retaining the ability to terminate destructive robots?

To find answers, we need to turn back the clock many centuries, where we find antecedents of the four AI schools of thought that shed light on, and lend nuance to, those perspectives.

As Scholem explained in 1965, the golem was the product of human innovation, created by a person using holy words, capable of operating on its own, and with a purpose that served the community as a whole. Most importantly, its operation remained dependent on its human creator, who was able and willing to terminate its existence when absolutely necessary, such as when the initially good golem turned bad.

But the golem isn’t the only supernatural entity we can learn from as we enter the AI age. Another mythical creature infused the lives and consciousnesses of Jewish and non-Jewish people in the medieval and early modern periods. This phantasm, known as the dybbuk, or demon, possessed humans living in tightly knit communities, reflecting and amplifying particular character traits of the person it possessed. Summoning and expelling the malign spirit required the rabbi and other congregation members to identify and isolate the offending character trait, be it heresy, sexual deviance, fraudulent business conduct, or the like, and expunge it from the host and, by extension, the community.

The dybbuk also had a friendly cousin known as the maggid, a force that would inhabit and inspire scholars much as a muse arouses a poet. Centuries ago, rabbis recorded their ecstatic encounters with maggids, who spurred bursts of creativity and amplified their better angels. Non-Jewish cultures across the globe, too, are replete with stories about this type of demon or sprite — benevolent spirits that inspire human innovation.

A deeper understanding of the ancient, medieval, and early modern predecessors to the four schools of AI thought helps us formulate a sensible approach to maximize AI’s potential while minimizing its drawbacks.

First and foremost, these examples should inspire us to embrace the tremendously beneficial possibilities that today’s machines present. Whether or not AI can be properly characterized as autonomous, it is rapidly transforming numerous fields, from basic science to drug discovery to language processing to artistic expression. We should welcome innovation that enhances and extends human life, just as the ancients ushered the golem into existence.

Second, much as the golem could be created only through purity of spirit and thought, we must develop contemporary machines for appropriate purposes, to benefit the larger community. While numerous regulatory schemes have arisen to do exactly this, many of them would unduly shackle the golem. Instead, we should embrace a rigorous set of voluntary guidelines that both AI companies and industry organizations would adopt and enforce.


Third, in promulgating boundaries on AI development, we must ensure that our machines reflect the best of ourselves — that they serve as our maggids and not our dybbuks. And finally, we must attune ourselves to the possibility, however slim, that AI could cause catastrophic harm to us and our planet. Even if the probability of such an event is extremely small, we must make certain to include some sort of kill switch able to terminate our machines in the event of calamity.

Scholem concluded his Rehovoth remarks by issuing a warning to the newly created golem and its creator: “Develop peacefully and don’t destroy the world. Shalom.” We would do well to heed his sage advice.

Michael M. Rosen is an attorney and writer in Israel, a nonresident senior fellow at the American Enterprise Institute, and author of Like Silicon From Clay: What Ancient Jewish Sources Can Teach Us About AI, which was released on March 4 and from which this piece is excerpted.

