Historian Yuval Harari delivered a chilling warning at the 2026 World Economic Forum, arguing that AI is no longer a tool but an agent that can think, manipulate, reshape society, and make decisions on its own. From legal personhood to culture and identity, Harari questioned whether humanity is ready for AI dominance.
He claimed that AIs can think and will come to dominate financial markets, courts, and churches. Political leaders using AI to fight their wars fail to realize that the AI may ultimately defeat them, and people may abdicate their decision-making to AI and give up critical thinking altogether.
Harari said that AI will create new financial systems that humans will not understand, comparing humans to a horse being sold at market that does not grasp the meaning of the coins exchanged in the trade.
He said that children will be educated in an entirely new way, interacting more with AI than with humans, and called this the biggest and scariest psychological experiment in history, one that is being conducted right now.
He warned that we are facing a severe identity crisis as well as an immigration crisis, with the immigrants being AI systems that he said will be superior to humans. These AI ‘immigrants’ will also take over jobs and culture and will likely be politically disloyal, owing their loyalty instead to a corporation or to one of two countries, the US or China. AIs may even become legal persons with rights; in the US, corporations are considered legal persons, in New Zealand rivers have been recognized as legal persons, and in India certain gods have been granted such recognition.
From Decrypt:
AI Is Poised to Take Over Language, Law and Religion, Historian Yuval Noah Harari Warns
At Davos, the historian said AI is evolving into an autonomous agent that could eventually force governments to decide whether machines deserve legal recognition.
In brief
Harari said AI should be understood as an active, autonomous agent rather than a passive tool.
He warned that systems built primarily on words, including religion, law, and finance, face heightened exposure to AI.
Harari urged leaders to decide whether to treat AI systems as legal persons before those choices are made for them.
Historian and author Yuval Noah Harari warned at the World Economic Forum on Tuesday that humanity is at risk of losing control over language, which he called its defining “superpower,” as artificial intelligence increasingly operates via autonomous agents rather than passive tools.
The author of “Sapiens,” Harari has become a frequent voice in global debates about the societal implications of artificial intelligence. He argued that legal codes, financial markets, and organized religion rely almost entirely on language, leaving them especially exposed to machines that can generate and manipulate text at scale.
“Humans took over the world not because we are the strongest physically, but because we discovered how to use words to get thousands and millions and billions of strangers to cooperate,” he said. “This was our superpower.”
Harari pointed to religions grounded in sacred texts, including Judaism, Christianity, and Islam, arguing that AI’s ability to read, retain, and synthesize vast bodies of writing could make machines the most authoritative interpreters of scripture.
“If laws are made of words, then AI will take over the legal system,” he said. “If books are just combinations of words, then AI will take over books. If religion is built from words, then AI will take over religion.”
In Davos, Harari also compared the spread of AI systems to a new form of immigration, and said the debate around the technology will soon focus on whether governments should grant AI systems legal personhood. Several states, including Utah, Idaho, and North Dakota, have already passed laws explicitly stating that AI cannot be considered a person under the law.
Harari closed his remarks by urging global leaders to act quickly on laws governing AI and not to assume the technology will remain a neutral servant. He compared the current push to adopt the technology to historical cases in which mercenaries later seized power.
“Ten years from now, it will be too late for you to decide whether AIs should function as persons in the financial markets, in the courts, in the churches,” he said. “Somebody else will already have decided it for you. If you want to influence where humanity is going, you need to make a decision now.”
Harari’s comments may hit hard for those fearful of AI’s advancing spread, but not everyone agreed with his framing. Professor Emily M. Bender, a linguist at the University of Washington, said that framing the risks as Harari did only shifts attention away from the human actors and institutions responsible for building and deploying AI systems.
“It sounds to me like it’s really a bid to obfuscate the actions of the people and corporations building these systems,” Bender told Decrypt in an interview. “And also a demand that everyone should just relinquish our own human rights in many domains, including the right to our languages, to the whims of these companies in the guise of these so-called artificial intelligence systems.”
Bender rejected the idea that “artificial intelligence” describes a clear or neutral category of technology.
“The term artificial intelligence doesn’t refer to a coherent set of technologies,” she said. “It is, effectively, and always has been, a marketing term.” She added that systems designed to imitate professionals such as doctors, lawyers, or clergy lack legitimate use cases.
“What is the purpose of something that can sound like a doctor, a lawyer, a clergy person, and so on?” Bender said. “The purpose there is fraud. Period.”
While Harari pointed to the growing use of AI agents to manage bank accounts and business interactions, Bender said the risk lies in how readily people trust machine-generated outputs that appear authoritative—while lacking human accountability.
“If you have a system that you can poke at with a question and have something come back out that looks like an answer—that is stripped of its context and stripped of any accountability for the answer, but positioned as coming from some all-knowing oracle—then you can see how people would want that to exist,” Bender said. “I think there’s a lot of risk there that people will start orienting toward it and using that output to shape their own ideas, beliefs, and actions.”