Stephen Hawking & The Perils of AI


Stephen Hawking: Governments are engaged in an AI arms race that could destroy humanity

“Mankind is in danger of destroying ourselves by our greed and stupidity.”

On Monday, Stephen Hawking, the English theoretical physicist, cosmologist, author, and Director of Research at the Centre for Theoretical Cosmology at the University of Cambridge, appeared on the Larry King show. He was less than optimistic about the future of humanity.

Six years ago, Hawking was on the King show and said, “Mankind is in danger of destroying ourselves by our greed and stupidity.” When asked whether he thought humanity had changed since that last visit, Hawking replied, “We have certainly not become less greedy or less stupid.”

Summarizing the last six years, Stephen Hawking said, “Six years ago, I was warning about pollution and overcrowding; they have gotten worse since then.”

Hawking’s faith in humanity is apparently dwindling as the theoretical physicist predicted little more than doom and gloom. “The population has grown by half a billion since our last interview, with no end in sight. At this rate, it will be eleven billion by 2100. Air pollution has increased by 8 percent over the past five years. More than 80 percent of inhabitants of urban areas are exposed to unsafe levels of air pollution,” he said.
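As a quick back-of-the-envelope check of the eleven-billion-by-2100 figure quoted above, we can ask what constant growth rate it implies. The 2016 baseline of roughly 7.4 billion is an assumption for illustration, not a number from the interview:

```python
# Back-of-the-envelope check of the "eleven billion by 2100" figure.
# The 2016 baseline of ~7.4 billion is an assumption, not a number
# taken from the interview.
pop_2016 = 7.4e9       # world population in 2016 (assumed)
target_2100 = 11e9     # the projection Hawking cites
years = 2100 - 2016

# Constant compound annual growth rate that would reach the target
rate = (target_2100 / pop_2016) ** (1 / years) - 1
print(f"Implied annual growth rate: {rate:.2%}")  # roughly 0.5% per year
```

A rate of about half a percent per year is well below the roughly 1.1 percent observed in the mid-2010s, which is why demographers who project 11 billion assume growth will keep slowing.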

Hawking says that addressing pollution is a major concern, but we’ve yet to do so.

“The increase in air pollution and the emission of increasing levels of carbon dioxide. Will we be too late to avoid dangerous levels of global warming?” he said.

If humans don’t kill themselves with pollution, according to Hawking, it will be the robots that do us in. When King asked Hawking about the dangers of artificial intelligence (AI), Hawking explained that when government is involved in technological evolution, the outlook is bleak.

“Governments seem to be engaged in an AI arms race, designing planes and weapons with intelligent technologies. The funding for projects directly beneficial to the human race, such as improved medical screening seems a somewhat lower priority.”

When King asked Hawking about his views on Ray Kurzweil’s theory of the singularity, Hawking shot it down as “too optimistic.”

“I think that his views are both too simplistic and too optimistic. Exponential growth will not continue to accelerate, something we don’t predict will interrupt it as has happened with similar forecasts in the past,” he said.

As he continued, Hawking alluded to the fear some people hold that AI could wipe humanity from the Earth because its goals may differ from ours.

Hawking said, “Once machines reach the critical stage of being able to evolve themselves, we cannot predict whether their goals will be the same as ours.”

King went on to ask Hawking, “Will artificial intelligence ever go on to render human society obsolete?”

“Artificial intelligence has the potential to evolve faster than the human race. Beneficial AI could co-exist with humans and augment our capabilities. But a rogue AI could be difficult to stop.”

After claiming that humans are stupid and greedy and that AI will destroy the world, Hawking noted that it is still important to pursue AI, as it will be highly beneficial to humanity in the future.

Via Free Thought Project


Time to Get Your Inner Genius Out

Genius: Can Anybody Be One?


Genius can be defined as a high IQ, extreme creativity, or something else altogether.

Credit: DeepArt

What makes a genius?

Perhaps for athletes, a genius is an Olympic medalist. In entertainment, a genius could be defined as an EGOT winner, someone who has won an Emmy, Grammy, Oscar and Tony award. For Mensa, the exclusive international society comprising members of “high intelligence,” someone who scores at or above the 98th percentile on an IQ or other standardized intelligence test could be considered genius.

The most common definition of genius falls in line with Mensa’s approach: someone with exceptional intelligence.

In his new science series “Genius” on PBS, Stephen Hawking is testing out the idea that anyone can “think like a genius.” By posing big questions — for instance, “Can we travel through time?” — to people with average intelligence, the famed theoretical physicist aims to find the answers through the sheer power of the human mind.

“It’s a fun show that tries to find out if ordinary people are smart enough to think like the greatest minds who ever lived,” Hawking said in a statement. “Being an optimist, I think they will.”

Optimism aside, answering a genius-level question does not a genius make — at least, not according to psychologist Frank Lawlis, supervisory testing director for American Mensa.

“The geniuses ask questions. They don’t know the answers, but they know a lot of questions and their curiosity takes them into their fields,” Lawlis told Live Science. “[They’re] somebody that has the capacity to inquire at that high level and to be curious to pursue that high level of understanding and then be able to communicate it to the rest of us.”

To qualify for Mensa, you must statistically be a genius, with measured intelligence that exceeds that of 98 percent of the population. However, Lawlis said even these tests can exclude some of the most brilliant thinkers.

“The way you put items together to test for intelligence is that you already know the answer,” Lawlis said. “That’s the whole point. You create questions that have real answers.”

For instance, Albert Einstein would have likely done poorly on IQ tests, Lawlis said.

“It really comes down to thinking outside the box, and you really can’t test that,” Lawlis said. “When they take these tests, instead of directing their attention to the correct answer, they think of a jillion other answers that would also work, so consequently they get confused and do very poorly.”

Consisting of a mixture of intelligence, creativity and contribution to society, genius is hard to pinpoint, said Dean Keith Simonton, a distinguished professor of psychology at the University of California, Davis.

In the Scientific American Mind magazine’s special issue on genius, Simonton hypothesized that all geniuses use the same general process to make their contributions to the world.

They start with a search for ideas, not necessarily a problem in need of a solution. From this search, geniuses will generate a number of questions, and begin a long series of trials and errors. They then find a solution, for a problem others may not have even been aware of.

“Talent hits a target no one else can hit. Genius hits a target no one else can see,” Simonton said, quoting the 19th-century German philosopher Arthur Schopenhauer.

“Exceptional thinkers, it turns out, stand on common ground when they launch their arrows into the unknown,” Simonton said.

In an attempt to “discern what combination of elements tends to produce particularly creative brains,” psychiatrist and neuroscientist Nancy Andreasen at the University of Iowa used functional magnetic resonance imaging (fMRI), which measures brain activity by detecting changes associated with blood flow.

Andreasen selected the creative subjects from the University of Iowa Writers’ Workshop, and a control group from a mixture of professions. The control group was matched to the writers based on age, education and IQ — with both test and control groups averaging an IQ of 120, considered very smart but not exceptionally so, according to Andreasen.

Based on these controls, Andreasen looked for what separated the creatives’ brains from those of the controls.

During the fMRI scans of participants, the subjects were asked to perform three different tasks: word association, picture association and pattern recognition. The creatives’ brains showed stronger activations in their association cortices. These are the most extensively developed regions in the human brain and help interpret and utilize visual, auditory, sensory and motor information.

Andreasen set out to find what else, in addition to brain processes, linked the 13 creatives’ brains.

“Some people see things others cannot, and they are right, and we call them creative geniuses,” Andreasen wrote in The Atlantic, referring to participants in her study. “Some people see things others cannot, and they are wrong, and we call them mentally ill.”

And then there are people who fit into both categories.

What Andreasen found is that there is another common mark of creative genius: mental illness.

Through interviews and extensive research, Andreasen discovered that the creatives she studied had a higher rate of mental illness, which included a family history of mental illness. The most common diagnoses were bipolar disorder, depression, anxiety and alcoholism. The question now is whether the mental illness contributes to the genius or if it’s the other way around, she said.

In a study of the brain of one of the most famous geniuses in history, Einstein, scientists found distinct physical features, which may help to explain his genius, Live Science reported when the study came out in the journal Brain in 2012.

Previously unpublished photographs of the physicist’s brain revealed that Einstein had extra folding in his gray matter, the part of the brain that processes conscious thinking, the study researchers found. His frontal lobes, the brain regions tied to abstract thought and planning, had particularly elaborate folding.

“It’s a really sophisticated part of the human brain,” Dean Falk, study co-author and an anthropologist at Florida State University, told Live Science, referring to gray matter. “And [Einstein’s] is extraordinary.”

Be it high IQ, curiosity or creativity, the factor that makes someone a genius may remain a mystery. Though Mensa can continue to test for quantitative intelligence in areas such as verbal capacity and spatial reasoning, there is no test for the next Einstein, Lawlis said.

“I don’t know anybody that could really predict this extremely high level of intelligence and contribution,” Lawlis said. “That’s the mystery.”

Original article on Live Science.


Musk & Hawking On Dangers of AI

Don’t let AI take our jobs (or kill us): Stephen Hawking and Elon Musk sign open letter warning of a robot uprising

  • Letter says there is a ‘broad consensus’ that AI is making good progress
  • Areas benefiting from AI research include driverless cars and robot motion
  • But in the short term, it warns AI may put millions of people out of work
  • In the long term, robots could become far more intelligent than humans
  • Elon Musk has previously linked the development of autonomous, thinking machines to ‘summoning the demon’

Artificial Intelligence has been described as a threat that could be ‘more dangerous than nukes’.

Now a group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, have signed an open letter promising to ensure AI research benefits humanity.

The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.

A group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking (pictured), have signed an open letter promising to ensure AI research benefits humanity.


The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.

The authors say there is a ‘broad consensus’ that AI research is making good progress and would have a growing impact on society.

It highlights speech recognition, image analysis, driverless cars, translation and robot motion as having benefited from the research.

‘The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable,’ the authors write.


Elon Musk previously linked the development of autonomous, thinking machines, to ‘summoning the demon’

But it issued a stark warning that research into the rewards of AI had to be matched with an equal effort to avoid the potential damage it could wreak.

For instance, in the short term, it claims AI may put millions of people out of work.

In the long term, it could have the potential to play out like a fictional dystopia in which an intelligence greater than our own begins acting against its programming.

‘Our AI systems must do what we want them to do,’ the letter says.

‘Many economists and computer scientists agree that there is valuable research to be done on how to maximise the economic benefits of AI while mitigating adverse effects, which could include increased inequality and unemployment.’

Other signatories to the FLI’s letter include Luke Muehlhauser, executive director of the Machine Intelligence Research Institute, and Frank Wilczek, professor of physics at the Massachusetts Institute of Technology and a Nobel laureate.

The letter comes just weeks after Professor Hawking warned that AI could someday overtake humans.

Space X Founder Elon Musk: AI is our ‘biggest existential threat’


Google has set up an ethics board to oversee its work in artificial intelligence.

The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.

One of its founders warned that artificial intelligence is the ‘number one risk for this century,’ and believes it could play a part in human extinction.

‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.

Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the ‘number 1 risk for this century.’

The ethics board, revealed by the website The Information, is intended to ensure the projects are not abused.

Neuroscientist Demis Hassabis, 37, founded DeepMind two years ago with the aim of trying to help computers think like humans.

Speaking at an event in London, Hawking told the BBC: ‘The development of full artificial intelligence could spell the end of the human race.’

This echoes claims he made earlier in the year when he said success in creating AI ‘would be the biggest event in human history, [but] unfortunately, it might also be the last.’

In November, Elon Musk, the entrepreneur behind SpaceX and Tesla, warned that ‘something seriously dangerous happening’ as a result of machines with artificial intelligence could come in as few as five years.

He has previously linked the development of autonomous, thinking machines, to ‘summoning the demon’.

Speaking at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium in October, Musk described artificial intelligence as our ‘biggest existential threat’.

He said: ‘I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence.

‘I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.

‘With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and … he’s sure he can control the demon? Doesn’t work out.’

The letter issued a stark warning that research into the rewards of AI had to be matched with an equal effort to avoid the potential damage it could wreak


Hawking on No Black Holes

Stephen Hawking: There Are No Black Holes

by Ian O’Neill, Discovery News   |   January 25, 2014
Black Hole Simulation
This annotated image labels several features in the simulation, including the event horizon of the black hole.
Credit: NASA’s Goddard Space Flight Center/J. Schnittman, J. Krolik (JHU) and S. Noble (RIT)

On reading a new paper by Stephen Hawking that appeared online this week, you would have been forgiven for thinking the world-renowned British physicist was spoofing us. Hawking’s unpublished work — titled “Information Preservation and Weather Forecasting for Black Holes” and uploaded to the arXiv preprint service — declares that “there are no black holes.”

Professor Stephen Hawking speaks about “Why We Should Go into Space” for the NASA Lecture Series, April 21, 2008.
Credit: NASA/Paul Alers

Keep in mind that Hawking’s bedrock theory of evaporating black holes revolutionized our understanding that the gravitational behemoths are not immortal; through a quantum quirk they leak particles (and therefore mass) via “Hawking radiation” over time. What’s more, astronomers are finding new and exciting ways to detect black holes — they are even working on an interferometer network that may, soon, be able to directly image a black hole’s event horizon!
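The evaporation described here follows from a standard textbook result (the formulas below are established physics, not taken from this article): a black hole of mass $M$ radiates thermally at the Hawking temperature, and its lifetime grows steeply with mass.

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B},
\qquad
t_{\mathrm{evap}} \sim \frac{5120\,\pi G^2 M^3}{\hbar c^4}
```

Because $T_H \propto 1/M$ and $t_{\mathrm{evap}} \propto M^3$, astrophysical black holes are extraordinarily cold and slow to evaporate; a solar-mass black hole would take on the order of $10^{67}$ years to disappear.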


Black Holes Infographic
Black holes are strange regions where gravity is strong enough to bend light, warp space and distort time.
Credit: Karl Tate, contributor

Has Hawking changed his mind? Are black holes merely a figment of our collective imaginations? Are all those crank “alternative” theories of the Cosmos true?!

Fortunately not.

Stephen Hawking hasn’t changed his mind about the whole black hole thing, but he has thrown a complex physics paradox into the limelight, one that has been gnawing at the heart of theoretical physics for the last 18 months.

Black Hole Fight Club

It all boils down to a conflict between two fundamental ideas in physics that govern the very fabric of our Universe: the clash of Einstein’s general relativity and quantum dynamics. And it just so happens that the extreme environment in and around a black hole makes for the perfect “fight club” for the two theories to duke it out. But what’s the first rule of the black hole fight club? Don’t talk about the firewall, lest you get sucked into an argument with a theoretical physicist.

At a California Institute of Technology (Caltech) lecture in April 2013, Hawking and other prominent theoretical physicists had an opportunity to describe the problem at hand. Caltech’s Kip Thorne, for example, described the firewall paradox as “a burning issue in theoretical physics.”

The very basis of this burning issue is the thing that makes black holes black — the event horizon. In its most basic form, the event horizon of a black hole is the point at which even light cannot escape the gravitational clutches of the massive black hole singularity. If light cannot escape, it stands to reason that it will appear as a black sphere in space. It is a cosmic one-way street: everything goes in, nothing comes out.
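The size of that one-way boundary can be sketched with the standard Schwarzschild formula, $r_s = 2GM/c^2$ (textbook physics, not drawn from this article):

```python
# Schwarzschild radius r_s = 2GM/c^2: the event-horizon radius of a
# non-rotating black hole. Constants are standard physical values;
# this sketch is illustrative and not taken from the article.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius in metres for a non-rotating mass."""
    return 2 * G * mass_kg / c**2

print(f"{schwarzschild_radius(M_SUN):.0f} m")  # ~2954 m: about 3 km
```

Compressing the Sun inside a sphere about three kilometres across would make it a black hole; for the supermassive black holes astronomers observe, the horizon scales linearly with mass.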

An Unlucky Astronaut

In the general relativity universe, an astronaut who had the misfortune to fall toward a black hole wouldn’t notice anything untoward as he or she passed across the event horizon. It would be a fairly peaceful event, no drama. “Although later on you’re doomed and you’ll encounter very strong gravitational forces that will pull you apart,” noted Caltech physicist John Preskill at the 2013 Caltech event.

However, the quantum universe contradicts this “no drama” event horizon idea as predicted by general relativity.

In 2012, a group of physicists headed by Joseph Polchinski of the University of California, Santa Barbara, revealed their finding that if black holes truly do not destroy information — a standpoint that Hawking himself reluctantly advocates — and information can escape from the black hole through Hawking radiation, then there must be a raging inferno just inside the event horizon, which they dub the “firewall.”

In this case, rather than falling into a “no drama” event horizon, our unlucky astronaut gets burnt to a crisp before getting ripped apart by tidal shear. This is the very antithesis of “no drama” and, therefore, a paradox.

This apparent conflict between what general relativity predicts and what quantum dynamics predicts — two very established fields in physics — is precisely what theoretical physicists are trying to understand. This appears to be yet another situation where gravity and quantum dynamics don’t play nice, the solution of which may transform the way we view the Universe.

Apparent Horizons

So, when Hawking, one of the key players in the great firewall debate, writes a short paper on the topic (regardless of whether or not it has been published), the world takes note.

Hawking’s solution to the paradox removes the black hole’s event horizon, thereby removing the paradox; no event horizon, no firewall. But we’re told all black holes have event horizons — the line beyond which anything is forever lost inside the black hole — so what gives?

Hawking thinks that the idea behind the event horizon needs to be reworked. Rather than the event horizon being a definite line beyond which even light cannot escape, Hawking invokes an “apparent horizon” that changes shape according to quantum fluctuations inside the black hole — it’s almost like a “grey area” for extreme physics. An apparent horizon wouldn’t violate either general relativity or quantum dynamics if the region just beyond the apparent horizon is a tangled, chaotic mess of information.

“Thus, like weather forecasting on Earth, information will effectively be lost, although there would be no loss of unitarity,” writes Hawking. This basically means that although the information can escape from the black hole, its chaotic nature ensures it cannot be interpreted, sidestepping the firewall paradox altogether.

Needless to say, this paper has done little to convince Polchinski. “It almost sounds like (Hawking) is replacing the firewall with a chaos-wall, which could be the same thing,” he told New Scientist.

Much of the theoretical debate is hard to fathom and the result of calculations of physical events that we cannot possibly experience in our day to day lives. But don’t mistake this particular debate as solely a high-brow argument in the theoretical physics community. Its foundations are rooted in the growing discomfort we are feeling with the mismatch of general relativity and quantum dynamics (particularly what role gravity plays in the quantum world), a problem that cannot be solved with our current understanding of the universe.

It is, after all, these science problems that we build multi-billion dollar particle accelerators for.


Stephen Hawking on the Conquest of Space

Colonies on Mars will flourish and we will eventually conquer the universe, says Stephen Hawking

By Tamara Cohen

Last updated at 1:53 AM on 7th January 2012

Professor Stephen Hawking has predicted that humans will colonise Mars – but not for at least a century.

The physicist, who has decoded some of the greatest mysteries of the universe, said it is ‘essential’ for man to spread across the galaxy in case Earth is destroyed.

He suggested that it was ‘almost certain’ that a disaster ‘such as nuclear war or global warming’ would obliterate the planet within a thousand years.

Confident: Professor Stephen Hawking has said that we will one day colonise Mars – and beyond

‘It is essential that we colonise space,’ he stressed.

‘I believe that we will eventually establish self-sustaining colonies on Mars and other bodies in the solar system, but not within the next 100 years.’

The Red Planet is considered to be the solar system’s most hospitable alternative to Earth. Although space agencies have made preparations for a manned mission to Mars, such an expedition is thought to be decades away.

A typical estimate of the length of a round trip is 450 days.

Adding that humanity’s extinction was ‘possible but not inevitable’, Professor Hawking said: ‘I am optimistic that progress in science and technology will eventually enable humans to spread beyond the solar system and into the far reaches of the universe.’

Professor Hawking also gave his views on the recent CERN claims that neutrinos can travel faster than the speed of light, saying that he didn’t believe the results of its experiments.

The possibility of multiple universes is something he does believe in, however, telling one listener: ‘Our best bet for a theory of everything is M-theory.

‘One prediction of M-theory is that there are many different universes, with different values for the physical constants.’

Answering questions from listeners of BBC Radio 4’s Today programme on the eve of his 70th birthday, Professor Hawking claimed that finding intelligent life elsewhere in the universe would be ‘the biggest scientific discovery ever’.

However, he warned that it would be ‘very risky to attempt to communicate with an alien civilisation’, adding: ‘If aliens decided to visit us, then the outcome might be similar to when Europeans arrived in the Americas. That did not turn out well for the Native Americans.’

Red alert: Hawking believes that we will one day colonise our cosmic neighbour. Pictured is the Disney film Mission To Mars

You’d be forgiven for thinking that Professor Hawking spends most of his waking hours thinking about these cosmic subjects.

But when he was asked by New Scientist recently what preoccupied him he replied: ‘Women. They are a complete mystery.’

He even hinted at regrets in his personal life after being asked about his biggest mistake.

He said that thinking information was destroyed in black holes was his biggest blunder – ‘or at least my biggest blunder in science.’

Professor Hawking, who was diagnosed with motor neurone disease at 21, conducted the interview as he communicates – using a voice machine that picks up the twitching of his cheek.

His conversation with the magazine came ahead of an international conference held in his honour that started yesterday at Cambridge University, where he used to be the Lucasian professor of mathematics.

It will conclude on Sunday with talks from some of the world’s most prominent physicists, including Lord Rees, the Astronomer Royal, Saul Perlmutter, who won the Nobel prize for physics in 2011, and Kip Thorne from the California Institute of Technology.

Professor Hawking, who has made cameos in The Simpsons and Star Trek, was not expected to live for many years after being diagnosed, but has had a stellar career.

Brilliant brain: The 69-year-old, pictured at his wedding to ex-wife Elaine, said a mistake about black holes was his biggest blunder ‘at least in science’

Colleagues have this week expressed their admiration for the talented cosmologist, who has contributed to theories of gravity and showed that black holes emit radiation and slowly disappear.

Professor Hawking married Jane Wilde in 1965, and she cared for him until 1991 when the couple separated, reportedly because of the pressures of fame and his increasing disability.

They had three children: Robert, Lucy – now a popular author, and Timothy.

The scientist then married his nurse, Elaine Mason (who was previously married to David Mason, the designer of the first version of Hawking’s talking computer), in 1995.

In October 2006, Hawking filed for divorce from his second wife.

In 2004, the scientist showed how a black hole’s information leaks back into our universe through an event horizon – a recantation of an earlier theory that lost him a bet made with fellow theorist John Preskill.

Professor Hawking also showed in the interview that he has not lost his passion for science, or his dreams of exciting future discoveries.

He said that if he was a young physicist starting out today, he would have a new idea that would open up an entirely new field.

Alien Attitude?

New Report: Aliens Will Fix Global Warming … Or Kill Us

Natalie Wolchover, Life’s Little Mysteries Staff Writer
Date: 19 August 2011 Time: 03:11 PM ET
If Aliens Exist, They May Come to Get Us, Stephen Hawking Says
An artist’s illustration of a potential alien attack as depicted in the science television series “Into the Universe with Stephen Hawking.”
CREDIT: Discovery Channel/Darlow Smithson Productions Ltd.

If or when intelligent extraterrestrials discover us, it’s anybody’s guess what they’ll do. They might befriend us. They might eat us. As Carl Sagan once noted, they might find amusement in some talent we have that they lack and use us as entertainment, just as we keep sea lions in captivity because of their remarkable ability to balance rubber balls on their noses.

All of these scenarios and more are fleshed out in a new article in the journal Acta Astronautica by researchers at Pennsylvania State University. One possibility they’ve raised has garnered more attention than any other: An extraterrestrial civilization might notice our planet by detecting changes in the spectral signature of Earth — the light radiated by our planet and atmosphere — caused by greenhouse gas emissions. And they might frown upon our behavior.

The group’s thinking goes like this: From the rate of change of the chemical composition in our atmosphere, the aliens will deduce our rapid expansion and, because of that, possibly view us as a threat, thinking we’ll soon pursue resources on other worlds.
