What We Know About The Bilderbergs

4 things we know about the secretive Bilderberg Group and 1 thing we’ll never know

The secretive meeting brings together big international business and top-level government

We know where the meetings are held

The location of the meetings is now public. Last year, the Danish capital of Copenhagen was the venue of choice.

This year, the world’s elites will travel to the Interalpen-Hotel Tyrol in the Austrian Alps.

We know who attends them

The group releases a list of attendees. From the UK this year, Chancellor George Osborne and Shadow Chancellor Ed Balls are attending.

Other people going to the 2015 meet-up include José Barroso, the former EU Commission President, and executives from firms including Google, BP, Shell, and Deutsche Bank.

We know what’s on the agenda

Prior to meetings, the group releases broad subject areas for debate. This year, all we know is that they’ll be discussing “Artificial Intelligence, Cybersecurity, Chemical Weapons Threats, Current Economic Issues, European Strategy, Globalisation, Greece, Iran, Middle East, NATO, Russia, Terrorism, United Kingdom, USA, US Elections”.

Many of these subject hints are very broad-ranging. ‘United Kingdom’, for instance, could be a reference to Brexit, the recent general election, or both.

We know they take security very seriously

The area around the meetings is put into complete lockdown. There is no need to rely on private security: national governments of host countries cooperate fully and provide police protection.

This year’s summit starts on Thursday, but a zone around the Interalpen-Hotel Tyrol has already been established by Austrian police, with security checks on vehicles entering and exiting the area.

Arrests have been made at previous meetings, including of journalists trying to find out what is going on.

But… we’ll never know what was said

People who attend the events do not, as a rule, talk about the specifics of what was discussed. This includes politicians whose job is to represent their constituents.

There are no minutes taken of the meetings, and no reports are made of any conclusions reached. No votes are taken and no policies prescribed. Journalists trying to interview participants at meetings have previously been arrested.

The specifics of most international summits and meetings tend to be fairly opaque, but some public announcement is usually made as to conclusions reached.

Not so with the Bilderberg Group; the global establishment departs as quietly as it arrives.

from: http://www.independent.co.uk/news/world/3-things-we-know-about-the-secretive-bilderberg-group-and-1-thing-well-never-know-10307054.html

Musk & Hawking On Dangers of AI

Don’t let AI take our jobs (or kill us): Stephen Hawking and Elon Musk sign open letter warning of a robot uprising

  • Letter says there is a ‘broad consensus’ that AI is making good progress
  • Areas benefiting from AI research include driverless cars and robot motion
  • But in the short term, it warns AI may put millions of people out of work
  • In the long term, robots could become far more intelligent than humans
  • Elon Musk has previously linked the development of autonomous, thinking machines to ‘summoning the demon’

Artificial Intelligence has been described as a threat that could be ‘more dangerous than nukes’.

Now a group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, have signed an open letter promising to ensure AI research benefits humanity.

The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.

A group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking (pictured), have signed an open letter promising to ensure AI research benefits humanity.

The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.

The authors say there is a ‘broad consensus’ that AI research is making good progress and would have a growing impact on society.

It highlights speech recognition, image analysis, driverless cars, translation and robot motion as having benefited from the research.

‘The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable,’ the authors write.

Elon Musk previously linked the development of autonomous, thinking machines to ‘summoning the demon’

But it issued a stark warning that research into the rewards of AI had to be matched with an equal effort to avoid the potential damage it could wreak.

For instance, in the short term, it claims AI may put millions of people out of work.

In the long term, it could play out like the fictional dystopias in which intelligences greater than our own begin acting against their programming.

‘Our AI systems must do what we want them to do,’ the letter says.

‘Many economists and computer scientists agree that there is valuable research to be done on how to maximise the economic benefits of AI while mitigating adverse effects, which could include increased inequality and unemployment.’

Other signatories to the FLI’s letter include Luke Muehlhauser, executive director of the Machine Intelligence Research Institute, and Frank Wilczek, professor of physics at the Massachusetts Institute of Technology and a Nobel laureate.

The letter comes just weeks after Professor Hawking warned that AI could someday overtake humans.

SpaceX founder Elon Musk: AI is our ‘biggest existential threat’

GOOGLE SETS UP AI ETHICS BOARD TO CURB THE RISE OF THE ROBOTS

Google has set up an ethics board to oversee its work in artificial intelligence.

The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.

One of its founders has warned that artificial intelligence is the ‘number one risk for this century’, and believes it could play a part in human extinction.

‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.

Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the ‘number 1 risk for this century.’

The ethics board, revealed by the website The Information, is to ensure the projects are not abused.

Neuroscientist Demis Hassabis, 37, founded DeepMind two years ago with the aim of trying to help computers think like humans.

Speaking at an event in London, Professor Hawking told the BBC: ‘The development of full artificial intelligence could spell the end of the human race.’

This echoes claims he made earlier in the year when he said success in creating AI ‘would be the biggest event in human history, [but] unfortunately, it might also be the last.’

In November, Elon Musk, the entrepreneur behind SpaceX and Tesla, warned that the risk of ‘something seriously dangerous happening’ as a result of machines with artificial intelligence could come in as few as five years.

He has previously linked the development of autonomous, thinking machines, to ‘summoning the demon’.

Speaking at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium in October, Musk described artificial intelligence as our ‘biggest existential threat’.

He said: ‘I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence.

‘I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.

‘With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and … he’s sure he can control the demon? Doesn’t work out.’

The letter issued a stark warning that research into the rewards of AI had to be matched with an equal effort to avoid the potential damage it could wreak.