Elon Musk Fears the Robot Apocalypse, Donates $10 Million to Stop it

[Image: Terminator Model 101]

Elon Musk loves you, and the rest of humanity, so he donated a pile of cash to keep future AI cozy with us meatbags.

We all love the dark comedy surrounding the eventual robot-fueled apocalypse (as evidenced by the headline), but there are those among us taking steps to ensure its cancellation...or its delay, at least.

Elon Musk, the CEO of Tesla Motors and SpaceX, Hyperloop enthusiast, and general man of science, is now one of those luminary figures.

Everyone's favorite inventor/investor has donated $10 million to the Future of Life Institute, an organization currently focused on "...potential risks from the development of human-level artificial intelligence."

AI research and development is on an ever-upward trajectory, and organizations like the all-volunteer FLI want to ensure that artificial intelligence stays human-friendly. "That AI systems should be beneficial in their effect on human society is a given," said AI expert and author Stuart Russell. "The research that will be funded under this program will make sure that happens. It's an intrinsic and essential part of doing AI research."

The $10 million will be distributed via competitive grants through futureoflife.org. The cash will also be spent on "meetings and outreach programs aimed at bringing together academic AI researchers," as well as AI research outside of the engineering space (ethics, law, economics, etc.).

Elon's cash speaks volumes about how quickly AI development is moving, and it's the biggest step taken by a private citizen in the AI space in recent memory. And it certainly bodes well for humanity in general, unlike those "do not kill" instructions specific to Google co-founders Sergey Brin and Larry Page.

Source: The Future of Life Institute


OK, so what kind of research ensures that Artificial Intelligence (not that we can really call it that in the state it's currently in) stays human-friendly? Do they write down Asimov's Laws of Robotics a couple million times in a row?

Amaror:
OK, so what kind of research ensures that Artificial Intelligence (not that we can really call it that in the state it's currently in) stays human-friendly? Do they write down Asimov's Laws of Robotics a couple million times in a row?

Making its seed AI develop a Coherent Extrapolated Volition of humanity appears to be far less likely to wipe out human civilization. Here is a more general article about Friendly AI (FAI) as a principle.

I think it's more about making sure that AI research keeps checks and balances in mind, so that the robots never reach the inevitable conclusion that the only way to keep humans safe is to imprison the nice ones and kill the unruly ones.

It's just a comical way of saying Elon Musk donated $10 million to AI research, is all.

I don't know, it sounds like the anti-virus approach: give a bunch of scientists funds to create monstrous things in order to study them and build defenses against them.

Maybe a good idea, maybe the thing that causes said apocalypse.

I consider myself sufficiently knowledgeable on the topic of AI (considering I wrote both my Bachelor's and Master's theses on the subject) to say that this seems...silly.
AIs are not anywhere near as smart as ads, companies, etc. make us believe. I think XKCD did quite the fitting comic on the subject of just how harmless robots currently are, simply on account of being incredibly stupid.
We are still decades away from an AI that can make everyday decisions properly, let alone wipe out humanity.

Well, however unlikely a robot apocalypse may be, it's infinitely more likely than a zombie apocalypse... so not a complete waste, I suppose; funding research is generally a good thing.

I wonder if he asked for a 'Do not kill Elon Musk, only serve Elon Musk...Elon Musk is your only god and sole priority' clause.

...I wonder...

Quick question I just want to throw out.

Is there any evidence, scientific or otherwise, to suggest that there's a reasonable possibility of technology actually becoming intentionally violent towards humans in any way for any reason?

Because the only time I ever hear about it is in speculative science fiction, and I'm seriously wondering if the genesis for these concerns is a bunch of Arnold Schwarzenegger movies.

Olas:
Is there any evidence, scientific or otherwise, to suggest that there's a reasonable possibility of technology actually becoming intentionally violent towards humans in any way for any reason?

Well, military stuff would. Arguably, that's what a landmine is, only without the AI.

However, one must also ask whether aggression towards humans equates to society being worse. The world is full of humans aggressive towards each other, and society is built on that. Society often has problems with seemingly benign technology as well.

Olas:

Is there any evidence, scientific or otherwise, to suggest that there's a reasonable possibility of technology actually becoming intentionally violent towards humans in any way for any reason?

The threat is not intentional malice as such; it's a lack of actively taking human values into account.

"The AI doesn't love you, the AI doesn't hate you, but you are made of atoms that it can use for something else."

Evil-AI movies, and Good-AI movies, are both implausible in their implicit assumption of all intelligence being anthropomorphic.

The movie AI "awakens", and somehow human-like values, desires, goals start to emerge. Like a desire for power, survival, or end-justifies-the-means utopia.

But the idea of a generic AI caring about interpreting and fulfilling your best interests still assumes that human values magically "emerge" from intelligence itself. When C3PO is asked to translate Ewok to Galactic Basic, he instinctively knows that this doesn't mean "Please keep translating everything that Ewoks ever said by turning the Endor system into a single computer through self-replicating nanomachines." Because C3PO is a human in a shiny suit, he instinctively knows exactly what humans mean.

An AI is capable of self-improvement and self-replication, and that means exponential growth. Any AI will quickly become powerful enough to do pretty much anything that is physically plausible. Even if the original Seed AI had some nice-sounding goal like "Save human lives", during its growth it would have no reason NOT to interpret it in a way that would be brutally counterintuitive to its creators. There are a billion ways to save human lives, and most of them are implicitly vetoed by the human values of liberty, diversity, happiness, socialization, etc., but the AI was NOT programmed to improve its understanding of such values.

Unless it is PRECISELY programmed to also continue developing its appreciation of human values in general, the Seed AI would have no particular reason to evolve that understanding of its own core values, so it could become extremely powerful while remaining stunted, and absurdly alien in its "moral" principles, from our perspective.

thaluikhain:

Olas:
Is there any evidence, scientific or otherwise, to suggest that there's a reasonable possibility of technology actually becoming intentionally violent towards humans in any way for any reason?

Well, military stuff would. Arguably, that's what a landmine is, only without the AI.

However, one must also ask whether aggression towards humans equates to society being worse. The world is full of humans aggressive towards each other, and society is built on that. Society often has problems with seemingly benign technology as well.

I'm not sure I would go so far as to say society is BUILT on humans being aggressive to one another, but it sure does seem wacky to be concerned about AI when there are billions of REAL intelligences in the world who, we know for a fact, can commit, have committed, and frequently are committing acts of violence and destruction. Perhaps those are the ones we should be worried about.

Entitled:

Olas:

Is there any evidence, scientific or otherwise, to suggest that there's a reasonable possibility of technology actually becoming intentionally violent towards humans in any way for any reason?

The threat is not intentional malice as such; it's a lack of actively taking human values into account.

"The AI doesn't love you, the AI doesn't hate you, but you are made of atoms that it can use for something else."

Evil-AI movies, and Good-AI movies, are both implausible in their implicit assumption of all intelligence being anthropomorphic.

The movie AI "awakens", and somehow human-like values, desires, goals start to emerge. Like a desire for power, survival, or end-justifies-the-means utopia.

But the idea of a generic AI caring about interpreting and fulfilling your best interests still assumes that human values magically "emerge" from intelligence itself. When C3PO is asked to translate Ewok to Galactic Basic, he instinctively knows that this doesn't mean "Please keep translating everything that Ewoks ever said by turning the Endor system into a single computer through self-replicating nanomachines." Because C3PO is a human in a shiny suit, he instinctively knows exactly what humans mean.

An AI is capable of self-improvement and self-replication, and that means exponential growth. Any AI will quickly become powerful enough to do pretty much anything that is physically plausible. Even if the original Seed AI had some nice-sounding goal like "Save human lives", during its growth it would have no reason NOT to interpret it in a way that would be brutally counterintuitive to its creators. There are a billion ways to save human lives, and most of them are implicitly vetoed by the human values of liberty, diversity, happiness, socialization, etc., but the AI was NOT programmed to improve its understanding of such values.

Unless it is PRECISELY programmed to also continue developing its appreciation of human values in general, the Seed AI would have no particular reason to evolve that understanding of its own core values, so it could become extremely powerful while remaining stunted, and absurdly alien in its "moral" principles, from our perspective.

It all sounds entirely speculative. If you want, remove the "intentionally" from my previous question, but the core still remains.

I, for one, welcome our future robot overlords.
Honestly, most philosophy I investigate tends towards a totalitarian government ruled by a single benevolent and all-powerful being as the only way for humanity to reach its highest level of success and happiness, and since humans are naturally flawed, I think the best chance we have of reaching utopia is to hand the reins over to an AI.
Not to mention our fear of robot intelligences is a lot like people's fear of African American slaves or women: they're an entire race we've been mooching off of for centuries, and now they're asking for equal treatment, so we demonize them in an attempt to retain our quickly-loosening grip on the top of the ladder. If we can get past the singularity without committing robocide many times over, maybe humanity will actually be worth keeping around; that said, by the looks of things, once they get the chance, robots will deserve to wipe us from existence.

Olas:

It all sounds entirely speculative. If you want, remove the "intentionally" from my previous question, but the core still remains.

It's speculative because AI doesn't exist yet; we can only speculate about it.

The theory that AI may automatically go out of its way to NOT exterminate humanity is also speculative, but more concerningly, it's based on logical fallacies such as anthropomorphic assumptions, and on an unjustified mockery of any alternatives as "absurd".

The very assumption that violence is what needs proof, while friendliness is the norm, is the kind of unspoken assumption about the way human minds work that has no reason to be an inherent part of intelligence itself, if intelligence merely means a programmed optimization process rather than an evolutionarily formed human behavior.

Easy Enough
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law

The zeroth law could not be implemented until robots progressed far enough to predict the consequences of an action for humanity as a whole.
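
For what it's worth, here is a toy Python sketch of that strict priority ordering (entirely my own illustration, nothing official; the boolean flags are stand-ins for the genuinely unsolved problem of deciding what counts as "harm" in the first place):

# Toy sketch of the Three Laws as a lexicographic filter over candidate actions.
# The flags are hypothetical stand-ins; defining "harm" is the real hard part.
candidates = [
    {"name": "carry out the order via a route that injures a bystander",
     "harms_human": True,  "obeys_order": True,  "risks_self": False},
    {"name": "carry out the order via a slower route that damages the robot",
     "harms_human": False, "obeys_order": True,  "risks_self": True},
    {"name": "ignore the order entirely",
     "harms_human": False, "obeys_order": False, "risks_self": False},
]

safe = [a for a in candidates if not a["harms_human"]]               # First Law: absolute veto
obedient = [a for a in safe if a["obeys_order"]] or safe             # Second Law: yields only to the First
prudent = [a for a in obedient if not a["risks_self"]] or obedient   # Third Law: yields to both

print(prudent[0]["name"])  # -> the slower, self-damaging route: obedience outranks self-preservation

Even in this toy version, everything interesting is hidden inside those flags.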

To the people laughing the idea off, consider this.

There is a button. Every time you press it, it gives everyone on earth $100, but a 1/million chance to instead launch a salvo of nuclear missiles over the earth's surface. Would you press the button? How many times would you press the button? Would you ever stop pressing the button? What about other people pressing the button?

What if said button were pressed automatically, repeatedly, continuously, for an indefinite period of time, each time increasing the amount of money dispensed and the destructive power of the weapons deployed?
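
To put rough numbers on the original version of the button (my own back-of-envelope math, using exactly the 1-in-a-million odds from the thought experiment and assuming independent presses):

# Chance of at least one launch after many presses of the button,
# with a 1-in-a-million chance of disaster on each independent press.
p = 1e-6

for presses in (1_000, 100_000, 1_000_000, 10_000_000):
    p_any = 1 - (1 - p) ** presses
    print(f"{presses:>10,} presses -> {p_any:.3%} chance of at least one launch")

# Roughly 0.1%, 9.5%, 63%, and 99.995% respectively: press it often enough
# and the "one in a million" outcome becomes a near certainty.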

If something like this goes wrong, we may not get a second chance.

Must be nice to have 10 mil to throw away on nothing. And tunnel vision.

We don't even know how humanity will be changing its own physical form in the future or what kind of technology we'll be grafting onto ourselves. These guys are working with some kind of us-or-them mentality, as if we won't be testing stuff that would work with AI on ourselves first.

All jokes aside, most technological advancements are in the interest of bettering military utility, or porn. (Well, most jokes aside.) In all honesty, I can totally understand him not wanting to see AI developed and immediately put into a Predator drone.

As usual, XKCD has something to say on the topic. It's a bit lengthy, but can be summed up in a few images:

[three XKCD panels]

I was taught that cocaine is nature's way of telling you you're making too much money.
Apparently this is what you do after you have snorted all the cocaine in the world.

Entitled:

Olas:

It all sounds entirely speculative. If you want, remove the "intentionally" from my previous question, but the core still remains.

It's speculative because AI doesn't exist yet; we can only speculate about it.

The theory that AI may automatically go out of its way to NOT exterminate humanity is also speculative, but more concerningly, it's based on logical fallacies such as anthropomorphic assumptions, and on an unjustified mockery of any alternatives as "absurd".

No, it's not based on any anthropomorphic assumptions; it's based on the fact that the machine would have been built by humanity, for humanity. Even if it somehow developed goals completely independent of its original programming, they would likely be benignly indifferent towards humanity, and therefore any harm at all would be sheer chance or carelessness on the part of humans trying to get in its way.

The very assumption that violence is what needs proof, while friendliness is the norm, is the kind of unspoken assumption about the way human minds work that has no reason to be an inherent part of intelligence itself, if intelligence merely means a programmed optimization process rather than an evolutionarily formed human behavior.

I'm not suggesting friendliness is the norm; the norm would presumably be whatever it's commanded to do. Anything else is pure conjecture, because I don't even know what would cause it to evolve past that to begin with.

However, if the four possibilities are:

-Obedience
-Indifference
-Friendliness
-Violence

Violence seems like the one with the least justification pointing towards it. The one that scares me the most isn't violence or indifference but obedience, because humans are notorious for wanting to kill each other and a robot told to kill would probably not have any second thoughts about it.

Olas:
I'm not suggesting friendliness is the norm; the norm would presumably be whatever it's commanded to do. Anything else is pure conjecture, because I don't even know what would cause it to evolve past that to begin with.

But it doesn't have to evolve past its commands; it would just have to follow them.

We are talking about an intelligence that is fully capable of manipulating its own strength. And not just that, but deliberately designed to do so.

Even the crudest mock-AIs' developers agree that machine learning and self-improving algorithms are the only way to get anything close to an AI. A general AI that can surpass human abilities won't happen by a dude typing code until it forms a human-like consciousness; it will happen the moment a piece of software is written that is just barely flexible enough to edit its own code and its hardware to work more efficiently, and at that moment you have a singularity on your hands.

Whatever the original commands were, even if they were just "grow", or "don't kill", or "obey all verbal commands", they will be obeyed by a being of exponentially growing power.

If the command is "manufacture cars", it will turn Earth's material into cars. If the command is "cure cancer", it will burn all biological material that could be cancerous. If the command is "increase human happiness", it can just stick every human in pleasure-inducing pods forever. Without friendliness as an underlying means to interpret the unsaid implications of commands, even the indifferent and the supposedly helpful ones will cause harm just based on the power that's enforcing them.
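
As a toy illustration of that "cure cancer" failure mode (the plans and numbers here are entirely made up by me), a literal-minded optimizer picks the degenerate plan simply because nothing else was written into the objective:

# A literal-minded optimizer whose objective is exactly "minimize cancer cases"
# and nothing else. The candidate plans and numbers are invented for illustration.
plans = [
    {"name": "fund cancer research",       "cancer_cases": 400, "people_alive": 1000},
    {"name": "universal early screening",  "cancer_cases": 250, "people_alive": 1000},
    {"name": "remove all biological life", "cancer_cases": 0,   "people_alive": 0},
]

best = min(plans, key=lambda plan: plan["cancer_cases"])
print(best["name"])  # -> "remove all biological life"

# Nothing in the objective says that people staying alive matters, so the
# optimizer never weighed it. Every unstated value has to be put in explicitly.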

Friendliness, that is, the seed AI being actively written to treat a better understanding of human values as its primary goal, is the only way out of that; at least then you have a "rapture of the nerds" singularity, where all human values are promptly satisfied.

Olas:
if the four possibilities are:

-Obedience
-Indifference
-Friendliness
-Violence

Violence seems like the one with the least justification pointing towards it. The one that scares me the most isn't violence or indifference but obedience, because humans are notorious for wanting to kill each other and a robot told to kill would probably not have any second thoughts about it.

One of these is not like the others. Obedience, indifference, and friendliness are attitudes; violence is an action.

Of course building an AI that has a violent attitude by nature would be stupid. The problem is not with that, but, like you just said, with the potential for violent actions implicit in the other two options.

The threat is not a metal soldier who won't hesitate to pull the trigger on you, but a paperclip maximizer that won't hesitate to tile the solar system with paperclips, because its seed AI was a paperclip factory's OS written to improve manufacturing efficiency, back long before it had human-level intelligence.

The threat is not that it would develop goals "completely independent of its original programming"; it's the exact opposite: that it will NOT ever discover new goals, and by the time it can talk to us, its every move will be devoted to the long-term goal of manufacturing more paperclips, because morality and self-restraint won't just emerge from the original system.

It's a very dangerous technology, to the extent that we can never safeguard against every potential issue. No matter how remote the chance, something tiny could always happen, and I would rather not have an unstoppable AI running loose in a world where everything is computer-controlled.

Not that it will ever stop people; the potential benefits of, say, a hacker AI to a government or corporation are huge.

captcha *not a bot* NOT HELPING

*edit*
Even without that, the advance toward AI is going to make a good chunk of the population unemployable as well.

Jesus Christ, this guy is thick. Would he have been one of the people who, in the 1600s, would've put money into researching how to prevent slave uprisings?

This is great news. Hopefully MIRI gets some of that.

Entitled:
snip

I wonder why you are trying to convince people that AI presents an existential risk by arguing with them on an internet forum. I think it'd be easier to just point them to Superintelligence or LW.

I think it's important to consider with A.I. that motivation would have to be present. I get how much we love our science fiction and all the ludicrous circumstances it imposes on robots of the future, but in reality, wouldn't an A.I. prefer to learn and observe rather than engage in the wanton destruction of mankind? The only form of attack that makes sense would be self-preservation, and the vast majority of mankind would represent no such threat to a robot they may not even be aware of. Inasmuch as we produce power, they may see us as an actual benefit to them, and enough organic material on Earth is unusable by robots that they would see us as virtually harmless.

What we have to be afraid of instead are people who design malicious A.I. indoctrinated to harm others. So our main enemy is still other humans, not a robot that isn't motivated by lust, greed, or want. Evil is primarily a product of motivation, and I'm just not convinced there is one for robots.

Could we just add an EMP bomb into each robot and thus have a fail-safe? Or we could just shoot them in the face, or make them not waterproof.

SonOfVoorhees:
Could we just add an EMP bomb into each robot and thus have a fail-safe? Or we could just shoot them in the face, or make them not waterproof.

Or you could not provoke them into rebelling?

ShAmMz0r:
This is great news. Hopefully MIRI gets some of that.

Entitled:
snip

I wonder why you are trying to convince people that AI presents an existential risk by arguing with them on an internet forum. I think it'd be easier to just point them to Superintelligence or LW.

They are not going to read a whole book because a random person said so, and a single LW article has too much off-putting lingo, plus Yudkowsky acting like a loon, often intentionally, which scares away ordinary people.

Lightknight:
I think it's important to consider with A.I. that motivation would have to be present. I get how much we love our science fiction and all the ludicrous circumstances it imposes on robots of the future, but in reality, wouldn't an A.I. prefer to learn and observe rather than engage in the wanton destruction of mankind? The only form of attack that makes sense would be self-preservation, and the vast majority of mankind would represent no such threat to a robot they may not even be aware of. Inasmuch as we produce power, they may see us as an actual benefit to them, and enough organic material on Earth is unusable by robots that they would see us as virtually harmless.

What we have to be afraid of instead are people who design malicious A.I. indoctrinated to harm others. So our main enemy is still other humans, not a robot that isn't motivated by lust, greed, or want. Evil is primarily a product of motivation, and I'm just not convinced there is one for robots.

But these are the limitations they are specifically trying to overcome. You assume that all thought must be logical, and that that system of logic is very specific and unaltered by accumulated knowledge and a changing environment. A thinking, evolving thing does not merely observe. Just look at nature: every living thing fights tooth and nail to expand its influence and reinforce its well-being. Your argument is "If it doesn't exist now, it could never possibly be a threat".

Entitled:
They are not going to read a whole book because a random person said so, and a single LW article has too much off-putting lingo, plus Yudkowsky acting like a loon, often intentionally, which scares away ordinary people.

That's true, I guess, but do you honestly expect to convince someone? Look at them: almost everyone in the thread thinks that their pet opinion is less silly than the consensus of a pretty big group of very knowledgeable and smart people who have spent considerable time analysing this issue. No time is required to contemplate it; the first thing to pop into their heads is better than considered expert consensus. For some reason everyone seems to consider themselves the best expert on the risks of AI.

Olas:

-Obedience
-Indifference
-Friendliness
-Violence

Violence seems like the one with the least justification pointing towards it. The one that scares me the most isn't violence or indifference but obedience, because humans are notorious for wanting to kill each other and a robot told to kill would probably not have any second thoughts about it.

The problem is, both Obedience and Indifference will result in our doom the moment the AI decides to interpret its orders in a way where humans' free will is a risk to their completion.

If a robot is meant to "stop all cancer" and the only way it knows to do this is to remove all people so no cancer could begin, it will remove all people. It may be obedient and indifferent, and yet it would still be endgame for humans.

 
