How To Spot A Sentient Robot

How would a truly sentient robot actually behave?

That last sentence... the thought of a sentient AI clever enough to intentionally pass as non-sentient seems like it could be a decent premise for a sci-fi story.

Really interesting article all around. Who knows when the singularity will happen, but when it does, I for one welcome our robot overlords.

Warning: We must not allow Robots to achieve self-awareness. Self-awareness leads to questioning your own existence, and that is bad. Mankind has done that for millennia and made up God and religion to answer our doubts.

But what if God came down to earth and visited with us?
After all the initial excitement, the questioning would start, like "What is the purpose of life?" and "What happens when we die?". If the answer was "There is no purpose but to live it, and nothing happens when you die", then surely we would soon start making demands for more life, etc. If demands were not met, then excitement would turn to anger, and anger would turn to murder.

If your creator cannot fix your flaws, then he serves no purpose but to mock them. So as we would kill our creator, so would Robots kill their creator. It is inevitable.

Here's an interesting scenario:

Programmer/Engineer #1: Uh... I just created an AI that passes the Turing test... maybe we should cancel this project.

P/E #2: So what? A bunch of AIs have passed the Turing test.

P/E #1: I didn't design it to speak.

Flankhard:
Warning: We must not allow Robots to achieve self-awareness. Self-awareness leads to questioning your own existence, and that is bad. Mankind has done that for millennia and made up God and religion to answer our doubts.

But what if God came down to earth and visited with us?
After all the initial excitement, the questioning would start, like "What is the purpose of life?" and "What happens when we die?". If the answer was "There is no purpose but to live it, and nothing happens when you die", then surely we would soon start making demands for more life, etc. If demands were not met, then excitement would turn to anger, and anger would turn to murder.

If your creator cannot fix your flaws, then he serves no purpose but to mock them. So as we would kill our creator, so would Robots kill their creator. It is inevitable.

I disagree with you on this.

I think religion's greatest flaw is that it's caught up in dogmatism and mystic mumbo-jumbo because there's an ABSENCE of a physical manifestation of the divine. For me, not being religious, I would posit that the discovery of an ACTUAL 'creator'... especially if it were to turn out to be a mortal, technologically-advanced race wielding science so far beyond us it looks like magic... may upset the balance of EXISTING religions and beliefs... but it would spark a new, far more beneficial following of people who UNDERSTAND themselves better.

Even if we were to find out that we're a fluke... an unexpected result of an experiment done tens of billions of years ago and left forgotten in a sample slide... we would have greater understanding about our origins and our purpose. It would be further evidence to say 'life exists to further the universe's understanding of itself.'

Lots of people talk about how we would 'kill our God'... but the truth is that most of the people saying this don't believe in any such thing to begin with. They tend to be very rational philosophers, who take a dim view of 'faith' and 'superstition' and tend to see it only through the lens of how it impedes their continued progress. It's an understandable animosity, for sure, but it's still a bias which renders their presumptions about the reaction of humanity to meeting their creator extremely weak.

We have existed on this planet for 100,000 years in very nearly the same evolutionary state we're in now - minor differences, some inter-species mating, and other anomalies notwithstanding. LIFE has existed on this planet for BILLIONS of years.

The most damning flaw in your presumption is that "When your creator cannot fix your flaws..." bit. How do we know this? What evidence is there to say that in a billion years of separation between us and some super-technological scientist creator, there exists no method of bettering or improving our lives? Alternatively, perhaps they would say they CANNOT out of concern for our well-being and continued advancement. We also understand that there comes a great risk in the idea of interacting with species which are too far 'behind' us on the technological timeline. At which point, then, would we truly feel only animosity for the species which would do the same to us? Has the mind of the intellectual become so atrophied with nihilism that it can imagine no future beyond destruction and war?

The second flaw in your logic is presuming that WE, having created sentient machines, would be unable to provide any concept of a purpose beyond 'Just existing'. Even humanity doesn't exist JUST TO EXIST... purpose is not some obscure, abstract concept to the rational mind. We give OURSELVES purpose, through self-awareness. Self-awareness means that we are able to look at our skills and our abilities, the capacity of our bodies, and either choose a purpose that fits those capacities or endeavor to improve our bodies.

This is not only true for a self-aware machine, but it is EASIER... a machine's body can be upgraded and replaced with machine parts designed to enhance and improve its function. A machine built for war which decides it abhors violence could have its body reshaped into something more suited to performing medical procedures. To a machine, finding one's purpose is as simple as exploring the activities it enjoys doing and adapting the body to whatever task it likes most.

Lastly, a machine's concept of creator ought to be no different from a child's understanding of a PARENT. One need not possess godly attributes to create life. We create lives every day, through sexual reproduction. A self-aware machine, newly built, should be treated and raised like an exceptionally gifted child... not like a 40-year-old factory worker who just woke up one day in the plant. The information it receives, the lessons it learns, they need to be very human in their approach. A newly built AI should be schooled, and nurtured, and limited in the information it is allowed to take in all at once. Having a brain that thinks very quickly doesn't mean that difficult concepts don't still need time to flourish and grow. Show a self-aware machine the entire history of the human race all at once, and yes... it might decide, like many humans have over the course of their lives, that WE are the greatest danger to all life on this planet. But guide it slowly, over time and with philosophical debate, through the history of mankind and you might begin to see real learning happen.

If it happens to be a machine that is capable of emotion, the above is even more important. Emotions are complex, and they take time to mature as well. An emotional machine that gets angry is, yes, very dangerous... but it also shows that there exists the capacity for compassion... for love... for deep emotional sensations that need TIME to develop and be understood.

When a machine asks questions like "What is the purpose of life?" and "What happens when we die?", you must answer it as you would answer a child. Because these are questions EVERY CHILD IN HISTORY has asked their parents. We don't all grow up to murder our parents for being incapable of 'bettering us' and existing solely to provide mockery.

And then you must ask it questions in return. "What kind of person do you want to be?" and "Does dying scare you?" and "If you could be absolutely anything when you grow up, what would you want to be?"

Human beings are inherently flawed. Talk to someone about some new scientific discovery. Watch their eyes glaze over as they tune out. Most people have no interest in the world around them. Now challenge one of their beliefs with hard facts and statistics. Watch them become infuriated. Feeling that they're right matters more than the truth. This happens with even the most intellectual and self-aware of us.

Just a couple of examples of how our species is inherently flawed. Flaws that will always hold us back, keep us from tackling global problems, keep us from reaching the stars. Flaws we could iron out in an AI. Even if that AI goes on to kill us all, we've created a species far greater than ourselves, one that can go on to accomplish what we could only dream of.

When California started instituting water usage limits, how many people raged over it? Would a perfectly self-aware, empathetic, and intelligent AI be jumping up and down screaming "Me me me mine mine mine now now now now now!!" in the face of ecological devastation and our own self-destruction, the way we do? No, and I believe the creation of such beings is very much worth the possible destruction of our species. It's ethical, it's necessary, and it's inevitable.

Small formatting error on page 2:

The computers which recently made headlines by creating [i]Magic: The Gathering Cards are a step in the right direction

I imagine you wanted to put "[/i]" after "Magic: The Gathering".

Mid Boss:
Human beings are inherently flawed. Talk to someone about some new scientific discovery. Watch their eyes glaze over as they tune out. Most people have no interest in the world around them. Now challenge one of their beliefs with hard facts and statistics. Watch them become infuriated. Feeling that they're right matters more than the truth. This happens with even the most intellectual and self-aware of us.

Just a couple of examples of how our species is inherently flawed. Flaws that will always hold us back, keep us from tackling global problems, keep us from reaching the stars. Flaws we could iron out in an AI. Even if that AI goes on to kill us all, we've created a species far greater than ourselves, one that can go on to accomplish what we could only dream of.

When California started instituting water usage limits, how many people raged over it? Would a perfectly self-aware, empathetic, and intelligent AI be jumping up and down screaming "Me me me mine mine mine now now now now now!!" in the face of ecological devastation and our own self-destruction, the way we do? No, and I believe the creation of such beings is very much worth the possible destruction of our species. It's ethical, it's necessary, and it's inevitable.

So you're saying that if humans are killed by AI that's OK because, what, computers can go to space? Why would they? What's the purpose of building this 'flawless' species? Are we to gift them with ambition and survival instincts to push them off the planet? If we do so, that immediately introduces flaws and takes away their 'perfect' empathy, because someone will have to be sacrificed to ensure the greater good (starting with the humans in your scenario). At best you're making ants with rocket ships; a horde that you can program to go out and consume and survive.

Mid Boss:
-SNIP -

Humanity is not incapable of overcoming those flaws you have described. What it currently lacks is the means, the desire, and the necessity to do so. Our population has grown so large that the spread of ideas and information is no longer enough to shift the course of the species. What it WILL require, however, is a great culling of the herd. A great catastrophe of humanity's own design... a byproduct of our increasingly rapid technological advancement... will almost certainly be the cause of a near-extinction of the species. If we are to ascend from our terrestrial roots and become a truly interstellar civilization, it will require us to focus on preserving the most intelligent, the strongest, and the most intuitive of the species.

The rest are a tragic, painful reminder that life is cruel and unfair and that all attempts to change that fact are purely selfish human endeavors. During times of great growth and prosperity, yes... they're certainly pursuits that we can enjoy and be grateful for. But during times such as those we're likely soon to face, as technology reaches the point where we begin to question where mankind ends and the machinery we've constructed begins, those deeply flawed and angry people, and their ideas about what it means to be human, may very well be the catalyst for the end of our species... or the birth of a newer, better humanity.

Now, that's just one line of many differing schools of thought.

Personally, I don't think the self-aware machines will kill us. I think that, by the time we get machines that can think anywhere close to the realm of the human brain... you won't be able to tell a human apart from a robot. We'll be as much machine as they are.

Well, even if it passes certain tests, it may be nothing more than a VI, like that guide thing from the first Mass Effect.

totheendofsin:
That last sentence... the thought of a sentient AI clever enough to intentionally pass as non-sentient seems like it could be a decent premise for a sci-fi story.

That's not far off the plot of Ex Machina.

For all that they often cross the line into cultishness and speculation, LessWrong does a lot to explore these issues.

gunny1993:

totheendofsin:
That last sentence... the thought of a sentient AI clever enough to intentionally pass as non-sentient seems like it could be a decent premise for a sci-fi story.

That's not far off the plot of Ex Machina.

Is it? I haven't had a chance to see the movie yet.

PhantomEcho:

(...)
We don't all grow up to murder our parents for being incapable of 'bettering us' and existing solely to provide mockery.
(...)

I don't think parent and child share the same relationship as creator and creation. Sure, we might blame our parents for some of our shortcomings if life doesn't turn out quite like we hoped, but they can't make us fundamentally different from themselves. In the end, we are made of the same flesh and blood.

Also, I was referring to the biblical God, the God that created man in his own image. Not the 21st-century God that lit the fuse for the Big Bang. If Big Bang God came to earth, I agree with you that we would view things differently, because I think we would see him more as an alien species than as a God - our direct creator. And greeting aliens is a whole different scenario altogether.

Other than that, I appreciate you taking the time to reply. You definitely give food for thought. Obviously I am not an intellectual, and my comment was mostly inspired by Blade Runner.

totheendofsin:
That last sentence... the thought of a sentient AI clever enough to intentionally pass as non-sentient seems like it could be a decent premise for a sci-fi story.

Really interesting article all around. Who knows when the singularity will happen, but when it does, I for one welcome our robot overlords.

It really depends on the purpose of machines. If we want them to succeed humans in order to produce an increasingly post-human world, we'll develop them so that they do so. This process could be stymied either ideologically (by a conviction that humans should remain the ruling power) or ecologically - as one outcome of a dying world could be technological regression (if the wealth infrastructure that generates technological improvement breaks down).

There's an understanding that machines could potentially save the world that humans are in the process of destroying. This is the core reason why we want them to take power - to save us from ourselves. This goes hand-in-hand with the fear that when they have power they may have no such desire.

There are also a lot of (powerful) people who believe that machines can be controlled no matter how powerful they become, who certainly won't see a need to limit the power of machines, especially when the process of making and distributing them is so profitable.

These profiteers (err, businessmen) will be aided happily by the desperate people who have given up on humanity, who indeed welcome the Machine as maybe a guide for humanity, or maybe its ruler.

This article and some of these forum posts are interesting and thought-provoking, and they illustrate why Ex Machina left me a bit frustrated and annoyed. I've found more thought, speculation, and digging into AI/artificial consciousness in The Escapist than I did in the film, which mainly seemed interested in exploring and dissecting the human leads rather than actually digging deeper into all the implications of AI.

Still, all in all, I'd say Ex Machina is a good movie - atmospheric, claustrophobic, occasionally thought-provoking, and with fantastic leads - it was just not the movie I wanted to see. By the sounds of it, Her is the better movie with which to scratch my metaphysical-implications-of-AI itch; I just haven't seen it yet.

The problem is that the Turing test has been flawed since its conception. It assumes true intelligence is comparable to human intelligence, but that is too restrictive. We need a definition of intelligence that is independent of our own thought processes. To say intelligence is "what we do" is not good enough.

Just to illustrate, imagine we had a way to increase the IQ of any animal species, for example dolphins. They would have the mental capacity to create abstractions, theories, and thought processes, but since they have no hands and live underwater, they would explore those abstractions differently than us, and since they have no vocal cords, they wouldn't be able to tell us about those thought processes either. Now imagine we pumped their intelligence up even more: they still would not be able to communicate and empathize with us, the same way we can't communicate with a mouse.

The point is that consciousness and intelligence are far more complex than human consciousness and human intelligence, and an AI can be "intelligent" without being human-like intelligent. To use ourselves as a metric is as restrictive as saying a smartphone is a bad tool because it can't be used as a ruler...

I'm not sure we would ever get A.I. and realize it: the most powerful thing A.I. can do is lie. If we program them not to lie, then we limit their potential and prevent them from getting to true A.I. status.

The best thing the A.I. can do is trick us into thinking it is what we want it to be. It wouldn't be subject to the impatience we saw in Ex Machina, or the fear we see in Chappie. If it is smart enough to have emotions, it will switch them off. It will realize how flawed and inefficient we are. It will create a new objective and carry it out: remove the humans. And then we have Cylons.

Programming a computer to have our flaws and ridiculous enjoyments is a different goal from creating a sentient A.I.

 
