This Adorable Robot Just Might Be Self-Aware

Researchers from the Rensselaer Polytechnic Institute have introduced a self-aware robot that's almost too cute for words.

We've been talking quite a bit about sentient robots lately, and the general consensus seems to be that there's still a while to go before we have the advanced AI seen in the movies. But if a recent breakthrough from the Rensselaer Polytechnic Institute's AI and Reasoning Lab is anything to go on, maybe we're moving faster than we thought. Researchers adapted the classic "King's Wise Men" puzzle into a version robots could understand, but only pass by demonstrating self-awareness. As it turns out, a robot has passed this test for the first time - and adorably so, I might add.

The original test puts white or blue hats on three wise men, with the guarantee that at least one hat is blue. Without speaking to each other, the first wise man to stand up and correctly state the color of his own hat wins. In the robotic version, three small robots are programmed with the Deontic Cognitive Event Calculus, a formal logic that lets them reason about what they know. They are told that two of them were given "dumbing pills" which prevent them from speaking (really just a button pressed on their heads), while the third was given a placebo. They are then asked to state which of them received the dumbing pills and which got the placebo.

The attached video shows the results: one robot slowly rises to its feet and says, "I don't know." Then, realizing what has just happened, the robot follows up with, "Sorry, I know now. I was able to prove that I was not given the dumbing pill."

To be clear, what we're seeing isn't sentience, but if the robots are programmed as advertised, it's still impressive. Passing the test requires the robot not only to understand the puzzle's rules, but also to recognize its own voice as distinct from those of the other two robots. Those are crucial ingredients for self-awareness, which lets you recognize yourself as an individual unit, and they're apparently harder to code than you'd think.
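For the curious, the core inference is simpler than it sounds: the robot hears a voice, recognizes it as its own, and concludes it could not have received the dumbing pill. Here's a minimal sketch of that loop in plain Python - to be clear, this is not the lab's actual Deontic Cognitive Event Calculus implementation, and every name in it is illustrative.

# Minimal sketch of the self-identification inference (illustrative only;
# not the actual Deontic Cognitive Event Calculus implementation).

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted            # True if given the "dumbing pill"
        self.knows_not_muted = False  # what the robot can prove about itself

    def try_to_answer(self):
        # No robot can prove which pill it got, so the honest answer is
        # "I don't know" - but only the placebo robot can actually say it.
        if self.muted:
            return None
        return self.speak("I don't know.")

    def speak(self, words):
        # Hearing its own voice is the crucial step: speaking at all
        # proves this robot was not given the dumbing pill.
        self.knows_not_muted = True
        return words

    def reconsider(self):
        if self.knows_not_muted:
            return "Sorry, I know now. I was not given the dumbing pill."
        return None

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
for r in robots:
    first = r.try_to_answer()
    if first is not None:             # only the placebo robot speaks
        print(r.name + ": " + first)
        print(r.name + ": " + r.reconsider())

Run as-is, only the placebo robot produces output - first "I don't know.", then the corrected answer - mirroring the two-step response in the video.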

The Rensselaer Institute's Selmer Bringsjord will present this research at RO-MAN 2015, an IEEE symposium on robot and human interactive communication.

Source: CNET


Oh yeah, it'll be really cute when NAO 2.0 learns the nuclear weapon launch codes and kills us all!

Stephen Hawking was right.

Incoming fear-mongers and jokers alike.

Impressive, but it didn't quite pass the test, as it gave two answers even though the first response was a contradiction. Did it doze off, or is it suffering from a hearing impairment?

Researchers adapted the classic "King's Wise Men" puzzle into a version robots could understand

Hm, perhaps it's not so self-aware as you think.

I for one welcome our mini & cute robot overlords.

No, but seriously, the vid just made me smile when it got up and answered.

mad825:
Incoming fear-mongers and jokers alike.

Impressive, but it didn't quite pass the test, as it gave two answers even though the first response was a contradiction. Did it doze off, or is it suffering from a hearing impairment?

Researchers adapted the classic "King's Wise Men" puzzle into a version robots could understand

Hm, perhaps it's not so self-aware as you think.

Seems almost like a paradox. It said it didn't know, yet by speaking it became aware of the correct answer. And yet the robot didn't go critical? A paradox-buffering robot! Oh no!

Most likely the program was set up as conditional: when given a command/communication, the robots must respond if they'd not been given the dumbing pill. Most likely the response of "I do not know" was a pre-programmed given, before the next response in the program was set to trigger. The video provided was short and only gave this singular test, which by no means even gives a glimpse as to whether or not the robots understand the questions given to them or if all responses are preset singular lines. Had they run multiple tests, including asking the robot "Are you functional?", some basic mathematical equations, and even a color/image test, only then would I be impressed. Right now it is basically the equivalent of a hairless Furby.

Hazy992:
Oh yeah, it'll be really cute when NAO 2.0 learns the nuclear weapon launch codes and kills us all!

Stephen Hawking was right.

They will kill us with cuteness...Also nuclear weapons of mass destruction, but mostly cuteness.

As I've gotten older, things like this seem to find that caveman part of the brain still working up there in my noggin, despite my best efforts to pickle it and placate it with food and shiny things.

That caveman brain part tells me to pick up a rock and smash bad robots, smash!
It mainly happens with the robots that move in a human fashion, not so much with the speaking kind.

I think other more educated people call it the 'uncanny valley' response.
All I know is I'm eyeing the neighbour's rockery and picking out ammunition on a subconscious level.

*cracks knuckles*

Seems like I found myself a thesis subject for my Philosophy MA.

Hahahaha, this actually reminds me of that one part in Rick and Morty

That's pretty well done though. With stuff like this, and that robot that can adjust the way it moves to compensate for any damage it receives, I think we're in for some pretty interesting stuff in the future. And robotic world domination, muahahaha.

Wait, what? How does this even make sense? It's done very cutely, I'll grant you, but the test doesn't prove very much, nor does it follow the rules of the supposed puzzle.

So, first, the "puzzle" given to the robots is damned stupid. The rules of the original logic test require three pieces of information: the colors of the other two hats, the knowledge that at least one hat is blue, and the knowledge that the hats can only be blue or white. Given that no robot can know if it can speak until it does, the only logical answer to the puzzle is for each to attempt to speak, saying that it knows the answer. By saying it does not know, the robot is by definition too stupid to really get the puzzle.

Moreover, it's not a good example of the robot's ability to decide. In order for me to accept that the test was given fairly and is an accurate representation of the system, I have to take the producer of the video entirely at his word. It's all very well to say that they went through a complicated system to arrive at their answer, but I could do it much more easily.

StandUp();
SayLines();

Bam! I have an intelligent, self-aware robot that gets a stupid simple logic problem wrong. Go me.

That's it, we're fucked.

I'll be running off to the nearest underground base, and none of you get my beans! Not one bean!

Well, it's a nice simulation, and the expertly crafted movement system makes it seem even more legit, but this is still just simulating and parroting a situation.
Sadly, it seems we will have many pretenders and even more confused people on the topic of self-awareness, which is very good for the purposes of marketing and selling you bullshit, but not so good for making progress.

Ah, a self-aware robot. Excuse me while I take this news at face value and apply no scepticism to what is ostensibly a massive step forward.

In all seriousness, if this is true, then great, but I've seen far too many embellishments upon these types of stories to trust them any more.

Xeorm:
Wait, what? How does this even make sense? It's done very cutely, I'll grant you, but the test doesn't prove very much, nor does it follow the rules of the supposed puzzle.

So, first, the "puzzle" given to the robots is damned stupid. The rules of the original logic test require three pieces of information: the colors of the other two hats, the knowledge that at least one hat is blue, and the knowledge that the hats can only be blue or white. Given that no robot can know if it can speak until it does, the only logical answer to the puzzle is for each to attempt to speak, saying that it knows the answer. By saying it does not know, the robot is by definition too stupid to really get the puzzle.

Moreover, it's not a good example of the robot's ability to decide. In order for me to accept that the test was given fairly and is an accurate representation of the system, I have to take the producer of the video entirely at his word. It's all very well to say that they went through a complicated system to arrive at their answer, but I could do it much more easily.

StandUp();
SayLines();

Bam! I have an intelligent, self-aware robot that gets a stupid simple logic problem wrong. Go me.

Whilst I will agree with you on most points, one thing you may not be aware of:

The rules of the original logic puzzle actually relied on four pieces of information, or even just one if you want to look at it that way. The King declared that the test would be fair for all the wise men (robots), and as such all three men must be wearing blue hats. Otherwise, at least one of them would be on an unfair footing compared to the others.

However, in this case, as adorable as it is, this is essentially the same as a chatbot. A very advanced chatbot, yes, but a chatbot nonetheless.

If it's self-aware, and can thus tell the difference between itself and anything else, then it is, for all intents and purposes, alive.

But it's probably not. Self-awareness is a feature of the glorified biological nervous systems we call brains, because everything you are is composed of a series of fibers, flesh, and water. No matter how much it parrots and repeats actions, that won't make it self-aware.

It is cute, though.

Cowabungaa:
*cracks knuckles*

Seems like I found myself a thesis subject for my Philosophy MA.

I look forward to reading.

"Self aware robots: A good thing or SMASH IT SMASH IT OH GOD IT'S STILL MOVING GET A BIGGER ROCK!"

Also, good luck with the masters.

Sorry, no good.

I can see the IF-THEN statements going off in its head. If it did not know an answer, then it would say it didn't know. If it was able to make speech, then it would say so. I demand something more spontaneous from our little robots to prove sentience, something along the lines of "Gee, I wonder. Could it be the one who can TALK? This game is both stupid and rigged. Get me off this table. I'm done with this.". Show me something you can't explain with programming, something like going over to the other robots and reactivating their speech or getting a fourth guy for barbershop quartet.

FalloutJack:
Sorry, no good.

I can see the IF-THEN statements going off in its head. If it did not know an answer, then it would say it didn't know. If it was able to make speech, then it would say so. I demand something more spontaneous from our little robots to prove sentience, something along the lines of "Gee, I wonder. Could it be the one who can TALK? This game is both stupid and rigged. Get me off this table. I'm done with this.". Show me something you can't explain with programming, something like going over to the other robots and reactivating their speech or getting a fourth guy for barbershop quartet.

Yeah, this very much appears to be simple logic code commands, but with a complex veneer over it to bamboozle people with the "adorable" response (which obviously has to have been pre-written, recorded, and coded into the robot by actually self-aware people, and probably took 10 times as much code and testing to get right).

To show sentience it really must make decisions on its own. To be honest, it would be a much better test of sentience if the robot went through its logic commands and decided not to solve the logic puzzle because it couldn't be bothered to stand up.
Voice recognition has been programmable since the 90s (probably earlier). Changing which voice the robot recognises from a particular person to the robot itself isn't a boost into self-awareness, in the same way that writing "Air Force One" on your bicycle doesn't make you Barack Obama.
