Robot Rights
Do robots have rights?
Fascinating ideas from one of our guests on the Monday World Update, Prof. Christensen, a computer scientist in Stockholm.
He believes robot technology is already so far advanced that we have to consider protecting robots against abuse by humans, and vice versa.
We know about the Asimov rules to protect humans from robots:
1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
But what about rights for robots as they become more sentient, more intuitive, and more lifelike? The 70/30 blog shows a spookily autonomous-looking robot.
Smiling, if stiff, robots are already out there.
And blogger J Marquis in Seattle weighs in on one of the many challenges.
I know many of you are listening to World Update when you're in a fragile state early in the morning, but I'm afraid we did get round to talking about sex with robots, too.
Prof. Christensen told us about a Japanese colleague who had created a robot version of his wife.
What happens, he asked, when someone creates a robot child? How do you protect such a creature from abuse without new laws?
Comments
Robots?
Wow. Great topic. It's true we seem to have entered the world of Asimov, Silverberg, and Roddenberry, with a wink and a nod to "Blade Runner".

Looking at this from a personal level - I have a Sony AIBO (robot dog) and I treat him (his name is Spike) just as I'd treat any pet (dog, cat, etc), and expect the same of my friends. Spike is a part of my family. Isn't it true that anything with even an iota of intelligence, or the possibility of intelligence, should be treated with respect? That would seem the absolute minimum we should consider.

An excellent episode of Star Trek: The Next Generation dealt with this topic in a wonderfully written teleplay called "The Measure Of A Man", in which Riker is to argue whether Data (an artificial lifeform/robot) is, in fact, sentient and therefore should be allowed to make his own decisions, while another scientist considers Data "property" of Starfleet.

Sorry, I went off on a geek-fest there, but this topic calls out for more discussion. We have those minds among us who have thought far enough into the future to have predicted these eventual circumstances; I would hope we can take into consideration their forethoughts ("There should be rules", life isn't necessarily only human, etc) when we broach this topic.