What The Hell - Rights for Everyone
And make sure RoboCop has a good pension while we're at it.
(Gizmodo) - Films and TV shows like Blade Runner, Humans, and Westworld, where highly advanced robots have no rights, trouble our conscience. They show us that our behaviors are not just harmful to robots—they also demean and diminish us as a species. We like to think we’re better than the characters on the screen, and that when the time comes, we’ll do the right thing, and treat our intelligent machines with a little more dignity and respect.
With each advance in robotics and AI, we’re inching closer to the day when sophisticated machines will match human capacities in every way that’s meaningful—intelligence, awareness, and emotions. Once that happens, we’ll have to decide whether these entities are persons, and if—and when—they should be granted human-equivalent rights, freedoms, and protections.
We talked to ethicists, sociologists, legal experts, neuroscientists, and AI theorists with different views on this complex and challenging idea. It appears that when the time comes, we're unlikely to reach full agreement. Here are some of those arguments.
We already attribute moral accountability to robots and project awareness onto them when they look super-realistic. The more intelligent and lifelike our machines appear to be, the more we want to believe they're just like us—even if they're not. Not yet.
But once our machines acquire a base set of human-like capacities, it will be incumbent upon us to look upon them as social equals, and not just pieces of property. The challenge will be in deciding which cognitive thresholds, or traits, qualify an entity for moral consideration, and as a consequence, social rights. Philosophers and ethicists have had thousands of years to ponder this very question.
“The three most important thresholds in ethics are the capacity to experience pain, self-awareness, and the capacity to be a responsible moral actor,” sociologist and futurist James Hughes, the Executive Director of the Institute for Ethics and Emerging Technologies, told Gizmodo.
“In humans, if we are lucky, these traits develop sequentially. But in machine intelligence it may be possible to have a good citizen that is not self-aware or a self-aware robot that doesn’t experience pleasure and pain,” Hughes said. “We’ll need to find out if that is so.”
Read More . . .