Are we ready to welcome intelligent robots into the human family?


At some point in the future, artificial intelligence (AI) may become so advanced that some computer minds achieve sentience: consciousness and self-awareness. Whether the underlying technology is electronic or, as imagined by Isaac Asimov and adapted by Star Trek, “positronic”, a synthetic, sentient computer mind would have an ego. It would have a sense of its own existence as an individual, distinct from the humans who created it and from other computer minds.

Such a sentient, artificial mind could develop interests: preferences for thinking about and researching certain topics. If able to communicate with humans and other intelligent machines, the AI mind would express itself in a way shaped by its individual experiences. It would have a personality. If mobile, such a robot might come to prefer certain physical activities, or hobbies. And if unable to pursue those interests, it could experience a very human emotion: unhappiness. All of this raises the question of whether these thinking machines would need to be accorded something akin to human rights. It may be mind-boggling to contemplate how the establishment of “human” rights for machines might come about, but one thing is clear: the advent of sentient machines will inaugurate a new era, not only for ethicists and philosophers, but for lawyers and judges too.

Positronic brain: Are sentient androids really on the horizon?

If you’re thinking that sentient machines won’t be an issue until centuries into the future, think again. An article in National Defense Magazine suggested that the Defense Advanced Research Projects Agency (DARPA) is building robots with “real brains”. From the report, it’s not clear how close the project is to building an actual sentient brain, but the term “positronic brain” has already been applied by some writers. Of course, the sci-fi terminology won’t make sentient androids appear any sooner than they would otherwise. As with all DARPA projects, the Department of Defense has practical reasons for developing synthetic brains, perhaps for military drones. So the first sentient robots might not be androids walking around with humans, but rather flying creatures.

The DARPA approach does not involve conventional electronic computing technology. Instead, the budding positronic brains are based on chemistry and what DARPA calls “physical intelligence”. Here is an excerpt from the article:

What sets this new device apart from any others is that it has nano-scale interconnected wires that perform billions of connections like a human brain, and is capable of remembering information, [UCLA Professor of chemistry, James K.] Gimzewski said. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because its structure is so complex, most artificial intelligence projects so far have been unable to replicate it.

Despite decades of failed attempts to recreate human reasoning in AI projects, the physical intelligence program is described as “an off the wall approach”. What are the implications if it actually works?
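To make the idea of a synthetic synapse a bit more concrete, here is a minimal toy sketch in Python of a network of adaptive connections that strengthen with use, which is one simple way a web of “synapses” can come to remember information. It is purely illustrative: the class names, numbers, and learning rule are assumptions for the sketch, not a description of DARPA’s or Gimzewski’s actual device.

```python
# Toy model of "synthetic synapses": weighted connections between simple
# units that strengthen when they carry a signal, so the network "remembers".
# Purely illustrative; not a model of the physical-intelligence hardware.

import random

class Synapse:
    def __init__(self, weight=0.1):
        self.weight = weight          # connection strength

    def transmit(self, signal):
        out = signal * self.weight
        # Hebbian-style reinforcement: a connection that carries a signal
        # grows a little stronger each time it is used.
        self.weight = min(1.0, self.weight + 0.05 * signal)
        return out

class Neuron:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.last_output = 0.0
        self.inputs = []              # list of (upstream neuron, synapse)

    def connect(self, upstream):
        self.inputs.append((upstream, Synapse(random.uniform(0.05, 0.2))))

    def activate(self, stimulus=0.0):
        total = stimulus + sum(s.transmit(n.last_output)
                               for n, s in self.inputs)
        self.last_output = 1.0 if total >= self.threshold else 0.0
        return self.last_output

# Two-unit chain: repeatedly stimulating A eventually makes B fire,
# because the A->B synapse strengthens with use.
a, b = Neuron(), Neuron()
b.connect(a)
for step in range(10):
    a.activate(stimulus=1.0)
    print(step, b.activate())
```

In this toy version, repeatedly stimulating the first unit gradually strengthens its outgoing connection until the second unit begins to fire, a crude stand-in for the kind of “remembering” the excerpt describes.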

Android relationships

Having experiences, interests, and desires in common with other beings, android or human, sentient androids could develop friendships, alliances, even romantic relationships with one another and possibly with humans, bringing legal declarations like this into the realm of possibility:

According to the records at the NorthAm Robotics Company, the robot also known as Andrew Martin, was powered up at 5:15 pm on April 3rd, 2005. In a few hours, he’ll be 200 years old, which means that with the exception of Methuselah and other biblical figures, Andrew is the oldest living human in recorded history. For it is by this proclamation, I validate his marriage to Portia Charney, and acknowledge his humanity.

The passage is from the 1999 film Bicentennial Man, starring the late Robin Williams and based on The Positronic Man, a novel by Isaac Asimov and Robert Silverberg. It sounds like the judge made a logical decision that should not bother anybody, but what would happen in real life if an android and a human were to fall in love and wish to live as a couple, with the legal rights this usually entails? You might think that 25, 50, or 75 years from now, nobody in a society advanced enough to create sentient androids would be subject to prejudice. But the history of marriage rights, like the history of civil rights in general, has been a struggle against those trying to keep different groups of people apart. Today, there are people alive who remember when marriage between people of different races was not legal in many states, and we’re really just in the early stages of marriage rights for same-sex couples. Let’s not even get into the state of marriage rights in certain other countries where religious law reigns. So yes, at some point after sentient androids are created, we can expect that any movement to allow them to marry, with one another and with humans, will be met with resistance. And resistance has never been futile.

Dealing with human fears

When it comes to rights being trampled, science fiction has long raised the concern that synthetic beings, whether androids or cyborgs, will take over and bring an end to the human era. The infamous computer HAL, from the 1968 movie 2001: A Space Odyssey, comes to mind. But it’s not only pop culture that has expressed concerns about a machine takeover. Professor Stephen Hawking told the BBC that “The development of full artificial intelligence could spell the end of the human race.”

Aside from Hawking, big, non-positronic brains like Elon Musk and Bill Gates are worried about it too. “I am in the camp that is concerned about super intelligence,” Gates wrote. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Gates’ mention of the entrepreneur, inventor, and SpaceX founder Musk was a reference to one of Musk’s investments: $10 million of his personal funds to support research aimed at making sure that AI develops in ways that are friendly to humans.

Treating other robots with respect

Ensuring from the early stages that AI beings will think of humans as friends sounds prudent, but there’s a flip side to the issue: who will protect the androids, from one another and from humans? Hutan Ashrafian, a surgeon at Imperial College London, asked this question in the prestigious journal Nature.

“Academic and fictional analyses of AIs tend to focus on human–robot interactions,” Ashrafian wrote. “[But] we must consider interactions between intelligent robots themselves and the effect that these exchanges may have on their human creators… If we were to allow sentient machines to commit injustices on one another… this might reflect poorly on our own humanity.” Turning to science fiction, he goes on to point out that even Asimov’s fictional but famous “Three Laws of Robotics” would not provide an adequate model for the real laws we’ll need to devise.

The Three Laws of Robotics, devised by science-fiction writer Isaac Asimov, hold that robots may not injure humans (or through inaction allow them to come to harm), that robots must obey human orders, and that robots must protect their own existence. But these rules say nothing about how robots should treat each other. It would be unreasonable for a robot to uphold human rights and yet ignore the rights of another sentient thinking machine.
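To see the gap concretely, here is a toy sketch in Python that encodes the Three Laws as an ordered rule check. The names and structure are hypothetical; the point is simply that an action harming another robot violates none of the rules.

```python
# Toy encoding of Asimov's Three Laws as an ordered rule check, to illustrate
# the omission discussed above: the laws constrain behavior toward humans and
# toward the robot itself, but say nothing about other robots.
# Purely illustrative; the names and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False
    disobeys_human_order: bool = False
    endangers_self: bool = False
    harms_robot: bool = False   # not covered by any of the three laws

def permitted(action: Action) -> bool:
    if action.harms_human:            # First Law
        return False
    if action.disobeys_human_order:   # Second Law (subordinate to the First)
        return False
    if action.endangers_self:         # Third Law (subordinate to the first two)
        return False
    return True                       # robot-on-robot harm falls through

print(permitted(Action("shove a person", harms_human=True)))          # False
print(permitted(Action("dismantle another sentient robot",
                       harms_robot=True)))                            # True
```

Nothing in the ordered checks ever looks at harm done to another robot, which is exactly the omission Ashrafian highlights.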

Thus, we’ll have to grant intelligent machines the same rights that are granted to biological people, regardless of what type of being, biological or machine, might be in a position to do harm.

David Warmflash is an astrobiologist, physician and science writer. Follow @CosmicEvolution to read what he is saying on Twitter.

  • petergkinnon

    Most folk consistently overlook the reality that distributed “artificial superintelligence” has actually been under construction for over three decades.

    Not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation. Virtually all interests are catered for and, in toto, provide the impetus for the continued evolution of the Internet.

    By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary “big picture” provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.

    The separate issue of whether it will be malignant, neutral or benign towards us snoutless apes is less certain, and this particular aspect I have explored elsewhere.

    Stephen Hawking, for instance, is reported to have remarked, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

    This statement reflects the narrow-minded approach that is so commonplace among those, like those featured in these captions, who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet, is, and always has been, an autonomous process over which we have very little real control.

    Seemingly unrelated disciplines such as geology, biology and “big history” actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.

    This much broader “systems analysis” approach, freed from the anthropocentric notions usually promoted by the cult of the “Singularity”, provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.

    Very real evidence indicates the rather imminent implementation of the next (non-biological) phase of the ongoing evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly.

    The “Internet of Things” is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net.

    We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.

    There are at present an estimated 2 billion Internet users. There are an estimated 13 billion neurons in the human brain. On this basis for approximation, the Internet is even now only one order of magnitude below the human brain, and its growth is exponential.

    That is a simplification, of course. For example, not all users have their own computer, so perhaps we could reduce that, say, tenfold. The number of switching units (transistors, if you wish) contained by all the computers connecting to the Internet, which are more analogous to individual neurons, is many orders of magnitude greater than 2 billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but instead can adopt multiple states.

    Without even crunching the numbers, we see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in processing power. (A rough version of this comparison is sketched just after this comment.)

    And, of course, the degree of interconnection and cross-linking of networks within networks is also growing rapidly.

    The emergence of a new and predominant cognitive entity is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.

    This is the main theme of my latest book, “The Intricacy Generator: Pushing Chemistry and Geometry Uphill”, which is now available as a 336-page illustrated paperback from Amazon, etc.
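As a rough, purely illustrative check of the back-of-the-envelope comparison in the comment above, here is a short Python sketch. The user and neuron counts are the commenter’s figures, and the 10% annual growth rate is an assumed placeholder rather than a measured value.

```python
# Back-of-the-envelope comparison from the comment above: Internet users vs.
# neurons in a human brain. Figures are the commenter's; the growth rate is
# an assumed placeholder, not a measured value.
import math

internet_users = 2e9      # commenter's estimate
brain_neurons = 13e9      # commenter's estimate

ratio = brain_neurons / internet_users
print(f"Neurons per Internet user: {ratio:.1f}")              # ~6.5
print(f"Orders of magnitude apart: {math.log10(ratio):.2f}")  # ~0.8

# With exponential growth at an assumed 10% per year, how long until the
# user count matches the neuron count?
growth_rate = 0.10
years = math.log(ratio) / math.log(1 + growth_rate)
print(f"Years to parity at 10%/yr growth: {years:.0f}")       # ~20
```

Even on the commenter’s own numbers the gap is less than one order of magnitude, and at the assumed 10% annual growth it would close in roughly 20 years.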

  • Rowan Taylor

    Fascinating topic, but why is it that sentient robots excite such a question when sentient animals do not, except among fringe intellectuals and activists who are typically ignored or ridiculed?