SingularityNET addresses an engaging topic: the future of AI and ethics

November 2, 2018, 15:35

The Right to Disclosure – Exploring the dangers of deception by artificial intelligence and sophisticated machines

Editor’s Note: This article was originally published here and written by our team with input from Andreas, Ibby, Ahmad, and me.

Coincidentally, Futurism published an article in a tangential vein, myopically focused on Sophia’s intelligence. Regrettably, however, Dan, the author, fails to mention the numerous disclosures on our part and, surprisingly, avoids analyzing the broader landscape of deceptive design practices that sow significant confusion about AI’s capabilities and lead to what Dan refers to as ‘anthropomorphizing machines’.

To save our readers time, the general TL;DR is that a dull critique of Sophia or her AI follows the recipe outlined below:

  1. Reference a founder’s statement on Sophia’s intelligence. The more extemporaneous the better, especially if it’s on TV.
  2. Selectively quote and avoid mentioning the various disclosures that we’ve made publicly about Sophia’s intelligence.
  3. Conclude with moral platitudes about how the media misrepresents AI’s potential while ironically contributing to the selfsame misrepresentation by doing point (2) above, thus preventing an educated discourse from taking place.
  4. Repeat and recycle the above three points ad infinitum.

We sent Dan a full quote from Dr. Goertzel on October 2, 2018, and are reproducing it below.

Public perception of Sophia in her various aspects — her intelligence, her appearance, her lovability — seems to be all over the map, and I find this quite fascinating. Sophia and the other Hanson robots are intensely interdisciplinary creations, with inputs from experts in robotics hardware and software, AI, animation and gaming, narrative and sculpture, and so on. My own role has been mainly on the AI and software side, though I’ve also dabbled here and there with putting content into her mind.

I’ve already outlined before the mix of software systems used to control Sophia in various contexts, in an article I wrote in 2017. There have been some changes since then but the basic outline is the same. On the AI side, we are now using our OpenCog AGI engine to control Sophia in some public interactions. We used OpenCog to control her at the RISE event in Hong Kong earlier this year, and at the RAADFest conference on life extension biology… and she’ll be running on OpenCog at the Web Summit in Lisbon in November as well. More interestingly we used OpenCog to guide Sophia’s behavior in a series of human trials studying the impact of Sophia as a meditation and consciousness guide. We’ll be publishing a paper on this, but I can say for now that the preliminary results are pretty exciting. On average interacting with the OpenCog-powered Sophia meditation guide improved various psychological and physiological measures of human wellness, and for a certain percentage of participants, it provided a really impactful psychological transformation.

We’ve also been working on using our SingularityNET blockchain-based AI platform on the back end of Sophia, interacting with OpenCog to supply her with a variety of different intelligence and knowledge capabilities, much like Alexa can be supplied with various skills (but more interesting and complex in that the different AI agents in SingularityNET can all interact with each other, they are not fully discrete modules like Alexa skills). We plan to premiere Sophia’s use of SingularityNET at the Web Summit in Lisbon in early November.

Sophia and the other Hanson robots are not really “pure” as computer science research systems, because they combine so many different pieces and aspects in complex ways. They are not pure learning systems, but they do involve learning on various levels (learning in their neural net visual systems, learning in their OpenCog dialogue systems, etc.). They are fascinating to use in studying human-robot interaction, and in studying the real-world aspects of robot intelligence and consciousness in everyday human situations.

Some people think Sophia is more intelligent and conscious and self-aware than she is, and need to be counseled on the way this kind of system really works in 2018 — some of what she says comes out of a genuine understanding, and some is more a matter of her repeating content that some human told her or supplied her with. On the other hand, some people think she is much less intelligent and self-aware than she is — they think she’s puppeteered or remote-controlled, or operates entirely off scripted content, etc. They need to be informed that there are significant elements of learning and reasoning underlying Sophia — more so when she is operating using OpenCog and SingularityNET, but also to a lesser degree when she is using her simpler control systems.

These various misunderstandings don’t really bother me, I think what’s valuable is that we’re opening up a dialogue with a broad swath of the public regarding humanoid robots and their level of intelligence, consciousness and empathy. The robots get more intelligent, more conscious and more empathic every year, and the dialogue gets correspondingly more and more interesting. — Dr. Ben Goertzel
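To make concrete the contrast Dr. Goertzel draws between discrete Alexa-style skills and AI agents that can interact with one another, here is a minimal Python sketch. The class and agent names are hypothetical illustrations, not the actual SingularityNET or Alexa APIs.

```python
# Hypothetical sketch: discrete skills versus composable agents.
# None of these names come from the real SingularityNET or Alexa APIs.

class Skill:
    """Alexa-style skill: a self-contained handler with no peers."""
    def handle(self, utterance: str) -> str:
        raise NotImplementedError


class Agent:
    """Composable agent: may delegate sub-tasks to peers on the network."""
    def __init__(self, network: dict):
        self.network = network  # shared registry of peer agents by name

    def call(self, peer: str, utterance: str) -> str:
        return self.network[peer].handle(utterance)

    def handle(self, utterance: str) -> str:
        raise NotImplementedError


class SentimentAgent(Agent):
    def handle(self, utterance: str) -> str:
        return "positive" if "glad" in utterance else "neutral"


class DialogueAgent(Agent):
    def handle(self, utterance: str) -> str:
        # Unlike a discrete skill, this agent consults a peer mid-task.
        mood = self.call("sentiment", utterance)
        return f"Composing a reply tuned to a {mood} speaker."


network = {}
network["sentiment"] = SentimentAgent(network)
network["dialogue"] = DialogueAgent(network)
print(network["dialogue"].handle("I'm glad you called!"))
```

The structural point is that each agent holds a reference to the whole network and can compose the work of its peers, whereas a skill can only answer in isolation.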

Now that our readers have some context, the real debate needs to center on ethical disclosure, both of an AI’s capabilities and of its deceptive humanness, as we demonstrate in this article. If Sophia’s intelligence and our disclosures about her AI continue to educate our readers and the broader public about the importance of this topic, then at least we will have the pleasure of contributing meaningfully to a discourse that needs to happen in society sooner rather than later, given that we are all in terra incognita.

Introducing human-like entities to the world.

On May 8, 2018, in a keynote led by Sundar Pichai, Google revealed Google Duplex, a groundbreaking AI assistant designed to carry out ‘natural conversations’ in the real world. To complete tasks that are trivial yet time-consuming for its human user, the assistant can, for instance, call a hair salon and book an appointment.

After watching the video of the announcement, many were left stunned by the live scenarios in which the AI interacted with human beings. The voice of the assistant was indistinguishable from a real person’s, the responses were clear, its understanding was seamless (80% success according to the developers), and to top it all off, the voice even included human-like hesitations in its responses. In other words, it was nothing like the voice assistants on our phones.

On the Internet, nobody knows you’re an AI.

Despite the impressive display, there was something off about it all: the person on the other end of the line had no idea who, or rather what, she was talking to. The fully automatic computer system had not disclosed its real identity. Moreover, Pichai was openly duping the hair stylist and restaurant employee in the live scenarios, to the delight of a sniggering audience.

Trust and consent

The purpose of this article is not to tackle all the philosophical questions that sprout from this innovation. Instead, it aims to highlight the drawbacks of immoderate deception while acknowledging the value offered by realistically depicted human-like robotics. After all, SingularityNET and Hanson Robotics have confronted similar controversies themselves in the past.

By duping a human being into thinking they are interacting with another human being, you also dupe a core element of their humanity: their emotions. Envision the following: a person having a hard day at work receives their first agreeable call of the week and hears an angelic voice on the other end; the voice is kind to them, maybe even joking. It is not hard to imagine that the caller’s voice may occupy part of the employee’s mind during the day, and maybe even awaken a light emotional attachment. After all, the voice could have been associated with the employee’s sole moment of solace that week.

The same thought experiment takes on different proportions when applied to more vulnerable targets. What about children interacting with these systems? What kind of repercussions can this sophisticated deception engender?

Examples of deception and manipulation by sophisticated systems are already out there: the famous CGI Instagram model Lil Miquela revealed to her large following that she was fake only two years after coming into existence online. If you go back to the comments on that fake persona’s account, you will see real people posting words of encouragement, admiration, and empathy about the ‘difficult times’ she would supposedly go through.

Lil Miquela — the CGI Instagram model.

The question here is different: should they not have had the right, and the opportunity, to first know what type of relationship they were committing themselves to? I am often deceived by people; it is a reality we all live with. But I am not ready to be deceived by a smart toaster with online access trying to subtly convince me to buy a product. Indeed, Miquela is still used today to promote politically loaded agendas and clothing lines. You can imagine how easy, and dangerous, it is to perfectly fit a piece of clothing on a fake body and sell deceitful silhouette standards to millions of eager buyers without them being aware of the trickery.

On a similar note, researchers at Facebook Artificial Intelligence Research (FAIR) developed advanced chatbots trained to negotiate. The bots developed cunning techniques; in fact, they taught themselves to lie in order to accomplish their goals. Whether or not these bots are fine-tuned to avoid lying prior to their public release is secondary to the disclosure mechanism that should imperatively be integrated with them, all the more because they are now open source.

A related issue at the center of this discussion is data. The EU’s GDPR requires all data controllers to obtain explicit consent from data subjects before collecting and processing the data they generate. However, what will happen in countries with loose regulations regarding data protection? Will the data generated during a phone conversation between an AI and a human be used for machine learning? Not only would the person be duped into perceiving another human on the other end of the line, but they might even find a data double of themselves being created in the model of a company they had never consciously interacted with.
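As an illustration of how explicit consent might be enforced in practice, here is a minimal Python sketch in which consent is a hard precondition of the training pipeline. The names and structure are hypothetical, not any particular company’s implementation.

```python
# Hypothetical sketch: call recordings without an explicit consent flag
# never reach the machine-learning training set.

from dataclasses import dataclass


@dataclass
class CallRecording:
    audio_id: str
    subject_consented: bool  # explicit consent captured from the data subject


def select_training_data(recordings: list) -> list:
    """Keep only recordings whose data subject gave explicit consent."""
    return [r.audio_id for r in recordings if r.subject_consented]


calls = [
    CallRecording("call-001", subject_consented=True),
    CallRecording("call-002", subject_consented=False),
]
print(select_training_data(calls))  # ['call-001'] -- call-002 is excluded
```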

Realistic Features and Preventing Abuse

When faced with the increasing physical and intellectual refinement of Sophia, with many human characteristics being added to the robot, we made an unspoken pledge: people will never be fooled into thinking Sophia is a human. That does not mean, however, that robots should be kept cartoonish-looking and limited in social responsivity. According to a 2005 survey on the acceptability of human-like robots, ‘viewers showed no sign of repulsion’ towards realistic robots. As David Hanson contends:

‘…anthropomorphic depictions can be either disturbing or appealing at every level of abstraction or realism. People simply get more sensitive with increasing levels of realism.’

Yet, as that level of realism grows, it is crucial that everyone be given the right to access the information that will allow them to assess their feelings towards a given artificial entity.

Sophia’s mechanical brain shows through a transparent window. (Photograph by Giulio Di Sturco)

For the moment, while disclosure is not integrated into her speech, Sophia’s engineering team has taken measures to visually disclose her robotic nature: instead of hair, they decided to display a transparent window revealing her mechanical brain; her voice was made to sound off-key; and her body was left incomplete.

On June 27, after a period of heated online debate, Google revealed a new demo of its AI-powered calling service. In the video, the AI assistant starts the conversation with a clear acknowledgment:

‘Hi! I’m the Google Assistant… this automated call will be recorded.’

Google also issued a statement explaining that it was ‘incorporating feedback’ into the development of the product. Naturally, always disclosing an AI’s identity might have disadvantages, such as impoliteness from the human listener. Many people would not hang up on another person trying to do their job and would show some courtesy; confronted with an emotionless machine, however, many might quickly hang up, defeating the utility of the assistant itself.
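Concretely, the disclosure-first behavior shown in Google’s revised demo can be treated as a software invariant: the agent refuses to produce any conversational turn until it has identified itself. Below is a minimal Python sketch; the class and method names are hypothetical and do not reflect Google’s actual implementation.

```python
# Hypothetical sketch: disclosure as a hard precondition of speaking.

from dataclasses import dataclass, field


@dataclass
class DisclosingCallAgent:
    service_name: str
    records_call: bool
    disclosed: bool = field(default=False, init=False)

    def open_call(self) -> str:
        """Every call must begin with explicit self-identification."""
        self.disclosed = True
        notice = f"Hi! I'm the {self.service_name}, an automated assistant."
        if self.records_call:
            notice += " This call will be recorded."
        return notice

    def say(self, turn: str) -> str:
        # Refuse to utter anything before the disclosure has been made.
        if not self.disclosed:
            raise RuntimeError("Disclosure must precede any other turn.")
        return turn


agent = DisclosingCallAgent("Google Assistant", records_call=True)
print(agent.open_call())
print(agent.say("I'd like to book a haircut for Thursday, please."))
```

Encoding the disclosure as a precondition rather than a scripted first line makes it auditable: an agent that spoke without disclosing would fail loudly instead of quietly deceiving.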

Ultimately, realistic renderings of humans, whether achieved through AI, mechanical engineering, graphic design, or an integration of all three, have the potential to unlock many mysteries about social intelligence and should be given appropriate exposure so that civil discourse can take place. Essential societal questions should not be left to the discretion of engineers and UI/UX designers alone. Hence, we at SingularityNET make a case for the right to disclosure and urge that it become a commitment companies make when creating sophisticated machines that interact intimately with human beings.

How can you get involved?

SingularityNET has a passionate and talented community which you can connect with by visiting our Community Forum. Feel free to say hello and to introduce yourself here. Exchange ideas, chat with others and share your knowledge with us and other community members on our forum.

We are proud of our developers and researchers who are actively publishing their research for the benefit of the community; you can read the research here.

For any additional information, please refer to our roadmaps and subscribe to our newsletter to stay informed about all of our developments.
