Artificial Intelligence: Progress Brings New Ethical Challenges

14 Mar 2019

Nicolas Perony is the Co-founder and CTO of OTO, a startup based in San Francisco and Zurich. OTO’s mission is to allow computers to better understand human emotions by helping them interpret the tremendous amount of information carried in vocal intonation. Nicolas will be speaking at the March 28, 2019 Techdebates.org event in Zurich, discussing whether artificial intelligence (AI) can be trusted in the face of biased data. I had the chance to chat with Nico for a few minutes to get a sneak preview of his thoughts on the topic and some of his views on AI progress in general.

Before we dive into the AI discussion, could you offer a brief overview of your business?

Traditional dialog systems capture a single dimension of a conversation, focusing solely on the words exchanged (Natural Language Processing). OTO is moving beyond speech-to-text, pioneering the first multi-dimensional conversational system with the ability to merge both words and intonation (Acoustic Language Processing).
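
To make the idea of a multi-dimensional representation concrete, here is a toy sketch in Python of how lexical and acoustic features might be combined into a single feature vector. This is not OTO's actual system; the vocabulary, feature names, and values are purely illustrative.

```python
import numpy as np

# Toy sketch only: combine lexical and acoustic features into one vector.
# This is NOT OTO's system; vocabulary, feature names, and values are made up.

VOCAB = ["refund", "thanks", "cancel", "great"]

def text_features(transcript: str) -> np.ndarray:
    """Bag-of-words counts over a tiny illustrative vocabulary."""
    words = transcript.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def acoustic_features(pitch_hz: float, energy_db: float, speech_rate: float) -> np.ndarray:
    """Hypothetical intonation descriptors (mean pitch, loudness, speaking rate)."""
    return np.array([pitch_hz, energy_db, speech_rate], dtype=float)

def fused_features(transcript, pitch_hz, energy_db, speech_rate):
    """A 'multi-dimensional' representation: words and intonation side by side."""
    return np.concatenate([text_features(transcript),
                           acoustic_features(pitch_hz, energy_db, speech_rate)])

# The same words can carry different meaning with different intonation.
calm = fused_features("thanks that is great", pitch_hz=110, energy_db=55, speech_rate=3.1)
tense = fused_features("thanks that is great", pitch_hz=190, energy_db=72, speech_rate=5.0)
print(calm, tense, sep="\n")
```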

When we talk to one another, there is a lot of meaning embedded within our tone. In fact, researchers have found that intonation carries roughly five times as much information as the words alone.

Right now, OTO’s primary applications are within call centers, processing millions of customer conversations to help businesses unlock valuable data from voice interactions. However, our business is not about replacing call center agents; it is about augmenting and enhancing the capabilities of human team members.

AI is in its early days. What can AI do? What can it not do? How is it improving?

Today, when we talk about artificial intelligence (AI), we are primarily talking about machine learning and deep learning. Machine learning is a set of tools and methods for recognizing patterns in data. Deep learning is a subset of machine learning that uses large neural networks, mathematical algorithms loosely inspired by the workings of the human brain. Because these networks contain a very large number of connections, computers supported by very large datasets tend to be quite good at recognizing complex patterns in unstructured data (such as images).
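
As a rough illustration of what "recognizing patterns with a neural network" means in practice, here is a minimal sketch of a tiny two-layer network learning the XOR pattern with NumPy. Real deep learning systems are vastly larger, but the principle is the same.

```python
import numpy as np

# Minimal sketch: a tiny two-layer neural network learning the XOR pattern.
# Real deep learning uses far larger networks and datasets; this only shows the idea.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass through hidden layer and output layer
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (gradient of the cross-entropy loss), then a small gradient step
    dp = p - y
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

print(np.round(p.ravel(), 2))  # typically close to [0, 1, 1, 0] after training
```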

Pattern recognition within a supervised learning construct is what AI is really good at today. Supervised learning is an approach in which labeled examples are used to “teach” the computer how to draw conclusions from data patterns. Today’s AI cannot reason or think abstractly; it is simply very good at sophisticated pattern recognition, and it does not generalize well outside of its prediction domain.
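
Here is a minimal sketch of supervised learning, using scikit-learn purely for convenience (the library choice and data are illustrative assumptions, not something from the interview): the model is trained on labeled examples, recognizes the pattern well inside the range it was taught, and fails outside it.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch of supervised learning: the model is "taught" with labeled examples,
# i.e. inputs paired with the answers we want it to reproduce.
rng = np.random.default_rng(1)
X_train = rng.uniform(0, 10, size=(200, 1))   # inputs seen during training
y_train = 2 * X_train.ravel()                 # labels: the pattern to learn (y = 2x)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Inside the training domain the learned pattern holds up well...
print(model.predict([[4.0]]))    # roughly 8
# ...but the model does not generalize outside that domain: it has never seen
# inputs near 50, so it simply repeats what it learned near the boundary.
print(model.predict([[50.0]]))   # roughly 20, not 100
```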

The more complex the data, the more training data you need, and the more diverse that training data must be. To allow the computer to make decisions on complicated data, such as sounds and images, enormously large data sets are often required. This is one of the weaknesses of AI today.

Unsupervised learning, learning from data without labels, is a focus area of improvement for numerous researchers and practitioners. The AI community is also working on few-shot and zero-shot learning, which allow computers to make decisions or come to conclusions based on just a few examples, or none at all, rather than large data sets.
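
For contrast with the supervised sketch above, here is a minimal sketch of unsupervised learning: a clustering algorithm is given data with no labels at all and still discovers the two groups hidden in it (scikit-learn and the made-up data are illustrative assumptions).

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of unsupervised learning: no labels are given; the algorithm
# has to discover structure (here, two groups) on its own.
rng = np.random.default_rng(2)
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])        # data only, no labels anywhere

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:5], clusters[-5:])        # the two groups end up with different cluster labels
```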

In the case of zero-shot learning, computers learn to infer the solution to a problem without having seen that specific class of solution, by interpolating from similar solutions. For example, zero-shot learning will allow a computer to recognize a house cat after being shown pictures of lions and then being told how house cats are both similar to and different from lions. Researchers are also working to improve reinforcement learning, which allows computers to continuously learn from experience based on a reward function.
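
The lion/house-cat example can be sketched very roughly in code: if classes are described by attributes, a class that was never seen during training can still be recognized by matching predicted attributes against its description. Everything below (attributes, values) is made up for illustration.

```python
import numpy as np

# Toy sketch of the zero-shot idea: classes are described by attributes,
# so a class never seen in training ("house cat") can still be recognized
# by comparing an input's attributes to its description. All values are made up.
ATTRS = ["has_mane", "domesticated", "small_size", "feline_shape"]

class_descriptions = {
    "lion":      np.array([1, 0, 0, 1], dtype=float),  # seen during training
    "house cat": np.array([0, 1, 1, 1], dtype=float),  # never seen, only described
}

def classify(predicted_attributes: np.ndarray) -> str:
    """Pick the class whose description is closest to the predicted attributes."""
    return min(class_descriptions,
               key=lambda name: np.linalg.norm(class_descriptions[name] - predicted_attributes))

# Imagine an attribute detector (trained on lions and other animals) looked at a
# photo and estimated these attribute values; the photo is recognized as a house cat.
print(classify(np.array([0.1, 0.9, 0.8, 0.9])))
```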

What are some of the risks of biased AI data and models?

We must remember that AI, and its algorithms and models, sometimes make bad decisions, sometimes catastrophically so. For instance, the Chicago police were using a “threat score” to determine the individuals most likely to commit a felony. The algorithms used to generate threat scores were found to be rife with “systemic endemic structural racism,” according to a July 2018 article in The Washington Post. Recently, a Tesla operating on Autopilot crashed into a truck that it did not detect, and the Tesla’s driver died.

I do like to remind people that these AI mistakes are not necessarily more frequent or more costly than human mistakes. Emotionally, however, these errors feel so much worse.

Take self-driving vehicle data, for example. There are far fewer lethal accidents per mile caused by self-driving vehicles than by human drivers, yet the deaths caused by driverless cars feel much more catastrophic to us. AI will have to be a great deal safer than human operation for us to accept it. So the question becomes, how much safer?

To my mind, the important question is, how can we take advantage of the increasing power of AI, while avoiding as many negative outcomes as possible? My solution is generally to have a human “in the loop.” Today’s AI serves a greater purpose when augmenting human intelligence than when replacing it. Especially if a decision affects human life or human safety, a human should be “pulling the trigger” in my opinion.
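
One simple way to keep a human "in the loop" is to let the model act on its own only when it is confident, and escalate everything else to a person. A minimal sketch of that pattern (the threshold and the stand-in functions are purely illustrative):

```python
# Minimal sketch of a human-in-the-loop pattern: the model only acts on its own
# when it is confident; everything else is escalated to a person.
# The threshold and the model/handler functions are purely illustrative.
CONFIDENCE_THRESHOLD = 0.95

def decide(case, model_predict, human_review):
    label, confidence = model_predict(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "automated"
    return human_review(case, suggestion=label), "human-reviewed"

# Example with stand-in functions:
toy_model = lambda case: ("approve", 0.80)                 # not confident enough
toy_human = lambda case, suggestion: "reject"
print(decide({"id": 1}, toy_model, toy_human))             # ('reject', 'human-reviewed')
```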

How does bias become entrenched in AI algorithms and models and how can we correct for this?

AI is as biased as the data it is trained on. We have a great many cognitive/judgment biases as humans, and we transfer these to machines. Therefore, a key question is whether computers can be less biased than humans are. In my opinion, for many problems, the answer to that question can be yes.
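
One concrete way such bias shows up is in the training labels themselves. A small sketch of a first-pass audit, comparing the rate of favorable labels across two groups (all data is made up):

```python
import numpy as np

# Sketch: auditing for bias can start with simple checks on the training data.
# Here we compare the rate of favorable labels across two groups. All data is made up.
rng = np.random.default_rng(3)
groups = np.array(["A"] * 500 + ["B"] * 500)
favorable = np.concatenate([
    rng.random(500) < 0.70,   # group A: ~70% favorable labels
    rng.random(500) < 0.45,   # group B: ~45% favorable labels
])

for g in ("A", "B"):
    rate = favorable[groups == g].mean()
    print(f"group {g}: favorable-label rate = {rate:.2f}")
# A model trained on these labels will tend to reproduce the disparity unless it is corrected.
```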

An excellent way to tackle the issue of potential bias is to make the workings of AI algorithms more transparent. Those touched by a given AI technology should have access to a degree of model explainability that offers insight into how an application reaches its decisions.
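
As one concrete example of model explainability, permutation importance asks how much a model's performance drops when each input feature is scrambled; features the model relies on show large drops. A minimal sketch with scikit-learn (the data and feature names are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Sketch of one common explainability tool: permutation importance measures how
# much accuracy drops when each input feature is shuffled. Data and names are made up.
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 3))                       # features: [income, age, noise]
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, importance in zip(["income", "age", "noise"], result.importances_mean):
    print(f"{name}: {importance:.3f}")              # 'noise' should matter least
```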

What does the future hold for AI?

I think a better question is, what do expanding AI capabilities mean for our future? Maybe it’s because I’m in this industry, but I think AI can be even more transformative than, say, electricity. Electricity brought about a decentralization of production in general. Electricity-driven machines replaced huge steam-powered machines, so it was much easier to build things, which led to a revolution in manufacturing. The internet has brought about a generalization in communication that radically changed the way we develop relationships and do business. But I think AI is more than simply a generalization. This technology will allow us to offload or decentralize cognition, representing a pretty fundamental shift in society.

AI will deliver a lot of value to society, but we are going to have an ongoing set of new ethical considerations to deal with. For example, what will the legal status of highly intelligent, autonomous AI workers be? Can these very intelligent machines be owned by people, or will that come to be seen as slavery?

To learn more about rapidly evolving AI technology and the ethical considerations that accompany these advancements, join us for the March 28, 2019 TechDebate in Zurich, Switzerland. Lisa Falco, Director of Data Science at Ava AG, and Nico will continue this fascinating discussion.