Do AIs make ethical decisions?

By: Kieran Penfold

23 November 2018

Categories: IoT


Written by – Emrah Gultekin

The science fiction writer Isaac Asimov laid out rules for robots with the “Three Laws of Robotics” in his 1950 collection I, Robot. Ideas in this realm have been common among intellectuals and scientists for quite some time. The issue now is that these hypothetical cases are slowly becoming reality as AI trickles into our lives. So how are these ethical issues addressed today?

Despite all the hype surrounding AI, there is very little information for the general public on how it really works. Unfortunately, the tech industry named this next generation network and database tool “artificial intelligence,” which makes it even more confusing. AI theories and algorithms have been around for decades. So, what’s changed? What’s all the hype about? In short, compute power has increased astronomically with GPUs (graphics processing units, as opposed to CPUs, central processing units), and open source frameworks have been created and made public to developers over the past few years. A very simplified description of this next generation network and database tool is that it extracts features, converts them to vectors (numbers) in layers, and stores them for easy recall. To complicate things further, we call these models “neural networks”, subliminally implying a brain and some biology, which is also misleading. In fact, it’s a machine, just like the software on your computer, in your car and on your phone today. The only difference is that it works on a more complicated set of assumptions. Those assumptions and the training are shaped by humans and by the experiences of the system. Here is where the ethics issue becomes a real problem.
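
To make the “extracts features, converts them to vectors in layers” description a little more concrete, here is a minimal sketch in plain NumPy. Everything in it is a hypothetical illustration (the layer sizes, the tiny dictionary acting as a “database”, the similarity-based recall), not a description of any real product:

```python
import numpy as np

# Two tiny "layers": each one multiplies its input by a weight matrix and
# applies a non-linearity, turning a raw input into a short vector of numbers.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))   # layer 1 weights (sizes chosen arbitrarily)
W2 = rng.standard_normal((4, 3))   # layer 2 weights

def embed(x):
    """Convert a raw input (length-8 array) into a 3-number feature vector."""
    h = np.tanh(x @ W1)            # layer 1 output
    return np.tanh(h @ W2)         # layer 2 output: the stored "features"

# "Store for easy recall": keep each item's vector in a simple lookup table.
database = {
    "image_A": embed(rng.standard_normal(8)),
    "image_B": embed(rng.standard_normal(8)),
}

def recall(query):
    """Return the stored item whose vector is most similar to the query's."""
    q = embed(query)
    def similarity(v):
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    return max(database, key=lambda name: similarity(database[name]))

print(recall(rng.standard_normal(8)))   # e.g. "image_A"
```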


Artificial intelligence is a very broad field, but when I say artificial intelligence I’m referring mainly to machine learning and deep learning, because these are the overarching techniques today. There are two major components of artificial intelligence: training and predictions.

For instance, say your developers have created an AI platform (sort of like an empty brain) that doesn’t know how to do anything yet. You want to teach it to do something: to recognize broken tibia bones. So, you have a list of tibia fracture types along with corresponding x-rays and images. You feed your system the images and then it learns how to recognize the bones and fractures, right? Yes, to a certain extent. As long as you feed it the objects you’ve trained it for, it will do a pretty good job recognizing them. But if you feed it an x-ray of a fractured humerus, it’s going to think it’s a type of tibia fracture, because that’s all it has seen for all its existence. So, there you have the first problem with training – bias. Bias comes in two main forms. One is de facto, based on the model you’ve trained and the context thereof. In other words, since you’ve focused on training the AI for a certain task, by definition you’ve omitted other possible tasks. This creates bias. The second bias is due to the type of data you’ve fed it. For example, you may have good quality data for tibia fractures, but not such great data for cuneiform fractures. This creates a bias for the AI to put more weight on tibia fractures, because it’s been trained better on them. Now imagine this across thousands of tasks, especially ones that are critical in life and death matters, such as in autonomous vehicles.
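
As a toy illustration of this first kind of bias, the sketch below (with made-up class names and scores) shows how a classifier that only knows tibia fracture types has no choice but to map any input, even a humerus x-ray, onto one of those classes:

```python
import numpy as np

# Hypothetical class list: the model was only ever taught tibia fracture types.
CLASSES = ["transverse_tibia", "oblique_tibia", "spiral_tibia"]

def softmax(scores):
    """Turn raw scores into probabilities that always sum to 1."""
    z = np.asarray(scores, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(raw_scores):
    """The classifier can only ever answer with one of the classes it knows."""
    probs = softmax(raw_scores)
    best = int(np.argmax(probs))
    return CLASSES[best], float(probs[best])

# Feed it (made-up) scores produced by a humerus x-ray: the scores are weak and
# meaningless, but the model still confidently picks the nearest tibia class.
label, confidence = predict([0.2, 0.1, 0.4])
print(label, round(confidence, 2))   # spiral_tibia 0.39
```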


So how do companies address the biases in training today? First, they make sure that once a model is trained, it’s only deployed and used within context. For example, you do not feed a humerus x-ray to a tibia fracture model. This sometimes involves humans in the loop at initial deployment, along with other automation techniques. Second, they make sure the training data is clean, standard and sufficient across concepts. This requires a lot of human involvement, and often lots of experts in the field. Third, they experiment with different types of neural net layers to see which ones work best for that particular task.

The second component of AI is predictions. Once you have a model trained, you need it to make predictions. The problem with predictions is accuracy, or false positives. If you’ve selected the right framework, created the platform correctly, selected your data and trained everything properly during the training phase, your output should be reasonably coherent. However, you still need to establish thresholds. The higher the threshold, the fewer false positives you will get, but at the cost of more false negatives. Depending on the application of the model, the thresholds will be key in resolving accuracy when making the predictions.
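
A minimal sketch of what such a confidence threshold can look like in code is shown below; the function, the 0.9 cut-off and the example predictions are all assumptions made for illustration:

```python
def apply_threshold(predictions, threshold=0.9):
    """Split predictions into ones we act on and ones we defer to a human.

    predictions: list of (label, confidence) pairs coming out of a model.
    Raising the threshold cuts false positives, but more borderline (and
    some genuine) cases get deferred or missed as a result.
    """
    accepted, deferred = [], []
    for label, confidence in predictions:
        if confidence >= threshold:
            accepted.append((label, confidence))
        else:
            deferred.append((label, confidence))
    return accepted, deferred

# Made-up model outputs: only the 0.97 prediction clears the 0.9 threshold,
# so the other two go to a human reviewer instead of being acted on.
preds = [("transverse_tibia", 0.97), ("oblique_tibia", 0.62), ("spiral_tibia", 0.41)]
accepted, deferred = apply_threshold(preds, threshold=0.9)
print(accepted)   # [('transverse_tibia', 0.97)]
print(deferred)   # [('oblique_tibia', 0.62), ('spiral_tibia', 0.41)]
```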


You will notice that all these issues are adjustable by humans at any given moment, so there’s nothing to fear. It’s like your computer or phone: you can adjust it, as long as you know how and are not confused by all the settings. But here’s the critical part: how are humans supposed to know when something has gone wrong? Let’s say an autonomous vehicle skipped a stop sign because the sign was covered in ice and wasn’t very legible. Unless the vehicle was involved in an accident, you may never correct that mistake. The training data never had an image of an ice-covered stop sign, so the AI didn’t recognize it. A feedback loop needs to be generated, both by the AI and by humans, to send information back to the system to use in re-training. This is being done to some degree, but it is nowhere near as effective and universal as it needs to be. The AI has to know what it doesn’t know, and report back when it makes a mistake. That’s difficult.
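
One simple way to picture such a feedback loop is a queue of cases flagged either by the AI itself (low confidence) or by a human correction, kept for re-training. The sketch below is purely illustrative; the file name, field names and 0.5 cut-off are assumptions, not a description of any deployed system:

```python
import json

RETRAIN_QUEUE = "retraining_queue.jsonl"   # hypothetical file name

def log_for_retraining(example_id, prediction, confidence, human_label=None):
    """Append a case to a re-training queue.

    Two triggers feed the loop: (1) the AI flags inputs it is not confident
    about, and (2) a human reports a correction after the fact, like the
    ice-covered stop sign the model failed to recognize.
    """
    record = {
        "example_id": example_id,
        "prediction": prediction,
        "confidence": confidence,
        "human_label": human_label,                  # set when a person corrects it
        "needs_review": confidence < 0.5 or human_label is not None,
    }
    with open(RETRAIN_QUEUE, "a") as f:
        f.write(json.dumps(record) + "\n")

# AI-side trigger: the model was unsure what it saw in this camera frame.
log_for_retraining("frame_0041", "no_sign_detected", 0.31)
# Human-side trigger: a reviewer relabels the same frame after the near-miss.
log_for_retraining("frame_0041", "no_sign_detected", 0.31, human_label="stop_sign")
```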


Another obvious ethical challenge in AI is privacy. Where do you get your training data from? Sky News advertised that it was using facial recognition during Harry and Meghan’s wedding. The first reaction from the community was: where did they get the training data, and did they get consent from the people they were recognizing? It’s not only a privacy issue but also a legal issue. For example, some US laws currently restrict companies from collecting people’s biometric data without their consent, but the definition of biometric data is quite vague. The technology has gotten so good that a simple portrait is enough to extract the features needed to recognize a person, and many images are public, including those stored in your social media profiles. Does that qualify as biometric data? It’s up for debate. Nevertheless, we’ll see regulation and enforcement evolve over time on privacy and related issues, but as always, expect technology to be several steps ahead of regulation.

To go back and answer the question: do AIs make ethical decisions? Yes, we can say AIs make “ethical” decisions, insofar as they get their decision-making tendencies from their trainers. In other words, an AI will most resemble the people building and training it, so in effect it will be a crude reflection of those people’s biases and beliefs. However, these tendencies are not written in stone and can be changed at any time. In fact, compared to an adult human, who will have a harder time changing their behavior past a certain age, it’s not that difficult to change or even incapacitate an AI today.

How do we make sure there are more good AIs than bad AIs? It’s going to come down to two major factors (in no particular order): 1) I hate to say it, government regulation, and 2) market components. Regulation will need to make sure the right rules are in place so that an AI that is breaking the law can be shut down by court order. The owner, manufacturer or trainer could be penalized as well. This will not be easy to enforce, so we always hope the market fixes itself. In this case there could be a market failure of consequential size, but probably not. Market components definitely need to become standard, which has a lot to do with companies reaching scale and high market share earlier in the cycle. Pre-trained components will have a lot of these “danger” or “ethics” controls built into them, because that’s what the market will demand, and they will be very difficult to crack. For example, you wouldn’t be able to accidentally train your AI to be a thief, a shooter or a stalker just because you’ve watched a lot of bad stuff. But you should be careful how you train yourself!

In conclusion, is there anything to really fear? I don’t think so, since people are still in charge. Some of you may think that’s exactly the problem. Remember, we are facing a lot of issues and challenges today; if AI can help us overcome some of them, it’ll be a net positive. It’s important to understand how AI fundamentally works, and we need to try to anticipate where the problems and opportunities will arise in the future. Humans have been obsessed with physical robots, as in I, Robot, for generations, but most robots of the future will live in the cloud and become important extensions of our capabilities and expressions here on earth.
