Lord Clement-Jones: On regulation and ethical and moral dilemmas in artificial intelligence
The potential of artificial intelligence (AI) to revolutionise the business landscape is almost infinite – but there is a huge amount of work to be done before ethical and societal issues are ironed out.
That’s the view of Lord Timothy Clement-Jones, who co-chairs the All-Party Parliamentary Group on AI and who will be speaking at the AI Expo event in Berlin on June 1-2.
The group’s immediate plans may have been somewhat scuppered by the upcoming UK general election on June 8, but two meetings have been held since the inaugural get-together last November. Those present included Professor Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, and Kumar Jacob, CEO of Mindwave Ventures, as well as representatives from the University of Oxford, PwC, EDF Energy, and more.
The first meeting aimed primarily to define AI, but much of the discussion since has centred on ethics and governance. With an extensive background in communications and the law – one of the many hats Lord Clement-Jones wears is as partner of global law firm DLA Piper – it’s a subject that resonates particularly with him, as well as one that is vital to the group as a whole.
“Naturally speaking, a parliamentary group would be particularly interested in the standards, values, governance, ethics, and moral questions that surround AI, as well as the societal implications,” he explains. “If we’re talking in parliamentary terms, parliamentarians are very interested in how and whether we need to have any compulsory or voluntary regulation in this sort of area.”
So what are those ethical and moral questions? Lord Clement-Jones brings up the example of Tay, the public-facing AI chatbot from Microsoft which opened and closed within a week in March last year due to the racist and sexist content it was churning out. One report described it as both “a complete PR disaster” and “artificial intelligence at its very worst” – and it’s a topic which has also been discussed by the group.
“We had one of our speakers raise the issue – why should you think human values are so wonderful? Are we really going to instil human values in our AI? Do we want to?” he says. “If we want to instil the worst aspects of human behaviour, which we seem to be able to do in cases like Tay, or indeed inflict violent behaviour on military robots…we should be thinking about values in a rather different way.”
This naturally leads on to the next question: not how far we could go with artificial intelligence, but how far we should – and it’s a subject Lord Clement-Jones is planning to discuss at AI Expo next month. “I do talk a little bit in my talk about the kinds of skills that are going to be required,” he says. “We’re going to need creative skills, data-using skills, innovation skills, but we may well not need quite so much in the way of analytical skills in the future, because that will be done for us.”
Long term, the solution may lie in ethical advisory boards, albeit with limits – and there is a warning for young people at the start of their working lives.
“The big issue, in this hollowing out, is how you’re going to get experience to get to that sort of situation where you’re valued for your experience,” he explains. “In your middle career, it’s going to be AI that’s going to do so much of the work.
“There’s a real conflict here, and that’s why again I think ethics advisory boards [would] have to draw lines in terms of what they think is appropriate to be done within a business, because change can be as rapid or as infinite as you want.”
Picture credit: http://www.lordclementjones.org