AI in the dock: Are the believers forgetting something?

A hot topic in technology right now is Artificial Intelligence (AI). Many will have seen Google’s recent Duplex demo or the work that IBM has been doing in healthcare. Vendor keynotes have been packed with AI capabilities and customer success stories. However, there are some other stories that should sound a note of caution, and raise important questions: how good is today’s AI and how should we really think about applying it?

There were three pertinent stories in the press at the end of May 2018 that should prompt these questions. One concerns Amazon Alexa, which recorded a couple’s private conversation and sent it to a third party. The other two concern self-driving cars, where failure had far more serious consequences.

Whether it’s Alexa or Uber, there are two key issues that the industry needs to seriously consider.

How capable is Artificial Intelligence today (really)?

I suggest that aspects of it are not that good. Just consider personal assistants like Alexa – others include Google Assistant, Microsoft Cortana and Apple Siri – the experience of using them rarely gives a sense of there being an intelligence behind the scenes.

Yes, the ability to understand speech has evolved considerably and in some cases works remarkably well. However, behind that trick there is in many cases little more than an old-fashioned if-then-else tree based on keywords. That is why they often get the answer wrong: they mostly rely on people saying the right words in the right order.
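To make the point concrete, here is a toy sketch – purely illustrative, not any vendor’s actual code – of what such a keyword-driven if-then-else tree amounts to once the speech has been transcribed:

```python
# Toy illustration of keyword-based "understanding": a chain of keyword
# checks against the transcribed utterance, with no real comprehension.
def answer(utterance: str) -> str:
    words = utterance.lower()
    if "weather" in words:
        return "Today will be sunny."
    elif "set" in words and "timer" in words:
        return "Timer set."
    elif "play" in words and "music" in words:
        return "Playing music."
    else:
        # Anything phrased differently falls straight through.
        return "Sorry, I didn't understand that."
```

Ask it to “set a timer” and it works; ask it to “start a countdown” and it fails – which is exactly the experience of users who don’t say the right words in the right order.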

At Box World Tour in London at the end of May, Box CEO and Co-founder Aaron Levie said that the company was taking its time with AI because it wanted to make sure that the experience was consistent across user interactions. This is a great point. A carefully crafted keynote demo can be fabulous, but in the clumsier reality of life the experience can often be different.

We are in the early days of this technology, so expectations should be relatively low. Against reasonable benchmarks, many AI-based services perform well. Provided that I consider the words I use, my Google Home is a very useful device. The problem is that some in the industry seem to believe that the technology is more capable than it is.

While IBM’s work in healthcare with its IBM Watson technology undoubtedly holds enormous promise, the company has reportedly fallen out with some of its partners because the technology has not delivered to their expectations – expectations they claim were set by IBM.

Is this discrepancy between perceived and actual capability starting to have lethal consequences?

With respect to self-driving cars, the technology clearly is not yet capable. Someone at a major car company told me that unless an area has been HD mapped and the various driving conditions (weather, traffic, etc.) are perfect, their vehicles have little chance of self-driving in a reliable (safe) way. And yet we have test vehicles roaming urban areas as if the industry is a stone’s throw away from realising every ’80s child’s dream of KITT from Knight Rider.

Given this situation, the next question is important for anyone considering deploying an AI solution.

What should customers be thinking about when applying AI today?

This question is less about use cases and more about processes wrapped around AI use. For example, Google’s Duplex demo – where its AI service called a hair salon – was awesome, but within no time there were questions about privacy and a person’s right to know if they are speaking with a machine.

In one of those self-driving car stories, it is reported that the emergency brake on the Uber car that killed a pedestrian had been disabled. This is not a failure of technology but a failure of process. Surely anyone who works in safety-critical industries will find this story (if true) astonishing. Despite dystopian fiction, it is not AI that’s killing people – it is more likely to be people doing dumb things with AI.

It’s not as though the car industry is unaware of risk, regulation and standards. But many of the organisations pushing self-driving are not from that industry – Google, Uber, even Tesla from a certain perspective. Their view of R&D comes from software and ideologies like “fail fast”, except in this case failure can have terrible consequences. Regulators are often behind the game with respect to technology, and governments (which often lack understanding) are desperate not to appear anti-technology, and so risk being led by the believers.

The discrepancy between hope and reality, and the fervour of certain technology companies, is possibly leading to things like basic safety protocols not being applied. If Uber had to disable the emergency brake, then did they let the driver know? Given the lack of attention that the driver was paying to the road it’s possible they did not.

Similarly, Tesla delivered its Autopilot to market in a way that has clearly been misunderstood and misused by customers. In the UK, one individual was recently convicted after being filmed sitting in the passenger seat while his Tesla was driving.

It’s not that Tesla has failed to be clear in public statements about the nature of its Autopilot function, nor that the software doesn’t provide some guidance to users. But the company ignored reality when it launched a capability – one it chose to call “Autopilot” – without considering how it might be misunderstood or misused.

For example, my car warns me if it thinks I’m falling asleep at the wheel. How is it that a cutting-edge Tesla cannot detect when Autopilot is engaged but no one is sitting in the driver’s seat?

The Autopilot name comes from the feature in aircraft that enables them to fly on their own. Hence some people will interpret Tesla’s Autopilot to mean self-drive. But aircraft are flown by trained pilots who have been taught what the autopilot can and cannot do. On commercial flights there are at least two pilots, with one always at the controls. They didn’t just get behind the wheel one day, have a message flashed at them about a new feature called autopilot, and then off they flew.

Perhaps those pushing the boundaries of new technology like AI should look to some of the older organisations and industries that have been managing risk for decades. Safety-critical processes require people familiar with the issues and tested solutions. The software world can sometimes push these people out (we’ve ignored security people for years) in favour of shipping the next cool feature. That’s fine for WhatsApp, but not for a couple of tons of car that can do well over 100mph.

When it comes to AI, we need to be smart

We’re still a long way from the AI of science fiction, and there’s no problem with that: it’s a journey and we are in the early phases. What’s critical is that technology providers and customers understand the capabilities and limitations of today’s AI and put in place the necessary processes to make sure that any solutions operate within sensible and safe parameters.

Decisions made by AI need to be constantly validated and any automated actions measured against risk. For organisations this would be sensible as even in mundane AI scenarios they would surely want to make sure that solutions are delivering the correct outcomes and value. For example, predictive maintenance is not better for a business if the AI is sending engineers out too often. Blind faith in Artificial Intelligence could lead to poor business outcomes and unfortunately in some cases more serious results.
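As an illustration of that kind of validation, here is a minimal sketch – the function and data are hypothetical, not any product’s API – of measuring how often an AI’s maintenance call-outs actually uncovered a fault:

```python
# Hypothetical validation check for an AI predictive-maintenance service:
# compare the assets it flagged against the assets where engineers
# actually found a fault.
def callout_precision(flagged_assets, faulty_assets):
    """Fraction of AI call-outs that uncovered a real fault.

    flagged_assets: list of asset IDs the AI sent engineers to
    faulty_assets: set of asset IDs where a fault was actually present
    """
    if not flagged_assets:
        return 0.0
    hits = sum(1 for asset in flagged_assets if asset in faulty_assets)
    return hits / len(flagged_assets)
```

If the AI flags four assets but only one had a real fault, the precision is 0.25 – three of the four engineer visits were wasted, and blind faith in the predictions is costing the business money.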

Amazon, Uber, Tesla and others (remember Microsoft’s abusive Twitter bot?) are learning some hard lessons. However, these are lessons that I think an increasing number of organisations will need to learn – hopefully without further unfortunate outcomes.

9 June 2018 | Analysis, Analytics, Artificial Intelligence
