
Four ways to spot bad AI

Posted by Bala Chandran on Nov 27, 2019 10:23:09 AM


I gave a brief talk at The Beat Live in New York in September on how to spot bad AI. It bugs me that the words AI and machine learning get thrown around so loosely and are rarely scrutinized, especially in travel. And while other industries are starting to have discussions around the ethics of machine learning and AI, travel's conversations around AI have stayed pretty superficial. A version of this was published in The Beat (subscription required).

First off, let's get this out of the way: I am an AI believer. From automating simple cognitive tasks to seeing patterns in data that are humanly impossible to discern, AI is here to change entire industries, including travel. However, when I go to travel conferences, I hear a lot about chatbots and personalization, about how AI is going to transform the industry (see what I did there?), but very little about challenging bad AI. This is my attempt to help separate bad AI from good AI, to help us all focus on the technologies that are truly transformative, and to take control of how AI will be deployed in our industry.

Bad AI comes in several forms, and some of it is not even AI. To assess AI, we need to ask ourselves four questions.

Do You Really Need AI To Do This?

An expense management software company once told me, "We check for outlier behavior on your expense report using AI." AI almost seemed like an afterthought. Whenever I hear AI, the first question I ask is: Do you really need AI to do that? Bookkeeping companies have done this for a long time. Could the same thing be done in a spreadsheet? Is the "AI" just a bunch of hard-coded business rules and logic? I'd estimate that at least half the AI I encounter falls into this category.
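To see why the expense-report example doesn't need AI, here is a toy sketch of the same check as a plain business rule. The rule ("flag anything over three times the category median") and the sample data are invented for illustration, but the point stands: no learning is involved.

```python
from statistics import median

def flag_outliers(expenses, multiplier=3.0):
    """Flag expenses above `multiplier` times the median for their
    category -- a hard-coded business rule, no machine learning."""
    by_category = {}
    for category, amount in expenses:
        by_category.setdefault(category, []).append(amount)
    return [
        (category, amount)
        for category, amount in expenses
        if amount > multiplier * median(by_category[category])
    ]

reports = [("meals", 40), ("meals", 35), ("meals", 42),
           ("meals", 38), ("meals", 400), ("taxi", 25)]
print(flag_outliers(reports))  # [('meals', 400)]
```

A spreadsheet formula could do the same job, which is exactly the test: if a formula matches the "AI," it isn't AI.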

Do You Have The Data To Do What You Want To Do?

Suppose Mezi told me they could help me book a better flight. I'd believe them, because they are sitting on gobs of American Express data. They know everything about me: past purchases, flights I have taken, what I bought on those flights, and so on.

If a rental car company came to me and said, "We have AI to predict which kind of car you want," do they really have that data? Or if an airline said, "I'm going to use AI to predict what kind of seat you want"? They don't. If anything, if they look at my past history—and with a startup budget, I'm always in the middle seat—they don't have the right data to predict what kind of seat I want, and they don't have much contextual data around what else I buy.

Is The Data Biased?

This is where ethics come in. If we're going to make decisions based on data, we need to ensure that the data is clean.

A classic example of data bias comes from criminal justice reform. Some states are trialing AI that gives judges recommendations on an inmate's probability of recidivism, and the systems were found to be inherently biased. The AI concluded that Black people were more likely to commit crimes than others simply because it was learning from bad, biased data.

Another example is healthcare data. If you are trying to predict whether someone is having a heart attack based on their symptoms but train the algorithm on data from men alone, that could be a problem. Men and women can exhibit different symptoms during a heart attack; there is some debate on this, but it reminds us to be careful about the potential biases in underlying data.

Now on to travel. Let's say a recommendation engine tells us that women stay only at hotels and not Airbnbs, because that is what the data shows. The underlying issue isn't that women prefer hotels to Airbnb but that they often choose hotels for safety and security reasons. Rather than fixing that underlying issue, a biased AI would reinforce it, making the recommendation without ever addressing why the pattern exists.
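To make that feedback loop concrete, here is a toy recommender (the booking history is entirely invented) that learns nothing but historical base rates, and so faithfully repeats whatever bias produced those rates:

```python
from collections import Counter

# Invented historical bookings: (traveler_group, property_type)
history = [("F", "hotel"), ("F", "hotel"), ("F", "hotel"),
           ("F", "hotel"), ("F", "airbnb"),
           ("M", "airbnb"), ("M", "hotel"), ("M", "airbnb")]

def recommend(group):
    """Recommend the most common past choice for this group.
    If the past data encodes a bias (e.g. women booking hotels
    for safety reasons), the model repeats it unquestioningly."""
    choices = [prop for g, prop in history if g == group]
    return Counter(choices).most_common(1)[0][0]

print(recommend("F"))  # hotel
print(recommend("M"))  # airbnb
```

Nothing in the model asks *why* the historical choices look the way they do, which is precisely the problem.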

What Is The Worst That Could Happen?

Every machine-learning algorithm is inherently probabilistic, so some level of error is expected. The question is: What happens when the algorithm is wrong?

At Lumo, we predict flight delays. The worst case is that we tell you your flight has a high risk of delay, you change your flight, and your original flight turns out to be on time. That's a bad outcome for sure, but it's not the end of the world as long as you made an educated choice. If I told you there was a 90% chance of a severe delay versus a 20% chance, your decision should take that into account. Being transparent about the likelihood of outcomes and potential errors is important.
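One way to take the probability into account is a simple expected-cost comparison. The dollar figures here are invented for illustration, but the arithmetic is the point: the same rebooking fee is worth paying at a 90% delay risk and not at 20%.

```python
def expected_cost_of_staying(p_delay, delay_cost):
    """Expected cost of keeping the original flight."""
    return p_delay * delay_cost

def should_rebook(p_delay, delay_cost, rebook_fee):
    """Rebook only when the expected delay cost exceeds the fee."""
    return expected_cost_of_staying(p_delay, delay_cost) > rebook_fee

# Say a severe delay costs you $500 (missed meeting, extra hotel
# night) and changing the flight costs a flat $150 fee.
print(should_rebook(0.9, 500, 150))  # True  (expected cost $450)
print(should_rebook(0.2, 500, 150))  # False (expected cost $100)
```

This is why transparency about probabilities matters: the traveler can only do this arithmetic if the prediction comes with a number, not just a vague warning.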

Let's say your hiring decisions are based on AI. The worst case is you're unfairly impacting someone's career or life and denying yourself talent in the process.

In travel, what's the worst case if an airline started using AI for pricing based on everything it knows about a user? You can start introducing biases, regional biases among them, that look like Uber's surge pricing if you're not careful. More recently, Apple launched a credit card with much fanfare but ran into issues because the models that calculate your credit limit weren't transparent.

As the New Distribution Capability evolves and dynamic airline pricing takes off, if every offer is personalized and it's not possible to create them manually, some algorithm is going to generate offers based on who you are, and those offers could end up being inequitable.

So to recap, when you hear AI promising amazing results, be skeptical and ask yourself these four questions:

  1. Do you really need AI to do this?

  2. Do you have the data?

  3. Is the data biased?

  4. What's the worst that could happen?

Tags: machine learning, AI
