
Blind faith in AI risks flawed decisions

AI is the new snake oil. Mathematician and TV presenter Hannah Fry believes humans should be cautious when using algorithms in critical decision-making

Hannah Fry, lecturer in the mathematics of cities at UCL's Centre for Advanced Spatial Analysis, believes humans should not put all their trust in algorithms.

Speaking in IP Expo's Digital Transformation stream, Fry demonstrated how a music algorithm developed by David Cope could fool listeners into thinking its output was genuine Bach.

The audience at the show were split 50:50 when asked whether the piece of music being played had been created by the algorithm or was genuine Bach.
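Cope's system works by analysing a composer's existing scores and recombining their patterns into new pieces. As a rough illustration of that general idea only (Cope's software is far more sophisticated), the sketch below trains a first-order Markov chain on a toy melody and walks it to generate new note sequences; the note names and training data are invented for the example.

```python
import random

# Toy training melody (note names are illustrative, not real Bach).
melody = ["C", "E", "G", "E", "F", "D", "G", "C", "E", "G", "A", "F", "D", "C"]

# Build first-order Markov transitions: which notes follow which.
transitions = {}
for a, b in zip(melody, melody[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start="C", length=16, seed=42):
    """Walk the transition table to produce a new melody in the same 'style'."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions.get(note, melody))  # fall back on a dead end
        out.append(note)
    return out

print(" ".join(generate()))
```

Because every transition in the output was observed in the training material, the result inherits the source's local style, which is why listeners struggle to tell recombination from original composition.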

People's inability to distinguish an algorithmic decision from a human one could have profound consequences if they do not take into account the machine's limitations.

“We are seeing machines match us,” said Fry. However, algorithms can be unpredictable.

“We have to accept algorithms will make mistakes. No matter how you train them, they don’t see the world in the same way as us, for example in image recognition.”

As an example, Fry showed how an image recognition algorithm on Microsoft Azure wrongly identified a landscape photograph as a landscape with cows, even though no farm animals were in the image. Shown a picture of a child holding a lamb, the algorithm assumed the lamb was a stuffed toy.
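Image tagging of this kind is exposed through Azure's Computer Vision REST API, so readers can reproduce the experiment themselves. The sketch below shows roughly what such a call looks like; the endpoint, key and image URL are placeholders, and the API version may differ from the one Fry demonstrated.

```python
import requests

# Placeholders: substitute your own Azure resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

def tag_image(image_url):
    """Ask the Computer Vision 'analyze' endpoint for descriptive tags."""
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    resp.raise_for_status()
    # Each tag carries a confidence score; the mistakes Fry showed were
    # high-confidence tags for things not actually present in the image.
    return [(t["name"], t["confidence"]) for t in resp.json().get("tags", [])]

print(tag_image("https://example.com/landscape.jpg"))
```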


While these may seem trivial examples, Fry warned: “When you put flawed machines in a position of authority, something will go wrong.”

She also used her talk to touch on the subject of ethics, especially as machines are increasingly being used by people in authority to make decisions. Often, these people put blind faith in the algorithm.

Recalling a talk she gave in Berlin about a project with the Met Police to identify how rioters would move around London during the 2011 riots, Fry said: “You can’t just put algorithms on a shelf and say ‘that’s great’. You have to think how they will be used by people.”

Pointing to the ethics of the algorithm, she said software to pick up unruly behaviour could be applied to silence legitimate demonstrations.
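For illustration of the kind of modelling involved, one standard way to estimate where people travel to, in riot modelling as elsewhere, is a spatial-interaction or "gravity" model, in which a site's pull grows with its attractiveness and decays with distance. The minimal sketch below uses invented sites and numbers; it is not presented as the actual Met Police model.

```python
import math
import random

# Hypothetical target sites: (name, attractiveness score, distance in km).
sites = [("Croydon", 9.0, 2.0), ("Clapham", 7.0, 5.0), ("Ealing", 5.0, 9.0)]

def choice_probabilities(sites, beta=0.5):
    """Gravity model: P(site) is proportional to attractiveness * exp(-beta * distance)."""
    weights = [a * math.exp(-beta * d) for _, a, d in sites]
    total = sum(weights)
    return {name: w / total for (name, _, _), w in zip(sites, weights)}

probs = choice_probabilities(sites)
for name, p in probs.items():
    print(f"{name}: {p:.2f}")

# Simulate one individual's choice of target.
rng = random.Random(0)
target = rng.choices(list(probs), weights=list(probs.values()))[0]
print("chosen target:", target)
```

The ethical point stands regardless of the specific model: the same machinery that predicts where rioters will go could equally be pointed at lawful protesters.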
