Moogsoft CTO Cappelli: hats off to understanding AI understanding
The Computer Weekly Developer Network team spoke this month to Will Cappelli, in his role as CTO for EMEA and VP of product strategy at Moogsoft, on the subject of just how far we need to understand Artificial Intelligence’s ability to understand, reason and make inferences about the world around it.
Moogsoft AIOps is an AI platform for IT operations that aims to reduce the ‘noise’ developers and operations teams experience in day-to-day workload management; the technology aims to detect incidents earlier and fix problems.
Cappelli writes as follows…
Opinion makers and, more unforgivably, academics tend to confuse two very different ideas about the source of AI’s lack of transparency [and how it understands what it understands].
Any complex algorithm (including the ones that run your accounts payable systems, for example) is difficult to understand, not because of any inherent complexity, but because it is made up of hundreds and thousands of simple components put together in simple ways.
The human mind balks at this so-called complexity only because it cannot keep track of so many things at once. AI systems, like most software systems, have this kind of complexity. But some AI systems have another kind of complexity.
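As an illustrative aside (a minimal, hypothetical sketch, not anything drawn from Moogsoft’s own technology), the point can be made with a toy rules engine: each rule is trivially readable on its own, but a system assembled from hundreds of them quickly exceeds what a person can hold in mind at once.

```python
# Hypothetical illustration: each rule is trivially simple, yet a system
# built from hundreds of them quickly outruns human working memory.
from typing import Callable

Invoice = dict  # e.g. {"amount": 1200.0, "days_overdue": 45}

def make_rule(field: str, threshold: float, flag: str) -> Callable[[Invoice, set], None]:
    """One simple component: if a field exceeds a threshold, raise a flag."""
    def rule(invoice: Invoice, flags: set) -> None:
        if invoice.get(field, 0) > threshold:
            flags.add(flag)
    return rule

# Imagine hundreds of these, each individually readable.
RULES = [
    make_rule("amount", 1000, "needs_manager_approval"),
    make_rule("days_overdue", 30, "escalate_to_collections"),
    make_rule("amount", 10000, "needs_cfo_approval"),
    # ... hundreds more simple rules ...
]

def process(invoice: Invoice) -> set:
    """Apply every rule; the aggregate behaviour is hard to hold in one's head."""
    flags: set = set()
    for rule in RULES:
        rule(invoice, flags)
    return flags

# Both the manager-approval and collections flags should be raised here.
print(process({"amount": 1200.0, "days_overdue": 45}))
```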
Neural obsessions
The market is currently obsessed with deep learning networks, i.e. multi-layer neural networks: a 1980s-vintage technology that does a great job of recognising cat faces on YouTube.
Neural networks are notable because no one has yet been able to figure out, from a mathematical perspective, what makes them work so well.
There are hints here and there but, ultimately, there is no way of mathematically showing how neural networks arrive at the results they do.
This is not a question of limited human powers of memory and concentration. This is a question of a seeming lack of basic mathematical structure from which one can infer the effectiveness of neural network mechanics.
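To make that concrete (again as a hedged, self-contained toy, not a description of any production system), the sketch below trains a tiny multi-layer network on the XOR function: every individual component is just a weighted sum passed through a simple nonlinearity, yet the trained weights themselves give no human-readable mathematical account of why the whole thing works.

```python
# A toy multi-layer neural network: simple components, simply composed,
# but the learned numbers explain nothing about *why* the network works.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a function a single layer cannot learn, but two layers can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

lr = 1.0
for step in range(10000):
    # Forward pass: weighted sums followed by a simple nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]]
print(W1)                # the learned weights 'work', but are not self-explanatory
```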
Beyond cat-face recognition
Since many people currently identify AI with deep learning, they slide from a genuine lack of intelligibility, attributable to one very specific way of doing AI, to more general statements about AI transparency.
Now, everything should be made clearer to users but, at the end of the day, how many people who drive cars or use mobile phones can actually tell you how they work? Is it really that important?
I do think that the mathematical lack of intelligibility of neural networks IS a long-term issue (not the fact that users don’t understand their behaviours) and I do harbour the suspicion that their effectiveness beyond cat-face recognition has been way overstated.