AI In Code Series: Kore.ai - The intuitive ontology in smart bots
We all use Artificial Intelligence (AI) almost every day, often without even realising it: a large proportion of the apps and online services we connect with have a degree of Machine Learning (ML) and AI in them, providing predictive intelligence, autonomous internal controls and smart data analytics designed to make the end-user interface (UI) a more fluid and intuitive experience.
That’s great. We’re glad the users are happy and getting some AI-goodness. But what about the developers?
What has AI ever done for the programming toolsets and coding environments that developers use every day? How can we expect developers to develop AI-enriched applications if they don’t have the AI advantage at hand at the command line, inside their Integrated Development Environments (IDEs) and across the Software Development Kits (SDKs) that they use on a daily basis?
What can AI do for code logic, function direction, query structure and even for basic read/write functions… what tools are in development? In this age of components, microservices and API connectivity, how should AI work inside coding tools to direct programmers to more efficient streams of development so that they don’t have to ‘reinvent the wheel’ every time?
This Computer Weekly Developer Network series features a set of guest authors who will examine this subject. This post comes from Prasanna Arikala, CTO at Orlando, Florida-based intelligent virtual assistant company Kore.ai.
Arikala explains how his team approaches the task of understanding user intent. They use a machine learning model-based engine, a semantic rules-driven model and a domain taxonomy and ontology-based model. Arikala writes as follows…
The thing about metaphysical ontology
This combination of models helps bot developers significantly, letting them build bots that can engage in complex human conversation with almost no coding required, as explained below.
Let’s look at Intelligent Dialogue Turn Management first. We work to empower virtual assistants to handle virtually all nuances related to human conversations, including interruptions, clarifications and more. Here, developers gain control in defining the ‘dialogue turn’ and context switching experience for users.
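To make that concrete, here is a minimal, purely illustrative sketch (hypothetical names, not the Kore.ai API) of how a developer-defined turn policy might decide whether a fresh utterance continues the current task, puts it on hold, or switches context:

```python
# Hypothetical sketch of a developer-defined dialogue turn policy.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DialogueState:
    active_intent: Optional[str] = None        # task currently in progress
    slots: dict = field(default_factory=dict)  # data collected so far

def on_user_turn(state: DialogueState, detected_intent: str, policy: dict) -> str:
    """Return the turn behaviour: 'continue', 'hold', 'switch' or 'ask_user'."""
    if state.active_intent is None or detected_intent == state.active_intent:
        return "continue"                      # same task: keep filling slots
    behaviour = policy.get(state.active_intent, "ask_user")
    if behaviour == "switch":
        state.active_intent = detected_intent
        state.slots.clear()                    # developer opted for a hard switch
    return behaviour

state = DialogueState(active_intent="book_flight")
print(on_user_turn(state, "check_weather", {"book_flight": "hold"}))  # 'hold'
```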
Next, let’s look at Hold and Resume power. Developers need to be able to develop virtual assistants that can account for hold and resume functions – this will allow users to pause a task, start and complete another task… and then seamlessly return to the original task – all without losing important contextual data and conversation continuity.
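One common way to implement hold and resume, sketched below as an assumption rather than as Kore.ai's internal design, is a stack of paused tasks that preserves each task's contextual data:

```python
class TaskStack:
    """Holds paused tasks so an interrupted task can be resumed intact."""

    def __init__(self):
        self._stack = []   # each entry: (intent_name, collected_slots)

    def hold(self, intent, slots):
        """Pause the current task, preserving its contextual data."""
        self._stack.append((intent, dict(slots)))

    def resume(self):
        """Return to the most recently paused task, context intact."""
        return self._stack.pop() if self._stack else None

# Usage: user pauses "book_flight" to run another task, then returns.
stack = TaskStack()
stack.hold("book_flight", {"from": "London", "to": "Mumbai"})
# ... "check_weather" task runs to completion ...
intent, slots = stack.resume()   # -> "book_flight" with its slots intact
```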
Now let’s look at Multiple Intents. Platform capabilities in this space need to be able to handle multiple intents at the dialogue management layer. This eliminates the need for users to carefully communicate one command, or intent, at a time – which in turn makes the chatbot sound less robotic and stale and more like a real person.
Example: when two intents are given in one sentence, such as “Book a cab for me from airport to home” and “Book a ticket from London to Mumbai”, the bot developer need not tell the software he or she is developing which event has to occur first. The platform intelligence takes care of the flight ticket booking first and then the cab booking accordingly; the end user remains blissfully unaware, but completely served.
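A rough illustration of the idea, with hypothetical intent names and a deliberately simple priority rule standing in for the platform's dependency reasoning:

```python
# Hypothetical sketch: ordering co-occurring intents so that dependent
# tasks run last (the cab timing depends on the flight booking).
INTENT_PRIORITY = {"book_flight": 0, "book_cab": 1}

def plan_intents(detected: list) -> list:
    """Order intents detected in one utterance by execution priority."""
    return sorted(detected, key=lambda i: INTENT_PRIORITY.get(i, 99))

# "Book a ticket from London to Mumbai and book a cab from airport to home"
print(plan_intents(["book_cab", "book_flight"]))  # ['book_flight', 'book_cab']
```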
Let’s turn to Context Handling: context handling is an AI platform’s ability to understand complex contexts. With enhanced context management, bots can now carry context over across intents involving both dialogue tasks and knowledge collection. Even in the case of timeouts or conversations on hold, if the user begins a fresh session and asks about an old intent, or context from a previous session, the bot can remember it.
Developers need not code for that or define any special scenarios; the platform’s intelligence should understand the context and act accordingly.
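As a hedged illustration of what the platform does on a developer's behalf, here is a toy context store, with a hypothetical on-disk layout, that lets a fresh session recall a previous session's intent and context:

```python
import json
import pathlib

CONTEXT_DIR = pathlib.Path("bot_context")   # hypothetical on-disk store
CONTEXT_DIR.mkdir(exist_ok=True)

def save_context(user_id: str, context: dict) -> None:
    """Persist conversation context beyond the current session."""
    (CONTEXT_DIR / f"{user_id}.json").write_text(json.dumps(context))

def recall_context(user_id: str) -> dict:
    """On a fresh session, restore whatever the previous session knew."""
    path = CONTEXT_DIR / f"{user_id}.json"
    return json.loads(path.read_text()) if path.exists() else {}

# After a timeout, a new session can still answer "what about my booking?"
save_context("user42", {"intent": "book_flight", "to": "Mumbai"})
print(recall_context("user42"))   # {'intent': 'book_flight', 'to': 'Mumbai'}
```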
For Digression Handling: programmers need to be able to use sentiment analysis capabilities to design sentiment thresholds. For example, if user sentiment turns negative, dropping below 3.0 on the platform’s scale, it triggers the bot to hand off to a live agent.
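In code, such a threshold rule is about this simple; the sketch below assumes the sentiment score arrives from the platform's sentiment engine and borrows the 3.0 threshold from the example above:

```python
NEGATIVE_THRESHOLD = 3.0   # from the example above; the scale is illustrative

def route_turn(utterance: str, sentiment_score: float) -> str:
    """Digression handling: escalate when sentiment crosses the threshold."""
    if sentiment_score < NEGATIVE_THRESHOLD:
        return "handoff_to_live_agent"
    return "continue_with_bot"

print(route_turn("This is useless, I want a refund!", 1.8))
# -> 'handoff_to_live_agent'
```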
A bot discussion, in any language
Now for auto language detection and spelling/grammar correction: users often initiate a discussion in whatever language they are comfortable with, and it is tedious for a bot developer to program the bot to detect that language and respond accordingly. Moreover, users may make spelling errors, typos, or even grammatical errors. We have built our platform to offer in-built multi-engine Natural Language Processing (NLP) intelligence to understand and auto-detect/correct such errors.
Developers need not worry about how to handle such deviations.
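For a sense of what is being handled on the developer's behalf, here is a sketch using the open-source langdetect and pyspellchecker packages purely as stand-ins for the platform's built-in multi-engine NLP:

```python
# Sketch of the drudgery the platform spares developers from: detecting
# the user's language and correcting spelling before intent detection.
from langdetect import detect            # pip install langdetect
from spellchecker import SpellChecker    # pip install pyspellchecker

spell = SpellChecker()

def normalise(utterance: str):
    """Return the detected language code and a spell-corrected utterance."""
    language = detect(utterance)         # e.g. 'en', 'fr', 'de'
    corrected = " ".join(spell.correction(w) or w for w in utterance.split())
    return language, corrected

print(normalise("plese bok a flght to Mumbai"))
# ('en', 'please book a flight to Mumbai')  -- corrections depend on dictionary
```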
Turning to Testing and Debugging: one of the biggest challenges for bot developers is testing the bot and debugging failed intent recognition or responses.
We thought about this when designing the Kore.ai Platform, so it has in-built capabilities to trace sentence detection and pinpoint where the drop-off happened. With its AI capabilities, the platform offers a regression testing module, which runs automatically to ensure new changes have not disturbed previously loaded intents, workflows, or training.
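As an illustration of what such a module automates, here is a minimal intent regression harness; detect_intent is a stand-in for whatever NLU engine the bot actually uses:

```python
# Minimal sketch of intent regression testing: assert that previously
# trained utterances still resolve to the expected intents after a change.
REGRESSION_SUITE = [
    ("Book a ticket from London to Mumbai", "book_flight"),
    ("Book a cab from airport to home",     "book_cab"),
]

def run_regression(detect_intent):
    """Return one report line per utterance; failures flag intent drift."""
    report = []
    for utterance, expected in REGRESSION_SUITE:
        actual = detect_intent(utterance)
        status = "PASS" if actual == expected else f"FAIL (got {actual})"
        report.append(f"{status}: {utterance!r} -> {expected}")
    return report

# Example with a trivial keyword-based stand-in for the real NLU engine:
def stub(utterance):
    return "book_flight" if "ticket" in utterance else "book_cab"

print("\n".join(run_regression(stub)))
```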
Bot and virtual assistant developers have to do none of this themselves. We’re talking about software development platform power (in this case, ours) that has sufficient intelligence (AI) to support all these variations and much more.