NLP Powered Virtual Assistants
for humanlike interactions
Natural language processing is a computer’s ability to understand and process human language. In the realm of virtual assistants, NLP is used to determine a user’s intention, extract information from an utterance, and carry on a conversation with the user in order to execute and complete a task.
Kore.ai’s proprietary natural language processing (NLP) technology detects the intent of users in the given context and extracts entities with very high accuracy by processing user inputs against three different engines.
Utterances
Utterances refer to anything a user says. With virtual assistants, an utterance can consist of multiple sentences that are processed individually, in a logical order, or simultaneously, depending on the user’s overall request.
Intent Recognition
Intents refer to what a user wants to accomplish. Most intents are simple, discrete tasks such as “Find Product,” “Transfer Funds,” or “Book Flight,” typically described by a verb-and-noun combination.
Intent recognition matches a user utterance to the task or question the user intended and initiates a dialog to process the request. The platform determines a user’s intention through several training models.
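As a rough, hypothetical sketch (not the platform’s actual implementation), intent recognition can be pictured as scoring an utterance against each task’s trigger vocabulary and picking the best match; the intent names and keyword lists below are invented for illustration.

```python
# Toy intent recognizer: map an utterance to the hypothetical task whose
# trigger keywords it overlaps with the most.
INTENT_KEYWORDS = {
    "Find Product":   {"find", "search", "show", "product"},
    "Transfer Funds": {"transfer", "send", "funds", "money"},
    "Book Flight":    {"book", "flight", "fly", "ticket"},
}

def recognize_intent(utterance):
    words = set(utterance.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(recognize_intent("I want to book a flight to Boston"))  # Book Flight
```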
Entity Extraction
Entities are anything defining, shaping, or modifying the user's intent and are required to carry out the intent, such as dates, times, and locations.
Entity extraction identifies the elements needed to complete the task. These elements range from simple items like numbers and dates, to complex items like addresses and airport names, to user-defined categories.
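As a simple, hand-rolled illustration (the platform’s extractors are far more capable, handling relative dates, addresses, and user-defined categories), a rule-based pass might pull a few entity types out of an utterance like this; the patterns are assumptions made for the example.

```python
import re

# Toy entity extractor: pull a few simple entity types out of an utterance
# with regular expressions.
PATTERNS = {
    "date":    r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",   # e.g. 12/05/2024
    "number":  r"\b\d+(?:\.\d+)?\b",
    "airport": r"\b[A-Z]{3}\b",                   # e.g. JFK, SFO (very rough)
}

def extract_entities(utterance):
    return {name: re.findall(pattern, utterance) for name, pattern in PATTERNS.items()}

print(extract_entities("Book 2 tickets from JFK to SFO on 12/05/2024"))
# {'date': ['12/05/2024'], 'number': ['2', '12', '05', '2024'], 'airport': ['JFK', 'SFO']}
```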
Multi-Engine NLP Approach
for accurate intent recognition
The platform takes a unique hybrid approach to understand user intent. It uses a machine learning model, a semantic rules-driven model, and a domain taxonomy and ontology-based model. This approach allows the virtual assistants to not only understand a user’s input with a high degree of accuracy, but also to intelligently handle complex human conversations.
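Conceptually, the hybrid approach can be pictured as below: every engine evaluates the utterance independently and contributes candidate intents, which a downstream resolver then reconciles. The class and method names are illustrative, not the platform’s API.

```python
from dataclasses import dataclass

@dataclass
class IntentMatch:
    intent: str        # e.g. "Book Flight"
    score: float       # engine-specific confidence
    definitive: bool   # True when the engine considers the match conclusive

class Engine:
    """Common shape for the ML, FM, and KG engines (illustrative only)."""
    def evaluate(self, utterance: str) -> list[IntentMatch]:
        raise NotImplementedError

def understand(utterance: str, engines: list[Engine]) -> list[IntentMatch]:
    """Collect candidate intents from all engines for later ranking and resolution."""
    candidates: list[IntentMatch] = []
    for engine in engines:
        candidates.extend(engine.evaluate(utterance))
    return candidates
```

The ranking and resolver sketch later in this section consumes these candidate matches.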
Machine Learning Engine
The machine learning (ML) engine uses statistical modeling and deep neural networks to train an intent prediction model from a set of sample sentences for each intent.
The ML model evaluates every training utterance against each task and plots it into one of four quadrants for that task: True Positive (True +ve), True Negative (True -ve), False Positive (False +ve), or False Negative (False -ve).
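A sketch of that evaluation, assuming each training utterance is labelled with the task it belongs to and `predict` is the trained model’s intent prediction function (both are assumptions for the example):

```python
def plot_quadrants(task, training_data, predict):
    """training_data: list of (utterance, labelled_task) pairs; predict: utterance -> predicted task."""
    quadrants = {"True +ve": [], "True -ve": [], "False +ve": [], "False -ve": []}
    for utterance, labelled_task in training_data:
        belongs = labelled_task == task          # does the utterance belong to this task?
        matched = predict(utterance) == task     # did the model match it to this task?
        if belongs and matched:
            quadrants["True +ve"].append(utterance)
        elif belongs and not matched:
            quadrants["False -ve"].append(utterance)
        elif matched:
            quadrants["False +ve"].append(utterance)
        else:
            quadrants["True -ve"].append(utterance)
    return quadrants
```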
Features:
- Uses a deep neural network-based text classification algorithm with features including n-grams, entity marking, lemmatization, stop word exclusion, and synonyms (see the sketch after this list)
- Uses conditional random fields for named entity recognition (NER) and extraction, with an option to use deep neural network-based NER instead
- Trained using sample utterances for each intent and its entities
- Supports supervised learning to monitor bot performance and manually tune it where required
- Can be visualized and fine-tuned to get the best outcome
- Allows auto-training of utterances from users' conversations
- Customizable machine learning pipeline
- Supports unsupervised ML to build analytics on the usage of intents, flows, dropouts, etc.
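A minimal sketch of such a text classifier using scikit-learn, covering only the n-gram and stop word exclusion features; the real engine uses deep neural networks and a configurable pipeline, and the training utterances below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical sample utterances labelled with their intents.
utterances = [
    "show me running shoes", "find a blue jacket",             # Find Product
    "send 50 dollars to savings", "move money to checking",    # Transfer Funds
    "book a flight to Paris", "I need a plane ticket",         # Book Flight
]
intents = ["Find Product", "Find Product",
           "Transfer Funds", "Transfer Funds",
           "Book Flight", "Book Flight"]

# Unigram + bigram features with English stop words excluded, fed into a
# linear classifier as a stand-in for the deep neural network text classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(utterances, intents)

print(model.predict(["find me a winter jacket"]))  # expected: ['Find Product']
```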
Fundamental Meaning Engine
The fundamental meaning (FM) model considers parts of speech and inbuilt concepts to identify each word in the user utterance and relate it with the intents the virtual assistant can perform. It creates a form of the input with the canonical version of each word in the user utterance.
Features:
- Deterministic model that uses semantic rules and language context to determine the intent match
- Can be trained using synonyms, built-in and custom concepts, and patterns
- Scores using various semantic rules (see the sketch after this list), including:
  - Grammar
  - Parts of speech
  - Word match, word coverage, word position
  - Sentence structure
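A toy sketch of that style of scoring, assuming hand-written canonical forms, intent vocabularies, and equal weighting; all of these are invented for the example, and the engine’s actual rules and weights are internal to the platform.

```python
# Map word variants to a canonical form, then score an intent by word match,
# word coverage, and word position within the canonicalized utterance.
CANONICAL = {"sending": "send", "sent": "send", "transferred": "transfer",
             "cash": "money", "funds": "money"}

INTENT_WORDS = {"Transfer Funds": ["transfer", "money"],
                "Book Flight":    ["book", "flight"]}

def canonicalize(utterance):
    return [CANONICAL.get(word, word) for word in utterance.lower().split()]

def fm_score(intent, utterance):
    words = canonicalize(utterance)
    matches = [w for w in INTENT_WORDS[intent] if w in words]
    if not matches:
        return 0.0
    word_match = len(matches) / len(INTENT_WORDS[intent])  # fraction of the intent's words found
    coverage   = len(matches) / len(words)                 # fraction of the utterance covered
    position   = 1.0 / (1 + words.index(matches[0]))       # earlier matches weigh more
    return word_match + coverage + position

print(fm_score("Transfer Funds", "I transferred some cash yesterday"))
```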
Knowledge Graph Engine
The knowledge graph (KG) model enables you to create a hierarchical structure of key domain terms and associate them with context-specific questions and their alternatives, synonyms, and machine learning-enabled classes.
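A minimal sketch of such a hierarchy as plain data, with a traversal that surfaces FAQs whose domain terms (or synonyms) appear in the utterance; the node names and questions are hypothetical.

```python
# Hypothetical knowledge graph: domain terms arranged hierarchically, each node
# carrying context-specific questions with alternative phrasings and synonyms.
knowledge_graph = {
    "term": "Banking", "synonyms": ["bank"], "faqs": [], "children": [
        {"term": "Accounts", "synonyms": ["account"],
         "faqs": [{"question": "How do I open an account?",
                   "alternatives": ["What do I need to open an account?"]}],
         "children": [
             {"term": "Savings", "synonyms": ["saving"],
              "faqs": [{"question": "What is the savings interest rate?",
                        "alternatives": ["How much interest does savings pay?"]}],
              "children": []},
         ]},
    ],
}

def find_faqs(node, utterance_words, path=()):
    """Yield (path, question) for nodes whose term or synonyms occur in the utterance."""
    terms = {node["term"].lower(), *node["synonyms"]}
    if terms & utterance_words:
        path = path + (node["term"],)
        for faq in node["faqs"]:
            yield path, faq["question"]
    for child in node["children"]:
        yield from find_faqs(child, utterance_words, path)

words = set("what is the interest rate on my savings account".split())
print(list(find_faqs(knowledge_graph, words)))
```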
Features:
- Turns static frequently asked questions (FAQ) text into an intelligent, personalized conversational experience
- Uses domain terms and relationships
- Requires less training
- Supports word importance and produces fewer false positives for terms marked as mandatory
- Can enable ontology-weighted features whenever the ML engine is unsure
- Automatically triggers a conversational dialog to resolve the appropriate answer
Ranking and Resolver: To Determine the Winning Intent
The platform’s ranking and resolver engine determines the winning intent for a user utterance from the candidate matches returned by the three engines.
Features:
- Determines the best possible intent match based on the scores from all the models (see the sketch after this list)
- Ranks definitive and possible matches from each engine against each other
- Triggers a disambiguation dialog when no conclusive match is found
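A sketch of that resolution step, reusing the illustrative `IntentMatch` shape from the multi-engine sketch above; the threshold and tie-breaking rule are invented for the example.

```python
def resolve(candidates, margin=0.05):
    """candidates: list of IntentMatch objects produced by the three engines."""
    if not candidates:
        return None, []                          # no intent found

    # Definitive matches from any engine outrank merely possible matches.
    definitive = [c for c in candidates if c.definitive]
    pool = definitive or candidates
    ranked = sorted(pool, key=lambda c: c.score, reverse=True)

    # If the top two candidates are different intents with scores too close
    # to call, trigger a disambiguation dialog instead of guessing.
    if (len(ranked) > 1 and ranked[0].intent != ranked[1].intent
            and ranked[0].score - ranked[1].score < margin):
        return None, [ranked[0].intent, ranked[1].intent]

    return ranked[0].intent, []                  # winning intent
```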
Advantages of Using Kore.ai’s Multi-Engine NLP
The individual engines have many specialized capabilities but also have their own limitations. Kore.ai’s proprietary NLP technology overcomes the weaknesses of any single NLP model. The three engines complement each other with different perspectives, and their results are correlated and resolved to identify intents accurately. This method is unique to Kore.ai, while most other solutions depend on a single engine.
| Capability | Only ML Engine | Only FM Engine | Only KG Engine | Multiple Engine Approach (Kore.ai) |
|---|---|---|---|---|
| Learning based on sample user utterances | ✓ | | | ✓ |
| Minimal training using task names and patterns | | ✓ | | ✓ |
| Use of synonyms | | ✓ | ✓ | ✓ |
| Use of canonical forms | | ✓ | | ✓ |
| Use of grammar and parts of speech | | ✓ | | ✓ |
| Decisions based on domain term priority and relationships | | | ✓ | ✓ |