Articles

Since the seminal paper “Attention Is All You Need” by Vaswani et al., transformer models have become by far the state of the art in NLP. With applications ranging from named-entity recognition (NER) and text classification to question answering and text generation, the possibilities of this technology are virtually limitless.

More specifically, BERT, which stands for Bidirectional Encoder Representations from Transformers, leverages the transformer architecture in a novel way. During pre-training, BERT looks at both sides of a randomly masked word in a sentence in order to predict it. In addition to predicting the masked token, BERT is trained on next-sentence prediction: a classification token [CLS] is added at the beginning of the first sentence, a separator token [SEP] is placed between the two sentences, and the model predicts whether the second sentence actually follows the first.
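As a rough illustration of the masked-token side of this, the sketch below uses a pre-trained BERT model via the Hugging Face `transformers` library to fill in a masked word. The model name and example sentence are assumptions for demonstration purposes, not taken from the article.

```python
# Minimal sketch: masked-token prediction with a pre-trained BERT model.
# Assumes the Hugging Face `transformers` and `torch` packages are installed;
# the model name and sentence are illustrative choices.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring token for it.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # expected to print something like "paris"
```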

Source of the article on DZONE

From a Natural Language Processing perspective, a chatbot normally consists of many parts: small talk, QnA (question and answer, including intent prediction and entity extraction), context handling (user, session, etc.), question completion, personalization, sentiment analysis, and so on. Not every chatbot needs all of the above-mentioned capabilities. You can create a chatbot with just small talk and QnA, or assemble small talk, QnA and personalization and handle most of the user's queries. But every chatbot needs QnA.

In this article, we focus only on the Q part of QnA. This is the most complex part of any chatbot framework and requires expertise in Machine Learning, Natural Language Processing and, in some cases, Deep Learning. Intent Prediction and Entity Extraction are the two major components of the Q part; they help the system understand the user query in terms of the answer repository. The answer repository is the domain for which we have built the chatbot, and it can be as simple as a set of FAQs or an Excel file, or as complex as a database, an SAP system, or a knowledge base.
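To make the intent-prediction component more concrete, here is a minimal sketch using a TF-IDF plus logistic-regression pipeline from scikit-learn. The intent labels and training utterances are hypothetical examples invented for this sketch; a production chatbot would typically use a far larger training set and may use deep-learning models instead.

```python
# Minimal sketch of intent prediction for the Q part of a chatbot.
# The intents and utterances below are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    "where is my order",            # track_order
    "track my package",             # track_order
    "I want to cancel my order",    # cancel_order
    "please cancel the purchase",   # cancel_order
    "what are your opening hours",  # store_hours
    "when do you open",             # store_hours
]
intents = [
    "track_order", "track_order",
    "cancel_order", "cancel_order",
    "store_hours", "store_hours",
]

# Turn each utterance into TF-IDF features, then train a linear classifier.
intent_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
intent_model.fit(training_utterances, intents)

print(intent_model.predict(["can you tell me where my package is"]))  # -> ['track_order']
```

Entity extraction would then pull out the specific values the intent needs (order number, product name, dates, and so on), typically with a sequence-labeling model or a library such as spaCy.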


Source of the article on DZONE (AI)