The Workshops of the Thirty-Second AAAI Conference on Artificial Intelligence

Automatic Extraction of Domain Specific Latent Beliefs in Customer Complaints to Help Tailor Chatbots

Amit Sangroya, C. Anantaram, Pratik Saini, Mrinal Rawat
TCS Innovation Labs, Tata Consultancy Services Limited, ASF Insignia, Gwal Pahari, Gurgaon, India
(amit.sangroya, c.anantaram, pratik.saini, rawat.mrinal)@tcs.com

Abstract

Understanding a customer's personal opinion is extremely important to initiate and maintain a meaningful conversation. In this paper, we propose an approach to extract the latent emotional beliefs of customers and use them to tailor a chatbot's conversation. We present a machine-learning-based mechanism to process customer complaints and extract sentiments such as whether the customer is sad, happy, or upset. Further, we also train a model that extracts finer-grained sentiments, e.g. that the customer is irritated or harassed, in the context of a particular complaint scenario. This information helps tailor the dialog to the customer's emotional state and hence improves the overall effectiveness of the dialog system.

Introduction

During a dialog with a customer to address his/her complaint, the chatbot may pose questions or observations based on its underlying model.
Sometimes the questions or observations posed may not be relevant given the nature of the complaint and the current cognitive beliefs that the customer holds. For example, if a chatbot fails to understand the customer's emotional situation and responds mechanically to an irritated customer, then the chatbot may fail to achieve its primary objective, e.g. to address the customer's problem in a manner that leaves the customer feeling positive (satisfied) at the end of the conversation.

Traditional machine learning approaches train a system with an extremely large dialog corpus that covers a variety of scenarios. Another approach is to build a system with a complex set of hand-crafted rules that may address some specific instances. Both approaches may be impractical in many real-world domains. In this paper, we propose a methodology that uses a combination of machine learning and domain-specific knowledge extraction to understand the severity of a customer's complaint.
This helps to understand the customer's latent emotional beliefs while giving the complaint. Our model then evaluates the beliefs to tailor the dialog and make it consistent with the set of beliefs of the customer. This process then helps drive the conversation in a meaningful way.

We make the following key contributions in this paper:

• First, we present a novel approach to evaluate a customer's personal opinion in the context of a particular domain. Here, we measure the severity of the complaint and accordingly update the customer's opinion.

• Secondly, we propose an algorithm that makes use of RNNs to classify fine-grained opinions using a combination of domain knowledge and information extraction.

• Lastly, we use the fine-grained, domain-oriented opinion information to tailor the dialog of a conversational system.

Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Domain Specific Latent Belief Extraction

Customer complaints expressed in natural language form can be quite complex. For example, in an automobile domain, an irritated car customer might describe a specific problem as: "my car just died on me. No warning no check engine. Car just out of extended warranty all maintenance up to date. Had issues with charcoal canister, and shift lever. engine croaked without overheating, no warning, no check engine."

Therefore, we first take a complaint and try to categorize it into a possible category C, such as "engine failure" or "transmission". To do this, we build our first machine learning model, ML1model, which gives output O1 and helps us to focus on a specific part of the domain; e.g., for the "engine failure" category, the chatbot would primarily focus on "engine"-related issues. In order to train ML1model, we take labeled data with a set of customer complaints and their categories.

Thereafter, we extract information and opinions through a combination of tools, which leads to output O2. Examples of O2 are opinions like {StrongNegative, Negative, Positive} and information such as "engine_dead". Now, applying domain rules over the extracted information and opinions, we get the next output O3. For example: {if category C = engine failure and number of negatives > X, then customer = irritated}. Finally, using O3, O2 and ML1model, we machine-learn a model for latent belief estimation, which we call ML2model.

Figure 1 illustrates the proposed methodology for extracting a customer's opinions. Its inputs include the customer complaints and a domain ontology. We assume the automobile complaints domain and that we have already categorized the complaints into categories like Transmission, Gear, Windows-Windshield, Engine-failure, etc. (Anantaram and Sangroya 2017).

Figure 1: Domain Specific Latent Belief Extraction Process

Figure 2: An example of Car Ontology

Step 1: Extracting Customer's Opinions

We start by finding positive or negative opinions in the customer complaints. Most of the complaints have a large number of negative opinions. However, sometimes customers include positive opinions as well. Therefore, it becomes challenging for an automatic opinion extraction system to judgmentally extract the actual overall opinion of the user. For this purpose, one can also use available tools such as OpinionFinder (OF 2005).
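A minimal sketch of this opinion-extraction step, assuming a small hand-made polarity lexicon; the lexicon entries and label names below are illustrative assumptions for this sketch, not the paper's actual tools or resources:

```python
# Illustrative sketch of Step 1: tag coarse opinions in a complaint
# using a tiny polarity lexicon. A real system would use a tool such
# as OpinionFinder; these word lists are invented for illustration.
STRONG_NEGATIVE = {"harassed", "terrible", "unsafe"}
NEGATIVE = {"died", "failure", "broken", "croaked", "issues"}
POSITIVE = {"great", "helpful", "satisfied"}

def extract_opinions(complaint):
    """Return the list of coarse opinion labels found in the text."""
    opinions = []
    for token in complaint.lower().split():
        word = token.strip(".,!?")
        if word in STRONG_NEGATIVE:
            opinions.append("StrongNegative")
        elif word in NEGATIVE:
            opinions.append("Negative")
        elif word in POSITIVE:
            opinions.append("Positive")
    return opinions

print(extract_opinions("My car just died on me. Had issues with the shift lever."))
# → ['Negative', 'Negative']
```

As the paper notes, most complaints yield predominantly negative labels, which is why the later weighting steps are needed to recover the overall opinion.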
Step 2: Updating Opinion Weights

Our mechanism evaluates the context in which opinions have been expressed and adjusts the opinion weights accordingly. Initially, all the complaints are assigned equal weights. Then, following a three-step approach, our system updates the opinion weights: 1) after categorizing the complaints; 2) using knowledge from the domain ontology; and 3) using information extraction. This is explained as follows.

Step 2a: Using Complaint Category. If a complaint belongs to a more critical category such as engine failure or transmission, it is assigned a higher weight compared to a category such as body paint. Intuitively, a customer with a more critical problem will feel more harassed than one with a smaller problem. The complaint categorization is also done with the help of machine learning in an automatic fashion.

Step 2b: Using Domain Ontology. We make use of knowledge mining derived from the domain ontology to update the opinion weights. As shown in Figure 2, an ontology consists of a large knowledge graph expressing information about a domain, for example the terms and their relationships in a particular context. Using an automated nearest-neighborhood approach, we extract the severity of a problem in a particular context. For example, nodes that are closer to sensitive nodes are also considered sensitive, which leads to a positive update of the opinion weight, and the opposite in the other case. Our domain ontology consists of a large RDF graph, and we use optimized techniques for faster nearest-neighborhood-based semantic analysis.

Step 2c: Using Information Extraction and Rules. Another component of our system is based upon information extraction using latent beliefs analysis. In some specific situations, customers may express an opinion for a particular product or service. To handle such situations, we use information extraction techniques and rules to understand the context in which a particular opinion is expressed. For example, a customer may say he/she visited the garage three times vs. he/she had an engine failure three times.

Once we have the fine-grained opinions, we use them to tailor the chatbot, as demonstrated in the next section.
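The three weight updates above can be sketched as simple additive rules. Everything concrete in this sketch is an assumption: the critical-category set, the increment sizes, and the way ontology severity and repeated events are supplied are illustrative stand-ins for the paper's unstated values:

```python
# Sketch of the three-step opinion-weight update (Steps 2a-2c).
# All numeric increments and the category set are illustrative
# assumptions, not values given in the paper.
CRITICAL = {"engine failure", "transmission"}

def update_weight(category, ontology_severity, repeat_events, w=1.0):
    """Start from an equal initial weight and apply the three updates."""
    if category in CRITICAL:            # Step 2a: critical complaint category
        w += 1.0
    w += ontology_severity              # Step 2b: nearness to sensitive nodes
    if repeat_events > 1:               # Step 2c: e.g. engine failed three times
        w += 0.5 * (repeat_events - 1)
    return w

print(update_weight("engine failure", ontology_severity=0.8, repeat_events=3))
# → 3.8
```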
Learning the Model for Latent Belief Extraction

We now train an LSTM (Long Short-Term Memory) network to build a model that can automatically categorize the complaints based upon the information described in the previous section (see Algorithm 1). As in many other studies of LSTMs on text, words are first converted to low-dimensional dense word vectors via a word embedding layer. The first layer is therefore the embedding layer, which uses length-32 vectors to represent each word. The next layer is an LSTM layer with 100 units. Finally, we use a dense output layer with 5 neurons (5 classes/labels) and a softmax activation function to make the predictions. We used categorical cross-entropy as the loss function (in Keras) along with the ADAM optimizer. For regularization, we employ dropout to prevent co-adaptation. We run the experiment for 20 epochs with a batch size of 64.

In our experiments we consider complaints about car faults from http://www.carcomplaints.com, across six categories: Transmission Problems, Gear Problems, Windows-Windshield Problems, Engine Failure Problems, Wheels-Hubs Problems and AC-Heater Problems. The total number of complaints after data processing was 13,797 (Figure 3). The clean-up process involves converting text to lowercase, removing stopwords and punctuation, and tokenizing.

Algorithm 1 Algorithm for Latent Belief Extraction
Require: Complaints dataset T
Ensure: Complaints and their opinion categories
for all review r in T do
    Remove stopwords and punctuation
    Convert to lowercase
    Tokenize
    Mark special named entities
    for all sentence m in r do
        Extract customer opinions (neg, strongneg, etc.)
        Using the complaint category (engine, transmission, accessories, etc.), update opinion weight w
        if Category = Critical then
            w++
        end if
        Using the domain ontology and nearest-neighborhood approach, update w
        Using information extraction, update w
    end for
end for
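The clean-up at the start of Algorithm 1 might look like the following sketch; the stopword list is an illustrative assumption, as the paper does not specify the exact list it uses:

```python
# Sketch of the Algorithm 1 clean-up: lowercase, strip punctuation,
# drop stopwords, tokenize. The STOPWORDS set is a small illustrative
# assumption, not the paper's actual list.
import string

STOPWORDS = {"a", "an", "the", "on", "of", "my", "me", "just"}

def preprocess(complaint):
    """Return the cleaned token list for one complaint."""
    text = complaint.lower().translate(str.maketrans("", "", string.punctuation))
    return [t for t in text.split() if t not in STOPWORDS]

print(preprocess("My car just died on me. No warning, no check engine."))
# → ['car', 'died', 'no', 'warning', 'no', 'check', 'engine']
```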
Figure 4: Training and Test Accuracy

From Figure 4, we can observe that with fewer than 20 epochs, the accuracy reaches close to 90%.

Example: Tailoring the Chatbot

We parse a complaint description through dependency parsers (such as Stanford CoreNLP, GATE, MITIE, etc.) and extract triples from the description by focusing on the dependencies identified among nouns and verbs. For example, for the description "my car just died on me", triples such as (my-car, just-died-on, me) are extracted.
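As a rough stand-in for this dependency-based extraction (a real system would rely on a parser such as Stanford CoreNLP), the following heuristic merely shows the shape of the extracted (subject, relation, object) triples; the cue-word sets are hard-coded assumptions that cover only this one example:

```python
# Naive stand-in for dependency-based triple extraction. A real system
# would use a dependency parser; here hard-coded cue words (illustrative
# assumptions) locate the verb group for this single example.
def extract_triple(description):
    words = description.lower().rstrip(".").split()
    adverbs, preps, verbs = {"just"}, {"on"}, {"died"}
    i = next(k for k, w in enumerate(words) if w in verbs)
    rel_start = i - 1 if i > 0 and words[i - 1] in adverbs else i
    rel_end = i + 1 if i + 1 < len(words) and words[i + 1] in preps else i
    subject = "-".join(words[:rel_start])
    relation = "-".join(words[rel_start:rel_end + 1])
    obj = "-".join(words[rel_end + 1:])
    return (subject, relation, obj)

print(extract_triple("my car just died on me"))
# → ('my-car', 'just-died-on', 'me')
```

The hyphen-joined triple matches the (my-car, just-died-on, me) form used in the paper; the heuristic itself is not the authors' method.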
