Optimized Web-Crawling of Conversational Data from Social Media and Context-Based Filtering


Annapurna P Patil, Department of Computer Science and Engineering, Ramaiah Institute of Technology, Bangalore, India, [email protected]
Rajarajeswari S, Department of Computer Science and Engineering, Ramaiah Institute of Technology, Bangalore, India, [email protected]
Gaurav Karkal, Department of Computer Science and Engineering, Ramaiah Institute of Technology, Bangalore, India, [email protected]
Keerthana Purushotham, Department of Computer Science and Engineering, Ramaiah Institute of Technology, Bangalore, India, [email protected]
Jugal Wadhwa, Department of Computer Science and Engineering, Ramaiah Institute of Technology, Bangalore, India, [email protected]
K Dhanush Reddy, Department of Computer Science and Engineering, Ramaiah Institute of Technology, Bangalore, India, [email protected]
Meer Sawood, Department of Computer Science and Engineering, Ramaiah Institute of Technology, Bangalore, India, [email protected]

Abstract

Building chatbots requires a large amount of conversational data. In this paper, a web crawler is designed to fetch multi-turn dialogues from websites such as Twitter, YouTube and Reddit in the form of a JavaScript Object Notation (JSON) file. Tools such as the Twitter Application Programming Interface (API), the LXML library, and the JSON library are used to crawl Twitter, YouTube and Reddit and collect conversational chat data. The data obtained in raw form cannot be used directly, as it holds only the text and metadata, such as author name and time, that describe the chat data being scraped. The collected data has to be formatted for a good use case, and the JSON library of Python allows us to format the data easily. The scraped dialogues are further filtered based on the context of a search keyword, without introducing bias and with flexible strictness of classification.

1 Introduction

Real-world data remains a necessary part of training system models. The digital streams that individuals produce are quite useful in the data analysis domain, in fields like natural language processing and machine learning.
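The JSON formatting step mentioned in the abstract can be sketched with Python's standard json library; the dialogue structure below is a hypothetical example, not the paper's exact schema.

```python
import json

# Hypothetical multi-turn dialogue as scraped (raw Python dicts).
dialogue = {
    "source": "twitter",
    "turns": [
        {"author": "user_a", "time": "2020-11-02T10:15:00Z",
         "text": "Has anyone tried the new API?"},
        {"author": "user_b", "time": "2020-11-02T10:17:30Z",
         "text": "Yes, the rate limits are stricter now."},
    ],
}

# The json library formats the raw data into a readable JSON file.
with open("dialogues.json", "w", encoding="utf-8") as f:
    json.dump(dialogue, f, indent=2, ensure_ascii=False)

# Round-trip check: the file parses back to the same structure.
with open("dialogues.json", encoding="utf-8") as f:
    assert json.load(f) == dialogue
```

`indent=2` gives human-readable output, and `ensure_ascii=False` keeps non-English text in the dialogues readable rather than escaped.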
Social networking applications like Twitter, YouTube and Reddit contain a large volume of data that is quite useful for various algorithms. Naturally, the need to make information easily accessible to all leads to deploying a conversational agent. In order to build a chat model, a huge volume of conversational text data is required.

Twitter is a microblogging service that allows individuals to post short messages called tweets that appear on timelines. Tweets were limited to 140 characters, a limit later expanded to 280 characters and prone to change again in the future. Tweets carry two kinds of metadata: entities and places. Tweet entities are hashtags, user mentions, and images, while places are real-world geographical locations. Metadata and short prose adding up to fewer than 280 characters can link to webpages and Twitter users. Twitter timelines are categorized into the home timeline and the user timeline. Timelines are collections of tweets in chronological order.

Proceedings of the 17th International Conference on Natural Language Processing: Workshop, pages 33–39, Patna, India, December 18–21, 2020. ©2020 NLP Association of India (NLPAI)
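The two kinds of tweet metadata described above, entities and places, can be read out of a tweet with plain dictionary access. The tweet below is a trimmed, hypothetical example following the entity layout of the Twitter API's tweet object.

```python
# Trimmed, hypothetical tweet in the Twitter API's entity layout.
tweet = {
    "text": "Great talk on chatbots at #NLProc, thanks @nlpai!",
    "entities": {
        "hashtags": [{"text": "NLProc"}],
        "user_mentions": [{"screen_name": "nlpai"}],
        "media": [],
    },
    "place": {"full_name": "Patna, India"},  # None when not geo-tagged
}

def tweet_metadata(tweet):
    """Split a tweet's metadata into entities and place, as described above."""
    ents = tweet.get("entities", {})
    place = tweet.get("place")
    return {
        "hashtags": [h["text"] for h in ents.get("hashtags", [])],
        "mentions": [m["screen_name"] for m in ents.get("user_mentions", [])],
        "place": place["full_name"] if place else None,
    }

print(tweet_metadata(tweet))
# {'hashtags': ['NLProc'], 'mentions': ['nlpai'], 'place': 'Patna, India'}
```

Using `.get()` with defaults keeps the helper robust when a tweet has no entities or is not geo-tagged.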
The Twitter API uses the Representational State Transfer (REST) API to crawl and collect a random set of sample public tweets. The API allows users to explore and search for trending topics, tweets, hashtags, and geographical locations.

YouTube, a video-sharing website, allows users to view, upload, rate, and report on videos. It contains a wide variety of videos such as TV show clips, music videos, and documentaries. It also provides a platform for users to communicate and describe their thoughts about what they watch through comments.

Reddit is a social news platform where registered users may submit links, images, and text posts, and also upvote or downvote the posts of other users. Posts are organized into boards created by users, called subreddits. It is also a platform for web content rating and discussion. Reddit stores all of its content in JSON format, which can be viewed in the browser by extending a Reddit link with the extension '.json'.

Real-time datasets are needed to build a model that generates accurate output. As the available datasets are insufficient and do not contain realistic examples, there is a need to build a crawler which would scrape conversational data. Building crawlers for each website would allow the collection of conversational data and thus the creation of conversational datasets.

2 Literature Survey

A focused crawler is designed to crawl and retrieve specific topics of relevance. The idea of the focused crawler is to selectively look for pages that are relevant while traversing and crawling the least number of irrelevant pages on the web.

Context Focused Crawlers (CFC) use a limited number of links from a single queried document to obtain all pages relevant to the document, and the obtained documents are relevant concerning the context. This data is then used to train a classifier that detects the context of documents and allows them to be classified into categories based on the link distance from a query to a target. Features of a crawler are:

• Politeness
• Speed
• Duplicate content

Each website comes with a file known as robots.txt. It is a standardized practice through which robots, or bots, communicate with the website. This standard provides the necessary instructions to the crawler about the status of the website and whether it is allowed to scrape data off the website. It informs crawlers whether the website can be crawled partially or fully; if the website cannot be crawled as per the robots.txt, the server blocks any such requests, which can even lead to the blocking of IPs.

Websites use robots.txt to prevent crawlers from scraping large amounts of data from the site, and any request for a large amount of data is blocked almost immediately. To prevent the crawler from being blocked, a list of publicly available proxy servers, as described in Achsan (2014), is used to scale and crawl the website. Twitter is a popular social media platform used for communication. Crawling such a website can be useful for gaining conversational data, and such information can be targeted based on topic; one such example is perception of the Internet of Things, as shown in Bian J (2017), or assessing the adequacy of gender identification terms on intake forms, as described in Hicks A (2015).

3 Proposed System

3.1 Twitter Crawler

The Twitter API provides tweets encoded in JSON format. The JSON format contains key-value pairs as the attributes along with their values. Twitter handles both users and tweets as objects. The user object contains attributes including the name, geolocation, and followers. The tweet object contains the author, message, id, timestamp, geolocation, etc. The JSON file can also contain additional information on the media or links present in the tweets, including the full Uniform Resource Locator (URL) or the link's title or description.
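Reading those key-value attributes with Python's json library can be sketched as follows; the tweet below is a trimmed, hypothetical example using the field names described above.

```python
import json

# Trimmed, hypothetical tweet object using the field names described above.
raw = """
{
  "id_str": "1331234567890",
  "created_at": "Wed Nov 18 09:00:00 +0000 2020",
  "text": "Conversational data is hard to collect.",
  "user": {"name": "User A", "screen_name": "user_a", "followers_count": 42},
  "coordinates": null
}
"""

tweet = json.loads(raw)  # JSON text -> Python dict of key-value pairs

# The tweet object carries the message, id, and timestamp; the author
# lives in the nested user object.
record = {
    "author": tweet["user"]["screen_name"],
    "message": tweet["text"],
    "id": tweet["id_str"],
    "timestamp": tweet["created_at"],
}
print(record["author"])  # user_a
```

Flattening each tweet into a small record like this is what makes the later JSON output of the crawler uniform across users and tweets.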
Each tweet object contains various child objects. It contains a user object describing the author, a place object if the tweet is geo-tagged, and an entities object, which holds arrays of URLs, hashtags, etc. Extended tweets are tweets whose text fields exceed 140 characters; they also contain the complete list of entities like hashtags, media, and links. They are identified by the Boolean truncated field being equal to true, signifying that the extended tweet section should be parsed instead of the regular section of the tweet object.

The retweet object contains both the retweet itself and the original tweet object, the latter held in the retweeted status object. Retweets contain no new data or message, and the geolocation and place are always null. A retweet of another retweet will still point to the original tweet.

Quote tweets contain new messages along with retweeting the original tweet. A quote tweet can also contain a new set of media, links, or hashtags. It contains the tweet being quoted in the quoted status section, as well as the user object of the person quoting the tweet.

The Twitter REST API method gives access to core Twitter data. This includes update timelines, status data, and user information. The API methods allow interaction with Twitter Search and trends data.

The Twitter streaming API obtains a set of public tweets based upon search phrases, user IDs, as well as location. It is equipped to handle GET and POST requests as well. However, there is a limitation on the number of parameters specified by GET, to avoid long URLs. Filters used with this API are:

• follow – user IDs of whom to fetch the statuses

Figure 1: A tweet object, containing all attributes contained within a tweet.

The crawled tweets are outputted to a JSON file, as seen in Figure 1.

2. Crawling for tweets while searching and monitoring a list of keywords

For searching and monitoring a list of keywords, search/tweets is used. The streaming API's statuses/filter is not used, as it does not provide previous tweets at all. search/tweets is used to crawl and provide tweets from at most a week back. This function continues to endlessly crawl for tweets matching the query. Another advantage of using this method is that it does not limit the number of keywords it can track, unlike statuses/filter, which requires separate or new instances to search for different keywords.
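The endless search/tweets polling described above hinges on passing the id of the newest tweet seen back as since_id, so that each request only returns newer tweets. A minimal sketch of that bookkeeping follows; the fetch call itself is stubbed out with a fake, since a real request would need API credentials.

```python
from itertools import islice

def next_params(query, last_id=None, count=100):
    """Build the parameter set for the next search/tweets request.

    since_id restricts results to tweets newer than the last one seen,
    so repeated calls walk forward in time without re-fetching.
    """
    params = {"q": query, "count": count, "result_type": "recent"}
    if last_id is not None:
        params["since_id"] = last_id
    return params

def crawl_forever(query, fetch):
    """Endlessly poll search/tweets via the supplied fetch(params) callable."""
    last_id = None
    while True:
        tweets = fetch(next_params(query, last_id))
        if tweets:
            last_id = max(t["id"] for t in tweets)
        yield from tweets

# Stub standing in for the authenticated API call (two fake batches).
batches = [[{"id": 3}, {"id": 5}], [{"id": 8}]]
def fake_fetch(params):
    return batches.pop(0) if batches else []

# Take the first three tweets from the endless crawl.
got = list(islice(crawl_forever("chatbot", fake_fetch), 3))
print([t["id"] for t in got])  # [3, 5, 8]
```

Because the crawl is a generator, the same loop serves one keyword or many: the query string can hold any number of OR-joined keywords, which is the advantage over statuses/filter noted above. A production version would also sleep between requests to respect rate limits.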
