
An Introduction to Information Retrieval
Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze
Cambridge University Press, Cambridge, England

Draft of February 17, 2007. Preliminary draft (c) 2007 Cambridge University Press.
DRAFT! DO NOT DISTRIBUTE WITHOUT PRIOR PERMISSION.
Website: http://www.informationretrieval.org/
Comments, corrections, and other feedback most welcome at: [email protected]

Brief Contents

1  Information retrieval using the Boolean model
2  The dictionary and postings lists
3  Tolerant retrieval
4  Index construction
5  Index compression
6  Scoring and term weighting
7  Vector space retrieval
8  Evaluation in information retrieval
9  Relevance feedback and query expansion
10 XML retrieval
11 Probabilistic information retrieval
12 Language models for information retrieval
13 Text classification and Naive Bayes
14 Vector space classification
15 Support vector machines and kernel functions
16 Flat clustering
17 Hierarchical clustering
18 Dimensionality reduction and Latent Semantic Indexing
19 Web search basics
20 Web crawling and indexes
21 Link analysis

Contents

List of Tables
List of Figures
Table of Notations

1  Information retrieval using the Boolean model
   1.1  An example information retrieval problem
   1.2  A first take at building an inverted index
   1.3  Processing Boolean queries
   1.4  Boolean querying, extended Boolean querying, and ranked retrieval
   1.5  References and further reading
   1.6  Exercises

2  The dictionary and postings lists
   2.1  Document delineation and character sequence decoding
        2.1.1  Obtaining the character sequence in a document
        2.1.2  Choosing a document unit
   2.2  Determining dictionary terms
        2.2.1  Tokenization
        2.2.2  Dropping common terms: stop words
        2.2.3  Normalization (equivalence classing of terms)
        2.2.4  Stemming and lemmatization
   2.3  Postings lists, revisited
        2.3.1  Faster postings merges: Skip pointers
        2.3.2  Phrase queries
   2.4  References and further reading
   2.5  Exercises

3  Tolerant retrieval
   3.1  Wildcard queries
        3.1.1  General wildcard queries
        3.1.2  k-gram indexes
   3.2  Spelling correction
        3.2.1  Implementing spelling correction
        3.2.2  Forms of spell correction
        3.2.3  Edit distance
        3.2.4  k-gram indexes
        3.2.5  Context sensitive spelling correction
   3.3  Phonetic correction
   3.4  References and further reading

4  Index construction
   4.1  Construction of large indexes
   4.2  Distributed indexing
   4.3  Dynamic indexing
   4.4  Other types of indexes
   4.5  References and further reading
   4.6  Exercises

5  Index compression
   5.1  Statistical properties of terms in information retrieval
   5.2  Dictionary compression
        5.2.1  Dictionary-as-a-string
        5.2.2  Blocked storage
   5.3  Postings file compression
        5.3.1  Variable byte codes
        5.3.2  γ codes
   5.4  References and further reading
   5.5  Exercises

6  Scoring and term weighting
   6.1  Parametric and zone indexes
        6.1.1  Weighted zone scoring
   6.2  Term frequency and weighting
        6.2.1  Inverse document frequency
        6.2.2  tf-idf weighting
   6.3  Variants in weighting functions
        6.3.1  Sublinear tf scaling
        6.3.2  Maximum tf normalization
        6.3.3  The effect of document length
        6.3.4  Learning weight functions
        6.3.5  Query-term proximity

7  Vector space retrieval
   7.1  Documents as vectors
        7.1.1  Inner products
        7.1.2  Queries as vectors
        7.1.3  Pivoted normalized document length
   7.2  Heuristics for efficient scoring and ranking
        7.2.1  Inexact top K document retrieval
   7.3  Interaction between vector space and other retrieval methods
        7.3.1  Query parsing and composite scoring
   7.4  References and further reading

8  Evaluation in information retrieval
   8.1  Evaluating information retrieval systems and search engines
        8.1.1  Standard benchmarks for relevance
        8.1.2  Measures of retrieval performance
   8.2  Evaluation of ranked retrieval results
   8.3  From documents to test collections
   8.4  A broader perspective: System quality and user utility
        8.4.1  System issues
        8.4.2  User utility
        8.4.3  Document relevance: critiques and justifications of the concept
   8.5  Results snippets
   8.6  Conclusion
   8.7  References and further reading

9  Relevance feedback and query expansion
   9.1  Relevance feedback and pseudo-relevance feedback
        9.1.1  The Rocchio algorithm
        9.1.2  Probabilistic relevance feedback
        9.1.3  When does relevance feedback work?
        9.1.4  Relevance feedback on the web
        9.1.5  Evaluation of relevance feedback strategies
        9.1.6  Pseudo-relevance feedback
        9.1.7  Indirect relevance feedback
        9.1.8  Summary
   9.2  Global methods for query reformulation
        9.2.1  Vocabulary tools for query reformulation
        9.2.2  Query expansion
        9.2.3  Automatic thesaurus generation
   9.3  References and further reading

10 XML retrieval
   10.1  Basic XML concepts
   10.2  Challenges in semistructured retrieval
   10.3  A vector space model for XML retrieval
   10.4  Evaluation of XML retrieval
   10.5  Text-centric vs. structure-centric XML retrieval
   10.6  References and further reading
   10.7  Exercises

11 Probabilistic information retrieval
   11.1  Probability in information retrieval
   11.2  The Probability Ranking Principle
   11.3  The Binary Independence Model
         11.3.1  Deriving a ranking function for query terms
         11.3.2  Probability estimates in theory
         11.3.3  Probability estimates in practice
         11.3.4  Probabilistic approaches to relevance feedback
         11.3.5  PRP and BIM
   11.4  An appraisal and some extensions
         11.4.1  Okapi BM25: a non-binary model
         11.4.2  Bayesian network approaches to IR
   11.5  References and further reading
   11.6  Exercises
         11.6.1  Okapi weighting

12 Language models for information retrieval
   12.1  The query likelihood model
         12.1.1  Using query likelihood language models in IR
         12.1.2  Estimating the query generation probability
   12.2  Ponte and Croft's experiments
   12.3  Language modeling versus other approaches in IR
   12.4  Extended language modeling approaches
   12.5  References and further reading

13 Text classification and Naive Bayes
   13.1  The text classification problem
   13.2  Naive Bayes text classification
   13.3  The multinomial versus the binomial model
   13.4  Properties of Naive Bayes
   13.5  Feature selection
         13.5.1  Mutual information
         13.5.2  χ² feature selection
         13.5.3  Frequency-based feature selection
         13.5.4  Comparison of feature selection methods
   13.6  Evaluation of text classification
   13.7  References and further reading
   13.8  Exercises

14 Vector space classification
   14.1  Rocchio classification
   14.2  k nearest neighbor
   14.3  Linear vs. nonlinear classifiers and the bias-variance tradeoff
         14.3.1  More than two classes
   14.4  References and further reading
   14.5  Exercises

15 Support vector machines and kernel functions
   15.1  Support vector machines: The linearly separable case
   15.2  Soft margin classification
   15.3  Nonlinear SVMs
   15.4  Experimental data
   15.5  Issues in the categorization of text documents
   15.6  References and further reading

16 Flat clustering
   16.1  Clustering in information retrieval
   16.2  Problem statement
   16.3  Evaluation of clustering
   16.4  K-means
         16.4.1  Cluster cardinality in K-means
   16.5  Model-based clustering
   16.6  References and further reading
   16.7  Exercises

17 Hierarchical clustering
   17.1  Hierarchical agglomerative clustering
   17.2  Single-link and complete-link clustering
         17.2.1  Time complexity
   17.3  Group-average agglomerative clustering
   17.4  Centroid clustering
   17.5  Cluster labeling
   17.6  Variants
   17.7  Implementation notes
   17.8  References and further reading
   17.9  Exercises

18 Dimensionality reduction and Latent Semantic Indexing
   18.1  Linear algebra review
         18.1.1  Matrix decompositions
   18.2  Term-document matrices and singular value decompositions
   18.3  Low rank approximations and latent semantic indexing
   18.4  References and further reading

19 Web search basics
   19.1  Background and history
   19.2  Web characteristics
         19.2.1  Spam
   19.3  Advertising as the economic model
   19.4  The search user experience
         19.4.1  User query needs
   19.5  Index size and estimation
   19.6  Duplication and mirrors
         19.6.1  Shingling
   19.7  References and further reading

20 Web crawling and indexes
   20.1  Overview
         20.1.1  Features a crawler must provide
         20.1.2  Features a crawler should provide
   20.2  Crawling
         20.2.1  Crawler architecture
         20.2.2  DNS resolution
         20.2.3  The URL frontier
   20.3  Distributing indexes
   20.4  Connectivity servers
   20.5  References and further reading

21 Link analysis
   21.1  The web as a graph
         21.1.1  Anchor text
   21.2  Pagerank
         21.2.1  Markov chain review
         21.2.2  The Pagerank computation
         21.2.3  Topic-specific Pagerank
   21.3  Hubs and Authorities
         21.3.1  Choosing the subset of the web
   21.4  References and further reading

Bibliography
Index