Appendix: Glossary

This glossary defines relevant concepts drawn from law (whether or not they were treated in the chapters of this book), from computing (especially artificial intelligence), or from argumentation, provided that they were mentioned in some relevant chapter. Tools are defined if they were mentioned or discussed in the book. So are abstract concepts, but not names of persons. Importantly, with a few exceptions (in biometrics and concerning facial composites), this glossary does not cover forensic science or engineering, or forensic medicine. Much nomenclature was introduced in the chapters about data mining or the forensic disciplines. Here and there, some passage in the glossary has appeared in sections of Nissan (2008a). This is the case of entries concerning: character evidence, logic and law, the doctrine of chances, mens rea, and hearsay. There was little point in trying to replicate here definitions for concepts from forensic science that can be found in Brenner's Forensic Science Glossary (2000). Even in areas that were covered, there is no claim of completeness in this glossary. Sometimes a concept was developed at greater length, simply because the present author was so inclined. It is hoped at any rate that this glossary will prove helpful, supplementing or presenting in a different manner material that was expounded in the chapters of this book. The detailed Subject Index is intended to facilitate access.

Abductive inference A mode of inference, theorised by Charles Peirce. It departs from deductive inference. See Section 2.2.1.6 above. "Abduction, or inference to the best explanation, is a form of inference that goes from data describing something to a hypothesis that best explains or accounts for the data. Thus abduction is a kind of theory-forming or interpretive inference" (Josephson & Josephson, 1994, p. 5).

ABDUL/ILANA A tool developed by computational linguists: an AI program that simulated the generation of adversary arguments about an international conflict (Flowers et al., 1982).

Actus reus The actual performance of a forbidden action, or that action itself, as opposed to the intention (mens rea).
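The Josephson and Josephson characterisation of abduction as inference to the best explanation can be made concrete with a minimal sketch in Python. The observations, candidate hypotheses, and scoring rule below are invented for illustration and are not drawn from any system discussed in this book; the sketch simply scores each hypothesis by how much of the data it accounts for, weighted by a rough prior plausibility, and picks the best-scoring one.

```python
# Minimal sketch of abduction as inference to the best explanation.
# Hypotheses, observations and the scoring rule are illustrative assumptions only.

observations = {"fingerprints_on_knife", "suspect_seen_nearby", "no_forced_entry"}

# Each candidate hypothesis lists the observations it would explain,
# together with a rough prior plausibility.
hypotheses = {
    "suspect_is_guilty": {
        "explains": {"fingerprints_on_knife", "suspect_seen_nearby", "no_forced_entry"},
        "prior": 0.3,
    },
    "suspect_visited_earlier": {
        "explains": {"fingerprints_on_knife", "suspect_seen_nearby"},
        "prior": 0.5,
    },
    "burglary_by_stranger": {
        "explains": {"fingerprints_on_knife"},
        "prior": 0.2,
    },
}

def explanatory_score(name):
    """Coverage of the observations, weighted by the prior plausibility."""
    hyp = hypotheses[name]
    coverage = len(hyp["explains"] & observations) / len(observations)
    return coverage * hyp["prior"]

best = max(hypotheses, key=explanatory_score)
print(best, round(explanatory_score(best), 3))  # the "best explanation" under this scoring
```

Such a scoring rule is of course a caricature of Peirce's notion; it is meant only to show the shape of the inference, from data to the hypothesis that best accounts for them, not to model it.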


Ad hominem argument An argument that attacks the person who is making a claim, in order to attack that claim. Ad hominem arguments are the subject of Walton (1998b).

Adjudicative fact-finding Judicial decision-making (returning a verdict, as distinct from the later stage, of sentencing). Sometimes different kinds of courts can be alternative venues for adjudication for the same case, apart from the option of turning to arbitration rather than a court of justice. Moreover, sometimes alternative venues are known to have a tendency to adjudicate differently. A U.S. taxpayer disagreeing with the Internal Revenue Service (IRS) has four venues of appeal. Two of these require that before turning to them, the taxpayer pay up front what the IRS demands, whereas the other two venues can be approached without paying in advance the disputed tax bill, but these other venues are known to be more biased against the taxpayer. Without paying in advance, a taxpayer can request a conference with an IRS appeals officer, but such officers are Treasury employees, and clearly favour the IRS. Also without paying in advance, a taxpayer can appeal to the U.S. Tax Court, but in 1989 it issued split decisions 55% of the time (i.e., the finding was a compromise), and that kind of court only decided 4% of cases in favour of taxpayers. But if taxpayers pay up front what the IRS requests and then sue in order to recover, then two venues are open: the U.S. Claims Court (which in 1989 favoured taxpayers in 8% of cases), and the U.S. District Court: in 1989 it found for the taxpayers in 18% of cases, and so appears to be the venue least unfavourable to taxpayers (Topolnicki & MacDonald, 1991, p. 84). In a different domain, contracts are sometimes drawn by specifying which geographic jurisdiction is to adjudicate in case of litigation, and not infrequently (especially when the parties are from different countries), the parties expect that adjudication at one's own place may prove to be more favourable. Thus, sometimes the parties reason as though the ideal of perfect objectivity of adjudicators were an ideal at variance with actual practice.

Admissionary rules Typically in the U.S. law of evidence: rules about which kinds of evidence can be admitted and heard in court. As opposed to exclusionary rules.

ADR See alternative dispute resolution.

Adversarial A type of criminal procedure, which is typical of Anglo-American jurisdictions. As opposed to the inquisitorial system. During the 1990s, some countries on the European Continent with an inquisitorial system have to some degree shifted towards an adversarial system. In Britain, public inquiries are in theory inquisitorial, rather than adversarial. See inquisitorial.

Adversary argument One of two classes of arguments (the other class being persuasion arguments), "depending on the goals and expectations of the participants. [...] In [...] adversary arguments, neither participant expects to persuade or be persuaded: The participants intend to remain adversaries, and present their arguments for the judgment of an audience (which may or may not actually be present). In these arguments, an arguer's aim is to make his side look good while making the opponent's look bad" (Flowers et al., 1982, p. 275). The ABDUL/ILANA program models such arguers (ibid.).

ADVOKATE A computer system for the evaluation of the credibility of eyewitness evidence (Bromby & Hall, 2002). It is described in Section 4.4 in this book.

Age-progression software A kind of computer graphic software, useful to the police for the purposes of locating missing people, in that it predicts how a given person (based on an old photograph) would have aged meanwhile. See Section 8.2.3.

Agent beliefs Models from AI for treating them were applied to modelling the reasoning about legal evidence, in Ballim et al. (2001) and Barnden (2001). This area is called attribution in psychology. See Section 3.4.

AI See Artificial intelligence.

AI & Law Artificial intelligence as applied to law, this being an established discipline both within legal computing and within artificial intelligence.

ALIAS A particular multi-agent architecture, with abductive logic-based agents. It was applied to the modelling of reasoning on the evidence in a criminal case, in Ciampolini and Torroni (2004), using LAILA, a language for abductive logic agents. See Section 2.2.1.5.

Alibi In the proper, legal sense of the term, an alibi states an alternative location. Loosely speaking, one sometimes uses the term more generally, to refer to an alternative, exonerating account provided by a criminal suspect, or by a defendant being tried. It disconfirms a claim which is essential for the accusers if they are to prove the charge. "The defence of alibi presupposes that the accused was somewhere else when the offence happened. If he does not remember where he was, then he can give no particulars. If he was alone at the time, he must still give such particulars as he can of where he was and when" (Osborne, 1997, p. 135). See Section 2.2.2.8.

ALIBI A computer system developed by Nissan and his students in various prototypes, as early as Kuflik et al. (1989). In an AI perspective, it is a planner which produces alternative explanations: with respect to an input accusation, it seeks exoneration or a lesser liability. See Section 2.2.2.

Alternative dispute resolution (ADR) In civil cases, case disposition (q.v.) includes, among the other options: the court finding for one of the parties, or settlement out of court, or alternative dispute resolution. The latter may be either arbitration, or binding or nonbinding mediation.

Ambiguity aversion "Ambiguity aversion is a person's rational attitude towards probability's indeterminacy. When a person is averse towards such ambiguities, he increases the probability of the unfavorable outcome to reflect that fear. This observation is particularly true about a criminal defendant who faces a jury trial" (from an extended abstract of Segal & Stein, 2006). "Because most defendants are ambiguity-averse, while the prosecution is not, the criminal process systematically involves and is thoroughly affected by asymmetric ambiguity-aversion" (ibid.). Indeed: "The prosecution, as a repeat player, is predominantly interested in the conviction rate that it achieves over a long series of cases. It therefore can depend on [...] general probability as an adequate predictor of this rate. The defendant only cares about his individual case and cannot depend on this general probability". Because of the ambiguity, from the

defendant's perspective, of his individual probability of conviction, "[t]he defendant consequently increases this probability to reflect his fear of that ambiguity" (ibid.). "Asymmetric ambiguity-aversion foils criminal justice. The prosecution can exploit it by forcing defendants into plea bargains that are both inefficient and unfair. Because plea bargain is a predominant method of case-disposition across the United States, this exploitation opportunity is particularly pernicious" (ibid.).

Amicus curiae In some countries, an expert witness above the parties, appointed by the court.

Anchored narratives (or AN for short) The theory of anchored narratives was proposed by Wagenaar et al. (1993). The central idea of this approach is that juridical proof is organized around plausible narratives, where "plausibility" is determined by the relationship between the story offered at trial and the background knowledge/common sense of the decision maker. A shortcoming of the theory of anchored narratives is that it has no operationalization of "not guilty". In the theory of anchored narratives, a narrative (e.g., the prosecution's claim that John murdered his wife) is related to evidence (e.g., John's fingerprints on the murder weapon) by a connection that must be satisfactory for the narrative to hold once the evidence is accepted: "[...] triers of fact [i.e., judges or, in some countries, the jury] reach their decisions on the basis of two judgments; first an assessment is made of the plausibility of the prosecution's account of what happened and why, and next it is considered whether this narrative account can be anchored by way of evidence to common-sense beliefs which are generally accepted as true most of the time" (Jackson, 1996, p. 10). For the story to be comprehensively anchored, each individual piece of evidence needs to be not merely plausible, but safely assumed to be certain, based on common-sense rules which are probably true (see generalisations). Critics point out that this begs the question: generalizations only hold with some degree of probability, if this can be pinpointed. Moreover, Ron Allen pointed out that in the theory of anchored narratives, there is no operationalization of "not guilty" or "not liable". His own related, but earlier, work is free from that fault. See Section 5.1.2.

Anti-forensics Strategies to evade computer forensic investigations; see digital anti-forensics (q.v.). See Section 6.2.1.5.

Appeal The right to appeal against a judgment is an ancient Roman principle. It was renewed during the Middle Ages. For example, in Montpellier, a town (now in southwestern France) which obtained privileges of autonomy from the Crown of Aragon, "appeal procedure was used from 1204 [and the town] enacted in 1212 a statute fixing a time limit for first and second appeals exclusively to their [local authorities]. But the kings in France, the counts in Provence swiftly monopolised the competency concerning "final appeals"; and by the end of the XIIIth century, it became impossible for the towns to keep such powers" (Gouron, 1992, pp. 34–35). The reverse of the medal is the right of some judiciary authorities not to have their judgment appealed against: historically in the United States, one of the various meanings that freedom of proof – "a slippery term" (Twining, 1997, p. 462) – used to have (unlike the current sense: see free proof) was this

one: "freedom from hierarchical controls over fact-finding: for example freedom of triers of fact from appeal or review by a superior authority" (ibid., p. 448). Countries have a hierarchy of courts, and appeal is to a higher court.

Appeal to expert opinion See Expert opinion, Appeal to.

Applicant In some kinds of trial, this is the plaintiff; the name for the defendant is then respondent. The term applicant is used in the procedure of employment tribunals in England and Wales. In the Civil Procedure Rules 1993, in England and Wales, the term plaintiff was replaced with claimant (thought to be a more transparent, and more widely understood term: the same reform excised other traditional terms as well).

Araucaria A relatively widespread tool for visualising arguments (Reed & Rowe, 2001, 2004). It was developed at the University of Dundee, in Scotland. The software is freely available. It was also discussed in chapters 11 and 12 in Walton, Reed, and Macagno (2008). See Section 3.7.

Arbitration In civil cases, a form of case disposition (q.v.). Like mediation, arbitration is a form of alternative dispute resolution (that is, alternative to the courts).

AREST A particular expert system, described by Badiru et al. (1988), whose application was the profiling of suspects of armed robbery. See the notes of Section 6.1.3.

Argumentation How to put forth propositions in support of or against something. An established field in rhetoric; within AI & Law it became a major field during the 1990s.

ArguMed A computer tool for visualising arguments, described by Verheij (1999, 2003). One of its peculiarities is the concept of entanglement (q.v.). See Section 3.7.

Argumentation layers Prakken and Sartor (2002) usefully "propose that models of legal argument can be described in terms of four layers. The first, logical layer defines what arguments are, i.e., how pieces of information can be combined to provide basic support for a claim. The second, dialectical layer focuses on conflicting arguments: it introduces such notions as "counterargument", "attack", "rebuttal" and "defeat", and it defines, given a set of arguments and evaluation criteria, which arguments prevail. The third, procedural layer regulates how an actual dispute can be conducted, i.e., how parties can introduce or challenge new information and state new arguments. In other words, this level defines the possible speech acts, and the discourse rules governing them. Thus the procedural layer differs from the first two in one crucial respect. While those layers assume a fixed set of premises, at the procedural layer the set of premises is constructed dynamically, during a debate. This also holds for the final layer, the strategic or heuristic one, which provides rational ways of conducting a dispute within the procedural bounds of the third layer" (Prakken & Sartor, ibid., section 1.2). See Section 3.8.

Arraignment In criminal cases: "All trials on indictment begin with the 'arraignment' which consists of formally putting the counts in the indictment to the accused and inviting him to plead [i.e., to plead guilty or not guilty] to each.

The jury are not empanelled at this stage and in most [English] courts the procedure is that matters are 'listed to plead' where nothing else is dealt with but the taking of the plea" (Osborne, 1997, p. 138). Exceptionally (in England), if solicitors write to the Crown Prosecution Service and to the court "that there is categorically to be a not guilty plea, the matter may be listed for trial without this preliminary stage" (ibid.).

Artificial intelligence (AI) Chapter 1 in Patrick Winston's (1984) popular textbook explained (ibid., pp. 1–2): There are many ways to define the field of Artificial Intelligence. Here is one:

• Artificial Intelligence is the study of ideas that enable computers to be intelligent.

But what is intelligence? Is it the ability to reason? Is it the ability to acquire and apply knowledge? Is it the ability to perceive and manipulate things in the physical world? Surely all of these abilities are part of what intelligence is, but they are not the whole of what can be said. A definition in the usual sense seems impossible because intelligence appears to be an amalgam of so many information-representation and information-processing talents. Nevertheless, the goals of Artificial Intelligence can be defined as follows:

• One central goal of Artificial Intelligence is to make computers more useful.
• Another central goal is to understand the principles that make intelligence possible.

Nissan (1991, section 1) introduced the bipolarity of goals of artificial intelligence as follows, in a rather florid style catering to a broad audience: In the framework of a discussion about the epistemology of computing, Bernard Stiegler (1986) employs a metaphor based on the myth of Epimetheus and Prometheus. According to that myth, Epimetheus endowed animals with various qualities, but forgot man. Unfledged and defenceless, to survive, man had to be endowed with reason by Prometheus, who sacrificed himself in the process. Here, this metaphor is going to be transposed onto the following idea, different from Stiegler's: the AIer [i.e., the practitioner or scholar of artificial intelligence] is an Epimetheus who yearns for becoming Prometheus for the machine. Because of the very nature of the different interests catered to by the two terms in the binomial science and technology, the technologist relishes his or her Epimethean role. Rational, industrial criteria justify this, and are justified themselves by the social-cultural pattern that produced, e.g., Edison (and for which, see, e.g., Jenkins, 1987, section II). So far, computer technology has endowed the machine with new attributes, just as the myth has Epimetheus endowing the spectrum of animal species with various combinations and dosage of faculties, that – albeit non-ratiocinative – are suitable for inserting them in the natural environment. No actual system – scientific prototypes, or industrial applications that computing has produced, escapes Epimetheus' limits. Not only [that]: nowadays, we [are] witness[ing] a trend in computing that aims at the mainstreamization, in computing practice, of AI methods, that are adapted by making them more similar to traditional algorithmic programming. The idea is, that you can describe objects in cohesive clusters, or adopt the AI technique of defining and searching a constraint space of possibilities, with just practical aims, with no cognitive preoccupations (or with just ergonomically [i.e., labour-facilitating] motivated cognitive preoccupations). Aristocrats of AI science may turn up their nose at such down-to-earth interests, when wearing the gown of [fundamental] research, but actually that possible attitude does not exclude taking interest, as a side occupation, in applied projects,

hopefully rentable. Scientific and technological interests each have a specific dignity: admitting the criteria of practice is not degradation, it is not tantamount to the guilt of Peer Gynt, who in Ibsen's drama [of 1867] wears a tail to gain [acceptance into] the Trolls' country. [Whatever] the computer system – implemented or extrapolated as feasible – we were to consider, we are condemned to spot there, at most, Epimetheus' gift – a task-dependent suitability – while being reluctant to consider the system as being already intelligent, with no double quotes. This does not necessarily imply a mind/mechanism or soul/matter dualism, even though it may culturally motivate some dependency or counter-dependency (the latter claiming, because this runs against dualism, that intelligence can be achieved, with a certain technical paradigm, or more in general). Dissatisfaction with given AI artifacts matches the criteria of Promethean eschatology, practically impossible to please on the ground of all cognitive desiderata. Realistically, such criteria have to be drastically simplified to be coped with step by step, and then they have to be gradually redefined more tightly, while we progress on the alternative trestles of widening the technical can-do space and of gaining more scientific insight. The arbitrariness of technical representation shapes, and provisionally delimits, scientific conception, but the latter, in turn, provides feedback for technological development. Awareness that scientific conceptions are a product of social constructivism, not just an objective product of either occasional serendipity or methodic [perspiration], is helpful to protect the scientist from over-enthusiasm. However, new adepts do not always appreciate that, which can explain why in certain domains (e.g., linguistics), AI often becomes an obtrusive tool, or even a new theory on its own, instead of being recognized as being a versatile, non-partisan testbed where representation subserves theory, does not replace it. Different research taxonomies of AI are surveyed by Hall and Kibler (1985); cf. Ringle (1979, 1983). Anyway, AI is most often conceived as fitting in a spectrum between the two ends: simulation of cognition as being the main concern, and the working, but cognitively indifferent tool. The contrast between science and technology – as we witness it in AI, notwithstanding the headway made by the tandem since the Seventies – eternalizes distance to be covered; this is easily admitted, once we draw a comparison with the history of science and technology during the positivism era, when distances were thought to be smaller than they later proved to be (and when the [20th] century was figured out by extrapolating into "triumph" current "militant" conceptions of the [19th] century). Nowadays, there is the factor of impatience, as Latour (1986) has pointed out: "used to precede, engineers find it uneasy to follow [folk expectations], instead of just stupefy". What is most specific in the contrast, inside AI, between technological opulence and scientific eschatology, is tightly bound to a terminological choice, that took sides with only one element of the binomial: by naming the discipline artificial intelligence (or, less explicitly and more modestly, AI), the distance left to be covered has been seized by the wrong end. A practical, concrete, down-to-earth choice (the nearest end), could have been: defining a cumulative, open-ended meter, callidiority (from Latin callidior, i.e. "smarter [than]").
This meter has both the merit and the fault of sparing technology the lashing of cognitive science ambitions, as the callidiority meter measures only past steps, not relativeness with respect to an Omega Point. Yet, such meter could cost ungluing the binomial. For technology, it may mean sinking into marshes – [in] the eyes of AI scientists – similar to those that, for up-to-date programmers [since the 1980s], COBOL [programming-language] administrative data processing has become. Production-system based commercial products would be repeated ad nauseam, acritically, as a this-worldly relish of a mortal fallen angel, nearly deaf to the Memento mori

admonition of newly underfunded basic research. Several AIers fear such an AI Winter, that could stem out of premature industrial disillusion, after too feverish a fashion. [And indeed, it took place during the 1990s, with blue-sky research on open-ended problems no longer being funded the way it used to be during the 1980s. The very term expert system went out of favour, then being replaced with intelligent system.] On the other hand, ungluing the binomial is a scenery that by now cognitive scientists, AI researchers, and the scientific culture of AI cannot afford to accept. Indeed – and this is the fundamental importance of AI – AI as implemented or to be implemented is, nowadays, the testbed that makes theories materialize. Intellect hopes in the advantages of matter. Once we have started considering it feasible to bring the Heavenly Jerusalem down to the Earthly one, not to lose hold has become a cultural imperative.

Association rules "Association rules represent relationships between items in very large databases" (Chan et al., 2001b). Association rules are discovered by means of data mining (for which, see Chapter 6). An example would be "given a market database, it was found that 80% of customers who bought the book 'XML for beginners' and 'internet programming' also bought a book on 'Java programming'." If X and Y are two sets of disjoint terms, then an association rule can be expressed as the conditional implication X ⇒ Y, i.e. the occurrence of the set of items X in the market basket implies that the set of items Y will occur in this market basket. Two important aspects of an association rule are confidence and support [...]. The confidence of an association rule r: X ⇒ Y is the conditional probability that a transaction contains Y given that it contains X, i.e. confidence (X ⇒ Y) = P(X,Y)/P(X). The support of an association rule is the percentage of transactions in the database that contains both X and Y, i.e., Support (X ⇒ Y) = P(X,Y). The problem of mining association rules can be stated simply as follows: Given predefined values for minimum support and minimum confidence, find all association rules which hold with more than minimum support and minimum confidence." (Chan et al., 2001b, p. 278, citing Agrawal & Srikant, 1994 for the definition of confidence and support). A minimal numerical sketch of support and confidence is given after the Bayesian entries below.

ATT-Meta A system for simulative reasoning by agents about other agents, which deals with agents' beliefs within a formal approach to uncertain reasoning about them. Barnden (2001) applies it to reasoning about legal evidence.

Attribution In psychology: how people (and computational cognitive models) reason about their own beliefs and the ones they ascribe to others. In AI, this area is called agents' beliefs. See Section 3.4.

Auxiliary probative policy (rules of) A category of rules excluding or restricting the use of admitted evidence. As opposed to rules of extrinsic policy. In interpretations of the American law of evidence, according to Wigmore's terminology, rules of auxiliary probative policy are such exclusionary rules that are intended to promote rectitude of decision, avoiding unreliability or alleged prejudicial effect.

AVBPA Audio- and Video-Based Biometric Person Authentication: an acronym for the name of a series of conferences in biometrics.

AVERs The visualisation component of the architecture of a sense-making software tool for crime investigation, as envisaged by Bex et al. (2007). AVERs was "implemented as a web front-end to an SQL database.
A case can be represented visually through multiple views; in this paper we will focus on the two graphical views, that is, the evidence view and the story view" (section 6 ibid.). Ideally,

they wanted to design a more sophisticated tool than such investigative analysis software. Their approach to the story of the prosecution and the defence is qualitative, and does not resort to probabilistic quantification. It fits within logical and computer science research into argumentation, but this is combined with reasoning about stories and evidence. See Section 5.4.

Background generalisations See generalisations.

Backup evidence question In argumentation studies, Walton's (1997) Appeal to Expert Opinion offered (ibid., pp. 211–225) an argumentation scheme for "Argument for Expert Opinion", then reproduced in Walton et al. (2008, pp. 381–382). See s.v. Expert opinion, Appeal to above. The expert source is E; the subject domain is S; and A is a proposition which E claims to be true (or false). The backup evidence question is: "Is E's assertion based on evidence?". It is articulated in three detailed subquestions: "What is the internal evidence the expert herself used to arrive at this opinion as her conclusion?"; "If there is external evidence – for example, physical evidence reported independently of the expert – can the expert deal with this adequately?"; "Can it be shown that the opinion given is not one that is scientifically unverifiable?".

Bail "Bail is the release of a person subject to a duty to surrender to custody in the future" (Osborne, 1997, p. 95). In English law, the Bail Act 1976 provides that "a defendant who fails without reasonable cause to surrender to custody is guilty of the offence of absconding", and it lies on the defendant to prove reasonable cause (e.g., sudden serious illness, or an accident on the way to court, for which evidence must be given). The Bail Act 1976 abolished the "common practice to grant an accused bail 'on his own recognisance'. This was a fixed sum of money which the accused did not have to provide at the time of granting bail but which, should he fail subsequently to surrender to custody, would be forfeited" (Osborne, ibid.).

Bayes, naïve For a given sample x we search for the class ci that maximises the posterior probability
P(ci|x) = P(x|ci)P(ci)/P(x)
by applying Bayes rule. Then x can be classified by computing
c = arg maxci P(ci|x) = arg maxci P(x|ci)P(ci)
Also see naïve Bayesian classifiers.

Bayes' theorem When dealing with a hypothesis H, and some evidence E, Bayes' theorem states:

P(H|E) = P(E|H)P(H)/P(E)

this can be read as follows: the posterior probability P(H|E), i.e., the probability that H is true given E, is equal to the product of the likelihood P(E|H), i.e., the probability of E given the truth of H, and the prior probability P(H) of H, divided by the prior probability P(E) of E.

Bayesian debate A controversy among legal scholars, concerning legal evidence and the use of statistics, and in particular of Bayes' theorem. See Sections 2.3 and 5.1. On statistics in DNA evidence, see Section 8.7.2.2.

Bayesian enthusiasts Such legal scholars of evidence or forensic statisticians who strongly support the use of Bayes' theorem as a foundation for statistical analysis as applied to legal evidence. Opposed by the Bayesio-skeptics. Note that whereas some in both camps accept these labels, there also are objections, and Bayesians vs. skeptics are more acceptable labels.

Bayesian networks A Bayesian network is a directed acyclic graph (i.e., a graph without loops, and with nodes and arrows rather than direction-less edges), such that the nodes represent propositions or variables, the arcs represent the existence of direct causal influences between the linked propositions, and the strengths of these influences are quantified by conditional probabilities. Whereas in an inference network the arrow is from a node standing for evidence to a node standing for a hypothesis, in a Bayesian network instead the arrow is from the hypothesis to the evidence. In an inference network, an arrow represents a relation of support. In a Bayesian network, an arrow represents a causal influence, and the arrow is from a cause to its effect. Judea "Pearl has always argued for a subjective degree of belief interpretation of the probabilities in Bayesian networks" (ibid.), these being a formalism he introduced and developed in a series of papers in the 1980s, leading to a book (Pearl, 1988). Judea Pearl's "departure from standard Bayesianism arises because he thinks that prior probability distributions are inadequate to express background knowledge, and that [one] also needs to use causal judgments which cannot be expressed in probabilistic terms" (Gillies, 2004, p. 284).1

Bayesian reasoning or Bayesian updating or Bayesian conditionalisation The use of the formula of Bayes' theorem in order to go from the prior probability P(H) of a hypothesis H, to the posterior probability P(H|E) of H, i.e., the probability that the hypothesis H is true, given the evidence E.

Bayesianism "Bayesianism is, roughly speaking, the view that relating hypotheses to evidence can be solved by bayesian reasoning" (Gillies, 2004, p. 287).

Bayesianism (Imperialistic) A charge made, by supporters of alternative systems of probability, or then by those suspicious of probabilities altogether, not only in a legal scholarship context: Imperialistic Bayesianism consists of the attitude of the Bayesian who dismisses (too quickly, the charge claims) any approach to uncertainty that is not based on Bayes' Theorem.
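The naïve Bayes and Bayes' theorem entries above can be illustrated with a minimal Python sketch. All numbers are invented for exposition; the sketch first applies Bayes' theorem to obtain a posterior from a prior, a likelihood and the probability of the evidence, and then classifies a sample by choosing the class that maximises the product of the class prior and the per-feature likelihoods (the naïve independence assumption).

```python
# Minimal sketch of Bayesian updating and of naive Bayes classification.
# All probabilities are invented for illustration only.
from math import prod

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
def posterior(prior_h, likelihood_e_given_h, prob_e):
    return likelihood_e_given_h * prior_h / prob_e

print(posterior(0.01, 0.7, 0.05))  # 0.14: the prior 0.01 is updated by the evidence

# Naive Bayes: choose the class c maximising P(c) * prod_j P(x_j | c),
# which is proportional to the posterior P(c | x).
priors = {"class_a": 0.6, "class_b": 0.4}
likelihoods = {  # P(feature | class), features assumed conditionally independent
    "class_a": {"f1": 0.2, "f2": 0.9},
    "class_b": {"f1": 0.7, "f2": 0.3},
}

def classify(features):
    scores = {c: priors[c] * prod(likelihoods[c][f] for f in features) for c in priors}
    return max(scores, key=scores.get)

print(classify(["f1", "f2"]))  # class_a: 0.6*0.2*0.9 = 0.108 beats 0.4*0.7*0.3 = 0.084
```

Dividing each class score by the sum of all scores would recover the normalised posteriors, i.e., the division by P(E) in the theorem.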

1 Dechter, Geffner, and Halpern (2010) is a jubilee volume honouring Judea Pearl.
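Referring back to the Association rules entry above, the following Python sketch computes the support and confidence of a rule X ⇒ Y over a handful of invented transactions; the book titles echo the quoted example, and the data are illustrative assumptions only.

```python
# Minimal sketch of support and confidence for an association rule X => Y,
# following the definitions quoted from Chan et al. (2001b).
# The transaction data are invented for illustration only.

transactions = [
    {"xml_for_beginners", "internet_programming", "java_programming"},
    {"xml_for_beginners", "internet_programming"},
    {"xml_for_beginners", "internet_programming", "java_programming"},
    {"java_programming"},
    {"xml_for_beginners", "internet_programming", "java_programming", "databases"},
]

def support(itemset):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y):
    """Estimate of P(Y | X) = support(X union Y) / support(X)."""
    return support(x | y) / support(x)

x = {"xml_for_beginners", "internet_programming"}
y = {"java_programming"}
print(support(x | y))    # support of X => Y: 3 of 5 transactions, i.e. 0.6
print(confidence(x, y))  # confidence of X => Y: 3 of the 4 transactions containing X, i.e. 0.75
```

A rule miner then simply keeps those rules whose support and confidence exceed the chosen minimum thresholds.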

Bayesio-skeptics Such legal scholars of evidence who have misgivings about the validity or desirability of Bayes' theorem, or even of other probabilistic or statistical formalisms, in the analysis of the evidence of given criminal cases (while not necessarily opposed to such use in civil cases). The term skeptics is more widely acceptable, being less charged, albeit less specific.

Bench trial A trial in which the verdict is given by some (trained, professional) judge or judges, instead of by a jury (i.e., popular judges). In some countries there are no jury trials. In countries with jury trials, there are bench trials as well, which apply in different categories of cases.

Beyond a reasonable doubt The standard for deciding to convict, in a criminal case. The corresponding Latin formula is: in dubio pro reo ("If in doubt, find for the defendant"). In contrast, in a civil case the standard for the verdict is less demanding: more likely than not. See utility. "Proof beyond a reasonable doubt is such proof as precludes every reasonable hypothesis except that which it tends to support and which is wholly consistent with the defendant's guilt and inconsistent with any other rational conclusion" (Stranieri & Zeleznikow, 2005a, Glossary, s.v. Proof beyond reasonable doubt). In contrast, "Proof by a fair preponderance of the evidence is the standard of proof required in civil cases; a decision is made according to that evidence which as a whole is more credible and convincing to the mind and which best accords with reason and probability" (ibid., s.v. Proof by a fair preponderance of the evidence). In the Carneades argumentation tool (Gordon & Walton, 2006; Gordon et al., 2007), without sticking to the legal sense of the phrase, the strongest standard of proof for an argument was defined to be BRD (beyond reasonable doubt): "A statement meets this standard iff it is supported by at least one defensible pro argument, all of its pro arguments are defensible and none of its con arguments are defensible". (A minimal sketch of such a check appears after the Big Floyd entry below.) Cf. Scintilla of evidence, and Preponderance of the evidence. A curious effect of the different standards for criminal and civil cases can be seen in the 1995 case of celebrity sportsman O. J. Simpson, whom a jury at a criminal trial found, on 4 October 1995, not guilty of the murder of his ex-wife and her friend (on a June day in 1994, Nicole Brown Simpson and Ron Goldman had been stabbed to death outside her Brentwood home in California), yet who later on, in 1997, was considered "responsible" for their deaths by a civil court that ordered him to pay compensation (not paid because of bankruptcy). This is further complicated by the rule against double jeopardy: a defendant in the United States cannot be tried all over again. In late November 2006, Simpson's book entitled IF I did it, Here's How it Happened was announced amid an outcry. Publication was decided upon on 1 July 2007, but the family of the male victim acquired the rights to the revenue (e.g., Hunt, 2007). Nevertheless, basically it is quite correct that the justice system cannot afford other than a very demanding standard of proof at criminal trials.2

2 In early October 2008, thirteen years to the day after his acquittal from the charge of murder, the former football star was convicted of armed robbery, in the context of an event that he claimed was an attempt to recover his own property. One wonders whether the jury could have been insensitive to the highly publicised previous case. See Trial by the media.

Another example is that of actor Robert Blake, who in 2002 was arrested for the murder of his wife Bonnie Lee Bakley, one year after her death. Retired stuntman Ronald "Duffy" Hambleton testified against Blake, claiming that Blake had tried to hire him to kill his wife. Blake was acquitted of murder in 2005, but her family filed a civil suit and Blake was found to be liable for her murder. In an appeal, the prosecution suggested that detectives failed to investigate whether associates of Christian Brando (whom the woman had claimed, in a letter to him, had fathered her baby) may have murdered Bakley. Hambleton was one of Brando's associates, and a witness claimed that Brando (who had an alibi) had said "Somebody should put a bullet in that bitch's head". This makes it all the more interesting that Blake was found liable by a civil court, after being acquitted by a criminal court. This other example is from the U.K., and concerns the Omagh bombing,3 in Northern Ireland (i.e., Ulster). It killed 29 people in 1998. The police failed to secure a criminal conviction. Only one man, Sean Hoey, faced criminal charges over the Omagh killings, and he was acquitted in December 2007. Another man, Colm Murphy, was found guilty in Dublin's Special Criminal Court of conspiring to cause the Omagh bombing, but his conviction was later quashed. On 8 June 2009, four men were found to be responsible for the terrorist attack, and the Real IRA was found liable, in a landmark civil case brought by relatives of the victims at Belfast High Court. They had sued five men (one of them was cleared), as well as the Real IRA as an organisation, for up to £14 million. The case opened in April 2008. Evidence for the case was heard (until March 2009) in both Belfast and Dublin, thus making legal history. It took Mr Justice Morgan three months to sift through the evidence. Those sued were Michael McKevitt (the leader of the Real IRA), Liam Campbell, Colm Murphy, and Seamus Daly (these four were found responsible), as well as a man who was cleared, Seamus McKenna (the evidence against him came from his estranged wife, who was eventually considered an unreliable witness). The judge awarded more than £1.6 million in damages to 12 named relatives who took the action. Much of the evidence was obtained by an undercover FBI agent, David Rupert, who infiltrated the Real IRA. Records and traces on two phones used by the bombers on the day of the attack were important evidence, and the judge deemed it proved that Campbell and Daly were in possession of the phones before and after the attack. Quite importantly, the burden of proof was as required in a civil case, and it is in this context that one has to understand the judge's statement that he considered the case against Campbell overwhelming.

Big Floyd A link analysis tool of the FBI (Bayse & Morris, 1987), with inferential capabilities, and applying the notion of template matching for detecting the likelihood that particular types of crimes were committed. See Section 6.1.2.3.
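The Carneades proof-standard definition quoted in the Beyond a reasonable doubt entry above is algorithmic enough to be checked directly. The Python sketch below implements only that quoted sentence, over a deliberately simple representation of pro and con arguments that is an assumption of this example, not the representation used by the Carneades tool itself.

```python
# Illustrative check of the "beyond reasonable doubt" (BRD) proof standard for an
# argument, as quoted above; the argument representation is an assumption of this
# sketch, not the data structure actually used by Carneades.

def meets_brd(pro_arguments, con_arguments):
    """True iff there is at least one defensible pro argument, all pro arguments
    are defensible, and no con argument is defensible."""
    return (len(pro_arguments) > 0
            and all(a["defensible"] for a in pro_arguments)
            and not any(a["defensible"] for a in con_arguments))

pro = [{"id": "eyewitness_testimony", "defensible": True},
       {"id": "fingerprint_match", "defensible": True}]
con = [{"id": "alibi_claim", "defensible": False}]

print(meets_brd(pro, con))  # True: every pro argument holds, no con argument does
print(meets_brd([], con))   # False: there is no pro argument at all
```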

3 Omagh is pronounced Oma. The town of Omagh is in County Tyrone.

Biometrics "Biometrics, which refers to identifying an individual based on his or her physiological or behavioral characteristics, has the capability to reliably distinguish between an authorized person and an imposter. A biometric system can be operated in two modes: (1) verification mode and (2) identification mode (Jain et al., 2000). The former is called person verification, or person authentication. A biometric system operating in the verification mode either accepts or rejects a user's claimed identity, while a biometric system operating in the identification mode establishes the identity of the user without any claimed identity information" (Khuwaja, 2006, pp. 23–24). Jain, Bolle, and Pankanti (1999) and Li and Jain (2009) are books on the subject. Bromby (2010) discussed how biometrics can aid certification of digital signatures. The most mature technique for person verification, or one of the most mature, is fingerprint-based identification (Isobe, Seto, & Kataoka, 2001; Seto, 2002). Other approaches are based on "face, hand geometry, iris, retina, signature, voice print, facial thermogram, hand vein, gait, ear, odor, keystroke dynamics, etc." (Khuwaja, 2006, p. 24). For example, iris recognition is the subject of Li, Yunhong, and Tan (2002), Yunhong, Tan, and Jain (2003). Retina recognition is discussed by Yoichi Seto (2009). Biometric fusion (or information fusion in biometrics: Ross & Jain, 2003) is "[t]he general method of improving performance via collection of multiple samples" (Rattani et al., 2008, p. 485). Multi-biometrics is "[t]he ability to utilize multiple biometrics modalities (multimodal), instances within a modality (multi-instance), and/or algorithms (multi-algorithmic) prior to making a specific verification/identification or enrollment decision" (ibid.), where enrollment is "[t]he initial process of collecting biometric data from a user and then storing it in a template for later use" (ibid., p. 484). See Section 8.7 on individual identification.4

Blackboard systems "A blackboard system is a group of knowledge modules collaborating with each other by way of a shared database (blackboard), in order to reach a solution to a problem. Its basic components are: the blackboard, knowledge sources (independent modules that collectively contain the knowledge required to solve the problem) and a control mechanism (or scheduler) which directs the problem-solving process by deciding which knowledge source is most appropriately used at each step in the solution process. The knowledge sources have a condition part and an action part. The condition component specifies the situations under which a particular knowledge source could contribute to an activity. The scheduler controls the progress toward a solution in blackboard systems, by determining which knowledge sources to schedule next, or which problem sub domain to focus on" (Stranieri & Zeleznikow, 2005a, Glossary). See Section 6.1.6.1 in this book. Blackboard systems have found application in legal computing: "GBB is an expert system shell based on the blackboard paradigm. It provides the blackboard database infrastructure, knowledge source

4 In particular, see Section 8.7.3.1. Also see the last footnote of Section 6.2.1.9.

languages and control components needed by a blackboard application. It is used in the construction of the CABARET legal knowledge based system" (Stranieri & Zeleznikow, 2005a, Glossary). Ashley's (1991) HYPO system (which modelled adversarial reasoning with legal precedents) was continued in the CABARET project (Rissland & Skalak, 1991), and the CATO project (Aleven & Ashley, 1997). Besides: "The PROLEXS project at the Computer/Law Institute, Vrije Universiteit, Amsterdam, Netherlands is concerned with the construction of legal expert shells to deal with vague concepts. Its current domain is Dutch landlord-tenant law. It uses several knowledge sources and the inference engines of the independent knowledge groups interact using a blackboard architecture" (Stranieri & Zeleznikow, 2005a, Glossary). Blackboard systems are the subject of Hayes-Roth (1985) and of Engelmore and Morgan (1988). A minimal control-loop sketch of the blackboard idea is given after the Burden of proof entry below.

Blue ribbon jury A specially qualified jury, instead of a jury whose members are ordinary members of the public. This is one of several possible remedies to trial complexity (Hewer & Penrod, 1995, p. 533).

Bolding-Ekelöf degrees of evidential strength Introduced in Bolding (1960) and Ekelöf (1964). Åqvist (1992) proposed a logical theory of legal evidence, based on the Bolding-Ekelöf degrees. Shimony and Nissan (2001) restated Åqvist's approach in terms of the probabilistic version of Spohn's (1988) kappa calculus as developed in AI research. (The kappa value of a possible world is the degree of surprise in encountering that possible world, a degree measured in non-negative integer numbers.)

Burden of proof (or persuasion burden) Which party in a trial should prove or disprove a given claim. "There is a distinction between the evidential and legal burden of proof" (Jefferson, 1992, p. 19). In criminal cases, the defendant's "burden is called the evidential burden or onus of proof. The prosecution's burden is the legal one" (ibid.). See Evidential burden (as well as onus of proof) and Legal burden. The burdens of proof are also important in scientific inquiry: scientific uncertainty and burdens of proof in, respectively, scientific practice and environmental law are discussed – from the vantage point of the philosophy of science – in Lemons et al. (1997). Allen and Pardo assert (2007a, p. 108, fn 1), concerning formalisation, that

there are attempts to defend an expected-utility approach to burdens of persuasion with an argument that is valid if, but only if, burdens of persuasion apply to cases as a whole (the defendant is liable or not, guilty or not), but this is false; they apply to individual elements (Allen, 2000).
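To make the Blackboard systems entry above more concrete, here is a minimal Python control-loop sketch (an illustrative toy, not GBB, PROLEXS or CABARET): each knowledge source has a condition part and an action part, and a trivial scheduler repeatedly fires one applicable source, which posts its contribution to the shared blackboard.

```python
# Toy sketch of a blackboard architecture: a shared data store, knowledge sources
# with a condition part and an action part, and a scheduler choosing one applicable
# source per cycle. Names and content are invented for illustration only.

blackboard = {"facts": {"witness_statement_received"}, "hypotheses": set()}

knowledge_sources = [
    {
        "name": "generate_hypothesis",
        "condition": lambda bb: "witness_statement_received" in bb["facts"]
                                and not bb["hypotheses"],
        "action": lambda bb: bb["hypotheses"].add("suspect_was_present"),
    },
    {
        "name": "corroborate",
        "condition": lambda bb: "suspect_was_present" in bb["hypotheses"]
                                and "request_cctv_footage" not in bb["facts"],
        "action": lambda bb: bb["facts"].add("request_cctv_footage"),
    },
]

def scheduler(bb, sources, max_cycles=10):
    for _ in range(max_cycles):
        applicable = [ks for ks in sources if ks["condition"](bb)]
        if not applicable:
            break                      # no knowledge source can contribute any further
        chosen = applicable[0]         # a real scheduler would rank the candidates
        chosen["action"](bb)
        print("fired:", chosen["name"])

scheduler(blackboard, knowledge_sources)
print(blackboard)
```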

Burden, evidential One of the two kinds of burden of proof, as opposed to legal burden. In criminal law in England, "[i]n most offences the Crown does not need to negative [sic] any defence the accused might have. It has to show the actus reus and mens rea, if any. If the defendant wishes to rely on a defence, he must raise it and show evidence in support, as Lord Diplock said with regard to mistake in Sweet v Parsley [1970] AC 132. The same can be said about self-defence, provocation, automatism and duress" (Jefferson, 1992, p. 19).

There is a distinction between the evidential and legal burden of proof. The difference may be illustrated by reference to automatism [...]. Before the accused can rely on this defence, he must put forward some evidence that he was acting automatically when he, say, hit his lover over the head with a heavy ashtray. The evidence might consist of a witness’s saying that he saw what happened or a psychiatrist’s drafting a report. In legal terms he has to adduce or lead evidence. If he does not adduce such evidence, his plea will fail at that stage and the prosecution does not have to lead evidence that his plea should not succeed. If he does, the prosecution has to disprove that he was acting automatically. His burden is called the evidential burden or onus of proof. The prosecution’s burden is the legal one. (ibid.).

Burden, legal One of the two kinds of burden of proof, as opposed to evidential burden. In criminal law in England, “before the accused can rely on [a given] defence, he must put forward some evidence [to that effect]. If he does not adduce such evidence, his plea will fail at that stage and the prosecution does not have to lead evidence that his plea should not succeed. If he does, the prosecution has to disprove [what he claimed in his defence]. His burden is called the evidential burden or onus of proof. The prosecution’s burden is the legal one” (Jefferson, 1992, p. 19).

In most areas of the criminal law the prosecution must prove both the actus reus and the mens rea "beyond reasonable doubt". [...] The same principle applies to most defences. The prosecution has, for example, to disprove duress and self-defence. Older cases to the contrary are no longer authoritative. To this principle there are three exceptions.

Insanity For the accused to have this defence he must show that he was insane at the time of the offence. The standard of proof is on the "balance of probabilities". That phrase means in effect that if it is more likely than not that the accused was insane, he has the defence. [...] The legal reason assigned for this exception is that every person is presumed to be sane; [...]. The effect is that if the jurors are not certain either way, the accused does not have this defence.

Parliament expressly placing the burden on the accused [...] Parliament can alter the burden by statute and has done so on several occasions [for given kinds of offence and defendant's defence]. [...] Where Parliament places the burden of proof on the accused, the standard of proof is on the balance of probabilities, unless Parliament states otherwise.

"Exception, exemption, proviso, excuse or qualification" in a statutory offence [... In a case for the possession of morphine,] Lord Ackner held that Parliament could place the burden of proof on the accused either expressly or "by necessary implication". When deciding whether the burden was by implication on the accused, the court had to look not just for the language of the enactment but also at its substance and effect. The practical consequences could also be investigated. [...] On the facts the prosecution had merely to obtain an analyst's report [which is not a burdensome task for the prosecution]. Therefore, the burden remained on the Crown. [...] Where Parliament places the burden of proof on the accused, he bears the legal burden and not just the evidential one. [...] The types of argument utilised in Hunt will be used in later cases to decide whether an exception in a statute places the burden of proof on the accused. Doing so has to be justified and could not be justified simply on the basis of the grammar of the section containing the offence. In deciding whether Parliament intended to place the burden on the accused one should look at the practicalities. If one side would have serious difficulties in proving something, there was an inference that that party did not bear the burden. It was also a factor whether the crime was serious or not. If it was serious, it was more likely than not that the prosecution bore the onus. The burden was not likely to be placed

on the accused, for it ought not easily to be held that Parliament did not intend to protect the innocent. (Jefferson, ibid., pp. 19–22).

CABARET A computer system for argumentation from AI & Law (Rissland & Skalak, 1991). See Section 3.9.1. Blackboard systems (q.v.) have found application in legal computing: "GBB is an expert system shell based on the blackboard paradigm. It provides the blackboard database infrastructure, knowledge source languages and control components needed by a blackboard application. It is used in the construction of the CABARET legal knowledge based system" (Stranieri & Zeleznikow, 2005a, Glossary). Ashley's (1991) HYPO system (which modelled adversarial reasoning with legal precedents) was continued in the CABARET project (Rissland & Skalak, 1991), and the CATO project (Aleven & Ashley, 1997).

CACTUS A piece of software (a simulation system based on a multi-agent architecture) for training police officers in managing public order events, while communicating as they would in a real situation (Hartley & Varley, 2001). See the end of Section 6.1.6.2.

Carneades A computer tool, implemented using a functional programming language and Semantic Web technology, based on a particular formal model of argumentation (Gordon & Walton, 2006). See Section 3.7.

Case-based learning Learning from case studies, in an educational setting (Williams, 1992). It has provided inspiration for case-based reasoning in artificial intelligence. Leake (1996) remarked (citing Williams, 1992): "Although case studies already play a useful role in legal and medical education, students using them generally do not confront the complexity of real episodes and do not have the opportunity to act to execute, evaluate, and revise their solutions".

Case-based reasoning (CBR) A methodology in artificial intelligence that, instead of matching rules from a ruleset to a situation at hand, tries to match it to some entry from a pool of past cases, by calculating how close they are according to various features (e.g., Leake, 1996; Veloso & Aamodt, 1995). This is similar to, and at least in part overlapping with, analogical reasoning (Veloso, 1994). Stranieri and Zeleznikow (2005a) provide this definition:

Case based reasoning is the process of using previous experience to analyse or solve a new problem, explain why previous experiences are or are not similar to the present problem and adapting past solutions to meet the requirements of the present problem.

The contrast between rule-based and case-based intelligent systems from artificial intelligence should not be mistaken for the contrast between such legal jurisdictions that mainly judge based on precedent (which is the case of Anglo-Saxon countries), and such jurisdictions (such as France) where adjudication is mainly based on rules as stated in law as made by legislators. Moreover, the two opposite pairs do not overlap even when either rule-based or case-based reasoning is adopted in intelligent software systems applied to the legal domain. Bain's JUDGE system (Bain, 1986, 1989a, 1989b) is, among other things, a tool whose AI mechanism is case-based reasoning. It adopts a hybrid approach involving both rule based and case based systems. JUDGE is a

cognitive model of judges' decision-making when sentencing (and indeed it was based on interviews with judges). Also see model-based case-based reasoning paradigm.

CaseMap A commercial software tool for organizing the evidence. It is produced by CaseSoft, an American firm (www.casesoft.com). See procedural-support systems, and Section 4.1.1.

Case disposition The manner in which a case is concluded, i.e., in criminal cases, by conviction, by acquittal, or by plea bargain (a predominant mode in the United States), or by the prosecution's decision not to prosecute, or by the alleged victim withdrawing the charges, or by the defendant dying or becoming incapacitated during the trial. In civil cases, case disposition includes: by the court finding for one of the parties, or by settlement out of court, or by alternative dispute resolution (either arbitration, or binding or nonbinding mediation), or by withdrawal of one of the parties (e.g., an employee who cannot afford to pay legal expenses so that s/he could have his or her day in court at an employment tribunal), or by forgiveness and reconciliation.

CATO A computer system for argumentation from AI & Law (Aleven & Ashley, 1997). See Section 3.9.1.

Chances, doctrine of See doctrine of chances.

Character In reference to evidence of disposition and character: "The word 'disposition' is used to denote a tendency to act, think or feel in a particular way. The word 'character' may include disposition, or sometimes mean 'general reputation' or merely the question of whether or not the accused has a criminal record" (Osborne, 1997, p. 313). See character evidence, and see evidence of disposition.

Character vs. action In argumentation: Walton et al. (2008, p. 330) describe argumentation schemes relating an agent's character to an agent's actions. In particular, §31.3, "Abductive Scheme for Argument from Action to Character", is as follows:

Premise: Agent a did something that can be classified as fitting a particular character quality.
Conclusion: Therefore, a has this character quality.
Critical Questions
CQ1: What is the character quality in question?
CQ2: How is the character quality defined?
CQ3: Does the description of the action in question actually fit the definition of the quality?

By contrast, §31.4, "Scheme for Argument from Character to Action (Predictive)" is as follows:

Premise: Agent a has a character quality of a kind that has been defined.
Conclusion: Therefore, if a carries out some action in the future, this action is likely to be classifiable as fitting under that character quality.

Critical Questions
CQ1: What is the character quality in question?
CQ2: How is the character quality defined?
CQ3: Does the description of the action in question actually fit the definition of the quality?

Thus, in both cases the critical questions are the same. Walton et al. (2008, pp. 330–331) added this comment:

Comment: Even though the critical questions are the same for both, the predictive scheme for argument from character to action needs to be distinguished from the retroductive scheme that reasons from character to a particular action, and these two schemes need to be distinguished from the argument from a past action to an agent's character. Their §31.5, 'Retroductive Scheme for Identifying an Agent from a Past Action' (ibid., p. 331) is as follows:

Factual Premise: An observed event appears to have been brought about by some agent a.
Character Premise: The bringing about of this event fits a certain character quality Q.
Agent Trait Premise: a has Q.
Conclusion: a brought about the event in question.
Cf. Walton’s Legal Argumentation and Evidence (2002, p. 44). Douglas Walton devoted a book to character evidence (Walton, 2006b). Character evidence Arguments in favour of or against a party in a trial, based on flattering or unflattering biographical data. According to the jurisdiction, use of such evidence is not always permitted. Evidence of prior convictions is a form of evidence of disposition and character. Another form is uncharged conduct, by which uncharged misconduct is intended: such past misconduct for which no charges were brought. There is much debate about the question of whether evidence of prior convictions has sufficient probative value to be heard by the trier of fact (a jury or a trained judge). At least in some cases (typically, against suspected child molesters), it may be helpful to point toward what may have been the factual proof, to let it be known that the defendant had already been convicted of the same kind of offence. Nevertheless, in the law of evidence in some countries, exclusionary rules about which evidence can be used apply: rules of extrinsic policy give priority to other values (such as the protection of personal rights) over rectitude of decision. It is important to bear in mind that legal truth and factual truth are not identical. Yet, even when the information that the defendant had prior convictions is withheld from a jury, such a policy does not extend to a claim that police officers and law enforcement personnel should forgo the use of prior convictions while carrying out criminal investigations. Such a claim has been made, and goes by the name of jury observation fallacy. Osborne (1997, p. 319) remarks about English law: “The general rule was that the character of any party in a civil case, or any witness in any case, is open to attack. The purpose of such attack is of course to show that the party or witness should not be believed”. Yet “the fundamental rule [...] is that the prosecution may not for the purpose of proving an accused’s guilt adduce evidence

of the character of the defendant whether of previous behaviour, previous con- victions, or general reputation. The reason is obviously the extreme prejudice to the accused in the eyes of the jury. The main exception to this is the use of the ‘similar fact’ principle” (ibid.). See similar fact evidence. Osborne remarks (ibid.):

It has been recognised from the 18th century that an accused could call witnesses to speak to his good character, or cross-examine prosecution witnesses in order to get them to do so. This was exceptional and was intended as an additional protection for an accused, who could not testify before 1898. The important point to note however is that character is indivisible. One cannot assert a good character for one type of behaviour without the prosecution having the right to cross-examine or call evidence about other aspects of one’s character. [...] It [is] not open to an accused to put only half of his character in issue.

The defendant loses his shield (his protection from bad character evidence) if he makes imputations on the character of the prosecutor or of prosecution witnesses, but determining what is an imputation is not easy. See shield, and see imputation. “There were numerous cases in the period 1990–1993 which left unclear what direction [to the jury] the judge should give in respect of a defendant with good character who chooses not to testify” (ibid., p. 320). Should good character evidence affect only credibility, or primarily innocence instead? If it only affects credibility, then as the defendant didn’t testify, good character evidence is of no use. It is useful, instead, if it is admitted by the court that good character is capable of being relevant to innocence. Allen and Pardo (2007a) offered a critique, in terms of the reference-class problem (q.v.), of how probability theory was applied to juridical proof concerning character impeachment evidence in Friedman (1991). Within research into argumentation, see Douglas Walton’s (2007) Character Evidence: An Abductive Theory. Claimant In civil cases in England and Wales, the party that turns to the courts for adjudication against another party. In the Civil Procedure Rules 1993, the term plaintiff was replaced with claimant (thought to be a more transparent, and more widely understood term; the same reform excised other traditional terms as well). Common law In countries like Britain there are both statutory law, i.e., laws passed by parliament, and common law, i.e., the body of decisions handed down by judges, which serve as precedent. On the European Continent, what really matters is statutory law, and the courts must abide by it when adjudicating. Complexity (of a trial), or complex litigation Such features that may push a case beyond a jury’s ability, because a difficult challenge is posed to reasoned decision-making. Tidmarsh (1992) and Hewer and Penrod (1995, p. 531) discuss substantive definitions of trial complexity, i.e., based on the substance of a case (e.g., antitrust, securities and takeover litigation, or commercial disputes, and products liability torts, or sometimes breach of contract cases); procedural definitions of trial complexity (e.g., complexity during the pre-trial phase, complexity during the trial, complexity in the implementation or administration of remedies following the verdict, and complexity arising from the number of

parties); and “laundry list” definitions (based on the number of parties, the number of witnesses, the presence of a class action, the existence of a product liability claim, the presence of related cases involving multiple or complex factual or legal issues, the extent of discovery). Hewer and Penrod (1995, p. 533) recommend the following actions in order to alleviate problems arising from trial complexity according to three dimensions, namely, complex evidence, complex law, and voluminous evidence (for short henceforth: E, L, V): Better organisation of voluminous evidence (E, V); Explain complex legal issues more clearly (L); Limit the volume of evidence (E, V); Limit the time for presentation of evidence (E, V); Stipulate to facts before the trial (E, V); Allow fewer trial interruptions (V); Provide jury with notebooks including pictures and information about witnesses and exhibits (E, V); Allow juror note-taking (E, L, V); Allow jurors to question witnesses (E); Instruct the jurors prior to the evidence (E, L); Provide jurors with written copy of the judge’s instructions (L); More thorough responses to juror questions during deliberations (E, L, V); Specially qualified (blue ribbon) juries (E, L, V); Special masters, i.e., neutral experts to assist jury (E, L, V); Special verdict forms with detailed questions for the jury to answer (L); Judge commenting or summarising of evidence (E, V); Greater reliance on summary judgement (E, L, V); Bifurcation of issues (E, V); Bifurcation of parties (V). We have dealt with complexity in another sense in Section 6.2.1.7: the GOMS (Goals, Operators, Methods, Selections) family of models of cognitive complexity includes the GOMS Keystroke-Level Model (KLM), developed by Kieras (2001), which provides a tractable means of measuring human involvement in an operational process. Composites Composite images of human faces, used for suspect identification. Facial portraits, or mugs, typically are not composites, but rather photographs (mugshots), or else portraits of suspects drawn manually by a sketch artist, based on a verbal description by a victim or eyewitness (Identi-kit procedures). An alternative to mugs and to artist’s sketches is a composite, by which initially a photographic photofit was intended. The term photofit is still in use in the U.S., whereas in the U.K. the more general term composite is preferred. Old computerized systems for composites include E-FIT, PROfit (CD-FIT), and Mac-A-Mug Pro. An advanced tool is CRIME-VUs. See Section 8.2.2. In non-technical language, especially in the media, the differences between the various kinds of pictorial or photographic support for suspect identification tend to be blurred, and also the status of the person sought is imprecise. For example, in Italian the media would refer to fotografie di pregiudicati, i.e., literally, “photographs of ex-cons”, whereas in Israeli Hebrew mug shots are often referred to informally as foto-rétsakh, i.e., literally, “photo-murder”.5 Such descriptors are grossly imprecise. (The Hebrew for “composite” is klasterón.)
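To illustrate the kind of estimate a KLM-style model yields, here is a minimal sketch in Python (the operator timings are commonly cited approximations, and the task decomposition is invented for illustration; it is not taken from Kieras, 2001): the predicted execution time of a routine interface task is obtained by summing the durations of its elementary operators.

```python
# Minimal sketch of a Keystroke-Level Model (KLM) estimate. Operator times
# (in seconds) are commonly cited approximations; the task decomposition below
# is invented purely for illustration.

OPERATOR_TIME = {
    "K": 0.28,  # pressing a key or button (average typist)
    "P": 1.10,  # pointing with a mouse at a target
    "H": 0.40,  # homing the hand between keyboard and mouse
    "M": 1.35,  # mental preparation before a unit of action
}

def klm_estimate(operators):
    """Sum the operator times for a sequence such as 'MHPK...'."""
    return sum(OPERATOR_TIME[op] for op in operators)

# Hypothetical task: think, move hand to mouse, point at a field, click,
# home back to the keyboard, think, then type a six-character case number.
task = "M" + "H" + "P" + "K" + "H" + "M" + "K" * 6
print(f"Predicted execution time: {klm_estimate(task):.2f} s")
```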

5 I received the following (impressionistic) explanation for how the term foto-rétsakh came into being – denoting not only mugshots, but also such passport-sized photographs that are perceived to resemble the police’s mugshots. In the 1940s and 1950s, one method of shooting facial photographs Appendix: Glossary 1041

Such images are sometimes made public, not only in order to track down a person wanted on suspicion of a crime, or some youngster or old and forgetful person who has disappeared. Police may release an artist’s impression of the face of a dead person they want to identify. That is to say, they already have him or her, but are unaware of that person’s identity. The same may apply to a sufferer of amnesia. Compusketch A system for assisting a witness in approximating his or her description of the facial features of a criminal suspect. It is a computerised version of the Photofit process. See Section 8.2.2. Computational forensics Computer techniques subserving any discipline within forensic science. See Frank and Srihari (2008). Computational forensics should not be mistaken for computer forensics (q.v.). Computer crime Crime that exploits the vulnerability of computer systems, and takes place through breaches of computer security. See Section 6.2.1.5. Computer forensics Another name for forensic computing. Also see digital forensics. See Section 6.2.1.5. This is quite different from what is meant by computational forensics (q.v.). Computer investigation Actions undertaken, either by the police or by an organisation that experienced computer crime, in order to identify suspected perpetrators as well as in order to find out how they managed to evade computer security measures. Computer security A discipline that provides a preventative response to computer crime. Confabulation A defect of testimony: the witness is also inferring, not merely reporting. Confabulation in depositions may occur because witnesses discussed their recollections, and this had an effect on what they later think they remember. In particular, if it was two eyewitnesses who saw the same event and then discussed it, this may influence what they later claim to remember; this is sometimes referred to as memory conformity. Confirmationism An approach to questioning witnesses, which seeks to confirm a given account. It is a flawed approach, and Voltaire lampooned it. For confirmation bias as occurring in police interrogation rooms, see, e.g., Kassin et al. (2003), Meissner and Kassin (2002), and Hill et al. (2008). Confrontation right The right of a defendant to confront his accusers in court. Consistency question In argumentation studies, Walton’s (1997) Appeal to Expert Opinion offered (ibid., pp. 211–225) an argumentation scheme for “Argument for Expert Opinion”, then reproduced in Walton et al. (2008, pp. 381–382). See s.v. Expert opinion, Appeal to above. The expert source is E; the subject domain is S;

was to have a man who had to be photographed introduce his head into a hole in a black curtain. The resulting photograph supposedly appeared to portray a murderer. But perhaps this explanation I was given in 2010 is a rationalisation ex post facto, which makes it appear as though it was a passport-sized photograph that was initially called foto-rétsakh. More plausibly, the term denoted mugshots ab initio, and it was only by metaphorical interpretation of the thing denoted (i.e., the signified, as opposed to the signifier, which is the word itself), that the term was also applied by some to passport-sized photographs with a dark backdrop (especially of an old kind), if that. 1042 Appendix: Glossary

and A is a proposition which E claims to be true (or false). The consistency question is: “Is A consistent with what other experts assert?”. It is articulated in two subquestions: “Does A have general acceptance in S?”; “If not, can E explain why not, and give reasons why there is good evidence for A?”. Contrary-to-duty obligations A norm is violated, yet there are norms about how to deal with such a situation of violation. S.v. time in this Glossary, we consider some procedural constraints on temporal sequence at a trial, and how they can be allowed sometimes to be violated. Wishing to model this in terms of AI techniques, it makes sense to resort to techniques concerning contrary-to-duty obligations. Contrary-to-duty obligations are sometimes called reparational obligations, when the concept is conceived (as it often has been in the scholarly work of logicists within research into deontic logic) as a remedial obligation for a state of affairs contravening a previous obligation; see, e.g., Parent (2003). A related example is the one known by philosophers as that of the gentle murderer: one shall not murder, but if he does, let him do it gently; “gentle murder” is also called the Forrester paradox (Forrester, 1984). Research on contrary-to-duty obligations is related to conditional obligations; on the latter, see Chellas (1974), and on the relation between the two classes of obligations, see Tomberlin (1981). Horty (1993) deals with both classes in terms of nonmonotonic deontic logic. For a discussion of contrary-to-duty obligations (or contrary-to-duty imperatives), see e.g. Carmo and Jones (2002), which is an encyclopedic entry, as well as Chisholm (1963), Åqvist (1967), Hage (2001), Carmo and Jones (1996), Prakken and Sergot (1996, 1997), and Governatori and Rotolo (2002). An approach that resorts to Petri nets (a graph representation expressing constraints on temporal precedence) for the representation of deontic states (i.e., states of obligation), including contrary-to-duty obligations, has been proposed in Raskin et al. (1996). By the same team, the paper by van der Torre and Tan (1999) is on contrary-to-duty reasoning. Ursu and Zimmer (2002) are concerned with the representation of duty and contrary-to-duty statements, in computer-aided design tools of the class of critiquing intelligent design assistants. Examples given by Ursu and Zimmer in their section 4, of a secondary (contrary-to-duty) obligation that comes into effect when the primary obligation is violated, include: “Preferred design: uniform wall thickness should be used”, yet: “When unavoidable” – i.e., when walls must have a different thickness – “transition from one wall thickness to another should always be as smooth as possible”. Another example: “There must be an alternative escape route from all parts of the building. However, in the following situations a single route is acceptable”. Convince Me A computer tool for supporting argumentation (Schank & Ranney, 1995). It is one of the tools reviewed in van den Braak et al. (2006). It is based on Thagard’s Theory of Explanatory Coherence (e.g., Thagard, 2000a). The arguments consist of causal networks of nodes (which can display either evidence or hypotheses), and the conclusion which users draw from them. Convince Me predicts the user’s evaluations of the hypotheses based on the arguments produced, and gives feedback about the plausibility of the inferences which the users draw.
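To convey the connectionist style of evaluation that Convince Me inherits from Thagard’s Theory of Explanatory Coherence, here is a minimal, much simplified sketch in Python (the network, weights, and update constants are illustrative assumptions, not the published ECHO parameters): hypothesis and evidence units are connected by excitatory (explanatory) or inhibitory (contradictory) links, and activations are updated until they settle, so that the hypothesis cohering best with the evidence ends up with the highest activation.

```python
# Minimal, much-simplified sketch of explanatory-coherence settling, loosely in
# the style of Thagard's ECHO / Convince Me. Units, weights and constants are
# illustrative assumptions, not the published parameter values.

units = {"E1": 1.0, "E2": 1.0,    # evidence units, clamped at full activation
         "H1": 0.01, "H2": 0.01}  # two competing hypotheses
clamped = {"E1", "E2"}

links = {                                  # symmetric coherence links:
    ("H1", "E1"): 0.3, ("H1", "E2"): 0.3,  # H1 explains both items of evidence
    ("H2", "E1"): 0.3,                     # H2 explains only one of them
    ("H1", "H2"): -0.45,                   # the two hypotheses contradict each other
}

def weight(a, b):
    return links.get((a, b), links.get((b, a), 0.0))

DECAY, FLOOR, CEIL = 0.05, -1.0, 1.0

for _ in range(200):                       # update activations until they settle
    new = {}
    for u, act in units.items():
        if u in clamped:
            new[u] = act
            continue
        net = sum(weight(u, v) * a for v, a in units.items() if v != u)
        if net > 0:
            new[u] = act * (1 - DECAY) + net * (CEIL - act)
        else:
            new[u] = act * (1 - DECAY) + net * (act - FLOOR)
        new[u] = max(FLOOR, min(CEIL, new[u]))
    units = new

# H1, which explains more of the evidence, settles high; H2 is driven down.
print({u: round(a, 2) for u, a in units.items()})
```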

Coplink A tool for criminal intelligence analysis, developed for the Tucson police at the University of Arizona, and performing network link analysis. See Section 6.2.5. Corpus (plural: corpora) A collection of documents. For example, corpora are what information retrieval and text mining tools search automatically. See Section 6.1.9. Corroborative evidence “Corroborative evidence is that which independently tends to support or confirm other evidence” (Osborne, 1997, p. 303). “The general rule [in England] has always been that evidence does not require corroboration and that the court may act on the uncorroborated evidence of one witness alone, however serious the charge” (ibid.). Still in English law: Until 1995, however, there were individual classes of cases where the type of evidence, or the type of witness, were deemed inherently “suspect” in some way so as to require extra caution from a court before it considered its verdict. The law on corroboration evolved in a haphazard and piecemeal way and was burdened with difficult technicalities. Classically, three kinds of witness were thought to be sufficiently suspect to require corroboration of their evidence before there could be a conviction, namely children, accomplices, and victims of a sexual offence. There were, in addition, rules which indicated that corroboration should generally be looked for in any case where a witness might have some personal motive for wishing to secure the conviction of the accused, for example someone who had a grudge against the accused, or who might himself have fallen under suspicion of the crime in question. In these cases, a judge would have to remind the jury, in very technical terms, of the risk of convicting on the evidence of the “suspect” witness and then go on to describe what items of evidence could have the technical quality of corroboration on the particular facts of the case. Judges were notoriously prone to get aspects of corroboration wrong, either by directing the jury with insufficient force about the risks of acting without corroboration, or by misidentifying items of evidence in the case which they might say were technically capable of amounting to corroboration but which in fact lacked the necessary quality (ibid.). Reform abolished the requirement for corroboration of evidence from given categories of witnesses, and also the requirement for the judge to give a warning to the jury ceased to be mandatory and became discretionary (ibid., pp. 304–305). Criminal trial (as opposed to civil trial). In Anglo-American jurisdictions, the sequence is as follows. Initially, there is the indictment. Then, the accused is asked to plead guilty or not guilty. If the defendant pleads guilty (which typically is because of a plea bargain), the court hears the facts from the prosecution (with no need to present evidence), then the defence may intervene, and finally the sentence is given. If the defendant pleads not guilty, the case will have to be prosecuted. There is an adjournment to an agreed date. Then the adjourned hearing takes place, following adversarial lines (as typical of the common law system of Anglo-American jurisdictions). There is the prosecution’s opening speech. Then the prosecution calls witnesses. For each prosecution witness, there is an examination in chief of the witness on the part of the prosecution, followed by cross-examination of the witness by the defence, and sometimes there is re-examination on the part of the prosecution.
Then there is the close of the prosecution case. (Now the defence may submit that there is no case to answer. If the court accepts this, then the defendant is discharged. Otherwise:) Defence calls

witnesses. For each one of the defence witnesses, there is an examination in chief of the witness by the defence, then cross-examination on the part of the prosecution, and sometimes re-examination by the defence. Then there is the defence’s closing speech to the bench. Now the prosecution may have one more speech, but if this is the case, then the defence must have the last word. Now the factfinders (either lay magistrates, i.e., jurors, or one or more stipendiary magistrates, i.e., trained, professional judges) retire to consider their decision. (If there is a jury, the jury receives instructions from the judge before it retires.) Then the magistrates return and give the verdict (and state no reason). If the verdict is not guilty, then the defendant is discharged. If the verdict is guilty, then the court hears the facts from the prosecution (with no need to present evidence), and next the defence may intervene. Finally, the sentence is given. See utility, and beyond a reasonable doubt. CRIME-VUs A project which produced EvoFIT (under the lead of Charlie Frowd), a computer graphic tool for suspect identification, and validated it with techniques from . The project was conducted at the University of Central Lancashire in Preston, and the Faces Lab of the University of Stirling, Scotland. The approach combines facial composites, sketches, and morphing between facial composites. See Section 8.2.2.4. Cross-examination Questioning of one of the parties, or of a witness called by one of the parties at a trial, by the other party’s lawyer. See examination. During cross-examination, questions are not always direct. Implication and innuendo are often effective. By accumulating details, it is in their final statement to the court (also called final argument, or more often closing arguments) that lawyers propose an account that puts facts in relation to each other, make characterisations, and draw conclusions. CSI Crime scene investigation. Daedalus A tool for supporting the activities of the sostituti procuratori (examining magistrates and then prosecutors) in the Italian judiciary. Developed by Carmelo Àsaro. A related tool is Itaca. See procedural-support systems, and Section 4.1.3. DART A tool for supporting argumentation (Freeman & Farley, 1996), which was applied to legal situations also by Gulotta and Zappalà (2001). See Section 3.7. Dead Bodies Project A project (Zeleznikow & Keppens, 2002, 2003; Keppens & Schafer, 2003a, 2004, cf. 2005, 2006) intended to help at inquests aiming at ascertaining the causes of death, when prima facie a crime cannot be ruled out. See Section 8.1. Decision tree A conditional structure of flow control. Decision trees (as well as IF/THEN rules) are automatically extracted from databases by machine-learning tools. Several machine-learning commercial products that primarily produce decision trees were described by Mena (2003, section 7.9, pp. 221–229):

• AC (http://www.alice-soft.com)
• Attar XperRule (http://www.attar.com/)
• Business Miner (http://www.businessobjects.com)

• C5.0 (http://www.rulequest.com/), also a rule-extractor; for especially large databases (its algorithm is also used in SPSS’s Clementine)
• CART (http://www.salford-systems.com), also a rule-extractor; very powerful and accurate, but relatively slow, and for numeric data only
• Cognos Scenario6
• Neurosciences aXi Decision Tree7
• SPSS Answer Trees8
• as well as several free decision tree software tools.9
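To illustrate the kind of model such products induce, here is a minimal sketch (assuming the open-source scikit-learn library is available; the feature names and toy data are invented for illustration): a decision tree is fitted to a small table of past records and its learned splits are printed in an IF/THEN-like form.

```python
# Minimal sketch of decision-tree induction, assuming scikit-learn is installed.
# The feature names and toy data are invented; the commercial products listed
# above work on far larger databases.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["prior_convictions", "weapon_used", "value_stolen"]
X = [[0, 0, 100], [3, 1, 5000], [1, 0, 250], [4, 1, 12000], [0, 1, 800], [2, 0, 60]]
y = ["non-custodial", "custodial", "non-custodial", "custodial", "custodial", "non-custodial"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the induced splits as a readable, rule-like structure.
print(export_text(tree, feature_names=features))

# Classify a new, unseen record.
print(tree.predict([[1, 1, 3000]]))
```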

Defeasibility Carbogim et al. (2000) presented a comprehensive survey of defeasible argumentation. In this book, we have dealt with defeasibility in Sections 3.3 and 3.9.1. “Nonmonotonic reasoning [q.v.], because conclusions must sometimes be reconsidered, is called defeasible; that is, new information may sometimes invalidate previous results. Representation and search procedures that keep track of the reasoning steps of a logic system are called truth maintenance systems or TMS. In defeasible reasoning, the TMS preserves the consistency of the knowledge base, keeping track of conclusions that might later need be questioned” (Luger & Stubblefield, 1998, p. 270). Defence In court, and previously during the preparations for the trial, the formal standing (and actions taken in that role) of the accused (the defendant) and of his lawyers on his behalf (but he may be representing himself, without resorting to a lawyer). Defendant The party against whom the plaintiff (who in particular may be the prosecution) turns to the courts for adjudication. In some kinds of trial, the names are applicant for the plaintiff, and respondent for the defendant. Dempster-Shafer theory In statistics and in artificial intelligence: “Dempster-Shafer theory [Shafer, 1976] has been developed to handle partially specified domains. It distinguishes between uncertainty and ignorance by creating belief functions. Belief functions allow the user to bound the assignment of probabilities to certain events, rather than give events specific probabilities. Belief functions satisfy axioms that are weaker than those for probability theory. When the probabilistic values of the beliefs that a certain event occurred are exact, then the belief value is exactly the probability that the event occurred. In this case, Dempster-Shafer theory and probability theory provide the same conclusions” (Stranieri & Zeleznikow, 2005a). Deontic, deontology Pertaining to duty and permissibility. Deontic logic has operators for duty. Deontological arguments appeal to principles of right or wrong, ultimate (rather than teleological) principles about what must or ought, or must not or ought not, to be or be done.
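Dempster’s rule of combination, the core operation of Dempster-Shafer theory, can be conveyed with a minimal sketch in Python (the frame of discernment and the two mass functions are invented for illustration): each source of evidence assigns mass to subsets of the hypotheses, and the rule combines the two assignments, renormalising by the mass that falls on contradictory intersections.

```python
# Minimal sketch of Dempster's rule of combination. The frame of discernment
# and the two bodies of evidence are invented for illustration.

def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass committed to contradictory sets
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Frame of discernment: the suspect is Guilty (G) or Not guilty (N).
G, N, EITHER = frozenset({"G"}), frozenset({"N"}), frozenset({"G", "N"})

witness  = {G: 0.6, EITHER: 0.4}          # supports guilt, 0.4 left uncommitted
forensic = {G: 0.3, N: 0.2, EITHER: 0.5}  # weaker, partly conflicting evidence

for subset, mass in combine(witness, forensic).items():
    print(set(subset), round(mass, 3))
```

Note how the uncommitted mass assigned to the whole frame ({G, N}) is what distinguishes ignorance from a probability assignment; when no mass is left uncommitted, the result coincides with ordinary probabilistic conditioning.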

6 http://www.cognos.com/products/scenario/index.html 7 http://www.neurosciences.com 8 http://www.spss.com/spssbi/answertree/ 9 Such free software tools are linked to from the data mining portal www.kdnuggets.com 1046 Appendix: Glossary

Deontic logic A modal logic of obligation and permission. Established in the 1940s. It was especially prominent in AI & Law research from the 1970s. See, e.g., Nissan (2008a), Åqvist (1984, 1986), Jones and Sergot (1992). Also see contrary-to-duty obligations. Digital anti-forensics Strategies to evade computer forensic investigations, as well as ways to exploit critical failures in computer forensics software or in the reliability of computer security systems. See Section 6.2.1.5. Digital forensics A discipline that provides techniques and strategies for tackling crime involving digital media. Digital forgeries Forged items involving digital media, such as images. They are to be detected by digital forensics, or in particular digital image forensics. See Section 8.2.5. Digital image forensics See image forensics. Discretion The faculty of making a choice, rather than being compulsorily directed. In the legal context, there is, e.g., prosecutorial discretion (q.v.). But there also is judicial discretion. Duke University’s George Christie began ‘An Essay on Discretion’ (Christie, 1986) by stating (ibid., p. 747):

Few terms have as important a place in legal discourse as “discretion”. Despite the importance of the term, however, those who use it do not agree on its meaning. It is universally accepted that discretion has something to do with choice; beyond this, the consensus breaks down. If there is little agreement about the meaning of discretion, there is even less agree- ment about its desirability. Indeed, participants in the judicial process and observers of that process take a schizophrenic view of discretion. Sometimes they praise it and sometimes they execrate it.

In particular, there is a distinction between primary discretion and secondary discretion (ibid., pp. 747–748):

In the judicial context, [Maurice] Rosenberg distinguishes between primary discretion and secondary discretion. Primary discretion arises when a decision-maker has “a wide range of choice as to what he decides, free from the constraints which characteristically attach whenever legal rules enter the decision process.” Used in this sense, discretion can mean simply that a person has the authority to decide. Courts, judges, and legal scholars often use the term discretion in this sense, referring simply to authority to decide, or unconstrained choice.

That is Rosenberg’s primary discretion. Moreover (Christie, ibid., pp. 748–749; the brackets are Christie’s):

Rosenberg contrasts the primary form of discretion with “the secondary form, [which] has to do with hierarchical relations among judges.” The secondary form of discretion enters the picture when the system tries to pre- scribe the degree of finality and authority a lower court’s decision enjoys in the higher courts. Specifically, it comes into full play when the rules of review accord the lower court’s decision an unusual amount of insulation from appellate revision. In this sense, discretion is a review-restraining concept. It gives the trial judge a right to be wrong without incurring reversal. Appendix: Glossary 1047

In the limiting case, the choice made by a person exercising primary discretion is by definition the correct choice. The correctness of the choice cannot be attacked because there are no external criteria on which to base such an attack. When secondary discretion is involved, one can attack the correctness of the choice, although the authority of the person to make that choice cannot be attacked. Thus secondary discretion involves the authority to make the wrong decision. Christie mentions two examples Rosenberg gave, from college football, and notes: “In both cases, everyone agreed that the officials were clearly wrong; but, in both instances, no redress for those errors was possible” (ibid., p. 749). Rosenberg however was “concerned with the effect of secondary discretion on appellate courts’ treatment of certain contested rulings of trial courts, partic- ularly procedural rulings such as denials of motions for new trials” (Christie, ibid.). However, as “in any hierarchically organized bureaucracy, there are limits to the amount of perverseness that superiors are prepared to tolerate in their sub- ordinates” (ibid.), practically “Rosenberg’s secondary discretion – the authority to make wrong decisions – usually boils down to the authority to make deci- sions to which reviewing authorities will accord a presumption of correctness.” (ibid.) Nevertheless: “The reviewing authority will intervene only if the initial decisionmaker abused his discretion” (ibid.). Christie conceded (ibid., fn 12): Behind this linguistic formula, of course, lie the difficult questions: How perverse must the initial decision be before it will be said to be an abuse of discretion? And are there any objective criteria for deciding degrees of abuse? Christie then proceeded to remark about when the two kinds of discretion merge, and what the difference is (ibid., 749–750): A cynic might contend, however, that Rosenberg’s notion of secondary discretion merges with what he calls primary discretion when an inferior is given the authority to make wrong choices that cannot be overturned. There is no practical difference between the authority to make whatever decision one chooses and the authority to make decisions that will be enforced even if they are felt to be wrong. Indeed, primary and secondary discretion do sometimes seem to merge at the edges, but one clear distinction exists – different types of criticism can be leveled at decisions made under different types of discretion. Also consider strong discretion: “According to Dworkin, strong discretion char- acterises those decisions where the decision-maker is not bound by any standards and is required to create his or her own standards” (Stranieri & Zeleznikow, 2005a, Glossary). Stranieri and Zeleznikow (2005a) also explained: [Dworkin 1977] presents a systematic account of discretion by proposing two basic types of discretion, that he called strong discretion and weak discretion. Weak discretion describes situations where a decision-maker must interpret standards in her own way, whereas strong discretion characterises those decisions where the decision-maker is not bound by any standards and is required to create his or her own standards. [MacCormick, 1981] does not dispute this conceptualisation but contends that Dworkin’s distinction between typologies is one of degree and not of type. Discretionary As opposed to mandatory. In particular, as applied to judicial decision-making: what is up to the judge to decide, unfettered by mandatory 1048 Appendix: Glossary

rules. See Section 4.2.5. However, see the entry for discretion above. Kannai, Schild, and Zeleznikow (2007) offer an artificial intelligence perspective on legal discretion. Meikle and Yearwood (2000) are concerned with the provi- sion of support for the exercise of discretion, and how the need to avoid the risk of adversely affecting it when using a computer tool, inspired the structural design of EMBRACE, a decision support system for Australia’s Refugee Review Tribunal. Leith (1998) has warned about the risks, with AI applications to law, that judicial discretion be restricted, if computer tools come to be involved in the judicial decision-making process. Meikle and Yearwood (2000) classify legal decision-making in four quad- rants, according to two operational dimensions: “One dimension is the extent to which a system should either be an “outcome predictor” (a highly convergent aim) or should give access to diverse resources about the issues of interest (a highly divergent aim). This is the predictive–descriptive dimension. The other is the extent to which a system either needs to support discretion (by permitting complete autonomy, perhaps because the domain has no constraints) or needs to support weak discretion (by permitting only that allowable within prescribed con- straints). This is the strong–weak discretion dimension” (Meikle & Yearwood, 2000, p. 101). It was proposed that EMBRACE, as well as Bench-Capon’s PLAID (Bench- Capon & Staniford, 1995), may be placed in the quadrant characterised by strong discretion and descriptiveness (instead of predicted outcome, which when there is strong discretion lets the user override the prediction either partly or altogether). We argue that the evolution of Daedalus is from weak to strong discretion (pro- vided that validation steps are safeguarded), and that the approach is descriptive, whereas predictiveness is avoided out of a concern to ensure fairness to the sus- pects. Also see Lara-Rosano and del Socorro Téllez-Silva’s article (2003) on fuzzy10 support systems for discretionary judicial decision making. Disposition “The word ‘disposition’ is used to denote a tendency to act think or feel in a particular way. The word ‘character’ may include disposition, or sometimes mean ‘general reputation’ or merely the question of whether or not the accused has a criminal record” (Osborne, 1997, p. 313). See evidence of disposition. DNA evidence An important use (yet not the only use) of such evidence is for the purposes of identifying perpetrators. See Section 8.7.2. Dock identification “An old practice, now disapproved [in England], is the so-called ‘dock identification’ [...] where a witness is asked if the man seen at the scene of the crime is present in court. There will clearly be a tendency to look at the man in the dock and pick him just because he is there in that position” (Osborne, 1997, p. 305). “Obviously there is no theoretical objection to a witness testifying that the accused is the man he saw commit the crime. This is direct evidence by a first-hand observer. [Yet], where this happens (i.e. where the witness sees the accused at the trial for the first time after the offence) it is known as a ‘dock

10 Fuzzy logic is the subject of Section 6.1.15 in this book. Appendix: Glossary 1049

identification’ and is frowned on except in exceptional circumstances” (ibid.). A remedy is identification parades, for which, see Section 4.5.2.3. Doctrine of chances The odds that a new event is just a coincidence, in view of similar past events. In legal scholarship, it is discussed along with uncharged conduct as being a kind of character evidence, and in particular with the somewhat different concept of similar fact evidence (see s.vv.). Sometimes an expert witness would err clamorously with the statistics: “Forensics and expert witness investigations currently have a high profile in the media, principally in the field related to medical practitioners. The discrediting of evidence provided in the notorious Sally Clark case by Professor Sir Roy Meadows, has made many ‘experts’, not only medical, but also those operating in different professions, stop and take stock of how we undertake expert witness work, be it within the realm of Civil or Criminal Law” (Smith, 2006). In Britain, Sally Clark, a lawyer by profession, lost her apparently healthy firstborn son, aged 2½ months, to sudden death in December 1996, and then, in January 1998, also her second baby, aged two months, in similar circumstances. She was convicted in November 1999 (by a 10–2 majority) of smothering her two babies; she was sentenced to prison for life, and spent years in prison after her second baby died in circumstances similar to the death of her firstborn son. Meadows had claimed that there was only one chance in 73 million for this to be a coincidence (no witness qualified in statistics was in court); in so claiming, the Leeds professor was shockingly inaccurate, as the death of siblings is not necessarily statistically independent. Eventually, the mother was released and the expert witness disgraced. After a while, he was reintegrated in the medical profession, and shortly afterwards Mrs. Clark died in her early forties. (In another British case, a young woman was able to refute a similar charge, by bringing evidence that her maternal lineage, traced back to India, had a history of infant death.) A prominent forensic statistician, Philip Dawid, has discussed the Sally Clark case ([2003] EWCA Crim 1020) in Dawid (2004b), which is highly readable even for those with little mathematical background. A more technical paper is Dawid (2001b). Also see Dawid (2005b, Sec. 4.3 and in particular, Sec. 4.3.1). Already in January 2000, Stephen Watkins’ editorial, ‘Conviction by Mathematical Error?’, was published in the British Medical Journal. Dawid finds that both Meadows’ and Watkins’ calculations were flawed. He was an expert witness in statistics for the defence at the appeal hearing. Arguably the expert witness’s testimony against Sally Clark had been influenced by a case in the United States, in which several babies of the same mother had been believed to have died of cot death, yet she was eventually convicted of having killed them herself. The following are quoted from Nissan (2001c, section 3: ‘The Doctrine of Chances & Uncharged Conduct’); at the time, I was kindly referred to these by Peter Tillers (p.c., 9 Feb. 2000). In the New York Times, George Judson (1995) reported from Owego, N.Y., that a defendant, who was a “48-year-old woman accused of smothering her five infant children a quarter of a century ago, was convicted of their deaths today in Tioga County Court. In 1972, a leading medical journal cited the deaths of two infants from rural New York, ‘MH’ and

‘NH’, as compelling evidence that Sudden Infant Death Syndrome ran in fam- ilies. Today, a jury found that the babies [...] were murdered by their mother, as were two brothers and a sister before them”. The defendant “had confessed to state troopers last year that she had smothered her babies”, “a chilling and detailed confession”, yet according to her she “had testified that she made the confession only to end hours of questioning by state troopers, saying that her children had simply stopped breathing, sometimes even as she fed them” (ibid.). According to the confession, the babies crying spells were the trigger; in con- trast, she “suggested in her confession”, the boy she and her husband adopted afterwards is alive as “unlike the five others, he had survived his crying spells because his father was out of work and at home during his infancy, and she had not been left to cope with the child alone”. The five murder verdicts are of mur- der by depraved indifference (i.e., without the conscious intention to kill). The Hoyt case from upstate New York “was striking [...] also for the family’s place at the center of research” which at the time was prominent in promoting a medi- cal theory on cot deaths (ibid.). “But to a forensic pathologist in Dallas, [...]the death of five children in one family from SIDS was statistically impossible, and she believed that [the aforementioned] research was leading pediatricians to dis- regard danger signs within some families” (ibid.). Benderly (1997) approaches the effect of the Hoyt diagnosis of old and recent multiple murder verdicts from the viewpoint of scientific error and its effects on subsequent research. Williams (1996), referring to the Hoyt case, pointed out: “Criminal defense lawyers know how difficult it is to overcome a confession in a criminal trial, for juries find it hard to fathom why anyone would falsely implicate oneself”. “Confessions are usually used as ground truth but are not 100 per cent reliable” (Vrij, 1998a, p. 89); even “people considered as guilty by virtue of a confession may actually be innocent, as some innocent people do confess” (ibid.). Prof. Tillers also kindly referred me (p.c., 9 Feb. 2000) to the “famous case ‘Brides in the Bath’”: “Rex v. Smith, 11 Cr. App. R. 229, 84 L.J.K.B.11 2153 (1915) (husband perhaps drowned a number of wives to recover insurance pro- ceeds; at first sight the drownings were accidental but...)”. Prof. Tillers also referred me to the news from the New York Times of Sunday, March 19, 1995; in the words of the report – from Hot Sulphur Springs, Colorado – “A woman whose 11 marriages earned her the nickname the Black Widow was convicted on Friday of torturing and killing her ninth husband” (NYT, 1995). This particular husband had “hired a private investigator when he began to suspect that she was lying about how many times she had been married”, and had intended to sue her for fraud and emotional distress. She was divorced from all previous husbands, except the eighth (her marriage to the ninth was annulled for that very reason), and “except for an elderly man who died of natural causes”, and this includes her having divorced (twice) from “the lawyer who helped her avoid questioning in the 1972 shooting death of her third husband”. In closing arguments, defense

11 L.J.K.B. stands for Law Journal Reports, Kings Bench. Appendix: Glossary 1051

lawyers denied there was any physical evidence to dismiss the alibi of the two defendants (the woman and her boyfriend, also convicted, had claimed they had been away, camping). As to the admissibility of character evidence, it is remark- able that the two defendants were convicted even though “[t]estimony about her previous marriages was not allowed during her trial” (ibid.). Moreover, Prof. Peter Tillers kindly sent me an article by a professor of Law from the University of California at Davis, Edward Imwinkelried (1990), a paper which “has an extensive discussion of the American view of the ‘doctrine of chances’”. Imwinkelried’s paper, “The use of evidence of an accused’s uncharged misconduct to prove mens rea: the doctrines that threaten to engulf the character evidence prohibition”, states: “The admissibility of uncharged misconduct evi- dence is the single most important issue in contemporary criminal evidence law. The issue has figured importantly in several of the most celebrated criminal trials of our time”. The introduction starts by describing a hypothetical case in which: “The accused is charged with homicide. The indictment alleges that the accused committed the murder in early 1990. During the government’s case-in-chief at trial, the prosecutor calls a witness. The witness begins describing a killing that the accused supposedly committed in 1989. The defense strenuously objects that the witness’s testimony is ‘nothing more than blatantly inadmissible evidence of the accused’s general bad character’. However, at sidebar the prosecutor makes an offer of proof that the 1989 killing was perpetrated with ‘exactly the same modus operandi as the 1990 murder’. Given this state of the record, how should the trial judge rule on the defense objection?” (Imwinkelried, ibid.). Federal Rule of Evidence 404(b), “which is virtually identical to Military Rule 404(b)” (the paper was published in the American Military Law Review), “forbids the judge from admitting the evidence as circumstantial proof of the accused’s conduct on the alleged occasion in 1990. [...] Thus, the prosecutor cannot offer the witness’s testimony about the 1989 incident to prove the accused’s dispo- sition toward murder and, in turn, use the accused’s antisocial disposition as evidence that the accused committed the alleged 1990 murder”. Yet, the judge is permitted “to admit the evidence when it is relevant on a noncharacter the- ory”, as “uncharged misconduct evidence ‘may, however, be admissible for other purposes, such as proof of motive, opportunity, intent, preparation, plan, knowl- edge, identity, or absence of mistake or accident’. In our hypothetical case, the trial judge could allow the prosecutor to introduce the 1989 incident to establish the accused’s identity as the perpetrator of the 1990 killing. If the two killings were committed with the identical, unique modus operandi, the uncharged inci- dent is logically relevant to prove the accused’s identity as the perpetrator of the charged crime without relying on a forbidden character inference. Hence, the judge could properly admit the testimony with a limiting instruction identifying the permissible and impermissible uses of the evidence” (Imwinkelried, ibid.). “Unless the judge clearly explains the law governing stipulations, a juror might suspect that any accused who knew enough about the crime to stipulate to the mens rea must have been involved personally in the crime. [...] 
When the question is the existence of the mens rea, the prosecutor ordinarily has a much 1052 Appendix: Glossary

more compelling need to resort to probative uncharged misconduct evidence. [...] The character evidence prohibition is violated when we permit a prosecutor to rely on the theory depicted in [Imwinkelried’s] Figure 2 to justify the admissibility for uncharged misconduct evidence. [...] The courts should admit uncharged misconduct evidence under the doctrine to prove mens rea only when the prosecutor can make persuasive showings that each uncharged incident is similar to the charged offense and that the accused has been involved in such incidents more frequently than the typical person. [...]” (Imwinkelried, ibid., quoted the way it is excerpted in the summary). Imwinkelried’s (1990) stated purpose in his paper “is to describe and critique [...] two lines of authority. The first section of the article discusses one line, namely, the case law advancing the proposition that the first sentence in Rule 404(b) [namely: ‘Evidence of other crimes, wrongs, or acts is not admissible to prove the character of a person in order to show action in conformity therewith’] is automatically inapplicable whenever the prosecutor offers uncharged misconduct to support an ultimate inference of mental intent rather than physical conduct. The next section of the article analyses the second line of authority. That line includes the decisions urging that under the doctrine of objective chances, the prosecutor routinely can offer uncharged misconduct on a non-character theory to prove intent. Both lines of authority are spurious, and both represent grave threats to the continued viability of the character evidence prohibition”. Double-counting the evidence If the same item of evidence is then used again to make the evidence weightier, this is an example of double-counting (Robertson & Vignaux, 1995, section 6.2, p. 95): Each piece of evidence must be considered only once in relation to each issue, otherwise its effect is unjustifiably doubled. However, this does not mean that once an item of evidence has been used by one decision-maker for one purpose it cannot be used by another decision-maker for another purpose. Thus, the fact that the police have used an item of evidence to identify a suspect does not mean that the court cannot use it to determine guilt. If a defendant is treated as though his or her guilt were likelier, for the very fact that this suspect is being tried, this is an example of the evidence being double-counted (ibid.): Of course, the court must not use the fact that the accused is in the dock as evidence of guilt and then also consider the evidence produced, since to do so would be to double-count the evidence which led to the arrest and which is also used in court. Wigmore cautioned jurors “to put away from their minds all the suspicion that arises from the arrest, the indictment and the arraignment”. This does not mean that the evidence on which the arrest was made, or on which this suspect was identified and retained in the first place, should be given a lesser weight, in order to compensate (ibid.): Fear of double-counting evidence has misled some about the weight of the evidence which caused the suspect to come under suspicion. A man might be stopped in the street because he is wearing a bloodstained shirt and we are now considering the value of the evidence of the shirt. It has been suggested that because this was the reason

for selecting this particular suspect we should change the way the evidence should be thought about, that it is less useful than if the suspect was arrested on the basis of other evidence. This is not correct. The power of the evidence is still determined by the ratio of the two probabilities of the accused having a bloodstained shirt if guilty and if not guilty. It is just that there happens to be less evidence in one case than the other. When the suspect is stopped because of a bloodstained shirt there may be no other evidence. When the suspect is arrested on the basis of other evidence and then found to have a bloodstained shirt, the likelihood ratio for the bloodstained shirt is to be combined with a prior which has already been raised by the other evidence. Once again the power of an item of evidence is being confused with the strength of the evidence as a whole.
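Robertson and Vignaux’s point can be restated numerically with a minimal sketch in Python (all the figures are invented for illustration, and are not taken from any real case): the likelihood ratio contributed by the bloodstained shirt is the same whether or not other evidence has already raised the prior odds; what differs is only the prior odds to which it is applied, and double-counting would mean applying the same item’s likelihood ratio twice.

```python
# Minimal sketch of Bayesian updating with likelihood ratios. All numbers are
# invented for illustration; they are not taken from any real case.

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by the likelihood ratio of each independent item."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Likelihood ratio of the bloodstained shirt:
# P(shirt | guilty) / P(shirt | not guilty).
LR_SHIRT = 0.9 / 0.01   # = 90, the same figure in both scenarios below

# Scenario 1: the shirt is the only evidence, applied to a low prior.
print(posterior_odds(1 / 1000, [LR_SHIRT]))       # 0.09 -- still improbable

# Scenario 2: other evidence (LR = 50) has already raised the odds; the
# shirt's own likelihood ratio is unchanged, it is not discounted.
print(posterior_odds(1 / 1000, [50, LR_SHIRT]))   # 4.5 -- now favouring guilt

# Double-counting would mean applying the same item twice, e.g.
# posterior_odds(1 / 1000, [LR_SHIRT, LR_SHIRT]) -- an error to avoid.
```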

Doxastic Of or pertaining to belief. Doxastic attitude An attitude of holding some belief. Doxastic logic A modal logic of belief. In doxastic logic (or logic of belief), belief is treated as a modal operator. A doxastic logic uses Bx to mean “It is believed that x is the case.” Raymond Smullyan (1986) defined several types of reasoners.12 An accurate reasoner never believes any false proposition (modal axiom T). An inaccurate reasoner believes at least one false proposition. A conceited reasoner believes his or her beliefs are never inaccurate. A conceited reasoner will necessarily lapse into an inaccuracy. A consistent reasoner never simultaneously believes a proposition and its negation (modal axiom D). A normal reasoner is one who, while believing p, also believes he or she believes p (modal axiom 4). A peculiar reasoner believes proposition p while also believing he or she does not believe p. A peculiar reasoner is necessarily inaccurate but not necessarily inconsistent. It can be shown that a conceited reasoner is peculiar. The converse of a normal reasoner is a stable reasoner. If a stable reasoner ever believes that he or she believes p, then he or she really believes p. A modest reasoner never believes Bp→p (that is, that believing p entails that p is true), unless he or she believes p. A timid reasoner does not believe p (being afraid, as it were, to believe p) if he or she believes that believing p entails believing something false. A queer reasoner is of type G and believes he or she is inconsistent, but is wrong in this belief. A type G reasoner is a reasoner of type 4 (see below) who believes he or she is modest. According to Löb’s Theorem, any reflexive reasoner of type 4 is modest. If a consistent reflexive reasoner of type 4 believes that he or she is stable, then he or she will become unstable: he or she will become inconsistent. A reasoner is of type 4 if he or she is of type 3 and also believes he or she is normal. A reasoner is of type 3 if he or she is a normal reasoner of type 2. A reasoner is of type 2 if he or she is of type 1, and if for every p and q he or she (correctly) believes: “If I should ever believe both p and p→q (p implies q), then I will believe q.” A type 1 reasoner has a complete knowledge of propositional logic: he or she sooner or later believes every tautology, i.e., any proposition provable by truth

12 This was summarised in http://en.wikipedia.org/wiki/Doxastic_logic (to which we are indebted for the concepts in this entry). 1054 Appendix: Glossary

tables (modal axiom N). If a type 1 reasoner ever believes p and believes p→q (p implies q) then he or she will eventually believe q (modal axiom K). A type 1∗ reasoner is somewhat more self-aware than a type 1 reasoner. In fact, a type 1∗ reasoner believes all tautologies; his or her set of beliefs (past, present and future) is logically closed under modus ponens,13 and for any propositions p and q, if he or she believes p→q, then he or she will believe that if he or she believes p then he or she will believe q. Dynamic uncertain inference Snow and Belis (2002) analysed “a celebrated French murder investigation” (p. 397), namely, the case in which Omar Raddad was convicted in Nice, in 1994, and then pardoned, the conviction being very controversial (the victim’s body was found with a sentence accusing Raddad scrawled nearby on the floor in the victim’s blood). Snow and Belis (2002) “apply ideas about credibility judgments structured by graphs to the problem of dynamic uncertain inference. By dynamic, we mean that assessments of credibility change over time without foreknowledge as to the types of evidence that might be seen or the arguments that the [crime] analyst might entertain over time” (ibid., p. 397), in contrast with such “kind of belief change that occurs” when the possible outcomes of experiments “are typically known before one learns the actual outcomes” (ibid., pp. 397–398). ECHO A computer tool, based on artificial neural networks, for abductive reasoning, developed by Paul Thagard and first applied to the modelling of reasoning on the evidence in a criminal case in Thagard (1989). See Section 2.2.1. EMBRACE A decision support system for Australia’s Refugee Review Tribunal. Entanglement A concept expressing an undercutting move in argumentation, in Verheij’s (1999, 2003) ArguMed computer tool for visualising arguments (see Section 3.7). It was described by Verheij (1999, 2003). In the words of Walton et al. (2008, p. 398): In ArguMed, undercutting moves, like asking a critical question, are modelled by a concept called entanglement. The question, or other rebuttal, attacks the inferential link between the premises and conclusion of the original argument, and thereby requires the retraction of the original conclusion. On a diagram, entanglement is represented as a line that meets another line at a junction marked by an X. Entrapment Such circumstances of obtaining evidence that the perpetrator was deceived, by being allowed or even enabled or incited to commit an offence, with law enforcement personnel present or even participating.14 Osborne (1997, p. 298) remarked that in England, some cases

clearly established that, even when policemen acting in plain clothes and participating in a crime go too far and incite criminals to commit offences which would otherwise not have been committed, the law of evidence will not be used to discipline the police.

13 In logic, modus ponens states that from p being true and p→q, we can deduce that q is true. As to p→q, this is a rule (also called a clause) such that “If p is true, then q is true”; q is a logical consequence of p. 14 See in the notes of Section 4.5.2.1 in this book.

There is no defence of “entrapment” known to English law and the law of evidence could not be used to create such a defence by the device of excluding otherwise admis- sible evidence. Where police had gone too far, the question of their misconduct will be dealt with in police disciplinary proceedings; but insofar the accused was concerned, entrapment would only be relevant to mitigate the sentence imposed, not to the question of admissibility. Epistemic paternalism According to philosopher of knowledge Alvin Goldman (1991), the attitude by which the rules of evidence prescribe that the jurors will not be provided with some of the evidence. See Section 4.3.2.2. Evidence-based crime prevention Crime prevention policy and practice as ideally being based on scientific evidence from criminology within the social sciences, rather than the crime policy agenda being driven by political ideology and anec- dotal evidence (Farrington, Mackenzie, Sherman, & Welsh, 2006). Evidence in the phrase under consideration is not to be understood as legal evidence.15 Evidence discourse Presenting or discussing the evidence, and in particular legal evidence, especially from the viewpoint of discourse analysis. Other disciplinary viewpoints are possible. In an article in a law journal, a scholar from the University of Bristol, Donald Nicolson (1994), discussed epistemology and poli- tics in mainstream evidence discourse, from the viewpoint of critical legal theory, “in relation to three core concepts: truth, reason and justice” (ibid., p. 726). “The main contention of this article is that, given both its intellectual ancestry and political function, mainstream discourse of evidence can best be understood as a form of positivism. This ‘fact positivism’ is to the study and practice of fact-finding what legal positivism is to the study and practice of law. Both encour- age the view that the task of lawyers and adjudicators is neutral and value-free. Both focus attention on logic, whether of rules or of proof, and away from the inherently political and partial nature of law and facts” (ibid.). Evidence, theory of juridical Conventional, vs. the one advocated by Allen (1994), whose “theory of juridical evidence is designed to replace the conventional the- ory – that the necessary and sufficient conditions of evidence are provided by the rules of evidence – with the thesis that evidence is the result of the interaction of the intelligence and knowledge of the fact finder with the sum of the observations generated during trial. If the conventional theory is true, the rules of evidence should provide a complex and relatively thorough statement of the grounds for the admission of evidence. They do not”, as “The general rules of relevancy pro- vide virtually no comprehensible criteria for admission and exclusion”, whereas Allen’s own theory of juridical evidence provides a set of necessary and sufficient conditions for admissibility (Allen, 1994, p. 630). Evidence, law of The set of rules that regulate which evidence should be admissible in court. The following is quoted from my own discussion (Nissan, 2001c)of Twining (1997):

15 Of course, evidence matters for various disciplines. For example, in the paper collection Evidence, edited by Bell, Swenson-Wright, and Tybjerg (2008), there are chapters on evidence in law, history, or science.

William L. Twining’s ‘Freedom of proof and the reform of criminal evidence’, relevant for common law, is quite valuable because of the depth of vision afforded by the author’s charting of recent and broader trends of legal theory in Anglophone countries. To Twining, the critics of the common law of evidence, recommending simplification and reduction in scope of its rules, “have won the argument, albeit in a slow and piecemeal fashion. One result is, as I have argued elsewhere, that the common law of evidence is much narrower in scope and of much less practical importance than the discourse of commentators, educators and practitioners has typically suggested” (441). A second trend, “which has proceeded much further in England” than in the U.S. (442), “has been the disaggregation of ‘The Law of Evidence’ into several bodies of law: Criminal Evidence, Civil Evidence and, increasingly, rules of evidence in tribunals, arbitration and other fora are treated as distinct” (442). “A further result of this trend has been a growing recognition that problems of proof, information handling, and ‘evidence’ arise at all stages of legal processes” (443). [...] In Preliminary Treatise of Evidence at Common Law (1898), “a Harvard scholar, James Bradley Thayer, advanced an interpretation of the law of evidence which has been accepted by most commentators as the classic statement of the modern common law” (Twining: 450), and based on two principles (ibid., from Thayer: 530): “That nothing is to be received which is not logically probative of some matter requiring to be proved” (the exclusionary principle), and: “That everything which is thus probative should come in, unless a clear ground of policy or law excludes it” (the inclusionary principle). In Wigmore’s and others’ reception, rules excluding or restricting the use of admitted evidence are either intended to promote rectitude of decision (avoiding unreliability or alleged prejudicial effect), these being Wigmore’s “rules of auxiliary probative policy”; or, instead – these being exclusionary “rules of extrinsic policy” – they “give priority to other values over rectitude of decision” (Twining: 450). As per Twining’s “interpretation, the Thayerite view is that the common law of evidence is a disparate series of exceptions to a principle of free proof” (Twining: 453), it being “broadly true that surviving English law of evidence conforms to the Thayerite model” (463).

Evidence of character See character evidence. Evidence of disposition A category of evidence that fits into the broader category of evidence of disposition and character, and consists of evidence that a particular person has a tendency to act, think, or feel in a particular way. “Evidence of disposition is in general inadmissible for the prosecution both because it is not necessarily logically relevant to the issue of the accused’s guilt of the offence with which he is now charged and also because it is clearly highly prejudicial to the accused for the jury to be told of his previous disposition. The risk is that the average jury will lose sight of everything else in the case apart from the striking revelation of the accused’s bad character” (Osborne, 1997, p. 313), and if the defendant has convictions for “crimes in the past that are notoriously unpopular with the public”, there may “be a tendency in laypersons to wish to punish the accused again for his former crimes whatever his guilt of the present offence” (ibid.). Moreover, “drinking and quarrelling are not per se crimes” (ibid.), yet they are disposition, and using such traits of the accused in court is not admissible: in England, “evidence of the misconduct of the accused on another occasion may not be given if its only relevance is to show a general disposition towards wrongdoing or even a general disposition to commit the type of crime of which he is now accused” (ibid.). “The exception to the general rule is the case of so-called ‘similar fact’ evidence” (ibid.). See similar fact evidence. Moreover, an

exception to the inadmissibility of evidence of bad character or disposition of the accused in a criminal case, is that the prosecution has the right to cross-examine in order to obtain such evidence, and the right to adduce such evidence, if the defendant has claimed good character. The prosecution has the right of rebuttal. Evidence of opinion The kind of evidence provided by an expert witness, whereas in contrast: “The general rule is that a witness may only testify as to matters actually observed by him and he may not give his opinion on those matters. The drawing of inference from observed facts is the whole function of the trier of fact, i.e., in a criminal case the jury” (Osborne, 1997, p. 333) in countries where criminal cases are adjudicated at jury trials rather than by trained judges at bench trials. Evidence, requirement of total A principle in the philosophy of science. See Section 4.3.2.2. There exist a weak and a strong version:

(W-RTE) A cognitive agent X should always fix his beliefs or subjective probabilities in accordance with the total evidence in his possession at the time.

(S-RTE) A cognitive agent X should collect and use all available evidence that can be collected and used (at negligible cost).

There also is a “control” version, formulated but rejected by Alvin Goldman (1991):

(C-RTE) If agent X is going to make a doxastic decision concerning question Q, and agent Y has control over the evidence that is provided to X, then, from a purely epistemic point of view, Y should make available to X all of the evidence relevant to Q which is (at negligible cost) within Y’s control.

In a social or legal context, the latter principle is improper, because harmful experimentation on humans, as well as invasion of privacy, are objectionable. Exclusionary rules of evidence are another example showing that the requirement of total evidence does not apply. Goldman calls such aspects of the philosophical discussion of knowledge social epistemics. See Section 4.3.2.2. Evidence, theory of “The distinction between the structure of proof and a theory of evidence is simple. The structure of proof determines what must be proven. In the conventional [probabilistic] theory [which Ron Allen attacks] this is elements to a predetermined probability, and in the relative plausibility theory [which Ron Allen approves of] that one story or set of stories is more plausible than its competitors (and in criminal cases that there is no plausible competitor). A theory of evidence indicates how this is done, what counts as evidence and perhaps how it is processed” (Allen, 1994, p. 606). Evidential burden See Burden, evidential. Evidential computing Another name for forensic computing. Evidential damage doctrine A doctrine advocated in Ariel Porat and Alex Stein’s Tort Liability Under Uncertainty (2001). It proposes to shift the persuasion burden (i.e., the burden of proof) to the defendant, in such cases that a tort plaintiff cannot adequately prove his or her case (and would currently lose the case) because the defendant’s wrongful actions impair the plaintiff’s ability to prove

the facts underlying the plaintiff’s lawsuit for damage (any damage actionable in torts). Such situations as would fall under the evidential damage doctrine include any action in which the defendant’s negligence is established, but causation is indeterminate. For example, toxic exposure, or environmental torts, or such medical malpractice cases in which the doctor was negligent but the patient had a preexisting condition. Evidentialism In epistemology (the philosophy of knowledge), “a thesis about epistemic justification, it is a thesis about what it takes for one to believe justifiably, or reasonably, in the sense thought to be necessary for knowledge” (Mittag, 2004). Evidentialism is defined by this thesis about epistemic justification:

(EVI) Person S is justified in believing proposition p at time t if and only if S’s evidence for p at t supports believing p.

Mittag points out (ibid.): “Particular versions of evidentialism can diverge in virtue of their providing different claims about what sorts of things count as evidence, what it is for one to have evidence, and what it is for one’s evidence to support believing a proposition”. What is evidence, for evidentialism? Mittag explains:

Evidence for or against p is, roughly, any information relevant to the truth or falsity of p. This is why we think that fingerprints and DNA left at the scene of the crime, eye-witness testimony, and someone’s whereabouts at the time the crime was committed all count as evidence for or against the hypothesis that the suspect committed the crime. The sort of evidence that interests the evidentialist, however, is not just anything whatsoever that is relevant to the truth of the proposition in question. The evidentialist denies that such facts about mind-independent reality are evidence in the sense relevant to determining justification. According to (EVI) only facts that one has are relevant to determining what one is justified in believing, and in order for one to have something in the relevant sense, one has to be aware of, to know about, or to, in some sense, “mentally possess” it. The sort of evidence the evidentialist is interested in, therefore, is restricted to mental entities (or, roughly, to mental “information”). In addition, it is only one’s own mental information that is relevant to determining whether one is justified in believing that p. For example, my belief that Jones was in Buffalo at the time the crime was committed is not relevant to determining whether you are justified in believing that Jones committed the crime.

There exist objections to evidentialism. For example: even “though one once had good evidence for believing, one has since forgotten it. Nevertheless, one may continue to believe justifiably, even without coming to possess any additional evidence. Evidentialism appears unable to account for this” (Mittag, 2004). Of course, “I forgot what the evidence is” would not make a good impression in a courtroom, but justifying belief in the philosophy of knowledge does not necessarily have the same standards of evidence one would expect in civil or criminal courts. Another objection, which is relevant also for the theory of evidence in law, refuses to identify probability with justification of belief in a given proposition. One’s evidence supporting a proposition may be modelled by means of some theory of probability, but this is contentious. Mittag explains (ibid.):

A body of evidence, e, supports believing some proposition p only if e makes p probable. If we suppose for simplicity that all of the beliefs that constitute e are themselves justified, we can say that e supports believing p if and only if e makes p probable. However, one might argue that, even with this assumption, one’s evidence e can make p probable without one being justified in believing that p. If this is so, the resulting evidentialist thesis is false. Alvin Goldman, for example, has argued that the possession of reasons that make p probable, all things considered, is not sufficient for p to be justified (Epistemology and Cognition, 89–93). The crux of the case he considers is as follows. Suppose that while investigating a crime a detective has come to know a set of facts. These facts do establish that it is overwhelmingly likely that Jones has committed the crime, but it is only an extremely complex statistical argument that shows this. Perhaps the detective is utterly unable to understand how the evidence he has gathered supports this proposition. In such a case, it seems wrong to say that the detective is justified in believing the proposition, since he does not even have available to him a way of reasoning from the evidence to the conclusion that Jones did it. He has no idea how the evidence makes the proposition that Jones did it likely. Thus, the evidentialist thesis, so understood, is false. The appeal to probability and statistics here is not essential to this sort of objection, so it would be a mistake to focus solely on this feature of the case in attempting to respond. [...] Evidential reasoning A major area within artificial intelligence since the 1970s, as well as a prominent area within legal scholarship; in contrast, within AI & Law it only emerged as a conspicuous area around 2000. Evidential strength There are quantitative approaches for modelling evidential strength. See Bolding-Ekelöf degrees of evidential strength. EvoFIT A tool for suspect identification, resorting to a genetic algorithm16 refining a population of facial composites (a schematic sketch of such an algorithm follows note 16 below). EvoFIT was developed within the CRIME-VUs project. The team working on EvoFIT is led by Charlie Frowd of the University of Central Lancashire. See Section 8.2.2.4. Examination or examination in chief Questioning of a witness called by one of the parties at a trial, by the lawyer of the same party. It is followed by cross-examination by the lawyer of the other party, and then possibly by re-examination by the lawyer of the party that called the given witness. Moreover, also a judge can ask questions. More in general, examination (as opposed to examination in chief) refers to the questioning in court, at any stage, of the parties to a trial, or of their witnesses, by any qualified questioner (a lawyer of the same party or of the other party, or a judge, or the other party if he represents himself without a lawyer). It is important not to confuse examination in court with questioning by the police during investigation. Legal proceedings only start once the investigation stage ends: once a suspect is charged, the police can no longer question him or her. Post-charge questioning (on the part of police investigators) of terrorism suspects, possibly extended to other categories of criminals, was considered by the British government in November 2007, drawing criticism from civil liberties groups.

16 Genetic algorithms are the subject of Section 6.1.16.1 in this book.
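By way of illustration only, and not as a description of EvoFIT's actual procedure (for which see Section 8.2.2.4), the following Python sketch shows the general shape of a genetic algorithm of the kind note 16 refers to: a population of candidate composites, encoded as parameter vectors, is repeatedly scored, and the better-scoring candidates are recombined and mutated to form the next generation. All names and values here (rate_composite, POP_SIZE, and so on) are hypothetical stand-ins; in a witness-driven system the score would come from the witness's resemblance ratings rather than from the placeholder function used below.

import random

# Illustrative sketch of a genetic algorithm for refining facial composites.
# The encoding and the fitness source (witness ratings) are hypothetical.

GENES = 20           # number of parameters encoding one composite
POP_SIZE = 30        # composites considered per generation
GENERATIONS = 10
MUTATION_RATE = 0.1

def random_composite():
    return [random.uniform(0.0, 1.0) for _ in range(GENES)]

def rate_composite(composite):
    # Placeholder: in a real system the score would come from the witness
    # judging how closely the composite resembles the suspect.
    target = [0.5] * GENES
    return -sum((g - t) ** 2 for g, t in zip(composite, target))

def crossover(parent_a, parent_b):
    point = random.randint(1, GENES - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(composite):
    return [g if random.random() > MUTATION_RATE else random.uniform(0.0, 1.0)
            for g in composite]

def evolve():
    population = [random_composite() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=rate_composite, reverse=True)
        parents = scored[: POP_SIZE // 2]          # keep the best-rated half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=rate_composite)

if __name__ == "__main__":
    best = evolve()
    print("best composite parameters:", [round(g, 2) for g in best])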

During investigation, questioning seeks to uncover information. This is not the case in court. Both when a party is examined in court by his own lawyer, and when the other party is cross-examined, in practice the purpose of the questioning is not to discover new information (which lawyers dread in court, as it is a risk), but rather to cause the examinee to reply in such a manner that would diminish the prospects of the success of the party against whom the questioning lawyer is pitted. Only when it is a judge who is asking questions is the questioning genuine, i.e., seeking information not previously known to the questioner. Hickey (1993) discussed presupposition under cross-examination. The following anecdote was related by American legal scholar Roger Park during a talk he gave in Amsterdam in December 1999. During the 19th century, a lawyer at a trial hoped to expose the bad character of the man he was questioning (see character evidence). He asked him whether he had ever been in prison. The man replied that he had. Then the lawyer asked him about the circumstances of this happening. The man explained that he had been made a prisoner by the Indians (Native Americans). The effect was opposite to that intended by the lawyer: given the worldview of both lay and trained judges in the United States in the 19th century, the implication was that the examinee was of good character, a hero. This illustrates the dangers, for a lawyer and the party he or she represents, if the questions are such that information not previously known to the lawyer emerges. Importantly, persons being questioned in court must stick to giving answers without digressing. This is a major constraint, and therefore the lawyer examining the party he represents or that party’s witnesses has the duty to skilfully ask such questions as would enable all those important facts to emerge that the lawyer needs in order to construct an effective argumentation. Jackson (1994, p. 70) pointed out: Consider, for a moment, the very basic process of courtroom examination. Legal theory tells us that the barrister is not giving evidence; he or she is merely asking questions. The evidence is given entirely by the witness. But both linguistic analysis and the philosophy of language (specifically, the theory of speech acts) show that this is an unrealistic account. Grammatically, no doubt, the barristers are asking questions, not making statements. But a question, from the viewpoint of speech act analysis, is an act requesting new information. It is commonly (though not universally) so used at the investigation stage. But the elicitation of new (to him/her) information is precisely what the barrister is not seeking to do. It is a commonplace in the training of barristers that they should never ask a question to which they do not already know the answer – more accurately, to which they do not think they know what the witness’s answer will be. So questioning by the barrister has a quite different function, the function of presenting an argument, engaging in a battle, sometimes even making claims of fact. Jackson then discusses “each of these three functions” (ibid.). Take asserting facts: “A barrister who made straightforward factual claims, in the grammatical form of assertions, would readily be pulled up short” [by the judge] (ibid., p. 71). Therefore, assertions are made “[a]ssuredly not up front”, but “e.g. through the presuppositions of questions”. “Of course, the barrister is not on the stand, giving evidence on oath.
But it is precisely because his factual statements have not thereby been problematised that they become, when uttered, all the

more persuasive” (ibid.). “Not one of these techniques I have briefly described – Socratic questioning (constructing a narrative argument); engagement in battle (destroying the witness rather than the story); stating facts (by presupposition) – is regarded as unprofessional”: quite on the contrary. “[C]riminal investigators should not be inhibited from what they see as the legal constraints on what can be said in court from pursuing a full narrative or holistic account”, as that is what it takes to persuade the court, and that without incurring “the shyster lawyer syndrome” by which an unscrupulous lawyer is preferable to a respectable one (ibid., p. 71). Characterizations are involved, e.g. as hostile examinations by a barrister “seek to evoke stereotypes of respondents, not simply responses” (ibid., p. 70). It is still Jackson who states (ibid., pp. 70–71):

I wish here to make a distinction between cross-examination which is designed to destroy the story, and cross-examination which is designed to destroy the witness. It is the latter with which I am concerned. It may be illustrated through the “Don’t know” pattern. The barrister may ask the witness a series of questions, to which s/he anticipates that the witness does not know the answer. These questions may relate to matters of quite marginal relevance. But the repetition of “Don’t know” by the witness in respect of even marginal matters will create, and is designed to create, in the minds of the jury an image of a “Don’t know” witness. It is an example of a simple rhetorical technique, pars pro toto – which evokes one of the basic narrative assumptions of everyday life: we do not rely upon what is said by people who appear not to know what they are talking about. Trite, maybe. Illogical, certainly. But immensely powerful.

Here is an example of pars pro toto negatively affecting a person’s perceived credibility, in the television appearance of that person as being a guest, or “victim” (as guests at broadcasts are called in the slang of the trade). “Believing himself off screen, the victim keeps his head still, but surreptitiously swivels his eyes – perhaps for a glimpse of the audience, or the clock, or for a peep at the monitor. Immediately, by chance or by malicious design, the camera switches to him and he looks shifty, cunning and wicked” (Janner, 1984, p. 147). This is because of a stereotype about swivelling eyes. Viewers are made to see the “victim” swivelling his eyes, that precise moment being highlighted, yet in the stream of events this may have been innocent and unimportant, rather than evidence about his truthfulness when making a specific statement, or about his character in general. Exchange principle (Locard’s) Anyone or anything entering a crime scene takes something of the scene with them, and leaves something of themselves behind when they depart. Exclusionary principle In the American law of evidence, according to a formulation originally proposed by James Bradley Thayer, the principle “That nothing is to be received which is not logically probative of some matter requiring to be proved”. See also inclusionary principle. Exclusionary rules Typically in the U.S. law of evidence: rules about which kinds of evidence must be excluded and not heard in court. As opposed to admissionary rules. See (rules of) extrinsic policy and (rules of) auxiliary probative policy, i.e., kinds of exclusionary rules.

Some kinds of evidence are excluded as a matter of policy. Sometimes, for reasons of policy, the law of some given jurisdiction may choose to disregard evidence that by common sense would prove adultery. By the law of England and Wales, until Parliament reformed family law in 1949, this was the case of evidence that could prove adultery because of lack of access of the husband, if a child was nevertheless born. Prior to 1949, such evidence was not admissible. Sir Douglas Hogg, in his role as barrister in Russell v. Russell in 1924 (he had ceased to be Attorney-general earlier that year), had already tried to obtain admissibility for such evidence. “The question for the House of Lords was whether evidence of non-access might be given in divorce proceedings by one spouse with the result of bastardizing a child of the marriage. The answer was of great importance, not only to the parties to the suit, the sole evidence of the wife’s adultery being the testimony of the husband that he did not have access to his wife at any time when the child could have been conceived, but also to all those who were interested in the proceedings in the Divorce Court, either as possible parties or as practitioners” (Heuston, 1964, p. 458). The House of Lords, in deciding the case, ruled such evidence inadmissible: it “held that on grounds of decency and public policy the law prohibited the introduction of such evidence.” (ibid.). Hogg had admitted that the evidence would be inadmissible in a legitimacy case, but “Hogg’s argument was that the rule prohibiting the introduction of such evidence had never been applied to a case in which the object of the suit was to dissolve the bond of marriage on the ground of adultery, it only applied where there was a marriage in existence and the legitimacy of a child born in wedlock was in question” (ibid.). Hogg argued that “Where the issue is adultery the birth of a child is mere accident” (quoted ibid.). “This ingenious argument was rejected by the majority of the House, Lord Finlay saying: ‘To what an extraordinary state would the admission of this evidence in the present case reduce the law of England! The infant may be illegitimate for the purpose of proving adultery; but legitimate for the purpose of succeeding to property or a title!’” (ibid.). Writing in an American journal on legal evidence, Hans Nijboer (2008) provided comparative comments on current issues in evidence and procedure from a Continental perspective, as have emerged during the 2000s. He finds that while legal scholars are increasingly communicating with legal scholars in other countries and scientists from other fields, there is a simultaneous counter-tendency toward adoption of crime-specific rules of substantive law and evidentiary measures for specific kinds of crime (such as protection of witnesses in rape cases, where the protection complicates the fact-finding process). He concludes from this that there is a tension between greater generality in regard to the first two dimensions and increasing specificity in regard to the third. Expert evidence Evidence as supplied by an expert witness. The expert’s expert opinion is part of the evidence in a case. Expert opinion, Appeal to In argumentation studies, Walton’s (1997) Appeal to Expert Opinion offered (ibid., pp. 211–225) an argumentation scheme for “Argument for Expert Opinion”, then reproduced in Walton et al. (2008,

pp. 381–382). Its major premise is: “Source E is an expert in subject domain S containing proposition A.” The minor premise is: “E asserts that proposition A is true (false).” The conclusion is: “A is true (false).” Walton accompanied this with critical questions to be asked, and these come in different categories: expertise question, field questions, opinion questions, trustworthiness questions, consistency questions, and backup evidence questions (q.v.). Expert witness A witness called to give testimony in court not because of having been involved in the facts of the case being tried, but because of his or her professional expertise in one of the forensic sciences, bearing on the evaluation of specific elements. Cf. s.v. Witness vs. expert testimonies. An expert witness is called to provide evidence of opinion. “[T]he seminal 1993 United States Supreme Court decision Daubert v. Merrell Dow Pharms.17 [is] now widely described as the most important expert evidence decision ever written by the Supreme Court” (Cole 2009, p. 111). “Broadly speaking, Daubert [...] might be said to concern ‘the problem of expertise’” (ibid., p. 112) namely (ibid.):

Given that courts have long allowed expert witnesses to testify—and given the increasing use of such experts—how are courts to evaluate the testimony of proffered “experts”? Ought anyone who claims the mantle of expertise be permitted to testify in that guise? Or, should courts police claims to the title of “expert” by permitting only those experts deemed legitimate to testify? American courts have long tended toward the latter view; Daubert made this commitment (in the federal courts and in the many jurisdictions that subsequently adopted Daubert or Daubert-like rules) explicit. But, this preference only generates another philosophical dilemma: how are courts supposed to adjudicate claims to expertise when many, if not all, of those claims by their very nature are so technical that legally-trained judges cannot reasonably be expected to be competent to sit in judgment upon them? In other words, the law faces a specific instance of the question asked by the philosophical field known as “epistemology”: how does one certify knowledge as legitimate? In Britain, an expert witness does not always have to appear in court in person, and sometimes it is enough for the expert to provide a report. Nevertheless, typically an expert witness must be ready to be cross-examined and to defend the credibility of his or her opinion on the matter at hand, and his or her professional credibility. An evidently biased expert witness would impress the court unfavourably. Like lawyers, expert witnesses, too, sometimes have a poor image: the lawyer being perceived to be a “shyster”, and the expert witness – a “hired gun”. A philosopher, Ghita Holmström-Hintikka (2001), applied to legal investigation, and in particular to expert witnesses giving testimony and being interrogated in court, the Interrogative Model for Truth-seeking that had been developed by Jaakko Hintikka for use in the philosophy of science. A previous paper of hers (Holmström-Hintikka, 1995), about expert witnesses, appeared in the journal Argumentation.

17 Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993).

Bond, Solon, and Harper (1999) is a practical guide for expert witnesses. Carol Jones (1994) is concerned with expert evidence in Britain. Chris Pamplin, the editor of the UK Register of Expert Witnesses (http://www.jspubs.com), “analyse[d] the results of a major survey of the expert witness marketplace” and, among other things, remarked (Pamplin, 2007a, pp. 1480–1481):

Another change over the years that many experts will find more welcome is the reduction in the number of cases for which they are required to give their evidence in court. It is now altogether exceptional for experts to have to appear in court in “fast-track” cases, and it is becoming less and less likely in those on the “multi-track”. In 1997 we recorded that the average frequency of court appearances was five times a year; some four years later this had dropped to 3.8; it now stands at 3.1. If this is convenient for the expert witnesses (their worst-case scenario is being cross-examined in court and leaving the court with their reputation in tatters), it must be said that justice may be the loser, in the interest of efficiency, if expert witnesses are not challenged in court every time that they deserve to be. The “oracular” expert witness ought to be a nightmare for justice. Yet, case management requires that there will be a limit on how much evidence is to be obtained. In Pamplin’s words (2007b, p. 1488):

Limiting the amount and scope of expert evidence has long been one of the functions of the case management procedures of the civil courts. The time and expense involved in the provision of expert evidence means that the courts must have regard to the proportionality of any request. Indeed, the court should refuse permission where reasons for the request are viewed as frivolous. However, given that the need for additional evidence is sometimes critical to the court’s ability to make an informed decision, and that the expert evidence itself is often of a highly technical nature, two questions arise:

• How should the courts deal with such requests? • How much influence should the experts or the parties have upon the court’s decision?

If an expert feels that there is insufficient evidence before the court to prove or disprove a case, does the expert have discretion to request that further tests be carried out? If so, what is the expert’s role in that evidence-gathering process? These were questions considered recently by the Family Court. [In Re M (a child) [2007] EWCA Civ 589, [2007] All ER (D) 257 (May), a case that contrasts with Re W (a child) (non-accidental injury: expert evidence) [2007] EWHC 136 (Fam), [2007] All ER (D) 159 (Apr). And also at an employment tribunal: Howard v Hospital of St Mary of Furness [2007] All ER (D) 305 (May).] Expert witnesses do not only intervene in courts. Once instructed by a client, they (e.g., forensic engineers advising about product liability) may advise the client about the strength of their case. The client may renounce litigation, or give in to the plaintiff by settling out of court. “Selecting the right expert may be crucial in court” (Holland, 2007). “An expert who can guarantee availability during the trial, and who can respond to any additional requests promptly will stand out from the other candidates” (ibid.,

p. 1486). “Ideally solicitors would like the expert to be recommended by a colleague, a barrister or client, as this is the best evidence that the expert is up to the job. Many [London] City firms also have internal databases of experts” (ibid.).

With a little application, there is much a prospective expert candidate can do to satisfy these criteria. However, the expert should always bear in mind that the expert’s role is to provide impartial assistance to the court or tribunal. In addition to the criteria above, another paramount factor is the independence of the expert from the appointing party. If the expert is not perceived to be independent, the judge will not give credit to his evidence and opinions and this could be damaging to the client’s case. (ibid., p. 1487).

Moreover, expert testimony may be involved in alternative dispute resolution, this being either arbitration, or binding or non-binding mediation (see case disposition). Baria Ahmed (2007) points out:

A potential expert in alternative dispute resolution (ADR) may adopt a range of roles: an expert consultant may form part of the advocacy team; a party retained expert may provide an opinion on the instructing party’s position; the parties may instruct a neutral expert, appointed through an independent body such as the Royal Institute of British Architects, chosen by the parties jointly or appointed by the mediator or arbitrator; finally, the appointed mediator or arbitrator may themselves be an independent expert in light of their experience in the subject matter of the dispute. In the case of Early Neutral Evaluations and Expert Determinations, the neutral expert is tasked with providing either an “opinion” on the applicable law or a “decision” on the facts. [...] ADR processes, however, permit a non-traditional use of experts. The mediator, for example, may request that the expert makes a presentation of their views with all other participants present, or bring the parties’ retained experts together without their respective clients to review findings, or choose to test their opinions in open/closed meetings.

An American legal scholar, Erica Beecher-Monas (2008), argued that during the 2000s, “courts throughout the common law system have taken an increasingly antithetical approach to expert testimony.” She contrasted civil cases and criminal cases. In the former, but also

in criminal DNA identification cases, courts appear to be actively engaged in scrutinizing the scientific testimony that comes before them. Defense attorneys appear to have little difficulty in challenging questionable scientific testimony. Research scientists are brought into the discourse as experts for the parties or the court. Courts are articulating the bases for their admissibility decisions, and these decisions are being reviewed on appeal.

She pointed out that the situation was different in criminal cases other than involving DNA: “In the criminal cases, however, where criminal identification procedures other than DNA are concerned, each of the participants in the legal process has failed.” She found that prosecutors were not particular about how credible their expert witnesses were:

Prosecutors repeatedly present experts whose testimony they have reason to know is (at best) dubious. Defense attorneys fail to bring challenges to the scientific validity of even patently flawed expert testimony. Courts, when challenges do arise, fail to engage in serious gatekeeping. And reviewing courts refuse to find shoddy gatekeeping to be an abuse of discretion. The consequence of this antithetical approach to admissibility, is

that the rational search for truth, in which the adversary system is supposedly engaged, is taken seriously only in civil cases. So the problem was not only with the prosecutors. She conceded that “the civil courts are busy minutely scrutinising scientific studies proffered as the basis for expert testimony” but in criminal cases other than DNA, in stark contrast, “the criminal courts are admitting into evidence testimony (again, with the exception of DNA) for which those studies have never been done.” The critique expressed by Beecher-Monas (2008) is not isolated. The American legal scholar Michael Risinger18 (2007a) sarcastically gave his paper,19 concerning the courts admitting expert testimony of dubious value,20 a title reminiscent of the atom bomb’s fictional enthusiast Dr. Strangelove: D. Michael Risinger, ‘Goodbye to All That, Or a Fool’s Errand, By One of the Fools: How I Stopped Worrying About Court Responses to Handwriting Identification (And ‘Forensic Science’ in General) and Learned to Love Misinterpretations of Kumho Tire v. Carmichael’.21 Itiel

18 From the Seton Hall University School of Law, in Newark, New Jersey. 19 It is followed by an Appendix which is an article in its own right (Risinger, 2007b). Cf., e.g., Risinger et al. (2002). Both of these are concerned with expert testimony in handwriting identification. 20 Risinger’s paper (2007a) is described in its abstract as (among other things) “a picaresque romp through the author’s career, much of which has been spent coming to grips with the realities of forensic science, and the courts’ abdication of their role as gatekeepers in judging the reliability of prosecution-proffered expertise.” Moreover, “the article illustrates how the lower federal courts have managed to ignore or misinterpret Kumho Tire v. Carmichael in such a way as to create a jurisprudence of expertise wholly at odds with the clear mandate of the Supreme Court, often by converting decisions with no precedential status into precedents of breathtaking breadth.” Michael Risinger has criticised dubious forensic expertise (especially in handwriting identification) in his articles as early as the 1980s. 21 Cole (2009), responding to Risinger (2007a), remarked in a footnote: “On a completely irrelevant note: the first part of Professor Risinger‘s title refers to Robert Graves’s memoir Goodbye to All That, whose discussion of the experience of being gassed in the First World War indirectly inspired my undergraduate thesis on German preparations for chemical warfare between the two world wars.” Let me add that the intertextual reference in the second part of the title of Risinger’s article is to the title of a black comedy film from 1964, on the nuclear scare, in which Peter Sellers played three major roles. It is the film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, commonly known as Dr. Strangelove, directed and produced by Stanley Kubrick; the screenplay was jointly written by him with Peter George and Terry Southern. It was based on Red Alert by Peter George. Eventually in the plot, aboard an airplane with a damaged radio (so it cannot be recalled), “Aircraft commander Major T. J. ‘King’ Kong (Slim Pickens) goes to the bomb bay to open the damaged doors manually, straddling a nuclear bomb as he repairs arcing wires overhead. When he effects his electrical patches, the bomb bay doors suddenly open, the bomb releases and Kong rides it to detonation like a rodeo cowboy, whooping and waving his cowboy hat. The H-bomb explodes and [in automated retaliation, the Soviet Union’s] Doomsday Device’s detonation is inevitable” (quoted from http://en.wikipedia.org/wiki/Dr._Strangelove. Also see http://www.filmsite.org/drst.html about the same film). The motif of a man riding a released bomb like a cowboy apparently was modelled after a real-life episode. The protagonist was Harry DeWolf, a future vice admiral and Chief of Staff of the Royal Canadian Navy. After retiring, he published a memoir under the title ‘My Ride on a Torpedo’ (DeWolf 1966), which he began by mentioning that when he retired, “a newspaperman commenting on my service career wrote that I had once ridden a torpedo

Dror and colleagues’ paper “When emotions get the better of us: The effect of contextual top-down processing on matching fingerprints” (Dror et al., 2005) is a paper in psychology, applied to how experts perform at matching fingerprints. Dror and Charlton (2006) and Dror, Charlton, and Péron (2006) tried to identify the causes of why experts make identification errors. Dror and Rosenthal (2008) tried to meta-analytically quantify the reliability and biasability of forensic experts. Some jurisdictions encourage conferences of experts sitting side by side and giving testimony together: this is the case of Australia’s Federal Court (see hot-tubbing), and sometimes of a public inquiry (q.v.) in Britain, such inquiries being inquisitorial (see inquisitorial, towards the end of that entry). ExpertCop A piece of software (a geosimulator, combining simulation and a geographic information system) for training police officers in allocating police presence in given urban environments, for the purpose of preventing crime (Furtado & Vasconcelos, 2007). See Section 6.1.6.2. Expertise question In argumentation studies, Walton’s (1997) Appeal to Expert Opinion offered (ibid., pp. 211–225) an argumentation scheme for “Argument for Expert Opinion”, then reproduced in Walton et al. (2008, pp. 381–382). See s.v. Expert opinion, Appeal to above. The expert source is E; the subject domain is S; and A is a proposition which E claims to be true (or false). The expertise question is: “How credible is E as an expert source?”. It is articulated in five detailed subquestions: “What is E’s name, job or official capacity, location and employer?”; “What degrees, professional qualifications, or certification by licensing agencies does E hold?”; “Can testimony of peer experts in the same field be given to support E’s competence?”; “What is E’s record of experience, or other indications of practiced skill in S?”; “What is E’s record of peer-reviewed publications or contributions to knowledge in S?”. Extrinsic policy (rules of) A category of rules excluding or restricting the use of admitted evidence. As opposed to rules of auxiliary probative policy. In interpretations of the American law of evidence, according to Wigmore’s terminology,

like a cowboy around the deck of a destroyer” (ibid., p. 167). This was on 2 July 1940, aboard the Canadian destroyer St. Laurent. DeWolf was skipper, with the rank of lieutenant commander. A young torpedoman who was painting a torpedo lifted the safety catch and pulled back the firing lever. The torpedo leaped free toward the stern, and, butting about in a frenzy, caused damage; at any moment its safety device could be unwound to arm the dormant warhead, which would then explode on contact. The torpedo “would lurch forward with each motion of the ship. It would lurch forward with each motion of the deck; then, as the deck became level, the torpedo would stop, like a bull in the ring, undecided in which direction to make its next charge” (ibid., p. 170). The torpedo rolled against the guardrails, and DeWolf and another officer tried to hold it there, and his colleague “ran to get a key to turn off the compressed air that was driving the propellers”, but the ship rolled, the torpedo rolled away, and DeWolf “straddled it, and grabbed hold of the guardrail” (ibid.). “As the torpedo advanced, I resisted as much as I could, while going forward hand over hand along the guardrail with my legs locked on the maverick” (ibid., pp. 170, 172). “These antics, no doubt, led to the story of ‘riding the torpedo’” (ibid., p. 172). Two colleagues arrived at the scene, wrestled the torpedo steady, and the air was turned off (ibid.).

rules of extrinsic policy are such exclusionary rules that give priority to other values over rectitude of decision. These are rules which are not so much directed at ascertaining the truth, but rather which serve the protection of personal rights and secrets. Eyewitness testimony Historically, the preferred kind of evidence (e.g., by biblical law it is the only admissible kind of testimony). Othello states this request to Iago: “Give me the ocular proof” (Othello, III.iii.365). James Ogden (1992) remarks that this request was echoed (comically, for that matter) in late seventeenth-century plays. Ogden states: “Othello was one of the most popular plays after the Restoration; some twenty revivals are recorded. Thomas Rymer [(1692)] noted that ‘from all the Tragedies acted on our English Stage, Othello is said to bear the Bell away’. To Rymer himself it was ‘a Bloody Farce’ which ‘may be a lesson to Husbands, that before their Jealousie be Tragical, the proofs may be Mathematical’”. One of the major areas in eyewitness testimony is identification evidence. Psychological research has shown that eyewitness testimony is fraught with problems, and that it is precisely the most confident witnesses that may be prone to errors. FacePrints A project and tool of Johnston & Caldwell at New Mexico State University, for assisting a witness to build a facial composite of a criminal suspect. See Section 8.2.2. Facial reconstruction The forensic reproduction of an individual human’s face from skeletal remains. Computer-graphic tools exist which support this task. See Section 8.2.6. Facticity Law’s commitment to relate as validly as possible to occurrences and events outside itself (and which took place in the past), normally requiring various mechanisms of representation based on some sort of truth by correspondence. Fact positivism As defined by Donald Nicolson (1994), it “is to the study and practice of fact-finding what legal positivism is to the study and practice of law. Both encourage the view that the task of lawyers and adjudicators is neutral and value-free. Both focus attention on logic, whether of rules or of proof, and away from the inherently political and partial nature of law and facts” (ibid., p. 726). Factfinders Also called triers of fact. In a judicial context: the judicial decision-makers who are empowered to give the verdict; i.e., the jury (jurors are also called lay triers of fact, or lay factfinders, or lay magistrates, or then popular judges, the latter, e.g., at the Assizes in Italy) in a jury trial, or the professional judge or judges (also called a stipendiary magistrate), in a bench trial (a trial with no jury). In a jury trial, before the jurors retire to consider their decision, the judge instructs the jury about how to go about the decision-making process. In the United States, instructions to the jury tend to be a standard formula, whereas in England, judges tend to produce an elaborate speech to the jury, highly customized for the case at hand. Also see Jury. Factual truth The past cannot be reproduced or relived. It can only be reconstructed. Moreover, there are factors – such as rules of extrinsic policy to exclude use of some kinds of evidence, such rules being intended to privilege some values (e.g., the protection of personal rights) over the rectitude of decision as aiming at

the factual truth, or even considerations in terms of cost/benefits – that militate for there consciously being an increased likelihood of a gap between the factual truth, and the legal truth that will result from the verdict. Factum probandum (plural: facta probanda) That which is to be demonstrated by means of the factum probans (or of several facta probantes). Factum probans (plural: facta probantes) Evidence in support of a factum probandum. FADE (Fraud and Abuse Detection Engine) A data mining system developed by the online auction site eBay in order to detect fraud perpetrators at its site (Mena, 2003, p. 254). False positive At a criminal trial, a false positive is a wrong conviction,22 whereas a false negative is a wrong acquittal. In data mining software tools for the detection of suspicious transactions, a false positive is a false alarm, whereas

22 Already Borchard’s book (1932) was concerned with wrongful convictions; it is significant that it was published by Yale University Press. Cf. Leo (2005). http://www.innocenceproject.org/ is a website that documents real-life cases of miscarriages of justice. Clive Walker and Keir Starmer’s edited book (1999) Miscarriages of Justice: A Review of Justice in Error examines the various steps within the criminal justice system which have resulted in the conviction of the innocent, and suggests remedies to avoid such situations in the future. The perspective is especially that of England and Wales. There are two initial chapters in Part I, “The nature of miscarriages of justice”, and these are chapter 1, “The Agenda of Miscarriages of Justice”, and chapter 2, “Miscarriages of Justice in Principle and Practice”, both of them by Clive Walker. Part II, “The Criminal Justice Process in England and Wales and Miscarriages of Justice”, comprises chapter 3, “Police Investigative Procedures”, by David Dixon; chapter 4, “The Right to Legal Advice”, by Andrew Sanders and Lee Bridges; chapter 5, “The Right to Silence”, by Keir Starmer and Mitchell Woolf; chapter 6, “Forensic Evidence”, by Clive Walker and Russell Stockdale; chapter 7, “Disclosure: Principles, Processes and Politics”, by Ben Fitzpatrick; chapter 8, “Public Interest Immunity and Criminal Justice”, by Clive Walker with Geoffrey Robertson; chapter 9, “Trial Procedures” by John Jackson; chapter 10, “The Judiciary”, by Clive Walker with James Wood; chapter 11, “Post-conviction Procedures”, by Nicholas Taylor with Michael Mansfield; chapter 12, “Victims of Miscarriages of Justice”, by Nicholas Taylor with James Wood; and chapter 13, “The Role and Impact of Journalism”, by Mark Stephens and Peter Hill. This is followed by Part III, “Miscarriages of Justice in Other Jurisdictions”, which comprises chapter 14, “Miscarriages of Justice in Northern Ireland”, by Brice Dickson; chapter 15, “Miscarriages of Justice in the Republic of Ireland”, by Dermot Walsh; chapter 16, “Miscarriages of Justice in Scotland”, by Clive Walker; and chapter 17, “The French Pre-trial System” by John Bell. Part IV, “Miscarriages of Justice in Summary”, comprises chapter 18, “An Overview”, by Helena Kennedy and Keir Starmer. An earlier edition, entitled Justice in Error, appeared in 1993. Between the two editions, there had been intervening reforms in England and Wales. The 1999 book, Miscarriages of Justice, considers these reforms, and considers whether the concerns expressed earlier have been adequately addressed. The chapters in the 1993 version were: “The criminal justice process in England and Wales and miscarriages of justice”, which comprises chapters 3 to 13: “Police Investigative Procedures: Researching the Impact of PACE” by Clive Coleman, David Dixon, and Keith Bottomley; “The Right to Legal Advice” by Andrew Sanders and Lee Bridges; “The Right to Silence” by Fiona McElree and Keir Starmer; “Forensic Evidence” by Russell Stockdale and Clive Walker; “Prosecution Disclosure – Principle, Practice and Justice” by Patrick O’Connor; “Trial Procedures” by John Jackson; “Post-conviction Procedures” by Michael Mansfield and Nicholas Taylor; “The Prevention of Terrorism Acts” by Brice Dickson. Next, Part III, “Miscarriages of Justice in Other Jurisdictions”, comprises chapters 14 to 17: “Miscarriages of Justice in the Republic of Ireland” by Dermot Walsh; and “The French Pre-trial System” by John Bell.

a false negative is an undiscovered case. With data mining, Mena remarks (2003, p. 221): Often an alert of a suspected crime needs verification by human personnel and may require special processing, such as putting a transaction in a special queue or status. A false positive needs special attention and time, while a false negative may cause further losses. In other words, the costs of both are different. However, in both instances, consideration must be given that doing nothing is the worst possible action and option facing a business, government agency, or law enforcement unit. The cost of doing nothing may, in time, be the most expensive option of all, especially in situations involving the destruction of trust, data, systems, property, and human life. Field question In argumentation studies, Walton’s (1997) Appeal to Expert Opinion offered (ibid., pp. 211–225) an argumentation scheme for “Argument for Expert Opinion”, then reproduced in Walton et al. (2008, pp. 381–382). See s.v. Expert opinion, Appeal to above. The expert source is E; the subject domain is S; and A is a proposition which E claims to be true (or false). The field question is: “Is E an expert in the field that A is in?”. It is articulated in four detailed subquestions: “Is the field of expertise cited in the appeal [to expert opinion] a genuine area of knowledge, or an area of technical skill that supports a claim to knowledge?”; “If E is an expert in a field closely related to the field cited in the appeal, how close is the relationship between the expertise in the two fields?”; “Is the issue one where expert knowledge in any field is directly relevant to deciding the issue?”; “Is the field of expertise cited an area in which there are changes in techniques or rapid developments in new knowledge, and, if so, is the expert up to date in these developments?”. Final submissions The final speeches to the bench of the lawyers for both parties, before the court decides about the case. FinCEN (Financial Crimes Enforcement Network) The U.S. Treasury agency set up to detect money laundering. A project for FinCEN, whose goal is to identify money laundering networks, by carrying out network link analysis, was reported on by Goldberg and Wong (1998). Links are created in databases of financial transactions.23 Fingerprints The fingerprints of human hands are, relative to other biometric features, more reliable characteristics of an individual person, “because of their immutability and individuality [...]. Immutability refers to the permanent and unchanging character of the pattern on each finger from before birth until decomposition after death. Individuality refers to the uniqueness of ridge details across individuals; even our two hands are never quite alike. Fingerprint techniques have the benefit of being a passive, nonintrusive identification system and have the additional advantage to use low-cost standard capturing devices (Espinosa-Duró, 2002)” (Khuwaja, 2006, p. 25). In biometrics, also palmprints are used, of the entire palm of a hand (Kumar, Wong, Shen, & Jain, 2003). In the 2000s, a trend

23 See Section 6.1.2.2 and fn. 36 in Chapter 6.

became felt in scholarship to question the reliability of fingerprint evidence. See Sections 8.7.2 and 8.7.3. Fingerprint identification The identification, by specialized forensic experts (fingerprint experts), of the identity of an individual whose fingerprints are available. There is a debate as to how many similarities between two prints prove identity beyond almost any doubt. See Sections 8.7.2 and 8.7.3. Fingerprint identification is not the same as fingerprint verification. Both kinds require fingerprint recognition. By analogy with fingerprints of a person’s hand being unique identifiers, DNA identification techniques have been called DNA fingerprinting. Also in intrusion detection within computer security, metaphorically one speaks of fingerprints and fingerprinting (Section 6.2.1.12), in relation to attempts to identify an intruder. Fingerprint compression Digitised fingerprint cards are held, e.g., by the FBI, in massive quantities, so when digitized, image compression is required. Compression, however, should not be such that features necessary for matching would be lost. “Because fingerprint ridges are not necessarily continuous across the impression due to minutiae, ridges endings, or bifurcations, the information needed to determine that one fingerprint matches another resides in the fine details of the minutiae and their relationships. Consequently, these details have to be retained for matching algorithms” (Khuwaja, 2006, p. 25). Compression techniques use, e.g., wavelet packets (Khuwaja, 2004). Fingerprint matching algorithms Computational algorithms from image processing, that match a given input fingerprints card against either a single stored fingerprints card (in fingerprint verification), or a fingerprints database (in fingerprint identification). Some fingerprint matching algorithms use neural networks (e.g., Leung, Leung, Lau, & Luk, 1991; Khuwaja, 2006). “One advantage of any neural network, which performs a fingerprint recognition task, is that it will learn its own coarse-grained features; thus, precise locations do not form any part of an input set (Hughes & Green, 1991)” (Khuwaja, 2006, p. 26). See Section 8.7.3. Fingerprint recognition Such image processing, possibly computational, that fingerprints are analysed, and matched against a pool of fingerprint cards. If the process is computational on digitised images, fingerprint matching algorithms are applied. The purpose may be either fingerprint identification, or just fingerprint verification. The finer level is minutiae detection, as opposed to coarse-grained features such as ridges in a fingerprint image. Fingerprint scanning An input technique for digitized fingerprint databases, that transforms extant fingerprint cards (as used in manual processing and recognition) into digitized images. Fingerprint sensors Equipment for an input technique for taking a person’s fingerprints, by obtaining a digitized fingerprint image directly from that person. Igaki, Eguchi, and Shinzaki (1990) described a holographic fingerprint sensor. Fingerprint verification A person who claims a given identity has his or her fingerprints checked. The outcome is binary: either acceptance, or rejection. This

is less processing-intensive than the fingerprint identification of suspects, and is more typical of situations when security measures are taken, in order to prevent undue access, i.e., in fingerprinting for security, an application for which a personal authentication system is used.

FLINTS A software tool for criminal intelligence analysis. It performs network link analysis, and was developed by Richard Leary, who originally applied it in the West Midlands Police. See Chapter 7.

Foil At an identity parade (i.e., a lineup), or else in a photoarray, any one of several look-alikes, known to be innocent, who appear alongside the suspect. The eyewitness is asked to identify the suspect, but without a bias (such as suggesting that the perpetrator is actually one of those persons). Wells (1993) suggested criteria for minimising foil bias. See Section 4.5.2.3.

Forensic computing A discipline that provides techniques and strategies for computer investigations, in response to computer crime. Also known as evidential computing, or computer forensics. The latter is distinguished from digital forensics. See Section 6.2.1.5.

Forensic sciences Various scientific specialties (such as chemistry, areas within medicine, psychology, handwriting analysis, fingerprint analysis, and so forth) when applied for the purposes of crime analysis and fact investigation, or for evaluations for the use of the court. There is a multitude of such specialties, with an increasing role in court. Sometimes globally referred to in the singular: forensic science.

Forensic test A test applying any of the forensic sciences.

Free proof Historically in Continental Europe (in the Romanist tradition), free proof – in German freie Beweiswürdigung, in French l'intime conviction – pertains to the evaluation of the evidence, according to a system which replaced the so-called legal proof (in Latin probatio legalis, in French preuve légale, in German gesetzliche Beweistheorie), and emancipated the judicial evaluation of the evidence from the older law of proof (this also involved the demise of torture as a means for obtaining evidence). In Continental Europe, free proof replaced rules of quantum and weight; these did not use to be part of the English and American judiciary systems. In the United States (in the context of Common Law systems from Anglo-Saxon countries), free proof or freedom of proof (not a term of art in England) historically lent itself to several different usages, but in current discussions in legal scholarship in the U.S. it denotes the freedom of triers of facts (i.e., factfinders, judicial decision-makers) from exclusionary rules affecting the evidence. See Twining (1997), Stein (1996).

Generalisations Or background generalisations, or background knowledge, or empirical generalisations: common sense heuristic rules, which apply to a given instance a belief held concerning a pattern, and are resorted to when interpreting the evidence and reconstructing a legal narrative for argumentation in court.
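The following minimal sketch illustrates, schematically, the relation between the entries Fingerprint matching algorithms, Fingerprint verification and Fingerprint identification above. It is a toy illustration under simplifying assumptions: the minutiae representation, the tolerances and the scoring rule are invented for the example, and do not reproduce any of the published algorithms cited in this glossary (such as the neural-network matchers of Leung et al., 1991, or Khuwaja, 2006).

from dataclasses import dataclass
from math import hypot

@dataclass(frozen=True)
class Minutia:
    x: float        # position of the minutia in the fingerprint image
    y: float
    angle: float    # local ridge direction, in degrees

def match_score(probe, stored, dist_tol=8.0, angle_tol=20.0):
    """Fraction of probe minutiae with a compatible counterpart in 'stored'."""
    if not probe:
        return 0.0
    hits = 0
    for m in probe:
        for n in stored:
            d = hypot(m.x - n.x, m.y - n.y)
            a = abs(m.angle - n.angle) % 360.0
            if d <= dist_tol and min(a, 360.0 - a) <= angle_tol:
                hits += 1
                break
    return hits / len(probe)

def verify(probe, stored_card, threshold=0.6):
    """Fingerprint verification: one-to-one check of a claimed identity."""
    return match_score(probe, stored_card) >= threshold

def identify(probe, database, threshold=0.6):
    """Fingerprint identification: one-to-many search over a database of cards."""
    best_identity, best_score = None, 0.0
    for identity, card in database.items():
        score = match_score(probe, card)
        if score > best_score:
            best_identity, best_score = identity, score
    return best_identity if best_score >= threshold else None

In an actual system the probe would first be aligned (rotation and translation) with the stored card, and coarse-grained features may be learned automatically by a neural network, as noted in the Khuwaja (2006) quotation above.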

Geoinformatics or geomatics The science and technology of gathering, analysing, interpreting, distributing and using geographic information. It encompasses surveying and mapping, remote sensing, geographic information systems (GIS), and the Global Positioning System (GPS).

Guilty plea Typically, in criminal procedure in Anglo-American jurisdictions, the option a defendant is offered to admit guilt at the beginning of a trial, typically in connection with a plea bargain being offered, with an explicit grant of sentencing concessions (a lighter sentence) for such a plea. In countries in Continental Europe, typically there was no possibility of a guilty plea, prosecution was obligatory instead of discretionary, and plea bargaining was not envisaged, and was frowned upon.

Handwriting identification A discipline (Morris, 2000) which, in the context of the forensic sciences, is part of the domain of questioned documents evidence (Levinson, 2000). See Section 6.1.10.

Hearsay Stated imprecisely: verbal statements attributed to others, or rumours. It is not admitted as evidence in court, if the person to whom the statement is ascribed could be called as a witness24 (see Sections 4.6.1 and 2.5.1). More precisely, the hearsay rule (in English and American law) "requires a court to exclude any written or oral statement not made in the course of the proceedings which is offered as evidence of the correctness of the matter asserted. A statement which is relevant independently of the real intention of the speaker or the truth of what is stated is not adduced for a testimonial purpose and is therefore outside the scope of the rule" (Pattenden, 1993, p. 138). In fn. 2 ibid., Rosemary Pattenden clarifies "independently of the real intention of the speaker": "For example, in a contract case a person is contractually bound if he makes an oral statement which a reasonable man would regard as an acceptance of a proffered offer, even though he did not intend by his words to accept the offer". In fn. 3, she explains "or the truth of what is stated": "For example, a statement offered to prove that the declarant could speak or a statement which it is alleged is libellous. The distinction between 'original' and 'testimonial' use of an out-of-court statement is not, however, absolute. The statement 'I am alive' asserts and demonstrates the same thing". As to the rationale behind the hearsay rule (which she challenges), Pattenden states:

The basis of the hearsay rule is supposedly the dangers which attach to the use of state- ments not made by witnesses within the confines of the courtroom where the declarant can be subjected to immediate cross-examination. However, when the question of admit- ting an out-of-court assertion arises in a criminal trial, no attempt is ever made to measure the real danger which the statement presents to the fact-finding process. Instead the court concentrates on conceptual issues – is the statement being used testimonially? If the answer is yes, does it fall within one of the narrow and inflexible common law

24 Also consider that a broader category is out-of-court witness statements (Heaton-Armstrong et al., 2006), which also include statements made to the police by a witness or defendant who also has to give testimony in court.

exceptions to the rule (all of which were created before the end of the nineteenth cen- tury) or one of the more recent, but equally limited, statutory exceptions to the rule. If the answer to the second question is no, the evidence is automatically rejected. There is never any question of weighing the probative value of the evidence against the risk of unreliability (Pattenden, ibid., p. 138). Craig Osborne (1997) remarked: “It is apparent when reading reports of decided cases that courts not always appreciate the existence of a hearsay problem”, as “there are also instances of the very existence of any problem at all being over- looked” (ibid., p. 254). “It is by no means the case that words said outside court and repeated in it will amount to hearsay. What matters is whether the statement from the speaker outside court is tendered to prove the truth of its contents” (ibid., p. 255). Express assertions are excluded by the hearsay rule (ibid.): With express assertions there must be an intention to communicate, thus non-verbal behaviour such as nods, gestures, pointing or signs may well amount to an express asssertion when what the person making the sign did is recounted to the court by another witness. As there must however, be some intention to communicate nobody has ever suggested that, say, a footprint, or yawning is subject to the hearsay rule. Implied assertions are problematic. “This is where the maker of the statement did not intend to assert any particular fact” (ibid., p. 256). “The reason why it has often been suggested that these kind [sic] of statements ought to be admissible as exceptions to the hearsay rule is that there is a smaller risk of untruthfulness with implied assertions. [...] The authorities in England are not entirely conclusive” (ibid.). For the United States, Ron Allen has claimed (2008a, p. 326):

1. The rules of evidence favour admissibility even in the face of legitimate claims of irrelevancy. The standard bearer here is of course the Supreme Court’s decision in Old Chief v. U.S. [519 U.S. 171 (1997]), but that simply acknowledged the obvious truth of the narrative structure of proof at trial. 2. This narrative structure is enhanced by liberal admission of evidence, and even the rule once claimed as the embodiment of the exclusionary prac- tices of Anglo-American law – the hearsay rule – has morphed into a rule of admission (Allen, 1992). All statements by parties are admissible, for example, no matter when or under what conditions made, as are all present sense impressions and statements of states of mind or physical conditions. Business records and government reports all come in readily, along with 35 or so other categories of admission. If none of the formal exceptions work, the courts may make up ad hoc exceptions to facilitate admission (Federal Rule of Evidence 807). Although there are some technical exclu- sionary rules, in reality, like hearsay, they often make promises that they do not keep. Another wide ranging example is the character evidence rules which promise exclusion but permit generous admissibility due to provisions such as FRE 404(b).

An expert system dealing with the hearsay rule is the Hearsay Rule Advisor (HRA). It was developed as an LL.M. project by Susan Blackman (1988), under the supervision of Marilyn MacCrimmon (1989). That expert system "provides advice on whether a statement comes within the definition of hearsay and if so,

whether the statement comes within an exception to the general rule excluding hearsay statements” (MacCrimmon, ibid., p. 468). The initial questions the user is asked by this expert system – MacCrimmon explains (ibid., pp. 467–468) – classify exceptions based on the context of the trial (whether the declarant is available to testify and the type of trial, civil or criminal). This part of the program eliminates some exceptions as more facts become known. At this time the exceptions included in the pro- gram are: dying declarations, declarations against interest, declarations in the course of duty, and business documents (British Columbia only). Hearsay exceptions in the HRA are classified on the basis of four dimensions: EVENT, PERCEIVE, BELIEVE [and] INTEND. First the system searches for an approximate match between the user’s facts and the events in the system. Once a match is found, the user is asked questions designed to assess whether the three dimensions of PERCEIVE, BELIEVE and INTEND for a particular exception are satisfied by the user’s facts. These questions are tailored to fit the EVENT identified so that the system does not waste time with irrelevant or inap- plicable questions. These dimensions fit the story model of Pennington and Hastie with the proviso that I assume that belief states are encompassed by the definition of psycho- logical states as is implicit in [their examples]. Legal liability often turns on whether a person knows, thinks, believes certain things and not simply on whether they are in a particular emotional state. We begin with the declarant as the principal actor. The action is the making of the statement. The EVENT is defined as the events which initiate the required belief states which initiate the goal of telling the truth. Thus for dying declaration the initiating events are the declarant is wounded, and the declarant is dying. It is assumed that these events initiate the belief that the declarant is saying that initiates the goal of telling the truth. For the exception, declarations in the course of duty, the initiating events are the declarant is performing a duty and others are relying on his or her actions which initiate the belief state that the declarant expects to be discovered if he or she makes an error which in turn initiates the goal of avoiding censure by his or her employer. The dimensions of PERCEIVE [and] BELIEVE may be related to states of the world which enable the declarant to make a true statement. Circumstances which facil- itate accurate are often required. [...] INTEND focuses on the facts of the specific case being considered in order to establish the requisite belief state. [...] It is quite important to understand that different jurisdictions can be expected, generally speaking, to treat hearsay, too, differently. Take the Italian context (Ferrua, 2010, section 19): Suppose P gave witness in court concerning what (being crucial for convicting defendant Q)hewastoldbyN, and that the latter, called as witness, is taking advantage of the right to avoid this being a next of kin, or at any rate, that N refuses to reply or does not appear at the hearing, and therefore deliberately avoids being cross-examined. There is no doubt that the guilt of the defendant cannot be proven based on statements that N may have made during the inquiry. But what are we to say concerning what N related to P,who provided indirect testimony? 
True, P is not avoiding being cross-examined, but should we allow conviction based on P’s testimony, which reproduces verbatim what N’s related, arguably amounts to admit it based on N’s statements, who always deliberately avoided being cross- examined. The only conceivable way to deny this would be to claim that the “statements” referred to by the criterion of evaluation25 are only the ones made during the trial, by

25 This is merely a criterion of evaluation ("criterio di valutazione"), not an exclusionary rule ("regola di esclusione probatoria").

strict analogy with what is in force concerning the cross-examination rule [under the Italian jurisdiction].26 At any rate, the Court of Cassation in Italy ruled on 4 October 2004 that indirect testimony is only inadmissible (“l’inutilizzabilità della testimonianza indiretta”),27 as per “articolo 195 commi 3 e 7 c.p.p.”, if the primary source is not indicated, or if it was requested by one of the parties that primary source be called as witness, and it was not (except because of death, infirmity, or because the person cannot be found). In particular, the Court of Cassation ruled that indi- rect testimony is admissible, and has to be considered, in such a case that the primary source resorted to the right not to reply, while being a defendant tried for a related crime.28 Hot-tubbing A particular approach to expert witnesses, known by that name espe- cially with reference to a practive at Australia’s Federal Court, but also known (although not by that name) from public inquiries in Britain (see s.v. Inquisitorial, towards the end of that entry). In a comparative review, Erica Beecher-Monas (2008) explained: In Australia, [...], the Federal Court has encouraged (through its court rules) both“hot- tubbing” and joint conferences of experts. In the joint conference court rules, judges attempt to control expert witness partisanship by directing expert witnesses to confer, or to produce a document identifying the matters on which the experts agree and those on which they disagree. Under the “hot-tubbing” rules, experts testify together in court, responding to questions from attorneys and each other, as well as the judge. Judges may also appoint their own witnesses, although they rarely do so in criminal trials. There is a difference between hot-tubbing and joint conferences of experts, in respect of lawyers’ interventions (ibid.): “Hot-tubbing” is also known as taking concurrent evidence. In this procedure, the experts for both sides simultaneously take court and question each other about their opinions on the record. They are also subject to questioning by the court

26 The original text from Ferrua (2010) is concerned with the second part of “art. 111 comma 4 Cost.”, and it is as follows: “Supponiamo che P abbia testimoniato in giudizio su quanto, decisivo per la colpevolezza dell’imputato Q, gli ha confidato N e che quest’ultimo, chiamato a deporre, si avvalga della facoltà di astensione come prossimo congiunto o, comunque, rifiuti di rispondere o diserti il dibattimento, sottraendosi così per libera scelta al contraddittorio. Nessun dubbio che la colpevolezza dell’imputato non possa essere provata in base alle dichiarazioni eventualmente rilasciate da N nell’indagine preliminare. Ma che dire per quanto raccontato a P, che ha deposto come teste indiretto? È vero che P non si sottrae al controesame, ma consentire la condanna sulla base della sua testimonianza, dove è testualmente riprodotto il racconto di N, non equivale forse a consentirla sulla base delle dichiarazioni di N che si è sempre sottratto per libera scelta al con- traddittorio? La sola via per rispondere negativamente sarebbe, per l’appunto, di sostenere che le ‘dichiarazioni’ a cui si richiama il criterio di valutazione siano solo quelle costituite nel processo, in stretta analogia con quanto vale per la regola del contraddittorio.” 27 The notion of inutilizzabilità, i.e., inadmissibility of criminal evidence, was discussed in an Italian context in Gambini (1997) and in Grifantini (1993, 1999). 28 An attempt at formalisation of the reasoning about hearsay was made by Tillers & Schum in “Hearsay Logic” (1992). Appendix: Glossary 1077

and the lawyers. Hot-tubbing, in contrast to joint conferences of experts, permits the participation of legal counsel in exchanges among the experts. Moreover, with hot-tubbing the parties have a say concerning procedure (ibid.):

In joint conferences, experts are supposed to work together (with only the experts present) to prepare a document probing areas of agreement and disagreement, to be submitted to the court. In its pristine form, a joint conference will exclude lawyers. In practice, however, there appears to be some flexibility – and the parties may have some ability to modify the joint conference procedure. For example, in one antitrust case, the parties refused to participate in a "hot tub" procedure and agreed to a joint conference only if they could treat the joint conference as mere negotiations, so that any communication or joint report could only be admissible with consent of the parties. If the parties can play such a significant role, it is questionable how far a joint conference can go toward solving the problem of partisan experts.

HUGIN A piece of Belief Net software, using which Neil and Fenton (2000) carried out calculations in order to present probabilistic legal arguments, concerning the Jury Observation Fallacy (q.v.).

HYPO A computer system for argumentation, fairly well-known in the discipline of AI & Law (Ashley, 1991). See Section 3.9.1. HYPO "is a case based reasoner developed by Ashley and Rissland at University of Massachusetts at Amherst. It analyses problem situations dealing with trade secrets disputes, retrieves relevant legal cases from its database, and fashions them into reasonable legal arguments. It has turned out to be the benchmark on which other legal case based reasoners have been constructed" (Stranieri & Zeleznikow, 2005a, Glossary).

IBIS An Issue-Based Information System that supports decomposing problems into issues. QuestMap (Carr, 2003) is a computer tool for supporting argumentation. It is based on IBIS, mediates discussions, supports collaborative argumentation, and creates information maps, in the context of legal education.

Identikit A system for generating composite faces, for the purposes of assisting a witness to describe the features of a criminal suspect. Identikit uses plastic overlays of drawn features. See Section 8.2.2.

Identity parade Also called line-up, or identification parade. A suspect stands in a line alongside foils, i.e., persons known to be innocent and who look alike, and the victim or witness has to identify the suspect. A computerised version is ID parade discs, on which video clips from a database appear, along with a video clip showing the suspect. See Section 4.5.2.8. In contrast, it is usually undesirable to have a dock identification, when the witness sees the accused at the trial for the first time after the offence. Even for such a case in which there was a parade, being an "identification procedure between crime and trial at which the witness has picked out the accused to assist the police", Osborne (1997, p. 305) raises a problem with hearsay (evidence admissibility rules include exclusionary rules which include the hearsay rule, as well as the rule against admission of a previous consistent statement):

If the witness confirms at the trial that he has previously picked out the accused, is he not, in effect, testifying as to a prior consistent statement? Moreover the hearsay implications are compounded if some other person is called to confirm that the witness picked out

the accused at the identification parade. The point is inadequately analysed and it is far from clear as to whether the courts have acknowledged the hearsay problem at all. See, e.g., R v Osbourne and Virtue [1973] 1 QB 678 where the witnesses in court could not remember whom they had picked out at an identification parade. A police inspector who had been present was allowed to testify about what had happened at the parade without the court acknowledging the hearsay point (Osborne, ibid.).

It may be that computer tools for use at identity parades may incorporate some function for recording the outcome with a given witness in such a way that would be useful in court, but it may depend on the jurisdiction (e.g., it would be in agreement with the "philosophy" of validating steps in Asaro's Daedalus in Italy, as opposed to Anglo-American procedural law). It would be interesting to see whether the way the software caters to validation may in turn result in something objectionable. Therefore, it would be useful to have a legal professional involved in discussions, during the software requirement analysis and design phase.

IFS (Internet Fraud Screen) A data mining tool giving credit-card fraud alerts, developed by CyberSource for Visa U.S.A. for matching fraudulent transactions. IFS "uses a combination of rule-based modeling and neural-network modeling techniques" (Mena, 2003, p. 271). IFS's "profile scores look at more than a dozen different information items, including the customer's local time and the risk associated with the customer's e-mail host. CyberSource also provides e-retailers with an IFS report that includes risk profile codes, address verification systems (AVS) codes, and other relevant information to help e-merchants calibrate their risk thresholds and score settings. This helps the e-business subscribers to control the level of risk they want to operate under" (ibid.).

Image forensics A branch of forensic science whose goal is the detection of image tampering. The tampering is typically done by computer (digital forgeries), and the computational methods for detection (digital image forensics) belong to image processing within signal processing. See Farid (2008), Popescu and Farid (2007), Johnson (2007). We have discussed such techniques in Section 8.2.5.

Imputation A charge, including a charge possibly implied by a defendant while attacking the credibility of others, and affecting their character. (See character evidence.) In a section entitled "Imputations on the character of the prosecutor or his witnesses or on the deceased victim of the alleged crime", Osborne (1997, p. 323) explains that before Selvey v DPP [1970] AC 304, in English law "there was a problem as to whether the accused lost his shield [i.e., its being inadmissible for the prosecution to adduce bad character evidence about him, or to ask such questions during cross-examination that would aim at proving his bad character] by making imputations on prosecution witnesses which were necessary to develop his defence, or whether he only lost it if the imputations were merely to attack their credibility". For example, it may be that a defendant would need to claim that it actually was "a prosecution witness who actually committed the offence" (ibid.). In Selvey v DPP, the House of Lords held that prosecution is allowed to cross-examine "the accused as to character where he casts imputations on prosecution witnesses either in order to show their unreliability or where he

does so in order to establish his defence" (Osborne, ibid., p. 324), yet with exceptions: "In a rape case the accused can allege consent without losing his shield", "If what is said amounts in reality to no more than a denial of the charge then an accused does not lose his shield", and "There is an unfettered judicial discretion to exclude cross-examination as to character even if strictly permissible but there is no general rule that this discretion should be exercised in favour of the accused even where the nature of his defence necessarily involves his attacking prosecution witnesses" (ibid.). "The difficult question which is for the judge to decide is: 'What is an imputation?' The courts have tried, not always with great success, to draw a distinction between what is merely a denial of the charge by the accused in forceful language and what amounts to an imputation" (ibid.). For example, if a defendant claims that a witness is "a liar", should this be merely treated as a denial of the charge, or is it to be treated as an imputation, because "this may in effect be an allegation of perjury"? (ibid.).

Inclusionary principle In the American law of evidence, according to a formulation originally proposed by James Bradley Thayer, the principle "That everything which is thus probative should come in, unless a clear ground of policy or law excludes it". See also exclusionary principle.

Independent Choice Logic (ICL) Poole (2002) applied this formalism to legal argumentation about evidence. The formalism can be viewed as a "first-grade representation of Bayesian belief networks with conditional probability tables represented as first-order rules, or as a [sic] abductive/argument-based logic with probabilities over assumables" (p. 385).

Inductive reasoning "Inductive reasoning is the process of moving from specific cases to general rules. A rule induction system is given examples of a problem where the outcome is known. When it has been given several examples, the rule induction system can create rules that are true from the example cases. The rules can then be used to assess other cases where the outcome is not known" (Stranieri & Zeleznikow, 2005a, Glossary).

Inference The process of deriving conclusions from premises.

Inference engine "An inference engine is that part of an expert or knowledge based system that contains the general (as opposed to specific) problem solving knowledge. The inference engine contains an interpreter that decides how to apply the rules to infer new knowledge and a scheduler that decides the order in which the rules should be applied" (Stranieri & Zeleznikow, 2005a, Glossary). A minimal illustrative sketch is given below, after the entry Inquiry.

Inference network "The inference net model is a probabilistic retrieval model; since it uses a probability ranking principle. It computes Pr(I|document), which is the probability that a user's information need is satisfied given a particular document" (Stranieri & Zeleznikow, 2005a, Glossary).

Inquest In Britain: a judicial or official inquiry, usually before a jury, typically in order to identify the causes of a death, in case this is not certified by a physician, or where the possibility of a crime cannot be ruled out a priori. The Dead Bodies Project (developed by Jeroen Keppens and others during the 2000s) has been described in Section 8.1.

Inquiry See investigation.
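As a minimal illustration of the entries Inference engine and Inductive reasoning above, the following sketch keeps a tiny rule base separate from the general-purpose interpreter that applies it by forward chaining. The rules about a burglary report are invented for the example; an actual expert system for legal or investigative use would involve far richer knowledge representation and control.

# Domain knowledge (the rule base) is held apart from the interpreter,
# as in the definition of an inference engine quoted above.
# These rules about a burglary report are invented purely for illustration.
RULES = [
    ({"window_broken", "entry_at_night"}, "forced_entry"),
    ({"forced_entry", "goods_missing"}, "burglary_suspected"),
    ({"burglary_suspected", "matching_modus_operandi"}, "link_to_known_offender"),
]

def infer(facts, rules=RULES):
    """Forward chaining: apply every rule whose premises hold, until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:   # a naive scheduler: textual order
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"window_broken", "entry_at_night", "goods_missing"}))
# Derives 'forced_entry' and 'burglary_suspected', but not 'link_to_known_offender'.

The rules here were written by hand; a rule induction system, in the sense of the Inductive reasoning entry, would instead attempt to learn such rules from example cases whose outcomes are known.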

Inquisitorial A type of criminal procedure, which is typical of many European countries on the Continent. As opposed to the adversarial system, typical of Anglo-American jurisdictions. The adversarial system features a symmetry between the parties, whereas in the inquisitorial system the court is rather struc- turally aligned with the prosecution vis-à-vis the defendant, yet the court is to be convinced and adjudicates. It is of interest to consider how professionals used to the adversarial system perform, when they have to abide by the inquisitorial system. In England and Wales, as well as in Scotland, whereas the adversarial system characterises court proceedings, public inquiries are in theory inquisito- rial (see public inquiry). This is also the case of the Coroner’s court. Professor Sir Ian Kennedy, who chaired the public inquiry (which lasted two years and nine months, from 1998 until 2001) into the conduct of children’s heart surgery at the Bristol Royal Infirmary between 1984 and 1995, and he described in a paper (Kennedy, 2007) his experience in that capacity. In 2005, Parliament passed the Inquiries Act, but the paper is general enough to retain interest even under the new regime of public inquiries (ibid., p. 15). In the interest of openness, information technology (IT) was resorted to (ibid., p. 14):

The Inquiry needed to be completely open so that everyone could see and hear the same evidence. This was achieved by the use of IT with scanning of all the documents: this came to over 900,000 pages. A “Core Bundle” of relevant documents was prepared as a CD available to all legal representatives.29 The daily proceedings could be seen at three separate locations as well as Bristol. It was also vital to create a website (this received over 1 million hits during the Inquiry and won a prestigious NHS30 prize).

The approach was inquisitorial, and therefore (ibid., p. 38):

In keeping with its inquisitorial approach, the Inquiry made it clear that there were no “parties”, no “sides”, to advance their particular view of events. Witnesses were called by the Inquiry and were the Inquiry’s witnesses. They were there to assist the Inquiry. They were not there to score points in their own favour or against others. Legal repre- sentatives initially found these challenging propositions. They were used to taking sides on behalf of a client. But, gradually, they understood.

29 In a section about the use of information technology, Kennedy explained (2007, pp. 41–44): "By scanning all relevant documents into the Inquiry's data-base, it was possible to ensure that the Inquiry, and particularly its legal team, could have access to all the relevant evidence collected at the earliest possible stage, and in a manageable form on computers. The creation of a CD containing the 'Core Bundle' achieved the same effect for both the Panel and the legal representatives of all those involved in the Inquiry. Witnesses' statements, and comments on them, were equally added to the database and were thus accessible to those involved. Once they began, the hearings were effectively 'paper free'. Counsel to the Inquiry, and other legal representatives on the occasions on which they addressed the Inquiry, were simply able to identify the unique code given to each document for it to be transmitted onto the computer screens of the panel, other lawyers and onto the screens available to the public. It was estimated that, by not having to search through shelves of box files to find the relevant document and the[n] pass it around to all, the Inquiry was able to accomplish anything from a quarter to a third more work on an average day of hearings."
30 National Health Service.

Its being inquisitorial in a country used to the adversarial system, it is challenging for those conducting the inquiry, as well as to the lawyers (ibid., pp. 37–38):

It is often said that Public Inquiries are inquisitorial by nature. But the reality often is otherwise. One explanation is that those chairing Inquiries are usually unfamiliar with how to translate the idea into practice. Moreover, since they are very commonly judges, their first instinct is to revert to what is familiar and convert the Inquiry’s proceedings into a courtroom. And courtrooms, in England, are not characterized by an inquisitorial approach (with the exception of the Coroner’s court). Rather they are characterized by what can be described, perhaps a little provocatively, as a gladiatorial approach. The gladiators are the lawyers, usually counsel. The judge watches and gives the thumbs up, or down, at the end. Lawyers provide a further explanation to why the proceedings, though theoretically inquisitorial, soon take on an inquisitorial quality. This is because lawyers also are familiar with courts and the procedure of courts. It comes as no surprise that they will seek to treat the Inquiry as if it were just another court. This has also to do with how the process of arriving at the legal truth is conceived of (ibid., p. 38), as the lawyers’ attitude during the Inquiry

also rests on a fundamental premise, particularly of counsel, that there only is one way to discover the truth and that it is through the cut and thrust of examination and cross- examination. Leave aside the fact that we have already seen that the concept of a single “truth” may be self-delusionary, the approach misses the point of what the Inquiry is seeking to do. It is not seeking to paint a picture just in black and white; that something happened and something else did not, that someone did wrong. It does not occupy a binary world of right and wrong, good and bad. What it is trying to do is understand, and understanding rarely comes in black and white. Furthermore, whatever else may emerge from gladiatorial contest, understanding rarely does. Some lawyers are counsel to the inquiry, whereas some other persons are the legal representatives of groups and organisations involved in the inquiry (ibid., p. 39). The most prominent duty of counsel to the inquiry is “to take witnesses appearing before the Inquiry through their evidence”, ensuring that “the Inquiry heard both the witness’ account, any challenges to it and their responses to these challenges” (ibid.). Counsel to the Inquiry also organise the material collected by the inquiry. As to legal representatives, what they do during the hearings of an inquiry different from what they would be doing in court. This is because the procedure of the inquiry as defined by Kennedy for the Bristol Public Inquiry does not envisage that it would be the legal representatives who would examine and cross-examine the witnesses: it was up to the counsel to the inquiry to do so (ibid., pp. 39–40). In fact, “there is no right, as such, to cross-examination of witness (usually by lawyers) at a Public Inquiry. The Inquiry must, of course, behave fairly at all times” (ibid., p. 40). “The position of the Inquiry was simple. It wanted to hear witnesses telling their stories, rather than have the story filtered through the interventions of their legal representatives, who might seek to gloss over this, or over-emphasise that, out of their of what it was good for the Inquiry to hear” (ibid., p. 41). Also the status of expert witnesses was that of an amicus curiae, called to assist the Inquiry (at a bench trial, especially under the inquisitorial system, it would be to assist the court): “Just like other witnesses, experts also were the 1082 Appendix: Glossary

Inquiry’s experts. They gave evidence to assist the Inquiry in its task”, rather than representing one of the two sides in the adversarial system (ibid., p. 38). “They were advised as to the assistance that the Inquiry needed and gave their evidence accordingly, whether in the hearings, or in the conduct of several ana- lytical studies carried out by the Inquiry” (ibid.). Nevertheless, this being Britain (ibid., pp. 38–39): Again, this was unfamiliar territory to legal representatives. They were used to experts appearing for one “side” or another. They urged that the expert should brief them and then they would question a particular witness, or advise the Inquiry, in the light of what they gleaned from the expert’s briefing. I indicated that the Inquiry wished to hear from the experts and did not wish to hear their views “second hand”, through counsel. I went further, and said that the Inquiry would benefit from experts taking part in the hearings at the same time as other witnesses, so that the Inquiry could test arguments as they were put, and witnesses could refer to experts as peers, sitting alongside them, in discussing areas of technical expertise. (This is like hot-tubbing, q.v., at Australian courts.) Bear in mind that the public inquiry was about a hospital, and whereas some physicians were regular wit- nesses or were being investigated, some other physicians were called as expert witnesses. Therefore (ibid., p. 39): This they did, and the Inquiry would sometimes listen spellbound as expert and witness discussed matters of significant complexity, whether it was the correct response to a particular anatomical anomaly in the heart, or why paediatric intensive care was different from the care of adults, or how a particular statistical conclusion could be arrived at. [...] They made it plain that honest professionals could legitimately differ. [...] Insecurity governance or insecurity management A branch of information tech- nology concerned with how to respond, on an organisational level, to threats to computer security. Instructing party The client of either a lawyer or an expert consultant (an expert witness). The lawyer or the consultant is instructed by the client. Interesting case In law: “For a first instance decision to be interesting it must: 1) be appealed, or 2) includes a new principle, rule or factor in its ratio decidendi, or 3) exhibits an outcome vastly at odds with other similar cases” (Stranieri & Zeleznikow, 2005a, Glossary). Not the same as landmark case (q.v.). Interesting pattern In knowledge discovery from databases (KDD) and in data mining: “A pattern is interesting if it is a) easily understood by humans, b) valid (with some degree of certainty) on new or test data, c) potentially useful and d) novel. A pattern is also interesting if it validates a hypothesis that the user sought to confirm” (Stranieri & Zeleznikow, 2005a, Glossary). InvestigAide B&E An expert system (Valcour, 1997) for the Canadian Police, whose purpose was to support the processing and investigation of breaking and entering cases, by assisting in gathering and recording case data, and providing such information as suspect characteristics. Investigation Actions undertaken, typically by the police, in order to identify crim- inal suspects, or the extent of criminal activities. In the Unites States, the term investigation is used. In Britain, it is inquiry. Appendix: Glossary 1083

Itaca A tool, modelled after Daedalus (q.v.), for the Court of Cassation in Rome, under contract to Siemens, as per the design of Mr. Justice Carmelo Àsaro (who when a prosecutor in Lucca, developed Daedalus single-handedly). See procedural-support systems, and Section 4.1.3. Jury or lay factfinders In some countries, a group of citizens, not trained judges, who adjudicate trials of some categories of crime. In Anglo-Saxon countries, the judge can instruct the jury, but does not participate in the determination of the verdict. In Italy, some cases are heard by a mixed court (first introduced in colonial Libya), of trained judges and jurors, and after the verdict is given, the justification of the verdict must also be given: it is written by a trained judge, who – if outvoted by the jurors – may write a so-called sentenza suicida, i.e., a deliberately flawed justification in order to bring about an appeal. In Anglo- Saxon jurisdictions, a retrial may be ordered if some stringent conditions are met. Also see factfinders. Jury observation fallacy A claim, so named, against the use of knowledge of prior convictions of a criminal suspect. See character evidence. Fenton and Neil (2000) tried to support this claim by making use of Bayesian networks to present probabilistic legal arguments. Adrian Bowyer summarised this stance in a letter published in the latest issue of June 2001 of the London Review of Books (LRB), a letter immediately signalled in an e-list posting by Mike Redmayne: Writing about Labour’s proposal in its Criminal Justice White Paper that defendants’ past convictions should be revealed to juries, John Upton (LRB, 21 June) fails to mention the Jury Observation Fallacy. According to this, if a jury finds someone not guilty on the evidence presented in court – in other words, without taking previous convictions into account – the fact that this defendant has previous convictions for similar crimes usually makes it more, not less, probable that he or she is indeed innocent of this particular crime. This is because, when a crime is committed, the police quite reasonably go out and feel the collars of those with previous convictions for similar crimes. They therefore tend to fish in a highly non-representative pool, rather than picking suspects from the general population. This tips the probabilities in the defendant’s favour to an extent that is not outweighed by the likelihood of a certain fraction of past offenders becoming recidivists. If the defendant is considered innocent on the facts of the case, then his past convictions should be seen as evidence not so much of guilt as of the failures of police procedure. Mike Redmayne (a legal scholar of the London School of Economics, quite at home with probabilistic modelling) was unconvinced by some of the assumptions made. In a posting discussing Fenton and Neil (2000) at an e-list,31 he claimed: Your conclusion is sensitive to the probability that a defendant will be charged given a previous conviction and no hard evidence. If the probability is less than 1 in 200, the fallacy disappears. One point about this is that there are further screening stages between charge and trial, and even between trial and acquittal (the judge can be asked to certify that there is “a case to answer”). It would be very difficult for a case to get to the jury when (more or less) the only evidence against a defendant is that he has previous convictions for crimes similar to the one with which he’s now charged. If there is other

31 [email protected]

evidence against the defendant, surely that affects your conclusion, because it increases the probability of guilt? There is also likely to be evidence against a defendant other than a previous conviction because most suspects come to police attention independently of their having previous convictions. They may have been caught in the act, or, very often, reported by a member of the public – e.g. the victim. A few cases will get to court when there is very little evidence other than the defendant's similar previous convictions. At this point, I wasn't sure quite what you meant by "similar" in your model. Similarity can include more than a crime being of the same legal category. It can include similarities in modus operandi, geographical proximity, and so forth. It is where previous convictions have this sort of similarity (sometimes called "striking similarity") that a case may get to the jury on previous convictions alone. (I also suspect that the police rely on such similarities when deciding which suspects to arrest.) If "similar previous conviction" is expanded in this manner, mightn't previous convictions have more probative value than you allow? These are obviously points about the operational reality of the criminal justice system, and you can't be blamed for not mentioning them. [...]
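Redmayne's point about the sensitivity of the conclusion to the charging probability can be illustrated numerically. The following toy Bayesian calculation is only a sketch under invented figures; it is not Fenton and Neil's (2000) Bayesian-network model, and none of the probabilities assumed here has any empirical standing.

def p_guilty_given_acquittal(p_fished,
                             p_guilty_if_evidence=0.9,    # assumed figure
                             p_guilty_if_fished=0.2,      # assumed figure
                             p_acquit_if_guilty=0.2,      # assumed figure
                             p_acquit_if_innocent=0.8):   # assumed figure
    """Posterior probability of guilt, after an acquittal on the facts, for a
    defendant with previous convictions; 'p_fished' is the assumed probability
    that such a defendant was charged only because of those convictions."""
    p_guilty = (p_fished * p_guilty_if_fished
                + (1.0 - p_fished) * p_guilty_if_evidence)
    p_acquit = (p_acquit_if_guilty * p_guilty
                + p_acquit_if_innocent * (1.0 - p_guilty))
    return p_acquit_if_guilty * p_guilty / p_acquit

for p_fished in (0.005, 0.1, 0.5, 0.9):
    posterior = p_guilty_given_acquittal(p_fished)
    print(f"charged on priors alone with probability {p_fished:5.3f}:"
          f"  P(guilty | acquitted) = {posterior:.2f}")

On these invented figures the posterior probability of guilt falls as the "fishing" probability rises, which is the shape of the fallacy argument; when that probability is made very small the effect disappears, which is precisely Redmayne's sensitivity point.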

Jury research Thriving in North America. It has produced various models of jurors' decision-making, as well as empirical results. See Section 2.1.7.

Kappa calculus In AI: a formalism introduced by Spohn (1988). The kappa value of a possible world is the degree of surprise in encountering that possible world, a degree measured in non-negative integer numbers. The probabilistic version of the kappa calculus was applied in Shimony and Nissan (2001) in order to restate Åqvist's (1992) logical theory of legal evidence, which Åqvist based on the Bolding-Ekelöf degrees of evidential strength (Bolding, 1960; Ekelöf, 1964). See Section 2.6.

Knowledge acquisition "Knowledge acquisition is the transfer and transformation of potential problem-solving expertise from some knowledge source to a program" (Stranieri & Zeleznikow, 2005a, Glossary).

Knowledge based system "A knowledge based system is a computer program in which domain knowledge is explicit and contained separately from the system's other knowledge" (Stranieri & Zeleznikow, 2005a, Glossary).

Knowledge discovery "Knowledge discovery is the non trivial extraction of implicit, previously unknown and potentially useful information from data" (Stranieri & Zeleznikow, 2005a, Glossary).

Knowledge engineering "Knowledge engineering involves the cooperation of domain experts who work with the knowledge engineer to codify and make explicit the rules or other reasoning processes that a human expert uses to solve real world problems" (Stranieri & Zeleznikow, 2005a, Glossary).

Knowledge engineering paradox "The knowledge engineering paradox is that the more competent domain experts become, the less able they are to describe the knowledge they use to solve problems" (Stranieri & Zeleznikow, 2005a, Glossary).

Knowledge engineering process "The knowledge engineering process is the process of transferring knowledge from the domain experts to the computer system.

It includes the following phases: knowledge representation, knowledge acquisi- tion, inference, explanation and justification” (Stranieri & Zeleznikow, 2005a, Glossary). Knowledge representation “Knowledge representation involves structuring and encoding the knowledge in the knowledge base, so that inferences can be made by the system from the stored knowledge” (Stranieri & Zeleznikow, 2005a, Glossary). Legal burden See Burden, legal. LAILA A language for abductive logic agents, used in ALIAS, a multi-agent archi- tecture. It was applied to the modelling of reasoning on the evidence in a criminal case, in Ciampolini and Torroni (2004). See Section 2.2.1.5. Landmark case In law (and by extension, in case-based reasoning within artificial intelligence): “A landmark case is one which alters our perception about knowl- edge in the domain – landmark cases are comparable to rules. Landmark cases are the basis of analogical reasoning” (Stranieri & Zeleznikow, 2005a, Glossary). Not the same as interesting case (q.v.). Learning “Learning is any change in a system that allows it to perform better the second time on repetition of the same task drawn from the same population” (Stranieri & Zeleznikow, 2005a, Glossary). Legal positivism “Legal positivists believe that a legal system is a closed logical system in which correct decisions may be deduced from predetermined rules by logical means alone” (Stranieri & Zeleznikow, 2005a, Glossary). Legal realism “Legal realists are jurisprudes for whom the reliance on rules is an anathema. They argue that judges make decisions for a range of reasons which cannot be articulated or at least are not apparent on the face of the judgement given” (Stranieri & Zeleznikow, 2005a, Glossary). See, e.g., Rumble (1965).32

32 Wilfrid Rumble began the first footnote in his paper by pointing out: “There is no infallible method to determine who is a legal realist. The most authoritative list is probably that compiled by Karl Llewellyn in 1931, with the assistance of Jerome Frank and Felix S. Cohen. See Llewellyn, Jurisprudence: Realism in Theory and Practice (Chicago, 1962), 74–76” (Rumble, 1965, p. 547, fn. 1). Karl N[ickerson] Llewellyn (1893–1962) was professor at the University of Chicago Law School. His work focused mostly on the topic of legal realism. Llewellyn (1962, repr. 2008) is a compilation of his writings from the 1930s through the 1950s. “Oliver Wendell Holmes, Jr., book, The Common Law, is regarded as the founder of legal realism. Holmes stated that in order to truly understand the workings of law, one must go beyond technical (or logical) elements entailing rules and procedures. The life of the law is not only that which is embodied in statutes and court decisions guided by procedural law. Law is just as much about experience: about flesh-and-blood human beings doings things together and making decisions. Llewellyn’s version of legal realism was heavily influenced by [Roscoe] Pound and [Oliver Wendell] Holmes [Jr.]. The distinction between ‘law in books’ and ‘law in action’ is an acknowledgement of the gap that exists between law as embodied in criminal, civil, and administrative code books, and law. A fully formed legal realism insists on studying the behavior of legal practitioners, including their practices, habits, and techniques of action as well as decision-making about others. This classic study is a fore- most historical work on legal theory, and is essential for understanding the roots of this influential perspective” (Llewellyn, ibid., from the 2008 publisher’s blurb). 1086 Appendix: Glossary

Lex posterior “Lex posterior is the legal principle that states the later rule has precedence over the earlier rule” (Stranieri & Zeleznikow, 2005a, Glossary). Lex specialis “Lex specialis is the legal principle that states the priority is given to the argument that uses the most specific information” (Stranieri & Zeleznikow, 2005a, Glossary). Lex superior “Lex superior is the legal principle that states that a ruling of a higher court takes precedence over one made by a lower court” (Stranieri & Zeleznikow, 2005a, Glossary). Liability Being legally bound or responsible. One category of liability is a defen- dant’s criminal liability. Another category is tort liability, which, e.gt., includes products liability torts and claims. Liability issues arising from the use of expert systems in the field of law were discussed by Karin Alheit (1989), as a particular case of liability in relation to the use of expert systems, for which, see Zeide and Liebowitz (1987). Alheit pointed out, in general concerning knowledge- processing software, that “[t]here exists a tremendous litig[ation] potential over their use, misuse, and even non-use” (Alheit, ibid., p. 43, referring to Zeide and Liebowitz 1987). Linear regression “In linear regression, data is modelled using a straight line of the form y = αx+β. α and β are determined using the method of least squares. Polynomial regression models can be transformed to a linear regression model” (Stranieri & Zeleznikow, 2005a, Glossary). Lineup Also called identity parade. A suspect stands in a line alongside foils, i.e., persons who look alike, and the victim or witness has to identify the suspect. A computerised version is ID parade discs, on which video clips from a database appear, along with a video clip showing the suspect. See Section 4.5.2.3. Lineup instructions Instructions given to eyewitness before an identification line- up. It is essential that such instructions must not be suggestive. For example, witnesses must not be given the impression that the perpetrator is believed to be one of the persons lines up; in fact, it may be that all of them are innocent. See Section 4.5.2.3. Link Analysis Network link analysis arose in human factors research, originally in order to determine the layout of machine shops in American industry during the First World War (Gilbreth & Gilbreth, 1917). Link analysis is currently sup- ported by computer tools. One of its applications is to crime investigation, and it is conducted by intelligence analysts. Its aim is to discover crime networks, to identify the associates of a suspect, to track financial transactions (possibly by data mining), to detect geographical patterns (possibly by kind of crime), and so forth. In Coady’s words (1985),

Link Analysis is the graphic portrayal of investigative data, done in a manner to facilitate the understanding of large amounts of data, and particularly to allow investigators to develop possible relationships between individuals that otherwise would be hidden by the mass of data obtained.

See Chapters 6 and 7.
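As a minimal illustration of the Link Analysis entry above, the following sketch represents investigative data as a graph of associations and searches for a chain linking two individuals. The entities and links are invented for the example; operational tools such as FLINTS (q.v.) work over far larger databases and far richer link types.

from collections import deque

# A toy association graph: people, a telephone number, a company, an account.
LINKS = {
    "suspect_A":  {"phone_555", "company_X"},
    "phone_555":  {"suspect_A", "suspect_B"},
    "company_X":  {"suspect_A", "account_42"},
    "account_42": {"company_X", "suspect_C"},
    "suspect_B":  {"phone_555"},
    "suspect_C":  {"account_42"},
}

def connection_path(start, goal, links=LINKS):
    """Breadth-first search for the shortest chain of associations."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in links.get(path[-1], ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(connection_path("suspect_B", "suspect_C"))
# ['suspect_B', 'phone_555', 'suspect_A', 'company_X', 'account_42', 'suspect_C']

Such a chain of intermediate entities is the kind of otherwise hidden relationship that, in Coady's description, link analysis is meant to surface for the investigator.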

Litigation Risk Analysis A proprietary method of Marc B. Victor, for quantifying legal and factual uncertainties by assuming probabilities, for constructing a decision tree, and for using it in order to evaluate the risks of litigation. See Section 4.3.2.3.

Local stare decisis "Local stare decisis is the tendency of judges to be consistent with the decisions of other members of their own region (or registry)" (Stranieri & Zeleznikow, 2005a, Glossary).

Logic In How to Do Things with Rules, William Twining and David Miers (1976, pp. 140–142) made the following remarks about the relation between logic and law; these remarks are of lasting value:

The place of formal logic in legal reasoning is one of the most problematic topics in Jurisprudence. [...] First, it is important to realize that the term “logic” is used, even by philosophers, in a number of different senses. [...] Secondly, even where “logic” is con- fined to reasoning leading to necessary conclusions, very general questions of the kind “what is the role of logic in legal reasoning?” are ambiguous and misleadingly simple. For example, this question has been variously intepreted to mean: “To what extent do judges and advocates explicitly resort to deduction in justifying their decisions?”; “To what extent can judgments and other examples of argument towards conclusions of law be reconstructed in terms of formal logic?”; “To what extent is it feasible to resort to deductive-type arguments in legal reasoning?”; or “To what extent is it desirable to do so?”, or even: “What illumination can be gained by applying the techniques of formal logic to examples of legal reasoning?” All these questions are different, although they are related to each other. They are complex questions; beware of glib answers to them. Thirdly, there is an unfortunate tendency in juristic controversy to present answers to some of these questions as disagreements between extremists. For instance, it is not uncommon to contrast a view that a legal system is a closed and complete system of rules from which all conclusions on points of law in particular cases can be deduced as a matter of logical necessity (sometimes referred to as “the slot-machine model”) with the dictum of Mr Justice Holmes [(Holmes, 1881, p. 1)] that “(t)he life of the law has not been logic, it has been experience”, which can be interpreted to mean that deductive logic plays no role at all in legal reasoning. Stated in this extreme form, both views are patently absurd. It is encouraging to find that few jurists who have been accused of adopting the slot-machine model have been guilty of any such crudity and that even a cursory reading of Holmes reveals that he was concerned to show that logic is only one of a number of factors in “determining the rules by which men should be governed” rather than to deny that it had, or should have, any influence in this respect.

The reference to Holmes is to Oliver Wendell Holmes, the Younger (1841–1935), who was a progressive judge in the U.S. Supreme Court. Twining & Miers go on to quote from an essay by Anthony Gordon Guest [(1961, pp. 195–196)]: "arguments need not be cast in a strictly syllogistic form, provided that they exhibit a logical structure. In the dialectic of the law, logic has an important part to play at a stage when a suggested rule has to be tested in order to discover whether or not its adoption will involve the contradiction of already established legal principles. [...]". Then, Twining & Miers (ibid., p. 142) offer a caveat concerning arguments about inconsistency: "Such arguments need to be treated with caution for a number of reasons: First, it is quite common for some kind of rules to 'hunt in pairs'. [...] Secondly, arguments about 'inconsistency' and 'contradiction' may often be more appropriately expressed as arguments about what

constitutes an appropriate level of generality for a rule or a concept in a particu- lar context”. Maxims that point in contradictory directions exist in the common law; they ‘hunt in pairs’ indeed, and the phenomenon is termed normative ambi- guity (ibid., p. 210). Twining & Miers illustrate this phenomenon with pairs of proverbs (e.g., “Too many cooks spoil the broth”, but “Many hands make light work”: ibid.). They go on to list legal examples (ibid., pp. 210–211), then they remark (ibid., p. 212):

First, one must be wary of exaggerating the extent and the importance of normative ambiguity. Often the canons of interpretation may give clear and explicit guidance in a given case, especially where several canons cumulatively support the same conclusion. The difficulties tend to arise where several factors have to be weighed against each other as they favour different results. Secondly, it is important to distinguish between rule-statements which are logical contradictories and those which merely have different tendencies. [...] Moreover there are typically no rules dictating which of two canons is to prevail in such situations. Thus, just as they have carefully avoided laying down strict rules for determining the ratio decidendi of a case, so the common law judges have left themselves a fairly wide leeway of discretion in legislative interpretation. The canons indicate factors to be taken into account in deciding a particular case, but do not indicate precisely what weight should be given to such factors.

Loose talk An important concept for the evaluation of the truthfulness of a proposition. Cf. philosopher Terry Horgan’s account of vagueness, which he called transvaluationism (q.v.). Whereas in court it is recognised that sometimes people speak other than literally, or with various degrees of precision, and yet are not lying, arguably advanced natural-language processing capabilities, with which some legal software may eventually be endowed, ought to recognise as much as well. Let us consider Dan Sperber and Deirdre Wilson’s notion of loose talk (Sperber & Wilson, 1986), by means of an example they provide (Sperber & Wilson, 1990). “At a party in San Francisco, Marie meets Peter. He asks her where she lives, and she answers: ‘I live in Paris’.” Contrast this with a situation in which the exchange takes place somewhere else: “Suppose Marie is asked where she lives, not at a party in San Francisco, but at an electoral meeting for a Paris local election”. There is a difference, concerning the truth value of Marie’s utterance, in terms of relevance (relevance for discourse, not the relevance of evidence). “It so happens that Marie lives in Issy-les-Moulineaux, a block away from the city limits of Paris. Her answer is literally false, but not blatantly so. If Peter presumed literalness, he will be misled”. Yet assumptions are warranted that, in artificial intelligence terms, could be represented as a nesting of beliefs that agents ascribe to each other. Strictly speaking, it is not accurate that Marie lives in Paris, in the sense of living inside the city limits.

In ordinary circumstances, however, Mary’s answer is quite appropriate, and not misleading. How come? This is easily explained in terms of relevance theory. A speaker wants, by means of her utterance, to make her hearer see as true or probable a certain set of propositions. Suppose these propositions are all quite easily derivable as implications of a proposition Q. Q however has also other implications whose truth the speaker does not believe and does not want to guarantee. Nevertheless, the best way of achieving her

aim may be for her to express the single proposition Q, as long as the hearer has some way of selecting those of its logical and contextual implications that the speaker intends to convey and of ignoring the others.

This example, Sperber and Wilson claim, reflects quite a general phenomenon:

Our claim is that such a selection process is always at work, is part, that is, of the understanding of every utterance. Whenever a proposition is expressed, the hearer takes for granted that some subset of its implications are also implications of the thought being communicated, and aims at identifying this subset. He assumes (or at least assumes that the speaker assumed) that this subset determines sufficient cognitive effects to make the utterance worth his attention. He assumes further (or at least assumes that the speaker assumed) that there was no obvious way in which achieving these effects might have required less effort. He aims at an interpretation consistent with these assumptions, i.e. consistent with the principle of relevance. When this criterion determines a single interpretation (or closely similar interpretations with no important differences between them) communication succeeds.

For Peter to interpret the answer Marie gave him while in San Francisco, various things are relevant, and the city limits of Paris are not among them. Marie can predict how Peter will understand her answer.

In our example, Peter will be able to infer from Mary’s answer quite an amount of true or plausible information: that Marie spends most of her time in the Paris area, that Paris is familiar to her, that she lives an urban life, that he might try to meet her on his next trip to Paris, and so on. It is such cognitive effects which make Marie’s utterance sufficiently relevant to be worth his processing effort, in a way Marie manifestly may have anticipated. So, Peter is entitled to assume that Mary intended him to interpret her utterance in this way. Peter would be misled by Marie’s answer only if he were to conclude from it that she lives within the city limits of Paris. However it is clear that Marie had no reason to assume that Peter would have to derive such a conclusion in order to establish the relevance of her utterance. Therefore her utterance does not warrant it.

Marie’s answer can be expected to be loosely understood. “This loose understanding does not follow from a strictly literal interpretation having been first considered and then discarded in favor of looseness [...]. In fact, at no point is literalness presumed”. When does it become relevant to understand literally? Suppose that it was in Paris, and for the purposes of a Paris local election, that Marie had stated that she lives in Paris. “If she answers that she lives in Paris, the proposition expressed will itself be crucially relevant, hence the utterance will be understood literally, and Marie will have lied”. In fact: “An utterance may be literally understood, but only at the end rather than at the beginning of the comprehension process, and only when relevance requires it”. The procedure is actually the same: “The same procedure – derive enough cognitive effects to make up an interpretation consistent with the principle of relevance – yields in some cases a literal interpretation, in others a loose one. In other cases still, it yields a figurative interpretation”.

Mac-a-Mug Pro A system for assisting a witness in approximating his or her description of the facial features of a criminal suspect. It is a computerized version of the Photofit process. See Section 8.2.2.

Machine learning A branch of artificial intelligence and of data mining. Basically, machine learning enables AI systems to improve their performance, by augmenting their knowledge. “Most machine-learning based software products are capable of generating decision trees or IF/THEN rules. Some are capable of producing both” (Mena, 2003, p. 221). Mena describes:

• Several products that primarily produce decision trees (ibid., section 7.9, pp. 221–229);
• Several rule-extracting tools (ibid., section 7.10, pp. 229–232);
• Several machine-learning software suites (Mena, 2003, section 7.11, pp. 233–248):
• ANGOSS (http://www.angoss.com)
• Megaputer (http://www.megaputer.com)
• Prudsys (http://www.prudsys.de/discoverer)
• Oracle data mining suite33
• Quadstone (http://www.quadstone.com)
• SAS (http://www.sas.com); cf. de Ville (2006).
• SPSS (http://www.spss.com/spssbi/clementine)
• Teradata Warehouse Miner (http://www.teradata.com)
• thinkAnalytics (http://www.thinkanalytics.com)
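The following minimal sketch illustrates the kind of output such tools provide: a decision tree learned from data, printed as IF/THEN-style rules. It uses the open-source scikit-learn library, which is not one of the products Mena surveys, and invented toy data; it is an illustration only, not a description of any of the tools listed above.

```python
# Illustrative only: scikit-learn is assumed to be available; the features,
# labels and tree depth are invented for this sketch.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: [number of prior convictions, age of suspect]; label 1 = "high risk".
X = [[0, 34], [2, 19], [5, 22], [0, 51], [3, 40], [1, 17]]
y = [0, 1, 1, 0, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned tree as nested IF <condition> ... THEN <class>
# branches, i.e. the kind of output that rule-extracting tools produce.
print(export_text(tree, feature_names=["prior_convictions", "age"]))
```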

MarshalPlan A computer tool prototype of David Schum and Peter Tillers, supporting the organization of the evidence, and combining Wigmore Charts, an algebraic approach, and hypertext. It entered its fully operational phase around 2005, although the project had started in the early 1990s. See procedural-support systems, and see Section 4.1.1.

Mechanical Jurisprudence An article by Roscoe Pound34 (1908) was entitled “Mechanical Jurisprudence”. For the concept this title expresses, cf. Christie (1984a). Pound opposed the ossification of legal concepts into self-evident truths. By mechanical jurisprudence, the practice he so named and condemned, Pound referred to the wooden application of precedents to the facts of cases without regard to the consequences. For Pound, the logic of precedents alone would not solve jurisprudential problems. In opposition to mechanical jurisprudence, Pound offered his theory of sociological jurisprudence. Pound (1908) declared: Herein is the task of the sociological jurist. Professor Small defines the sociological movement as “a frank endeavor to secure for the human factor in experience the central place which belongs to it in our whole scheme of thought and action.” The sociological movement in jurisprudence is a movement for pragmatism as a philosophy of law; for

33 http://www.oracle.com/ip/analyze/warehouse/datamining
34 The much cited American legal scholar Nathan Roscoe Pound (1870–1964) was Dean of Harvard Law School from 1916 to 1936. He also was the first person to receive a Ph.D. in botany from the University of Nebraska, in 1898.

the adjustment of principles and doctrines to the human conditions they are to govern rather than to assumed first principles; for putting the human factor in the central place and relegating logic to its true position as an instrument. One sentence in Pound (1908) resonates with current endeavours to treat law by means of artificial intelligence: Undoubtedly one cause of the tendency of scientific law to become mechanical is to be found in the average man’s admiration for the ingenious in any direction, his love of technicality as a manifestation of cleverness, his feeling that law, as a developed institution, ought to have a certain ballast of mysterious technicality. Note however that it is not the purpose of AI to change legal conceptions. It is AI that has to adapt itself, when applied to law, to what legal scholars advocate. Pound (1908) claimed: “Jurisprudence is last in the march of the sciences away from the method of deduction from predetermined conceptions.”

Mediation In civil cases, a form of case disposition (q.v.). Like arbitration, mediation is a form of alternative dispute resolution (that is, alternative to the courts). Mediation can be either binding or non-binding.

Memory conformity If two eyewitnesses who saw the same event then discussed it, this may influence what they later claim to remember. More generally, one has confabulation (which is undesirable) when the witness is also inferring, not merely reporting. Concerning the latter, see, e.g., Memon and Wright (1999); Gabbert et al. (2003, 2004); Luus and Wells (1994); Meade and Roediger (2002); Meudell et al. (1995); Principe and Ceci (2002); Skagerberg (2007).

Mens rea The intention to transgress on the part of the defendant, and how specifically (if at all) the transgression was intended. As opposed to actus reus, which is the performance of a forbidden action. Sometimes the intention does not match the action performed. A cardinal doctrine of English criminal law is expressed by the maxim: Actus non facit reum nisi mens sit rea, i.e., “An act does not itself constitute guilt unless the mind is guilty”. “The maxim draws attention to the two essential elements of a crime” (Curzon, 1997, p. 21): “the physical element (the actus reus), i.e. the prohibited conduct [...] (the so-called ‘condition of illegality’)”, and “the mental element (the mens rea), i.e. the condition of mind [...] (the so-called ‘condition of culpable intentionality’)” (ibid.). “Some writers suggest a third element – absence of a valid defence, i.e. a defence which might reduce or negate defendant’s criminal liability” (ibid.). Models of time as known from artificial intelligence are potentially relevant for modelling such situations, because of the requirement of a temporal coincidence of actus reus and mens rea, and this can hold over an interval; the following casenote is quoted from Curzon (ibid.): In Fagan v. MPC (1969), X accidentally drove his car on to Y’s foot; he then deliberately left it there for a few minutes. X was charged with assault [...] and claimed that there was no coincidence of act and intent. It was held that X’s conduct in driving the car on to Y’s foot and allowing it to remain there constituted a continuing act; the assault was committed when X decided to leave the car on Y’s foot. James J[ustice] stated: “It is not necessary that mens rea should be present at the inception of the actus reus; it can be imposed on an existing act. On the other hand, the subsequent inception of mens

rea cannot convert an act which has been completed without mens rea into an assault” (Curzon, ibid.).

“Before 1935, [in English law] it was said that where the accused had caused the victim’s death, he had to show that he did not have the mens rea for murder. This burden was placed on the prosecution in Woolmington v DPP [1935] AC 462” (Jefferson, 1992, p. 23). “Under the influence of DPP v Smith [1961] AC 290 (HL) it was thought that a person intended to do what the natural consequences of his behaviour were. In legal terms a man was presumed to intend the natural consequences of his behaviour. If this presumption was ever irrebuttable, s. 8 of the Criminal Justice Act 1967 abolishes it” (ibid.). Take involuntary manslaughter. “If death has occurred but the defendant did not possess an intent to kill or cause grievous bodily harm, then providing the action or omission was not totally accidental and therefore blameless, any ensuing prosecution will be for manslaughter” (Bloy, 1996, p. 159). The following categories are enumerated by Bloy (ibid.): unlawful act manslaughter, reckless (subjective) manslaughter, and gross negligence. For the former:

The modern definition was expressed by the House of Lords in Newbury and Jones (1976). Lord Salmon said an accused was guilty of manslaughter if it was proved that he intentionally did an act which was unlawful and dangerous and that the act inadvertently caused death. In deciding whether or not the act was dangerous the test is would “all sober and reasonable people” recognise that it was dangerous, not whether the accused recognised it as such. (Bloy, ibid.). The test [for the dangerous character of the act] is clearly based upon an objective assessment of the circumstances. For example, what conclusions might a reasonable person be expected to reach about the impact of a burglary, late at night, where the occupant of the property is not far short of his 90th birthday? If it is to be reasonably expected that he has a weak heart, or [is] in poor health, then the act of burglary immediately becomes a dangerous act. If however, the reasonable person would not suspect that the victim might in some way be vulnerable to the type of enterprise which is undertaken then a manslaughter conviction is unlikely to be secured on the basis that the act is not a dangerous act. (Bloy, pp. 162–163).

For manslaughter as being the outcome of gross negligence, in English law: “The decision of the House of Lords in Adomako (1994) is of great significance in helping to clarify the ambit of gross negligence manslaughter and whether or not recklessness is a relevant concept within this species of manslaughter” (Bloy, ibid., p. 166). A patient undergoing surgery suffered a cardiac arrest and subsequently died, because the anaesthetist “failed to notice that an endotracheal tube had become disconnected from the ventilator supplying oxygen to the patient” (ibid., p. 167). “The time period between the disconnection occurring and the [anaesthetist] noticing that this was the cause of the problem was six minutes. [The anaesthetist] was charged with manslaughter and convicted. It was not denied by the appellant that he had been negligent but it was his contention that his conduct was not criminal” (ibid.). Bloy (ibid.) explains the attitude of the Court of Appeal and then of the House of Lords, which both dismissed Adomako’s appeal:

The Court of Appeal treated the issue as one of breach of duty and stated the ingredients of involuntary manslaughter by breach of duty to be:

• the existence of a duty;
• the breach of the duty causing death; and
• gross negligence on the part of the accused which the jury considered justified a criminal conviction;

In respect of the mens rea the Court of Appeal was of the opinion that proof of any of the following states of mind might convince a jury that a defendant had been grossly negligent:

• indifference to an obvious risk of injury to health;
• actual foresight of the risk coupled with the determination to run it;
• an appreciation of the risk coupled with an intention to avoid it but also coupled with such a high degree of negligence in the attempted avoidance as the jury considered justified the conviction; and
• inattention or failure to advert to a serious risk which went beyond “mere inadvertence” in respect of an obvious and important matter which the defendant’s duty demanded he should address.

Twining and Miers (1976), while discussing normative ambiguity (see our entry for logic), provide examples of pairs of maxims pointing in different directions. The following pair is about mens rea (ibid., p. 211): “All statutory criminal offences are presumed, irrespective of their wording, to include a mens rea requirement” (Sweet v. Parsley [1970] A.C. 132), but: “The presumption that all statutory offences include a mens rea requirement may be rebutted by the seriousness of the conduct to be prohibited” (R. v. St Margaret’s Trust [1958] 1 W.L.R.35 522). Also see Fitzgerald (1961) “Voluntary and involuntary acts”, and Hart’s (1961b) “Negligence, mens rea and criminal responsibility”. Throughout the history of law, there has been variation in how intention has been treated vis-à-vis liability (e.g., Jackson, 1971). Jackson (2010) points out: Legal doctrine does not require proof that the mens rea “caused” the actus reus; what it requires (normally) is (merely) that the mens rea exists “contemporaneously” with the actus reus – in order that we may attribute to the latter the appropriate moral opprobrium, i.e. to show from the offender’s intention the required immorality of his act.

Meter-models Quantitative models of the jurors’ decision-making process. In such models, the verdict decision is based on comparing a meter reading of the final belief to a threshold to convict. Different classes of such models include probabilistic models, algebraic models, stochastic models, and such modelling as is based on AI formalisms for belief revision. In the algebraic approach, belief updating is additive, whereas in probabilistic models it is multiplicative. See Sections 2.1.1 and 2.1.6.
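A minimal sketch of this contrast between additive and multiplicative belief updating follows. The evidence weights, likelihood ratios, and conviction thresholds are invented for illustration only; they are not taken from any of the models discussed in the sections cited.

```python
# Illustrative toy only: contrasts algebraic (additive) and probabilistic
# (multiplicative) updating of a juror's "meter" of belief in guilt.

def additive_meter(weights, start=0.0):
    """Algebraic-style updating: each item of evidence adds its weight."""
    belief = start
    for w in weights:
        belief += w
    return belief

def multiplicative_meter(likelihood_ratios, prior_odds=1.0):
    """Probabilistic-style updating: each item multiplies the odds of guilt."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

evidence_weights = [0.3, 0.2, -0.1]   # invented additive contributions
evidence_lrs     = [3.0, 2.0, 0.5]    # invented likelihood ratios

# The verdict is reached by comparing the final meter reading to a threshold.
print(additive_meter(evidence_weights) >= 0.35)      # True in this toy example
print(multiplicative_meter(evidence_lrs) >= 2.0)     # True in this toy example
```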

35 W.L.R. stands for the Weekly Law Reports.

MIMIC Short for the name Multiple Image-Maker and Identification Compositor. A system for generating composite faces, for the purposes of assisting a witness to describe the features of a criminal suspect. MIMIC uses film strip projections. See Section 8.2.2.2.

Minutiae detection Part of fingerprint recognition, and a prerequisite for fingerprint matching algorithms. This is a finer level than coarse-grained features such as ridges in a fingerprint image. See, e.g., Jiang et al. (2001), Espinosa-Duró (2002). See Section 8.7.

Model-based case-based reasoning paradigm Within case-based reasoning, in artificial intelligence: “The model based approach assumes that there is a strong causal model of the domain task. It generally involves selecting among partially matched cases, in which symbolic reasoning is used to determine the difference between the given problem and the retrieved cases” (Stranieri & Zeleznikow, 2005a, Glossary).

Modus ponens In logic, a form of inference by which if P→Q holds and P holds, then Q holds.

Multi-agent system In artificial intelligence, an approach such that intelligent behaviour is coordinated among a number of separate intelligent agents, these being autonomous software modules (sometimes embodied in robots). They are called autonomous agents. A precursor was the blackboard paradigm (for which, see blackboard systems). See Section 6.1.6.

Multimedia forensics A branch of forensics concerned with uncovering perpetrators of piracy targeting protected digital content or encrypted applications. Typically, perpetration consists of unauthorised music and movie copying, either for private use or for selling pirated copies, thus taking a big bite out of the profits of the record industry and the studios. Chang-Tsun Li (of the University of Warwick, England) has published a book (Li, 2008) on state-of-the-art pirate tracking software. A particular technique, traitor tracing, can be applied to multimedia forensics, but the term has previously been used also in the literature about cryptography. See Sections 8.2.5 and 6.1.2.5.

Naïve Bayesian classifiers “Naïve Bayesian classifiers assume the effect of an attribute value on a given class is independent of the other attributes. Studies comparing classification algorithms have found the naïve Bayesian classifier to be comparable in performance with decision tree and neural network classifiers” (Stranieri & Zeleznikow, 2005a, Glossary). For a given sample x, we search for the class ci that maximises the posterior probability P(ci | x)

by applying Bayes’ rule: P(ci | x) = P(x | ci) P(ci) / P(x). Since P(x) is the same for every class, x can then be classified by computing the class ci that maximises P(x | ci) P(ci), where, under the naïve independence assumption, P(x | ci) is taken to be the product of the probabilities P(xj | ci) of the individual attribute values xj of x.
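By way of illustration, here is a minimal from-scratch sketch of such a classifier over categorical attributes. The training data and the add-one smoothing are assumptions made for the example, not part of Stranieri & Zeleznikow’s description.

```python
# Illustrative toy only: a naive Bayesian classifier over categorical attributes.
from collections import Counter, defaultdict

def train(samples, labels):
    class_counts = Counter(labels)
    attr_counts = defaultdict(Counter)   # (class, attribute index) -> value counts
    for x, c in zip(samples, labels):
        for j, v in enumerate(x):
            attr_counts[(c, j)][v] += 1
    return class_counts, attr_counts

def classify(x, class_counts, attr_counts):
    total = sum(class_counts.values())
    best_class, best_score = None, -1.0
    for c, n_c in class_counts.items():
        score = n_c / total                       # prior P(c)
        for j, v in enumerate(x):
            counts = attr_counts[(c, j)]
            # Add-one smoothing so unseen attribute values do not zero the score.
            score *= (counts[v] + 1) / (n_c + len(counts) + 1)
        if score > best_score:                    # argmax over P(c) * prod_j P(xj | c)
            best_class, best_score = c, score
    return best_class

samples = [("night", "armed"), ("day", "unarmed"), ("night", "unarmed"), ("night", "armed")]
labels  = ["serious", "minor", "minor", "serious"]
model = train(samples, labels)
print(classify(("night", "armed"), *model))       # expected output: "serious"
```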

Nearest neighbour algorithm “The nearest neighbour algorithm is used in information retrieval where data that is closest to the search is retrieved. To perform this search, we need a ‘metric’ (distance function) between the occurrence of each piece of data. The kth nearest neighbour algorithm classifies examples in a sample by using two basic steps to classify each example: (a) Find the k nearest, most similar examples in the training set to the example to be classified; (b) Assign the example the same classification as the majority of k nearest retrieved neighbours” (Stranieri & Zeleznikow, 2005a, Glossary).

Negotiation “Negotiation is the process by which two or more parties conduct communications or conferences with the view to resolving differences between two parties. This process might be formal or mandated as in legal and industrial disputes, semi-formal, as in international disputes or totally informal as in the case of two prospective partners negotiating as to how they will conduct their married life” (Stranieri & Zeleznikow, 2005a, Glossary).

Network representation schemes “A network representation scheme is a knowledge representation scheme using graphs, in which nodes represent objects or concepts in the problem domain and the arcs represent relations or associations between them. Semantic networks are an example of a network representation scheme” (Stranieri & Zeleznikow, 2005a, Glossary).

Neural networks “A neural network receives its name from the fact that it resembles a nervous system in the brain. It consists of many self-adjusting processing elements cooperating in a densely interconnected network. Each processing element generates a single output signal which is transmitted to the other processing elements. The output signal of a processing element depends on the inputs to the processing element: each input is gated by a weighting factor that determines the amount of influence that the input will have on the output. The strength of the weighting factors is adjusted autonomously by the processing element as data is processed” (Stranieri & Zeleznikow, 2005a, Glossary). “Neural networks are particularly useful in law because they can deal with a) classification difficulties, b) vague terms, c) defeasible rules and d) discretionary domains” (ibid.). See Section 6.1.14.

Network topology “A neural network topology is a specification of the number of neurons in the input layer, the output layer and in each of the hidden layers” (Stranieri & Zeleznikow, 2005a, Glossary).

Nonmonotonic reasoning Reasoning in which adding new information may cause the set of statements held true to shrink; in other words, it is not guaranteed that adding information never makes the set of conclusions decrease. This is a standard concept in artificial intelligence. As a textbook explains (Luger & Stubblefield, 1998, p. 269):

Traditional mathematical logic is monotonic: It begins with a set of axioms, assumed to be true, and infers their consequences. If we add new information to this system, it may cause the set of true statements to increase. Adding knowledge will never make the set of true statements decrease. This monotonic property leads to problems when we attempt to model reasoning based on beliefs and assumptions. In reasoning with uncertainty, humans draw conclusions based on their current set of beliefs;

however, unlike mathematical axioms, these beliefs, along with their consequences, may change as more information becomes available. Nonmonotonic reasoning addresses the problem of changing belief. A nonmonotonic reasoning system handles uncertainty by making the most reasonable assumptions in light of uncertain information. It then proceeds with its reasoning as if these assumptions were true. At a later time, a belief may change, necessitating a reexamination of any conclusions derived from that belief.
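A minimal sketch of the nonmonotonic behaviour just described: a default conclusion is drawn and then withdrawn when more specific information is added. The bird/penguin default is the classic textbook illustration; the code is a toy assumption of this glossary, not a model of any system discussed in this book.

```python
# Illustrative toy only: a single default rule whose conclusion is retracted
# when the belief set grows.

def conclusions(beliefs):
    """Derive conclusions from the current belief set using one default rule:
    a bird is assumed to fly unless it is known to be a penguin."""
    derived = set()
    for kind, name in beliefs:
        if kind == "bird" and ("penguin", name) not in beliefs:
            derived.add(("flies", name))
    return derived

beliefs = {("bird", "tweety")}
print(conclusions(beliefs))                 # {('flies', 'tweety')}

beliefs.add(("penguin", "tweety"))          # new information is added ...
print(conclusions(beliefs))                 # ... and the earlier conclusion is withdrawn: set()
```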

Obligation See deontic logic, and contrary-to-duty obligations.

Ontology “An ontology is an explicit conceptualisation of a domain” (Stranieri & Zeleznikow, 2005a, Glossary). See Sections 6.1.7.3 and 6.1.7.4. “Legal ontologies are generalised conceptual models of specific parts of the legal domain. They provide stable foundations for knowledge representation” (Mommers, 2003, p. 70). For example, Mommers (2003) presented an ontology “based on an analysis of the relation between the legal domain and knowledge about that domain. It is explained how knowledge in the legal domain can be analysed in terms of three dimensions (acquisition, object and justification), and how these dimensions can be employed in alternative designs for collaborative workspaces” (ibid.). Boer, van Engers, and Winkels (2003) discussed using ontologies in order to compare and harmonise legislation.36

Onus of proof The same as evidential burden. See Burden, evidential. “In any given scenario the onus of proof indicates the degree of certainty for a given outcome to occur. In a criminal case in Common Law countries such proof must be beyond reasonable doubt, whereas in most civil cases in such countries, the proof required is by a fair preponderance of the evidence (i.e. more than 50% likely to occur)” (Stranieri & Zeleznikow, 2005a, Glossary). Jefferson explains (1992, p. 19):

There is a distinction between the evidential and legal burden of proof. The difference may be illustrated by reference to automatism [...]. Before the accused can rely on this defence, he must put forward some evidence that he was acting automatically when he, say, hit his lover over the head with a heavy ashtray. The evidence might consist of a witness’s saying that he saw what happened or a psychiatrist’s drafting a report. In legal terms he has to adduce or lead evidence. If he does not adduce such evidence, his plea will fail at that stage and the prosecution does not have to lead evidence that his plea should not succeed. If he does, the prosecution has to disprove that he was acting automatically. His burden is called the evidential burden or onus of proof. The prosecution’s burden is the legal one.

36 “In the E-POWER project relevant tax legislation and business processes are modeled in UML to improve the speed and efficiency with which the Dutch Tax and Customs Administration can implement decision support systems for internal use and for its clients. These conceptual models have also proven their usefulness for efficient and effective analysis of draft legislation. We are currently researching whether conceptual modeling can also be used to compare ‘similar’ legislation from different jurisdictions to improve the capacity of the Dutch Tax and Customs Administration to react to future consequences of increased movement of people, products, and money between EU member states and increased harmonization between tax authorities in Europe. In addition, addressing the problem of comparing models is also expected to improve our methodology for modeling legislation.” (Boer et al. 2003, p. 60).

Open-textured legal predicate “Open textured legal predicates contain questions that cannot be structured in the form of production rules or logical propositions and which require some legal knowledge on the part of the user in order to answer” (Stranieri & Zeleznikow, 2005a, Glossary).

Opinion question In argumentation studies, Walton’s (1997) Appeal to Expert Opinion offered (ibid., pp. 211–225) an argumentation scheme for “Argument from Expert Opinion”, then reproduced in Walton et al. (2008, pp. 381–382). See s.v. Expert opinion, Appeal to above. The expert source is E; the subject domain is S; and A is a proposition which E claims to be true (or false). The opinion question is: “What did E assert that implies A?”. It is articulated in four detailed subquestions: “Was E quoted as asserting A? Was a reference to the source of the quote given, and can it be verified that E actually said A?”; “If E did not say A exactly, then what did E assert, and how was A inferred?”; “If the inference to A was based on more than one premise, could one premise have come from E and the other from a different expert? If so, is there evidence of disagreement between what the two experts (separately) asserted?”; “Is what E asserted clear? If not, was the process of interpretation of what E said by the respondent who used E’s opinion justified? Are other interpretations plausible? Could important qualifications have been left out?”.

Outlier A major anomaly, a notable departure from a pattern. Outliers may be a useful indicator for the purposes of crime detection. See Chapter 6. “Data objects that are grossly different from or inconsistent with the remaining set of data are called outliers” (Stranieri & Zeleznikow, 2005a, Glossary).

Overfitting “Overfitting occurs when the data mining method performs very well with data it has been exposed to but performs poorly with other data” (Stranieri & Zeleznikow, 2005a, Glossary).

Overtraining of neural networks “A neural network over-trains if it has been exposed to an abundance of examples, far too many times. In this case it can learn each input-output pair so well that it, in effect memorises those cases. The network classifies training set cases well, but may not perform so well with cases not in the training set” (Stranieri & Zeleznikow, 2005a, Glossary).

Palmprints In biometrics, the print of the entire palm of a hand (Kumar et al., 2003), instead of just fingerprints. This is used in personal authentication systems, but is not practical in criminal investigation, as it is only seldom that a suspect would leave an entire palmprint, rather than fingerprints. See Section 8.7.

PATER A software system for probabilistic computations for testing paternity claims (Egeland, Mostad, & Olaisen, 1997). See Section 8.7.2.1.

Pattern recognition “The creation of categories from input data using implicit or explicit data relationships. Similarities among some data exemplars are contrasted with dissimilarities across the data ensemble, and the concept of data class emerges. Due to the imprecise nature of the process, it is no surprise that statistics has played a major role in the basic principles of pattern recognition” (Principe et al., 2000, p. 643).
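Returning to the Outlier entry above, the following is a minimal sketch of flagging data objects that are grossly different from the rest of a data set, using a simple z-score cut-off. The transaction amounts and the threshold of 2 are invented assumptions for this illustration, not a method advocated in Chapter 6.

```python
# Illustrative toy only: flag values more than 2 standard deviations from the mean.
from statistics import mean, stdev

amounts = [120, 95, 130, 110, 105, 990, 115]    # invented transaction amounts

mu, sigma = mean(amounts), stdev(amounts)
outliers = [x for x in amounts if abs(x - mu) / sigma > 2]
print(outliers)                                 # the 990 transaction stands out
```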

PEIRCE-IGTT A piece of software: an abductive inference engine from artificial intelligence, developed by a team led by John Josephson. One of its applications was to the modelling of reasoning on the evidence in a criminal case. See Section 2.2.1.5.

Pentitismo In the Italian criminal justice system, an arrangement on the part of the prosecution, by which a political terrorist or a member of the Mafia who was himself highly liable was permitted to turn into a state witness (q.v.) against other defendants. There is some similarity to the supergrass system of Britain. In Italy, this somewhat equivalent system is the pentitismo: in the late 1970s, as well as during the 1980s and still during trials held during the 1990s, on occasion a “repentant” terrorist would act as state witness against one or more defendants. Such a witness used to be called a pentito, or a superpentito. This sometimes drew strong criticism and, in all fairness, defeated justice, such as when the murderer of a journalist obtained, by turning state witness, his own liberty, as well as that of the woman who had been his girlfriend before they were separately arrested in different circumstances. Once released, he immediately proceeded to wed another woman. One photograph that was highly visible in the mass media showed him talking, and, inside the same frame, the grim face of the father of the journalist whose murder justice had renounced punishing. It has also happened that the sincerity of a superpentito, securing convictions, was quite dubious. This was the case of the state witness during the Sofri case (for the 1972 terrorism-related killing of a police inspector), as well as of a witness from the Mafia against former prime minister Andreotti, who was convicted for the violent death of a journalist. Also the testimony of a state witness who had raped and murdered in the Circeo case, securing the convictions of other far rightists for a bombing with massive casualties in Bologna (it took place on 2 August 1980), appears to have been discredited.

Personal authentication systems Systems for verifying the personal identity of a person, using biometric characteristics; e.g. using fingerprints with digital signature technologies (Isobe et al., 2001; Seto 2002).

Personal stare decisis “Personal stare decisis is the tendency of judges to be consistent with themselves” (Stranieri & Zeleznikow, 2005a, Glossary).

Persuasion argument One of two classes of arguments (the other one being adversary arguments), “depending on the goals and expectations of the participants. [It] consists of arguments in which the participants are motivated to reach a common agreement, for example in order to solve some problem”: “the participants are both willing to be persuaded as well as trying to persuade” (Flowers et al., 1982, p. 275). This is relevant for computer tools for supporting negotiation.

Persuasion burden See burden of proof.

Photoarray An alternative to a lineup at which a suspect and foils are physically present and standing alongside each other. In a photoarray, the eyewitness is made to see a set of photographs instead. Also called photospread. Another alternative is an identity parade with no physical presence, at which the eyewitness is made to see video clips of the suspect and foils. See Section 4.5.2.3.

Photofit A system for assisting a witness in approximating his or her description of the facial features of a criminal suspect. See Section 8.2.2.

Pirate tracing software A kind of software subserving Multimedia forensics, for uncovering perpetrators of piracy targeting protected digital content or encrypted applications. See Sections 8.2.5 and 6.1.2.5.

Plaintiff The party that turns to the courts for adjudication against another party (the defendant). In some kinds of trial (at employment tribunals in England and Wales), the names are: applicant for the plaintiff; respondent for the defendant. In the Civil Procedure Rules 1998, in England and Wales, the term plaintiff was replaced with claimant (thought to be a more transparent, and more widely understood term: the same reform excised other traditional terms as well).

Plausibility, relative “The distinction between the structure of proof and a theory of evidence is simple. The structure of proof determines what must be proven. In the conventional [probabilistic] theory [which Allen attacks] this is elements to a predetermined probability, and in the relative plausibility theory [which Ron Allen approves of] that one story or set of stories is more plausible than its competitors (and in criminal cases that there is no plausible competitor). A theory of evidence indicates how this is done, what counts as evidence and perhaps how it is processed” (Allen, 1994, p. 606). See Allen (1991, 2008a, 2008b).

Plausible inference “Polya developed a formal characterisation of qualitative human reasoning as an alternative to probabilistic methods for performing commonsense reasoning. He identified four patterns of plausible inference: inductive patterns, successive verification of several consequences, verification of improbable consequences and inference from analogy” (Stranieri & Zeleznikow, 2005a, Glossary).

Plea A statement made in court by either party in argument of the case. In particular, in Anglo-American criminal procedure,37 the answer given by the defendant at the start of the trial, after the indictment. The answer is either guilty, or not guilty.

Plea bargain In Anglo-American criminal procedure, and in countries influenced by that system: an offer which the prosecution has a discretion to make, so that in return for a guilty plea at the start of the trial (before evidence is submitted to the court), the defendant is offered sentencing concessions (a lighter sentence). Bargaining about the sentence also takes place if one of the defendants is offered the option to become a prosecution witness against other defendants. In countries on the European Continent, it used to be the case that there could be no plea bargaining. Plea bargaining applies in criminal cases, and should not be mistaken for a settlement out of court, stopping the proceedings in the trial of a civil case.

Police science A field encompassing all aspects of law enforcement, focusing on the factors that affect crime and the police response to crime (Greene, 2006). One aspect of this discipline is the generation or refinement of methods of investigation, enabled by technological advances.

37 Kamisar, LaFave, Israel, and King (2003) covers criminal procedure in the United States of America.

Polygraph testing Testing by means of hardware equipment, recording levels of arousal while the person tested is being questioned. Various methods exist. Polygraph testing and polygraph evidence are admitted in some countries (such as the United States), while being frowned upon for good reason in some other countries (including the United Kingdom). See Section 4.5.2.1. In the U.S., the Employee Polygraph Protection Act of 1988 banned most polygraph tests for personnel selection purposes; the police in the U.S. resort to such tests extensively during investigation. In some other countries (e.g., in the U.K.), the police are not allowed to use polygraph tests.

Post-charge questioning Questioning of a suspect on the part of the police, after legal proceedings have started. This is not supposed to happen. Legal proceedings only start once the investigation stage ends: once a suspect is charged, the police can no longer question him or her. Post-charge questioning (on the part of police investigators) of terrorism suspects, possibly extended to other categories of criminals, was considered by the British government in November 2007, drawing criticism from civil liberties groups.

Preponderance of the evidence On balance, the evidence seems to favour adjudication one way rather than in the other. This standard of proof is weaker than beyond reasonable doubt. Without sticking to the legal sense of these phrases, in the Carneades argumentation tool (Gordon & Walton, 2006), standards of evidence stronger than scintilla of evidence and weaker than beyond reasonable doubt are defined. The second weakest is PE (preponderance of the evidence): “A statement meets this standard iff its strongest defensible pro argument outweighs its strongest defensible con argument”. A stronger standard (yet weaker than BRD, beyond reasonable doubt) is DV, which is defined as follows: “A statement meets this standard iff it is supported by at least one defensible pro argument and none of its con arguments is defensible”.

Principal component analysis In statistical data analysis: “Principal components analysis (PCA) is the technique most often used to identify features that do not contribute to the prediction from a data-set. PCA involves the analysis of variance between features and the class variable in a prediction exercise. PCA requires specialist statistical software, since the calculations are cumbersome. PCA is applicable only to features that are numeric” (Stranieri & Zeleznikow, 2005a, Glossary).

Principled negotiation “Principled negotiation promotes deciding issues on their merits rather than through a haggling process focussed on what each side says it will and will not do” (Stranieri & Zeleznikow, 2005a, Glossary).

Prior convictions (evidence of) See character evidence (of which this is a kind); jury observation fallacy.

Private privilege The rule by which some categories of witnesses cannot be compelled to disclose certain kinds of information or documents. This includes protection from self-incrimination, either for the accused – who under English law “may not be asked questions which tend to show that he may be guilty of any other offence than that with which he is presently charged” (Osborne, 1997,

p. 338) – and for other witnesses: “In a criminal case no witness can be compelled to answer any question which would, in the opinion of the judge, have a tendency to expose the witness to any criminal charge” (ibid.). Another kind of private privilege is legal professional privilege, by which lawyer–client communications, as well as communications with third parties for the purpose of actual or pending litigation, are protected from disclosure. Yet, the client may waive this privilege, and direct his or her lawyer accordingly. Moreover, communications to facilitate crime or fraud are not privileged.

Privilege A rule that protects some kinds of communication or material documents from disclosure at trial or during a police investigation. There is private privilege, and there is public interest privilege.

Probabilistic information retrieval models “Probabilistic information retrieval models are based on the probability ranking principle which ranks legal documents according to their probability of relevance to the query given every available source of information. The model estimates the probability of relevance of a text to the query, on the basis of the statistical distribution of terms in relevant and irrelevant text, given an uncertainty associated with the representation of both the source text and the information need, as well as the relevance relationship between them” (Stranieri & Zeleznikow, 2005a, Glossary).

Probability Donald Gillies (2004, p. 286) provides this usefully concise explanation:

Probability theory originated from the study of games of chance, and these still afford a good illustration of some of the basic concepts of the theory. If we roll a fair dice, the probability of getting 5 is 1/6. This is written P(5)=1/6. A conditional probability is the probability of a result given that something else has happened. For example, the probability of 5 given that the result was odd, is no longer 1/6, but 1/3; while the probability of 5 given that the result was even, is no longer 1/6, but 0. A conditional probability is written P(A|B). So we have P(5|odd)=1/3, and P(5|even)=0. A related concept is independence. Two events A and B are said to be independent if the conditional probability of A given B is the same as the probability of A, or, in symbols, if P(A|B) = P(A). Successive rolls of a die are normally assumed to be independent, that is to say, the probability of getting a 5 is always the same, namely 1/6, regardless of what results have appeared so far. An important concept for probability in AI is conditional independence. A and B are said to be conditionally independent given C, if P(A|B&C) = P(A|C).
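The conditional probabilities in Gillies’s die example can be checked by brute-force enumeration of the six equally likely outcomes, as in the following minimal sketch (an illustration added here, not part of Gillies’s text):

```python
# Illustrative only: enumerate outcomes of a fair die to compute P(A) and P(A|B).
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]

def prob(event, given=lambda x: True):
    sample = [x for x in outcomes if given(x)]
    return Fraction(sum(1 for x in sample if event(x)), len(sample))

print(prob(lambda x: x == 5))                          # P(5)      = 1/6
print(prob(lambda x: x == 5, lambda x: x % 2 == 1))    # P(5|odd)  = 1/3
print(prob(lambda x: x == 5, lambda x: x % 2 == 0))    # P(5|even) = 0
```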

Also see Probability, prior and posterior.

Probability, objective “An objective probability is one which is supposed to be a feature of the objective world, such as mass or electrical charge. A well-known objective interpretation of probability is the frequency interpretation. For example, to say that the probability of 5 is 1/6 on this interpretation is taken to mean that, in a long series of rolls of the die, the result 5 will appear with a frequency of approximately 1/6. Those who adopt this interpretation estimate their probabilities from frequency data” (Gillies, 2004, p. 287).

Probability, prior and posterior With reference to Bayes’ theorem, which when dealing with a hypothesis H, and some evidence E, states:

P(H|E) = P(E|H)P(H)/P(E)

this can be read as follows: the posterior probability P(H|E), i.e., the probability that H is true given E, is equal to the product of the likelihood P(E|H), i.e., the probability of E given the truth of H, and the prior probability P(H) of H, divided by the prior probability P(E) of E. A synonym of prior probability is a priori probability. A synonym of posterior probability is a posteriori probability.

Probability, subjective It “is taken to be the measure of the degree of belief of a particular individual that some event will occur. For example, if I say that my subjective probability that it will rain in London tomorrow is 2/3, this means that I believe to degree 2/3 that it will rain in London tomorrow. A woman’s degree of belief can be measured by the rate at which she is prepared to bet, or her betting quotient. It can be shown that, starting from this way of measuring belief, the standard axioms of probability can be derived. An application of the subjective theory of probability to Bayesianism produces what is known as subjective Bayesianism. Here P(H) is taken to represent the prior degree of belief of Mr. R, say, that H is true, while P(H|E) represents his posterior degree of belief in H after he has come to know evidence E. A rational man on this approach changes his degree of belief in the light of new evidence E from P(H) to P(H|E), where the value of P(H|E) is calculated using Bayes Theorem” (Gillies, 2004, p. 287).

Probative value “Probative value is a relational concept that expresses the strength with which evidence supports an inference to a given conclusion. It is a crucial concept for determining admissibility (see Fed[eral] R[ules of] Evid[ence] 403, which instructs judges to exclude evidence when its probative value is substantially outweighed by its prejudicial, confusing, or duplicative effect) and for determining whether parties have satisfied their burdens of proof” (Allen & Pardo, 2007a, p. 108, fn. 2).

Procedural Procedural, as opposed to substantive, pertains to how the judicial process is administered. For example, the order in which the parties and their witnesses testify belongs in procedure.

Procedural representation scheme In artificial intelligence: “A procedural representation scheme is a knowledge representation scheme in which knowledge is represented as a set of instructions for solving a problem. Examples of procedural representation schemes include production rules”, i.e., IF-THEN rules (Stranieri & Zeleznikow, 2005a, Glossary).

Procedural-support systems A category of computer tools for assisting humans in handling court cases. “Procedural-support systems are AI & Law programs that lack domain knowledge and thus cannot solve problems, but that instead help the participants in a dispute to structure their reasoning and discussion, thereby promoting orderly and effective disputes” (Prakken & Renooij, 2001). “When procedural-support systems are to be useful in practice, they should provide support for causal reasoning about evidence” (ibid.). Available operational tools include CaseMap, MarshalPlan, and (in Italy) Daedalus. See Section 4.1.

Production rule In artificial intelligence: a rule consisting of a condition part (or left-hand part) and an action part (or right-hand part). It is also called an IF-THEN rule.
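A minimal sketch of production rules being applied by a simple recognise-act loop over a working memory follows; the rules and facts about a hypothetical investigation are invented for illustration only.

```python
# Illustrative toy only: forward chaining of IF-THEN rules over a working memory.

rules = [
    # (IF: conditions that must all be in working memory, THEN: fact to add)
    ({"weapon_found", "fingerprints_match"}, "suspect_linked_to_weapon"),
    ({"suspect_linked_to_weapon", "witness_places_suspect_at_scene"}, "charge_recommended"),
]

working_memory = {"weapon_found", "fingerprints_match", "witness_places_suspect_at_scene"}

changed = True
while changed:                          # recognise-act cycle: fire rules until nothing new follows
    changed = False
    for conditions, action in rules:
        if conditions <= working_memory and action not in working_memory:
            working_memory.add(action)  # the action part asserts a new fact
            changed = True

print("charge_recommended" in working_memory)   # True
```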

Production rule system In artificial intelligence: “Production rule systems are expert systems which consist of a set of production rules, working memory and the recognise-act cycle (also known as the rule interpreter)” (Stranieri & Zeleznikow, 2005a, Glossary).

PROLEXS “The PROLEXS project at the Computer/Law Institute, Vrije Universiteit, Amsterdam, Netherlands is concerned with the construction of legal expert shells to deal with vague concepts. Its current domain is Dutch landlord-tenant law. It uses several knowledge sources and the inference engines of the independent knowledge groups interact using a blackboard architecture” (Stranieri & Zeleznikow, 2005a, Glossary). See Section 6.1.14.9 in this book, and s.v. blackboard systems. PROLEXS is the subject of Walker, Oskamp, Schrickx, Opdorp, and van den Berg (1991) and of Oskamp et al. (1989).

Prosecutorial discretion The choice being left to the prosecutor (in some jurisdictions, especially in the Anglo-American adversarial system), whether to prosecute or not, and whether instead to propose a plea bargain. As opposed to obligatory prosecution, which until recently used to be common in Continental Europe. Prosecution, as being the decision to charge a suspect with a crime, is the subject, e.g., of books by Miller (1969) and by Jacoby, Mellon, Ratledge, and Turner (1982). Cf. Kingsnorth, MacIntosh, and Sutherland (2002).38 Discretion (q.v.) is a broader concept.

Public inquiry As public inquiries in Britain are inquisitorial, as opposed to the adversarial system that characterises the courts, the impact of this contrast is explained at the entry for inquisitorial. As early as the Bristol Public Inquiry in the late 1990s (it was chaired by Sir Ian Kennedy: see inquisitorial), “the Inquiry established a process whereby the statements of witnesses who were not to be called were made available on the Inquiry’s website, together with the comment, if any, of someone identified by the Inquiry’s lawyers as having been the object of criticism in the statement” (Kennedy, 2007, p. 37). That was also the case of witnesses that were called to give oral evidence before the Inquiry, but then their statements with the comments were not posted at the website until the

38 Flowe et al. (2010) pointed out: “Prosecutors have the discretion to determine whether a suspect will be charged and what charges the suspect should face (Bordenkircher v. Hayes, 1978). Prosecutors also have a legal and ethical obligation to protect felony suspects who are not just innocent-in-fact, but who are also innocent-in-law (California District Attorneys Association, 1996). Charges should not be filed even if the prosecutor has a personal belief in the suspect’s guilt. Rather, issuing decisions should be guided by whether the evidence in the case is legally sufficient and admissible. Previous research has found that felony charges are more likely to be issued if there is physical evidence to support the allegations (Albonetti, 1987; Feeney, Dill, & Weir, 1983; Jacoby et al., 1982; Miller, 1969; Nagel & Hagan, 1983) and if the crime is serious, such as when a victim has been injured (Kingsnorth et al., 2002). Factors that may lead prosecutors to not file charges include: A primary aggressor has not been identified (e.g., the California Primary Aggressor Law requires a primary aggressor be identified), the suspect is thought to be innocent, or there are ‘interests of justice’ concerns, such as the suspect will provide testimony in a more serious case (Silberman, 1978). Despite the fact that much research has been carried out examining the relationship between evidentiary factors and felony issuing decisions, little is known about the role that eyewitness identification evidence may play in prosecution.”

witness had given oral evidence. That way, the contributions of the various witnesses were known in advance. This made it possible to schedule the witnesses’ appearances accordingly. Resorting to a website did away with Salmon letters (ibid.): Not only was this [web-supported procedure] fair to all, but it allowed the Inquiry to take account of and explore differences of view when questioning witnesses. Moreover, it meant that the Inquiry could avoid a procedure known as the issuing of “Salmon letters”, named after Lord Justice Salmon who chaired the Royal Commission on Tribunals in 1966. The purpose of “Salmon letters” was to put individuals on notice should they have been criticised in evidence. It was a procedural response to the evidence heard, designed to ensure fairness. I took the view that it reflected an approach which equated Public Inquiries with judicial proceedings. It was, therefore, inappropriate and, moreover, unnecessary. Fairness could be maintained in a far more coherent and sensible way. In effect, the “Salmon letters” procedure introduced an unnecessary formal step into the proceedings, which commonly provoked legal to-ing and fro-ing. By getting witnesses to reveal and confront their various accounts well in advance, everyone knew where they stood. There was no need to have resort to some additional, and time-consuming, and, frankly, out-dated procedural mechanism.

Public interest privilege Also called public interest immunity. A category of privilege, by which “evidence is excluded because of some public interest in withholding it which outweighs the usual public interest in open litigation” (Osborne, 1997, p. 340). A lesser legal concept than privilege is confidentiality, and it, too, is such that communications in professional–client relationships are sometimes protected (which is, instead, a right, and is considered private privilege, for the client of a lawyer). “[O]ne originally separate basis of public privilege which has merged somewhat into the mainstream is the rule that no question may be asked in proceedings which would tend to lead to the identification of any person who has given information leading to the institution of a prosecution” (ibid., p. 342).

Questioned documents evidence Evidence from forensic tests (Levinson, 2000), concerning the authenticity of documents or parts thereof, of their authorship ascription, of their date, or of the hand in which they are written. There exist techniques for determining authenticity, age, ink and paper sources, equipment used, forgeries, alterations, and erasures, as well as handwriting identification, the latter being the subject of Morris (2000). See Section 6.1.10. The following is quoted from the introduction to the useful entry for ‘Questioned document examination’ in Wikipedia39: Questioned document examination (QDE) is known by many names including forensic document examination, document examination, diplomatics, handwriting examination, and sometimes handwriting analysis, although the latter name is not often used as it may be confused with graphology. Likewise a forensic document examiner is not to be confused with a graphologist, and vice versa. The questioned document division of a crime lab is sometimes referred to as “QD” in popular media. The task of forensic document examination is to answer questions about a disputed document using a variety of scientific processes and methods. Many examinations involve a comparison of the questioned document, or components of the document, to a

39 http://en.wikipedia.org/wiki/Questioned_document_examination

set of known standards. The most common type of examination involves handwriting wherein the examiner tries to address concerns about potential authorship.

One task of a forensic document examiner is to determine if a questioned item originated from the same source as the known item(s), then present their opinion in court as an expert witness. Other tasks include determining what has happened to a document, determining when a document was produced, or deciphering information on the document that has been obscured, obliterated or erased.

Professional organisations include the American Society of Questioned Document Examiners (ASQDE), the American Academy of Forensic Sciences (AAFS), the Southwestern Association of Forensic Document Examiners (SWAFDE), and the Southeastern Association of Forensic Document Examiners (SAFDE) in the U.S.A.; the Canadian Society of Forensic Science (CSFS); the Australasian Society of Forensic Document Examiners (ASFDE) in Australia and Asia; the Gesellschaft für Forensische Schriftuntersuchung (GFS) in Frankfurt (Germany); the Asociación Profesional de Peritos Calígrafos de Cataluña (in Spain); the National Association of Document Examiners (NADE); the Association of Forensic Document Examiners (AFDE); and so forth.

Questioning During police investigations, the process of asking suspects, or actual or potential witnesses, such questions that seek to uncover information. This is quite different from examination in court. It is important not to confuse examination in court with questioning by the police during investigation. Legal proceedings only start once the investigation stage ends: once a suspect is charged, the police can no longer question him or her. Post-charge questioning (on the part of police investigators) of terrorism suspects, possibly extended to other categories of criminals, was considered by the British government in November 2007, drawing criticism from civil liberties groups.

Questmap A computer tool for supporting argumentation (Carr, 2003). QuestMap is based on IBIS, mediates discussions, supports collaborative argumentation, and creates information maps, in the context of legal education. Collaborative problem identification and solving is the purpose of IBIS, an Issue-Based Information System. Problems are decomposed into issues. See Section 3.7.

Ratio decidendi The rationale of a decision made by an adjudicator in a court case. The ground or reason for the decision. The point in a case that determines the judgement. "Ratio decidendi is Latin for the "reasons for decision", that is the legal reasons why the judge came to the conclusion that he or she did. It is the fundamental basis for the rule of law in common law systems. Stare decisis says that the ratio decidendi will apply to subsequent cases decided by courts lower in the hierarchy" (Stranieri & Zeleznikow, 2005a, Glossary).

Reason!Able A computer tool for supporting argumentation (van Gelder, 2002). Some tools envisage collaboration among users, yet Reason!Able only has one user per session. It guides the user step-by-step through the process of constructing an argument tree, containing claims, reasons, and objections, the latter two kinds being complex objects which can be unfolded to see the premises. See Section 3.7.

Rebutter A defendant's answer in matter of fact (about the accusation and the evidence) to a plaintiff's (or, in particular, prosecution's) surrejoinder. (See replication.)

Reference-class problem Allen and Pardo (2007a, p. 109) find that scholarship which applies probability theory to juridical proof suffers from a deep conceptual problem that makes ambiguous the lessons that can be drawn from it – the problem of reference classes. The implications of this problem are considerable. To illustrate the problem, consider the famous blue bus hypothetical. Suppose a witness saw a bus strike a car but cannot recall the color of the bus; assume further that the Blue Company owns 75 percent of the buses in the town and the Red Company owns the remaining 25 percent. The most prevalent view in the legal literature of the probative value of the witness's report is that it would be determined by the ratio of the Blue Company buses to Red Company buses, whether this is thought of as or plays the role of a likelihood ratio or determines information gain (including an assessment of a prior probability) [...] But suppose the Red Company owns 75 percent (and Blue the other 25 percent) of the buses in the county. Now the ratio reverses. And it would do so again if Blue owned 75 percent in the state. Or in direction: it would reverse if Red owned 75 percent running in the street where the accident occurred (or on that side of the street) and so on. Or maybe the proper reference class has to do with safety standards and protocols for reporting accidents. Each of the reference classes leads to a different inference about which company is more likely liable, and nothing determines the correct class, save one: the very event under discussion, which has a likelihood of one and which we are trying to discover. "The blue bus hypothetical [...] exemplifies the general implications of reference classes, and those implications would hold for practically any attempt to quantify a priori the probative value of evidence" (ibid., p. 113).

Regression In statistics: "In linear regression, data is modelled using a straight line of the form y = αx + β. α and β are determined using the method of least squares. Polynomial regression models can be transformed to a linear regression model." (Stranieri & Zeleznikow, 2005a, Glossary).

Rejoinder The defendant's answer to the plaintiff's replication. (See replication.)

Relevance Pertinence of a piece of evidence, for the purposes of proving that which is to be proven in court, as a criterion for such evidence to be heard or excluded instead. Yovel (2003) provided a mildly formalised treatment, with a notation in MicroProlog style, of what in legal scholarship about evidence is known as relevance. See Section 4.6 in this book. Here is a definition from a legal textbook on evidence: "The purpose of calling evidence in court is to try to prove certain facts to be true. Evidence which assists in this process is relevant and that which does not assist is irrelevant. It is the first rule of evidence, and one to which there are no exceptions, that irrelevant evidence is never admissible in court. This does not mean that relevant evidence is always allowed, because sometimes the court disallows it despite its relevance. The greater proportion of this book is about rules which limit the extent to which relevant evidence can be used" (Templeman & Reay, 1999, p. 1). Modern theories of relevance are the subject of Tillers (1983). Also see Richard Lempert's (1977) 'Modeling Relevance'.
There also exist other senses of relevance: in sensitivity analysis from statistics, including when it is applied to legal evidence, "An item of evidence is called 'relevant' to a hypothesis if observing the evidence changes the probability

that the hypothesis is true” (Levitt & Laskey, 2002, p. 375). Moreover, for the relevance of an utterance, see in the entry for loose talk in this Glossary. Teun van Dijk (1989) describes the concept of relevance as it applies to a class of modal logics broadly called “relevance logics” as a concept grounded firmly in the pragmatics, and not the semantics or syntax of language. Within a discursive community, the data items in a generic argument must be relevant to the claim to the satisfaction of members of the community. The purpose of van Dijk’s article was stated as follows (ibid., p. 25):

In this paper an attempt will be made to provide a general and informal discussion of "relevance" and related notions from this linguistic point of view. More particularly, it will be argued that the relevance requirement must be satisfied by any compound sentence, viz. by all connectives, and by any coherent discourse, i.e. not only deductive or argumentative, in natural language. Although such a claim might have feed-back in the philosophy of logic, we will be concerned with the applications of some recent ideas from relevance logics in the explicit characterization of these properties of natural language.

Reparational obligations See contrary-to-duty obligations.

Replication In American law: the plaintiff's (or the prosecution's) reply to the "defence", intended as the original statement of the defendant or his defence lawyer (or team of lawyers). The plaintiff's replication may prompt an answer in matters of fact, called the defendant's rejoinder, which may prompt the plaintiff's surrejoinder, which may prompt the defendant's rebutter, which may prompt the plaintiff's surrebutter.

Resolution "Resolution is a semi-decidable proof technique for first order predicate calculus, which given an unsatisfiable well formed formula, proves it to be unsatisfiable. If the well formed formula is not unsatisfiable, there is a possibility that the algorithm may not terminate" (Stranieri & Zeleznikow, 2005a, Glossary).

Respondent In some kinds of trial, the defendant; then the name for the plaintiff is applicant.

Rule base (or ruleset) "The rule base of a legal (or indeed any) rule based expert system is that part of the system in which the rules are stored. It is kept separate from the other part of the expert system, the inference engine" (Stranieri & Zeleznikow, 2005a, Glossary).

Rule-based expert system "A rule based expert system is a collection of rules of the form: IF condition(s) THEN action. Rule based systems include production rule systems, and some would argue, logic based systems as well" (Stranieri & Zeleznikow, 2005a, Glossary). (A small illustrative sketch is given after the list of rule-extracting tools below.)

Rule-extracting tools A category of machine learning tools. Several commercial rule-extracting tools were described by Mena (2003, section 7.10, pp. 229–232):

• AIRA (http://www.godigital.com.br), an Excel add-on
• DataMite,40 for relational databases
• SuperQuery (http://www.azmy.com)
• WizWhy (http://www.wizsoft.com)

40 http://www.lpa.co.uk/ind_top.htm
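By way of illustration of the entries for rule base and rule-based expert system above, the following minimal sketch (in Python; it is not drawn from this book nor from any of the tools just listed, and the facts and rule names are invented) shows a rule base of IF condition(s) THEN action rules kept separate from a naive forward-chaining inference engine:

```python
# Minimal sketch (invented facts and rules) of the "rule base" vs. "inference
# engine" separation described in the entries above: IF condition(s) THEN action
# rules, applied by forward chaining until no new conclusions can be derived.

# Rule base: each rule is (set of conditions, conclusion). Kept separate from
# the inference engine, as the glossary definition notes.
RULES = [
    ({"suspect_charged"}, "police_questioning_must_stop"),
    ({"defendant_claims_good_character"}, "shield_lost"),
    ({"shield_lost"}, "bad_character_evidence_admissible"),
]

def forward_chain(facts, rules):
    """Inference engine: fire any rule whose conditions are all satisfied by
    the current facts, add its conclusion, and repeat until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

if __name__ == "__main__":
    print(forward_chain({"defendant_claims_good_character"}, RULES))
    # Derives 'shield_lost' and then 'bad_character_evidence_admissible'.
```

A real rule-based legal expert system would add conflict resolution, explanation facilities and uncertainty handling; the sketch is only meant to show the separation of rule base and inference engine.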

Salmon letters See public inquiry.

Scheme Argumentation schemes are "predefined patterns of reasoning. A single scheme describes an inference, the necessary prerequisites for that inference, and possible critical questions that might undercut the inference" (van den Braak & Vreeswijk, 2006).

Scintilla of evidence A tenuously probative piece of evidence, enough to motivate probing further, searching for more evidence. Without sticking to this sense of the phrase, in the Carneades argumentation tool the weakest standard of evidence is SE (scintilla of evidence): "A statement meets this standard iff it is supported by at least one defensible pro argument".

Secondary obligation See contrary-to-duty obligations.

Sensitivity analysis An analysis of how the availability of given pieces of evidence would affect the demonstrability of given claims. It can be used when evaluating litigation risk: see Section 4.3.1. It can be useful for a costs/benefits analysis of whether to obtain some piece of evidence: see Section 4.3.2. Levitt and Laskey (2002, Sections 1.4.4 and 1.5.4) discussed and exemplified such a sensitivity analysis, in the context of their analysis of the evidence in a murder case by means of Bayesian networks (BNs). Their example concerns the French case in which Omar Raddad was convicted in 1994 of murdering his employer, but then pardoned because of how controversial the case was. Levitt & Laskey (ibid., p. 375) wrote:

The BN knowledge representation can capture useful quantitative behaviour regarding alternative explanations for the same items of evidence. For example, the relevance of items of evidence regarding Raddad depends on their relationship in the evidential argument implied by the BN, and [...] they can change as evidence accrues. In particular, the evidence of Raddad's location at the time of the murder is co-dependent with the evidence from the examiner's testimony about the time of death. The relevance of one depends dynamically on the other, and they co-vary as evidence is accrued to the global evidential argument about Raddad's guilt or innocence that is modelled by [a given] BN [...]. This introduction of the examiner's testimony [...] does not change the probability of Raddad's guilt. That is, the evidence is not relevant to Raddad's guilt given the evidence accrued up to that point. The examiner's report becomes relevant when we accrue the evidence that Raddad was with his relatives on Monday. In the presence of the examiner's report, the evidence provides an alibi and greatly reduces the probability of guilt. Subsequently, the evidence regarding a possible typographical error of the recording of the day of the death changes the relevance of Raddad's alibi for his whereabouts on Monday from very strongly relevant to very weakly relevant. The process of exploring complex models to identify subtleties such as this can be facilitated by computational tools, which are in turn enabled by the sophisticated representational and inferential capabilities of the modular BNFrag [i.e., Bayesian network fragments] architecture described in this Article. For example, sensitivity analysis can be used to examine the impact of changes in modeling assumptions of the strength of relevance of evidence to hypothesized conclusions. [...] The term sensitivity analysis has multiple, related, but different definitions in the literature on statistics and scientific experimentation. [... W]e illustrate the use of a particular sensitivity analysis, sometimes called an "importance measure," specifically to compute a measure of the weight or relevance of evidence items to a BN query. An item of evidence is called "relevant" to a hypothesis if observing the evidence changes the probability that the hypothesis is true. [...]
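The sense of relevance used in this quotation (an item of evidence is relevant to a hypothesis if observing it changes the probability that the hypothesis is true) can be illustrated with a toy computation via Bayes' theorem. The sketch below is not Levitt and Laskey's BNFrag model and does not use figures from the Raddad case; the probabilities are invented for illustration:

```python
# Toy illustration (invented numbers, not Levitt & Laskey's model) of measuring
# the relevance of an item of evidence E to a hypothesis H as the change in
# P(H) produced by conditioning on E, via Bayes' theorem.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' theorem for a binary hypothesis H."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.30                      # P(H) before the evidence is observed
post = posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.2)

# An "importance measure" in the spirit described above: how far the
# probability of H moves when E is observed.
relevance = abs(post - prior)
print(f"P(H) = {prior:.2f}, P(H|E) = {post:.2f}, shift = {relevance:.2f}")
# If p_e_given_h equalled p_e_given_not_h, the posterior would equal the prior
# and the evidence would be irrelevant to H in this sense.
```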

Sentenza suicida In Italy: a justification of a verdict written by a trained judge in a deliberately flawed manner, so that an appeal trial would necessarily take place, thus overturning a verdict given by jurors who outvoted that judge at a mixed court (nonexistent in Anglo-Saxon countries): see jury.

Settlement out of court In a civil case, an agreement among the parties not to continue in the case being litigated. It involves a compromise as to compensation. A settlement out of court should not be mistaken for plea bargaining, which applies in criminal cases.

Shield For a defendant in a criminal case: such protection that makes it inadmissible for prosecution to cross-examine in order to obtain bad character evidence, or to adduce such evidence. See character evidence, and see imputation. Situations in which the defendant loses his shield include such that come into being if he claims good character for himself, or bad character for prosecution witnesses (or for the prosecutor).

Shield bidding A form of malpractice related to online auction fraud. It is also known as bid shielding. It "occurs when the buyer uses another email address or a friend (the shield) to drive up prices and discourage bids on an item she wants. At the last minute, the shield withdraws the high bid, allowing the buyer to win the item at a lower price. Most auction sites forbid retracting a bid once it's made, and on eBay shill and shield bidding is clearly prohibited" (Wahab, 2004). See Section 6.2.3.

Shilling A form of malpractice related to online auction fraud. It is known as bid shilling, or shill bidding. "The ability to disguise identity, revoke bids, and maintain multiple on-line identities may facilitate undesirable practices like shilling. Shilling is where sellers arrange for false bids to be placed on the items they are selling. Sellers place the bid themselves by using multiple identities or by using confederates. The idea is to force up the cost of a winning bid and encourage interest in the auction" (Mena, 2003, p. 256). "Shill bidding: is the intentional sham bidding by the seller to drive up the price of his/her own item that is up for bid. This is accomplished by the sellers themselves and/or someone that is associated with the seller making bids to purposely drive up the price of the seller's item." (Wahab, 2004). Cf. shield bidding. See Section 6.2.3.

Similar fact evidence An exception to the rule which in criminal law prevents the disclosure of evidence of disposition and character (see evidence of disposition). In England, "the law will permit the prosecution to adduce evidence of previous misconduct where its nature, modus operandi or some other circumstance, shows an unmistakable similarity to the offence charged. This must be strong enough to go beyond any question of coincidence so as to lead the jury to conclude 'this is the work of the same man'" (Osborne, 1997, pp. 313–314). For a somewhat different concept, see doctrine of chances.

Situation theory A formal theory that considers actors within the situation in which they are.

Situation Theory grew out of attempts by Jon Barwise in the late 1970s to provide a semantics for "naked-infinitive" perceptual reports such as "Claire saw Jon run". Barwise's intuition was that Claire didn't just see Jon, an individual, but Jon doing

something, a situation. Situations are individuals having properties and standing in relations. A theory of situations would allow us to study and compare various types of situations or situation-like entities, such as facts, events, and scenes. One of the central themes of situation theory is that a theory of meaning and reference should be set within a general theory of information, one moreover that is rich enough to do justice to perception, communication, and thought. By now many people have contributed to the effort to give a rigorous mathematical account of the principles of information that underwrite the theory.41

Slate A particular computer tool; it supports human users' reasoning by argumentation (Bringsjord, Shilliday, Taylor, Clark, & Khemlani, 2006).

Slot-machine model An extreme logicist view of a legal system. See logic.

Smurfing "the breaking up of large sums of money into smaller units, and subsequent passing of each segment through multiple accounts. Used by money launderers, the practice is designed to make the trail extremely difficult to follow" (Sparrow, 1991, p. 252, fn. 1). See Chapter 6.

Social epistemics Social aspects of the philosophy of knowledge, according to Alvin Goldman (1987a, 1987b). Because of such social aspects, the requirement of total evidence is an invalid principle, and an example of contravening it is exclusionary laws of evidence in court: jurors are not given all the evidence, and Goldman (1991), who approves of this, calls this epistemic paternalism.

SPLIT-UP "SPLIT-UP is a hybrid rule based/neural network system developed at La Trobe University that uses textbooks, heuristics, expert advice and cases to model that part of the Family Law Act 1975 (Australia) which deals with property division. Explanation is provided through the use of Toulmin argument structures" (Stranieri & Zeleznikow, 2005a, Glossary. It was they who developed SPLIT-UP).

Stare decisis Stare decisis is a fundamental principle in common law legal systems. The principle dictates that the reasoning, loosely, ratio decidendi,42 used in new cases must follow the reasoning used by decision-makers in courts at the same or higher level in the hierarchy. Stare decisis is unknown to civil law, where judgments rendered by judges only enjoy the authority of reason. Traditional stare decisis is when the same decision has to be taken as a higher court judging about the same facts. Local stare decisis is when the same decision has to be taken as the same court judging about the same facts.

41 From the summary of Aczel, Israel, Katagiri, and Peters (1993). "Situation theory is the result of an interdisciplinary effort to create a full-fledged theory of information. Created by scholars and scientists from cognitive science, computer science and AI, linguistics, logic, philosophy, and mathematics, it aims to provide a common set of tools for the analysis of phenomena from all these fields. Unlike Shannon-Weaver type theories of information, which are purely quantitative theories, situation theory aims at providing tools for the analysis of the specific content of a situation (signal, message, data base, statement, or other information-carrying situation). The question addressed is not how much information is carried, but what information is carried" (from the publisher's blurb of Aczel et al. 1993).
42 The ground or reason for the decision. The point in a case that determines the judgement.

Personal stare decisis is when the same decision has to be taken as the same judge judging about the same facts.

State witness One of the intended defendants in a criminal case, who having been offered a deal by the prosecution, turns into a witness, allied with the prosecution, against at least one defendant. This is not only the case of minor offenders. Sometimes offenders with a heavy liability are offered to become state witnesses, or at any rate to inform investigators in such a manner that would secure convictions. In Britain, a state witness is said to be giving Queen's evidence. In Britain, a supergrass may be a very important informer, not necessarily a state witness. The supergrass system in Britain emerged in 1972, and in its heyday years it was used against armed robbers in London. It also was used to combat terrorism in Northern Ireland. The first police informer within the supergrass system was Bertie Smalls, who shopped hundreds of associates in 1976; the operation was masterminded by Scotland Yard detective Tony Lundy. In Italy, a somewhat equivalent system is the pentitismo: in the late 1970s, as well as during the 1980s and still during trials held during the 1990s, on occasion a "repentant" terrorist would act as state witness against one or more defendants. Such a witness used to be called a pentito, or a superpentito. Sometimes the deal drew strong criticism, and in all fairness, defeated justice, such as when the murderer of the journalist Walter Tobagi obtained, by turning state witness, his own freedom, as well as that of the woman who had been his girlfriend before they were separately arrested in different circumstances. Once released, he immediately proceeded to wed another woman. One photograph that was highly visible in the mass media showed him talking, and, inside the same frame, the grim face of the father of the journalist whose murder justice had renounced punishing. It has also happened that the sincerity of a superpentito, securing convictions, was quite dubious. This was the case of the state witness during the Sofri case (for the 1972 terrorism-related killing of a police inspector), as well as of a witness from the Mafia against Italy's former prime minister Giulio Andreotti, who was convicted for the violent death of a journalist. Also the testimony of a state witness who had raped and murdered in the Circeo case, securing the convictions of other far rightists for a bombing with massive casualties in Bologna, appears to be discredited.

Statistically oriented case-based reasoning paradigms In artificial intelligence: "In statistically oriented case based reasoning paradigms, cases are used as data points for statistical generalisation. The case based reasoner computes conditional probabilities that a problem should be treated similarly to previously given cases" (Stranieri & Zeleznikow, 2005a, Glossary). See case-based reasoning.

Statistical reasoning "In contrast to symbolic reasoning, statistical reasoning derives its results by checking whether or not there is a statistical correlation between two events. Examples of statistical reasoning include neural networks and rule induction systems. Whilst rule based systems are considered to be examples of symbolic reasoning; the rules are often derived using statistical tests" (Stranieri & Zeleznikow, 2005a, Glossary).
Statutory law In countries like Britain there are both statutory law, i.e., laws passed by Parliament, and common law, i.e., the body of judgments handed down by judges, and

that serve as precedent. "Statutory law is that body of law created by acts of the legislature – in contrast to constitutional law and law generated by decisions of courts and administrative bodies" (Stranieri & Zeleznikow, 2005a, Glossary).

Stevie An argumentation-based computer tool intended for supporting criminal investigation. Stevie enables analysts to view evidence and inferences. The program is described as distilling out of that information coherent stories which are "hypothetical reconstructions of what might have happened", and which are defined as "a conflict-free and self-defending collection of claims" which moreover is temporally consistent (van den Braak & Vreeswijk, 2006). See Section 3.10.2.

Story A narrative: see Chapter 5. In Stevie, a story is "a conflict-free and self-defending collection of claims" which moreover is temporally consistent (van den Braak & Vreeswijk, 2006). See Section 3.10.2.

Story model Of Nancy Pennington and Reid Hastie (1986, 1988, 1992, 1993), for modelling jurors' decision making. It is based on the information processing paradigm from cognitive psychology.

Striking similarity A strong similarity between a crime and previous convictions of a criminal suspect or defendant, such that the crime under trial and the ones from previous convictions are in the same legal category, and share similarities such as the modus operandi, geographic proximity, and so forth. It is where there is such "striking similarity" that a case may get to the jury on previous convictions alone. See jury observation fallacy.

Substantive Substantive, as opposed to procedural, pertains to the rules of right administered by a court, rather than to how they are administered.

Supergrass In Britain, an informer whose collaboration is extremely fruitful for police investigators. Such an informer may or may not be a state witness (q.v.). The latter is always the case, instead, of Italy's pentitismo (q.v.). The supergrass system in Britain emerged in 1972, and in its heyday years it was used against armed robbers in London. It also was used to combat terrorism in Northern Ireland. The first police informer within the supergrass system was Bertie Smalls, who shopped hundreds of associates in 1976; the operation was masterminded by Scotland Yard detective Tony Lundy.

Surrebutter or surrebuttal A plaintiff's reply to a defendant's rebutter. (See replication.)

Surrejoinder A plaintiff's reply to a defendant's rejoinder. (See replication.)

Teleological Of an argument (as opposed to deontological reasoning): a "reason given for acting or not acting in a certain way may be on account of what so acting or not acting will bring about. [...] All teleological reasoning presupposes some evaluation" (MacCormick, 1995, p. 468).

Text mining "sometimes alternately referred to as text data mining, roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. High-quality information is typically derived through the divining of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of

others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks include text categorisation, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarisation, and entity/relation modelling (i.e., learning relations between named entities)".43 See Chapter 6. (A small illustrative sketch of the text categorisation task is given after the lists of tools below.) Commercial tools include:

• AeroText – provides a suite of text mining applications. Content used can be in multiple languages.
• Attensity – hosted, integrated and stand-alone text mining (analytics) software that uses natural language processing technology to address collective intelligence in social media and forums; the voice of the customer in surveys and emails; customer relationship management; e-services; research and e-discovery; risk and compliance; and intelligence analysis.
• Autonomy – suite of text mining, clustering and categorisation solutions for a variety of industries.
• Basis Technology – provides a suite of text analysis modules to identify language, enable search in more than 20 languages, extract entities, and efficiently search for and translate entities.
• Endeca Technologies – provides software to analyse and cluster unstructured text.
• Expert System S.p.A. – suite of semantic technologies and products for developers and knowledge managers.
• Fair Isaac – leading provider of decision management solutions powered by advanced analytics (includes text analytics).
• Inxight – provider of text analytics, search, and unstructured visualisation technologies. (Inxight was bought by Business Objects, which was bought by SAP AG in 2008).
• LanguageWare – text analysis libraries and customisation tooling from IBM.
• LexisNexis – provider of business intelligence solutions based on an extensive news and company information content set. Through the recent acquisition of Datops, LexisNexis is leveraging its search and retrieval expertise to become a player in the text and data mining field.
• Nstein Technologies – text mining solution that creates rich metadata to allow publishers to increase page views, increase site stickiness, optimise SEO, automate tagging, improve search experience, increase editorial productivity, decrease operational publishing costs, increase online revenues. In combination with search engines it is used to create semantic search applications.

43 Based upon the Wikipedia entry http://en.wikipedia.org/wiki/Text_mining (the way it was in late July 2010).

• SAS – solutions including SAS Text Miner and Teragram – commercial text analytics, natural language processing, and taxonomy software leveraged for Information Management.
• Silobreaker – provides text analytics, clustering, search and visualisation technologies.
• SPSS – provider of SPSS Text Analysis for Surveys, Text Mining for Clementine, LexiQuest Mine and LexiQuest Categorize, commercial text analytics software that can be used in conjunction with SPSS Predictive Analytics Solutions.
• StatSoft – provides STATISTICA Text Miner as an optional extension to STATISTICA Data Miner, for Predictive Analytics Solutions.
• Thomson Data Analyzer – enables complex analysis on patent information, scientific publications and news.

Open source resources include:44

• GATE – natural language processing and language engineering tool.
• UIMA – UIMA (Unstructured Information Management Architecture) is a component framework for analysing unstructured content such as text, audio and video, originally developed by IBM.
• YALE/RapidMiner with its Word Vector Tool plug-in – data and text mining software.
• Carrot2 – text and search results clustering framework.
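As a minimal illustration of the text categorisation task mentioned in the entry above (and not tied to any of the tools listed), the following sketch assumes the open source scikit-learn library is installed; the tiny labelled corpus is invented:

```python
# Minimal text categorisation sketch (assumes scikit-learn is installed;
# the documents and labels are invented for illustration, not real data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "bid retracted at the last minute to shield the buyer",   # fraud-flavoured
    "seller used a second account to drive up the price",     # fraud-flavoured
    "item shipped promptly and matched the description",      # ordinary
    "buyer left positive feedback after fast delivery",       # ordinary
]
labels = ["suspect", "suspect", "ordinary", "ordinary"]

# Bag-of-words (tf-idf) features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["the seller placed sham bids on his own item"]))
```

In practice the same pipeline would be trained on thousands of labelled documents; the sketch only shows the bag-of-words-plus-classifier pattern that underlies much text categorisation software.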

Time Legal time is a debated issue in legal theory (e.g., Jackson, 1998b), as well as in AI & Law.45 For the latter, see a thematic journal issue (Martino & Nissan, 1998) devoted to temporal representation for legal applications. In our present context, it is worth mentioning especially the treatment of a crime narrative in section 5 (pp. 233–238) in Gian Piero Zarri's article (Zarri, 1998) in that journal issue. Zarri described and applied his NKRL system of representation of time, causality and intentionality. Poulin, Mackaay [sic], Bratley, and Frémont (1989) described a "time specialist" software – using "intervals as the basic temporal element" (ibid., p. 747) – as well as a language, EXPERT/T, based on a temporal logic for legal rules. Temporal logics are popular in AI (Fisher et al., 2005; Knight et al., 1999). Also see "Time in automated legal reasoning" by Vila and Yoshino (2005). For another formalism, oriented instead to the semantic representation of verbal tense (Alice ter Meulen's trees for temporal representation, stemming from theoretical semantics), refer to a book review by Nissan

44 http://en.wikipedia.org/wiki/Text_mining
45 Moreover, the recency effect is debated in psychology, in relation to legal evidence (Furnham, 1986).

(1998b). One of the products of CaseSoft,46 an American firm producing software for legal professionals, is the TimeMap chronology-graphing software. Let us consider constraints on temporal sequence in a criminal trial in Anglo-American jurisdictions. The phases of such a trial are as follows:
Indictment;
The accused is asked to plead guilty or not guilty;

• If the defendant pleads guilty – plea-bargain:
1. The court hears the facts from the prosecution (with no need to present evidence);
2. Defence may intervene;
3. Sentence.
• If the defendant pleads not guilty, the case will have to be prosecuted;
1. Adjournment to an agreed date;
2. Adjournment hearing (following adversarial lines);
3. Prosecution opening speech;
4. Prosecution calls witnesses;
4.1. Examination in chief;
4.2. Cross-examination;
4.3. (sometimes) re-examination;
5. Close of the prosecution case;
6. (The defence may submit that there is no case to answer. If the court accepts this, the defendant is discharged. Otherwise:)
7. Defence calls witnesses:
7.1. Examination in chief;
7.2. Cross-examination;
7.3. (sometimes) re-examination;
8. Defence's closing speech to the bench (= closing arguments = final submissions);
9. (Prosecution may have one more speech, but then defence must have the last word.)
10. The magistrates retire to consider their decision (the decision is taken either by a bench of lay magistrates, i.e., a jury, or a stipendiary magistrate, i.e., a trained judge); if the fact-finders are a jury, before they retire they are given final instructions by the judge;
11. The magistrates return and give a verdict (and state no reason);
11a. If the verdict is "not guilty", then the defendant is discharged;
11b. If the verdict is "guilty", then:
11b.1. The court hears the facts from the prosecution (with no need to present evidence);
11b.2. Defence may intervene;
11b.3. Sentence.
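The ordering constraints in the phase list above lend themselves to a simple formal representation. As a minimal sketch (the encoding is invented, not taken from this book or from any cited system), the prescribed sequence for a contested case can be stored as an ordered template, and a proposed or observed schedule of steps checked against it:

```python
# Minimal sketch (invented encoding) of checking that the phases of a contested
# trial occur in the prescribed order.

PHASES_NOT_GUILTY = [
    "indictment",
    "plea",
    "adjournment",
    "prosecution_opening_speech",
    "prosecution_witnesses",
    "close_of_prosecution_case",
    "defence_witnesses",
    "defence_closing_speech",
    "verdict",
    "sentence",          # only reached on a guilty verdict
]

def in_prescribed_order(observed, template=PHASES_NOT_GUILTY):
    """True iff the observed phases appear as a subsequence of the template,
    i.e. no phase occurs earlier than the procedure allows."""
    it = iter(template)
    return all(phase in it for phase in observed)

print(in_prescribed_order(["indictment", "plea", "prosecution_witnesses", "verdict"]))  # True
print(in_prescribed_order(["defence_witnesses", "prosecution_witnesses"]))              # False
```

Richer AI treatments would use a temporal logic or a constraint network, as discussed in this entry; the subsequence check only captures the bare ordering.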

46 http://www.casesoft.com

How is the delivery of the evidence affected by the constraints on the temporal sequence of the phases of a trial? There are important implications for the possibility to introduce evidence. Consider for example employment tribunals in England and Wales (thus, we are not dealing now with criminal cases). The Applicant (i.e., the plaintiff) gives his statement, which is read aloud by himself, or silently by the Court. He is then examined by his barrister (he has one, unless he is representing himself), and then cross-examined by the Respondent's barrister. (If the Applicant is an employee, then the Respondent will typically be his employer.) Some new evidence, not found in the written statement and in the bundle of documentary evidence, may emerge when the Applicant is asked questions as a witness. Cross-examination of the Applicant is followed sometimes by his being subjected to re-examination by his own side's barrister, and then (also optionally) the Court may ask him questions. Then, all witnesses for the Applicant undergo (each in turn) the same cycle of giving their respective statement, being examined, cross-examined, possibly re-examined, and asked questions by the Court. Next, the same happens for the Respondent. A major problem for the Applicant arises if witnesses for the defence present new evidence when examined and cross-examined, as the Applicant's barrister may be unaware, on the spur of the moment, of what to ask next when such an item of evidence emerges, whereas the (former) employee being the Applicant would be quite able, cognitively but not procedurally in the trial, to ask such questions that would expose untruthful evidence when it is submitted by the defence's witnesses when examined or cross-examined. Procedural constraints prevent the Applicant himself from intervening (unless he has no barrister and is representing himself, which in other respects would be a big disadvantage), let alone making a further statement giving evidence in response to the defence's witnesses. (This is not formally forbidden, but in practice this is strongly and tacitly discouraged, because of how complicated the trial would become.) A further disadvantage for the Applicant is if, after he and his witnesses have finished giving evidence, the barrister for the defence submits some further item of documentary evidence (this may be very important, and deliberately withheld for the purposes of an "ambush"): the Court may criticise such a move, yet accept that the new evidence be added to the bundle. The Applicant's barrister may protest, or refrain from doing so if he or she deems that protesting would be impolitic. Even when summing up in the end, the barrister for the Applicant will be unable to introduce new information as evidence, even though the Applicant may have such information that is relevant or even crucial. It happens sometimes that by agreement between the parties, a witness for the Applicant will be able to give evidence after one or more witnesses of the Respondent, because the witness being late is cogently unable to come before (e.g., if he has to fly from abroad). Nevertheless, it is up to the parties and to the Court to agree about this. It may be of advantage to the Applicant, if the late witness for the Applicant will be asked questions (possibly, even by the Court)

that would make it possible to assess some evidence that had previously been introduced by a witness for the Respondent. This way, such evidence may be refuted that otherwise would not have been, because of procedural constraints. The situation with a late witness for the Applicant is such that procedural constraints on the temporal sequence are overridden because of the agreement between the parties, which enables that witness to give evidence after one or more of the witnesses for the Respondent. Out of courtesy, the Court may then instruct the defence that if the defence would like to call back its witnesses that had given evidence before, after the late witness of the Applicant, this request would be granted. The defence may then renounce this (perhaps in order not to be perceived as having been put at a disadvantage). Another way for time constraints to be involved is that the hearing at the tribunal is booked several months in advance, and one's barrister will have to be paid according to the expected length of the hearing. One tactic of the Respondent may be to cause their witnesses (including ones who do not really introduce important evidence) to use more time than expected, so that a new additional hearing, months away, will have to be booked, at which some more witnesses of the Respondent will give evidence, then the Applicant's barrister will sum up, and the Respondent's barrister will sum up. At this additional hearing, the Applicant will have to be silent, not being able to introduce more evidence. Booking another hearing may be beyond what is affordable to the more impecunious party, which oftentimes is an employee of a big corporation. This may in practice compel the Applicant to accept a settlement. Yet another problem is that sometimes employment tribunals (a president and two further members) are double-booked by the administration of the tribunals, and they themselves only learn about this early during the hearing. For the more impecunious party, this is a major burden, making it more likely that one more hearing will have to be booked. This, too, militates towards the more impecunious party being more likely to accept a settlement. All of this is interesting both legally, and for AI modelling. The temporal sequence conditions how the evidence can be introduced, and whether evidence can be given in reply. Techniques from AI can represent this. Yet, AI practitioners need to know about such procedural constraints. Importantly, there are fewer exclusionary rules on evidence at employment tribunals in England and Wales than there are on criminal evidence. In employment cases, a first deadline applies to the submission of which documentary evidence will go into the bundle. The solicitors of the two parties reach an agreement. Exceptionally (as seen earlier) some new document may be submitted during the hearing, subject to the discretion of the Court. Some time after the bundle of documentary evidence is finalised, the two parties exchange witness statements. This must be simultaneous, in order to avoid that a last-minute change is made in the statement that comes late, so that it would respond to some "surprise" in store in the statement of the other party that arrived early. Exceptionally, on the day after the exchange of statements it may happen that the solicitors for the defence claim to the solicitor of the Applicant that they forgot to email or

to fax one of the witness statements. The Applicant may refrain from protesting, considering that the Court may override the protest. All of this is fertile ground for AI modelling. Temporal constraints are so important, at a hearing, that they may make or break a case. We have described situations in which a constraint is not satisfied, and a standard arrangement is then adopted. This can be modelled in terms of contrary-to-duty obligations (q.v.).

Toulmin's model A widespread model of argument structure (Toulmin, 1958). It consists of the following parts: Data (the premises), Claim (the conclusion), Qualifier (the modality of how the argument holds), Warrant (support for the argument), Backing (support for the Warrant), and Rebuttal (an exception). See Section 3.2.

Traitor tracing A technique applied to pirate tracking software, within multimedia forensics. The term traitor tracing has previously been used also in the literature about cryptography. See Section 6.2.1.5.

Transvaluationism An account of vagueness proposed by philosopher Terry Horgan. Cf. loose talk (q.v.). Vagueness in statements given in court is usually recognised not to amount to untruthfulness. Horgan (2010, p. 67) states: The philosophical account of vagueness I call "transvaluationism" makes three fundamental claims. First, vagueness is logically incoherent in a certain way: it essentially involves mutually unsatisfiable requirements that govern vague language, vague thought-content, and putative vague objects and properties. Second, vagueness in language and thought (i.e., semantic vagueness) is a genuine phenomenon despite possessing this form of incoherence – and is viable, legitimate, and indeed indispensable. Third, vagueness as a feature of objects, properties, or relations (i.e., ontological vagueness) is impossible, because of the mutually unsatisfiable conditions that such putative items would have to meet. An important concept in Horgan's treatment is that of sorites sequence, an example of which is "a sequence of men each of whom has a tiny bit more hair on his head than his predecessor" (when applying vagueness to the descriptor bald). Horgan explains (2010, pp. 70–71): A second essential feature of vagueness is what I call "boundarylessness" – a term I adopt from Sainsbury (1990). This feature, which obtains with respect to a sorites sequence, involves the simultaneous satisfaction by the sequence of the following two conditions:

The Difference Condition: Initially in the sorites sequence there are items with a specific status and every predecessor of an item with this status has the same status. Eventually in the sequence there are items with the polar-opposite status, and every successor of an item with this status has the same status. No item in the sequence has both the initial status and the polar-opposite status.
The Transition Condition: There is no determinate fact of the matter about status-transitions in the sorites sequence.

Examples of polar-opposite statuses are baldness vs. nonbaldness, heaphood vs. non-heaphood, satisfying the predicate "is bald" vs. satisfying the expression "is not bald", truth vs. falsity.

The Transition Condition needs further conceptual unpacking. It involves, essentially, two conceptual aspects or dimensions, one individualistic and the other collectivistic.

The Individualistic Same-Status Principle (ISS Principle): Each item in the sorites sequence has the same status as its immediate neighbors. The Collectivistic Status-Indeterminacy Principle (CSI Principle): There is no correct overall distribution of statuses to the items in the sequence.

The ISS Principle is so called because it involves items in the sequence considered individually – each considered in relation to its immediate neighbors. The CSI Principle is so called because it involves the items in the sequence considered collectively. Both principles are essentially involved in the idea of boundarylessness – the idea of an absence of sharp boundaries.

TreeAge Pro Decision tree software, for performing a Litigation Risk Analysis. See Section 4.3.1.

Trial by mathematics Originally, the title of Tribe (1971), about the Bayesian approach to modelling judicial decision-making in criminal cases. Nevertheless, the phrase is likelier to occur in polemical contexts.

Triangulation A form of online auction fraud (see Section 6.2.3). "Involves three parties: the perpetrator, a consumer, and an online merchant. The perpetrator buys merchandise from an online merchant using stolen identities and credit card numbers. Then, the perpetrator sells the merchandise at online auction sites to unsuspecting buyers. Later, the police seize the stolen merchandise to keep for evidence, and the buyer and merchant end up the victims" (Wahab, 2004).

Triers of fact See factfinders. Jurors are lay triers of fact.

Trustworthiness question In argumentation studies, Walton's (1997) Appeal to Expert Opinion offered (ibid., pp. 211–225) an argumentation scheme for "Argument for Expert Opinion", then reproduced in Walton et al. (2008, pp. 381–382). See s.v. Expert opinion, Appeal to above. The expert source is E; the subject domain is S; and A is a proposition which E claims to be true (or false). The trustworthiness question is: "Is E personally reliable as a source?". It is articulated in three more detailed subquestions: "Is E biased?"; "Is E honest?"; "Is E conscientious?".

Truth maintenance system (TMS for short) Within artificial intelligence, such representation and search procedures that keep track of the reasoning steps of a logic system. "Nonmonotonic reasoning [q.v.], because conclusions must sometimes be reconsidered, is called defeasible; that is, new information may sometimes invalidate previous results. [...] In defeasible reasoning, the TMS preserves the consistency of the knowledge base, keeping track of conclusions that might later need to be questioned" (Luger & Stubblefield, 1998, p. 270).

Uncharged conduct or uncharged misconduct A kind of bad character evidence: such past behaviour for which no charges were brought. See character evidence.

Utility One theoretical approach to adjudication is in terms of utility: see Friedman (1997, pp. 277–278); Lempert (1977, pp. 1021, 1032–1041). Let there be two options: plaintiff (p) wins, i.e., the court finding for the plaintiff, or defendant (d) wins, i.e., the finding is for d.

It would seem wisest to select the option with the greater expected utility. The formulae are:

EU(p) = P(Πp) × U(p, Πp) + P(Πd) × U(p, Πd)

and

EU(d) = P(Πp) × U(d, Πp) + P(Πd) × U(d, Πd),

"where EU(p) and EU(d) represent the expected utilities of judgments for the plaintiff and the defendant, respectively; P(Πp) represents the probability that the facts are such that the plaintiff is entitled to judgment, and P(Πd) represents the comparable probability with respect to the defendant" (Friedman, p. 277). Of the two arguments of the (social) utility function U, the first one represents the winner ("the party who receives the judgment"), and the second one stands for the party that in truth deserves to win ("the party who is in fact entitled to judgment"). "Thus, for example, U(p, Πd) equals the social utility of a judgment for the plaintiff when the truth, if it were known, is such that the defendant should receive judgment. U(p, Πp) and U(d, Πd) must each have greater utility than U(p, Πd) and U(d, Πp); it is helpful to assume that the first pair has positive utility and the second pair has negative utility" (ibid., pp. 277–278). The standard of persuasion O(Πp) is the degree of confidence at which EU(p) = EU(d). A judgment that the plaintiff wins is optimal "only if the fact-finder's degree of confidence in the plaintiff's case is at least as great as this level" (ibid., p. 278), if it is a civil case.

O(Πp) = P(Πp)/(1 − P(Πp)) = (U(d, Πd) − U(p, Πd))/(U(p, Πp) − U(d, Πp))
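A small numeric illustration may help (the utility values are invented, not Friedman's). With symmetric utilities the threshold odds come out at 1, i.e. the 'more likely than not' standard discussed below, whereas making a judgment against a truly innocent defendant far more costly pushes the required degree of confidence up:

```python
# Worked illustration of the threshold formula above (invented utility values,
# not Friedman's). States: PI_P = plaintiff in fact entitled, PI_D = defendant
# in fact entitled. U[(judgment, true_state)] is the social utility.

def threshold_odds(U):
    """O(PI_P) at which EU(p) = EU(d):
    (U(d, PI_D) - U(p, PI_D)) / (U(p, PI_P) - U(d, PI_P))."""
    return (U[("d", "PI_D")] - U[("p", "PI_D")]) / (U[("p", "PI_P")] - U[("d", "PI_P")])

# Civil-style symmetric utilities: correct judgments +1, incorrect judgments -1.
civil = {("p", "PI_P"): 1, ("d", "PI_D"): 1, ("p", "PI_D"): -1, ("d", "PI_P"): -1}
odds = threshold_odds(civil)
print(odds, odds / (1 + odds))   # 1.0 and 0.5: the balance-of-probabilities standard

# Criminal-style utilities: a judgment against a truly innocent defendant is far worse.
criminal = {("p", "PI_P"): 1, ("d", "PI_D"): 1, ("p", "PI_D"): -10, ("d", "PI_P"): -1}
odds = threshold_odds(criminal)
print(odds, odds / (1 + odds))   # 5.5 and about 0.85: a much higher confidence is required
```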

In contrast, in a criminal case the standard is beyond a reasonable doubt. The negative utility of wrongly convicting an innocent, U(p, Πd), "far exceeds any of the other utilities in magnitude" (ibid., p. 278). In civil cases, the usual conception is that U(p, Πp) = U(d, Πd) and that U(p, Πd) = U(d, Πp). "This means that the standard of persuasion, expressed in odds, equals 1, or 0.5 expressed as a probability. This, of course, is the familiar 'more likely than not', or 'balance of probabilities', standard" (ibid., p. 278).

Virtopsy A computational technique developed by a team in Bern, Switzerland, for carrying out "a virtual autopsy": information acquired through post mortem imaging prior to autopsy is often used to plan the autopsy, confirm autopsy findings and allow for a second look if further questions arise during the forensic investigation. See Chapter 9.

Voir dire The jury selection process, with safeguards: the parties can have prospective jurors rejected, as a safeguard intended to weed out such jurors that are perceived to be prone to be biased. "Courts in the USA permit attorneys much more latitude in jury selection (voir dire) than do criminal court procedures world-wide" (Cutler & Penrod, p. 208). Nevertheless, "US federal courts and

many state courts (e.g. Massachusetts, California) perform the most perfunctory voir dire and do not permit attorneys to ask questions about jurors' attitudes. Indeed, judges in these courts are not obligated to permit attorneys to ask any questions during voir dire" (ibid.). This is a severe limitation on voir dire as a safeguard, and attorneys are in a sense forced to rely on stereotypes about categories of prospective jurors, rather than on informed evaluations. Moreover, voir dire also applies to the acceptance of an expert witness being opposed by the opposing attorney. In the words of Knott (1987, p. 14):

The opposing attorney will ask questions that will show that you have limited expertise in the specific field at hand, and therefore your testimony should be limited or disallowed. The attorney is really trying to prevent certain opinions from being introduced and is doing it on the grounds that your expertise does not extend into that area. He or she will ask you questions and, on the basis of your answers, will move to reject you as an expert. Note: You will not be allowed to say anything more in your defense. The judge will assume the answers you gave during voir dire were complete. Your attorney may ask you additional questions to clear up the confusion, but don't count on it. Note that if the opposing attorney is successful, he or she may have destroyed you and your client's case.

Weight (evidential) The probative value of the evidence.

Wigmore Charts A graphic method of structuring legal arguments, currently conspicuous in some more formal approaches within legal evidence scholarship; first introduced by American jurist John Henry Wigmore in the Illinois Law Review, 8 (1913), 77. See Section 3.2.

Witness: two-witness rule Mandated by Biblical law for capital cases: two eyewitnesses are necessary, and circumstantial evidence or other evidence is not valid. This rule has been influential. Bernard Jackson, who discussed the matter at length in Jackson (1977), explains (Jackson, 1990, p. 18):

[T]he two-witness rule of the Bible has been widely adopted in countries influenced by Canon law, as indeed have some of the necessary means of avoiding its rigours. When the medieval Canon lawyers sought to construct an institution of corroboration by similar fact evidence (testes singulares), they justified their argument by analysis of the facts of the story of Susannah, found in the Apocrypha to the Hebrew Bible. True enough, they said, Susannah could not be rightly convicted when one elder said that she committed adultery under an oak while the other said it was under a holm tree. But that was only because the two elders had claimed to have observed the event together. Had they not made this claim, their evidence would not have been regarded as logically contradictory: for though adultery may not be committed simultaneously under two different trees, it may be so committed successively. Moreover, we all know (so the Canon law doctors argued) that adultery with the same lover is an act which is prone to be repeated – factum iterabile – unlike some other crimes against Canon law, such as the murder of a Bishop (especially the same Bishop). I have traced the use of this argument for corroboration by similar fact evidence from a Canonist Summa of the mid-12th century, written in Bologna, to English treason trials of the 17th century, and a famous divorce case of the same period, which then became one of the principal foundations for the so-called Moorov doctrine which Lord Hailsham so fully read into his speech in the House of Lords in the modern leading case of Kilbourne.

In Jewish law, this argument would not have been valid for conviction in a criminal case, and two eyewitnesses would have still been necessary, who witnessed the same event and reported about it with no contradiction.

Witness vs. expert testimonies Evidence as given by individuals who have knowledge of specific details in a legal narrative at hand, as opposed to evidence given by professionals (expert witnesses) based on their professional expertise.

References

AAAI. (2002). Ontologies and the semantic web: Papers from the AAAI workshop, Edmonton, AB, Canada, July 2002. American Association for Artificial Intelligence. Edmonton, AB: AAAI Press.
Aamodt, A., Kvistad, K. A., Andersen, E., Lund-Larsen, J., Eine, J., Benum, P., et al. (1999). Determination of the Hounsfield value for CT based design of custom femoral stems. The Journal of Bone & Joint Surgery,47 81-B(1), 143–147. http://web.jbjs.org.uk/cgi/reprint/81-B/1/143.pdf
Aarne, A., & Thompson, S. (1928). The types of the folktale: A classification and bibliography (A. Aarne, Trans. and S. Thompson, Enlarged) (FF Communications, Vol. 74.) Helsinki: Suomalainen Tiedeakatemia = Academia Scientiarum Fennica, 1928. 2nd revision: (FF Communications, Vol. 75, No. 184), 1961. Reprints: 1973, 1964, 1981. Another reprint: B. Franklin, New York, 1971. Aarne's German original was Verzeichnis der Märchentypen.
Abaci, T., Mortara, M., Patane, G., Spagnuolo, M., Vexo, F., & Thalmann, D. (2005). Bridging geometry and semantics for object manipulation and grasping. In Proceedings of workshop towards Semantic Virtual Environments (SVE 2005). Also, Report VRLAB CONF 2005 021. Lausanne: Virtual Reality Lab at the Swiss Federal Institute of Technology. http://infoscience.epfl.ch/getfile.py?recid=99017&mode=best
Abbasi, A. (2010, July/August). Intelligent feature selection for sentiment classification. In H. Chen (Ed.), AI and opinion mining, part 2, under the rubric Trends & Controversies. IEEE Intelligent Systems, 25(4), 75–79.
Abelson, R. P. (1979). Differences between belief and knowledge systems. Cognitive Science, 3, 355–366.
Abelson, R. P. (1995). Statistics as a principled argument. Hillsdale, NJ: Lawrence Erlbaum Associates.
Abiteboul, S., Fischer, P. C., & Schek, H.-J. (Eds.). (1989). Nested relations and complex objects in databases. Lecture Notes in Computer Science, Vol. 361. Berlin: Springer.
Abrahams, A. S., Eyers, D. M., & Bacon, J. M. (2009). Structured storage of legal precedents using a minimal deontic ontology, for computer assisted legal document querying. International Journal of Metadata, Semantics and Ontologies, 4(3), 196–211.
Ackerman, M. J. (1995). Clinician's guide to child custody evaluations. New York: Wiley.
Ackermann, W. (1956). Begründung einer strengen Implikation. Journal of Symbolic Logic, 21(2), 113–128. http://www.jstor.org/stable/2268750
Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9, 147–169.

47 This is the journal of the British Editorial Society of Bone and Joint Surgery.


Aczel, P., Israel, D., Katagiri, Y., & Peters, S. (Eds.). (1993). Situation theory and its applications (Vol. 3). CSLI Lecture Notes, Vol. 37. Stanford, CA: Center for the Study of Language and Information (CSLI).48 Adderley, R., & Musgrove, P. (2003a). Modus operandi modeling of group offending: A case study. Section 6.12 In J. Mena (Ed.), Investigative data mining for security and criminal detection (pp. 179–195). Amsterdam & Boston: Butterworth-Heinemann (of Elsevier). Adderley, R., & Musgrove, P. (2003b). Modeling the behavior of offenders who commit serious sexual assaults: A case study. Section 12.4 In J. Mena (Ed.), Investigative data mining for security and criminal detection (pp. 348–362). Amsterdam & Boston: Butterworth-Heinemann (of Elsevier). Adderley, R., & Musgrove, P. B. (2003c). Clustering burglars: A case study. Section 1.15 In J. Mena (Ed.), Investigative data mining for security and criminal detection (pp. 24–37). Amsterdam & Boston: Butterworth-Heinemann (of Elsevier). Adler, J. R. (Ed.). (2004). Forensic psychology: Concepts, debates and practice. Cullompton: Willan Publishing (distrib. Routledge). 2nd edition: 2010. Adriaans, P., & Zantinge, D. (1996). Data mining. Reading, MA: Addison Wesley. Agbaria, A., & Friedman, R. (2005). A replication- and checkpoint-based approach for anomaly-based intrusion detection and recovery. At the second international workshop on Security in Distributed Computing Systems (SDCS). In Proceedings of the 25th International Conference on Distributed Computing Systems Workshops (ICDCS 2005 Workshops), 6–10 June 2005, Columbus, OH. IEEE Computer Society 2005, pp. 137–143. Aggarwal, C. C. (2011). Social network data analysis. Berlin: Springer. Aghayev, E., Ebert, L. C., Christe, A., Jackowski, C., Rudolph, T., Koval, J., et al. (2008). CT based navigation for post-mortem biopsy: A feasibility study. Journal of Forensic Legal Medicine, 15(6), 382–387. Aghayev, E., Thali, M. J., Sonnenschein, M., Jackowski, C., Dirnhofer, R., & Vock, P. (2007). Post-mortem tissue sampling using computed tomography guidance. Forensic Science International, 166(2/3), 199–203. Agrawal, R., Imielinski, T., & Swami, A. (1993). Mining association rules between sets of items in large databases. In Proceedings of the 1993 ACM SIGMOD international conference on management of data (SIGMOD’93), Washington, DC, pp. 207–216. Agrawal, R., & Srikant, R. (1994). Fast algorithms for mining association rules. In Proceedings of the 20th international conference on Very Large Data Bases (VLDB’94), Santiago, Chile, pp. 487–499. Agur, A., Ng-Thow-Hing, V., Ball, K., Fiume, E., & McKee, N. (2003). Documentation and three-dimensional modelling of human soleus muscle architecture. Clinical Anatomy, 16(4), 285–293. Ahmed, B. (2007). Role players: Baria Ahmed looks at the many functions the expert may fulfil in ADR. In Expert Witness Supplement to The New Law Journal, 157(7294) (London, 26 October 2007), 1485. Aikenhead, M. (1996). The uses and abuses of neural networks in law. Santa Clara Computer and High Technology Law Journal, 12(1), 31–70. Aitken, C. (1995). Statistics and the evaluation of evidence for forensic scientists. Chichester: Wiley. Aitken, C., & Taroni, F. (2004). Statistics and the evaluation of evidence. Chichester: Wiley. Aitken, C., Taroni, F., & Garbolino, P. (2003). A graphical model for the evaluation of cross-transfer evidence in DNA profiles. Theoretical Population Biology, 63(3), 179–190. Aked, J. P. (1994). Individual constancies in written language expression. Ph.D. thesis, University of Glasgow.

48 These are the Proceedings of the Third International Conference on Situation Theory and Its Applications, Oiso, Japan, November 1991.

Akin, L. L. (2004). Blood spatter interpretation at crime and accident scenes: A step by step guide for medicolegal investigators. Austin, TX: On Scene Forensics. Akin, L. L. (2005). Blood interpretation at crime scenes. The Forensic Examiner, Summer 2005, 6–10. http://www.onsceneforensics.com/PDFs%20Forms/ACFEI_BLOOD_SPATTER_AKIN. pdf Albonetti, C. (1987). Prosecutorial discretion: The effects of uncertainty. Law and Society Review, 21, 291–313. doi://10.2307/3053523. Alchourrón, C. E., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. The Journal of Symbolic Logic, 50, 510–530. Alcoff, L. M. (2010). Sotomayor’s reasoning. The Southern Journal of Philosophy, 48(1), 122–138. Aleven, V., & Ashley, K. D. (1997). Evaluating a learning environment for case-based argumen- tation skills. In Proceedings of the sixth international conference on artificial intelligence and law. New York: ACM Press, pp. 170–179. Alexander, R. (1992). Mediation, violence and the family. Alternative Law Journal, 17(6), 276–299. Alexander, R. (2000). Reflections on gender in family law decision making in Australia.Ph.D thesis, Faculty of Law, Monash University, Clayton, VIC. Alexy, R. (1989). A theory of legal argumentation. Oxford: Clarendon Press. Alheit, K. (1989). Expert systems in law: Issues of liability. In A. A. Martino (Ed.), Pre- proceedings of the third international conference on “Logica, informatica, diritto: Legal expert systems”, Florence, 1989 (2 vols. + Appendix) (Vol. 2, pp. 43–52). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche. Alker, H.R., Jr. (1996). Toynbee’s Jesus: Computational hermeneutics and the continuing pres- ence of classical Mediterranean civilization. Chapter 3 In H. R. Alker Jr. (Ed.), Rediscoveries and reformulations: Humanistic methodologies for international studies. (Cambridge Studies in International Relations, Vol. 41, pp. 104–143) Cambridge: Cambridge University Press. Extracted and revised from Alker et al. (1985). Alker, H. R., Jr., Lehnert, W. G., & Schneider, D. K. 1985. Two reinterpretations of Toynbee’s Jesus: Explorations in computational hermeneutics. In G. Tonfoni (Ed.), Artificial intelligence and text-understanding: Plot units and summarization procedures (pp. 49–94). Quaderni di Ricerca Linguistica, Vol. 6. Parma, Italy: Edizioni Zara. Al-Kofahi, K., Tyrrell, A., Vachher, A., & Jackson, P. (2001). A machine learning approach to prior case retrieval. In Proceedings of the eighth International Conference on Artificial Intelligence and Law (ICAIL’01), St. Louis, MO. New York: ACM Press, pp. 89–93. Allen, J. F. (1983a). Recognizing intentions from natural language utterances. Chapter 2 In M. Bradie & R. C. Berwick (Eds.), Computational models of discourse (pp. 108–166). Cambridge, MA: MIT Press. Allen, J. F. (1983b). Maintaining knowledge about temporal intervals. Communications of the ACM, 26, 832–843. Allen, J. F. (1984). Towards a general theory of action and time. Artificial Intelligence, 23(2), 123–154. Allen, J. F. (1991). Time and time again: The many ways to represent time. International Journal of Intelligent Systems, 6, 341–355. Allen, M., Bench-Capon, T., & Staniford, G. (2000). A multi-agent legal argument genera- tor. In Proceedings of the eleventh international workshop on Database and Expert Systems Applications (DEXA 2000), September 2000, Greenwich, London. New York: IEEE Computer Society, pp. 1080–1084. Allen, R., & Redmayne, M. (Eds.). (1997). 
Bayesianism and Juridical Proof, special issue, The International Journal of Evidence and Proof, 1, 253–360. (London: Blackstone) Allen, R. J. (1986). A reconceptualization of civil trials. Boston University Law Review, 66, 401–437. Allen, R. J. (1991). The nature of juridical proof. Cardozo Law Review, 13, 373–422. Allen, R. J. (1992). The hearsay rule as a rule of admission. Minnesota Law Review, 76, 797–812.

Allen, R. J. (1994). Factual ambiguity and a theory of evidence. Northwestern University Law Review, 88, 604–640. Allen, R. J. (1997). Rationality, algorithms and juridical proof: A preliminary inquiry. International Journal of Evidence and Proof, 1, 254–275. Allen, R. J. (2000). Clarifying the burden of persuasion and Bayesian decision rules: A response to Professor Kaye. International Journal of Evidence and Proof, 4, 246–259. Allen, R. J. (2001a). Artificial intelligence and the evidentiary process: The challenges of formalism and computation. Artificial Intelligence and Law, 9(2/3), 99–114. Allen, R. J. (2001b). Clarifying the burden of persuasion and Bayesian decision rules: A response to Professor Kaye. International Journal of Evidence and Proof, 4, 246–259. Allen, R. J. (2003). The error of expected loss minimization. Law, Probability & Risk, 2, 1–7. Allen, R. J. (2008a). Explanationism all the way down. Episteme, 3(5), 320–328. Allen, R. J. (2008b). Juridical proof and the best explanation. Law & Philosophy, 27, 223–268. Allen, R. J., & Lively, S. (2003 [2004]). Burdens of persuasion in civil cases: Algorithms v. explanations. MSU Law Review, 2003, 893–944. Allen, R. J., & Pardo, M. S. (2007a). The problematic value of mathematical models of evidence. Journal of Legal Studies, 36, 107–140. Allen, R. J., & Pardo, M. S. (2007b). Probability, explanation and inference: A reply. International Journal of Evidence and Proof, 11, 307–317. Allen, R. J., & Pardo, M. S. (2008). Juridical proof and the best explanation. Law & Philosophy, 27, 223–268. Almirall, J. (2001). Manslaughter caused by a hit-and-run: Glass as evidence of association. Chapter 7 In: M. M. Houck (Ed.), Mute witnesses: Trace evidence analysis. London: Academic. ALS News. (2000). Laser time slicing promises ultrafast time resolution. ALS News, 156 (July 12, 2000, last updated on September 30, 2002), report posted at The Advanced Light Source website at http://www.als.lbl.gov/als/science/sci_archive/femto2.html Retrieved in March 2007. Alston, W. P. (1989). Epistemic justification. Ithaca, NY: Cornell University Press. Alston, W. P. (2005). Perception and representation. Philosophy and Phenomenological Research, 70, 253–289. Altman, A., & Tenneholtz, M. (2005). Ranking systems: The PageRank axioms. In EC ’05: Proceedings of the 6th ACM conference on Electronic Commerce (EC’05), Vancouver, Canada, 5–8 June 2005, pp. 1–8. http://stanford.edu/~epsalon/pagerank.pdf Alur, R, Henzinger, T. A., & Kupferman, O. (2002). Alternating-time temporal logic. Journal of the ACM, 49(5), 672–713. Alvarado, S. J. 1990. Understanding editorial text: A computer model of argument comprehension. Boston and Amsterdam: Kluwer. Cf. an earlier version at ftp://ftp.cs.ucla.edu/tech-report/198_- reports/890045.pdf Alvesalo, A. (2003). Economic crime investigators at work. Policing & Society, 13(2), 115–138 (Taylor & Francis). Amgoud, L., Caminada, M., Cayrol, C., Doutre, S., Lagasquie-Schiex, M.-C., Modgil, S., et al. (2004). Argument-based inference. In J. Fox (Ed.), Theoretical Framework for argumentation (pp. 3–46). ASPIC Consortium. Amgoud, L., & Maudet, N. (2002). Strategical considerations for argumentative agents. In S. Benferhat & E. Giunchiglia (Eds.), Proceedings of the 9th international workshop on Non-monotonic Reasoning (NMR) (pp. 399–407). Toulouse, France: IRIT. Amigoni, F., & Continanza, L. (2012, in press). A lattice-based approach to the problem of recruitment in multiagent systems. Computational Intelligence. 
Amin, R., Bramer, M., & Emslie, R. (2003). Intelligent data analysis for conservation: Experiments with rhino horn fingerprint identification. Knowledge Based Systems, 16(5–6). Amos, W. (1985). The originals: Who’s really who in fiction. London: Cape. Anderson, A. R. (1960). Completeness theorems for the system E of entailment and EQ of entailment with quantification. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 6, 201–216.

Anderson, A. R. (1967). Some nasty problems in the formal logic of ethics. Nous, 1, 354–360. Anderson, A. R., & Belnap, N. D. (1975). Entailment: The logic of relevance and necessity (Vol. 1). Princeton, NJ: Princeton University Press. Anderson, A. R., & Dunn, J. M. (1992). Entailment: The logic of relevance and necessity (Vol. 2). Princeton, NJ: Princeton University Press. Anderson, G. S. (2005). Forensic entomology. In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Anderson, M., & Perlis, D. (2005). Logic, self-awareness and self-improvement: The metacogni- tive loop and the problem of brittleness. Journal of Logic and Computation, 15(1), 21–40. Anderson, T., Schum, D., & Twining, W. (2005). Analysis of evidence: How to do things with facts. Based on Wigmore’s science of judicial proof. Cambridge, UK: Cambridge University Press. Anderson, T., & Twining, W. (1991). Analysis of evidence: How to do things with facts. (With a teacher’s manual.) London: Weidenfeld & Nicolson49; Boston: Little, Brown & Co., 1991; Evanston, IL: Northwestern University Press, 1998. [The 2nd edn. (extensively revised) is Anderson et al. (2005).] Anderson, T. J. (1999a). The Netherlands criminal justice system: An audit model of decision- making. Chapter 4 In M. Malsch & J. F. Nijboer (Eds.), Complex cases: Perspectives on the Netherlands criminal justice system (pp. 47–67). (Series: Criminal Sciences). Amsterdam: THELA THESIS. Anderson, T. J. (1999b). On generalizations: A preliminary exploration. South Texas Law Review, 40. André, E., Rist, T., & Müller, J. (1998). Integrating reactive and scripted behaviors in a life- like presentation agent. In K. P. Sycara & M. Wooldridge (Eds.), Proceedings of the second international conference on autonomous agents (pp. 261–268). New York: ACM Press. [Anon.] (2001). Rediscovery of long lost birds. (Rediscovery of rare birds makes us aware of the importance of micro-habitats for saving an endangered species.) Deccan Herald (Bangalore, India), Sunday, October 14, 2001. http://www.deccanherald.com/deccanherald/oct14/sh5.htm Anouncia, S. M., & Saravanan, R. (2007). Ontology based process plan generation for image processing. International Journal of Metadata, Semantics and Ontologies, 2(3), 211–222. Antoniou, G. (1997). Nonmonotonic reasoning with incomplete and changing information. Cambridge, MA: The MIT Press. Antoniou, G., Billington, D., Governatori, G., Maher, M. J., & Rock, A. (2000). A flexible framework for defeasible logics. In Proceedings of the 17th national conference on artificial intelligence and 12th conference on innovative applications of artificial intelligence, Austin, TX. Cambridge, MA: MIT Press for the AAAI Press, pp. 405–411. Antoniou, G., Billington, D., & Maher, M. J. (1999). The analysis of regulations using defeasible rules. In Proceedings of the 32nd Hawaii international conference on systems science.Maui, Hawaii, p. 225. Appelbaum, P. S., & Kemp, K. N. (1982). The evolution of commitment law in the nineteenth century: A reinterpretation. Law and Human Behavior, 6(3/4), 343–354. Appling, D. S., & Riedl, M. O. (2009). Representations for learning to summarise plots. In Intelligent narrative technologies, II: Papers from the AAAI spring symposium, 2009. Åqvist, L. (1967). Good Samaritans, contrary-to-duty imperatives, and epistemic obligations. Noûs, 1, 361–379. Åqvist, L. (1984). Deontic logic. In D. Gabbay & F. 
Guenthner (Eds.), Handbook of philosophical logic, Vol. 2: Extensions of classical logic (pp. 605–714). Dordrecht: Reidel (Kluwer). Åqvist, L. (1986). Introduction to deontic logic and the theory of normative systems. (Indices. Monographs in Philosophical Logic and Formal Linguistics, 4.) Naples, Italy: Bibliopolis.

49 There was a preliminary circulation draft already in 1984.

Åqvist, L. (1992). Towards a logical theory of legal evidence: Semantic analysis of the Bolding- Ekelöf degrees of evidential strength. In A. A. Martino (Ed.), Expert systems in law (pp. 67–86). Amsterdam: North-Holland. Arbabi, E., Boulic, R., & Thalmann, D. (2007a). A fast method for finding maximum range of motion in the hip joint. In Computer assisted orthopaedic surgery: 7th annual meeting of CAOS, international proceedings (pp. 497–500). [CAOS’07.] Germany: Pro BUSINESS, on behalf of the International Society for Computer Assisted Orthopaedic Surgery50. Also, Report VRLAB- CONF-2007-139. Lausanne: Virtual Reality Lab at the Swiss Federal Institute of Technology. http://infoscience.epfl.ch/getfile.py?recid=109304&mode=best Arbabi, E., Boulic, R., & Thalmann, D. (2007b). A fast method for finding range of motion in the human joints. In Proceedings of the 29th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBS’07),51 Lyon, France, August 23–26, 2007. Also, Report VRLAB-CONF-2007-140. Lausanne: Virtual Reality Lab at the Swiss Federal Institute of Technology. Ardrey, B. (1994). Mass spectrometry in the forensic sciences. (VG Monographs in Mass Spectrometry.) Manchester: Fisons Instruments. Argamon, S., Bloom, K., Esuli, A., & Sebastiani, F. (2009). Automatically determining attitude type and force for sentiment analysis. In Z. Vetulani & H. Uszkoreit (Eds.), Responding to information society challenges: New advances in human language technologies (pp. 218–231). Lecture Notes in Artificial Intelligence, Vol. 5603. Berlin: Springer. Arnold, K. (1971). Johannes Trithemius (1462–1516). Würzburg, West Germany: Kommissionsverlag Ferdinand Schoningh. 2nd edn. (1991). Aron, J. (2012, January 14). Software could spot face-changing criminals. New Scientist, 213(2847), 18–19. Arrigoni Neri, M., & Colombetti, M. (2009). Ontology-based learning objects search and courses generation. In E. Nissan, G. Gini, & M. Colombetti (Eds.), Marco Somalvico memorial issue, Special issue of Applied Artificial Intelligence, 23(3), 233–260. Arthaber, A. (1929). Dizionario comparato di proverbi e modi proverbiali in sette lingue: italiana, latina, francese, spagnola, tedesca, inglese, greca antica. Milan: Hoepli (repr. 1972; curr. repr. 1991). Artikis, A., Sergot, M., & Pitt, J. (2003). An executable specification of an argumentation protocol. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003 (pp. 1–11). New York: ACM Press. Arutyunov. (1963). See citations inside Keh (1984). Asaro, C., Nissan, E., & Martino, A. A. (2001). DAEDALUS: An integrated tool for the Italian examining magistrate and the prosecutor. A sample session: Investigating an extortion case. Computing and Informatics, 20(6), 515–554. Asher, N., & Sablayrolles, P. (1995). A typology and discourse semantics for motion verbs and spatial PPs in French. Journal of Semantics, 12(2), 163–209. Ashley, K. (1991). Modeling legal argument: Reasoning with cases and hypotheticals. Cambridge, MA: The MIT Press (Bradford Books). Astrova, I., & Kalja, A. (2008). Storing OWL ontologies in SQL3 object-relational databases. In Proceedings of the eighth conference on applied informatics and communications, Rhodes, Greece, August 20–22, 2008, pp. 99–103. Atib, H., & Zeleznikow, J. (2005). A methodology for constructing decision support systems for crime detection. In R. Khosla, R. J. Howlett, & L. C. 
Jain (Eds.), Knowledge-based intelligent information and engineering systems: Ninth international conference, KES 2005, Melbourne,

50 http://www.caos-international.org/
51 http://www.embc07.ulster.ac.uk/

Australia, September 14–16, 2005, Proceedings, Part IV (pp. 823–829). (Lecture Notes in Computer Science, Vol. 3684.) Berlin: Springer. Atkinson, J. M, & Drew, P. (1979). Order in court: The organization of verbal interaction in judicial settings. Atlantic Highlands, NJ: Humanities Press. Atkinson, K., Bench-Capon, T., & McBurney, P. (2005a). Generating intentions through argumen- tation. In F. Dignum, V. Dignum, S. Koenig, S. Kraus, & M. Wooldridge (Eds.), Proceedings of the fourth international joint conference on Autonomous Agents and Multi-agent Systems (AAMAS 2005), Utrecht, The Netherlands (pp. 1261–1262). New York: ACM Press. Atkinson, K., Bench-Capon, T., & McBurney, P. (2005b). A dialogue game protocol for multi- agent argument over proposals of action. In K. Sycara & M. Wooldridge (Eds.), Argumentation in Multi-Agent Systems, special issue of the Journal of Autonomous Agents and Multi-Agent Systems, 11(2), 153–171. Atkinson, K., Bench-Capon, T., & McBurney, P. (2005c). Persuasive political argument. In F. Grasso, C. Reed, & R. Kibble (Eds.), Proceedings of the fifth international workshop on Computational Models of Natural Argument (CMNA 2005),atIJCAI 2005, Edinburgh, Scotland. Atkinson, K., & Bench-Capon, T. J. M. (2007a). Argumentation and standards of proof. In Proceedings of the 11th International Conference on Artificial Intelligence and Law (ICAIL 2007), Stanford, CA, June 4–8, 2007. New York: ACM Press, pp. 107–116. Atkinson, K., & Bench-Capon, T. J. M. (2007b). Practical reasoning as presumptive argumen- tation using action based alternating transition systems. Artificial Intelligence, 171(10–15), 855–874. Aubel, A., & Thalmann, D. (2000). Realistic deformation of human body shapes. In Proceedings of computer animation and simulation 2000, Interlaken, Switzerland, 2000 (pp. 125–135). Posted at http://vrlab.epfl.ch/Publications of the Virtual Reality Lab at the Swiss Federal Institute of Technology in Lausanne. Aubel, A., & Thalmann, D. (2001). Efficient muscle shape deformation. In N. Magnenat- Thalmann & D. Thalmann (Eds.), Deformable avatars (pp. 132–142). Dordrecht, The Netherlands: Kluwer. Aubel, A., & Thalmann, D. (2005). MuscleBuilder: A modeling tool for human anatomy. Journal of Computer Science and Technology.Postedathttp://vrlab.epfl.ch/Publications of the Virtual Reality Lab at the Swiss Federal Institute of Technology in Lausanne. Audi, R. (1994). Dispositional beliefs and dispositions to believe. Noûs, 28(4), 419–434. August, S. (1991). ARIEL: An approach to understanding analogies in arguments. Technical Report 910051, Computer Science Department. Los Angeles, CA: University of California, Los Angeles. ftp://ftp.cs.ucla.edu/tech-report/1991-reports/910051.pdf Aulsebrook, W. A., Iscan, M. Y., Slabbert, J. H., & Becker, P. (1995). Superimposition and recon- struction in forensic facial identification: A survey. Forensic Science International, 75(2/3), 101–120. Aumann, R. J. (1987). Correlated equilibrium as an expression of Bayesian rationality. Econometrica, 55, 1–19. Aune, B. (1975). Vendler on knowledge and belief. In K. Gunderson (Ed.), Language, mind, and knowledge (pp. 391–399). (Minnesota Studies in the Philosophy of Science, 7). Minneapolis, MN: University of Minnesota Press. Aussenac-Gilles, N., & Sörgel, D. (2005). Text analysis for ontology and terminology engineering. Applied Ontology, 1(1), Amsterdam: IOS Press, pp. 35–46. Avcıba¸s, I.,˙ Bayram, S., Memon, N., Sankur, B., & Ramkumar, M. (2004). 
A classifier design for detecting image manipulations. In 2004 International Conference on Image Processing, ICIP ’04 (Vol. 4, pp. 2645–2648). Avery, J., Yearwood, J., & Stranieri, A. (2001). An argumentation based multi-agent system for eTourism dialogue. In A. Abraham & M. Köppen (Eds.), Hybrid information systems: Proceedings of the first international workshop on Hybrid Intelligent Systems (HIS 2001), Adelaide, Australia, December 11–12, 2001 (pp. 497–512). Advances in Soft Computing series. Heidelberg, Germany: Physica-Verlag (of Springer-Verlag). 1130 References

Aylett, R., Louchart, S., Tychsen, A., Hitchens, M., Figuereido, R., & Delgado Mata, C. (2008). Managing emergent character-based narrative. In INTETAIN ’08: Proceedings of the 2nd international conference on INtelligent TEchnologies for Interactive EnterTAINment. Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (ICST), Brussels, Belgium. New York: ACM. Aziz Bek. (1991 [1933–1937]). Intelligence and Espionage in Lebanon, Syria and Palestine during the World War (1913–1918), Hebrew trans., ed. E. Tauber. (‘Iyyunim ba-Machtarot u-va-Meri, vol. 7.) Ramat-Gan, Israel: Bar-Ilan University Press, and Tel-Aviv: [Originally, the MS (either originally in Turkish and translated, or ghost-written) was published in Arabic, ed. F. Midani, in the Beirut newspaper Al H. arar¯ (1932), then in book form (1933); further memoirs appeared in the Beirut newspaper S. awt. Al H. arar¯ (1936), then in book form (1937). Stauber’s annotated Hebrew includes an introduction.] Azuelos-Atias, S. (2007). A pragmatic analysis of legal proofs of criminal intent. Amsterdam: Benjamins. Baader, F., Calvanese, D., McGuinness, D. L., Nardi, D., & Patel-Schneider, P. F. (Eds.). (2003). The description logic handbook: Theory, implementation, and applications. Cambridge, England: Cambridge University Press. Baber, C. (2010). Distributed cognition at the crime scene. AI & Society, 25, 423–432. doi://10.1007/s00146-010-0274-6 Backstrom, L., Huttenlocher, D., Kleinberg, J., & Lan, X. (2006). Group formation in large social networks: Membership, growth, and evolution. In Proceedings of the 12th ACM SIG KDD international conference on knowledge discovery and data mining (pp. 44–54). Backway, H. (2007). Video replacing identity parades. News Shopper, Bexley edition (South East London), 28 February, p. 8. Baddeley, A. D. (1979). Applied cognitive and cognitive : The case of face recognition. In L. G. Nilsson (Ed.), Perspectives on memory research. Hillsdale, NJ: Lawrence Erlbaum Associates. Badiru, A. B., Karasz, J. M., & Holloway, R. T. (1988). Arest: [sic] Armed robbery eidetic suspect typing expert system. Journal of Police Science and Administration, 16(3), 210–216. Baeza-Yates, R., & Ribeiro-Neto, B. (1999). Modern information retrieval. Boston: Addison Wesley. Bain, W. M. (1986). Case-based reasoning: A computer model of subjective assessment.Ph.D. thesis. New Haven, CT: Computer Science Department, Yale University. Bain, W. M. (1989a). JUDGE. In C. K. Riesbeck & R. C. Schank (Eds.), Inside case-based reasoning (pp. 93–140). Hillsdale, NJ: Lawrence Erlbaum Associates. Bain, W. M. (1989b). MICROJUDGE.InC.K.Riesbeck&R.C.Schank(Eds.),Inside case-based reasoning (pp. 141–163). Hillsdale, NJ: Lawrence Erlbaum Associates. Bainbridge, D. (1991). CASE: Computer assisted sentencing in magistrates’ courts. Paper presented at the BILETA Conference 1991. Balding, D. J. (2005). Weight-of-evidence for forensic DNA profiles. Chichester: Wiley. Balding, D. J., & Donnelly, P. (1995). Inferring identity from DNA profile evidence. Proceedings of the National Academy of Sciences, USA, 92(25), 11741–11745. Baldus, D., & Cole, J. W. L. (1980). Statistical proof of discrimination. Colorado Springs, CO: Shepard’s/McGraw-Hill. Ball, E., Chadwick, D. W., & Basden, A. (2003). The implementation of a system for evaluating trust in a PKI environment. In O. Petrovic, M. Ksela, M. Fallebblock, & C. Kittl (Eds.), Trust in the network economy (pp. 263–279). Berlin: Springer. Ball, G. R., Kasiviswanathan, H., Srihari, S. 
N., & Narayanan, A. (2010). Analysis of line structure in handwritten documents using the Hough transform. In Proceedings of the SPIE 17th conference on document recognition and retrieval, San José, CA, January 2010, pp. DRR 1–10. Ball, G. R., & Srihari, S. N. (2009). Comparison of statistical models for writer verification. In Proceedings of the SPIE 16th conference on document recognition and retrieval, San José, CA, January 2009, pp. 7247OE 1–8.

Ball, G. R., Stittmeyer, R., & Srihari, S. N. (2010). Writer verification in historical documents. In Proceedings of the SPIE 17th conference on document recognition and retrieval, San José, CA, January 2010. Downloadable from http://www.cedar.buffalo.edu/papers/publications.html Ball, W. J. (1994). Using Virgil to analyse public policy arguments: A system based on Toulmin’s informal logic. Social Science Computer Review, 12(1), 26–37. Ballim, A., & Wilks, Y. (1991). Artificial believers: The ascription of belief. Hillsdale, NJ: Erlbaum. Ballim, A., By, T., Wilks, Y., & Liske, C. (2001). Modelling agent attitudes in legal reasoning. Computing and Informatics, 20(6), 581–624. Ballim, A., Wilks, Y., & Barnden, J. (1990). Belief ascription, metaphor, and intensional identification. Chapter 4 In S. L. Tsohatzidis (Ed.), Meanings and prototypes: Studies in lin- guistic categorization (pp. 91–131). London: Routledge, with a consolidated bibliography on pp. 558–581. Ballou, S. (2001). Wigs and the significance of one fiber. Chapter 2 In M. M. Houck (Ed.), Mute witnesses: Trace evidence analysis. London: Academic. Balsamo A., & Lo Piparo A. (2004). La prova ‘per sentito dire’. La testimonianza indiretta tra teoria e prassi applicativa. Milan: Giuffrè. Banko, M., Mittal, V. O., & Witbrock, M. J. (2000). Headline generation based on statistical trans- lation. In Proceedings of the 38th meeting of the Association for Computational Linguistics (ACL’2000), pp. 318–325. Barb, A. A. (1972). Cain’s murder-weapon and Samson’s jawbone of an ass. Journal of the Warburg and Courtauld Institutes (London),35, 386–389. Barber, H., & Kudenko, D. (2008). Generation of dilemma-based interactive narratives with a changeable story goal. In INTETAIN ’08: Proceedings of the 2nd international conference on INtelligent TEchnologies for Interactive EnterTAINment. Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (ICST), Brussels, Belgium. New York: ACM. Bargis, M. (1994). Le dichiarazioni di persone imputate in un procedimento connesso. Milan: Giuffrè. Barnden, J. A. (2001). Uncertain reasoning about agents’ beliefs and reasoning. Artificial Intelligence and Law, 9(2/3), 115–152. Barnett, V., & Lewis, T. (1994). Outliers in statistical data (3rd ed.). New York: Wiley. Baron, J. (1994). Nonconsequentialist decisions. With open peer commentary and the author’s response. Behavioral and Brain Sciences, 17(1), 1–42. Barron, J. (2004). In a futuristic house, speak clearly and carry a manual. Daily Telegraph, London, 28 October 2004, on p. 7 in The New York Times selection supplement. Barragán, J. (1989). Bargaining and uncertainty. In A. A. Martino (Ed.), Pre-proceedings of the third international conference on “Logica, informatica, diritto: Legal expert sys- tems”, Florence, 1989 (2 vols. + Appendix) (Vol. 1, pp. 49–64). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche. Barwise, J. (1993). Constraints, channels and the flow of information. In P. Aczel, D. Israel, Y. Katagiri, & S. Peters (Eds.), Situation theory and its applications (Vol. 3, pp. 3–27). (CSLI Lecture Notes, Vol. 37.) Stanford, CA: Center for the Study of Language and Information (CSLI)52. Barzilay, R., & Elhadad, M. (1999). Using lexical chains for text summarization. In I. Many & M. T. Maybury (Eds.), Advances in automatic text summarization (pp. 111–121). Cambridge, MA: The MIT Press. Basden, A., Ball, E., & Chadwick, D. W. (2001). 
Knowledge issues raised in modelling trust in a public key infrastructure. Expert Systems, 18(5), 233–249.

52 These are the Proceedings of the Third International Conference on Situation Theory and Its Applications, Oiso, Japan, November 1991. Cf. Aczel et al. (1993).

Basta, S., Giannotti, F., Manco, G., Pedreschi, D., & Spisanti, L. (2009). SNIPER: A data mining methodology for fiscal fraud detection. In Mathematics for Finance and Economy, special issue of ERCIM News, 78 (July), 27–28. Accessible at the webpage http://ercim-news.ercim.org/ of the European Research Consortium for Informatics and Mathematics. Batagelj, V., & Mrvar, A. (1998). Pajek: Program for large network analysis. Connections, 21(2), 47–57. Bates, J. (1992). Virtual reality, art, and entertainment. Presence: The Journal of Teleoperators and Virtual Environments, 1(1), 133–138. Bates, J., Loyall, A. B., & Reilly, W. S. (1992). Integrating reactivity, goals, and emotion in a broad agent. In Proceedings of the fourteenth annual conference of the cognitive science society. http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/oz/web/papers/CMU-CS-92-142.ps Bauer, E., & Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 35, 1–38. Bauer-Bernet, H. (1986). Temporal aspects of the formalization and computerization of law. In A. A. Martino & F. Socci Natali (Eds.), Automated analysis of legal texts: Logic, informatics, law (pp. 451–472). Amsterdam: North-Holland. Baumes, J., Goldberg, M., Hayvanovych, M., Magdon-Ismail, M., & Wallace, W. A. (2006). Finding hidden groups in a stream of communications. In Proceedings of the IEEE interna- tional conference on Intelligence and Security Informatics (ISI–2006), pp. 201–212. Baxendale, P. B. (1958). Machine-made index for technical literature: An experiment. IBM Journal of Research and Development, 2(4), 354–361. Bayles, M. D. (1990). Procedural justice: Allocating to individuals. Dordrecht, The Netherlands: Kluwer. Bayse, W. A., & Morris, C. G. (1987). FBI automation strategy: Development of AI applications for national investigative programs. Signal Magazine,May. Beatie, B. A. (1976). “Romances traditionales” and Spanish Traditional Ballads: Menéndez Pidal vs. Vladimir Propp. Journal of the Folklore Institute, 13(1), 37–55. (Indiana University). Beecher-Monas, E. (2008). Paradoxical validity determinations: A decade of antithetical approaches to admissibility of expert evidence. International Commentary on Evidence, 6(2), Article 2. http://www.bepress.com/ice/vol6/iss2/art2 Behrman, B. W., & Davey, S. L. (2001). Eyewitness identification in actual criminal cases: An archival analysis. Law and Human Behavior, 25, 475–491. Behrman, B. W., & Richards, R. E. (2005). Suspect/foil identification in actual crimes and in the laboratory: A reality monitoring analysis. Law and Human Behavior, 29, 279–301. Bekerian, D. A. (1993). In search of the typical eye-witness. American , 48, 574–576. Belfrage, H. (1995). Variability in forensic psychiatric decisions: Evidence for a positive crime preventive effect with mentally disordered violent offenders? Studies in Crime and Crime Prevention, 4(1), 119–123. Belis, M. (1973). On the causal structure of random processes. In R. J. Bogdan & I. Niiniluoto (Eds.), Logic, language, and probability (pp. 65–77). Dordrecht, The Netherlands: Reidel (now Spinger). Belis, M. (1995). Causalité, propension, probabilité. Intellectica, 1995/2, 21, 199–231. http://www. intellectica.org/archives/n21/21_11_Belis.pdf Belis, M., & Snow, P. (1998). An intuitive data structure for the representation and explanation of belief and evidentiary support. 
In Proceedings of the seventh international conference on Information Processing and Management of Uncertainty in knowledge-based systems (IPMU 1998), Paris, 6–10 July 1998. Paris: EDK, pp. 64–71. Bell, A., Swenson-Wright, J., & Tybjerg, K. (Eds.). (2008). Evidence. (Darwin College Lectures Series.) Cambridge: Cambridge University Press. Bell, B. E., & Loftus, E. F. (1988). Degree of detail of eyewitness testimony and mock juror judgments. Journal of Applied Social Psychology, 18, 1171–1192. Bell, B. E., & Loftus, E. F. (1989). Trivial persuasion in the courtroom: The power of (a few) minor details. Journal of Personality and Social Psychology, 56, 669–679.

Bellucci, E., & Zeleznikow, J. (2005). Developing negotiation decision support systems that sup- port mediators: A case study of the Family_Winner system. Artificial Intelligence and Law, 13(2), 233–271. Belnap, N., & Perloff, M. (1988). Seeing to it that: A canonical form for agentives. Theoria, 54, 175–199. Reprinted with corrections in Kyberg, H. E., Loui, R. P., & Carlson, G. N. (Eds.). (1990). Knowledge representation and defeasible reasoning (pp. 167–190). Dordrecht: Kluwer. Bem, D. J. (1966). Inducing belief in false confessions. Journal of Personality and Social Psychology, 3, 707–710. Ben-Amos, D. (1980) The concept of motif in folklore. In V. J. Newall (Ed.), Folklore studies in the twentieth century: Proceedings of the centenary conference of the folklore society (pp. 17–36). Royal Holloway College, 1978. Woodbridge, England: Brewer, Rowman and Littlefield. Bench-Capon, T. J. M. (1993a). In defence of rule based representations for legal knowledge based systems. In I. M. Carr (Ed.), Proceedings of the 4th national conference on law, computers and artificial intelligence, Exeter, England, 21–22 April 1993. Cf. Bench-Capon (1994). Bench-Capon, T. J. M. (1993b). Neural networks and open texture. In Proceedings of the fourth International Conference on Artificial Intelligence and Law (ICAIL’93).NewYork:ACM Press, pp. 292–297. Bench-Capon, T. J. M. (1994). In defence of rule based representations for legal knowledge based systems. Law, Computers and Artificial Intelligence, 3(1), 15–28. Cf. Bench-Capon (1993a). Bench-Capon, T. J. M. (1997). Argument in artificial intelligence and law. Artificial Intelligence and Law, 5, 249–261. Bench-Capon, T. J. M. (1998). Specification and implementation of Toulmin dialogue game. In J. C. Hage, T. Bench-Capon, A. Koers, C. de Vey Mestdagh, & C. Grutters (Eds.), Jurix 1998: Foundation for legal knowledge based systems (pp. 5–20). Nijmegen, The Netherlands: Gerard Noodt Institut. Bench-Capon, T. J. M. (2002). Agreeing to differ: Modelling persuasive dialogue between parties without a consensus about values. Informal Logic, 22(3), 231–245. Bench-Capon, T. J. M. (2003a). Try to see it my way: Modelling persuasion in legal discourse. Artificial Intelligence and Law, 11(4), 271–287. Bench-Capon, T. J. M. (2003b). Persuasion in practical argument using value based argumentation frameworks. Journal of Logic and Computation, 13(3), 429–448. http://www.csc.liv.ac.uk/~ tbc/publications/jcl03.pdf Bench-Capon, T. J. M., Coenen, F., & Leng, P. (2000). An experiment in discovering association rules in the legal domain. In Proceedings of the eleventh international workshop on Database and Expert Systems Applications (DEXA 2000), Greenwich, London, September 2000. New York: IEEE Computer Society, 2000, pp. 1056–1060. Bench-Capon, T. J. M., Doutre, S., & Dunne, P. E. (2007). Audiences in argumentation frame- works. Artificial Intelligence, 171(1), 42–71. Bench-Capon, T. J. M., & Dunne, P. E. (2005). Argumentation in AI and law: Editors’ introduction. (Special issue.) Artificial Intelligence and Law, 13, 1–8. Bench-Capon, T. J. M., & Dunne, P. E. (2007). Argumentation in artificial intelligence. Artificial Intelligence, 171, 619–641. Bench-Capon, T. J. M., Freeman, J. B., Hohmann, H., & Prakken, H. (2003). Computational models, argtumentation theories and legal practice. In C. Reed & T. J. Norman (Eds.), Argumentation machines: New frontiers in argument and computation (pp. 85–120). Dordrecht, The Netherlands: Kluwer. Bench-Capon, T. J. M., Geldard, T., & Leng, P.H. 
(2000). A method for the computational modelling of dialectical argument with dialogue games. Artificial Intelligence and Law, 8, 233–254. Bench-Capon, T. J. M., Lowes, D., & McEnery, A. M. (1991). Argument-based explanation of logic programs. Knowledge Based Systems, 4(3), 177–183. Bench-Capon, T. J. M., & Staniford, G. (1995). PLAID: Proactive legal assistance. In Proceedings of the fifth International Conference on Artificial Intelligence and Law (ICAIL’95), College Park, MD, May 1995, pp. 81–87.

Bench-Capon, T. J. M., & Visser P. R. S. (1997). Ontologies in legal information systems: The need for explicit specifications of domain conceptualizations. In Proceedings of the sixth International Conference on Artificial Intelligence and Law (ICAIL’97),NewYork:ACM Press, pp. 132–141. Benderly, B. L. (1997). Turning a blind eye to mad science. (Review of: R. Firstman & J. Talan, The Death of Innocents, Bantam.) The Washington Post, November 17, 1997, final edn., Section “Style”, p. C08. Benenson, I., & Torrens, P. M. (2004). Geosimulation: Object-based modeling of urban phenom- ena. Computers, Environment and Urban Systems, 28(1/2), 1–8. Benferhat, S., Cayrol, C., Dubois, D., Lang, J., & Prade, H. (1993). Inconsistency management and prioritized syntax-based entailment. In Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI’93), pp. 640–645. Benferhat, S., Dubois, D., & Prade, H. (2001). A computational model for belief change. In M. A. Williams & H. Rott (Eds.), Frontiers in belief revision (pp. 109–134). (Applied Logic Series, 22). Dordrecht: Kluwer. Benajmins, V. R., Casanovas, P., Breuker, J., & Gangemi, A. (Eds.). (2005). In Proceedings of law and the semantic web [2005]: Legal ontologies, methodologies, legal information retrieval, and applications. (Lecture Notes in Computer Science, Vol. 3369.) Berlin: Springer. Ben-Menahem, Y. (1990). The Inference to the best explanation. Erkenntnis, 33(3), 319–344. Bennett, B. (1994). Spatial reasoning with propositional logics. In J. Doyle, E. Sandewall, & P. Torasso (Eds.), Principles of Knowledge Representation and reasoning: Proceedings of the fourth international conference (KR94). San Francisco: Morgan Kaufmann. Bennett, K. P., & Campbell, C. (2000). Support vector machines: Hype or Hallelujah? SIGKDD Explorations, 2(2), 1–13. New York: ACM Press. Bennett, W. L., & Feldman, M. S. (1981). Reconstructing reality in the courtroom: Justice and judgement in American culture. New Brunswick, NJ: Rutgers University Press; London: Tavistock. Bennun, M. E. (1996). Computerizing criminal law: Problems of evidence, liability and mens rea. Information & Communications Technology Law, 5(1), 29–44. Bergslien, E., Bush, P., & Bush, M. (2006). Application of field portable Xray fluorescence (FPXRF) spectrometry in forensic and environmental geology: Theory and examples (abstract). In A. Ruffell (Ed.), Abstract book of geoscientists at crime scenes: First, inaugural meet- ing of the Geological Society of London, 20 December 2006 (pp. 17–20). London: Forensic Geoscience Group. http://www.geolsoc.org.uk/pdfs/FGtalks&abs_pro.pdf. Berlière, J.-M. (2005). L’Affaire Scheffer: une victoire de la science contre le crime? La pre- mière identification d’un assassin à l’aide de ses empreintes digitales (octobre 1902). Les Cahiers de la sécurité, 56(1), 349–360. Posted at http://www.inhes.interieur.gouv.fr/fichiers/ CS56BerliereINHES2005.pdf by the Institut national des hautes études de sécurité (INHES), France. Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The semantic web. Scientific American, 284(5), 34–43. Bernez, M. O. (1994). Anatomy and aesthetics: Gérard de Lairesse’s illustrations for Bidloo’s Anatomia Humani Corporis. In M. Baridon (Ed.), Interfaces: Image, texte, langage,5 (pp. 207–229). Dijon, France: Université de Bourgogne. Berry, M., & Browne, M. (2005). Understanding search engines: Mathematical modeling and text retrieval (Software, Environments, Tools). Philadelphia, PA: SIAM. Berry, M. W. (2003). 
Survey of text mining: Clustering, classification, and retrieval. Berlin: Springer. Bertin, J. (1983). Semiology of graphics: Diagrams, networks, maps. Madison, WI: University of Wisconsin Press. Bertino, E., Catania, B., & Wong, L. (1999). Finitely representable nested relations. Information Processing Letters, 70(4), 165–173. Besag, J. (1986). On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B, 48, 259–302.

Besnard, P., & Hunter, A. (2008). Elements of argumentation. Cambridge, MA: The MIT Press. Best, E., & Hall, J. (1992). The box calculus: A new causal algebra with multilabel communication. (Technical Report Series, 373.) Newcastle upon Tyne, England: University of Newcastle upon Tyne, Computing Laboratory. Bettini, C., Jajodia, S., & Wang, S. (2000). Time granularities in databases, data mining, and temporal reasoning. Berlin: Springer. Bettini, C., Wang, X. S., & Jajodia, S. (2002). Solving multi-granularity temporal constraint networks. Artificial Intelligence, 140(1/2), 107–152. Bevan, B. W. (1991). The search for graves. Geophysics, 56, 1310–1319. Bevel, T., & Gardner, R. M. (2008). Bloodstain pattern analysis, with an introduction to crime scene reconstruction (3rd ed.). (CRC Series in Practical Aspects of Criminal and Forensic Investigations.) Boca Raton, FL: CRC Press (of Taylor & Francis). The 1st edn. was of 1997, and the 2nd edn. was of 2002. Bex, F., Bench-Capon, T., & Atkinson, K. (2009). Did he jump or was he pushed? Abductive prac- tical reasoning. Artificial Intelligence and Law, 17(2), 79–99. http://www.computing.dundee. ac.uk/staff/florisbex/Papers/AILaw09.pdf Bex, F. J. (2011). Arguments, stories and criminal evidence: A formal hybrid theory.(Lawand Philosophy Series, 92.) Dordrecht, The Netherlands: Springer. Bex, F. J., & Bench-Capon, T. (2010). Persuasive stories for multi-agent argumentation. In Proceedings of the 2010 AAAI fall symposium on computatonal narratives. AAAI Technical Report FS-10-04. (AAAI Fall Symposium Series.) Menlo Park, CA: AAAI Press, Menlo Park, CA, pp. 4–5 (sic). http://www.aaai.org/ocs/index.php/FSS/FSS10/paper/view/2174/2840 http:// www.computing.dundee.ac.uk/staff/florisbex/Papers/AAAI-TBC10.pdf Bex, F. J., & Prakken, H. (2004). Reinterpreting arguments in dialogue: An application to evidential reasoning. In T. F. Gordon (Ed.), Legal knowledge and information systems. JURIX 2004: The seventeenth annual conference (pp. 119–129). Amsterdam: IOS Press. Bex, F. J., Prakken, H., Reed, C., & Walton, D. N. (2003). Towards a formal account of reasoning about evidence: Argumentation schemes and generalisations. Artificial Intelligence and Law, 12, 125–165. http://www.computing.dundee.ac.uk/staff/florisbex/Papers/AILaw03.pdf Bex, F. J., Prakken, H., & Verheij, B. (2006). Anchored narratives in reasoning about evidence. In T. M. van Engers (Ed.), Legal knowledge and information systems. JURIX 2006: The nineteenth annual conference (pp. 11–20). Amsterdam: IOS Press. Bex, F. J., van den Braak, S. W., van Oostendorp, H., Prakken, H., Verheij, H. B., & Vreeswijk, G. A. W. (2007). Sense-making software for crime investigation: How to combine stories and arguments? Law, Probability & Risk, 6, 145–168. http://www.computing.dundee.ac.uk/staff/ florisbex/Papers/LPR07.pdf The same article with diagrams in colour: http://www.cs.uu.nl/ research/projects/evidence/publications/lpr07submitted.pdf Bex, F. J., van Koppen, P. J., Prakken, H., & Verheij, B. (2010). A hybrid formal theory of arguments, stories and criminal evidence. Artificial Intelligence and Law, 18(2), 123–152. http://www.cs.uu.nl/groups/IS/archive/henry/Bexetal10.pdf http://www.computing.dundee.ac. uk/staff/florisbex/Papers/AILaw10.pdf Bex, F. J., & Walton, D. (2010). Burdens and standards of proof for inference to the best expla- nation. In R. Winkels (Ed.), Legal knowledge and information systems. JURIX 2010: The 23rd annual conference (pp. 37–46). 
(Frontiers in Artificial Intelligence and Applications, 223.) Amsterdam: IOS Press. Bie, R., Jin, X., Chen, C., Xu, C., & Huang, R. (2007). Meta learning intrusion detection in real time network. In Proceedings of the 17th international conference on artificial neural networks, Porto, Portugal. Berlin: Springer, pp. 809–816. Biermann, T. W., & Grieve, M. C. (1996). A computerized data base of mail order garments: A contribution toward estimating the frequency of fibre types found in clothing. Part 1: The system and its operation. Part 2: The content of the data bank and its statistical evaluation. Forensic Science International, 77(1/2), 75–92. Amsterdam: Elsevier.

Binder, D. A., & Bergman, P. (1984). Fact investigation: From hypothesis to proof (American Casebook Series.). St Paul, MN: West Publ. Binmore, K. (1985). Modelling rational players, Part 1 (ICERD Discussion Paper). London: London School of Economics. Binsted, K., Bergen, B., Coulson, S., Nijholt, A., Stock, O., Strapparava, C., et al. (2006). Computational humor. IEEE Intelligent Systems, 21(2), 59–69. http://doc.utwente.nl/66729/ Biondani, P. (2010, June 10). Giustizia Bocciata. (Subheadline: Prescrizione breve. Garanzie fasulle. Ricorsi infiniti. Formalismi. Condanne non eseguite. Un rapporto europeo indica i veri problemi dei nostri tribunali.) L’espresso, 56(23), 73–74. Birkhoff, G. (1967). Lattice theory (3rd ed.) (Colloquium Publications, 25). Providence, RI: American Mathematical Society. Reprinted 1984. Bistarelli, S., Santini, F., & Vaccarelli, A. (2006). An asymmetric fingerprint matching algorithms for Java CardTM. Pattern Analysis Applications, 9, 359–376. doi://10.1007/s10044-006-0048-4 Bivens, A., Gao, L., Hulber, M. F., & Szymanski, B. (1999). Agent-based network monitoring. In Proceedings of the autonomous agents99 conference, workshop 1, agent based high perfor- mance computing: Problem solving applications and practical deployment, Seattle, WA, May 1999, pp. 41–53. Bivens, A., Palagiri, C., Smith, R., Szymanski, B., & Embrechts, M. (2002). Network-based intru- sion detection using neural networks. In Proceedings of intelligent engineering systems through Artificial Neural Networks ANNIE-2002, St. Louis, MO, Vol. 12. New York: ASME Press, 2002, pp. 579–584. Black, H. C. (1990). Black’s law dictionary. St. Paul, MN: West Publishing Company. Black, J. B., & Wilensky, R. (1979). An evaluation of story grammars. Cognitive Science, 3, 213–229. Blackman, S. J. (1988). Expert systems in case-based law: The rule against hearsay. LL.M. thesis, Faculty of Law, University of British Columbia, Vancouver, BC. Blair, D., & Meyer, T. (1997). Tools for an interactive virtual cinema. In R. Trappl & P. Petta (Eds.), Creating personalities for synthetic actors: Towards autonomous personality agents. Heidelberg: Springer. Blair, J. P. (2005). A test of the unusual false confession perspective using cases of proven false confessions. Criminal Law Bulletin, 41, 127–144. Bleay, S. (2009). Fingerprint development and imaging: Fundamental research to operational implementation. PowerPoint presentation, 1 July 2009. Home Office Scientific Development Branch, Sandridge, England. Posted at the website of the Higher Education Academy, York Science Park, York, England. Retrieved in 2010 at http://www.heacademy.ac.uk/assets/ps/ documents/FORREST/2009/presentations/k2_bleay.pdf Block, A. (1994). Space, time, and organised crime. New Brunswick, NJ: Transaction. Bloy, D. (1996). Criminal law (2nd ed.) (Lecture Notes Series). London: Cavendish. Blueschke, A., & Lacis, A. (1996). Examination of line crossings by low KV scanning electron microscopy (SEM) using photographic stereoscopic pairs. Journal of Forensic Science, 41(1), 80–85. Boba, R. (2003). Problem analysis in policing. Washington, DC: Police Foundation. http://www. policefoundation.org/pdf/problemanalysisinpolicing.pdf Boba, R. (2005). Crime analysis and crime mapping. Thousand Oaks, CA & London: Sage. Bobrow, D., & Winograd, T. (1977). An overview of KRL, a knowledge representation language. Cognitive Science, 1(1), 3–46. Bodard, F., Hella, M., Poullet, Y., & Stenne, P. (1986). A prototype ADP system to assist judicial decision making. In A. A. 
Martino, F. Socci Natali, & S. Binazzi (Eds.), Automated analysis of legal texts, logic, informatics, law (pp. 187–210). Amsterdam: North-Holland. Boddington, A., Garland, A. N., & Janaway, R. C. (Eds.). (1987). Death, decay and reconstruction: Approaches to archaeology and forensic science. Manchester: Manchester University Press. Bodenhausen, G. V. (1988). Stereotypic biases in social decision making and memory: Testing process models of stereotype use. Journal of Personality and Social Psychology, 55, 726–737.

Bodziak, W. J. (2000). Footwear impression evidence. Boca Raton, FL: CRC Press. Bodziak, W. J. (2005a). Forensic tire impression and tire track evidence. Chapter 18 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Bodziak, W. J. (2005b). Forensic footwear evidence. Chapter 19 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Boer, A., van Engers, T., & Winkels, R. (2003). Using ontologies for comparing and harmonizing legislation. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003 (pp. 60–69). New York: ACM Press. Bohan, T. L. (1991). Computer-aided accident reconstruction: Its role in court (SAE Technical Paper Series (12 p.)). Warrendale, PA: Society of Automotive Engineers (SAE). Bolding, P. O. (1960). Aspects of the burden of proof. Scandinavian Studies in Law, 4, 9–28. Bolelli, T. (1993). Figlio mio figlio di cane. In his L’italiano e gli italiani: Cento stravaganze linguistiche (pp. 126–128). Vicenza, Italy: Neri Pozza Editore. (Originally, in La Stampa Turin, 11 January 1991) Bolettieri, P., Esuli, A., Falchi, F., Lucchese, C., Perego, R., & Rabitti, F. (2009). Enabling content- based image retrieval in very large digital libraries. In Proceedings of the second workshop on very large digital libraries, 2 October 2009, Corfu, Greece. Pisa, Italy: DELOS, an Association for Digital Libraries, pp. 43–50. Bologna, J., & Lindquist, R. J. (1995). Fraud auditing and forensic accounting: New tools and techniques (2nd ed.). New York: Wiley. Bond, C., Solon, M., & Harper, P. (1999). The expert witness in court: A practical guide. Crayford, Kent: Shaw & Sons. Bondarenko, A., Dung, P. M., Kowalski, R., & Toni, F. (1997). An abstract argumentation-theoretic approach to default reasoning. Artificial Intelligence, 93(1/2), 63–101. BonJour, L. (1998). The elements of coherentism. In L. M. Alcoff (Ed.), Epistemology: The big questions (pp. 210–231). Oxford: Blackwell.(page numbers are referred to in the citation as in Alcoff.) (Originally, In: BonJour, L. (1985). Structure of empirical knowledge (pp. 87–110). Cambridge, MA: Harvard University Press) Bookspan, S., Gravel, A. J., & Corley, J. (2002). Site history: The first tool of the environ- mental forensic team. Chapter 2 In B. L. Murphy & R. D. Morrison (Eds.), Introduction to environmental forensics (pp. 19–42). San Diego, CA & London: Academic. Boone, K. B. (Ed.). (2007). Assessment of feigned cognitive impairment: A neuropsychological perspective. New York: Guilford Press. Borchard, E. M. (1932). Convicting the innocent: Errors of criminal justice.GardenCity,NY: Garden City Publishing Company, Inc.; New Haven, CT: Yale University Press. Borges, F., Borges, R., & Bourcier, D. (2002). A connectionist model to justify the legal reasoning of the judge. In Proceedings of fifteenth international conference on legal knowledge based system. Amsterdam: IOS Publications, pp. 113–122. Borges, F., Borges, R., & Bourcier, D. (2003). Artificial neural networks and legal categoriza- tion. In Proceedings of sixteenth international conference on legal knowledge based system. Amsterdam: IOS Publications, pp. 11–20. Borgulya, I. (1999). Two examples of decision support in the law. 
Artificial Intelligence and Law, 7(2/3), 303–321. Bourcier, D. (1995). Une approche sémantique de l’argumentation juridique. Revue L’année sociologique. Paris: PUF (June). Bowers, M. C. (2002). Forensic dentistry: A field investigator’s handbook. London: Academic. Boykov, Y., Veksler, O., & Zabin, R. (1998). Markov random fields with efficient approximations. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), Santa Barbara, CA, 23–25 June 1998. New York: IEEE Computer Society, pp. 648–655.

Boykov, Y., Veksler, O., & Zabin, R. (2001). Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE PAMI), 23, 1222–1239. Brace, N., Pike, G., Kemp, R., Tyrner, J., & Bennet, P. (2006). Does the presentation of multiple facial composites improve suspect identification? Applied Cognitive Psychology, 20, 213–226. Bradac, J. J., Hemphill, M. R., & Tardy, C. H. (1981). Language style on trial: Effects of ‘powerful’ and ‘powerless’ speech upon judgements of victims and villains. Western Journal of Speech Communication, 45, 327–341. Bradfield, A. L., & Wells, G. L. (2000). The perceived validity of eyewitness identification testimony: A test of the five Biggers criteria. Law and Human Behavior, 24, 581–594. Bradfield, A. L., Wells, G. L, & Olson, E. A. (2002). The damaging effect of confirming feedback on the relation between eyewitness certainty and identification accuracy. Journal of Applied Psychology, 87, 112–120. Brady, R. (Ed.). (2003). Relevant logics and their Rivals, II. Aldershot: Ashgate. Vol. 1 is Routley et al. (1983). Brainerd, C. J., & Reyna, V. F. (2004). Fuzzy-trace theory and memory development. Developmental Review, 24, 396–439. Branch, J., Bivens, A., Chan, C.-Y., Lee, T.-K., & Szymanski, B. (2002). Denial of service intru- sion detection using time dependent deterministic finite automata. In Proceedings of research conference. Troy, NY, October 2002. Brandenburger, A., & Dekel, E. (1987). Rationalizability and correlated equilibria. Econometrica, 55, 1391–1402. Brandes, U., Kenis, P., Raab, J., Schneider, V., & Wagner, D. (1999). Explorations into the visualization of policy networks. Journal of Theoretical Politics, 11(1), 75–106. Brandes, U., Raab, J., & Wagner, D. (2001). Exploratory network visualization: Simultaneous display of actor status and connections. Journal of Social Structure, 2(4). http://www.cmu.edu/ joss/content/articles/volume2/BrandesRaabWagner.html Brann, N. L. (1981). The Abbot Trithemius (1462–1516): The renaissance of monastic humanism. Leiden, The Netherlands: Brill. Brann, N. L. (1999). Trithemius and magical theology: A chapter in the controversy over occult studies in early modern Europe. Albany, NY: SUNY Press. Brann, N. L. (2006). Trithemius, Johannes. In W. J. Hanegraaff with A. Faivre, R. van den Broek, & J.-P. Brach (Eds.), Dictionary of Gnosis and Western esotericism (pp. 1135–1139). Leiden, The Netherlands: Brill. Branting, K., Callaway, C., Mott, B., & Lester, J. (1999). Integrating discourse and domain knowl- edge for document drafting. In Proceedings of seventh international conference on artificial intelligence and law. New York: ACM Press, pp. 214–220. Branting, K. L. (1994). A computational model of ratio decidendi. Artificial Intelligence and Law, 2, 1–31. Bratman, M. (1987). Intention, plans and practical reason. Cambridge, MA: Harvard University Press. Breeze, A. (1992). Cain’s jawbone, Ireland, and the prose Solomon and Saturn. Notes and Queries, 39(4), 433–436. (Notes and Queries Vol. 237, new series, Oxford University Press.) Breiger, R. L. (2004). The analysis of social networks. In M. Hardy & A. Bryman (Eds.), Handbook of data analysis (pp. 505–526). London: Sage. Breiman, L. (1996). Bagging predictors. Machine Learning, 24, 123–140. Breiman L., Friedman J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regres- sion trees. Belmont, CA: Wadsworth; New York: Chapman and Hall; San Mateo, CA: Morgan Kaufmann. Brenner, J. C. (2000). 
Forensic science glossary. Boca Raton, FL: CRC Press. Bressan, S. (Ed.). (2003). Efficiency and effectiveness of XML tools and techniques [EEXTT] and data integration over the web: Revised papers from the VLDB workshop, at the 28th very large data bases international conference, Hong Kong, China, 2002. Berlin: Springer.

Breuker, J., Elhag, A., Petkov, E., & Winkels, R. (2002). Ontologies for legal information serving and knowledge management. In Proceedings of Jurix 2002: 15th annual conference on legal knowledge and information systems. Amsterdam, The Netherlands: IOS Press, pp. 73–82. Breuker, J., Valente, A., & Winkels, R. (2005). Use and reuse of legal ontologies in knowledge engineering and information management. In V. R. Benajmins, P. Casanovas, J. Breuker, & A. Gangemi (Eds.), Proceedings of law and the semantic web [2005]: Legal ontologies, methodologies, legal information retrieval, and applications (pp. 36–64). (Lecture Notes in Computer Science, Vol. 3369.) Berlin: Springer. Brewka, G., Prakken, H., & Vreeswijk, G. (2003). Special issue on computational dialectics: An Introduction. (Special issue.) Journal of Logic and Computation, 13, 317–318. Brigham, J. C. (1981). The accuracy of eyewitness evidence: How do attorneys see it? The Florida Bar Journal, November, 714–721. Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. In WWW 1998: Proceedings of the seventh international conference on world wide web, pp. 107–117. Bringsjord, S., & Ferrucci, D. A. (2000). Artificial intelligence and literary creativity.Mahwah, NJ: Erlbaum. [On the BRUTUS project.] Brigsjord, S., Shilliday, A., Taylor, J., Clark, M., & Khemlani, S. (2006). Slate: An argument- centered intelligent assistant to professional reasoners. At the Sixth International Workshop on Computational Models of Natural Argument, held with ECAI’06 Riva del Garda, Italy, August 2006. Brislawn, C. M., Bradley, J. N., Onyshczak, R. J., & Hopper, T. (1996). The FBI compression standard for digitized fingerprint images. In Proceedings of the international society for optical engineering, Denver, CO, pp. 344–355. Brkic, J. (1985). Legal reasoning: Semantic and logical analysis. New York: Peter Lang. Bromby, M. (2002). To be taken at face value? Computerised identification. Information & Communication Technology Law Journal, 11(1), 63–73. Bromby, M. (2003, February 28). At face value? The use of facial mapping and CCTV image analysis for identification. New Law Journal, 153(7069), 302–304. Bromby, M. (2010). Identification, trust and privacy: How biometrics can aid certification of digital signatures. International Review of Law, Computers and Technology, 24(1), 1–9. Bromby, M. C., & Hall, M. J. J. (2002). The development and rapid evaluation of the knowledge model of ADVOKATE: An advisory system to assess the credibility of eyewitness testimony. In T. Bench-Capon, A. Daskalopulu, & R. Winkels (Eds.), Legal knowledge and informa- tion systems, JURIX 2002: The fifteenth annual conference (pp. 143–152). Amsterdam: IOS Publications. Bromby, M., MacMillan, M., & McKellar, P. (2003). A common-KADS representation for a knowledge based system to evaluate eyewitness identification. International Review of Law Computers and Technology, 17(1), 99–108. Bromby, M., MacMillan, M., & McKellar, P. (2007). An examination of criminal jury directions in relation to eyewitness identification in commonwealth jurisdictions. Common Law World Review, 36(4), 303–336. Brooks, K. M. (1996). The theory and implementation of one model for computational narrative. InW.Hall&T.D.C.Little(Eds.),ACM multimedia ’96, Boston, MA (pp. 317–328). New York: The Association of Computing Machinery. Brooks, K. M. (1999). Metalinear cinematic narrative: Theory, process, and tool. PhD dissertation in Media, Arts and Sciences (advisor: G. Davenport). 
Cambridge, MA: Program in Media Arts and Sciences, School of Architecture and Planning, Massachusetts Institute of Technology. http://xenia.media.mit.edu/~brooks/dissertation.html Brooks, K. M. (2002). Nonlinear narrative structures for interactive TV. In M. Damásio (Ed.), Interactive television authoring and production 2002 (pp. 43–56). Lisbon, Portugal: Universidade Lusófona de Humanidades e Tecnologias. Brown, C. T. (1989a, June). Relating Petri nets to formulae of linear logic (Internal report, CSR-304-89). Edinburgh, Scotland: Department of Computer Science, University of Edinburgh.

Brown, C. T. (1989b, November). Petri nets as quantales. Internal report, CSR-314-89. Edinburgh, Scotland: Department of Computer Science, University of Edinburgh. Browne, M., & Berry, M. W. (2005). Email surveillance using nonnegative matrix factorization. In Proceedings of the SIAM international conference on data mining, SIAM workshop on link analysis, counterterrorism and security. Philadelphia, PA: SIAM. Bruce, V., & Hancock, P. (2002). CRIME-VUs: Combined recall images from multiple experts and viewpoints. Scotland: Department of Psychology, University of Stirling. http://www.stir.ac.uk/ Departments/HumanSciences/Psychology/crimevus/index.htm Bruce, V., Henderson, Z., Greenwood, K., Hancock, P. J. B., Burton, A. M., & Miller, P. I. (1999). Verification of face identities from images captured on video. Journal of Experimental Psychology: Applied, 5(4), 339–360. Bruce, V., Ness, H., Hancock, P. J. B., Newman, C., & Rarity, J. (2002). Four heads are better than one: Combining face composites yields improvements in face likeness. Journal of Applied Psychology, 87(5), 894–902. Bruce, V., & Young, A. W. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327. Brüninghaus, S., & Ashley, K. (2001). Improving the representation of legal case texts with infor- mation extraction methods. In Proceedings of the 8th International Conference on Artificial Intelligence and Law (ICAIL’01), St. Louis, Missouri. New York: ACM Press, pp. 42–51. Brüninghaus, S., & Ashley, K. (2003). Predicting outcomes of casebased legal arguments. In Proceedings of the 9th International Conference on Artificial Intelligence and Law (ICAIL’03), Edinburgh, Scotland. New York: ACM Press, pp. 233–242. Bruyer, R. (Ed.). (1986). The of face perception and facial expression. (Neuropsychology and Neurolinguistics Series.) Lillington, NC: Psychology Press. Bryan, M. (1997). SGML and HTML explained (2nd ed.). Harlow, Essex: Addison Wesley Longman. Bryant, V. M., Jr., Jones, J. G., & Mildenhall, D. C. (1996). Studies in forensic palynology. Chapter 23G In J. Jansonius, & D. C. McGregor (Eds.), Palynology: Principles and applications (pp. 957–959). American Association of Stratigraphic Palynologists Foundation Vol. 3. Bryant, V. M., Jr., & Mildenhall, D. C. (1996). Forensic palynology in the United States of America. Palynology, 14, 193–208. Bryson, J., & Thórisson, K. R. (2000). Dragons, bats and evil knights: A three-layer design approach to character based creative play. Virtual Reality, 5(2), 57–71. Buber, M. (1947). Tales of the Hasidim: The Early Masters (Trans.byO.Marxfromthe German: Die Erzahlungen der Chassidim). London: Thames and Hudson, 2 vols., 1956–1961; New York: Schoken, 1947, 1961, 1975, 1991. Budescu, D. V., & Wallsten, T. S. (1985). Consistency in interpretation of probabilistic phrases. Organizational Behaviour and Human Decision Processes, 36, 391–405. Bugental, D. B., Shennum, W., Frank, M., & Ekman, P. (2000). “True lies”: Children’s abuse history and power attributions as influences on deception detection. In V. Manusov & J. H. Harvey (Eds.), Attribution, communication behavior, and close relationships (pp. 248–265). Cambridge: Cambridge University Press. Bull, R. (1979). The influence of stereotypes on person identification. In D. P. Farrington, K. Hawkins, & S. M. Lloyd-Bostock (Eds.), Psychology, law and legal processes (pp. 184–194). London: Macmillan. Bull, R., & Carson, D. (1995). Handbook of psychology in legal contexts. Chichester: Wiley. Burgoon, J. K., & Buller, D. B. 
(1994). Interpersonal deception: IV. Effects of deceit on perceived communication and nonverbal behavior dynamics. Journal of Nonverbal Behavior, 18, 155–184. Burnett, D. G. (2007). Trying leviathan: The nineteenth-century New York court case that put the whale on trial and challenged the order of nature. Princeton, NJ: Princeton University Press. Burt, R. S. (1980). Models of social structure. Annual Review of Sociology, 6, 79–141.

Burton, A. M., Bruce, V., & Hancock, P. J. B. (1999). From pixels to people: A model of familiar face recognition. Cognitive Science, 23(1), 1–31. Burton, A. M., Miller, P., Bruce, V., Hancock, P. J. B., & Henderson, Z. (2001). Human and automatic face recognition: A comparison across image formats. Vision Research, 41, 3185–3195. Butler, J. M. (2001). Forensic DNA typing: Biology and technology behind STR markers. London: Academic. BVA. (1979). Veterinary surgeons acting as witnesses in RSPCA prosecutions. London: BVA Publications (British Veterinary Association). Byrne, M. D. (1995). The convergence of explanatory coherence and the story model: A case study in juror decision. In J. D. Moore & J. F. Lehman (Eds.), Proceedings of the 17th annual conference of the cognitive science society (pp. 539–543). Hillsdale, NJ: Lawrence Erlbaum. Caballero, J., Poosankam, P., Kreibich, C., & Song, S. X. (2009). Dispatcher: Enabling active botnet infiltration using automatic protocol reverse-engineering. In E. Al-Shaer, S. Jha, & A. D. Keromytis (Eds.), Proceedings of the 2009 [i.e., 16th] ACM conference on Computer and Communications Security (CCS 2009), Chicago, IL, November 9–13, 2009 (pp. 621–634). New York: ACM Press. Cabras, C. (1996). Un mostro di carta. In C. Cabras (Ed.), Psicologia della prova (pp. 233–258). Milan: Giuffrè. Caldwell, C., & Johnston, V. S. (1991). Tracking a criminal suspect through ‘face-space’ with a genetic algorithm. In R. Belew & L. Booker (Eds.), Proceedings of the fourth international conference on genetic algorithms (pp. 416–421). San Mateo, CA: Morgan Kaufmann. Callan, R. (1999). The essence of neural networks. Hemel Hempstead: Prentice Hall Europe. Callaway, C. (2000). Narrative prose generation. Ph.D. thesis, North Carolina State University. http://tcc.itc.it/people/callaway/pubs.html Callaway, C. B., & Lester, J. C. (2001). Narrative prose generation. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI’2001), Seattle, WA, 2001, pp. 1241–1248. Callaway, C. B., & Lester, J. C. (2002). Narrative prose generation. Artificial Intelligence, 139(2), 213–252. Callen, C. R. (2002). Othello could not optimize: Economics, hearsay, and less adversary systems. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 437–453). (Studies in Fuzziness and Soft Computing, Vol. 94). Heidelberg: Physica-Verlag. Calzolari, N., Monachini, M., Quochi, V., Socia, C., & Toral, A. (2010). Lexicons, terminolo- gies, ontologies: Reflections from experiences in resource construction. In N. Dershowitz & E. Nissan (Eds.), Language, culture, computation: Essays in honour of Yaacov Choueka Vol. 2: Tools for text and language, and the cultural dimension (in press). Berlin: Springer. Caminada, M., Doutre, S., Modgil, S., Prakken, H., & Vreeswijk, G. A. W. (2004). Implementations of argument-based inference. In J. Fox (Ed.), Review of argumentation technology: State of the art, technical and user requirements (pp. 2–13). ASPIC Consortium. Campbell, C., & Ying, Y. (2011). Learning with support vector machines. Synthesis Lectures on Artificial Intelligence and Machine Learning, 5(1), 1–95. Published online in .pdf in February 2011 by Morgan and Claypool in the United States.53 doi://10.2200/S00324ED1V01Y201102AIM010 Camptepe, A., Goldberg, M., Magdon-Ismail, M., & Krishnamoorthy, M. (2005). Detecting con- versing groups of chatters: A model, algorithms and tests. 
In Proceedings of the IADIS international conference on applied computing 2005, pp. 145–157.

53 See http://www.morganclaypool.com/doi/abs/10.2200/S00324ED1V01Y201102AIM010 Until early 2011, there were five issues available, published between 2007 and February 2011.

Camurri, A., & Ferrentino, P. (1999). Interactive environments for music and multimedia. Multimedia Systems, 7(1), 32–47. Canter, D. (2000). Offender profiling and criminal differentiation. Legal and Criminological Psychology, 5, 23–46. Capobianco, M. F., & Molluzzo, J. C. (1979/80). The strength of a graph and its application to organizational structure. Social Networks, 2, 275–284. Capstick, P. H. (1998). Warrior. New York: St. Martin’s Press. Caputo, D., & Dunning, D. (2006). Distinguishing accurate identifications from erroneous ones: Post dictive indicators of eyewitness accuracy. In R. C. L. Lindsay, D. F. Ross, J. D. Read, & M. P. Toglia (Eds.), Handbook of eyewitness psychology: Memory for people (pp. 427–451). Mahwah, NJ: Lawrence Erlbaum Associates. Carbogim, D., Robertson, D., & Lee, J. (2000). Argument-based applications to knowledge engineering. The Knowledge Engineering Review, 15(2), 119–149. Carbonell, J. (1979). Subjective understanding: Computer models of belief systems.Ph.D.the- sis, Technical Report YALE/DCS/tr150. Computer Science Department, Yale University, New Haven, CT. Carbonell, J. (1981). POLITICS; Micro POLITICS. Chapters 11 and 12 In R. G. Schank & C. K. Riesbeck (Eds.), Inside computer understanding: Five programs plus miniatures (pp. 259–307 and 308–317). Hillsdale, NJ: Erlbaum. Carbonell, J. G., Jr. (1978). POLITICS: Automated ideological reasoning. Cognitive Science, 2(1), 27–51. Carenini, G., Grasso, F., & Reed, C. (Eds.). (2002). Proceedings of the ECAI-2002 workshop on computational models of natural argument,atECAI 2002, Lyon, France. Carenini, G., & Moore, J. (1999). Tailoring evaluative arguments to user’s preferences. In Proceedings of the seventh international conference on User Modeling (UM-99),Banff, Canada. Carenini, G., & Moore, J. (2001). An empirical study of the influence of user tailoring on evaluative argument effectiveness. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI 2001), Seattle, WA. Carmo, J., & Jones, A. (1996). A new approach to contrary-to-duty obligations. In D. Nute (Ed.), Defeasible deontic logic (pp. 317–344). (Synthese Library, 263.) Dordrecht: Kluwer. Carmo, J., & Jones, A. (2002). Deontic logic and contrary-to-duties. In D. Gabbay & F. Guenthner (Eds.), Handbook of philosophical logic (Vol. 8, 2nd ed., pp. 265–343). Dordrecht, The Netherlands: Kluwer. Carofiglio, V., & de Rosis, F. (2001a). Ascribing and weighting beliefs in deceptive informa- tion exchanges. In M. Bauer, P. J. Gmytrasiewicz, & J. Vassileva (Eds.), User modeling 2001 (pp. 222–224). (Springer Lecture Notes in Artificial Intelligence, 2109). Berlin: Springer. Carofiglio, V., & de Rosis, F. (2001b). Exploiting uncertainty and incomplete knowledge in decep- tive argumentation. In Computational science, ICCS 2001 (pp. 1019–1028). (Lecture Notes in Computer Science, 2073). Berlin: Springer. Carofiglio, V., de Rosis, F., & Grassano, R. (2001). An interactive system for generating arguments in deceptive communication. In F. Esposito (Ed.), Proceedings of AI∗IA 2001: Advances in artificial intelligence (pp. 255–266). (Springer Lecture Notes in Artificial Intelligence, 2175). Berlin: Springer. Carr, C. S. (2003). Using computer supported argument visualization to teach legal argumentation. In P. A. Kirschner, S. J. Buckingham Shum, & C. S. Carr (Eds.), Visualizing argumentation: Software tools for collaborative and educational sense-making (pp. 75–96). London: Springer. Carr, D. (2008). 
Narrative explanation and its malcontents. History and Theory,54 47, 19–30.

54 The journal History and Theory is published in Middletown, Connecticut by Wesleyan University, and is distributed in New York & Chichester, West Sussex, England, by John Wiley & Sons.

Carrier, B. (2005). File system forensic analysis. Upper Saddle River, NJ: Addison-Wesley Professional. Carrier, B., & Spafford, E. (2004). Defining event reconstruction of digital crime scenes. Journal of Forensic Sciences, 49(6), 1291–1298. Carroll, G., & Charniak, E. (1991). A probabilistic analysis of marker-passing techniques for plan recognition. Technical Report CS-91-44, Computer Science Department. Providence, RI: Brown University. Carter, A. L. (2001a). The directional analysis of bloodstain patterns theory and experimental validation. Canadian Society for Forensic Science Journal, 34(4), 173–189. Carter, A. L. (2001b). Carter’s compendium for bloodstain analysis with computers: Directional analysis of bloodstain patterns. Ottawa, ON: Forensic Computing of Ottawa Inc. BackTrack Analysis page (electronic book provided with BackTrack Suite). Carter, A. L., Illes, M., Maloney, K., Yamashita, A. B., Allen, B., Brown, B., et al. (2005). Further Validation of the backtrackTM computer program for bloodstain pattern analysis: Precision and accuracy. IABPA News,55 21(3), 15–22. Carter, D. L. (2004). Law enforcement intelligence: A guide for state, local, and tribal law enforcement agencies. Washington, DC: Office of Community Oriented Policing Services, U.S. Department of Justice. http://www.cops.usdoj.gov/files/ric/Publications/leintelguide.pdf Caruso, S. (2001). Una sorta di “confronto all’americana” ante litteram nel bìos di S. Elia Speleota da Reggio (BHG 581). In Miscellanea di studi in memoria di Cataldo Roccaro,spe- cial issue of Pan: Studi del Dipartimento di Civiltà Euro-Mediterranee e di Studi Classici, Cristiani, Bizantini, Medievali, Umanistici, 18/19. Palermo, Sicily: Università degli Studi di Palermo. Posted at the journal’s website at: http://www.unipa.it/dicem/html/pubblicazioni/ pan2001/pan10-2001.pdf Casalinuovo, I. A., Di Pierro, D., Coletta, M., & Di Francesco, P. (2006). Application of electronic noses for disease diagnosis and food spoilage detection. Sensors, 6, 1428–1439. Casas-Garriga, G. (2003). Discovering unbounded episodes in sequential data. In Proceedings of the Seventh European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD 2003), pp. 83–94. Casey, E. (2000). Digital evidence and computer crime: Forensic science, computers, and the internet. London: Academic. New edn., 2004. Casey, E. (Ed.). (2001). Handbook of computer crime investigation: Forensic tools and technology. London: Academic. Cassel, J., & Ryokai, K. (2001). Making space for voice: Technologiesto support children’s fantasy and storytelling. Personal and Ubiquitous Computing, 5, 169–190. Cassell, J., Sullivan, J., Prevost, S., & Churchill, E. (Eds.). 2000. Embodied conversational characters. Cambridge, MA: MIT Press. Cassinis, R., Morelli, L. M., & Nissan, E. (2007). Emulation of human feelings and behaviours in an animated artwork. International Journal on Artificial Intelligence Tools, 16(2), 291–375. Full-page contents of the article on p. 158. Castelfranchi, C., & Falcone, R. (1998). Towards a theory of delegation for agent-based systems. Robotics and Autonomous Systems, 24, 141–157. Castelfranchi, C., & Falcone, R. (2000). Trust and control: A dialectic link. Applied Artificial Intelligence, 14, 799–823. Castelfranchi, C., & Falcone, R. (2010). Trust theory: A socio-cognitive and computational approach. Chichester: Wiley. Castelfranchi, C., & Poggi, I. (1998). Bugie, finzioni e sotterfugi. Florence: Carocci. Castelfranchi, C., & Tan, Y. (2002). 
Trust and deception in virtual societies. Dordrecht, The Netherlands: Kluwer.

55 The journal is published by the International Association of Bloodstain Pattern Analysts (IABPA). See http://iabpa.org

Castelle, G., & Loftus, E. F. (2001). Misinformation and wrongful convictions. In: S. D. Westervelt & J. A. Humphrey (Eds.). Wrongly convicted: perspectives on failed justice (pp. 17–35). Newark, NJ: Rutgers University Press. Catarci, T., & Sycara, K. (2004). Ontologies, databases, and applications of semantics (ODBASE) 2004 international conference. Berlin: Springer. Catts, E. P., & Goff, M. L. (1992). Forensic entomology in criminal investigations. Annual Review of Entomology, 37, 253–272. Catts, E. P., & Haskell, N. H. (1990). Entomology & death: A procedural guide. Clemson, SC: Joyce’s Print Shop. Cavazza, M., Charles, F., & Mead, S. J. (2001). Narrative representations and causality in character- based interactive storytelling. In Proceedings of CAST01, living in mixed realities, Bonn, Germany, September 2001, pp. 139–142. Cavazza, M., Charles, F., & Mead, S. (2002a). Planning characters’ behaviour in interactive storytelling. Journal of Visualization and Computer Animation, 13(2), 121–131. Cavazza, M., Charles, F., & Mead, S. J. (2002b). Character-based interactive storytelling. IEEE Intelligent Systems, 17(4), 17–24. Cavazza, M., & Donikian, S. (Eds.). (2007). Proceedings of the fourth international conference on virtual storytelling: Using virtual reality technologies for storytelling (ICVS’07).NewYork: ACM. Cayrol, C. & Lagasquie-Schiex, S. (2006). Coalitions of arguments in bipolar argumentation frameworks. At the Seventh international workshop on computational models of natural argument. Chadwick, D. W., & Basden, A. (2001). Evaluating trust in a public key certification authority. Computers and Security, 20(7), 592–611. Chadwick, D. W., Basden, A., Evans, J., & Young, A. J. (1998). Intelligent computation of trust. Short paper at the TERENA Networking Conference ’98 (TNC’98), Dresden, 5–8 October 1998. TERENA, the Trans-European Research and Education Networking Association. Chae, M., Shim, S., Cho, H., & Lee, B. (2007). An empirical analysis of fraud detection in online auctions: Credit card phantom transactions. In HICSS 2007: Proceedings of the 40th annual Hawaii international conference on system sciences. Chaib-Draa, B., & Dignum, F. (Eds.). (2002). Trends in agent communication language. Special issue of Computational Intelligence, 18, 89–101. Chaiken, S. (1987). The heuristic model of persuasion. In M. P. Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence: The Ontario symposium (Vol. 5, pp. 3–39). Hillsdale, NJ: Erlbaum. Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and systematic information processing within and beyond the persuasion context. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 212–252). New York: Guilford Press. Chaiken, S., Wood, W., & Eagly, A. H. (1996). Principles of persuasion. In E. T. Higgins & A. Kruglanski (Eds.), Social psychology: Handbook of basic mechanisms and processes (pp. 702–742). New York: Guilford Press. Champod, C. (1995). Edmond Locard – Numerical standards and ‘probable’ identifications. Journal of Forensic Identification, 45, 136–163. Champod, C., Lennard, C., Margot, P., & Stilovic, M. (2004). Fingerprints and other ridge skin impressions. Boca Raton, FL: CRC Press. Chan, H., Lee, R., Dillon, T., & Chang, E. (2001). E commerce: Fundamentals and applications. Chichester: Wiley. Chan, J. (1991). A computerised sentencing system for New South Wales courts. Computer Law and Practice, 1991, 137 ff. Chan, J., Brereton, D., Legosz, M., & Doran, S. (2001). 
E-policing: The impact of information technology on police practices. Brisbane, QLD: Criminal Justice Commission. Chance, J. E., & Goldstein, A. G. (1995). The other-race effect and eyewitness identification. In S. L. Sporer, R. S. Malpass, & G. Köhnken (Eds.), Psychological issues in eyewitness identification (pp. 153–176). Hillsdale, NJ: Lawrence Erlbaum Associates.

Channell, R. C., & Tolcott, M. A. (1954). Arrangement of equipment. In supplement to Human Factors in Undersea Warfare. Washington, DC: National Academy of Sciences, National Research Council. Chaoji, V., Hoonlor, A., & Szymanski, B. K. (2008a). Recursive data mining for author and role identification. In Proceedings of the third Annual Inormation Assuarance Workshop ASIA’08, Albany, NY, June 4–5, 2008, pp. 53–62. Chaoji, V., Hoonlor, A., & Szymanski, B. K. (2008b). Recursive data mining for role identification. In Proceedings of the fifth international Conference on Soft Computing as Transdisciplinary Science and Technology CSTST’08, Paris, France, October 27–31, 2008, pp. 218–225. Chaoji, V., Hoonlor, A., & Szymanski, B. K. (2010). Recursive data mining for role identification in electronic communications. International Journal of Hybrid Information Systems, 7(3), 89– 100. Also at: http://www.cs.rpi.edu/~szymansk/papers/ijhis.09.pdf Chapanis, A. (1969). Research techniques in human engineering. Baltimore, MD: John Hopkins Press. Chapanond, A., Krishnamoorthy, M. S., & Yener, B. (2005). Graph theoretic and spectral analysis of Enron email data. Computational & Mathematical Organization Theory, 11(3), 265–281. Charles, J. (1998). AI and law enforcement. IEEE Intelligent Systems, January–February, 77–80. Charniak, E. (1972). Toward a model of children’s story comprehension. Technical Report AI TR 266. Cambridge, MA: Artificial Intelligence Laboratory, Massachusetts Institute of Technology. ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-20266.pdf Charniak, E. (1977a). Ms. Malaprop, a language comprehension program. In Proceedings of the fifth international conference on artificial intelligence. Charniak, E. (1977b). A framed PAINTING: The representation of a common sense knowledge fragment. Cognitive Science, 1, 355–394. Charniak, E. (1983). Passing markers: A theory of contextual influence in language comprehen- sion. Cognitive Science, 7, 171–190. Charniak, E. (1986). A neat theory of marker passing. In Proceedings of the fifth national conference on artificial intelligence. Menlo Park, CA: AAAI Press, pp. 584–588. Charniak, E. (1991). A probabilistic analysis of marker-passing techniques for plan-recognition. In B. D’Ambrosio & P. Smets (Eds.), UAI ’91: Proceedings of the seventh annual conference on uncertainty in artificial intelligence, July 13–15, 1991, University of California at Los Angeles, Los Angeles, CA (pp. 69–76). San Mateo, CA: Morgan Kaufmann. Charniak, E., & Shimony, S. E. (1990). Probabilistic semantics for cost-based abduction. In Proceedings of the 11th annual national conference on artificial intelligence (AAAI’90).Menlo Park, CA: AAAI Press, pp. 106–111. Charniak, E., & Shimony, S. E. (1994). Cost-based abduction and MAP explanation. Artificial Intelligence, 66, 345–374. Charniak, E., & Wilks, Y. (1976). Computational semantics. New York: North-Holland. Chau, “P.” [= D. H.] (2011). Catching bad guys with graph mining. In The Fate of Money.An issue of Crossroads: The ACM Magazine for Students, 17(3), 16–18. Chau, D. H., Nachenberg, C., Wilhelm, J., Wright, A., & Faloutsos, C. (2010). Polonium: Tera- scale graph mining for malware detection. In Proceedings of the second workshop on Large- scale Data Mining: Theory and Applications (LDMTA 2010), Washington, DC, 25 July 2010. http://www.ml.cmu.edu/current_students/DAP_chau.pdf Chau, D. H., Pandit, S., & Faloutsos, C. (2006). Detecting fraudulent personalities in networks of online auctioneers. 
In Proceedings of the European Conference on Machine Learning (ECML) and Principles and Practice of Knowledge Discovery in Databases (PKDD) 2006, Berlin, 18–22 September 2006, pp. 103–114. Chau, M., Schroeder, J., Xu, J., & Chen, H. (2007). Automated criminal link analysis based on domain knowledge. Journal of the American Society for Information Science and Technology, 58(6), 842–855. Chellas, B. F. (1974). Conditional obligation. In S. Stenlund (Ed.), Logical theory and semantic analysis (pp. 23–33). Dordrecht: Reidel.

Chen, H., Chung, W., Xu, J. J., Wang, G., Qin, Y., & Chau, M. (2004). Crime data mining: A general framework and some examples. IEEE Computer, 37(4), 50–56. Chen, H., & Lynch, K. J. (1992). Automatic construction of networks of concepts characterizing document databases. IEEE Transactions on Systems, Sept./Oct. 1992, 885–902. Chen, H., Schroeder, J., Hauck, R., Ridgeway, L., Atabakhsh, H., Gupta, H., et al. (2003). COPLINK Connect: Information and knowledge management for law enforcement. In Digital Government: Technologies and Practices, special issue, Decision Support Systems, 34(3), 271–285. Chen, H., Zeng, D., Atabakhsh, H., Wyzga, W., & Schroeder, J. (2003). COPLINK managing law enforcement data and knowledge. Communications of the ACM, 46(1), 28–34. Chen, P. (2000a). An automatic system for collecting crime on the Internet. The Journal of Information, Law and Technology (JILT), 3 (online). http://elj.warwick.ac.uk/jilt/00-3/chen. html Chen, X., Wang, M., & Zhang, H. (2011). The use of classification trees for bioinformatics. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(1), 55–63. doi://10.1002/widm.14 Chen, Z. (2000b). Java card technology for smart cards: Architecture and programmer’s guide. Boston: Addison-Wesley. Chen, Z. (2001). Data mining and uncertain reasoning: An integrated approach.NewYork:Wiley. Chen, Z., & Kuo, C. H. (1991). Topology-based matching algorithm for fingerprint authentication. In Proceedings of 25th annual IEEE international carnahan conference on security technology, pp. 84–87. Cherry, M., & Imwinkelried, E. J. (2006). A cautionary note about fingerprint analysis and reliance on digital technology. Judicature, 89, 334–338. Cheswick, B. (1992). An evening with Berferd: In which a cracker is lured, endured, and stud- ied. In Proceedings of the Winter Usenix Conference, San Francisco, 1992, pp. 163–174. Published again in several places elsewhere. Retrieved in May 2011 http://www.cheswick.com/ ches/papers/berferd.pdf Chisholm, R. M. (1963). Contrary-to-duty imperatives and deontic logic. Analysis, 24, 33–36. Chisholm, R. M. (1965). The problem of empiricism. In R. J. Swartz (Ed.), Perceiving, sensing, and knowing. Berkeley, CA: University of California Press. Chiswick, D., & Cope, R. (Eds.). (1995). Seminars in practical forensic psychiatry. (Royal College of Psychiatrists, College Seminars Series.) London: Gaskell. Choo, R. K. K. (2008). Organised crime groups in cyberspace: A typology. Trends in Organized Crime, 11, 270–295. Choudhary, A.N., Honbo, D., Kumar, P., Ozisikyilmaz, B., Misra, S., & Memik, G. (2011). Accelerating data mining workloads: Current approaches and future challenges in system architecture design. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(1), 41–54. doi://10.1002/widm.9 Christie, G. C. (1964). Vagueness and legal language. Minnesota Law Review, 48, 885–911. Christie, G. C. (1984a). Mechanical jurisprudence. In The guide to American law: Everyone’s legal encyclopedia (Vol. 7, pp. 321–322). St. Paul, MN: West Publishing Company. 12 vols., 1983–1985. Christie, G. C. (1984b). Due process of law: A confused and confusing notion. In C. Perelman & R. Vande Elst (Eds.), Les notions à contenu variable en droit (Travaux du Centre National Belge de Recherche de Logique.) (pp. 57–79). Brussels: E. Bruylant. Christie, G. C. (1986). An essay on discretion. Duke Law Journal, 1986(5), 747–778. 
http://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1017&context=faculty_scholarship&sei-redir=1#search=““Duke+law+journal”+Christie+“an+essay+on+discretion”” Christie, G. C. (2000). The notion of an ideal audience in legal argument. Dordrecht: Kluwer. Also in French, L’Auditoire universel dans l’argumentation juridique (G. Haarscher, Trans.). Belgium: E. Bruylant, 2005. Christopher, S. (2004). A practitioner’s perspective of UK strategic intelligence. In J. H. Ratcliffe (Ed.), Strategic thinking in criminal intelligence (1st ed.). Sydney, NSW: Federation Press.

Chua, C. H., & Wareham, J. (2002). Self-regulation for online auctions: An analysis. In ICIS 2002: Proceedings of international conference on information systems. Chua, C. H., & Wareham, J. (2004). Fighting internet auction fraud: An assessment and proposal. IEEE Computer, 37(10), 31–37. Church, A. (1951). The weak theory of implication. In A. Menne, A. Wilhelmy, & H. Angell (Eds.), Kontroliertes Denken: Untersuchungen zum Logikkalkül und zur Logik der Einzelwissenschaften (pp. 22–37). Munich: Kommissions-Verlag Karl Alber. Cialdini, R. (1993). Influence: Science and practice (3rd ed.). New York: HarperCollins. Ciampolini, A., & Torroni, P. (2004). Using abductive logic agents for modelling judicial evaluation of criminal evidence. Applied Artificial Intelligence, 18(3/4), 251–275. Ciocoiu, M., Nau, D. S., & Grüninger, M. (2001). Ontologies for integrating engineering applications. Journal of Computing and Information Science and Engineering, 1(1), 12–22. Cios, K. J., Pedrycz, W., & Swiniarski, R. (1998). Data mining methods for knowledge discovery. Boston: Kluwer. Clark, M., & Crawford, C. (Eds.). (1994). Legal medicine in history. Cambridge History of Medicine. Cambridge: Cambridge University Press. Clark, P. (1991). A model of argumentation and its application in a cooperative expert system. Ph.D. thesis, Department of Computer Science, Turing Institute, University of Strathclyde, Glasgow, Scotland. Clark, R. A., & Delia, J. G. (1976). The development of functional persuasive skills in childhood and early adolescence. Child Development, 47, 1008–1014. Clark, S. E., & Wells, G. L. (2007). On the diagnosticity of multiple-witness identifica- tions. Law and Human Behavior, 32, 406–422. Published online on 18 December 2007. doi://10.1007/s10979-007-9115-7 Clarke, P. H. (1985). The surveyor in court. London: . Clarke, R. V., & Eck, J. (2005). Crime analysis for problem solvers in 60 small steps. Washington, DC: Office of Community Oriented Policing Services, U.S. Department of Justice. http://www. popcenter.org/Library/RecommendedReadings/60Steps.pdf Clarke, R. V., & Felson, M. (1993). Introduction: Criminology, routine activity, and rational choice. In R. V. Clarke & M. Felson (Eds.), Routine activity and rational choice. (Advances in Criminological Theory, 5.) New Brunswick, NJ: Transaction Publishers. Clay, M., & Lehrer, K. (Eds.). (1989). Knowledge and skepticism. Boulder, CO: Westerview Press. Clements, R. V. (1994). Safe practice in obstetrics and gynaecology: A medico-legal handbook. Edinburgh: . Clifford, B., & Bull, R. (1978). The psychology of person identification. London: Routledge. Coady, W. F. (1985). Automated link analysis: Artificial Intelligence-based tool for investigators. Police Chief, 52(9), 22–23. Cocker, M. (1990). Soldier, scientist and spy. London: Secker & Warburg. London: Mandarin, 1990. Cocker, M. (2000). Soldier, scientist, spy ...fraud. (sound cassette, recorded from BBC Radio 4, 11.00–11.30am, 26 December 2000.) London: BBC. Cohen, D. (2005). Arguments that backfire. In D. Hitchcock (Ed.), The uses of argument (pp. 58–65). Hamilton, ON: OSSA. Cohen, F. (2009). Two models of digital forensic analysis. In Proceedings of the fourth international IEEE workshop on Systematic Approaches to Digital Forensic Engineering (SADFE-2009), Oakland, CA, 21 May 2009, pp. 42–53. Cohen, F. S. (1935). Transcendental nonsense and the functional approach. Columbia Law Review, 35(6), 809–849. http://www.jstor.org/stable/1116300 Cohen, L. E., & Felson, M. (1979). 
Social change and crime rate trends: A routine activity approach. American Sociological Review, 44, 588–608. Cohen, L. J. (1977). The probable and the provable. Oxford: Oxford University Press. Cohen, P. (1985). Heuristic reasoning about uncertainty: An artificial intelligence approach. London: Pitman.

Cohen, P. R., & Levesque, H. J. (1990). Intention is choice with commitment. Artificial Intelligence, 42(2/3), 213–261. Cohn, A. G., Gooday, J. M., & Bennett, B. (1995). A comparison of structures in spatial and temporal logics. In R. Casati & G. White (Eds.), Philosophy and the cognitive sciences (pp. 409–422). Vienna: Holder-Pichler-Tempsky. Cohn, A. G., Gotts, N. M., Cui, Z., Randell, D. A., Bennett, B., & Gooday, J. M. (1994). Exploiting temporal continuity in qualitative spatial calculi. In R. G. Golledge & M. J. Egenhofer (Eds.), Spatial and temporal reasoning in geographical information systems. Amsterdam: Elsevier. Colby, K. M. (1975). Artificial paranoia. Oxford: . Colby, K. M. (1981). Modeling a paranoid mind. The Behavioral and Brain Sciences, 4(4), 515–560. Colby, K. M. (1983). Limits on the scope of PARRY as a model of paranoia. [Response to Manschreck (1983).] The Behavioral and Brain Sciences, 6(2), 341–342. Cole, D. J., & Ackland, P. R. (1994). The detective and the doctor: A murder casebook. London: Hale. Cole, S. A. (1999). What counts for identity? The historical origins of the methodology of latent fingerprint identification. Science in Context,56 12, 139–172. Cole, S. A. (2001). Suspect identities: A history of fingerprinting and criminal identification. Cambridge, MA: Harvard University Press. Cole, S. A. (2004). Grandfathering evidence: Fingerprint admissibility rulings from Jennings to Llera Plaza and back again. American Criminal Law Review, 41, 1189–1276. Cole, S. A. (2005). More than zero: Accounting for error in latent print identification. Journal of Criminal Law and Criminology, 95, 985–1078. Cole, S. A. (2006a). The prevalence and potential causes of wrongful conviction by fingerprint evidence. Golden Gate University Law Review, 37, 39–105. Cole, S. A. (2006b). Is Fingerprint identification valid? Rhetorics of reliability in fingerprint proponents’ discourse. Law Policy, 28, 109–135. Cole, S. A. (2009). Daubert revisited. Don’t shoot the messenger by one of the messengers: A response to Merlino et al. Tulsa Law Review, 45, 111–132. http://www.tulsalawreview.com/ wp-content/uploads/2010/10/Cole.Final_.pdf Cole, S. A., Welling, M., Dioso-Villa, R., & Carpenter, R. (2008). Beyond the individuality of fingerprints: A measure of simulated computer latent print source attribution accuracy. Law, Probability and Risk, 7, 165–189. Coleman, K. M. (Ed.). (2006). Martial, Liber spectaculorum [so on , vs. frontispiece: M. Valerii Martialis Liber spectaculorum], edited with introduction, translation [from Latin into English] and commentary. Oxford: Oxford University Press. Colombetti, M., Gini, G., & Nissan, E. (2007). Guest editorial: Marco Somalvico Memorial Issue. International Journal on Artificial Intelligence Tools, 16(2), 149–159. Colombetti, M., Gini, G., & Nissan, E. (2008a). Guest editorial: Papers in sensing and in reasoning (Marco Somalvico Memorial Issue). Cybernetics and Systems, 39(4), 305–309. Colombetti, M., Gini, G., & Nissan, E. (2008b). Guest editorial: Robotics, virtual reality, and agents and their body: A special issue in memory of Marco Somalvico. Journal of Intelligent and Robotic Systems, 52(3/4), 333–341. Colwell, K., Hiscock-Anisman, C., Memon, A., Woods, D., & Yaeger, H. (2006). Strategies of impression management among deceivers and truth tellers: How liars attempt to convince. American Journal of Forensic Psychology, 24(2), 31–38. Colwell, K., Hiscock-Anisman, C., Memon, A., Rachel, A., & Colwell, L. (2007). 
Vividness and spontaneity of statement detail characteristics as predictors of witness credibility. American Journal of Forensic Psychology, 25, 5–30. Combrink-Kuiters, C. J. M., De Mulder, R. V., & van Noortwijk, C. (2000). Jurimetrical research on judicial decision-making: A review. At Intelligent Decision Support for Legal Practice (IDS

56 Science in Context is a journal published by Cambridge University Press.

2000).InProceedings of the international ICSC congress “Intelligent Systems & Applications” (ISA 2000), Wollongong, NSW, Australia, December 2000 (Vol. 1, pp. 109–117). Wetaskiwin, AB: ICSC . Conan Doyle, A. (1987). See Doyle (1887).57 Conant, E. (2003). Man of 1,000 faces: The forensic genius of Mikhail Gerasimov. Archaeology, 56(4). The Archaeological Institute of America, July/August 2003. Cone, E. J., & Deyl, Z. (Eds.). (1992). Toxicology and forensic applications. Special issue, Journal of Chromatography, B: Biomedical Applications, 580(1/2). Amsterdam: Elsevier. Conklin, J., & Begeman, M. L. (1988). gIBIS: A hypertext tool for exploratory policy discussion. ACM Transactions on Office Information Systems, 4(6), 303–331. Conley, J. M., & O’Barr, W. M. (1998). Just words: Law, language and power. Chicago: University of Chicago Press. Conley, J. M., & O’Barr, W. M. (1990). Rules versus relationships: The ethnography of legal discourse. Chicago: University of Chicago Press. Console, L., & Torasso, P. (1991). A spectrum of logical definitions of model-based diagnosis. Computational Intelligence, 7(3), 133–141. Conte, R., & Paolucci, M. (2002). Reputation in artificial societies. Social beliefs for social order. Dordrecht: Kluwer. Conway, J. V. P. (1959). Evidential documents. Springfield, IL: Charles C. Thomas. Cook, R., Evett, I., Jackson, G., Jones, P., & Lambert, J. (1998). A model for case assessment and interpretation. Science and Justice, 38, 151–156. Cook, T., & Tattersall, A. (2008). Blackstone’s senior investigating officer’s handbook. Oxford: Oxford University Press. Cooper, J. (2008). Net marks crime capital. Bexley Times (South East London, of the Kentish Times Group), 11 September, p. 7, col. 5. Cope, N. (2003). Crime analysis: Principles and practice. In T. Newburn (Ed.), Handbook of policing (pp. 340–362). Cullompton: Willan Publishing. Cope, N. (2004). Intelligence led policing or policing led intelligence? Integrating volume crime analysis into policing. British Journal of Criminology, 44(2), 188–203. Correira, A. (1980). Computing story trees. American Journal of Computational Linguistics, 6(3/4), 135–149. Cortes, C., Pregibon, D., & Volinsky, C. (2001). Communities of interest. In Proceedings of the fourth international symposium of intelligent data analysis. Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273–297. Costa, M., Sousa, O., & Neves, J. (1999). Managing legal precedents with case retrieval nets. In H. J. van den Herik, M.-F. Moens, J. Bing, B. van Buggenhout, J. Zeleznikow, & C. Grütters (Eds.), Legal knowledge based systems. JURIX 1999: The twelfth conference (pp. 13–22). Nijmegen, The Netherlands, Gerard Noodt Instituut (GNI). Costello, B. D., Gunawardena, C. A., & Nadiadi, Y. M. (1994). Automated coincident sequencing for fingerprint identification. In Proceedings of the IEE colloquium on image processing for biometric measurement, London, pp. 3.1–3.5. Coull, S., Branch, J., Szymanski, B. K., & Breimer, E. (2003). Intrusion detection: A bioinformatics approach. In Proceedings of the 19th annual computer security applications conference,Las Vegas, NV, December 2003, pp. 24–33. Coull, S., & Szymanski, B. K. (2008). Sequence alignment for masquerade detection. Computational Statistics and Data Analysis, 52(8), 4116–4131. Coulthard, M. (1992). Forensic discourse analysis. In M. Coulthard (Ed.), Advances in spoken discourse analysis (pp. 242–258). London: Routledge.

57 In fact, Sir Arthur Conan Doyle was the son of Charles Doyle, who illustrated the book when it first appeared in book form, in 1888.

Cox, M., & Mays, S. (Eds.). (2000). Human osteology in archaeology and forensic science. London: Greenwich Medical Media. Cox, M. T. (1994). Machines that forget: Learning from retrieval failure of mis indexed expla- nations. In A. Ram & K. Eiselt (Eds.), Proceedings of the sixteenth annual conference of the cognitive science society (pp. 225–230). Hillsdale, NJ: LEA. http://mcox.org/Papers/mach- forget.ps.gz Cox, M. T. (1996a). Introspective multistrategy learning: Constructing a learning strategy under reasoning failure. Doctoral dissertation. Technical Report GIT-CC-96-06. Atlanta, GA: College of Computing, Georgia Institute of Technology. http://hcs.bbn.com/personnel/Cox/thesis/ Cox, M. T. (1996b). An empirical study of computational introspection: Evaluating introspec- tive multistrategy learning in the Meta-AQUA system. In R. S. Michalski & J. Wnek (Eds.), Proceedings of the third international workshop on multistrategy learning (pp. 135–146). Menlo Park, CA: AAAI Press. http://mcox.org/Ftp/eval-paper.ps.Z Cox, M. T. (2005). Metacognition in computation: A selected research review. Artificial Intelligence, 169(2), 104–114. Cox, M. T. (2007a). Perpetual self-aware cognitive agents. AI Magazine, 28(1), 32–45. Cox, M. T. (2007b). Metareasoning, monitoring and self-explanation. In Proceedings of the first international workshop on metareasoning in agent-based systems at AAMAS 2007.Also: Technical report. Cambridge, MA: BBN Technologies, Intelligent Computing, pp. 46–60. http://mcox.org/Papers/self-explan7.pdf Cox, M. T., & Raja, A. (2007). Metareasoning: A manifesto. Technical Report: BBN Technical Memo, BBN TM 2028. Cambridge, MA: BBN Technologies, Intelligent Computing. Cox, M. T., & Ram, A. (1999). Introspective multistrategy learning: On the construction of learning strategies. Artificial Intelligence, 112, 1–55. Cozman, F. J. (2001). JavaBayes: Bayesian networks in Java. http://www-2.cs.cmu.edu/~ javabayes/ Crandall, D., Backstrom, L., Cosley, D., Suri, S., Huttenlocher, D., & Kleinberg, J. (2010, December 28). Inferring social ties from geographic coincidences. Proceedings of the National Academy of Sciences, 107(52), 22436–22441. http://www.pnas.org/content/early/2010/12/02/ 1006155107.full.pdf+html Crandall, J. R., Wu, S. F., & Chong, F. T. (2005). Experiences using Minos as a tool for capturing and analyzing novel worms for unknown vulnerabilities. In K. Julisch & C. Krügel (Eds.), Detection of Intrusions and Malware, and Vulnerability Assessment: Proceedings of the second international conference (DIMVA 2005), Vienna, Austria, July 7–8, 2005 (pp. 32–50). (Lecture Notes in Computer Science, Vol. 3548.) Berlin: Springer. Crittendon, C. (1991). Unreality: The metaphysics of fictional objects. Ithaca, NY: Cornell University Press. Cross, R., & Tapper, C. (1985). Cross on evidence (6th ed.). London: Butterworth. Crump, D. (1997). On the uses of irrelevant evidence. Houston Law Review, 34, 1–53. Cui, Z., Cohn, A. G., & Randell, D. A. (1992). Qualitative simulation based on a logical formalism of space and time. In Proceedings of AAAI-92. Menlo Park, CA: AAAI Press, pp. 679–684. Culhane, S. E., & Hosch, H. M. (2002). An alibi witness’s influence on jurors’ verdicts. University of Texas-El Paso.58 [Cited before publication in a passage I quoted from Olson & Wells (2002).] Culhane, S. E., & Hosch, H. M. (2004). An alibi witness’s influence on juror’s decision making. Journal of Applied Social Psychology, 34, 1604–1616. Culhane, S. E., & Hosch, H. M. (2005). 
Law enforcement officers serving as jurors: Guilty because charged? Psychology, Crime and Law, 11, 305–313. Culhane, S. E., Hosch, H. M., & Weaver, W. G. (2004). Crime victims serving as jurors: Is a bias present? Law and Human Behavior, 28, 649–659.

58 At the time, Scott Culhane was pursuing there a doctoral degree in .

Cullingford, R. E. (1978). Script application: Computer understanding of newspaper stories. Technical Report YALE/DCS/tr116. New Haven, CT: Computer Science Department, Yale University. Cullingford, R. E. (1981). SAM (Ch. 5); Micro SAM (Ch. 6). In R. G. Schank & C. K. Riesbeck (Eds.), Inside computer understanding: Five programs plus miniatures (pp. 75–119 and 120– 135). Hillsdale, NJ: Erlbaum. Cummins, H., & Midlo, C. (1943). Finger prints, palms and soles: An introduction to dermato- glyphics. Philadelphia, PA: Blakiston. Curzon, L. B. (1997). Criminal law (8th ed.). London: Pitman. Cutler, B. L. (Ed.). (2009). Expert testimony on the psychology of eyewitness identification (American Psychology-Law Society Series.) New York: Oxford University Press. Cutler, B. L., & Penrod, S. D. (1995). Assessing the accuracy of eye-witness identifications. Chapter 3.3 In R. Bull & D. Carson (Eds.), Handbook of psychology in legal contexts (pp. 193–213). Chichester: Wiley. Cutler, B. L., Penrod, S. D., & Martens, T. K. (1987). The reliability of eye-witness identifications: The role of system and estimator variables. Law and Human Behavior, 11, 223–258. Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2, 303–314. Dagan, H. (2007). The realist conception of law. Toronto Law Journal, 57(3), 607–660. Dahlgren, K., McDowell, J., & Stabler, E. P., Jr. (1989). Knowledge representation for common- sense reasoning with text. Computational Linguistics, 15(3), 149–170. http://acl.ldc.upenn.edu/ J/J89/J89-3002.pdf Dalton, R. (2005). Ornithologists stunned by bird collector’s deceit. Nature, 437, 302–303. http:// www.nature.com/news/index.html Danet, B. (1994). Review of R. W. Shuy, Language crimes: The use and abuse of language evi- dence in the courtroom (Oxford: Blackwell, 1993), in the Journal of Language and Social Psychology, 13(1), 73–78. Daniels, J. J., & Rissland, E. L. (1997). Finding legally relevant passages in case opinions. In Proceedings of the sixth international conference on artificial intelligence and law, Melbourne, Australia. New York: ACM Press, pp. 39–46. Darling, S., Valentine, T., & Memon, A. (2008). Selection of lineup foils in operational contexts. Applied Cognitive Psychology, 22, 159–169. Darr, T., & Birmingham, W. (1996). An attribute-space representation and algorithm for concurrent engineering. Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AI EDAM), 10(1), 21–35. Dauer, F. W. (1995). The nature of fictional characters and the referential fallacy. The Journal of Aesthetics and Art Criticism, 53(1), 32–38. Daugherty, W. E., & Janowitz, M. (1958). A psychological warfare casebook. Baltimore, MD: John Hopkins Press, for Operations Research Office, Johns Hopkins University. Now available from Ann Arbor, MI: University Microfilms International (UMI) books on demand. Dauglas, M. (1993). Emotion and culture in theories of justice. Economy and Society, 22(4), 501–515. Dave, K., Lawrence, S., & Pennock, D. M. (2003). Mining the peanut gallery: Opinion extrac- tion and semantic classification of product reviews. In WWW 2003: Proceedings of the 12th international conference on world wide web. Davenport, G., & Murtaugh, M. (1997). Automatic storyteller systems and the shifting sands of story. IBM Systems Journal, 36(3), 446–456. Davenport, G., Bradley, B., Agamanolis, S., Barry, B., & Brooks, K. (2000). Synergistic sto- ryscapes and constructionist cinematic sharing. 
IBM Systems Journal, 39(3/4), 456–469. Davenport, G. C., France, D. L., Griffin, T. J., Swanburg, J. G., Lindemann, J. W., Trammell, V., et al. (1992). A multidisciplinary approach to the detection of clandestine graves. Journal of Forensic Science, 37(6), 1445–1458. (Also, posted at http://www.terraplus.ca/case-histories/dave1.htm)

David, R., & Alla, H. (1992). Petri nets and Grafcet: Tools for modelling discrete event systems. New York: Prentice Hall. Translation of: Du Grafcet aux reseaux de Petri. Davide, F. A. M., Di Natale, C., & D’Amico, A. (1995). Self-organizing sensory maps in odour classification mimicking. Biosensors and Bioelectronics, 10, 203–218. Davies, G. M., & Christie, D. (1982). Face recall: An examination of some factors limiting composite production accuracy. Journal of Applied Psychology, 67(1), 103–109. Davies, G. M., Ellis, H. D., & Shepherd, J. W. (1978). Face identification: The influence of delay upon accuracy of Photofit construction. Journal of Police Science and Administration, 6(1), 35–42. Davies, G. M, Ellis, H. D., & Shepherd, J. W. (1981). Perceiving and remembering faces.New York: Academic. Davis, D., & Follette, W. C. (2002). Rethinking the probative value of evidence: Base rates, intuitive profiling, and the “postdiction” of behavior. Law and Human Behavior, 26, 133–158. Davis, D., & Follette, W. C. (2003). Toward an empirical approach to evidentiary ruling. Law and Human Behavior, 27, 661–684. Davis, G., & Pei, J. (2003). Bayesian networks and traffic accident reconstruction. Proceedings of the ninth international conference on artificial intelligence and law, Edinburgh, Scotland (pp. 171–176). New York: ACM Press. Davis, O. (1999). Palynomorphs. At the website whose address is Retrieved in March 2007 http:// www.geo.arizona.edu/palynology/ppalydef.html Dawid, A. P. (1994). The island problem: Coherent use of identification evidence. Chapter 11 In P. R. Freeman & A. F. M. Smith (Eds.), Aspects of uncertainty: A tribute to D. V. Lindley (pp. 159–170). Chichester: Wiley. Dawid, A. P. (1998). Modelling issues in forensic inference. In 1997 ASA proceedings, Section on Bayesian Statistics, pp. 182–186. Alexandria, VA: The American Statistical Association. Dawid, A. P. (2001a). Comment on Stockmarr’s ‘Likelihood ratios for evaluating DNA evi- dence when the suspect is found through a database search’ (with response by A. Stockmarr). Biometrics, 57, 976–980. Dawid, A. P. (2001b). Bayes’s theorem and weighing evidence by juries. Research Report 219, April. Department of Statistical Science, University College London. Dawid, A. P. (2002). Bayes’s theorem and weighing evidence by juries. In R. Swinburne (Ed.), Bayes’s theorem. Proceedings of the British Academy, 113, 71–90. Dawid, A. P. (2004a). Which likelihood ratio? (Comment on ‘Why the effect of prior odds should accompany the likelihood ratio when reporting DNA evidence’, by R. Meester & M. Sjerps). Law, Probability and Risk, 3, 65–71. Dawid, A. P. (2004b). Statistics on trial. Research Report 250, December. London: Department of Statistical Science, University College London. Then published as: Statistics on trial. Significance, 2(2005), 6–8. Dawid, A. P. (2005a). Probability and statistics in the law. In Z. Ghahramani & R.G. Cowell (Eds.) Proceedings of the tenth international workshop on artificial intelligence and statistics, January 6–8 2005, Barbados. (Online at http://tinyurl.com/br8fl). Dawid, A. P. (2005b). Probability and proof. On-line Appendix to Analysis of evidence (2nd ed., pp. 119–148), by T. J. Anderson, D. A. Schum, & W. L. Twining. Cambridge: Cambridge University Press. Posted at http://tinyurl.com/7g3bd, 94 pp. Dawid, A. P. (2008). Statistics and the law. In A. Bell, J. Swenson-Wright, & K. Tybjerg (Eds.), Evidence. Cambridge.: Cambridge University Press, pp. 119–148. 
Also: Research Report 244, Department of Statistical Science, University College London, May 2004. Dawid, A. P., & Evett, I. W. (1997). Using a graphical method to assist the evaluation of complicated patterns of evidence. Journal of Forensic Science, 42, 226–231. Dawid, A. P., & Evett, I. W. (1998). Authors’ response to ‘Commentary on Dawid, A. P. and Evett, I. W. Using a graphical method to assist the evaluation of complicated patterns of evidence. J. Forensic Sci. (1997) Mar; 42(2): 226–231’ by Ira J. Rimson. Journal of Forensic Science, 43, 251.

Dawid, A. P., Hepler, A. B., & Schum, D. A. (2011). Inference networks: Bayes and Wigmore. Chapter5In:A.P.Dawid,W.L.Twining,&D.Vasilaki(Eds.),Evidence, inference and enquiry (to appear). Oxford: Oxford University Press. Dawid, A. P., & Mortera, J. (1994). Elementary Watson!: Coherent analysis of forensic evidence. Research Report 136, May. Department of Statistical Science, University College London. Dawid, A. P., & Mortera, J. (1996). Coherent analysis of forensic identification evidence. Journal of the Royal Statistics Society, B, 58, 425–443. Dawid, A. P., & Mortera, J. (1998). Forensic identification with imperfect evidence. Biometrika, 85, 835–849. Correction: Biometrika, 86 (1999), p. 974. Dawid, A. P., Mortera, J., Dobosz, M., & Pascali, V. L. (2003). Mutations and the probabilistic approach to incompatible paternity tests. In B. Brinkmann & A. Carracedo (Eds.), Progress in forensic genetics 9: Proceedings from the 19th congress of the international society for forensic haemogenetics (pp. 637–638). (International Congress Series, Vol. 1239.) Amsterdam: Elsevier Science. Dawid, A. P., Mortera, J., & Pascali, V. L. (2001). Non-fatherhood or mutation? A probabilis- tic approach to parental exclusion in paternity testing. Forensic Science International, 124, 55–61. Dawid, A. P., Mortera, J., Pascali, V. L., & van Boxel, D. W. (2002). Probabilistic expert systems for forensic inference from genetic markers. Scandinavian Journal of Statistics, 29, 577–595. Dawid, A. P., Mortera, J., & Vicard, P. (2006). Representing and solving complex DNA identi- fication cases using Bayesian networks. In Progress in forensic genetics 11 (Proceedings of the 21st international ISFG congress). International Congress Series, Vol. 1288. Amsterdam: Elsevier Science, pp. 484–91. doi:10.1016/j.ics.2005.09.115 Dawid, A. P., Mortera, J., & Vicard, P. (2010). Paternity testing allowing for uncertain mutation rates. In A. O’Hagan & M. West (Eds.), The Oxford handbook of applied bayesian analysis (pp. 188–215). Oxford: Oxford University Press. Dawid, A. P., & Pueschel, J. (1999). Hierarchical models for DNA profiling using heterogeneous databases (with Discussion). In J. M. Bernardo, J. O. Berger, A. P. Dawid, & A. F. M. Smith (Eds.), Bayesian statistics 6 (pp. 187–212). Oxford: Oxford University Press. Dawid, A. P., van Boxel, D. W., Mortera, J., & Pascali, V. L. (1999). Inference about disputed paternity from an incomplete pedigree using a probabilistic expert system. Bulletin of the International Statistics Institute, 58, 241–242. Contributed Papers Book 1. Dawson, L., Macdonald, L., Ball, J., & “other members of the SoilFit team”. (2006). Integration of soil fingerprinting techniques for forensic application (abstract). In A. Ruffell (Ed.), Abstract book of geoscientists at crime scenes: First, inaugural meeting of the Geological Society of London 20 December 2006 (p. 23). London: Forensic Geoscience Group. http://www.geolsoc. org.uk/pdfs/FGtalks&abs_pro.pdf. Daye, S. J. (1994). Middle-class blacks in Britain. Basingstoke: Macmillan. De Antonellis, V., Pozzi, G., Schreiber, F. A., Tanca, L., & Tosi, L. (2005). A Web-geographical information system to support territorial data integration. In M. Khosrow-Pour (Ed.), Encyclopedia of information science and technology (4 Vols., pp. 33–37). Hershey, PA: Idea Group Publishing. http://home.dei.polimi.it/schreibe/papers/encyclopedia deafin.pdf (sic: /schreibe/ not /schreiber/) Debevec, P. (1998). 
Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In SIGGRAPH ’98: Proceedings of the 25th annual conference on computer graphics and interactive techniques. New York: ACM Press, pp. 189–198. de Cataldo Neuburger, L., & Gulotta, G. (1996). Trattato della menzogna e dell’inganno. Milan: Giuffrè. Dechter, R., Geffner, H., & Halpern, J. Y. (Eds.). (2010). Heuristics, probability and causality: A tribute to Judea Pearl. London: College Publications. Deedrick, D. W. (2001). Fabric processing and “nubs”. Chapter 1 In M. M. Houck (Ed.), Mute witnesses: Trace evidence analysis. London: Academic.

Deffenbacher, K. A., & Loftus, E. F. (1982). Do jurors share a common understanding concerning eyewitness behaviour? Law and Human Behavior, 6, 15–29. Dehn, N. (1981). Memory in story invention. In Proceedings of the 3rd Annual Conference of the Cognitive Science Society. Berkeley, CA: Cognitive Science Society, pp. 213–215. Dehn, N. (1989). Computer story-writing: The role of reconstructive and dynamic memory (Technical Report YALE/DCS/tr712). New Haven, CT: Computer Science Department, Yale University. DeJong, G. F. (1979). Skimming stories in real time: An experiment in integrated understand- ing. Ph.D. thesis, Technical Report YALE/DCS/tr158. Department of Computer Science, Yale University, New Haven, CT. DeJong, G. F. (1982). An overview of the FRUMP system. In W. G. Lehnert & M. H. Ringle (Eds.), Strategies for natural language processing (pp. 149–176). Hillsdale, NJ: Erlbaum. de Kleer, J. (1984). Choices without backtracking. In Proceedings of the fourth national conference on artificial intelligence, Austin, TX. Menlo Park, CA: AAAI Press, pp. 79–84. de Kleer, J. (1986). An assumption-based TMS. Artificial Intelligence, 28, 127–162. de Kleer, J. (1988). A general labeling algorithm for assumption-based truth maintenance. In Proceedings of the 7th national conference on artificial intelligence, pp. 188–192. Delannoy, J. F. (1999). Argumentation mark-up: A proposal. In Proceedings of the workshop: Towards standards and tools for discourse tagging, Association for Computational Linguistics, pp. 18–25. Article W99-0303 in the online version of the proceedings, accessible in ACL Anthology (at http://acl.ldc.upenn.edu//W/W99/). Del Boca, A. (1987). Gli italiani in Africa orientale (Vol. 4). Bari & Rome: Laterza. Repr. Milan: Mondadori, 1996, 2001. The series of 4 vols. was first published by Laterza (Vol. 1: 1976, Vol. 2: 1980, Vol. 3: 1986, Vol. 4: 1987). The current edition is Mondadori’s (Vol. 1: 1999, Vol. 2: 2000, Vol. 3: 2000, Vol. 4: 2001). Demelas-Bohy, M.-D., & Renaud, M. (1995). Instability, networks and political parties: A political history expert system prototype. In E. Nissan & K. M. Schmidt (Eds.), From information to knowledge: Conceptual and content analysis by computer (pp. 228–260). Oxford: Intellect Books. De Nicola, A., Missikoff, M., & Navigli, R. (2009). A software engineering approach to ontology building. Information Systems, 34(2), 258–275. Denney, R. L., & Sullivan, J. P. (Eds.). (2008). Clinical neuropsychology in the criminal forensic setting. New York: Guilford Press. Denning, D. (1986). An intrusion detection model. In Proceedings of the seventh IEEE symposium on security and privacy, May 1986, pp. 119–131. DePaulo, B. M., & Kashy, D. A. (1998). Everyday lies in close and casual relationships. Journal of Personality and Social Psychology, 74(1), 63–79. DePaulo, B. M., Kirkendol, S. E., Tang, J., & O’Brien, T. P. (1988). The motivational impairment effect in the communication of deception: replications and extensions. Journal of Nonverbal Behavior, 12(3), 177–202. DePaulo, B. M., Lanier, K., & Davis, T. (1983). Detecting the deceit of the motivated liar. Journal of Personality and Social Psychology, 45(5), 1096–1103. DePaulo, B. M., LeMay, C. S., & Epstein, J. A. (1991). Effects of importance of success and expec- tations for success on effectiveness at deceiving. Personality and Social Psychology Bulletin, 17(1), 14–24. DePaulo, B. M., Lindsay, J. L., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. 
Psychological Bulletin, 129, 74–118. DePaulo, B. M., & Pfeifer, R. L. (1986). On-the-job experience and skill at detecting deception. Journal of Applied Social Psychology, 16, 249–267. DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1984). Deceiving and detecting deceit. In B. R. Schlenker (Ed.), The self and social life (pp. 323–370). New York: McGraw-Hill. de Rosis, F., Castelfranchi, C., & Carofiglio, V. (2000). On various sources of uncertainty in modeling suspicion and how to treat them. In Proceedings of the workshop on deception, fraud and trust in agent societies, at the Autonomous Agents 2000 Conference, pp. 61–72.

Dershowitz, A. M. (1986). Reversal of fortune: Inside the von Bülow case. New York: Random House. de Vel, O., Anderson, A., Corney, M., & Mohay, G. (2001). Mining E-mail content for author identification forensics. SIGMOD Record, 30(4), 55–64. de Vey Mestdagh, K. (1999). Can computers administer the law? An expert system for environmental permit law. In Proceedings of legal knowledge based systems: JURIX 1999. Nijmegen, The Netherlands: Gerard Noodt Instituut, pp. 134–135. De Vey Mestdagh, C. N. J. (2003). Administrative Normative Information Transaction Agents (ANITA): Legitimacy and information technology, the best of two worlds. In Access to knowledge and its enhancements: Proceedings of the ToKeN2000 symposium, Delft University of Technology, Delft, The Netherlands, 21 February 2003. de Ville, B. (2006). Decision trees for business intelligence and data mining: Using SAS enterprise miner. Cary, NC: SAS Publishing. Devlin, P. (1976). Report to the Secretary of State for the Home Department of the Departmental Committee on Evidence of identification in criminal cases. London: HMSO. Dewey, J. (1929). Experience and nature (2nd ed.). LaSalle, IL: Open Court. DeWolf, H. (1966). My ride on a torpedo. Reader’s Digest, 89(531), July, U.S. edition for American service personnel abroad, pp. 167–171. Diaz, R. M. (1981). Topics in the logic of relevance. Munich: Philosophia Verlag. Díaz-Agudo, B., & González-Calero, P. A. (2003). Knowledge intensive CBR through ontologies. Expert Update, 6(1), 44–54. Díaz-Agudo, B., Gervás, P., & Peinado, F. (2004). A case based reasoning approach to story plot generation. In Advances in case-based reasoning. Proceedings of the 7th European conference on case based reasoning, Madrid, 30 August – 2 September 2004. (Lecture Notes in Artificial Intelligence, Vol. 3155.) Berlin: Springer, pp. 142–156. Di Battista, G., Eades, P., Tamassia, R., & Tollis, I. G. (1999). Graph drawing: Algorithms for the visualization of graphs. Englewood Cliffs, NJ: Prentice-Hall. Dick, J. P. (1987). Conceptual retrieval and case law. In Proceedings of the first international conference on artificial intelligence and law, Boston. New York: ACM Press, pp. 106–115. Dick, J. P. (1991). A conceptual, case-relation representation of text for intelligent retrieval. Ph.D. thesis, University of Toronto, Toronto, ON. Dickey, A. (1990). Family law (2nd ed.). Sydney: The Law Book Company Ltd. Diesner, J., & Carley, K. (2005). Exploration of communication networks from the Enron email corpus. In Proceedings of the SIAM international conference on data mining, SIAM workshop on link analysis, counterterrorism and security. Philadelphia, PA: SIAM. Dijkstra, P., Bex, F. J., Prakken, H., & De Vey Mestdagh, C. N. J. (2005). Towards a multi-agent system for regulated information exchange in crime investigations. Artificial Intelligence and Law, 13, 133–151. http://www.computing.dundee.ac.uk/staff/florisbex/Papers/AILaw05.pdf Ding, Y., Fensel, D., Klein, M. C. A., Omelayenko, B., & Schulten, E. (2004). The role of ontologies in eCommerce. In S. Staab & R. Studer (Eds.), Handbook on ontologies (pp. 593–616). International Handbooks on Information Systems. Berlin: Springer. Dintino, J. J., & Martens, F. T. (1983). Police intelligence systems in crime control. Springfield, IL: Charles C. Thomas. Dix, J., Parsons, S., Prakken, H., & Simari, G. (2009). Research challenges for argumentation. Computer Science: Research and Development, 23(2009), 27–34. Dixon, D. (1999). Police investigative procedures. In C. Walker & K.
Starmer (Eds.), Miscarriage of justice: A review of justice in error (2nd ed.). London: Blackstone Press. do Carmo Nicoletti, M., & Quinteiro Uchõa, J. (2001). A family of algorithms for implementing the main concepts of the rough set theory. In A. Abraham & M. Köppen (Eds.), Hybrid information systems: Proceedings of the first international workshop on Hybrid Intelligent Systems (HIS 2001), Adelaide, Australia, December 11–12, 2001 (pp. 583–595). Advances in Soft Computing Series. Heidelberg: Physica-Verlag (of Springer-Verlag), 2002.

Dolan, C. (1989). Tensor manipulation networks: Connectionist and symbolic approaches to com- prehension, learning, and planning. Technical Report 890030. Los Angeles, CA: Computer Science Department, University of California. ftp://ftp.cs.ucla.edu/tech-report/198_-reports/ 890030.pdf Doležel, L. (1972). From motifeme to motifs. Poetics, 4, 55–90. Dolnik, L., Case, T. I., & Williams, K. D. (2003). Stealing thunder as a courtroom tactic revisited: Processes and boundaries. Law and Human Behavior, 27(3), 267–287. Dolz, M. S., Cina, S. J., & Smith, R. (2000). Stereolithography: A potential new tool in forensic medicine. American Journal of Forensic Medicine and Pathology, 21(2), 119–123. Domike, S., Mateas, M., & Vanouse, P. (2003). The recombinant history apparatus presents termi- nal time. In M. Mateas & P. Sengers (Eds.), Narrative intelligence (pp. 155–173). Amsterdam: Benjamins. Domshlak, C., & Shimony, S. E. (2003). Efficient probabilistic reasoning in Bayes nets with mutual exclusion and context specific independence. In Special Issue on Uncertain Reasoning (Part 1), in the International Journal of Intelligent Systems, 19(8), 703–725. Donnelly, L. J. (2002a, May). Finding the silent witness. Geoscientist, 12(5), 16–17. The Geological Society of London. Donnelly, L. J. (2002b). Earthy clues. Geologists can help the police to solve serious crime. The Times, London, Monday 5th August 2002, p. 10, T2. Donnelly, L. J. (2002c, May). Finding the silent witness: How forensic geology helps solve crimes. All-Party Parliamentary Group for Earth Science, Westminster Palace, Houses of Parliament. The Geological Society of London. Geoscientist, 12(5), 24. Donnelly, L. J. (2003, December). The applications of forensic geology to help the police solve crimes. European Geologist: Journal of the European Federation of Geologists, 16, 8–12. Donnelly, L. J. (2004, March–April). Forensic geology: The discovery of spades on Saddleworth Moor. Geology Today, 20(2), 42. Oxford: Blackwell. Donnelly, L. J. (2006). Introduction & welcome. In A. Ruffell (Ed.), Abstract book of geosci- entists at crime scenes: First, inaugural meeting of the Geological Society of London,20 December 2006 (pp. 3–5). London: Forensic Geoscience Group. http://www.geolsoc.org.uk/ pdfs/FGtalks&abs_pro.pdf Doob, A. N. (1978). Research Paper on the Canadian Juror’s View of the Criminal Jury Trial. Publication D-88 of the Law Reform Commission of Canada, Ottawa, ON, 1978. Also, microfiche, Buffalo, NY: Hein, 1984. Doob, A., & Park, N. (1987–1988). Computerized sentencing information for judges: An aid to the sentencing process. Criminal Law Quarterly, 30, 54–72. Doob, A. N. (1990). Sentencing aids: Final report to the Donner Canadian foundation. Toronto, ON: Centre of Criminology, University of Toronto. Doob, A. N., Baranek, P. M., & Addario, S. M. (1991). Understanding justices: A study of Canadian justices of the peace. Research Report 25. Toronto, ON: Centre of Criminology, University of Toronto. Dore, A., & Vellani, S. (1994). Materiali lateniani nelle collezioni del Museo Civico Archeologico di Bologna. OCNUS: Quaderni della Scuola di Specializzazione in Archeologia, 2, 43–51. Doyle, A. C. (1887). A study in scarlet.InBeeton’s Christmans annual. Ward, Lock (illustrated by D. H. Friston). Then in book form: Ward, Lock & Co., 1888 (illustrated by C. Doyle); 2nd edn., 1889 (illustrated by G. Hutchinson); 1st American edn., J. B. Lippincott & Co., 1890. Reprinted, e.g. London: Murray, 1967. Doyle, J. (1979). A truth maintenance system. 
Artificial Intelligence, 12, 231–272. Dozier, C., Jackson, P., Guo, X., Chaudhary, M., & Arumainayagam, Y. (2003). Creation of an expert witness database through text mining. In Proceedings of the ninth international conference on artificial intelligence and law, Edinburgh, Scotland. New York: ACM Press, pp. 177–184. DPRC. (2000). Website of the Declassification Productivity Research Center, George Washington University, Washington, DC. http://dprc.seas.gwu.edu/dprc5/research_projects/dwpa_n.htm

Dragoni, A. F., & Animali, S. (2003). Maximal consistency, theory of evidence, and Bayesian conditioning in the investigative domain. Cybernetics and Systems, 34(6/7), 419–465. Dragoni, A. F., Giorgini, P., & Nissan, E. (2001). Distributed belief revision as applied within a descriptive model of jury deliberations. In a special issue on “Artificial Intelligence and Law”, Information & Communications Technology Law, 10(1), 53–65. Dragoni, A. F., & Nissan, E. (2004). Salvaging the spirit of the meter-models tradition: A model of belief revision by way of an abstract idealization of response to incoming evidence delivery during the construction of proof in court. Applied Artificial Intelligence, 18(3/4), 277–303. Dreger, H., Kreibich, C., Paxson, V., & Sommer, R. (2005). Enhancing the accuracy of network-based intrusion detection with host-based context. In K. Julisch & C. Krügel (Eds.), Detection of intrusions and malware, and vulnerability assessment: Proceedings of the Second International Conference (DIMVA 2005), Vienna, Austria, July 7–8, 2005 (pp. 206–221). Lecture Notes in Computer Science, Vol. 3548. Berlin: Springer. Dror, I., & Hamard, S. (2009). Cognition distributed: How cognitive technology extends our minds. Amsterdam: Benjamins. Dror, I. E., & Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification, 56, 600–616. http://www.bioforensics.com/sequential_unmasking/Dror_Errors_JFI.pdf Dror, I. E., Charlton, D., & Péron, A. (2006). Contextual information renders experts vulnerable to making erroneous identifications. Forensic Science International, 156, 74–78. http://www. bioforensics.com/sequential_unmasking/Dror_Contextual_FSI_2006.pdf Dror, I. E., Péron, A., Hind, S.-L., & Charlton, D. (2005). When emotions get the bet- ter of us: The effect of contextual top-down processing on matching fingerprints. Applied Cognitive Psychology, 19(6), 799–809. http://www.bioforensics.com/sequential_unmasking/ Dror_emotions.pdf Dror, I. E., & Rosenthal, R. (2008). Meta-analytically quantifying the reliability and biasability of forensic experts. Journal of Forensic Sciences, 53(4), 900–903. http://www.bioforensics.com/ sequential_unmasking/dror_meta-analysis_JFS_2008.pdf Dror, I. E., & Stevenage, S. V. (Eds.). (2000). Facial information processing: A multidisciplinary perspective. Special issue of Pragmatics & Cognition, 8(1). Amsterdam: Benjamins. Du, X., Li, Y., Chen, W., Zhang, Y., & Yao, D. (2006). A Markov random field based hybrid algorithm with simulated annealing and genetic algorithm for image segmentation. In L. Jiao, L. Wang, X. Gao, J. Liu, & F. Wu (Eds.), Advances in natural computation: Second inter- national conference (ICNC 2006), Xi’an, China, September 24–28, 2006 (pp. 706–715). Proceedings, Part I. (Lecture Notes in Computer Science, Vol. 4221). Berlin: Springer. Duda, R. O., Hart, P. E., & Stork, D. G. (2001a). Pattern classification (2nd ed.). New York: Wiley Interscience. Duda, R. O., Hart, P. E., & Stork, D. G. (2001b). Unsupervised learning and clustering. Chapter 10 in their Pattern classification (2nd ed.). New York: Wiley Interscience. Dulaunoy, A. (2010). Honeynets: Introduction to Honeypot/Honeynet technologies and its his- torical perspective. ASBL CSRRT-LU (Computer Security Research and Response Team, Luxembourg) http://www.csrrt.org/ January 15, 2010 http://www.foo.be/cours/dess-20092010/ honeynet-intro.pdf (accessed in May 2011). Duncan, G. T., Tacey, M. L., & Stauffer, E. (2005). Techniques of DNA analysis. In S. H. James & J. J. 
Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Dundes, A. (1975). From etic [sic] to emic units in the structural study of folklore. In A. Dundes (Ed.), Analytic essays in folklore (pp. 61–72). Studies in Folklore, 2. The Hague, The Netherlands: Mouton. (Originally: From etic to emic units in the structural study of folktales. Journal of American Folklore, 75 (1962), pp. 95–105) Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in non-monotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2), 321–357. Dung, P. M., Thang, P. M., & Hung, N. D. (2010). Modular argumentation for modelling legal doctrines of performance relief. Argument & Computation, 1(1), 47–69.

Dunn, J. M. (1986). Relevance logic and entailment. In F. Guenthner & D. Gabbay (Eds.), Handbook of philosophical logic (Vol. 3, pp. 117–124). Dordrecht, The Netherlands: Reidel (now Springer). Rewritten as Dunn & Restall (2002). Dunn, J. M., & Restall, G. (2002). Relevance logic and entailment. In F. Guenthner & D. Gabbay (Eds.), Handbook of philosophical logic (New Edition, Vol. 6, pp. 1–128). Dordrecht, The Netherlands: Kluwer (now Springer). A revised version of Dunn (1986). Dunne, P. E. (2003). Prevarication in duspute protocols. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003 (pp. 12–21). New York: ACM Press. Dunne, P. E., & Bench-Capon, T. J. M. (2002). Coherence in finite argument systems. Artificial Intelligence, 141(1/2), 187–203. http://www.csc.liv.ac.uk/~tbc/publications/ulcs-01-006.pdf Dunne, P. E., & Bench-Capon, T. J. M. (Eds.). (2005). Argumentation in AI and law. (IAAIL Workshop Series, 2.) Nijmegen, The Netherlands: Wolff Publishers. Dunne, P. E., Doutre, S., & Bench-Capon, T. J. M. (2005). Discovering inconsistency through examination dialogues. In Proceedings of the 18th International Joint Conferences on Artificial Intelligence (IJCAI’05), Edinburgh, pp. 1560–1561. http://ijcai.org/search.php Durandin, G. (1972a). Les fondements du mensonge. Paris: Flammarion. Durandin, G. (1972b). La publicité en tant qu’idéologie. In Revue des travaux de l’Académie des Sciences Morales et Politiques(2ème trimestre, pp. 101–124). Durandin, G. (1977). De la difficulté à mentir. Paris: Publications de la Sorbonne, and Louvain, Belgium: Nauwelaerts. Durandin, G. (1978). La manipulation de l’opinion. In Revue des travaux de l’Académie des Sciences Morales et Politiques (pp. 143–173). Durandin, G. (1982). Les mensonges en propagande et en publicité. Paris: Presses Universitaires de France (PUF). Durandin, G. (1993). L’information, la désinformation et la réalité. (Collection “Le psychologue”.) Paris: Presses Universitaires de France (PUF). Durfee, E., Lesser, V., & Corkill, D. (1987). Coherent cooperation among communicating problem solvers. IEEE Transactions on Computers, 36(11), 1275–1291. Dworkin, R. (1977). Taking rights seriously. Cambridge, MA: Harvard University Press. Dworkin, R. (1986). Law’s empire. London: Duckworth. Dyer, M. G. (1983a). In-depth understanding: A computer model of integrated processing of narrative comprehension. Cambridge, MA: The MIT Press. Dyer, M. G. (1983b). The role of affect in narratives. Cognitive Science, 7, 211–242. Dyer, M. G. (1987). Emotions and their computations: Three computer models. Cognition and Emotion, 1(3), 323–347. Dyer, M. G. (1991a). Symbolic neuroengineering for natural language processing: A multi- level research approach. In J. Barnden & J. Pollack (Eds.), High-level connectionist models (pp. 32–86). Norwood, NJ: Ablex. Dyer, M. G. (1991b). Goal/plan analysis of text with distributed representations. In Proceedings of international workshop on fundamental research for the future generation of natural lan- guage processing. ATR Interpreting Telephony Research Laboratories, Kyoto International Community House, Kyoto, Japan, July 23–24, pp. 33–48. Dyer, M. G. (1995). Connectionist natural language processing: A status report. Chapter 12 In R. Sun & L. Bookman (Eds.), Computational architectures integrating neural and symbolic processes (pp. 389–429). Boston & Dordrecht, The Netherlands: Kluwer. Dyer, M. G., Flowers, M., & Wang, Y. 
A. (1992). Distributed symbol discovery through symbol recirculation: Toward natural language processing in distributed connectionist networks. Chapter 2 In R. Reilly & N. Sharkey (Eds.), Connectionist approaches to natural language understanding (pp. 21–48). Hillsdale, NJ: Lawrence Erlbaum Associates. Dysart, J. E., Lindsay, R. C. L., MacDonald, T. K., & Wicke, C. (2002). The intoxicated witness: Effects of alcohol on identification accuracy. Journal of Applied Psychology, 87, 170–175.

Eades, P. (1984). A heuristic for graph drawing. Congressus Numerantium, 42, 149–160. Earl, L. L. (1970). Experiments in automatic extracting and indexing. Information Storage and Retrieval, 6, 313–334. Easteal, S., McLeod, N., & Reed, K. (1991). DNA profiling: Principles, pitfalls and potential. Chur, Switzerland: Harwood. Ebert, J. I. (2002). Photogrammetry, photointerpretation, and digital imaging and mapping in envi- ronmental forensics. Chapter 3 In B. L. Murphy & R. D. Morrison (Eds.), Introduction to environmental forensics (pp. 43–69). San Diego, CA & London, U.K.: Academic. Ebert, L. C., Ptacek, W., Naether, S., Fürst, M., Ross, S., Buck, U., et al. (2010). Virtobot: A multi-functional robotic system for 3D surface scanning and automatic post mortem biopsy. International Journal of Medical Robotics, 6(1), 18–27. Eck, J. E., & Spelman, W. (1987). Problem solving: Problem-oriented policing in newport news. Washington, DC: Police Executive Research Forum. Eckert, W. G., & James, S. H. (1993). Interpretation of bloodstain evidence at crime scenes.Boca Raton, FL: CRC Press. [Later, Eckert & James (1999).] Eco, U. (1989). Foucault’s Pendulum (W. Weaver, Trans.). London: Secker & Warburg, 1989. (Italian original: Il pendolo di Foucault, Milan: Bompiani, 1988). Eco, U. (1995). The search for the perfect language (in The Making of Europe series), Oxford: Blackwell; London: FontanaPress, 1997. English translation from La ricerca della lingua perfetta nella cultura europea. Rome: Laterza, 1993 (in the series Fare l’Europa), 1996 (Economica Laterza, 85). Other , Paris: du Seuil (French); Munich: Beck (German); Barcelona: Editorial Crítica (Spanish & Catalan). Ecoff, N. L., Ekman, P., Mage, J. J., & Frank, M. G. (2000). Lie detection and language loss. Nature, 405, 139. Edmundson, H. P. (1969). New methods in automatic extracting. Journal of the Association for Computing Machinery, 16(2), 264–285. Edwards, D., & Potter, J. (1995). Attribution. Chapter 4 In R. Harré & P. Stearns (Eds.), Discursive psychology in practice (pp. 87–119). London and Thousand Oaks, CA: Sage. Egashira, M., & Shimizu, Y. (1993). Odor sensing by semiconductor metal oxides. Sensors & Actuators, 14, 443–446. Egeland, T., Mostad, P., & Olaisen, B. (1997). A computerised method for calculating the prob- ability of pedigrees from genetic data. Science & Justice, 37(4), 269–274. http://www.nr.no/~ mostad/pater Egeth, H. E. (1993). What do we not know about eyewitness identification? American Psychologist, 48, 577–580. Egger, S. A. (1990). Serial murder: An elusive phenomenon.NewYork:Praeger. Eggert, K. (2002). Held up in due course: Codification and the victory of form over intent in negotiable instruments law. Creighton Law Review, 35, 363–431. http://papers.ssrn.com/sol3/ papers.cfm?abstract_id=904656 Eigen, J. P. (1995). Witnessing insanity: Madness and mad-doctors in the English court.New Haven, CT: Yale University Press. Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference. Psychological Review, 92, 433–461. Ekelöf, P. O. (1964). Free evaluation of evidence. Scandinavian Studies in Law (Faculty of Law, Stockholm University), 8, 45–66. Ekman, P. (1981). Mistakes when deceiving. Annals of the New York Academy of Sciences, 364, 269–278. Ekman, P. (1985). Telling lies. New York: Norton. Ekman, P. (1988a). Lying and nonverbal behavior: Theoretical issues and new findings. Journal of Nonverbal Behavior, 12, 163–175. Ekman, P. (1988b). Self deception and detection of misinformation. 
In J. S. Lockhard & D. L. Paulhus (Eds.), Self-deception: An adaptive mechanism? (pp. 229–257). Englewood Cliffs, NJ: Prentice-Hall.

Ekman, P. (1989). Why lies fail and what behaviors betray a lie. In J. C. Yuille (Ed.), Credibility assessment (pp. 71–81). Dordrecht, The Netherlands: Kluwer. Ekman, P. (1996). Why don’t we catch liars? Social Research, 63, 801–817. Ekman, P. (1997a). Lying and deception. In N. L. Stein, P. A. Ornstein, B. Tversky, & C. Brainerd (Eds.), Memory for everyday and emotional events (pp. 333–347). Hillsdale, NJ: Lawrence Erlbaum Associates. Ekman, P. (1997b). Deception, lying and demeanor. In D. F. Halpern & A. E.Voiskounsky (Eds.), States of mind: American and post-soviet perspectives on contemporary issues in psychology (pp. 93–105). New York: Oxford University Press. Ekman, P., & Frank, M. G. (1993). Lies that fail. In M. Lewis & C. Saarni (Eds.), Lying and deception in everyday life (pp. 184–200). New York: Guilford Press. Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32, 88–105. Ekman, P., & Friesen W. V. (1974). Detecting deception from body or face. Journal of Personality and Social Psychology, 29(3), 288–298. Ekman, P., Friesen, W. V., & O’Sullivan, M. (1988). Smiles when lying. Journal of Personality and Social Psychology, 54, 414–420. Ekman, P., & O’Sullivan, M. (1989). Hazards in detecting deceit. In D. Raskin (Ed.), Psychological methods for investigation and evidence (pp. 297–332). New York: Springer. Ekman, P., & O’Sullivan, M. (1991a). Facial expression: Methods, means and moues [sic]. In R. S. Feldman & B. Rime (Eds.), Fundamentals of nonverbal behavior (pp. 163–199). Cambridge: Cambridge University Press. Ekman, P., & O’Sullivan, M. (1991b). Who can catch a liar? American Psychologist, 46, 113–120. Ekman, P., & O’Sullivan, M. (2006). From flawed self-assessment to blatant whoppers: The utility of voluntary and involuntary behavior in detecting deception. Behavioral Sciences and the Law, 24, 673–686. Ekman, P., O’Sullivan, M., & Frank, M. (1999). A few can catch a liar. Psychological Science, 10, 263–266. Ekman, P., O’Sullivan, M., Friesen, W. V., & Scherer, K. R. (1991). Face, voice and body in detecting deception. Journal of Nonverbal Behavior, 15, 125–135. Eliot, L. B. (1993). Prefilter your neurons. AI Expert, 8(7), 9. Ellen, D. (2005). Scientific examination of documents: Methods and techniques (3rd ed.). Boca Raton, FL: CRC Press. Ellis, H. D., Davies, G. M., & Shepherd, J. W. (1986). Introduction: Processes underlying face recognition. In R. Bruyer (Ed.), The neuropsychology of face perception and facial expression (pp. 1–38). Hillsdale, NJ: Lawrence Erlbaum Associates. Ellis, H. D., Shepherd, J. W., & Davies G. M. (1975). An investigation of the use of the Photofit technique for recalling faces. British Journal of Psychology, 66(1), 29–37. Elsayed, T., & Oard, D. W. (2006). Modeling identity in archival collections of email: A prelim- inary study. At the Third Conference on Email and Anti-Spam, CEAS 2006, Mountain View, CA, July 27–28, 2006. Elsner, M., Austerweil, J., & Charniak, E. (2007). A unified local and global model for dis- course coherence. In C. L. Sidner, T. Schultz, M. Stone, & Ch. X. Zhai (Eds.), Human Language Technology conference of the North American Chapter of the Association of Computational Linguistics, proceedings (HLT-NAACL 2007), Rochester, NY, April 22–27, 2007. The Association for Computational Linguistics, 2007, pp. 436–443. Elsner, M., & Charniak, E. (2008). You talking to me? A corpus and algorithm for conversation disentanglement. 
In ACL 2008, Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, June 15–20, 2008. The Association for Computational Linguistics, 2008, pp. 834–842. Elson, D. K., Dames, N., & McKeown, K. R. (2010). Extracting social networks from literary fiction. In Proceedings of the 48th annual meeting of the association for computational linguistics, Uppsala, Sweden, 11–16 July 2010, pp. 138–147.

Emiroglu, I., & Akhan, M. B. (1997). Preprocessing of fingerprint images. In Proceedings of the IEE European conference on security and detection, Hertfordshire, England, pp. 147–151. Endres-Niggermeyer, B. (1998). Summarizing information. Berlin: Springer. Engel, M. (1992). Is epistemic luck compatible with knowledge? Southern Journal of Philosophy, 30, 59–75. Engelmore, R., & Morgan, T. (Eds.). (1988). Blackboard systems. Reading, MA: Addison-Wesley. EnVision (2002). Computational anatomy: An emerging discipline. EnVision, 18(3). National Partnership for Advanced Computational Infrastructure. http://www.npaci.edu/enVision/v18. 3/anatomy.html Epstein, R. (2002). Fingerprints meet Daubert: The myth of fingerprint “science” is revealed. Southern California Law Review, 75, 605–657. Eraly, A. (2004). The Mughal Throne: The Saga of India’s great emperors. London: Weidenfeld & Nicolson, 2003; London: Phoenix (of Orion Books), 2004, pbk. = 2nd edn. of Emperors of the Peacock Throne: The Saga of the Great Mughals, Penguin Books India, 1997, 2000. ERCIM News. (2007). The digital patient. Special issue, ERCIM News, 69 (April). The papers in this issue can be downloaded from the site http://www.ercim.org/publication/Ercim_News/ of the European Research Consortium for Informatics and Mathematics. ERCIM News. (2010). Andrea Esuli winner of the 2010 ERCIM Cor Baayen Award. ERCIM News, 83 (October), 11. Erickson, B., Lind, A. E., Johnson, B. C., & O’Barr, W. M. (1978). Speech style and impres- sion formation in a court setting: The effects of ‘powerful’ and ‘powerless’ speech. Journal of Experimental Social Psychology, 14, 266–279. Ericson, R. V. (1981). Making crime: A study of detective work. Toronto, ON: Butterworths. Ernst, D. R. (1998). The critical tradition in the writings of American legal history. Yale Law Journal, 102, 1019–1044. Erwin, D. H., & Droser, M. L. (1993). Elvis taxa. Palaios, 8, 623–624. Eshghi, K., & Kowalski, R. (1989). Abduction compared with negation by failure. In G. Levi & M. Martelli (Eds.), Sixth international conference on logic programming (pp. 234–254). Cambridge, MA: MIT Press. Esmaili, M., Safavi-Naini, R., Balachandran, B., & Pieprzyk, J. (1996). Case-based reasoning for intrusion detection. At the IEEE twelfth annual computer security applications conference, pp. 214–223. Espar, T., & Mora, E. (1992). L’Expertise linguistique dans le procès pénal: langage et identité du sujet. International Journal for the Semiotics of Law, 5(13), 17–37. Esparza, J., & Bruns, G. (1994). Trapping mutual exclusion in the box calculus. (LFCS Report Series, ECS-LFCS-94-295.) Edinburgh, Scotland: LFCS, Department of Computer Science, University of Edinburgh. Espinosa-Duró, V. (2002). Minutiae detection algorithm for fingerprint recognition. IEEE AESS Systems Magazine, 2002, 264–266. The Aerospace & Electronic Systems Society of the Institute of Electrical and Electronics Engineers. Esposito, A., Bratanic,´ M., Keller, E., & Marinaro, M. (2007). Fundamentals of verbal and nonver- bal communication and the biometric issue. (NATO Security Through Science Series: Human and Societal Dynamics, 18.) Amsterdam: IOS Press. Esuli, A. (2009a). PP-Index: Using permutation prefixes for efficient and scalable approximate similarity search. In Proceedings of the seventh workshop on Large-Scale Distributed Systems for information retrieval (LSDS-IR’09), Boston, MA, 2009, pp. 17–24. http://www.esuli.it/fp- content/attachs/publications/LSDS-IR09.pdf Esuli, A. (2009b). 
MiPai: using the PP-Index to build an efficient and scalable similarity search system. In Proceedings of the second International Workshop on Similarity Search and Applications (SISAP’09), Prague, 2009, pp. 146–148. Esuli, A. (2010). PP-Index: Using permutation prefixes for efficient and scalable similarity search. In Proceedings of the eighteenth Italian Symposium on Advanced Database Systems (SEBD 2010), Rimini, Italy, 2010, pp. 318–325.

Esuli, A., & Sebastiani, F. (2010). Sentiment quantification. In H. Chen (Ed.), AI and opinion mining, part 2, under the rubric Trends & Controversies. IEEE Intelligent Systems, 25(4), July/August 2010, 72–75. Esuli, E., Fagni, T., & Sebastiani, F. (2008). Boosting multi-label hierarchical text categorization. Information Retrieval, 11(4), 287–313. Evangelista, P. F., Embrechts, M. J., & Szymanski, B. K. (2006). Taming the curse of dimension- ality in kernels and novelty detection. In A. Abraham, B. Baets, M. Koppen, & B. Nickolay (Eds.), Applied soft computing technologies: The challenge of complexity. Berlin: Springer. Evett, I. W. (1993). Establishing the evidential value of a small quantity of material found at a crime scene. Journal of the Forensic Science Society, 33(2), 83–86. Evett, I. W., & Williams, R. L. (1996). A review of the sixteen points fingerprint standard in England and Wales. Journal of Forensic Identification, 46, 49–73. Expert. (1985). EXPERT: A guide to forensic engineering and service as an expert witness (47 pp). Silver Springs, MD: The Association of Soil and Foundation Engineers (47 pp.). Faegri, K., & Iversen, J. (1989). Textbook of pollen analysis. Fourth Edition by K. Faegri, P. E. Kaland, & K. Krzywinski. New York: Wiley. First edition, Copenhagen: Munksgaars, 1950. Fahlman, S. E. (1989). Faster-learning variations on back-propagation: An empirical study. In D. Touretzky, G. Hinton, & T. Sejnowski (Eds.), Proceedings of the 1988 connectionist models summer school (pp. 38–51). San Mateo, CA: Morgan Kaufmann. Fakher-Eldeen, F., Kuflik, T., Nissan, E., Puni, G., Salfati, R., Shaul, Y., et al. (1993). Interpretation of imputed behaviour in ALIBI (1 to 3) and SKILL. Informatica e Diritto (Florence), Year 19, 2nd Series, 2(1/2), 213–242. Falkenhainer, B., & Forbus, K. (1991). Compositional modeling: finding the right model for the job. Artificial Intelligence, 51, 95–143. Fan, G., Huang, H., & Jin, Sh. (2008). An extended contract net protocol based on the personal assistant. In ISECS international colloquium on computing, communication, control, and man- agement, 2008. CCCM ’08. Guangzhou, China, 3–4 August 2008. Los Alamitos, CA: IEEE, Vol. 2, pp. 603–607. Farber, P. L. (1977). The development of taxidermy and the history of ornithology. Isis, 68, 550–566. Farid, H. (2008, June). Digital image forensics. Scientific American, 42–47. Farina, A., Kovács-Vajna, Z. M., & Leone, A. (1999). Fingerprint minutiae extraction from skeletonized binary images. Pattern Recognition, 32(5), 877–889. Farley, A. M., & Freeman, K. (1995). Burden of proof in legal argumentation. In Proceedings of the fifth international conference on artificial intelligence and law. New York: ACM Press, pp. 156–164. Farook, D. Y., & Nissan, E. (1998). Temporal structure and enablement representation for mutual wills: A Petri-net approach. In A. A. Martino & E. Nissan (Eds.), Formal models of legal time, special issue, Information and Communications Technology Law, 7(3), 243–267. Farrington, D., Mackenzie, D. L., Sherman, L., & Welsh, B. C. (Eds.). (2006). Evidence-based crime prevention. London: Routledge. Farzindar, A., & Lapalme, G. (2004). Legal texts summarization by exploration of the thematic structures and argumentative roles. In Text summarization branches out conference held in con- junction with the association for computational linguistics 2004, Barcelona, Spain, July 2004. http://www.iro.umontreal.ca/~farzinda/FarzindarAXL04/pdf Fasel, I. R., Bartlett, M. S., & Movellan, J. R. (2002). 
A comparison of Gabor filter methods for automatic detection of facial landmarks. In Proceedings of the fifth international conference on automatic face and gesture recognition, Washington, DC, May 2002, pp. 242–246. Faught, W. S. (1975). Affect as motivation for cognitive and conative processes. In Proceedings of the fourth international joint conference on artificial intelligence, Tbilisi, Georgia, USSR, pp. 893–899.

Faught, W. S. (1978). Conversational action patterns in dialogs. In D. A. Waterman & F. Hayes-Roth (Eds.), Pattern-directed inference systems (pp. 383–397). Orlando, FL: Academic. Faulk, M. (1994). Basic forensic psychiatry (2nd ed.). Oxford: Blackwell Scientific. Feeney, F., Dill, F., & Weir, A. (1983). Arrests without conviction: How often they occur and why. Washington, DC: Government Printing Office. Feinbert, S., Blascovich, J. J., Cacioppo, J. T., Davidson, R. J., Ekman, P., et al. (2002, October). The polygraph and lie detection. National Research Council. Washington, DC: National Academy of Sciences. Feldman, R., Fresko, M., Goldenberg, J., Netzer, O., & Ungar, L. (2010). Analyzing product comparisons on discussion boards. In N. Dershowitz & E. Nissan (Eds.), Language, culture, computation: Essays in honour of Yaacov Choueka (2 vols.), Vol. 1: Theory, techniques, and applications to e-science, law, narratives, information retrieval, and the cultural heritage (in press). Berlin: Springer. Feldman, R., & Sanger, J. (2007). The text mining handbook: Advanced approaches in analyzing unstructured data. Cambridge: Cambridge University Press. Fellbaum, C. (1998). WordNet: An electronic lexical database (Language, Speech, and Communication). Cambridge, MA: The MIT Press. Felson, M. (1992). Routine activities and crime prevention: Armchair concepts and practical action. Studies on Crime and Crime Prevention, 1, 30–34. Feng, Y., & Chen, W. (2004). Brain MR image segmentation using fuzzy clustering with spatial constraints based on Markov random field theory. In G.-Z. Yang & T. Jiang (Eds.), Medical imaging and augmented reality: Proceedings of the second international workshop (MIAR 2004), Beijing, China, August 19–20, 2004 (pp. 188–195). Lecture Notes in Computer Science 3150. Berlin: Springer. Fenning, P. J., & Donnelly, L. J. (2004). Geophysical techniques for forensic investigations (pp. 11–20) (Geological Society of London Special Publication, 232). Fensel, D. (2003). Ontologies: A silver bullet for knowledge management and electronic commerce. Secaucus, NJ: Springer. Fensel, D., van Harmelen, F., Horrocks, I., McGuinness, D. L., & Patel-Schneider, P. F. (2001). OIL: An ontology infrastructure for the Semantic Web. IEEE Intelligent Systems, 16(2), 38–45. Fenton, N. E., & Neil, M. (2000). The jury observation fallacy and the use of Bayesian networks to present probabilistic legal arguments. Mathematics Today: Bulletin of the Institute of Mathematics and its Applications (IMA), 36(6), 180–187. Paper posted on the Web at http://www.agena.co.uk/resources.html Ferber, J. (1999). Multiagent systems: An introduction to distributed artificial intelligence. Reading, MA: Addison-Wesley. Fernandez, C., & Best, E. (1988). Nonsequential processes: A Petri net view (EATCS Monographs on Theoretical Computer Science, 13). Berlin: Springer. Ferrario, R., & Oltramari, A. (2004). Towards a computational ontology of mind. In Proceedings of the international conference on Formal Ontology in Information Systems (FOIS 2004), November 2004. Amsterdam: IOS Press, pp. 287–297. Ferrua, P. (2010). Il giudizio penale: fatto e valore giuridico. In P. Ferrua, F. Grifantini, G. Illuminati, & R. Orlandi (Eds.), La prova nel dibattimento penale (4th ed., in press), Turin, Italy: Giappichelli. The third edition appeared in 2007, p. 293 ff. Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL: Row Peterson.
Reissues of the same edition, Stanford, California: Stanford University Press, 1962, 1970; London: Tavistock Publications, 1962. Revised and enlarged German translation: Theorie der kognitiven Dissonanz (1978). Feu Rosa, P. V. (2000). The Electronic Judge. In Proceedings of the AISB’00 symposium on artificial intelligence & legal reasoning (at the 2000 Convention of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour), Birmingham, England, 17 April 2000, pp. 33–36. Field, D., & Raitt, F. (1996). Evidence. Edinburgh, Scotland: W. Green.

Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 89–205, 189–208. Fillmore, C. J. (1968). The case for case. In E. Bach & R. T. Harms (Eds.), Universals in linguistic theory. New York: Holt, Rinehart and Winston. Findlay, M., & Duff, P. (Eds.). (1988). The jury under attack. London: Butterworths. Findler, N. V. (Ed.). (1979). Associative networks: Representation and use of knowledge by computers. New York: Academic. Finkelstein, M. (1978). Quantitative methods in law. New York: The Free Press. Finkelstein, M. (1980). The judicial reception of multiple regression studies in race and sex discrimination cases. Columbia Law Review, 80, 737–754. Finkelstein, M. O., & Levin, B. (2003). On the probative value of evidence from a screening search. Jurimetrics Journal, 43, 265–290. Fiorelli, P. (1953–1954). La tortura giudiziaria nel diritto comune (2 vols) (“Ius nostrum”: Studi e testi pubblicati dall’Istituto di Storia del diritto italiano dell’Università di Roma, 1 & 2.). Milan: Giuffrè. Fiorenza, E. (1977). Re Cecconi: La morte assurda. (“Instant book” series.) Rome: Editore Centro dell’Umorismo Italia. Firestein, S. (2001). How the olfactory system makes sense of scents. Nature, 413, 211–218. Fischhoff, B., & Beyth, R. (1975). “I knew it would happen”: Remembered probabilities of once- future things. Organizational Behavior and Human Performance, 13, 1–16. Fisher, M., Gabbay, D., & Vila, L. (Eds.). (2005). Handbook of temporal reasoning in artifi- cial intelligence (electronic resource; Foundations of Artificial Intelligence, 1). Amsterdam: Elsevier. Fissell, M. E. (2003). Hairy women and naked truths: Gender and the politics of knowledge in Aristotle’s Masterpiece.InSexuality in early America, special issue of The William and Mary Quarterly, Third Series, 60(1), 43–74. Published by the Omohundro Institute of Early American History and Culture. http://www.jstor.org/stable/3491495 Fitts, P. M., Jones, R. E., & Milton, J. L. (1950, February). Eye movements of aircraft pilots during instrument landing approaches. Aeronautical Engineering Review,9. Fitzgerald, P. J. (1961). Voluntary and involuntary acts. In A. G. Guest (Ed.), Oxford essays in jurisprudence. Oxford: Oxford University Press. Corrected edn., Oxford: Clarendon Press, 1968. Fitzmaurice, C., & Pease, K. (1986). The psychology of judicial sentencing. Manchester: Manchester University Press. Fix, E., & Hodges, J. L. (1951). Discriminatory analysis, nonparametric discrimination consis- tency properties. Technical Report 4. Randolph Field, TX: U.S. Air Force. Flowe, H. D., Finklea, K. M., & Ebbesen, E. B. (2009a). Limitations of expert psychology testi- mony on eyewitness identification. In B. L. Cutler (Ed.), Expert testimony on the psychology of eyewitness identification (pp. 220–221). American Psychology-Law Society Series. New York: Oxford University Press. doi://10.1093/acprof:oso/9780195331974.003.003 Flowe, H. D., Mehta, A., & Ebbesen, E. B. (2009b). The role of eyewitness identification evi- dence in felony case dispositions. Leicester, England: School of Psychology, Forensic Section, University of Leicester. Draft of June 2010. http://www2.le.ac.uk/departments/psychology/ppl/ hf49/FloweMehtaEbbesenDraftJune10.pdf Flowers, M., McGuire, R., & Birnbaum, L. (1982). Adversary arguments and the logic of per- sonal attacks. Chapter 10 In W. Lehnert & M. Ringle (Eds.), Strategies for natural language processing (pp. 275–294). 
Hillsdale, NJ: Lawrence Erlbaum Associates. Flycht-Eriksson, A. (2004). Design and use of ontologies in information-providing dialogue systems. Ph.D. Thesis. Linköping Studies in Science and Technology, Vol. 875. Dissertation, no. 874. Linköping, Sweden: Department of Computer and Information Science, University of Linköping. Foresman, T. W. (Ed.). (1998). The history of geographic information systems: Perspectives from the pioneers. Upper Saddle River, NJ: Prentice Hall PTR.

Forgas, J. P., & Williams, K. D. (Eds.). (2001). Social influence: Direct and indirect processes. Lillington, NC: Psychology Press. FORident Software. (2009). HemoSpat validation. [A white paper.] FORidenti Software Technical Paper. Ottawa, ON: FORident Software. 9 August 2009. http://hemospat.com/technical_papers/ pdf/FORident%20Software%20Technical%20Paper%20-%20HemoSpat%20Validation.pdf Forrester, J. W. (1984). Gentle murder, or the adverbial Samaritan. Journal of Philosophy, 81, 193–197. Foster, D. (2001). Author unknown: On the trail of anonymous. London: Macmillan; with respect to New York: Holt, 2000, the U.K. edition is corrected, and includes as well the new Ch. 7, on current British journalism. Foster, J. C., & Liu, V. (2005). Catch me if you can. In Blackhat briefings. http://www.blackhat. com/presentations/bh-usa-05/bh-us-05-foster-liu-update.pdf (The web link did no longer seem to work in the summer of 2011; contact Blackhat for a copy) Fox, F. (1971, April). Quaker, Shaker, rabbi: Warder Cresson, the story of a Philadelphia mystic. Pennsylvania Magazine of History and Biography, 147–193. Fox, J. (1986). Knowledge, decision making and uncertainty. Chapter 3 In W. A. Gale (Ed.), Artificial intelligence and statistics (pp. 57–76). Reading, MA: Addison-Wesley. Fox, J., & Parsons, S. (1998). Arguing about beliefs and actions. In A. Hunter & S. Parsons (Eds.), Applications of uncertainty formalisms (pp. 266–302). Berlin: Springer. Fox, R., & Josephson, J. R. (1994). Software: PEIRCE-IGTT. In J. R. Josephson & S. G. Josephson (Eds.), Abductive inference: Computation, philosophy, technology (pp. 215–223). Cambridge: Cambridge University Press. Fox, S., & Leake, D. (2001). Introspective reasoning for index refinement in casebased reasoning. Journal of Experimental and Theoretical Artificial Intelligence, 13(1), 63–88. François, A. R. J., Nevatia, R., Hobbs, J., & Bolles, R. C. (2005). VERL: An ontology framework for representing and annotating video events. IEEE MultiMedia, 12(4), 76–86. Frank, O. (1978). Sampling and estimation in large social networks. Social Network, 1, 91–101. Frank, M. G., & Ekman, P. (1997). The ability to detect deceit generalizes across different types of high-stake lies. Journal of Personality and Social Psychology, 72, 1429–1439. Frank, M. G., & Ekman, P. (2003). Nonverbal detection of deception in forensic contexts: Handbook of forensic psychology. New York: Academic. Frank, P. B., Wagner, M. J., & Weil, R. L. (1994). Litigation services handbook: The role of the accountant as expert witness. 1994 cumulative supplement.NewYork:Wiley. Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. 2003. Modeling knowledge-based inferences in story comprehension. Cognitive Science, 27, 875–910. Franke, K., & Srihari, S. N. (2008). Computational forensics: An overview. In Computational forensics: Proceedings of the international workshop, Washington, DC (pp. 1–10). (Lecture Notes in Computer Science, 5158). Berlin: Springer. Freeman, J. B. (1991). Dialectics and the macrostructure of arguments. Berlin: Floris Publications. Freeman, K. (1994). Toward formalizing dialectical argumentation. Ph.D. thesis, Department of Computer Science and Information Science, University of Oregon. Freeman, L. C. (1979). Centrality in social networks: Conceptual clarification. Social Networks, 1, 215–240. Freeman, L. C. (2000a). Visualizing social networks. Journal of Social Structure, 1(1). 
This article is posted at Freeman’s website at http://moreno.ss.uci.edu/79.pdf and also at the journal’s website at http://www.cmu.edu/joss/content/articles/volume1/Freeman.html http://www.heinz.cmu.edu/project/INSNA/joss/vsn.html Freeman, L. C. (2000b). Visualizing social groups. In American Statistical Association 1999 Proceedings of the Section on Statistical Graphics, 2000, pp. 47–54. http://moreno.ss.uci.edu/80.pdf Freeman, L. C. (2004). The development of social network analysis: A study in the sociology of science. Vancouver, BC: Empirical Press. Translated into Japanese by R. Tsuji and published as . Tokyo: NTT Publishing Co., 2007. Translated into Italian by

R. Memoli and published as Lo sviluppo dell’analisi delle reti sociali. Uno studio di sociologia della scienza. Milano: Franco Angeli, 2007. Translated into Chinese by Wang Weidong and published as . Beijing: China Renmin University Press, 2008. Freeman, L. C. (2005). Graphical techniques for exploring social network data. In P. J. Carrington, J. Scott, & S. Wasserman (Eds.), Models and methods in social network analysis. Cambridge: Cambridge University Press. http://moreno.ss.uci.edu/86.pdf Freeman, L. C. (2007). Social network analysis (4 Vols.). London: Sage. Freeman, L. C. (2008). Going the wrong way on a one-way street: Centrality in physics and biology. Journal of Social Structure, 9(2). This article is posted at Freeman’s website at http://moreno.ss.uci.edu/joss.pdf and also at the journal’s website at http://www.cmu.edu/joss/ content/articles/volume9/Freeman.html Freeman, L. C. (2009). Methods of social network visualization. In R. A. Meyers (Ed.), Encyclopedia of complexity and systems science. Berlin: Springer. http://moreno.ss.uci.edu/ 89.pdf Freeman, K., & Farley, A. M. (1996). A model of argumentation and its application to legal reasoning. Artificial Intelligence and Law, 4(3/4), 157–161. Freund, M. S., & Lewis, N. S. (1995). A chemically diverse conducting polymer-based electronic nose. Proceedings of the National Academy of Sciences USA, 92, 2652–2656. Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139. Fridrich, J.,59 Soukal, D., & Lukáš, J. (2003). Detection of copy-move forgery in digital images. In Proceedings of digital forensic research workshop, August 2003. Friedkin, N. (1981). The development of structure in random networks: An analysis of the effects of increasing network density on five measures of structure. Social Networks, 3, 41–52. Friedman, R. D. (1991). Character impeachment evidence: Psycho-Bayesian analysis and proposed overhaul. UCLA Law Review, 38, 637–691. Friedman, R. D. (1997). Answering the Bayesioskeptical challenge. In R. Allen & M. Redmayne (Eds.), Bayesianism and juridical proof (pp. 276–291). London: Blackstone (With a consol- idated bibliography: pp. 354–360). Special issue, The International Journal of Evidence and Proof, 1, 253–360. Friedman, R. D., & Park, R. C. (2003). Sometimes what everybody thinks they know is true. Law and Human behavior, 27, 629–644. Frisch, A. M., & Perlis, D. (1981). A re-evaluation of story grammars. Cognitive Science, 5(1), 79–86. Frowd, C. D., Bruce, V., & Hancock, P. J. B. (2008). Changing the face of criminal identification. The Psychologist, 21, 670–672. Frowd, C. D., Bruce, V., Ross, D., McIntyre, A., & Hancock, P. J. B. (2007). An application of caricature: How to improve the recognition of facial composites. Visual Cognition, 15, 954–984. Frowd, C. D., Carson, D., Ness, H., Richardson, Morrison, L., McLanaghan, S., et al. (2005). A forensically valid comparison of facial composite systems. Psychology, Crime & Law, 11, 33–52. Frowd, C. D., Hancock, P. J. B., & Carson, D. (2004). EvoFIT: A holistic, evolutionary facial imaging technique for creating composites. ACM Transactions on Applied Psychology (TAP), 1, 1–21. Frowd, C. D., McQuiston-Surret, D., Anandaciva, S., Ireland, C. G. & Hancock, P. J. B. (2007). An evaluation of US systems for facial composite production. Ergonomics, 50, 187–198. Frowd, C. D., Pitchford, M., Bruce, V., Jackson, S., Hepton, G., Greenall, M., et al. 
(2010a). The psychology of face construction: Giving evolution a helping hand. Applied Cognitive Psychology. doi:10.1002/acp.1662.

59 This is Jessica Fridrich. Also Juri Fridrich is at the Computer Science Department of Dartmouth College in Hanover, New Hampshire. They both work in the given domain of research.

Frowd, C., Nelson, L., Skelton, F., Noyce, R., Heard, P., Henry, J., et al. (2010b). Interviewing techniques for Darwinian facial composite systems. Submitted. Fu, X., Boongoen, T., & Shen, Q. (2010). Evidence directed generation of plausible crime scenarios with identity resolution. Applied Artificial Intelligence, 24(4), 253–276. Fuhr, N. (Ed.). (2006). Advances in XML information retrieval and evaluation: 4th international workshop of the initiative for the evaluation of XML retrieval, INEX 2005, Dagstuhl Castle, Germany, November 28–30, 2005. Revised and Selected Papers. (Lecture Notes in Computer Science, Vol. 3977.) Berlin: Springer. Fulford, R. (2005). Utopian ends, murderous means. The National Post, Toronto, 8 January 2005. http://www.robertfulford.com/2005-01-08-pfaff.html Full, W. E., Ehrlich, R., & Bezdek, J. C. (1982). Fuzzy QModel: A new approach for linear unmixing. Journal of Mathematical Geology, 14, 259–270. Full, W. E., Ehrlich, R., & Klovan, J. E. (1981). Extended Qmodel [sic]: Objective definition of external end members in the analysis of mixtures. Journal of Mathematical Geology, 13, 331–344. Fung, T. H., & Kowalski, R. (1997). The IFF proof procedure for abductive logic programming. Journal of Logic Programming, 33(2), 151–165. Furnham, A. (1986). The robustness of the recency effect: Studies using legal evidence. The Journal of General Psychology, 113(4), 351–357. Furtado, V., Melo, A., Menezes, R., & Belchior, M. (2006). Using self-organization in an agent framework to model criminal activity in response to police patrol routes. In G. Sutcliffe & R. Goebel (Eds.), Proceedings of the 19th international Florida artificial intelligence research society conference (pp. 68–73). Menlo Park, CA: AAAI Press. Furtado, V., & Vasconcelos, E. (2007). Geosimulation in education: A system for teaching police resource allocation. International Journal of Artificial Intelligence in Education, 17, 57–81. Gabbay, D., & Woods, J. (2003). Agenda relevance: A study in formal pragmatics. Amsterdam: North-Holland. Gabbert, F., Memon, A., & Allan, K. (2003). Memory conformity: Can eyewitnesses influence each other’s memories for an event? Applied Cognitive Psychology, 17, 533–544. Gabbert, F., Memon, A., Allan, K., & Wright, D. (2004). Say it to my face: Examining the effects of socially encountered misinformation. Legal and Criminological Psychology, 9, 215–227. Gabbert, F., Memon, A., & Wright, D. B. (2006). Memory conformity: Disentangling the steps towards influence during a discussion. Psychonomic Bulletin and Review, 13, 480–485. Gaffney, C., & Gater, J. (2003). Revealing the buried past: Geophysics for archaeologists. Gloucester: Tempus Publishing. Gaines, D. M. (1994). Juror simulation. BSc Project Report, Computer Science Department, Worcester Polytechnic Institute. Gaines, D. M., Brown, D. C., & Doyle, J. K. (1996). A computer simulation model of juror decision making. Expert Systems With Applications, 11(1), 13–28. Galasinski, D. (1996). Pretending to cooperate: How speakers hide evasive actions. Argumentation, 10, 375–388. Galindo, F. (1996). Sistemas de ayuda a la decisión jurídica. ¿Son posibles? Actas (Volumen I), II Congreso Internacional de Informática y Derecho, Mérida, Spain, April 1995 (Mérida: UNED, Centro Regional de Extremadura). Published as: Informática y Derecho, Vol. 9/10/11, Part 1, 1996, pp. 631–650. Galitsky, B. (1998). A formal scenario and metalanguage support means to reason about it. Technical report 98-28. New Brunswick, NJ: DIMACS.
ftp://dimacs.rutgers.edu/pub/dimacs/TechnicalReports/TechReports/1998/98-28.ps.gz Galitsky, B. (1999). Narrative generation for the control of buyer’s impression. Technical report 98-28. New Brunswick, NJ: DIMACS. Gallagher, T. (1998). Lost and found. Living Bird, Spring 1998. http://www.birds.cornell.edu/Publications/livingbird/spring98/OwletSp98.htm Galton, A. P. (1987). Temporal logics and their applications. London: Academic.

Galton, A. P. (1990). A critical examination of Allen’s theory of action and time. Artificial Intelligence, 42, 159–188. Galton, A. P. (2008). Temporal logic. Stanford Encyclopedia of Philosophy (entry revised from an original version of 1999). http://plato.stanford.edu/entries/logic-temporal/ Gambini, R. (1985). Il plea bargaining tra ‘common law’ e ‘civil law’. Milan: Giuffrè. Gambini, R. (1997). Inutilizzabilità (dir. proc. pen.). In Enciclopedia del diritto, Aggiornamento (Vol. 1). Milan: Giuffrè. Gan, H. (1994). Understanding a story with causal relationships. In Z. W. Ras´ & M. Zemankova (Eds.), Methodologies for intelligent systems: 8th international symposium, ISMIS’94. Charlotte, North Carolina, October 16–19, 1994 (pp. 265–274). Lecture Notes in Artificial Intelligence, Vol. 869. Berlin: Springer. Gangemi A., Sagri, M. T., & Tiscornia, D. (2005). A constructive framework for legal ontologies. In V. R. Benajmins, P. Casanovas, J. Breuker, & A. Gangemi (Eds.), Proceedings of law and the semantic web [2005]: Legal ontologies, methodologies, legal information retrieval, and applications (pp. 97–124). (Lecture Notes in Computer Science, Vol. 3369). Berlin: Springer. Garani, G. (2004). A temporal database model using nested relations. Ph.D. Dissertation, Computer Science and Information Systems Engineering. London: Birkbeck College, University of London. Garani, G. (2008). Nest and unnest operators in nested relations. Data Science Journal, 7, 57–64. Garcia-Rojas, A., Gutiérrez, M., & Thalmann, D. (2008a, July). Visual creation of inhabited 3D environments: An ontology-based approach. The Visual Computer, 24(7–9), 719–726. Also, Report VRLAB-ARTICLE-2008-062. Lausanne: Virtual Reality Lab at the Swiss Federal Institute of Technology. Garcia-Rojas, A., Gutiérrez, M., & Thalmann, D. (2008b). Simulation of individual spontaneous reactive behavior. In Proceedings of the seventh international conference on Autonomous Agents and Multiagent Systems (AAMAS),60 Estoril, Portugal, May 12–16, 2008, pp. 143–150. Also, Report VRLAB-CONF-2008-150. Lausanne: Virtual Reality Lab at the Swiss Federal Institute of Technology. http://infoscience.epfl.ch/getfile.py?recid=125120&mode=best Gärdenfors, P. (1988). Knowledge in flux: Modeling the dynamics of epistemic states. Cambridge, MA: MIT Press. Gärdenfors, P. (1992). Belief revision: An introduction. In P. Gärdenfors (Ed.), Belief revision (pp. 1–28). Cambridge Tracts in Theoretical Computer Science, Vol. 29. Cambridge, UK: Cambridge University Press. Gardner, A. von der Lieth. (1987). An artificial intelligence approach to legal reasoning. Cambridge, MA: The MIT Press. Gardner, J. W. (1991). Detection of vapours and odours from a multisensor array using pattern recognition: Principal component and cluster analysis. Sensors & Actuators, 4, 109–115. Gardner, J. W., & Bartlett, P. N. (1999). Electronic noses: Principles and applications. Oxford: Oxford University Press. Gardner, J. W., & Yinon, J. (Eds.). (2004). Proceedings of the NATO advanced research workshop on electronic noses and sensors for the detection of explosives. (NATO Science Series 2003, Vol. 159). Dordrecht: Kluwer. Gardner, R. M., & Bevel, T. (2009). Practical crime scene analysis and reconstruction. With contri- butions by M. Noedel, S. A. Wagner, & I. Dalley. (CRC Series in Practical Aspects of Criminal and Forensic Investigations.) Boca Raton, FL: CRC Press. Garfield, B. (2007). The Meinertzhagen mystery: The life and legend of a colossal fraud. Washington, DC: Potomac Books. 
Garnham, A. (1983). What’s wrong with story grammars? Cognition, 15, 145–154. Garrett, R. E. (1987). The overlooked business aspects of forensic engineering. Forensic Engineering, 1(1), 17–19.

60 http://www.ifaamas.org

Garrioch, L., & Brimacombe, E. (2001). Lineup administrators’ expectations: Their impact on eyewitness confidence. Law & Human Behavior, 25, 299–315. Garry, M., Manning, C., Loftus, E. F., & Sherman, S. J. (1996). Imagination inflation: Imagining a childhood event inflates confidence that it occurred. Psychonomic Bulletin and Review, 3, 208–214. Posted on the Web at: http://faculty.washington.edu/eloftus/Articles/Imagine.htm Garven, S., Wood, J., Malpass, R., & Shaw, III, J. (1998). More than suggestion: The effect of interviewing techniques from the McMartin Preschool case. Journal of Applied Psychology, 83, 347–359. Gastwirth, J. L., & Miao, W. (2009). Formal statistical analysis of the data in disparate impact cases provides sounder inferences than the U.S. government’s ‘four-fifths’ rule: An examina- tion of the statistical evidence in Ricci v. DeStefano. Law, Probability and Risk, 8, 171–191. doi:10.1093/lpr/mgp017 Gauthier, T. D. (2002). Statistical methods. Chapter 12 In B. L. Murphy & R. D. Morrison (Eds.), Introduction to environmental forensics (pp. 391–428). San Diego, CA & London: Academic. Gearey, A. (2005). Law and narrative. In D. Herman, M. Jahn, & M.-L. Ryan (Eds.), Routledge encyclopedia of narrative theory (pp. 271–275). London: Routledge, 2005 (hbk), 2008 (pbk, avail. Sept. 2007). Geddes, L. (2010, August 21). What are the chances? ([pre-headline:] Special report: DNA evi- dence; [subheadline:] In the second part of our investigation, Linda Geddes shows that the odds attached to a piece of DNA evidence can vary enormously). New Scientist, 207(2274), 8–10. With an inset: ‘When lawyers question DNA’, on p. 9. Geiger, A., Nissan, E., & Stollman, A. (2001). The Jama legal narrative. Part I: The JAMA model and narrative interpretation patterns. Information & Communications Technology Law, 10(1), 21–37. [Part II is Nissan (2001c).] Gelbart, D., & Smith, J. C. (1993). FLEXICON: An evaluation of a statistical model adapted to intelligent text management. In the Proceedings of the fourth international Conference on Artificial Intelligence and Law (ICAIL’93), Amsterdam. New York: ACM Press, pp. 142–151. Gelfand, M., Mironov, A., & Pevzner, P. (1996). Gene recognition via splices sequence alignment. Proceedings of the National Academy of Sciences USA, 93, 9061–9066. Gemmell, J., Lueder, & Bell, G. (2003). The MyLifeBits lifetime store. In Proceedings of the 2003 ACM SIGMM workshop experiential telepresence (ETP). New York: ACM Press, pp. 80–83. Gemperline, P. J. (1984). A priori estimates of the elution [sic] profiles of pure components in overlapped liquid chromatography peaks using target transformation factor analysis. Journal of Chemical Information and Computer Sciences, 24, 206–212. Geng, L., & Chan, C. W. (2001). An algorithm for automatic generation of a case base from a database using similarity-based rough approximation. In A. Abraham & M. Köppen (Eds.), Hybrid information systems: Proceedings of the first international workshop on Hybrid Intelligent Systems (HIS 2001), Adelaide, Australia, December 11–12, 2001 (pp. 571–582). Advances in Soft Computing Series. Heidelberg: Physica-Verlag (of Springer-Verlag), 2002. Genrich, H., & Lautenbach, K. (1979). Predicate/transitions nets. In W. Brauer (Ed.), Net theory and application (Lecture Notes in Computer Science, 84). Berlin: Springer. Geradts, Z. (2005). Use of computers in forensic science. Chapter 26 In S. H. James & J. J. 
Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Gerard, S., & Sansonnet, J.-P. (2000). A spatio-temporal model for the representation of situations described in narrative texts. In Proceedings of NLP 2000, pp. 176–184. Gerasimov, M. (1971). The face finder. London: Hutchinson. Gerasimov, M. M. (about) (2007). Mikhail Gerasimov. Biographical Wikipedia [English] http://en.wikipedia.org/wiki/Mikhail_Gerasimov. Last modified on 24 April 2007, when accessed in August 2007. A Russian entry (Михаил Михайлович Герасимов) is also available under http://ru.wikipedia.org/wiki/ Gerkey, B., & Mataric, M. (2004). A formal analysis and taxonomy of task allocation in multi-robot systems. International Journal of Robotics Research, 23(9), 939–954. Gerritsen, C., Klein, M. C. A., & Bosse, T. (2009). Agent-based simulation of social learning in criminology. In Proceedings of the [First] International Conference on Agents and Artificial
Intelligence, ICAART 2009, Porto, Portugal, 19–21 January 2009, area 1: Artificial intelligence, paper 8. Gervás, P., Díaz-Agudo, B., Peinado, F., & Hervás, R. (2005). Story Plot Generation based on CBR. In the AI-2004 special issue, Knowledge-Based Systems, 18(4/5), 2005: 235–242. Previously in the Proceedings of the 24th Annual International Conference of the British Computer Society’s Specialist Group on Artificial Intelligence (SGAI). Applications and Innovations in Intelligent Systems, Cambridge, England, 13–15 December 2004, (WICS Series) Berlin: Springer, Vol. 12, pp. 36–46. Gervás, P., Lönneker-Rodman, B., Meister, J. C., & Peinado, F. (2006). Narrative models: Narratology meets artificial intelligence. In R. Basili & A. Lenci (Eds.), Proceedings of satellite workshop: Toward computational models of literary analysis.AttheFifth International Conference on Language Resources and Evaluation, Genoa, Italy, 22 May 2006, pp. 44–51. Ghosh, A., Wong, L., Di Crescenzo, G., & Talpade, R. (2005). InFilter: Predictive ingress fil- tering to detect spoofed IP traffic. At the Second International Workshop on Security in Distributed Computing Systems (SDCS),in:Proceediongs of the 25th International Conference on Distributed Computing Systems Workshops (ICDCS 2005 Workshops), 6–10 June 2005, Columbus, Ohio. IEEE Computer Society, pp. 99–106. Gibbons, B., Busch J., & Bradac J. (1991). Powerful versus powerless language: Consequences for persuasion, impression formation, and cognitive response. Journal of Language and Social Psychology, 10(2), 115–133. Gibbons, J. (1994). Language and the law. (Language in Social Life Series). London: Longman. Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting and knowing (pp. 67–82). Hillsdale, NJ: Erlbaum. Gibson, J. J. (1979). The ecological approach to visual perception. Hillsdale, NJ: Erlbaum. Gibson, S. J., Solomon, C. J., Maylin, M. I. S., & Clark, C. (2009). New methodology in facial composite construction: From theory to practice. International Journal of Electronic Security and Digital Forensics, 2, 156–168. Gigerenzer, G., & Selten, R. (Eds.). (2001). Bounded rationality: The adaptive toolbox. Cambridge, MA: The MIT Press. Gilbert, D. T., & Malone, D. S. (1995). The correspondence bias. Psychological Bulletin, 117, 21–38. Gilbert, M. (2002). Informal logic, argumentation theory & artificial intelligence: Introduction. (Special issue.) Informal Logic, 22, 191–194. Gilbert, M., Grasso, F., Groarke, L., Gurr, C., & Gerlofs, J.-M. (2003). “The Persuasion Machine”: Argumentation and computational linguistics. Chapter 5 In C. Reed & T. Norman (Eds.), Argumentation machines: New frontiers in argument and computation (pp. 121–174). Dordrecht: Kluwer. Gilbreth, F. B., & Gilbreth, L. M. (1917). Applied motion study. New York: Sturgis and Walton. Gillies, D. (2004). Probability in artificial intelligence. Chapter 21 In L. Floridi (Ed.), The blackwell guide to the philosophy and computing and information (pp. 276–288). (Blackwell Philosophy Guides, 14). Oxford & Malden, MA: Blackwell. Gilman, S. L. (1975). “Das-ist-der-Teu-fel-si-cher-lich”: The Image of the Black on the Viennese Stage from Schikaneder to Grillparzer. In Festschrift for Heinz Politzer (pp. 78–106). Tübingen, Germany: Nimemeyer. Reprinted in Gilman (1982a). Gilman, S. L. (1982a). On blackness without blacks: Essays on the image of the black in Germany. (Yale Afro-American Studies). Boston: G. K. Hall. Gilman, S. L. (1982b). 
Seeing the insane: A cultural history of psychiatric illustration. New York: Wiley Interscience. Republished as a Wiley Paperback, 1985. Gilman, S. L. (1984). Jews and mental illness: Medical metaphors, anti-Semitism and the Jewish response. Journal of the History of the Behavioral Sciences, 20, 150–159. Reprinted in his Disease and Representation: Images of Illness from Madness to AIDS. Ithaca, NY: Cornell University Press. (Also in Italian, Bologna: Il Mulino, 1993.)

Gilman, S. L. (1985). Difference and pathology: Stereotypes of sexuality, race, and madness. Ithaca, NY: Cornell University Press. Gilman, S. L. (1986a). Jewish self-hatred: Anti-semitism and the hidden language of the jews. Baltimore, MD: The Johns Hopkins University Press. Paperback edn., 1990. Gilman, S. L. (1986b). Black sexuality and modern consciousness. In R. Grimm & J. Hermand (Eds.), Blacks and German culture (pp. 35–53). Madison, WI: University of Wisconsin Press. Reprinted in Gilman, S. L. (1988). Disease and representation: Images of illness from mad- ness to AIDS. Ithaca, NY: Cornell University Press; Paperback edition, 1988; Second edition, 1991; Second paperback edition, 1991; Italian translation, Bologna: Il Mulino, 1993; Japanese translation, Tokyo: Arino Shobo, 1996. Gilman, S. L. (1991). The Jew’s body. New York: Routledge. Gilman, S. L. (1993a). The case of : Medicine and identity at the Fin de Siècle. Baltimore, MD: The Johns Hopkins University Press. Paperback, 1994. Gilman, S. L. (1993b). Mark Twain and the diseases of the Jews. American Literature, 65, 95–116. Also In B. Cheyette (Ed.), Between “Race” and culture: Representations of the Jew in English and American literature. Stanford, CA: Stanford University Press, 1996, pp. 27–43. Also in: M. Moon & C. N. Davidson (Eds.), Subjects and Citizens: Nation, Race, and Gender from “Oroonoko” to Anita Hill. Durham, NC: Duke University Press, pp. 271–292. Gilman, S. L. (1994a). Psychoanalysis and anti-semitism: Tainted greatness in a professional con- text. In N. Harrowitz (Ed.), Tainted greatness: Anti-semitism, prejudice, and cultural heroes (pp. 93–108). Philadelphia, PA: Temple University Press. Gilman, S. L. (1994b). The Jewish nose: Are Jews white? or the History of the nose job. In L. J. Silberstein & R. L. Cohn (Eds.), The other in Jewish thought and history: Constructions of Jewish culture and identity (pp. 364–401). New York: New York University Press. Gilman, S. L. (1995). Otto Weininger and Sigmund Freud: Race and gender in the shaping of psychoanalysis. In N. Harrowitz & B. Hyams (Eds.), Jews and gender: Responses to Otto Weininger (pp. 103–121). Philadelphia, PA: Temple University Press. Gilman, S. L. (1996a). Smart Jews: The construction of the idea of Jewish superior intelligence at the other end of the bell curve. Lincoln: The University of Nebraska Press. Paperback edn., 1997. Gilman, S. L. (1996b). Smart Jews in fin-de-siècle Vienna: ‘Hybrids’ and the anxiety about Jewish superior intelligence – Hofmannsthal and Wittgenstein. /Modernity, 3, 45–58. Reprinted in: R. Block & P. Fenves (Eds.), The spirit of Poesy: Essays on Jewish and German literature and thought in honor of Géza von Molnar. Evanston, IL: Northwestern University Press, 2000, pp. 193–207. Gilman, S. L. (1996c). The Bell Curve, intelligence, and the virtuous Jews. In J. L. Kincheloe, S. R. Steinberg, & A. D. Gresson III (Eds.), Measured lies: The bell curve examined (pp. 265–290). New York: St. Martin’s Press. Reprinted in Discourse, 19(1), 58–80 (1996). Gilman, S. L. (1999). By a nose: On the construction of ‘foreign bodies’. Social Epistemology, 13, 49–58. Gilman, S. L. (Ed.). (2006). Race and contemporary medicine: Biological facts and fictions. Special issue. Patterns of Prejudice, 40. Also published as a book, London: Routledge, 2007. Gilmore, G. (1979). Formalism and the law of negotiable instruments. Creighton Law Review, 13, 441–461. Also, New Haven, CT: Yale Law School, Faculty Scholarship Series, Paper 2564. 
http://digitalcommons.law.yale.edu/fss_papers/2564 or: http://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=3612&context=fss_papers&sei-redir=1#search=“Gilmore+“Formalism+and+the+law+of+negotiable”” Gimblett, H. R. (2002). Integrating geographic information systems and agent-based modelling techniques for simulating social and ecological processes. Oxford: Oxford University Press. Giovagnoli, A., & Pons, S. (Eds.). (2003). L’Italia repubblicana nella crisi degli anni Settanta: Tra guerra fredda e distensione. Soveria Mannelli (prov. Cosenza, in Calabria, Italy): Rubbettino. Gladwell, M. (2005). Blink: The power of thinking without thinking. New York: Little, Brown and Company.

Glass, R. T. (2005). Forensic odontology. Chapter 6 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Goad, W., & Kanehisa, M. (1982). Pattern recognition in nucleic acid sequences: A general method for finding local homologies and symmetries. Nucleic Acids Research, 10, 247–263. Göbel, J., Hektor, J., & Holz, T. (2006). Advanced honeypot-based intrusion detection. In Login: (sic), December 2006, pp. 17–25. http://www.usenix.org Goble, L. (1999). Deontic logic with relevance. In P. McNamara & H. Prakken (Eds.), Norms, logics and information systems (pp. 331–346). Amsterdam: ISO Press. Goel, A., Feng, W.-C., Maier, D., Feng, W.-C., & Walpole, J. (2005). Forensix: A Robust, high- performance reconstruction system. At the Second International Workshop on Security in Distributed Computing Systems (SDCS),In:Proceedings of the 25th International Conference on Distributed Computing Systems Workshops (ICDCS 2005 Workshops), 6–10 June 2005, Columbus, OH. IEEE Computer Society, pp. 155–162. Golan, T. (2004). Laws of men and laws of nature: The history of scientific expert testimony in England and America. Cambridge, MA: Harvard University Press. Goldberg, H. G., & Wong, R. W. H. (1998). Restructuring transactional data for link analysis in the FinCEN AI system. In D. Jensen & H. Goldberg (Eds.), Artificial intelligence and link analysis. Papers from the AAAI Fall Symposium, Orlando, FL. Goldberg, M., Hayvanovych, M., Hoonlor, A., Kelley, S., Magdon-Ismail, M., Mertsalov, K., et al. (2008). Discovery, analysis and monitoring of hidden social networks and their evolution. At the IEEE Homeland Security Technologies Conference, Boston, MA, May 12–13, 2008, pp. 1–6. Golden, R. M., & Rumelhart, D. E. (1993). A parallel distributed processing model of story comprehension and recall. Discourse Processes, 16, 203–237. Goldfarb, C. F. (1990). The SGML handbook, edited and introduced by Y. Rubinsky. Oxford: Clarendon Press, 1990, repr. 2000. Goldin, H. E. (1952). Hebrew criminal law and procedure. New York: Twayne. Goldman, A. I. (1986). Epistemology and cognition. Cambridge, MA: Harvard University Press. Goldman, A. I. (1987a). Foundations of social epistemics. Synthese, 73(1), 109–144. Goldman, A. I. (1987b). The cognitive and social sides of epistemology. In A. Fine & P. Machamer (Eds.), PSA 1986 (Vol. 2, pp. 295–311). East Lansing, MI: Philosophy of Science Association. Goldman, A. I. (1991). Epistemic paternalism: Communication control in law and society. The Journal of Philosophy, 88(3), 113–131. Goldman, A. I. (1992). Liaisons: Philosophy meets the cognitive and social sciences. Cambridge, MA: The MIT Press. Goldman, R. P. (1990). A probabilistic approach to language understanding. Technical Report CS-90-34, Computer Science Department. Providence, RI: Brown University. Goldman, S., Dyer, M. G., & Flowers, M. (1988). Representing contractual situations. In C. Walter (Ed.), Computer power and legal language: The use of computational linguistics, artificial intelligence, and expert systems in law (pp. 99–118). New York: Quorum Books. Goldman, S. R., Dyer, M. G., & Flowers, M. (1985). Learning to understand contractual situations. In Proceedings of the ninth International Joint Conference on Artificial Intelligence (IJCAI’85), Los Angeles, CA, 18–24 August 1985. San Mateo, CA: Morgan Kaufmann Publ. http://ijcai. org/search.php Goldman, S. R., Dyer, M. G., & Flowers, M. (1987). 
Precedent-based legal reasoning and knowledge acquisition in contract law. In Proceedings of the first International Conference on Artificial Intelligence and Law (ICAIL’87), Boston, MA, pp. 210–221. Goldman, Sh. (2004). God’s sacred tongue: Hebrew & the American imagination. Chapel Hill, NC: University of North Carolina Press. Goldsmith, R. W. (1986). The applicability of an evidentiary value model to judicial and prosecutorial decision making. In A. A. Martino, F. Socci Natali, & S. Binazzi (Eds.), Automated analysis of legal texts, logic, informatics, law (pp. 229–245). Amsterdam: North-Holland.

Goldsmith, R. W. (1989). Potentialities for practical, instructional and scientific purposes of com- puter aids to evaluating judicial evidence in terms of an evidentiary value model. In A. A. Martino (Ed.), Pre-proceedings of the third international conference on “Logica, Informatica, Diritto: Legal Expert Systems”, Florence, 1989 (2 vols. + Appendix) (Vol. 1, pp. 317–329). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche. Goldstein, A. G., & Chance, J. E. (1981). Laboratory studies of face recognition. In G. Davies, H. Ellis, & J. Shepherd (Eds.), Perceiving and remembering faces (pp. 81–104). New York: Academic. Goldstein, H. (1990). Problem-oriented policing. New York: McGraw-Hill. Gómez-Gauchía, H., & Peinado, F. (2006). Automatic customization of non-player characters using players temperament. In S. Göbel, R. Malkewitz, & I. Iurgel (Eds.), Proceedings of the third international conference on Technologies for Interactive Digital Storytelling and Entertainment (TIDSE), Darmstadt, Germany, 4–6 December 2006 (pp. 241–252). (Lecture Notes in Computer Science, 4326). Berlin: Springer. Gómez-Pérez, A., Fernández-López, M., & Corcho, O. (2004). Ontological engineering: With examples from the areas of knowledge management, e-commerce and the semantic web.Berlin: Springer. Gonçalves, T., & Quaresma, P. (2003). A preliminary approach to the multilabel classification problem of Portuguese juridical documents. In Proceedings of EPIA’03, the Eleventh Portugese Conference on Artificial Intelligence, Beja, Portugal, 4–7 December 2003. (Lecture Notes in Computer Science.) Berlin: Springer, pp. 435–444. González Ballester, M. A., Büchler, P., & Reimers, N. (2007). Combined statistical model of bone shape and biomechanical properties for evidence-based orthopaedic implant design. In The Digital Patient, special issue of ERCIM News, 69 (April), 27–28. Accessible at the web- page http://ercim-news.ercim.org/ of the European Research Consortium for Informatics and Mathematics. Good, I. J. (1960). The paradox of confirmation. The British Journal for the Philosophy of Science, 11(42), 145–149. Good, I. J. (1983). On the principle of total evidence. In his Good thinking. Minneapolis, MN: Minnesota University Press. Goode, G. C., Morris, J. R., & Wells, J. M. (1979). The application of radioactive bromine isotopes for the visualisation of latent fingerprints. Journal of Radioanalytical [and Nuclear] Chemistry, 48(1/2), 17–28. Goodrich, P. (1986). Reading the law. Oxford: Blackwell. Goodrich, P. (2005). Narrative as argument. In D. Herman, M. Jahn, & M.-L. Ryan (Eds.), Routledge encyclopedia of narrative theory (pp. 348–349). London: Routledge, 2008 (pbk). Goranson, H. T., Chu, B. T., Grüninger, M., Ivezic, N., Kulvatunyou, B., Labrou, Y., et al. (2002). Ontologies as a new cost factor in enterprise integration. In Proceedings of ICEIMT 2002, pp. 253–263. Gordon, T. F. (1995). The Pleadings Game: An exercise in computational dialectics. Artificial Intelligence and Law, 2(4), 239–292. Gordon, T. F., Prakken, H., & Walton, D. N. (2007). The Carneades model of argument and burden of proof. Artificial Intelligence, 171, 875–896. Gordon, T. F., & Walton, D. (2006). The Carneades argumentation framework: Using presump- tions and exceptions to model critical questions. At The Sixth International Workshop on Computational Models of Natural Argument, held together with ECAI’06, Riva del Garda, Italy, August 2006. Gouron, A. (1992). 
Medieval courts and towns: Examples from Southern France. Fundamina: A Journal of Legal History, 1, 30–45. Governatori, G., & Rotolo, A. (2002). A Gentzen system for reasoning with contrary-to-duty obligations: A preliminary study. In A. J. I. Jones & J. Horty (Eds.), EON’02: Sixth international workshop on deontic logic in computer science, Imperial College, London, May 2002 (pp. 97–116).

Grabherr, S., Djonov, V., Friess, A., Thali, M. J., Ranner, G., Vock, P., et al. (2006). Postmortem angiography after vascular perfusion with diesel oil and a lipophilic contrast agent. AJR: American Journal of Roentgenology, 187(5), W515–523. Grady, G., & Patil, R. S. (1987). An expert system for screening employee pension plans for the Internal Revenue Service. Proceedings of the first international conference on artificial intelligence and law (pp. 137–143). New York: ACM Press. Granhag, P. A., & Strömwall, L. A. (2004). Detection deception in forensic contexts. Cambridge: Cambridge University Press. Grant, J., Kraus, S., & Perlis, D. (2005). A logic-based model of intention formation and action for multi-agent subcontracting. Artificial Intelligence, 163(2), 163–201. Grasso, F. (2002a). Would I lie to you? Fairness and deception in rhetorical dialogues. In R. Falcone & L. Korba (Eds.), Working notes of the AAMAS 2002 workshop on “Deception, Fraud and Trust in Agent Societies”, Bologna, Italy, 15 July 2002. The article can be downloaded from http://www.csc.liv.ac.uk/~floriana/pub.html Grasso, F. (2002b). Towards computational rhetoric. Informal Logic Journal, 22(3), 225–259. Grasso, F., Cawsey, A., & Jones, R. (2000). Dialectical argumentation to solve conflicts in advice giving: A case study in the promotion of healthy nutrition. International Journal of Human- Computer Studies, 53(6), 1077–1115. Grasso, F., Rahwan, I., Reed, C., & Simari, G. R. (2010). Introducing argument & computation. Argument & Computation, 1(1), 1–5. Grasso, F., Reed, C., & Carenini, G. (Eds.). (2004). Proceedings of the fourth workshop on Computational Models of Natural Argument (CMNA IV) at ECAI 2004, Valencia, Spain. Gray, G. L., & Debreceny, R. (2006). Continuous assurance using text mining. At the 12th world continuous auditing & reporting symposium. Posted at the Rutgers Accounting Web (raw.rutgers.edu): http://raw.rutgers.edu/docs/wcars/12wcars/Continuous_ Assurance_Text_Mining.pdf Greene, J. R. (2006). The encyclopedia of police science (3rd ed.). London: Routledge. Greenwood [=Atkinson], K., Bench-Capon, T., & McBurney, P. (2003). Towards a computa- tional theory of persuasion in law. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003 (pp. 22–31). New York: ACM Press. Greer, S. (1994). Miscarriages of criminal justice reconsidered. Modern Law Review, 58, 71. Gregg, D. G., & Scott, J. E. (2008). A typology of complaints about eBay sellers. Communications of the ACM, 51(4), 69–74. Gregory, F. (1998). There is a global crime problem. International Journal of Risk, Security and Crime Prevention, 3, 133–137. Grey, T. (1983). Langdell’s orthodoxy. University of Pittsburgh Law Review, 45, 1–53. Grey, T. C. (1999). The new formalism. Stanford Law School Public Law and Legal Theory Working Paper, No. 4 (SSRN 200732). Stanford, CA: University of Stanford. Grice, H. P. (1975). Logic and conversation. In P. Cole & J. Morgan (Eds.), Syntax and semantics, Vol. 3: Speech acts (pp. 41–58). Orlando, FL: Academic. Grice, H. P. (1981). Presupposition and conversational implicature. In: P. Cole (Ed.), Radical pragmatics (pp. 183–198). New York: Academic Press. Grifantini, F. M. (1993). Inutilizzabilità. In Digesto delle Discipline Penalistiche (4th ed., Vol. 7). Turin, Italy: Utet. Grifantini, F. M. (1999). Utilizzabilità in dibattimento degli atti provenienti dalle fasi anteriori. 
In AA.VV., La prova nel dibattimento penale. Turin: Giappichelli. Griffiths, P. E. (2003). Emotions. Chapter 12 In S. P. Stich & T. A. Warfield (Eds.), The Blackwell guide to philosophy of mind (pp. 288–308). Oxford: Blackwell. Grosz, B. (1977). The representation and use of focus in dialogue understanding. Technical Note No. 151. Menlo Park, CA: Stanford Research Institute. Grosz, B., & Kraus, S. (1996). Collaborative plans for complex group action. Artificial Intelligence, 86(2), 269–357.

Grover, C., Hachey, B., Hughson, I., & Korycinski, C. (2003). Automatic summarisation of legal documents. In Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland. New York: ACM Press, pp. 243–251. Grubbs, F. E. (1969). Procedures for detecting outlying observations in samples. Technometrics, 11, 1–21. Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2), 199–220. Gruber, T. R. (1995). Towards principles for the design of ontologies used for knowledge shar- ing. International Journal of Human-Computer Studies, 43(5–6), 907–928. (Originally in N. Guarino & R. Poli (Eds.). (1993). International workshop on formal ontology, Padova, Italy. Revised August 1993. Technical report KSL-93-04, Knowledge Systems Laboratory, Stanford University, Stanford, CA. Often cited, without page-numbers, as though it appeared in N. Guarino & R. Poli (Eds.), Formal ontology in conceptual analysis and knowledge rep- resentation. Dordrecht: Kluwer. Actually published in a special issue on Formal Ontology in Conceptual Analysis and Knowledge Representation (Ed. N. Guarino & R. Poli), International Journal of Human-Computer Studies, 43(5–6), 907–928). Grugq. (2005). The art of defiling: Defeating forensic analysis. In Blackhat briefings. http://www. blackhat.com/presentations/bh-usa-05/bh-us-05-grugq.pdf (The web link did no longer seem to work in the summer of 2011; contact Blackhat for a copy). Grüninger, G., & Delaval, A. (2009). A first-order cutting process ontology for sheet metal parts. In Proceedings of FOMI 2009, pp. 22–33. Grüninger, M., & Lee, J. (2002). Ontology applications and design: Introduction. In a special issue of Communications of the ACM, 45(2), 39–41. Gu, J. (2007). Consumer rights protection on the online auction website – Situations and solutions: A Case study of EBay. BILETA 2007 Annual Conference, University of Hertfordshire, 16–17 April 2007. British and Irish Law, Education and Technology Association (BILETA). 8 pages. http://www.bileta.ac.uk/Document%20Library/1/Consumer%20Rights%20Protection% 20on%20the%20Online%20Auction%20Website%20-Situations%20and%20Solutions% 20-%20A%20Case%20Study%20of%20EBay.pdf Gudjonsson, G. H. (1992). The psychology of interrogations, confessions and testimony.NewYork: Wiley. New edition: 2003. Gudjonsson, G. H. (2001). False confessions. The Psychologist, 14, 588–591. Gudjonsson, G. H. (2006). Disputed confessions and miscarriages of justice in Britain: Expert psychological and psychiatric evidence in the Court of Appeal. The Manitoba Law Journal, 31, 489–521. Gudjonsson, G. H. (2007). Investigative interviewing. In T. Newburn, T. Williamson, & A. Wright (Eds.), Handbook of criminal investigation (pp 466–492). Cullompton: Willan Publishing. Gudjonsson, G. H., & MacKeith, J. A. C. (1982). False confessions: Psychological effects of inter- rogation. In A. Trankell (Ed.), Reconstructing the past: The role of psychologists in criminal trials (pp. 253–269). Deventer, The Netherlands: Kluwer. Gudjonsson, G. H., & Clark, N. K. (1986). Suggestibility in police interrogation: A social Psychological Model. Social Behavior, 1, 83–104. Gudjonsson, G. H., & Sigurdsson, J. F. (1994). How frequently do false confessions occur? An empirical study among prison inmates. Psychology, Crime, and Law, 1, 21–26. Gudjonsson, G. H., Sigurdsson, J. F., Asgeirsdottir, B. B., & Sigfusdottir, I. D. (2006). 
Custodial interrogation, false confession, and individual differences: A national study among Icelandic youth. Personality and Individual Differences, 41, 49–59. Gudjonsson, G. H., Sigurdsson, J. F., Asgeirsdottir, B. B., & Sigfusdottir, I. D. (2007). Custodial interrogation: What are the background factors associated with claimed false confessions? The Journal of Forensic Psychiatry and Psychology, 18, 266–275. Gudjonsson, G. H., Sigurdsson, J. F., & Einarsson, E. (2004). The role of personality in relation to confessions and denials. Psychology, Crime and Law, 10, 125–135. Guest, A. G. (1961). Logic in the law. In A. G. Guest (Ed.), Oxford essays in jurisprudence (pp. 176–197). Oxford: Oxford University Press; Corrected edn., Oxford: Clarendon Press, 1968 (in 1961 edition).

Guidotti, P. (1994). Use of precedents based on reasoning by analogy in a deductive framework. In I. Carr & A. Narayanan (Eds.), Proceedings of the fourth national conference on law, computers and artificial intelligence (pp. 56–69). Exeter, England: Exeter University Centre for Legal Interdisciplinary Development (EUCLID). Gulotta, G. (2004). Differenti tattiche persuasive. In G. Gulotta & L. Puddu (Eds.), La persuasione forense: strategie e tattiche (pp. 85–148). Milan: Giuffrè, with a consolidated bibliography on pp. 257–266. Gulotta, G., & Zappalà, A. (2001). The conflict between prosecution and defense in a child sex- ual abuse case and in an attempted homicide case. In D. M. Peterson, J. A. Barnden, & E. Nissan (Eds.), Artificial intelligence and law, special issue, Information and Communications Technology Law, 10(1), 91–108. Gunn, J., & Taylor, P. J. (1993). Forensic psychiatry: Clinical, legal and ethical issues. Oxford & Boston: Butterworth-Heinemann. Gutebier, T., Schmidt, M. A., & Rogers, S. P. (1989). An annotated bibliography on preparation, taxidermy, and collection management of vertebrates with emphasis on birds. (Special publica- tion of Carnegie Museum of Natural History, 15). Pittsburgh, PA: Carnegie Museum of Natural History. Gutés, A., Céspedes, F., & del Valle, M. (2007). Electronic tongues in flow analysis. Analytica Chimica Acta, 600, 90–96. Gutiérrez, M., García-Rojas, A., Thalmann, D., Vexo, F., Moccozet, L., Magnenat-Thalmann, N., et al. (2007). An ontology of virtual humans: Incorporating semantics into human shapes. The Visual Computer, 23(3), 207–218. Gutiérrez, M., Thalmann, D., Vexo, F., Moccozet, L., Magnenat-Thalmann, N., Mortara, M., et al. (2005). An ontology of virtual humans: Incorporating semantics into human shapes. In Proceedings of the workshop towards Semantic Virtual Environments (SVE05), Villars, Switzerland, March 2005, pp. 57–67. Posted at http://vrlab.epfl.ch/Publications of the Virtual Reality Lab at the Swiss Federal Institute of Technology in Lausanne. Güven, S., Podlaseck, M., & Pingali, G. (2005). PICASSO: Pervasive information chronicling, access, search, and sharing for organizations. In Proceedings of the IEEE 2005 Pervasive Computing conference (PerCom 2005). Los Alamitos, CA: IEEE Computer Society Press. Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3, 1157–1182. Gwadera, R., Atallah, M. J., & Szpankowski, W. (2005a). Reliable detection of episodes in event sequences. Knowledge and Information Systems, 7(4), 415–437. Gwadera, R., Atallah, M. J., & Szpankowski, W. (2005b). Markov models for identification of significant episodes. In Proceedings of the SIAM International Conference on Data Mining (SDM 2005), pp. 404–414. Gyongyi, Z., Molina, H. G., & Pedersen, J. (2004). Combating web spam with TrustRank. In Proceedings of the 30th Very Large Data Bases international conference (VLDB 2004), Toronto, Canada, August 29–September 3, 2004. An older version of that paper had appeared as a technical report of Stanford University. Haber, L., & Haber, R. N. (2003). Error rates for human fingerprint examiners. In N. K. Ratha & R. Bolle (Eds.), Automatic fingerprint recognition systems (pp. 339–360). New York: Springer. Haber, L., & Haber, R. N. (2006). Letter Re: A report of latent print examiner accuracy during comparison training exercises. Journal of Forensic Identification, 56, 493–499. Haber, L., & Haber, R. N. (2008). 
Scientific validation of fingerprint evidence under Daubert. Law, Probability and Risk, 7, 87–109. Habermas, J. (1981). The theory of communicative action. London: Beacon Press. HaCohen-Kerner, Y. (1997). The judge’s apprentice. Ph.D. Thesis (in Hebrew, with an English abstract). Ramat-Gan, Israel: Department of Mathematics and Computer Science, Bar-Ilan University. HaCohen-Kerner, Y., & Schild, U. J. (1999). The judge’s apprentice. In B. Knight & E. Nissan (Eds.), Forum on case-based reasoning, thematic section in The New Review of Applied Expert Systems, 5, 191–202.

HaCohen-Kerner, Y., & Schild, U. J. (2000). Case-based sentencing using a tree of legal concepts. In Time for AI and society: Proceedings of the AISB’00 symposium on artificial intelligence and legal reasoning, 2000. The Society for the Study of Artificial Intelligence and the Simulation of Behavior, UK, pp. 9–16. HaCohen-Kerner, Y., & Schild, U. J. (2001). Case-based sentencing using a tree of legal concepts. In D. M. Peterson, J. A. Barnden, & E. Nissan (Eds.), Artificial intelligence and law, special issue of Information and Communications Technology Law, 10(1), 125–135. HaCohen-Kerner, Y., Schild, U. J., & Zeleznikow, J. (1999). Developing computational models of discretion to build legal knowledge based systems. In Proceedings of the seventh international conference on artificial intelligence and law, ICAIL99, Oslo, 1999. New York: ACM, 1999, pp. 206–213. Hadzic, M., & Chang, E. (2008). Using coalgebra and coinduction to define ontology-based multi- agent systems. International Journal of Metadata, Semantics and Ontologies, 3(3), 197–209. Hafstad, G., Memon, A., & Logie, R (2004). The effects of post-identification feedback on children’s memory. Applied Cognitive Psychology, 18, 901–912. Hage, J. (2001). Contrary to duty obligations: A study in legal ontology. Chapter 8 In B. Verheij, A. R. Lodder, R. P. Loui, & A. Muntjewerff (Eds.), Legal knowledge and information sys- tems. JURIX 2001: The fourteenth annual international conference, University of Amsterdam, December 13–14, 2001. (Frontiers in Artificial Intelligence and Applications, 70). Tokyo: Ohmsha. Haglund, W. D. (2005). Forensic taphonomy. Chapter 8 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Hahn, U., & Mani, I. (2000). The challenges of automatic summarization. IEEE Computer, 33(11), 29–36. Hahn, U., & Schulz, S. (2004). Building a very large ontology from medical thesauri. In S. Staab & R. Studer (Eds.), Handbook on ontologies (pp. 133–150). (International Handbooks on Information Systems). Berlin: Springer. Haïm, S. (1956). Persian-English proverbs. Tehran [sic]: B. & D. Beroukhim Booksellers. Hall, R. P., & Kibler, D. F. (1985). Differing methodological perspectives in artificial intelligence research. The AI Magazine, 6(3), 166–178. Halliwell, J., Keppens, J., & Shen, Q. (2003). Linguistic Bayesian Networks for reasoning with subjective probabilities in forensic statistics. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003 (pp. 42–50). New York: ACM Press. Halpin, H., & Moore, J. D. (2010). Event extraction in a plot advice agent. In ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and 44th annual meet- ing of the Association for Computational Linguistics. http://portal.acm.org/citation.cfm?doid= 12201275.1220283 Halpin, H. R. (2003) The plots of children and machines: The statistical and symbolic semantic analysis of narratives. MSc. thesis. Edinburgh, Scotland: School of Informatics, University of Edinburgh. http://www.semanticstories.org/thesis/mscthesis.pdf Hamblin, C. L. (1970). Fallacies. London: Methuen. Hamblin, C. L. (1971). Mathematical models of dialogue. Theoria, 37, 130–155. Hamill, J. T. (2006). Analysis of layered social networks. Ph.D. dissertation. Report AFIT/DS/ENS/06 03. 
Graduate School of Engineering and Management, Air Force Institute of Technology (Air University). Available online at both these addresses: http://www.afit.edu/en/docs/ENS/dissertations/Hamill.pdf and http://www.au.af.mil/au/awc/awcgate/afit/hamill_layered_social_networks.pdf Hamilton, D. L., & Rose, T. L. (1980). Illusory correlation and the maintenance of stereotypical beliefs. Journal of Personality and Social Psychology, 39, 832–845. Hamkins, J. D., & Löwe, B. (2008). The modal logic of forcing. Transactions of the American Mathematical Society, 360, 1793–1817.

Hamlin, C. (2009). Cholera: The biography. (Biographies of Disease Series). Oxford: Oxford University Press. Hammersley, R., & Read, J. D. (1993). Voice identification by humans and computers. In S. L. Sporer, R. S. Malpass, & G. Köhnken (Eds.), Suspect identification: Psychological knowledge, problems and perspectives. Hillsdale, NJ: Lawrence Erlbaum Associates. Hammon, W. S., McMechan, G. A., & Zeng, X. (2000). Forensic GPR-finite-difference simulation of responses from buried remains. Journal of Applied Geophysics, 45, 171–186. Hamscher, W., Console, L., & de Kleer, J. (Eds.). (1992). Readings in model-based diagnosis.San Mateo, CA: Morgan-Kaufmann. Han, J., Cheng, H., Xin, D., & Yan, X. (2007). Frequent data mining: Current status and future directions. In Data Mining and Knowledge Discovery, 10th Anniversary Issue,ofData Mining and Knowledge Discovery, 15, 55–86. http://www.cs.ucsb.edu/~xyan/papers/dmkd07_ frequentpattern.pdf Han, J., & Kamber, M. (2001). Data mining: Concepts and techniques. San Francisco: Morgan Kaufmann. Hanba, J. M., & Zaragoza, M. S. (2007). Interviewer feedback in repeated interviews involving forced confabulation. Applied Cognitive Psychology, 21(4), 433–455. Hancock, P. J. B. (2000). Evolving faces from principal components. Behaviour Research Methods, Instruments and Computers, 32(2), 327–333. Hancock, P. J. B., Bruce, V., & Burton, A. M. (1998). A comparison of two computer-based face recognition systems with human perceptions of faces. Vision Research, 38, 2277–2288. Hancock P. J. B., Bruce, V., & Burton, A. M. (2000). Recognition of unfamiliar faces. Trends in Cognitive Sciences, 4(9), 330–337. Hancock, P. J. B., & Frowd, C. D. (2001). Evolutionary generation of faces. In P. J. Bentley & D. W. Corne (Eds.), Creative evolutionary systems. London: Academic. Hand, D. J., Mannila, H., & Smyth, P. (2001). Principles of data mining (Adaptive computation and machine learning). Cambridge, MA: MIT Press. Handler Miller, C. 2004. Digital storytelling: A creator’s guide to interactive entertainment. Burlington, MA: Focal Press. Hanlein, H. (1998). Studies in authorship recognition: A corpus-based approach. (European University Studies, Series 14: Anglo-Saxon Language and Literature, 352). Frankfurt/M: Peter Lang, 1999. (Originally: doctoral dissertation, Universität Augsburg, 1998.) Hao, Y., Tan, T., & Wang, Y. (2002). An effective algorithm for fingerprint matching. In TENCON ’02. Proceedings. 2002 IEEE region 10 conference on computers, communications, control and power engineering, 28–31 October 2002, Vol. 1, pp. 519–522. Also, technical report, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing. http://nlpr-web.ia.ac.cn/english/irds/papers/haoying/tencon.pdf Harder, C. (1998). Serving maps on the internet. Redland, CA: ESRI Press. Harley, E. M., Carlsen, K. A., & Loftus, G. R. (2004). The “saw-it-all-along” effect: Demonstrations of visual hindsight bias. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 960–968. Harman, G. (1986). Changes in view: Principles of reasoning. Cambridge, MA: The MIT Press. Harman, G. H. (1965). Inference to the best explanation. Philosophical Review, 74(1), 88–95. Harman, G. H. (1968). Enumerative induction as inference to the best explanation. Journal of Philosophy, 65(18), 529–533. Harper, W. R., & Harris, D. H. (1975). The application of link analysis to police intelligence. Human Factors, 17(2), 157–164. Harris, M. D. (1985). 
Introduction to natural language processing. Reston, VA: Reston Publ. Co. Harris, R. (2006). Arriving at an anti-forensics consensus: Examining how to define and control the anti-forensics problem. At the Digital Forensic Research Workshop 2006: The 6th annual DFRWS 2006, Lafayette, Indiana, 14–16 August 2006. Article published in Digital Investigation, 3S, pp. S44–S49 (Amsterdam: Elsevier). http://www.dfrws.org/2006/proceedings/6-Harris.pdf

Harris, D. H., & Chaney, F. B. (1969). Human factors in quality assurance. New York: Wiley. Harrison, J. M. (1964). Bird taxidermy. London: Percival Marshall & Co., 1964. With an addendum (by J. Harrison): Newton Abbot (U.K.) & North Pomfreth, Vermont: David & Charles, 1976. Harrison, W. R. (1958). Suspect documents: Their scientific examination. New York: Praeger. Hart, H. L. A. (1961a). The concept of law. Oxford: Clarendon Press. Hart, H. L. A. (1961b). Negligence, mens rea and criminal responsibility. In A. G. Guest (Ed.), Oxford essays in jurisprudence. Oxford: Oxford University Press. Corrected edn., Oxford: Clarendon Press, 1968. Hart, H. L. A. (1994). The concept of law (2nd ed.). Oxford: Clarendon Press. Hartley, J. R. M., & Varley, G. (2001). The design and evaluation of simulations for the devel- opment of complex decision-making skills. In T. Okamoto, R. Hartley, Kinshuk & J. P. Klus (Eds.), Proceedings of the IEEE international conference on advanced learning technology: Issues, achievements and challenge (pp. 145–148). IEEE Computer Society. Hartwig, M., Granhag, P. A., Strömwall, L. A., & Doering, N. (2010). Impression and informa- tion management: On the regulation of innocent and guilty suspects. The Open Criminology Journal, 3, 10–16. Hartwig, M., Granhag, P. A., Strömwall, L. A., & Vrij, A. (2005). Detecting deception via strategic disclosure of evidence. Law and Human Behavior, 29, 469–484. Hasan-Rokem, G. (1996). The web of life: Folklore in rabbinic literature – The Palestinian aggadic midrash Eikha Rabba (in Hebrew: Riqmat-H. ayyim), Tel-Aviv: Am Oved. English translation (by B. Stein): Stanford, 2000. Hasel, L. E., & Wells, G. L. (2006). Catching the bad guy: Morphing composite faces helps. Law and human Behavior, 31, 193–207. Hastie, R. (Ed.). (1993). Inside the juror: The psychology of juror decision making. (Cambridge Series on Judgment and Decision Making). Cambridge: Cambridge University Press, 1993 (hard cover), 1994 (paperback). Hastie, R., Penrod, S. D., & Pennington, N. (1983). Inside the jury. Cambridge, MA: Harvard University Press. Hatfield, J. V., Neaves, P., Hicks, P. J., Persaud, K. C., & Tavers, P. (1994). Toward an integrated electronic nose using conducting polymer sensors. Sensors & Actuators, 18, 221–228. Hauck, R. V., Atabakhsh, H., Ongvasith, P., Gupta, H., & Chen, H. (2002). COPLINK concept space: An application for criminal intelligence analysis. In Digital Government, special issue of IEEE Computer, 35(3), 30–37. Hauser, K., & Ng-Thow-Hing, V. (2010). Randomized multi-modal planning for precision pushing on a humanoid robot. Chapter 9 In K. Harada, E. Yoshida, & K. Yokoi (Eds.), Motion planning for humanoid robots (pp. 251–276). Berlin: Springer. Hawkins, K. (1992). The use of legal discretion: Perspectives from law and social science. In K. Hawkins (Ed.), The uses of discretion. Oxford Socio-Legal Studies. Oxford: Clarendon Press. Hayes, P. J. (1985). Naive physics I: Ontology for liquids. Chapter 3 In J. R. Hobbs & R. C. Moore (Eds.), Formal theories of the commonsense world. Norwood, NJ: Ablex. Hayes, P. J., Knecht, L. E., & Cellio, M. J. (1988). A news story categorization system. In Proceedings of the second ACL conf. on applied natural language processing, 1988, pp. 9–17. Reprinted in K. Sparck Jones & P. Willett (Eds.). (1997). Readings in information retrieval. San Francisco: Morgan Kaufmann, pp. 518–526. Hayes-Roth, B. (1983). The blackboard architecture: A general framework for problem solving. Report No. 
HPP-83-30, Stanford Heuristic Programming Project (which was to become the Knowledge Systems Laboratory, in Palo Alto), of Stanford, CA: Stanford University. Hayes-Roth, B. (1985). A blackboard architecture for control. Artificial Intelligence, 26, 251–321. Hayes-Roth, B., & van Gent, R. (1997). Story-making and improvisational puppets. In W. L. Johnson (Ed.), Autonomous Agents ’97 (pp. 1–7). Marina del Rey, CA. New York: ACM Press. Haygood, R. C., Teel, K. S., & Greening, C. P. (1964). Link analysis by computer. Human Factors, 6, 63–70.

Haykin, S. (1994). Neural networks: A comprehensive foundation. New York: Macmillan. Hearst, M. (1999). Untangling text data mining. In Proceedings of the 37th annual meeting for computational linguistics (pp. 3–10). New York: ACM Press. Heaton-Armstrong, A. Wolchover, D., & Maxwell-Scott A. (2006). Obtaining, recording, and admissibility of out-of-court witness statements. In A. Heaton-Armstrong, E. Shepherd, G. Gudjonsson & D. Wolchover (Eds.), Witness testimony. Psychological, investigative and evidential perspectives (pp. 171–209). Oxford: Oxford University Press. Hecht-Nielson, R. (1990). Neurocomputing. Reading, MA: Addison-Wesley. Heckerman, D. (1997). Bayesian networks for data mining. Data Mining and Knowledge Discovery, 1, 79–119. Henrion, M., Provan, G., Del Favero, B., & Sanders, G. (1994). An experimental comparison of numerical and qualitative probabilistic reasoning. In R. Lopez de Mántaras & D. Poole (Eds.), Uncertainty in artificial intelligence: Proceedings of the Tenth Conference, July 1994. San Mateo, CA: Morgan Kaufmann pp. 319–326. Hendrix, G. G. (1976). Partitioned networks for modelling natural language semantics. Dissertation. Austin, Texas: Department of Computer Sciences, The University of Texas. Hendrix, G. G. (1979). Encoding knowledge in partitioned networks. In N. V. Findler (Ed.), Associative networks: Representation and use of knowledge by computers (pp. 51–92). New York: Academic. Henry, R. C., & Kim, B. M. (1990). Extension of self-modeling curve resolution to mixtures of more than three components. Part 1: Finding the basic feasible region. Chemometrics and intelligent Laboratory Systems, 8, 205–216. Henry, R. C., Lewis, C. W., & Collins, J. F. (1994). Vehicle related hydrocarbon source composition from ambient data: The GRACE/SAFER method. Environmental Science and Technology, 28, 823–832. Henry, R. C., Spiegelman, C. H., Collins, J. F., & Park, J. F. [sic] (1997). Reported emissions of organic gases are not consistent with observations. Proceedings of the National Academy of Sciences USA, 94, 6596–6599. Hepler, A. B., Dawid, A. P., & Leucari, V. (2007). Object-oriented graphical representations of complex patterns of evidence. Law, Probability & Risk, 6, 275–293. doi:10.1093/lpr/mgm005 Her Majesty’s...(1998/99). Information technology. Chapter 5 In: Her Majesty’s chief inspector of constabulary for scotland report for 1998/99. http://www.scotland.gov.uk/library2/doc05/cicr- 11.htm Herold, J., Loyek, C., & Nattkemper, T. W. (2011). Multivariate image mining. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(1), 2–13. doi://10.1002/widm.4 Herrero, S. (2005). Legal issues in forensic DNA. In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Heuston, R. F. V. (1964). Lives of the lord chancellors, 1885–1940. Oxford: Clarendon Press. Hewer, L., & Penrod, S. D. (1995). Jury decision-making in complex trials. Chapter 6.3 In R. Bull & D. Carson (Ed.), Handbook of psychology in legal contexts (pp. 527–541). Chichester: Wiley. Hewstone, M. (1989). Causal attribution: From cognitive processes to cognitive beliefs. Oxford: Blackwell. Heylighen, F. (1999). Advantages and limitations of formal expression. Foundations of Science, 4(1), 25–56. Hickey, L. (1993). Presupposition under cross-examination. International Journal for the Semiotics of Law, 6, 89–109. Hildebrand, J. A., Wiggins, S. M., Henkart, P. 
C., & Conyers, L. B. (2002). Comparison of seismic reflection and ground penetrating radar imaging at the controlled archaeological test site, Champaign, Illinois. Archaeological Prospection, 9, 9–21. Hill, C., Memon, A., & McGeorge, P. (2008). The role of confirmation bias in suspect interviews: A systematic evaluation. Legal & Criminological Psychology, 13, 357–371.

Hilton, J. L., & Fein, S. (1989). The role of diagnosticity in stereotype-based judgments. Journal of Personality and Social Psychology, 57, 201–211. Hilton, O. (1982). Scientific examination of questioned documents. Amsterdam & New York: Elsevier Science Publishing Co. Hinton, G., & Sejnowski, T. J. (Eds). (1999). Unsupervised learning: Foundations of neural computation. Cambridge, MA: MIT Press. Hinz, T., & Pezdek, K. (2001). The effect of exposure to multiple lineups on face identification accuracy. Law and Human Behavior, 25, 185–198. Hirschman, L., Light, M., Breck, E., & Burger, J. D. (1999). Deep read: A reading comprehen- sion system. In Proceedings of the 37th annual meeting of the association for computational linguistics. Hirst, G. (2004). Ontology and the lexicon. In S. Staab & R. Studer (Eds.), Handbook on ontologies (pp. 209–230). (International Handbooks on Information Systems). Berlin: Springer. Hitchcock, D., & Verheij, B. (2005). The Toulmin model today: Introduction to the special issue on contemporary work using Stephen Edelston Toulmin’s layout of arguments. Argumentation, 19, 255–258. Ho, D. (1998). Indigenous psychologies: Asian perspectives. Journal of Cross-, 29(1), 88–103. Ho, Sh.-Sh., & Talukder, A. (2009). Utilizing spatio-temporal text information for cyclone eye annotation in satellite data. IJCAI Workshop on Cross-Media Information Mining, at the International Joint Conference on Artificial Intelligence (IJCAI’09), July 2009. Hoare, C. A. R. (1985). Communicating sequential processes. (Prentice Hall Series in Computer Science). Hemel Hempstead, Hertfordshire, England: Prentice Hall. Hobbs, D. (1998). There is not a global crime problem. International Journal of Risk, Security and Crime Prevention, 3, 139–146. Hobbs, J. R., Stickel, M. E., Appelt, D. E., & Martin, P. (1993). Interpretation as abduction. In F. C. N. Pereira & B. J. Grosz (Eds.), Natural language processing (pp. 69–142). Cambridge, MA: MIT Press. Paper posted on the Web at http://www.ai.sri.com/~hobbs/interp-abduct-ai.ps Hobbs, P. (2007). Judges’ use of humor as a social corrective. Journal of Pragmatics, 39(1), 50–68. Hobson, J. B., & Slee, D. (1993). Rules, cases and networks in a legal domain. Law, Computers & Artificial Intelligence, 2(2), 119–135. Hobson, J. B., & Slee, D. (1994). Indexing the Theft Act 1968 for case based reasoning and artifi- cial neural networks. In Proceedings of the fourth national conference on law, computers and artificial intelligence, Exeter, England, p. 96. Hochberg, J. (1999). Statistical approaches to automatic identification of classified documents. Paper delivered at the CRL/NMSU International Symposium on New Paradigms in Knowledge and Information Processing, Las Cruces, NM, December 13. Cited in a from Raskin et al. (2001). Hochberg, J. (2000). Automatic identification of classified documents. Paper delivered at the CERIAS Security Seminar, Purdue University, West Lafayette, IN, February 25. Cited in a quotation from Raskin et al. (2001). Hoen, P. (1999). The glossary: Part 4 (P–R). Laboratory of Palaeobotany and Palynology, University of Utrecht. Retrieved in March 2007 http://www.bio.uu.nl/~palaeo/glossary/glos- p4.htm.PartofhisGlossary of Pollen and Spore Terminology, 2nd edn. (http://www.bio.uu. nl/~palaeo/glossary/glos-int.htm). First edition (LPP Contribution Series, No. 1, 1994) was by W. Punt, S. Blackmore, S. Nilsson, & A. Le Thomas [sic]. Hogarth, J. (1988). Sentencing database system: User’s guide. Vancouver, BC: University of British Columbia. 
Hohns, H. M. (1987). The place of forensics in engineering. Forensic Engineering, 1(1), 3–5. Holland, B. (2007). Picking the firm favourite: Selecting the right expert can be crucial in court. In Expert Witness Supplement to The New Law Journal, 157(7294) (London, 26 October 2007), pp. 1486–1487. Hollien, H. F. (1990). The acoustics of crime: The new science of forensic phonetics. New York: Plenum.

Holmes, O. W., Jr. (1881). The Common Law. London: Macmillan; Boston: Little, Brown, 1881 (now in .pdf on the Web, in the HeinOnline Legal Classics Library: www.heinonline.org). With a new introd. by T. Griffin, New Brunswick, N.J.: Transaction, 2005. Also, ed. M. D. Howe, Boston: Little, Brown, 1948, 1963, Cambridge, MA: Belknap Press of Harvard University Press, 1963, Oxford: Oxford University Press, 1963, & London: Macmillan, 1968. Also as part of O. W. Holmes, Jr., The Common Law & Other Writings (Collected Legal Papers), Birmingham, AL: Legal Classics Library, 1982. Holmström-Hintikka, G. (1995). Expert witnesses in legal argumentation. Argumentation, 9(3), 489–502. Holmström-Hintikka, G. (2001). Expert witnesses in the model of interrogation. In A. A. Martino & E. Nissan (Eds.), Software, Formal models, and artificial intelligence for legal evidence, special issue of Computing and Informatics, 20(6), 555–579. Holstein, J. A. (1985). Jurors’ interpretation and jury decision making. Law and Human Behavior, 9, 83–100. Holt, A. W. (1971). Introduction to occurrence systems. In E. L. Jacks (Ed.), Associative information techniques (pp. 175–203). New York: American Elsevier. Holt, A. W. (1988). Diplans: A new language for the study and implementation of coordination. ACM Transactions on Information Systems (TOIS), 6(2), 109–125. Holt, A. W., & Meldman, J. A. (1971). Petri nets and legal systems. Jurimetrics, 12(2), 65–75. Holzner, S. (1998). XML complete. New York: McGraw-Hill. Home Office. (2003). Police and Criminal Evidence Act 1984. Codes of Practice A–E Revised Edition. Her Majesty Stationary Office (HMSO). Hopke, P. K. (1989). Target transformation factor analysis. Chemometrics and Intelligent Laboratory Systems, 6, 7–19. Horgan, T. (2010). Transvaluationism about vagueness: A progress report. The Southern Journal of Philosophy, 48(1), 67–94. Horie, C. V., & Murphy, R. G. (1988). Conservation of natural history specimens: Vertebrates. Proceedings of the short course at Manchester University. Manchester, England: University of Manchester Department of Environmental Biology and the Manchester Museum. Horn, R., Birdwell, J. D., & Leedy, L. W. (1997). Link discovery tool. In Proceedings of the coun- terdrug technology assessment center’s ONDCP/CTAC international symposium, Chicago, IL, August 18–22. Horry, R., & Wright, D. B. (2008). I know your face but not where I saw you: Context memory is impaired for other race faces. Psychonomic Bulletin & Review, 15, 610–614. Horsenlenberg, R., Merckelbach, H., & Josephs, S. (2003). Individual differences and false con- fessions: A conceptual replication of Kassin and Kiechel (1996). Psychology, Crime and Law, 9, 1–18. Horty, J. F. (1993). Deontic logic as founded on nonmonotonic logic. In J.-J. Meyer & R. Wieringa (Eds.), Deontic logic in computer science. Basel: Baltzer. = Annals of Mathematics and Artificial Intelligence, 9, 69–91. Horty, J. F., & Belnap, N. (1995). The deliberative stit: A study of action, omission, ability and obligation. The Journal of Philosophical Logic, 24, 583–644. Horwich, P. (1982). Probability and evidence. Cambridge, MA: MIT Press. Houck, M. M. (1999). Statistics and trace evidence: The tyranny of numbers. Forensic Science Communications, 1(3), 1–8. Hovy, E. (1987a). Generating natural language under pragmatic constraints. Ph.D. dissertation, Yale University technical report YALEU/CSD/RR#521. Hovy, E. (1987b). Generating natural language under pragmatic constraints. Journal of Pragmatics, 11(6), 689–719. Hovy, E. 
(1988a). Generating natural language under pragmatic constraints. Hillsdale, NJ: Erlbaum. Hovy, E. (1988b). Pauline: An experiment in interpersonal, ideational, and textual language generation by computer. In Proceedings of the 15th international systemics congress, East Lansing, MI.

Hovy, E. (1988c). Two types of planning in language generation. In Proceedings of the 26th annual meeting of the Association for Computational Linguistics (ACL’88), State University of New York, Buffalo, NY, 1988, pp. 179–186. Hovy, E. (1988d). Planning coherent multisentential texts. In Proceedings of the 26th annual meeting of the Association for Computational Linguistics (ACL’88), State University of New York, Buffalo, NY, pp. 163–169. Hovy, E. (1991). Approaches to the planning of coherent text. In C. L. Paris, W. R. Swartout, & W. C. Mann (Eds.), Natural language generation in artificial intelligence and computational linguistics (pp. 83–102). Dordrecht, The Netherlands: Kluwer. Hovy, E. (1993). Automated discourse generation using discourse structure relations. Artificial Intelligence, 63(1/2), 341–385. Howe, C. J., Barbrook, A. C., Spencer, M., Robinson, P., Bordalejo, B., & Mooney, L. R. (2001). Manuscript evolution. Endeavour, 25(3), 121–126. Howe, M., Candel, I., Otgaar, H., Malone, C., & Wimmer, M. C. (2010). Valence and the development of immediate and long-term false memory illusions. Memory, 18, 58–75. http://www.personeel.unimaas.nl/henry.otgaar/HoweOtgaar--%20MEMORY%202010.pdf Howe, M. L. (2005). Children (but not adults) can inhibit false memories. Psychological Science, 16, 927–931. Howlett, J. B. (1980). Analytical investigative techniques: Tools for complex criminal investigations. Police Chief, 47(12), 42–45. Hu, M., & Liu, B. (2004). Mining and summarizing customer reviews. In Proceedings of the 10th ACM SIG KDD international conference on knowledge discovery and data mining, pp. 168–177. Hu, W., Liao, Y., & Vemuri, V. R. (2003). Robust support vector machines for anomaly detection in computer security. In Proceedings of the International Conference on Machine Learning and Application (ICMLA 2003), pp. 168–174. doi://10.1.1.87.4085 Huard, R. D., & Hayes-Roth, B. (1996). Children’s collaborative playcrafting. Technical Report KSL-96-17. Stanford, CA: Stanford Knowledge System Laboratory. Huber, R. A., & Headrick, A. M. (1999). Handwriting identification: Facts and fundamentals. Boca Raton, FL: CRC Press. Hueske, E. E. (2002). Shooting incident investigation/Reconstruction training manual. Hughes, P. A., & Green, A. D. P. (1991). The use of neural networks for fingerprint classification. In Proceedings of the second IEEE international conference on neural networks, University of Sussex, England, pp. 79–81. Hulstijn, J., & Nijholt, A. (Eds.). (1996). Automatic interpretation and generation of verbal humor: Proceedings of the 12th twente workshop on language technology, Twente, 1996. Enschede, The Netherlands: University of Twente. Hunt, L. (2007). Goldman gets the last word on OJ. In her column ‘In the frame’. The Daily Telegraph (London), 5 July 2007, p. 25. Hunter, D. (1994). Looking for law in all the wrong places: Legal theory and neural networks. In H. Prakken, A. J. Muntjewerff, A. Soeteman, & R. Winkels (Eds.), Legal knowledge based systems: Foundations of legal knowledge systems (Jurix’94) (pp. 55–64). Lelystad, The Netherlands: Koninklijke Vermende. Hunter, D., Tyree, A., & Zeleznikow, J. (1993). There is less to this argument than meets the eye. Journal of Law and Information Science, 4(1), 46–64. Hunter, J. R., Roberts, C., & Martin, A. (with Heron, C., Knupfer, G. and Pollard, M.). (1997). Studies in crime: An introduction to forensic archaeology. London: Batsford, 1995, 1996; London: Routledge, 1997. Hutchinson, J. R., Ng-Thow-Hing, V., & Anderson, F. C.
(2007). A 3D interactive method for estimating body segmental parameters in animals: Application to the turning and running performance of Tyrannosaurus rex. Journal of Theoretical Biology, 246(4), 660–680. Hutton, N., Tata, C., & Wilson, J. N. (1994). Sentencing and information technology: Incidental reform? International Journal of Law and Information Technology, 2(3), 255–286.

Hyde, H. A., & Williams, D. W. (1944). The right word. Pollen Analysis Circulars, 8, 6. Iacoviello, F. M. (1997). La motivazione della sentenza penale e il suo controllo in cassazione. Milan: Giuffrè. Iacoviello, F. M. (2006). Regole più chiare sui vizi di motivazione. In Il Sole 24 Ore, Guida al Diritto, 10/2006, p. 96. IACP. (2002). Criminal intelligence sharing: A national plan for intelligence-led policing at the local, state, and federal levels. Alexandria, VA: Office of Community Oriented Policing Services and the International Association of Chiefs of Police. Executive summary available to download from http://it.ojp.gov/documents/NCISP_executive_summary.pdf Idika, N., & Mathur, A. P. (2007). A survey of malware detection techniques. Technical report, Department of Computer Science, West Lafayette, IN: Purdue University. Igaki, S., Eguchi, S., & Shinzaki, T. (1990). Holographic fingerprint sensor. Fujitsu Scientific Technical Journal, 25, 287–296. Ilan, T. (2005). Rachel, wife of Rabbi Akiva. Jewish Women: A Comprehensive Historical Encyclopedia. At the Jewish Women’s Archive. http://jwa.org/encyclopedia/article/rachel-wife-of-rabbi-akiva Imbrie, J. (1963). Factor and vector analysis programs for analyzing geologic data. Technical Report no. 6. Office of Naval Research, U.S.A., 83 pp. Imwinkelried, E. J. (1990). The use of evidence of an accused’s uncharged misconduct to prove mens rea: the doctrines that threaten to engulf the character evidence prohibition. Military Law Review [Washington, D.C.: Headquarters, Dept. of the Army, Supt. of Docs.], 130(Fall 1990), 41–76. Inbau, F. E., Reid, J. E., Buckley, J. P., & Jayne, B. C. (2001). Criminal interrogation and confessions (4th ed.). Gaithersburg, MD: Aspen. Ineichen, M., & Neukom, R. (1995). Daktyloskopieren von mummifizierten Leichen [in German; Dactyloscopy of mummified bodies; with German and English summaries]. Archiv für Kriminologie, 196(3/4), 87–92. Our quotation is from the English summary, as reproduced in Forensic Science Abstracts, Section 49, 22(2) (1996): p. 57, sec. 4, §395. Ingleby, R. (1993). Family law and society. Sydney: Butterworths. Inman, K., & Rudin, N. (2002). An introduction to forensic DNA analysis. Boca Raton, FL: CRC Press. Isenor, D. K., & Zaky, S. G. (1986). Fingerprint identification using graph matching. Pattern Recognition, 19, 111–112. Ishikawa, H. (2003). Exact optimization for Markov random fields with convex priors. IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE PAMI), 25(10), 1333–1336. http://doi.ieeecomputersociety.org/10.1109/TPAMI.2003.1233908 Ishikawa, H., Yokohama, S., Ohta, M., & Katayama, K. (2005). On mining XML structures based on statistics. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 9th international conference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part I (pp. 379–390). (Lecture Notes in Computer Science, Vol. 3684). Berlin: Springer. Isobe, Y., Seto, Y., & Kataoka, M. (2001). Development of personal authentication system using fingerprint with digital signature technologies. In Proceedings of the 34th Hawaii International Conference on System Sciences (HICSS-34), Hawaii, Track 9. IEEE Computer Society, http://computer.org/proceedings/hicss/0981/volume%209/0981toc.htm; http://computer.org/proceedings/hicss/0981/volume%209/09819077abs.htm Ito, K., Nakajima, H., Kobayashi, K., Aoki, T., & Higuchi, T. (2004). A fingerprint matching algorithm using phase-only correlation.
IEICE Transactions on Fundamentals, E87-A(3), 682–691. (IEICE stands for ‘The Institute of Electronics, Information and Communication Engineers’). Ivkovic, S., Yearwood, J., & Stranieri, A. (2003). Visualising association rules for feedback within the legal system. In Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland. New York: ACM Press, pp. 214–223. Izard, C. E. (1971). The face of emotion. New York: Appleton-Century-Crofts. Izard, C. E. (1977). Human emotions. (“Emotions, Personality, and Psychotherapy” Series). New York: Plenum.

Izard, C. E. (1982). Comments on emotion and cognition: Can there be a working relationship? In M. S. Clark & S. T. Fiske (Eds.), Affect and cognition. Hillsdale, NJ: Lawrence Erlbaum. Jackowski, C., Thali, M., Sonnenschein, M., Aghayev, E., Yen, K., Dirnhofer, R., et al. (2004). Visualization and quantification of air embolism structure by processing postmortem MSCT data. Journal of Forensic Science, 49(6), 1339–1342. Jackowski, C., Thali, M. J., Buck, U., Aghayev, E., Sonnenschein, M., Yen, K., et al. (2006). Noninvasive estimation of organ weights by postmortem magnetic resonance imaging and multislice computed tomography. Investigative Radiology, 41(7), 572–578. Jackson, B. S. (1971). Liability for mere intention in early Jewish law. Hebrew Union College Annual, 42, 197–225. Jackson, B. S. (1977). Susanna and the singular history of singular witnesses. Acta Juridica (1977), 37–54 (Essays in Honour of Ben Beinart). Jackson, B. S. (1988a). Law, fact and narrative coherence. Merseyside (Liverpool, England): Deborah Charles Publications. Jackson, B. S. (1988b). Narrative models in legal proof. International Journal for the Semiotics of Law, 1(3), 225–246. Jackson, B. S. (1990). The teaching of Jewish Law in British Universities. A lecture given at the Institute of Advanced Legal Studies on 26th June 1990. The Second Jewish Law Fellowship Lecture. Oxford: The Yarnton Trust, for the Oxford Centre for Postgraduate Hebrew Studies and the Institute of Advanced Legal Studies. Typeset at Merseyside (Liverpool, England): Deborah Charles Publications. Jackson, B. S. (1994). Towards a semiotic model of professional practice, with some narrative reflections on the criminal process. International Journal of the Legal Profession, 1(1), 55–79. Abingdon, UK: Carfax (later part of Taylor & Francis). Jackson, B. S. (1995). Making sense in law. Liverpool: Deborah Charles Publications. Jackson, B. S. (1996). ‘Anchored narratives’ and the interface of law, psychology and semiotics. Legal and Criminological Psychology, 1, 17–45. The British Psychological Society. Jackson, B. S. (1998a). Bentham, truth and the semiotics of law. In M. D. A. Freeman (Ed.), Legal theory at the end of the millennium (pp. 493–531). (Current Legal Problems 1998, Vol. 51). Oxford: Oxford University Press. Jackson, B. S. (1998b). On the atemporality of legal time. In F. Ost & M. van Hoecke (Eds.), Temps et Droit. Le droit a-t-il pour vocation de durer? (pp. 225–246). Brussels: E. Bruylant. Jackson, B. S. (1998c). Truth or proof?: The criminal verdict. International Journal for the Semiotics of Law, 11(3), 227–273. Jackson, B. S. (2010). Review of: S. Azuelos-Atias, A Pragmatic Analysis of Legal Proofs of Criminal Intent (Amsterdam: Benjamins, 2007). International Journal for the Semiotics of Law, 22(3), 365–372. Jacoby, J., Mellon, L., Ratledge, E., & Turner, S. (1982). Prosecutorial decision making: A national study. Washington, DC: Department of Justice, National Institute of Justice. Jacovides, M. (2010a). Experiences as complex events. The Southern Journal of Philosophy, 48(2), 141–159. Jacovides, M. (2010b). Do experiences represent? Inquiry, 53, 87–103. Jain, A. K., Bolle, R., & Pankanti, S. (1999). Biometrics: Personal identification in networked society. Norwell, MA & Dordrecht, Netherlands: Kluwer. Jain, A. K., Hong, L., Pankanti, S., & Bolle, R. (1997). An identity-authentication system using fingerprint. Proceedings of the IEEE, 85(9), 1365–1388. Jain, A. K., & Maltoni, D. (2003). Handbook of fingerprint recognition.
Berlin: Springer. Jain, A. K., Prabhakar, S., Hong, L., & Pankanti, S. (2000). Filterbank-based fingerprint matching. IEEE Transactions on Image Processing, 9(5), 846–859. Jain, A. K., Ross, A., & Prabhakar, S. (2001). Fingerprint matching using minutiae and texture features. In Proceedings of Image Processing International Conference (ICIP), Thessaloniki, Greece, 7–10 October 2001, Vol. 3, pp. 282–285.

Jain, R. (2003). Multimedia electronic chronicles. “Media Vision” column in IEEE Multimedia, July 2003, pp. 111–112. Jain, R. (2008). EventWeb: Events and experiences in human centered computing. IEEE Computer, February 2008, pp. 42–50. Jain, R., Kim, P., & Li, Z. (2003). Experiential meeting system. In Proceedings of the 2003 ACM SIGMM workshop experiential telepresence (ETP). New York: ACM, pp. 1–12. James, S. H. (Ed.). (1999). Scientific and legal applications of bloodstain pattern interpretation. Boca Raton, FL: CRC Press. James, S. H., & Eckert, W. G. (1999). Interpretation of bloodstain evidence at crime scenes (2nd ed.). Boca Raton, FL: CRC Press. James, S. H., & Nordby, J. J. (Eds.). (2003). Forensic science: An introduction to scientific and investigative techniques (1st ed.). Boca Raton, FL: CRC Press. 2nd edition, 2005. Also published in the 3rd edition, 2009. James, S. H., Kish, P. E., & Sutton, T. P. (2005a). Principles of bloodstain pattern analysis (3rd ed.). Boca Raton, FL: CRC Press. James, S. H., Kish, P. E., & Sutton, T. P. (2005b). Recognition of bloodstain patterns. In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (1st ed.) [with that title of the book]. Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Jameson, A. (1983). Impression monitoring in evaluation-oriented dialog: The role of the listener’s assumed expectations and values in the generation of informative statements. In Proceedings of the eighth International Joint Conference on Artificial Intelligence (IJCAI’83), Karlsruhe, Germany. San Mateo, CA: Morgan Kaufmann, Vol. 2, pp. 616–620. http://ijcai.org/search.php Jamieson, A. (2004). A rational approach to the principles and practice of crime scene investiga- tion: I, principles. Science & Justice, 44(1), 3–7. Janner, G. (1984). Janner on presentation. London: Business Books. Jedrzejek, C., Falkowski, M., & Smolenski, M. (2009). Link analysis of fuel laundering scams and implications of results for scheme understanding and prosecutor strategy. In G. Governatori (Ed.), Proceedings of legal knowledge and information systems: JURIX 2009, The twenty- second annual conference, 25 July 2009 (pp. 79–88). Amsterdam: IOS Press. Jefferson, M. (1992). Criminal law (1st ed.). London: Pitman. Jenkins, R. V. (1987). Words, images, artifacts and sound: Documents for the history of technology. The British Journal for the History of Science,PartI,61 20(64), 39–56. Jennings, N., Parsons, S., Sierra, C., & Faratin, P. (2000). Automated negotiation. In Proceedings of the fifth international conference on the practical application of intelligent agents and multi- agent technology. The Practical Application Company, pp. 23–30. Jerdon, T. C. (1847). Illustrations of Indian Ornithology. Containing fifty figures of new, unfigured and interesting species of birds, chiefly from the south of India. Madras [now Chennai], India: Printed by P. R. Hunt, American Mission Press. Jerdon, T. C. (1862–1864). The Birds of India. Being a natural history of all the birds known to inhabit continental India, with descriptions of the species, genera, families, tribes, and orders, and a brief notice of such families as are not found in India, making it a manual of ornithology specially adapted for India. (2 vols., vol. 2 being in 2 parts.) Printed for the Author by the Military Orphan Press, Calcutta, 1862–1864. (Vol. 2, Pt. 2 imprint is G. Wyman in Calcutta, 1864; it is also known as Vol. 3 of the three-parts work. 
The publisher of all parts is sometimes given as P. S. d’Rozario in Calcutta; this actually is the imprint of the 1877 edition. Another edition of the three volumes was published by C. M. Mission P. in Cherrapoonje, in 1870.) Jha, S. K., & Yadava, R. D. S. (2010). Development of surface acoustic wave electronic nose using pattern recognition system. Defence Science Journal, 60(4), 364–376.

61 The British Journal for the History of Science is published in Oxford by Blackwell.

Jiang, X., Yau, W., & Ser, W. (2001). Detecting the fingerprint minutiae by adaptively tracing the gray-level ridge. Pattern Recognition, 34, 999–1013. Jin, F., Fieguth, P., & Winger, L. (2005). Image denoising using complex wavelets and Markov prior models. In M. Kamel & A. Campilho (Eds.), Image analysis and recognition: Proceedings of the second International Conference (ICIAR 2005), Toronto, Canada, September 28–30, 2005 (pp. 73–80). (Lecture Notes in Computer Science, 3656). Berlin: Springer. Jin, F., Fieguth, P., & Winger, L. (2006). Wavelet video denoising with regularized multiresolution motion estimation. EURASIP Journal on Applied Signal Processing (2006), Article 72705. Joachims, T. (1998). Text categorization with support vector machines: Learning with many rel- evant features. In Proceedings of the European Conference on Machine Learning (ECML). Berlin: Springer. Joachims, T., Hofmann, T., Yue, Y., & Yu, C.-N. (2009). Predicting structured objects with support vector machines. Communications of the ACM, 52(11), 97–104. Johnson, M. K. (2007). Lighting and optical tools for image forensics. Ph.D. dissertation, Dartmouth College, September 2007. Posted at www.cs.dartmouth.edu/farid/publications/ mkjthesis07.html Johnson, S. L. (1985). Two actions equals no response: Misinterpretations of motorcycle collision causes. In I. D. Brown, R. Goldsmith, K. Coombes, & M. A. Sinclair (Eds.), Ergonomics inter- national 85: Proceedings of the ninth congress of the international ergonomics association, Bournemouth, England, September. Basingstoke, Hamsphire: Taylor & Francis. Johnson, G. W., Ehrlich, R., & Full, W. (2002). Principal components analysis and receptor models in environmental forensics. Chapter 12 In B. L. Murphy & R. D. Morrison (Eds.), Introduction to environmental forensics (pp. 461–515). San Diego, CA & London, U.K.: Academic. Johnson, M. K., & Farid, H. (2005). Exposing digital forgeries by detecting inconsistencies in lighting. At the ACM Multimedia and Security Workshop, 2005. Johnson, M. K., & Farid, H. (2006a). Exposing digital forgeries through chromatic aberration. At the ACM Multimedia and security Workshop, Geneva, Switzerland, 2006. Johnson, M. K., & Farid, H. (2006b). Metric measurements on a plane from a single image. Technical Report TR2006-579. Hanover, New Hampshire: Department of Computer Science, Dartmouth College. Johnson, M. K., & Farid, H. (2007a). Exposing digital forgeries in complex lighting environments. IEEE Transactions on Information Forensics and Security, 2(3), 450–461. Johnson, M. K., & Farid, H. (2007b). Exposing digital forgeries through specular highlights on the eye. At the Ninth International Workshop on Information Hiding, Saint Malo, France, 2007. Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114, 3–28. Johnson, P. E., Zualkernan, I. A., & Tukey, D. (1993). Types of expertise: An invariant of problem solving. International Journal of Man Machine Studies, 39, 641. Johnston, V. S., & Caldwell, C. (1997). Tracking a criminal suspect through face space with a genetic algorithm. In T. Bäck, D. B. Fogel, & Z. Michalewics (Eds.), Handbook of evolutionary computation. Bristol, England: Institute of Physics Publishing, & New York & Oxford: Oxford University Press. Jones, A. J. I., & Sergot, M. (1992). Deontic logic in the representation of law: Towards a methodology. Artificial Intelligence and Law, 1(1), 45–64. Jones, C. A. G. (1994). 
Expert witnesses: Science, medicine, and the practice of law. Oxford Socio-Legal Studies. Oxford: Clarendon Press. Jones, S. S. (1979). The pitfalls of Snow White scholarship. Journal of American Folklore, 90, 69–73. Jøsang, A., & Bondi, V. A. (2000). Legal reasoning with subjective logic. Artificial Intelligence and Law, 8, 289–315. Jøsang, A., Ismail, R., & Boyd, C. (2007). A survey of trust and reputation systems for online service provision. Decision Support Systems, 43(2), 618–644. doi://10.1016/j.dss.2005.05.019. http://persons.unik.no/josang/papers/JIB2007-DSS.pdf

Josephson, J. R., & Josephson, S. G. (Eds.). (1994). Abductive inference: Computation, philosophy, technology. Cambridge: Cambridge University Press. Joshi, A., & Krishnapuram, R. (1998). Robust fuzzy clustering methods to support web mining. In Proceedings of the 15th workshop on data mining and knowledge discovery (SIGMOD ’98), Seattle, WA, June 2–4 1998. Seattle, WA and New York: ACM, 1998, pp. 1–8. Josselson, R., & Lieblich, A. (Eds.) (1993). The narrative study of lives. Newsbury Park, CA, and London: Sage. Joyce, C. (1984). The detective from the laboratory. New Scientist, 15 November 1984, pp. 12–16. Juan, L., Kreibich, C., Lin, C. H., & Paxson, V. (2008). A tool for offline and live testing of evasion resilience in network intrusion detection systems. In Detection of Intrusions and Malware, and Vulnerability Assessment: Proceedings of the fifth international conference (DIMVA 2008). Berlin: Springer, pp. 267–278. Judson, G. (1995). Mother guilty in the killings of 5 babies. The New York Times, April 22 (Late Edn., Final): Sec. 1, p. 25, col. 5, Metropolitan Desk, Second Front. Julius, A. (2010). Trials of the diaspora: A history of anti-semitism in England. Oxford: Oxford University Press. Jung, C., Han, I., & Suh, B. (1999). Risk analysis for electronic commerce using case-based rea- soning. International Journal of Intelligent Systems in Accounting, Finance & Management, 8, 61–73. Junger, E. P. (1996). Assessing the unique characteristics of close-proximity soil samples: Just how useful is soil evidence? Journal of Forensic Science, 41(1), 27–34. Junkin, T. (2004). Bloodsworth: The true story of the first death row inmate exonerated by DNA. Chapel Hill, NC: Algonquin. Kadane, J., & Schum, D. (1996). A probabilistic analysis of the Sacco and Vanzetti evidence.New York: Wiley. Kahan, D. M., & Braman, D. (2006). Cultural cognition and public policy. Yale Law & Policy Review, 24, 147–170. Kahn, D. (1967). The codebreakers: The story of secret writing. New York: Scribner. 2nd edn., 1996. Kakas, T., Kowalski, K., & Toni, F. (1992). Abductive logic programming. Journal of Logic and Computation, 2(6), 719–770. Kakas, T., Kowalski, R., & Toni, F. (1998). The role of logic programming in abduction. In D. Gabbay, C. J. Hogger, & J. A. Robinson (Eds.), Handbook of logic in artificial intelligence and programming (Vol. 5, pp. 235–324). Oxford: Oxford University Press. Kalender, W. A., Seissler W., Klotz, E., & Vock, P. (1990). Spiral volumetric CT with single- breathhold technique, continuous transport, and continuous scanner rotation. Radiology, 176, 181–183. Kalera, M. K., Srihari, S. N., & Xu, A. (2004). Offline signature verification and identifi- cation using distance statistics. International Journal of Pattern Recognition and Artificial Intelligence, 18(7), 1339–1360. Kamisar, Y., LaFave, W. R., Israel, J. H., & King, N. J. (2003). Modern criminal procedure (10th ed.). St. Paul, MN: West Publishing. Kanellis, P., Kiountouzis, E., Kolokotronis, N., & Martakos, D. (2006). Digital crime and forensic science in cyberspace. Hershey, PA: Idea Press. Kangas, L. J., Terrones, K. M., Keppel, R. D., & La Moria, R. D. (2003). Computer aided tracking and characterization of homicides and sexual assaults (CATCH). Sec. 12.6 In J. Mena (Ed.), Investigative data mining for security and criminal detection (pp. 364–375). Amsterdam & Boston: Butterworth-Heinemann (of Elsevier). Kannai, R., Schild, U. J., & Zeleznikow, J. (2007). 
Modeling the evolution of legal discretion: An artificial intelligence approach. Ratio Juris, 20(4), 530–558. Kantrowitz, M. (1990, July). GLINDA: Natural language generation in the Oz interactive fiction project. Technical report CMU-CS-90-158. Pittsburgh, PA: School of Computer Science, Carnegie Mellon University.

Kaptein, H., Prakken, H., & Verheij, B. (Eds.). (2009). Legal evidence and proof: Statistics, stories, logic. (Applied Legal Philosophy Series). Farnham, England: Ashgate Publishing. Karttunen, L. (1976). Discourse referents. Syntax and Semantics, 7, 363–385. Karunatillake, N. (2006). Argumentation-based negotiation in a social context. Ph.D. thesis in Computer Science. Southampton, England: University of Southampton, School of Electronics and Computer Science. Karunatillake, N., & Jennings, N. (2004). Is it worth arguing? In I. Rahwan, P. Moratïs, & C. Reed (Eds.), Argumentation in multi-agent systems (Proc. of ArgMAS’04) (pp. 234–250). Berlin: Springer. Kass, A. M. (1990). Developing creative hypotheses by adapting explanations. Doctoral disserta- tion. New Haven, CT: Computer Science Department, Yale University. Also: Technical Report #6, Institute for the Learning Sciences. Chicago: Northwestern University. Kass, A. M. (1994). Tweaker: Adapting old explanations to new situations. In R. C. Schank, A. Kass, & C. K. Riesbeck (Eds.), Inside case-based explanation (pp. 263–295). Hillsdale, NJ: Erlbaum. Kass, A. M., Leake, D. B., & Owens, C. (1986). SWALE: A program that explains. In R. C. Schank (Ed.), Explanation patterns: Understanding mechanically and creatively. Hillsdale, NJ: Lawrence Erlbaum Associates. Kassin, S. (1997). The psychology of confession evidence. American Psychologist, 52, 221–233. Kassin, S. (2004). The detection of false confessions. Chapter 8 In P. A. Granhag & L. A. Strömwall (Eds.), Detection deception in forensic contexts. Cambridge: Cambridge University Press. Kassin, S. (2005). On the psychology of confessions: Does innocence put the innocents at risk? American Psychologist, 60, 215–228. Kassin, S. (2006). A critical appraisal of modern police interrogations. In T. Williamson (Ed.), Investigative interviewing: Rights, research, regulation (pp. 207–228). Cullompton: Willan Publishing. Kassin, S., & Dunn, M. A. (1997). Computer-animated displays and the jury: Facilitative and prejudicial effects. Law and Human Behavior, 21, 269–281. Kassin, S., & Fong, C. T. (1999). “I’m innocent!” Effects of training on judgments of truth and deception in the interrogation room. Law and Human Behavior, 23, 499–516. Kassin, S. M., & Gudjonsson, G. H. (2004). The psychology of confession evidence: A review of the literature and issues. Psychological Science in the Public Interest, 5, 35–69. Kassin, S. M., & Kiechel, K. L. (1996). The social psychology of false confessions: Compliance, internalization, and confabulation. Psychological Science, 7, 125–128. Kassin, S. M., & McNall, K. (1991). Police interrogations and confessions: Communicating promises and threats by pragmatic implication. Law and Human Behavior, 15, 233–251. Kassin, S. M., & Neumann, K. (1997). On the power of confession evidence: An experimental test of the “fundamental difference” hypothesis. Law and Human Behavior, 21, 469–484. Kassin, S., & Norwick, R. (2004). Why people waive their Miranda rights: The power of innocence. Law and Human Behavior, 28, 211–221. Kassin, S. M., & Wrightsman, L. S. (1985). Confession evidence. In S. Kassin & L. Wrightsman (Eds.), The psychology of evidence and trial procedure (pp. 67–94). Beverly Hills, CA, & London: Sage. Kassin, S. M., Goldstein, C. J., & Savitsky, K. (2003). Behavioral confirmation in the interrogation room: On the dangers of presuming guilt. Law and Human Behavior, 27, 187–203. Katai, O., Kawakami, H., Shiose, T., & Notsu, A. (2010). 
Formalizing coexistential communication as co-creation of Leibnizian spatio-temporal fields. AI & Society, 25, 145–153. Kassin, S. M., Leo, R. A., Meissner, C. A., Richman, K. D., Colwell, L. H., Leach, A.-M., et al. (2007). Police interviewing and interrogation: A self-report survey of police practices and beliefs. Law and Human Behavior, 31, 381–400. Kassin, S. M., Meissner, C. A., & Norwick, R. J. (2005). I’d know a false confession if I saw one: A comparative study of police officers and college students. Law and Human Behavior, 29, 211–227.

Kato, Z., & Pong, T.-C. (2001). A Markov random field image segmentation model using combined color and texture features. In W. Skarbek (Ed.), Computer Analysis of Images and Patterns: Proceedings of the 9th international conference (CAIP 2001), Warsaw, Poland, September 5–7, 2001 (pp. 547–554). (Lecture Notes in Computer Science, 2124). Berlin: Springer. Katz, L. (1953). A new status index derived from sociometric analysis. Psychometrika, 18(1), 39–43. Kaufman, L., & Rousseeuw, P. J. (2005). Finding groups in data: An introduction to cluster analysis (2nd ed.). New York: Wiley. The 1st edn. was of 1990. Kaufmann, M., & Wagner, D. (Eds.). (2001). Drawing graphs: Methods and models. (Lecture Notes in Computer Science, Vol. 2025). Berlin: Springer. Kaye, B. H. (1995). Science and the detective: Selected readings in forensic science. Weinheim, Baden-Württemberg: VCH Verlag.62 Kaye, D. (1982). Statistical evidence of discrimination. Journal of the American Statistical Association, 77(380), 773–783. Kaye, D. H. (2003). Questioning a courtroom proof of the uniqueness of fingerprints. International Statistical Review, 71, 521–533. Kaye, D. H., & Koehler, J. (2003). The misquantification of probative value. Law and Human Behavior, 27, 645–659. Kearey, P., & Brooks, M. (1984). An introduction to geophysical exploration (1st ed.). Oxford: Blackwell Science (as cited); 2nd edn.: 1991. 3rd edn.: Kearey, P., Brooks, M., & Hill, I., ibid., 2002. Keh, B. (1984). Scope and applications of forensic entomology. Annual Review of Entomology, 30, 137–154. Palo Alto, CA: Annual Reviews. Keila, P. S., & Skillicorn, D. B. (2005). Structure in the Enron email dataset. In Proceedings of the SIAM international conference on data mining, SIAM workshop on link analysis, counterterrorism and security. Philadelphia, PA: SIAM. Kelly, J., & Davis, L. (1991). Hybridizing the genetic algorithm and the K-nearest neigh- bour. In Proceedings of the fourth international conference on genetic algorithms and their applications. San Mateo, CA: Morgan Kaufman, pp. 377–383. Kelsen, H. (1967). Pure theory of law (M. Knight, Trans., 2nd ed.). Berkeley, CA: University of California Press. Kempe, D., Kleinberg, J., & Tardos, E. (2003). Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIG KDD interantional conference on knowledge discovery and data mining, pp. 137–146. Kempe, D., Kleinberg, J. M., & Tardos, E. (2005). Influential nodes in a diffusion model for social networks. In Proceedings of ICALP, pp. 1127–1138. Kenji, T., Aoki, T., Sasaki, Y., Higuchi, T., & Kobayashi, K. (2003). High-accuracy subpixel image registration based on phase-only correlation. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences,63 E86-A(8), 1925–2934. Kennedy, D. (2001). Legal formalism. In International encyclopedia of the social and behavioral sciences (Vol. 13, 8634–8646). Amsterdam: Elsevier. Kennedy, I. (2007). Public inquiries: Experience from the Bristol public inquiry. In J. Carrier, G. Freilich, V. Hoffbrand, & S. Parbhoo (Eds.), Law, medicine and ethics: Essays in honour of Lord Jakobovits (pp. 13–48). London: The Cancerkin Centre, The Royal Free Hospital. Kephart, J., & Arnold, W. (1994). Automatic extraction of computer virus signatures. At the 4th Virus Bulletin International Conference, pp. 178–184.

62 http://www.wiley-vch.de. The city of Weinheim is approximately 15 km north of Heidelberg and 10 km northeast of Mannheim. Together with these cities, it makes up the Rhine-Neckar triangle.
63 This is a journal of the Institute of Electronics, Information and Communication Engineers (IEICE).

Keppel, R. D. (1995a). Signature murders: A report of several related cases. Journal of Forensic Sciences, 40(4), 658–662. Keppel, R. D. (1995b). The riverman: Ted Bundy and I hunt for the Green River Killer. New York: Pocket Books. Keppel, R. D. (1997). Signature killers. New York: Pocket Books. Keppel, R. D. (2005). Serial offenders: Linking cases by modus operandi and signature. Chapter 30 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Keppel, R. D., & Weis, J. P. (1997). Time and distance as solvability factors in murder cases. Journal of Forensic Sciences, 39(2), 386–401. Keppens, J. (2007). Towards qualitative approaches to Bayesian evidential reasoning. In Proceedings of the 11th international conference on artificial intelligence and law, pp. 17–25. Keppens, J. (2009). Conceptions of vagueness in subjective probability for evidential reasoning. In Proceedings of the 22nd annual conference on legal knowledge and information systems, pp. 89–99. Keppens, J., & Schafer, B. (2003a). Using the box to think outside it: Creative skepticism and computer decision support in criminal investigations. In Proceedings of the IVR 21st world congress special workshop on artificial intelligence in the law: Creativity in legal problem solving. http://www.meijigakuin.ac.jp/~yoshino/documents/ivr2003/keppens-schafer.pdf Keppens, J., & Schafer, B. (2003b). Assumption based peg unification for crime scenario modelling. In Proceedings of the 2003 conference on legal knowledge and information systems; JURIX 2003: The eighteenth annual conference. Amsterdam: IOS Press. http://www.jurix.nl/pdf/j05-07.pdf Keppens, J., & Schafer, B. (2004). “Murdered by persons unknown” – Speculative reasoning in law and logic. In T. Gordon (Ed.), Legal knowledge and information systems. Jurix 2004: The seventeenth annual conference (pp. 109–118). Amsterdam: IOS Press. Keppens, J., & Schafer, B. (2005). Assumption based peg unification for crime scenario modelling. In Proceedings of the 2005 conference on legal knowledge and information systems; JURIX 2005: The eighteenth annual conference. (Frontiers in Artificial Intelligence and Applications, 134). Amsterdam: IOS Press, pp. 49–58. Keppens, J., & Schafer, B. (2006). Knowledge based crime scenario modelling. Expert Systems with Applications, 30(2), 203–222. Keppens, J., & Shen, Q. (2001). On compositional modelling. Knowledge Engineering Review, 16(2), 157–200. Keppens, J., & Shen, Q. (2004). Compositional model repositories via dynamic constraint satisfaction with order-of-magnitude preferences. Journal of Artificial Intelligence Research, 21, 499–550. Keppens, J., Shen, Q., & Lee, M. (2005). Compositional Bayesian modelling and its application to decision support in crime investigation. In Proceedings of the 19th international workshop on qualitative reasoning, pp. 138–148. Keppens, J., Shen, Q., & Price, C. (2010). Compositional Bayesian modelling for computation of evidence collection strategies. Applied Intelligence. In press. doi://10.1007/s10489-009-0208-5 Keppens, J., Shen, Q., & Shafer, B. (2005). Probabilistic abductive computation of evidence collection strategies in crime investigation. In Proceedings of the 10th international conference on artificial intelligence and law, pp. 215–224. Keppens, J., & Zeleznikow, J. (2002). On the role of model-based reasoning in decision support in crime investigation.
In Proceedings of the IASTED third international conference on Law and Technology (LawTech2002). Anaheim, CA: ACTA Press, pp. 77–83. Keppens, J., & Zeleznikow, J. (2003). A model based reasoning approach for generating plausible crime scene scenarios from evidence. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003. New York: ACM Press, pp. 51–59. Kerr, N. L., Boster, F. J., Callen, C. R., Braz, M. E., O’Brien, B., & Horowitz, I. (2008). Jury nullification instructions as amplifiers of bias. International Commentary on Evidence, 6(1), Article 2. http://www.bepress.com/ice/vol6/iss1/art2

Khuwaja, G. A. (2004). Best parameter-based compression of fingerprints with wavelet packets. International Journal of Computer Applications in Technology, 19, 51–62. Khuwaja, G. A. (2006). A multimodal biometric identification system using compressed finger images. Cybernetics and Systems, 37(1), 23–46. Kibble, R. (2004). Elements of social semantics for argumentative dialogue. In F. Grasso, C. Reed, & G. Carenini (Eds.), Proceedings of the fourth workshop on Computational Models of Natural Argument (CMNA IV) at ECAI 2004, Valencia, Spain, pp. 25–28. Kieras, D. (2001). Using the keystroke-level model to estimate execution times. University of Michigan. http://www.cs.loyola.edu/~lawrie/CS774/S06/homework/klm.pdf Kim, B. M., & Henry, R. C. (1999). Extension of self-modeling curve resolution to mixtures of more than 3 components. Part 2: Finding the complete solution. Chemometrics and Intelligent Laboratory Systems, 49, 67–77. Kim, D. S., & Park, J. S. (2003). Network-based intrusion detection with support vector machines. In H.-K. Kahng (Ed.), Information networking, networking technologies for enhanced internet services international conference (ICOIN 2003), Cheju Island, Korea, February 12–14, 2003 (pp. 747–756). (Lecture Notes in Computer Science, Vol. 2662.) Heidelberg & Berlin: Springer. Kim, P., Gargi, U., & Jain, R. (2005). Event-based multimedia chronicling system. In Proceedings of the 2nd ACM workshop on Continuous Archival and Retrieval of Personal Experiences (CARPE’05), Singapore, November 2005. Kim, S.-M., & Hovy, E. (2006). Identifying and analyzing judgment opinions. In Proceedings of HLT/NAACL-2006, New York City, NY, pp. 200–207. Kinder, J., Katzenbeisser, S., Schallhart, C., & Veith, H. (2005). Detecting malicious code by model checking. In K. Julisch & C. Krügel (Eds.), Detection of intrusions and malware, and vulnerability assessment: Proceedings of the second international conference (DIMVA 2005), Vienna, Austria, July 7–8, 2005 (pp. 174–187). (Lecture Notes in Computer Science, Vol. 3548.) Berlin: Springer. Kindermann, R., & Snell, J. R. (1980). Markov random fields and their applications. (Contemporary Mathematics, 1.) Providence, RI: American Mathematical Society. Kingsnorth, R., MacIntosh, R., & Sutherland, S. (2002). Criminal charge or probation violation? Prosecutorial discretion and implications for research in criminal court processing. Criminology: An Interdisciplinary Journal, 40, 553–578. doi://10.1111/j.1745-9125.2002.tb00966.x Kingston, J., Schafer, B., & Vandenberghe, W. (2003). No model behaviour: Ontologies for fraud detection. In Proceedings of law and the semantic web, pp. 233–247. Kingston, J., Schafer, B., & Vandenberghe, W. (2004). Towards a financial fraud ontology: A legal modelling approach. Artificial Intelligence and Law, 12(4), 419–446. Kingston, J., & Vandenberghe, W. (2003). A comparison of a regulatory ontology with existing legal ontology frameworks. In OTM Workshops 2003 = R. Meersman & Z. Tari (Eds.), On The Move to Meaningful Internet Systems 2003: [Proceedings of the] OTM 2003 Workshops, OTM Confederated International Workshops, HCI-SWWA, IPW, JTRES, WORM, WMS, and WRSM 2003, Catania, Sicily, Italy, 3–7 November 2003 (pp. 648–661). (Lecture Notes in Computer Science, Vol. 2889). Berlin: Springer. Kinton, R., Ceserani, V., & Foskett, D. (1992).
The theory of catering (7th ed.). London: Hodder & Stoughton. Kintsch, W., & van Dijk, T. (1978). Recalling and summarizing stories. In W. Dressler (Ed.), Current trends in textlinguistics. Berlin: de Gruyter. Kirkpatrick, S., Gellatt, D., & Vecchi, M. (1983). Optimization by simulated annealing. Science, 220, 671–680. Kirschenbaum, A. (1970). Self-incrimination in Jewish law. New York: Burning Bush Press. Kirschner, P. A., Buckingham Shum, S. J., & Carr, C. S. (Eds.). (2003). Visualizing argumentation. London & Berlin: Springer.

Kitayama, Sh., & Markus, H. R. (Eds.). (1994). Emotion and culture: Empirical studies of mutual influence. Washington, DC: American Psychological Association. Klein, S. (1973). Automatic novel writer: A status report. In J. S. Petöfi (Ed.), Papers in text analysis and text description. (Research in Text Theory, 3). Berlin: de Gruyter. Klein, S. (2002). The analogical foundations of creativity in language, culture & the arts: The upper paleolithic to 2100CE. In P. McKevitt, C. Mulvihill, & S. O’Nuallin (Eds.), Language, vision & music (pp. 347–371). Amsterdam: John Benjamin. Klein, S., Aeschlimann, J. F., Appelbaum, M. A., Blasiger, D. F., Curtis, E. J., Foster, M., et al. (1974). Modeling propp and lévi-strauss in a metasymbolic simulation system. In H. Jason & D. Segal (Eds.), Patterns in oral literature (pp. 141–171). Chicago, IL: Aldine. World Anthropology Series. The Hague: Mouton. Klein, S., Aeschliman, J. F., Applebaum, M. A., Blasiger, D. F., Curtis, E. J., Foster, M., et al. (1976, March). Simulation d’hypothèses émisés par Propp et Lévi-Strauss en utilisant un système de simulation meta-symbolique. Informatique et Sciences Humaines, 28, 63–133. A revised and expanded French translation of Klein et al. (1974). Klein, S., & Simmons, R. F. (1963a). A computational approach to grammatical coding of English words. Journal of the Association for Computing Machinery, 10, 334–347. Klein, S., & Simmons, R. F. (1963b). Syntactic dependence and the computer generation of coherent discourse. Mechanical Translation and Computational Linguistics, 7, 50–61. Kleinberg, J. (1998). Authoritative sources in a hyperlinked environment. In Proceedings of the ninth ACM-SIAM symposium on discrete algorithms. New York: ACM, & Philadelphia, PA: SIAM. Extended version in Journal of the ACM, 46 (1999). Also appears as IBM Research Report RJ 10076, May 1997. http://www.cs.cornell.edu/home/kleinber/auth.pdf Kleinberg, J. (2000a). Navigation in a small world. Nature, 406, 845. http://www.cs.cornell.edu/ home/kleiber/nat00.pdf Kleinberg, J. (2000b). The small-world phenomenon: An algorithmic perspective. In Proceedings of the 32nd ACM symposium on theory of computing. Also appears as Cornell Computer Science Technical Report 99-1776 (October 1999). http://www.cs.cornell.edu/home/kleiber/ swn.pdf Kleinberg, J. (2001). Small-world phenomena and the dynamics of information. Advances in Neural Information Processing Systems (NIPS), 14. http://www.cs.cornell.edu/home/kleiber/ nips14.pdf Kleinberg, J. (2004). The small-world phenomenon and decentralized search. A short essay as part of Math Awareness Month 2004, appearing in SIAM News, 37(3), April 2004. http://www. mathaware.org/mam/04/essays/smallworld/html Kleinberg, J. (2006). Complex networks and decentralized search algorithms. In Proceedings of the International Congress of Mathematicians (ICM), 2006. http://www.cs.cornell.edu/home/ kleiber/icm06-swn.pdf Klimt, B., & Yang, Y. (2004a). The Enron corpus: A new dataset for email classification research. In Proceedings of the European Conference on Machine Learning (ECML), 2004, pp. 217–226. Klimt, B., & Yang, Y. (2004b). Introducing the Enron corpus. At the First Conference on Email and Anti-Spam. CEAS 2004. Mountain View, CA. Klock, B. A. (2001, April). Project plan for the evaluation of X-ray threat detection of explosives at different subcertification weights. DOT/FAA/AR-01/81, Washington, DC: Office of Aviation Research. http://www.tc.faa.gov/its/worldpac/techrpt/ar01-81.pdf Klovdahl, A. S. (1981). 
A note on images of networks. Social Networks, 3, 197–214. Klusch, M., & Sycara, K. (2001). Brokering and matchmaking for coordination of agent societies: A survey. Chapter 8 In A. Omicini, F. Zambonelli, M. Klusch, & R. Tolksdorf (Eds.), Coordination of internet agents: Models, technologies, and applications (pp. 197–224). Berlin: Springer. Kneller, W., Memon, A., & Stevenage, S. (2001). Simultaneous and sequential lineups: Decision processes of accurate and inaccurate witnesses. Applied Cognitive Psychology, 15, 659–671.

Knight, B., Ma, J., & Nissan, E. (1998). Representing temporal knowledge in legal discourse. In A. A. Martino & E. Nissan (Eds.), Formal models of legal time., Special issue, Information and Communications Technology Law, 7(3), 199–211. Knight, B., Nissan, E., & Ma, J. (Eds.). (1999). Temporal logic in engineering. Special issue, Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM), 13(2). Knott, A. W. (1987). Are you an expert? Forensic Engineering, 1(1), 7–16. Knox, A. G. (1993). Richard Meinertzhagen – a case of fraud examined. Ibis, 135(3), July 1993, 320–325. Kobayashi, M., & Ito, T. (2008). A transactional relationship visualization system in Internet auctions. Studies in Computational Intelligence, 110, 87–99. Koehler, J. J. (2008). A welcome exchange on the scientific status of fingerprinting (Editorial). Law, Probability and Risk, 7, 85–86. Kohonen, T. (1982). Self-organised formation of topologically correct feature maps. Biological Cybernetics, 43, 59–69. Kohonen, T. (1990). The self-organizing map. Proceedings of the IEEE, 78(9), 1464–1480. Köller, N., Nissen, K., Reiß, M., & Sadorf, E. (2004). Probabilistische Schlussfolgerungen in Schriftgutachten./Probability conclusions in expert opinions on handwriting. Munich, Germany: Luchterhand. This document, available in German & English, can be downloaded from: www.bka.de/vorbeugung/pub/probabilistische_schlussfolgerungen_in_schriftgutachten. pdf Kolmogorov, V., & Zabih, R. (2004). What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE PAMI), 26, 147–159. Kolodner, J. L. (1984). Retrieval and organizational strategies in conceptual memory: A computer model. Hillsdale, NJ: Erlbaum. Kong, A., Zhang, D., & Kamel, M. (2006). Palmprint identification using feature-level fusion. Pattern Recognition, 29(3), 478–487. Kompatsiaris, Y., & Hobson, P. (Eds.). (2008). Semantic multimedia and ontologies: Theory and applications. Berlin: Springer. Koppenhaver, K. (2007). Forensic document examination, principles and practice.Totowa,NJ: Humana Press. Kort, F. (1964). Simultaneous equations and Boolean Algebra. In G. Schubert (Ed.), Judicial behaviour: A reader in theory and research (pp. 477–491). Chicago, IL: Rand McNally and Company. Kosala, R., & Blockeel, H. (2000). Web mining research: A survey. ACM SIGKDD Explorations, 2(1), 1–15. Kothari, R., & Dong, M. (2002). Decision trees for classification: A review and some new results. In S. K. Pal & A. Pla (Eds.), Pattern Recognition from classical to modern approaches (pp. 169–186). Singapore: World Scientific. Kou, Y., Lu, C. T., Sirwongwattana, S., & Huang Y. P. (2004). Survey of fraud detection techniques. In Proceedings of the 2004 International Conference on Networking, Sensing, and Control, Taipei, Taiwan, 2004, pp. 749–754. Kovacs, D. (1992). Family property proceedings in Australia. Sydney: Butterworths. Kovács-Vajna, Z. M. (2000). A fingerprint verification system based on triangular matching and dynamic time warping. IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE PAMI), 22(11), 1266–1276. Kovács-Vajna, Z. M., Rovatti, R., & Frazzoni, M. (2000). Fingerprint ridge distance computation methodologies. Pattern Recognition, 33(1), 69–80. Kowalski, R. (1979). Logic for problem solving. New York: Elsevier North-Holland. Kowalski, R. A., & Toni, F. (1996). Abstract argumentation. Artificial Intelligence and Law, 4(3/4), 275–296. Also In H. Prakken & G. 
Sartor (Eds.), Logical models of legal argumentation. Dordrecht, The Netherlands: Kluwer, 1997. Krackhardt, D. (1996). Social networks and the liability of newness for managers. In C. L. Cooper & D. M. Rousseau (Eds.), Trends in organizational behavior (Vol. 3, pp. 159–173). New York & Chichester, West Sussex, England: John Wiley & Sons.

Krackhardt, D., Blythe, J., & McGrath, C. (1994). KrackPlot 3.0: An improved network drawing program. Connections, 17(2), 53–55. Krane, D., Ford, S., Gilder, J., Inman, K., Jamieson, A., Koppl, R., et al. (2008). Sequential unmasking: A means of minimizing observer effects in forensic DNA interpretation. Journal of Forensic Sciences, 53(4),1006–1007. http://www.bioforensics.com/sequential_unmasking/ http://www.bioforensics.com/sequential_unmasking/Sequential_Unmasking_2008.pdf Kraus, S. (1996). An overview of incentive contracting. Artificial Intelligence, 83(2), 297–346. Kraus, S. (2001). Strategic negotiation in multiagent environments. Cambridge, MA: The MIT Press. Kraus, S., Sycara, K., & Evenchik, A. (1998). Reaching agreements through argumentation: A logical model and implementation. Artificial Intelligence, 104, 1–69. Krause, P., Ambler, S., Elvang-Goransson, M., & Fox, J. (1995). A logic of argumentation for reasoning under uncertainty. Computational Intelligence, 11(1), 113–131. Krawczak, M., & Schmidtke, J. (1994). DNA fingerprinting. Medical Perspectives Series. Oxford: BIOS Scientific. Kreibich, C., & Crowcroft, J. (2004). Honeycomb: Creating intrusion detection signatures using honeypots. Computer Communication Review, 34(1), 51–56. Kreibich, C., & Jahnke, M. (Eds.). (2010). Detection of Intrusions and Malware, and Vulnerability Assessment: Proceedings of the seventh international conference (DIMVA 2010), Bonn, Germany, July 8–9, 2010. (Lecture Notes in Computer Science, Vol. 6201). Berlin: Springer. Kristensen, T. (2010). Fingerprint identification: A support vector machine approach. In Proceedings of the second International Conference on Agents and Artificial Intelligence, ICAART 2010, Valencia, Spain, 22–24 January 2010, pp. 451–458. Krogman, W. M., & I¸˙scan, M. Y. (1986). The human Skeleton in forensic medicine. Springfield, IL: Charles C. Thomas. Kronman, A. T. (1988). Jurisprudential responses to legal realism. Cornell Law Review, 73, 335–340. Also, New Haven, CT: Yale Law School, Faculty Scholarship Series, Paper 1061. http://digitalcommons.law.yale.edu/fss_papers/1061 or: http://digitalcommons. law.yale.edu/cgi/viewcontent.cgi?article=2060&context=fss_papers&sei-redir=1# search=“Kronman+“Jurisprudential+responses+to+legal+realism”” Kruse, W., & Heiser, J. (2002). Computer forensics: Incident response essentials. Reading, MA: Addison-Wesley. Kruskal, J. B., & Wish, M. (1978). Multidimensional scaling. Beverly Hills, CA & London: Sage. Ku, Y., Chen, Y.-C., & Chiu, C. (2007a). A proposed data mining approach for internet auction fraud detection. In C. C. Yang, D. Zeng, M. Chau, K. Chang, Q. Yang, X. Cheng, et al. (Eds.), Intelligence and security informatics: Proceedings of the Pacific Asia Workshop, PAISI 2007, Chengdu, China, April 11–12, 2007 (pp. 238–243). (Lecture Notes in Computer Science, 4430). Berlin: Springer. Ku, Y., Chen, Y.-C., Wu., K.-C., & Chiu, C. (2007b). An empirical analysis of online gaming crime characteristics from 2002 to 2004. In C. C. Yang, D. Zeng, M. Chau, K. Chang, Q. Yang, X. Cheng, et al. (Eds.), Intelligence and security informatics: Proceedings of the Pacific Asia Workshop, PAISI 2007, Chengdu, China, April 11–12, 2007 (pp. 34–45). (Lecture Notes in Computer Science, 4430). Berlin: Springer. Kuflik, T., Nissan, E., & Puni, G. (1989). Finding excuses with ALIBI: Alternative plans that are deontically more defensible. In Proceedings of the International Symposium on Communication, Meaning and Knowledge vs. 
Information Technology, Lisbon, September. Then again in Computers and Artificial Intelligence, 10(4), 297–325, 1991. Then in a selection from the Lisbon conference: Lopes Alves, J. (Ed.). (1992). Information technology & society: Theory, uses, impacts (pp. 484–510). Lisbon: Associação Portuguesa para o Desenvolvimento das Comunicações (APDC), & Sociedade Portuguesa de Filosofia (SPF). Kuglin, C. D., & Hines, D. C. (1975). The phase correlation image alignment method. In Proceedings of the international conference on cybernetics and society, pp. 163–165.

Kumar, A., Wong, D. C. M., Shen, H. C., & Jain, A. K. (2003). Personal verification using palmprint and hand geometry biometric. In Proceedings of the fourth international conference on Audio and Video-Based Biometric Person Authentication (AVBPA), Guildford, UK, pp. 668–678. Kurosawa, A. (director). (1950). Rashomon (a film). Producer: Daiei (Japan). Script: T. Matsuama. Based on the short story In the Forest (1921), by R. Akutagawa. Kurzon, D. (1985). How lawyers tell their tales. Poetics, 14, 467–481. Kvart, I. (1994). Overall positive causal impact. Canadian Journal of Philosophy, 24(2), 205–227. Kwan, M., Chow, K. P., Law, F., & Lai, P. (2008). Reasoning about evidence using Bayesian networks. In I. Ray & S. Shenoy (Eds.), Advances in Digital Forensics IV, International Federation for Information Processing (IFIP), Tokyo, January 2008 (pp. 142–155). Berlin: Springer. Kwan, M. Y. K., Overill, R. E., Chow, K. P., Silomon, J. A. M., Tse, H., Law, F. Y. W., et al. (2010). Internet auction fraud investigations. Chapter 7 in Advances in Digital Forensics VI: Proceedings of the 6th annual IFIP WG 11.9 international conference on digital forensics, Hong Kong, 3–6 January 2010. Berlin: Springer, on behalf of the International Federation for Information Processing (IFIP), pp. 95–106. http://www.kcl.ac.uk/staff/richard/IFIP_2010 Labor, E. (1994). Review of: J. Watson, Forensic Fictions: The Lawyer Figure in Faulkner (Athens, Georgia, U.S.A.: University of Georgia Press, 1993). American Literature,64 66(4), 858–859. LaFond, T., & Neville, J. (2010). Randomization tests for distinguishing social influence and homophily effects. In Proceedings of the International World Wide Web Conference (WWW). http://www.cs.purdue.edu/homes/neville/papers/lafond-neville-www2010.pdf Lagerwerf, L. (1998). Causal connectives have presuppositions: Effects on coherence and discourse structure. Doctoral dissertation, Netherlands Graduate School of Linguistics, Vol. 10. The Hague, The Netherlands: Holland Academic. Lakoff, G. P. (1972). Structural complexity in fairy tales. The Study of Man, 1, 128–150. Lam, S. C. J. (2007). Methods for resolving inconsistencies in ontologies. Ph.D. thesis. Department of Computing Science, University of Aberdeen, Aberdeen, Scotland. Lamarque, P. V., & Olsen, S. H. (1994). Truth, fiction, and literature: A philosophical perspective. Oxford: Clarendon Press. Landman, F. (1986). Towards a theory of information: The status of partial objects in semantics. Dordrecht, The Netherlands: Foris. Lane, B., Tingey, M., & Tingey, R. (Eds.). (1993). The encyclopedia of forensic science. London: Headline. Lane, S. M., & Zaragoza, M. S. (2007). A little elaboration goes a long way: The role of generation in eyewitness suggestibility. Memory & Cognition, 35(6), 125–126. Lang, R. R. (2003). Story grammars: Return of a theory. Chapter 12 In M. Mateas & P. Sengers (Eds.), Narrative intelligence (pp. 199–212). Amsterdam: Benjamins. Langbein, J. H. (1977). Torture and the law of proof: Europe and England in the Ancien Régime. Chicago: University of Chicago Press. Lange, T. E., & Dyer, M. G. (1989). High-level inferencing in a connectionist network. Connection Science, 1, 181–217. ftp://ftp.cs.ucla.edu/tech-report/198_-reports/890063.pdf Lange, T. E., & Wharton, C. M. (1992). Remind: Retrieval from episodic memory by inferencing and disambiguation. Technical Report 920047. Los Angeles, CA: Computer Science Department, University of California, Los Angeles.
ftp://ftp.cs.ucla.edu/tech-report/1992-reports/920047.pdf Langenburg, G. M. (2004). Pilot study: A statistical analysis of the ACE-V methodology – Analysis stage. Journal of Forensic Identification, 54, 64–79. Langston, M. C., Trabasso, T., & Magliano, J. P. (1999). A connectionist model of narrative comprehension. In A. Ram & K. Moorman (Eds.), Understanding language understanding (pp. 181–226). Cambridge, MA: MIT Press.

64 The journal American Literature is published in Durham, NC, by Duke University Press.

Lara-Rosano, F., & del Socorro Téllez-Silva, M. (2003). Fuzzy support systems for discretionary judicial decision making. In V. Palade, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 7th international conference, KES 2003, Oxford, UK, September 3–5, 2003, Proceedings, Part II (pp. 94–100). (Lecture Notes in Computer Science, LNAI, Vol. 2774.) Berlin: Springer. Lassiter, D. (Ed.). (2004). Interrogations, confessions and entrapment. Dordrecht, The Netherlands: Kluwer. Latendresse, M. (2005). Masquerade detection via customized grammars. In K. Julisch & C. Krügel (Eds.), Detection of intrusions and malware, and vulnerability assessment: Proceedings of the second international conference (DIMVA 2005), Vienna, Austria, July 7–8, 2005 (pp. 141–159). Lecture Notes in Computer Science, Vol. 3548. Berlin: Springer. Latour, B. (1986). Editorial. Technologos, 3 (Paris: Laboratoire d’Informatique pour les Sciences de l’Homme), 3–5. Latourette, K. S. (1936). Biographical Memoir of Berthold Laufer, 1874–1934. Biographical Memoirs, 18(3). National Academy of Sciences of the United States of America. http://books. nap.edu/html/biomems/blaufer.pdf Laufer, B. (1913). History of the fingerprint system. In Smithsonian Report for 1912 (pp. 631–652) (with 7 plates). Washington, DC: Smithsonian Institution. Laufer, B. (1917, May 25). Concerning the history of finger-prints. Science, 504, 505. Laughery, K. R., & Fowler, R. H. (1980). Sketch artist and Identi-kit procedures for recalling faces. Journal of Applied Psychology, 65(3), 307–316. Laurel, B. (1986). Towards the design of a computer-based interactive fantasy system.Ph.D. Dissertation. Cleveland, OH: The Ohio State University. Lauritsen, J. L. (2005). Social and scientific influences on the measurement of criminal victimiza- tion. Journal of Quantitative Criminology, 21(3), 245–266. Lauritsen, J. L. (2010). Advances and challenges in empirical studies of victimization. Journal of Quantitative Criminology, 26(4), 501–508. Lauritzen, S. L., & Mortera, J. (2002). Bounding the number of contributors to mixed DNA stains. Forensic Science International, 130, 125–126. Laxman, S., & Sastry, P. S. (2006). A survey of temporal data mining. SADHANA, Academy Proceedings in Engineering Sciences, 31(2), 173–198. Leach, A.-M., Talwar, V., Lee, K., Bala, N., & Lindsay, R. C. L. (2004). “Intuitive” lie detection of children’s deception by law enforcement officials and university students. Law and Human Behavior, 28, 661–685. Leake, D. B. (1992). Evaluating explanations: A content theory. Hillsdale, NJ: Erlbaum. Leake, D. B. (1994). Accepter: Evaluating explanations. In R. C. Schank, A. Kass, & C. K. Riesbeck (Eds.), Inside case-based explanation (pp. 167–206). Hillsdale, NJ: Erlbaum. Leake, D. B. (1996). CBR in context: The present and the future. Chapter 1 In D. B. Leake (Ed.), Case-based reasoning: Experiences, lessons, and future directions. Menlo Park, CA: AAAI Press, and Cambridge, MA: MIT Press. Leary, R. M. (2001). Evaluation of the impact of the FLINTS software system in West Midlands and Elsewhere. London: Home Office Policing & Reducing Crime Unit, The Home Office. Leary, R. M. (2002). The role of the National Intelligence Model and FLINTS in improving police performance. London: Home Office Policing & Reducing Crime Unit, The Home Office. http:// www.homeoffice.gov.uk/docs2/ Leary, R. M. (2004). Evidential reasoning and analytical techniques in criminal pre-trial fact investigation. Ph.D. 
thesis, University College, London. Leary, R. M., VanDenBerghe, W., & Zeleznikow, J. (2003a). User requirements for financial fraud modeling. In Proceedings of BILETA 2003: British & Irish Law, Education & Technology Association 18th annual conference. Leary, R. M., Vandenberghe, W., & Zeleznikow, J. (2003b). Towards a financial fraud ontology: A legal modelling approach. Presentation at the ICAIL Workshop on Legal Ontologies and Web Based Legal Information Management. Originally a technical report; Edinburgh, Scotland:
Joseph Bell Centre for Forensic Statistics and Legal Reasoning, School of Law, University of Edinburgh. http://www.forensic-pathways.com/PDFs/Leary-Ontology.pdf Lebbah, M., Bennani, Y., & Rogovschi, N. (2009). Learning self-organizing maps as a mixture Markov models. In Proceedings of the third International Conference on Complex Systems and Applications (ICCSA’09), Le Havre, Normandy, France, June 29–July 02, 2009. pp. 54–59. Lebowitz, M. (1983). Creating a story-telling universe. In Proceedings of the 8th international joint conference on artificial intelligence, pp. 63–65. Lee, C.-J., & Wang, S.-D. (1999). A Gabor filter-base approach to fingerprint recognition. In Proceedings of the IEEE workshop on Signal Processing Systems (SiPS 99), pp. 371–378. Lee, G., Flowers, M., & Dyer, M. G. (1992). Learning distributed representations for conceptual knowledge and their application to script-based story processing. Chapter 11 In N. Sharkey (Ed.), Connectionist natural language processing: Readings from connection science (pp. 215– 247). Norwell, MA: Kluwer. Reprinted from Connection Science, 2(4). Lee, H. C., & Gaensslen, R. E. (Eds.). (1991). Advances in fingerprint technology. Elsevier Series in Forensic and Police Science. New York & Amsterdam: Elsevier. 2nd edition published in the CRC Series in Forensic and Police Science, Boca Raton, Florida: CRC Press, 2001. Lee, H. C., Palmbach, T., & Miller, M. T. (2001). Henry Lee’s crime scene handbook. London: Academic. Lee, J.-M., & Hwang, B.-Y. (2005). Two-phase path retrieval method for similar XML document retrieval. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent informa- tion and engineering systems: 9th international conference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part I (pp. 967–971). (Lecture Notes in Computer Science, Vol. 3684.) Berlin: Springer. Lee, R. (1995). An NLToolset-based system for MUC-6. In Proceedings of the sixth Message Understanding Conference (MUC-6). Columbia, MD: DARPA, and San Mateo, CA: Morgan Kaufmann Publishers, pp. 249–261. Lee, R. (1998). Automatic information extraction from documents: A tool for intelligence and law enforcement analysts. In D. Jensen & H. Goldberg (Eds.), Artificial intelligence and link analysis: Papers from the 1998 fall symposium (pp. 63–65). Menlo Park, CA: AAAI Press. Leff, L. (2001). Automated reasoning with legal XML documents. In Proceedings of the eighth international conference on artificial intelligence and law, St. Louis, MO. New York: ACM Press, pp. 215–216. Legrand, J. (1999). Some guidelines for fuzzy sets application in legal reasoning. Artificial Intelligence and Law, 7(2/3), 235–257. Lehmann, F. (Ed.). (1992). Semantic networks in artificial intelligence. Oxford: Pergamon Press. Also published as a special issue of Computers and Mathematics with Applications, 23(6–9). Lehnert, W., & Loiselle, C. (1985). Plot unit recognition for narratives. In G. Tonfoni (Ed.), Artificial intelligence and text-understanding: Plot units and summarization procedures (pp. 9–47). Quaderni di Ricerca Linguistica, Vol. 6. Parma, Italy: Edizioni Zara. Lehnert, W. G. (1977). Question answering in a story understanding system. Cognitive Science, 1, 47–73. Lehnert, W. G. (1978). The process of question answering. Hillsdale, NJ: Erlbaum. Lehnert, W. G. (1981). Plot units and narrative summarization. Cognitive Science, 4, 293–331. Lehnert, W. G. (1982). Plot units: A narrative summarization strategy. In W. G. Lehnert & M. H. 
Ringle (Eds.), Strategies for natural language processing (pp. 375–412). Hillsdale, NJ: Erlbaum. Lehnert, W. G., Alker, H., & Schneider, D. (1983). The heroic Jesus: The affective plot structure for Toynbee’s Christus Patiens. In S. K. Burton & D. D. Short (Eds.), Proceedings of the sixth international conference on computers and the humanities (pp. 358–367). Rockville, MD: Computer Science Press. Lehnert, W. G., Dyer, M. G., Johnson, P. N., Yang, C. J., & Harley, S. (1983). BORIS: An experiment in in-depth understanding of narratives. Artificial Intelligence, 20(2), 15–62.

Leippe, M. R. (1985). The influence of eyewitness nonidentifications on mock jurors’ judgments of a court case. Journal of Applied Social Psychology, 15, 656–672. Leith, P. (1998). The judge and the computer: How best ‘decision support’? Artificial Intelligence and Law, 6, 289–309. Lemar, C., & Chilvers, D. R. (1995). Litigation support (3rd ed.). (The Coopers & Lybrand Guide to the Financial Assessment of Damages and Forensic Accounting.) London: Butterworths. Lemons, J., Shrader-Frechette, K., & Cranor, C. (1997). The precautionary principle: Scientific uncertainty and Type I and Type II errors. In M. Kaiser (Ed.), The precautionary principle and its implications for science. Special issue of Foundations of Science, 2(2), 207–236. Lempert, R. (1977). Modeling relevance. Michigan Law Review, 75, 1021–1057. Lenat, D., & Guha, R. V. (1990). Building large knowledge-based systems: Representation and inference in the CYC project. Reading, MA: Addison Wesley. Lenci, A., Bel, N., Busa, F., Calzolari, N., Gola, E., Monachini, M., et al. (2000). SIMPLE: A general framework for the development of multilingual lexicons. International Journal of Lexicography, 13(4), 249–263. Lengers, R. J. C. (1995). Evolving artificial neural networks: A design approach. Masters Thesis. Tilburg, The Netherlands: Tilburg University. Lenzi, V. B., Biagioli, C., Cappelli, A., Sprugnoli, R., & Turchi, F. (2009). The LME project: Legislative metadata based on semantic formal models. International Journal of Metadata, Semantics and Ontologies, 4(3), 154–164. Leo, R. A. (2005). Re-thinking the study of miscarriages of justice: Developing a criminology of wrongful conviction. Journal of Contemporary Criminal Justice, 21, 201–223. Leo, R. A. (2008). Police interrogation and American justice. Cambridge, MA: Harvard University Press. Leo, R. A., Drizin, S., Neufeld, P., Hall, B., & Vatner, A. (2006). Bringing reliability back in: False confessions and legal safeguards in the twenty-first century. Wisconsin Law Review, 2006, 479–539. Leo, R. A., & Ofshe, R. J. (1998). The consequences of false confessions: Deprivations of liberty and miscarriages of justice in the age of psychological interrogation. Journal of Criminal Law and Criminology, 88, 429–496. Leon, C., Peinado, F., Navarro, A., & Cortiguera, H. (2008). An intelligent plot-centric interface for mastering computer role-playing games. In U. Spierling & N. Szilas (Eds.), Proceedings of the first international conference on interactive digital storytelling, Erfurt, Germany, 26–29 November 2008 (pp. 321–324). (Lecture Notes in Computer Science, Vol. 5334). Berlin: Springer. Lerti (2006). Note d’information 4, version 0.94, dated 23 September 2006, produced in France by Lerti: La preuve informatique. Accessible on the Web. Leskovec, J., Chakrabarti, D., Kleinberg, J., & Faloutsos, C. (2005). Realistic, mathematically tractable graph generation and evolution, using Kronecker multiplication. In Proceedings of ECML/PKDD 2005, Porto, Portugal, 2005. http://www.cs.cmu.edu/~jure/pubs/kronecker- pkdd05.pdf http://ecmlpkdd05.liacc.up.pt Leskovec, J., & Faloutsos, C. (2007). Scalable modeling of real graphs using Kronecker multipli- cation. In Proceedings of ICML 2007, Corvallis, OR. Leskovec, J., Kleinberg, J., & Faloutsos, C. (2005). Graphs over time: Densification laws, shrinking diameters and possible explanations. In Proceedings of the 2005 international conference on Knowledge Discovery and Data mining (KDD 2005), Chicago, IL, August 2005. http://www. 
cs.cmu.edu/~christos/PUBLICATIONS/icdm05-power.pdf Leskovec, J., Singh, A., & Kleinberg, J. (2006). Patterns of influence in a recommendation network. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2006. http://www.cs.cornell.edu/home/kleiber/pakdd06-cascade.pdf Lester, J. C., & Stone, B. A. (1997). Increasing believability in animated pedagogical agents. In W. L. Johnson (Ed.), Autonomous Agents ’97, Marina del Rey, California (pp. 16–21). New York: ACM Press.

Leung, W. F., Leung, S. H., Lau, W. H., & Luk, A. (1991). Fingerprint recognition using neural networks. In Proceedings of the IEEE workshop on neural networks for signal processing, pp. 226–235. Levene, M. (1992). The nested universal relation database model. (Lecture Notes in Computer Science, Vol. 595.) Berlin: Springer. Levene, M., & Loizou, G. (1990). The nested relation type model: An application of domain theory to databases. The Computer Journal, 33(1), 19–30. Levene, M., & Loizou, G. (1993). Semantics of null extended nested relations. ACM Transactions on Database Systems, 18, 414–459. Levene, M., & Loizou, G. (1994). The nested universal relation data model. Journal of Computer and System Sciences, 49, 683–717. Levi, J. N. (1994). Language as evidence: The linguist as expert witness in North American courts. Forensic Linguistics, 1(1), 1–26. Levi, M. (1998). Perspectives on ‘organized crime’: An overview. Howard Journal of Criminal Justice, 37, 1–11. Levine, F. J., & Tapp, J. L. (1982). Eyewitness identification: Problems and pitfalls. In V. J. Konečni & E. E. Ebbesen (Eds.), The criminal justice system: A social psychological analysis (pp. 99–127). San Francisco, CA: Freeman. Levine, T. R., Kim, R. K., & Blair, J. P. (2010). (In)accuracy at detecting true and false confessions and denials: An initial test of a projected motive model of veracity judgments. Human Communication Research, 36, 82–102. Levinson, J. (2000). Questioned documents: A lawyer’s handbook. London & San Diego, CA: Academic. Levinson, J. (2001). Questioned documents: A lawyer’s handbook. San Diego, CA: Academic Press. Levitt, T. S., & Laskey, K. B. (2002). Computational inference for evidential reasoning in support of judicial proof. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 345–383). (Studies in Fuzziness and Soft Computing, Vol. 94). Heidelberg, Germany: Physica-Verlag. Lewis, D. (1973). Counterfactuals. Oxford: Blackwell. Lewis, D. (1997). Finkish dispositions. The Philosophical Quarterly, 47(187), 143–158. Lewis, C. M., & Sycara, K. (1993). Reaching informed agreement in multispecialist cooperation. Group Decision and Negotiation, 2(3), 279–300. Lewis, P. R., Gagg, C., & Reynolds, K. (2004). Forensic materials engineering: Case studies. Boca Raton, FL: CRC Press. Lewis, P. R., & Hainsworth, S. (2006). Fuel line failure from stress corrosion cracking. Engineering Failure Analysis, 13, 946–962. Li, Ch.-Ts. (2008). Multimedia forensics and security. Hershey, PA: IGI Global. Li, J., Zheng, R., & Chen, H. (2006). From fingerprint to writeprint. Communications of the ACM, 49(4), 76–82. Li, M., Yunhong, W., & Tan, T. (2002). Iris recognition using circular symmetric filters. IEEE International Conference on Pattern Recognition (ICPR 2002), 2, 414–417. Li, S. Z. (2009). Mathematical MRF models. In S. Z. Li (Ed.), Markov random field modeling in image analysis (3rd ed.) (Advances in Computer Vision and Pattern Recognition series; originally published in the series: Computer Science Workbench). Both softcover and hardcover editions, 2009, pp. 1–28. Previous edition (also cited), Tokyo: Springer, 2001. Li, Q., Niu, X., Wang, Zh., Jiao, Y., & Sun, Sh. H. (2005). A verifiable fingerprint vault scheme. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 9th international conference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part III (pp. 1072–1078).
(Lecture Notes in Computer Science, Vol. 3684.) Berlin: Springer. Li, S. Z., & Jain, A. K. (Eds.). (2009). Encyclopedia of biometrics. New York: Springer.

Li, Z., Yang, M. C., & Ramani, K. (2009). A methodology for engineering ontology acquisition and validation. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 23(1), 37–51. Liang, T., & Moskowitz, H. (1992). Integrating neural networks and semi-Markov processes for automated knowledge acquisition: An application to real-time scheduling. Decision Sciences, 23(6), 1298–1314. Light, R. (1997). Presenting XML. Indianapolis, IN: Sams.net. Linde, C. (1993). Life stories: The creation of coherence. New York: Oxford University Press. Lindsay, R. C. L., Lim, R., Marando, L., & Cully, D. (1986). Mock-juror evaluations of eyewitness testimony: A test of metamemory hypotheses. Journal of Applied Social Psychology, 15, 447–459. Lindsay, R. C. L., & Malpass, R. S. (1999). Measuring lineup fairness. Special issue of Applied Cognitive Psychology, 13, S1–S7. Lindsay, R. C. L., Ross, D. F., Read, J. D., & Toglia, M. P. (Eds.). (2006). Handbook of eyewitness psychology: Memory for people. Mahwah, NJ: Lawrence Erlbaum Associates. Lindsay, R. C. L., & Wells, G. L. (1980). What price justice? Exploring the relationship between lineup fairness and identification accuracy. Law and Human Behavior, 4, 303–314. Linford, N. (2002). The English heritage geophysical survey database. http://www.eng-h.gov.uk/SDB/ Lingras, P., & Peters, G. (2011). Rough clustering. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(1), 64–72. doi://10.1002/widm.16 Lipske, M. (1999). Forest owlet thought to be extinct is spotted anew. Smithsonian Institution Research Reports, No. 96, Spring 1999. http://www.si.edu/opa/researchreports/9996/owlet.htm Lipton, P. (2004). Inference to the best explanation (2nd ed.) (revised, augmented). London & New York: Routledge. Lipton, P. (2007). Alien abduction: Inference to the best explanation and the management of testimony. Episteme, 4(3), 238–251. Lisetti, C. L., & Schiano, D. J. (2000). Automatic facial expression interpretation: Where human-computer interaction, artificial intelligence and cognitive science intersect. In I. E. Dror & S. V. Stevenage (Eds.), Facial information processing: A multidisciplinary perspective. Special issue of Pragmatics & Cognition, 8(1): 185–235. Liu, H., & Motoda, H. (1998). Feature selection for knowledge discovery and data mining. Dordrecht, The Netherlands: Kluwer. Liu, H., & Singh, P. (2002). MAKEBELIEVE: Using commonsense knowledge to generate stories. In Proceedings of the eighteenth national conference on artificial intelligence and fourteenth conference on innovative applications of artificial intelligence, pp. 957–958. http://web/media.mit.edu/~hugo/publications/papers/AAAI2002-makebelieve.pdf Liu, D., Yue, J., Wang, X., Raja, A., & Ribarsky, W. (2008). The role of blackboard-based reasoning and visual analytics in RESIN’s predictive analysis. In Proceedings of 2008 IEEE/WIC/ACM international conference on Intelligent Agent Technology (IAT 2008), Sydney, December 9–12, 2008, pp. 508–511. An extended version is: CVC Technical Report CVC-UNCC-08-29, Charlotte: University of North Carolina, July 2008. http://www.sis.uncc.edu/~anraja/PAPERS/IAT08-Final.pdf Llewellyn, K. N. (1962). Jurisprudence: Realism in theory and practice. Chicago, IL: University of Chicago Press. Published again, with a new introduction by J. J. Chriss, in Somerset, NJ: Transaction; distrib. London: Eurospan, 2008. [Compilation of writings from the 1930s through the 1950s.] Lloyd, C. (1995).
Forensic psychiatry for health professionals. London: Chapman & Hall. Locard, E. (1930). Analysis of dust traces. American Journal of Police Science, 1(276), 401–496. Locard, E. (1937). La Criminalistique à l’usage des gens du monde et des auteurs de romans policiers. Lyon: Desvignes et Cie. Lodder, A. R. (2004). Law, logic, rhetoric: A procedural model of legal argumentation. In S. Rahman & J. Symons (Eds.), Logic, epistemology and the unity of science (pp. 569–588). Dordrecht: Kluwer.

Lodder, A. R., & Zeleznikow, J. (2005). Developing an online dispute resolution environment: Dialogue tools and negotiation systems in a three step model. Harvard Negotiation Law Review, 10, 287–338. Lodder, A. R., & Zeleznikow, J. (2010). Enhanced dispute resolution through the use of information technology. Cambridge: Cambridge University Press. Loftus, E. F. (1974). Reconstructing memory: The incredible witness. Psychology Today, 8, 116–119. Loftus, E. F. (1975). Leading questions and the eye witness report. Cognitive Psychology, 7, 560–572. Loftus, E. F. (1976). Unconscious transference in eyewitness identification. Law and Psychology Review, 2, 93–98. Loftus, E. F. (1979). Eyewitness testimony. Cambridge, MA: Harvard University Press. (Revised edn.: 1996). Loftus, E. F. (1980). Impact of expert psychological testimony on the unreliability of eye- witness identification. Journal of Applied Psychology, 65, 915. Loftus, E. F. (1981a). Eyewitness testimony: Psychological research and legal thought. In N. Morris & M. Tonry (Eds.), Crime and justice 3. Chicago: University of Chicago Press. Loftus, E. F. (1981b). Mentalmorphosis: Alteration in memory produced by the bonding of new information to old. In J. Long & A. Baddeley (Eds.), Attention and performance IX (pp. 417– 434). Hillsdale, NJ: Lawrence Erlbaum Associates. Loftus, E. F. (1983). Silence is not golden. American Psychologist, 38, 9–15. Loftus, E. F. (1986a). Experimental psychologist as advocate or impartial educator. Law and Human Behavior, 10, 63–78. Loftus, E. F. (1986b). Ten years in the life of an expert witness. Law and Human Behavior, 10, 241–263. Loftus, E. F. (1987). Trials of an expert witness. In the My Turn column, in Newsweek, 109, 29 June 1987, pp. 10–11. Loftus, E. F. (1991). Resolving legal questions with psychological data. American Psychologist, 46, 1046–1048. Loftus, E. F. (1993a). The reality of repressed memories. American Psychologist, 48, 518–537. http://faculty.washington.edu/eloftus/Articles/lof93.htm Loftus, E. F. (1993b). Psychologists in the eyewitness world. American Psychologist, 48, 550–552. Loftus, E. F. (Sept. 1997). Creating false memories. Scientific American, 277, 70–75. http://faculty. washington.edu/eloftus/Articles/sciam.htm Loftus, E. F. (1998). The price of bad memories. Skeptical Inquirer, 22, 23–24. Loftus, E. F. (2002). Memory faults and fixes. Issues in Science and Technology, 18(4), National Academies of Science, 2002, pp. 41–50. http://faculty.washington.edu/eloftus/ Articles/IssuesInScienceTechnology02%20vol%2018.pdf Loftus, E. F. (2003a). Our changeable memories: Legal and practical implications. Nature Reviews: Neuroscience, 4, 231–234. http://faculty.washington.edu/eloftus/Articles/2003Nature.pdf Loftus, E. F. (2003b). Make-believe memories. American Psychologist, 58(11), 867–873. Posted at: http://faculty.washington.edu/eloftus/Articles/AmerPsychAward+ArticlePDF03%20(2).pdf Loftus, E. F. (2005). Planting misinformation in the human mind: A 30-year investigation of the malleability of memory. Learning and Memory, 12, 361–366. Loftus, E. F., Donders, K., Hoffman, H. G., & Schooler, J. W. (1989). Creating new memories that are quickly accessed and confidently held. Memory and Cognition, 17, 607–616. Loftus, E. F., & Doyle, J. M. (1997). Eyewitness testimony: Civil and criminal. Charlottesville, VA: Lexis Law Publishing. Loftus, E. F., & Greene, E. (1980). Warning: Even memory for faces may be contagious. Law and Human Behavior, 4, 323–334. Loftus, E. F., & Hoffman, H. G. (1989). 
Misinformation and memory: The creation of new memories. Journal of Experimental Psychology: General, 118, 100–104. http://faculty.washington.edu/eloftus/Articles/hoff.htm

Loftus, E. F., & Ketcham, K. (1991). Witness for the defense: The accused, the eyewitness and the expert who puts memory on trial. New York: St. Martin’s Press. Loftus, E. F., & Ketcham, K. (1994). The Myth of repressed memory: False memories and allegations of sexual abuse. New York: St. Martin’s Press. Loftus, E. F., & Loftus, G. R. (1980). On the permanence of stored information in the brain. American Psychologist, 35, 409–420. Loftus, E. F., Loftus, G. R., & Messo, J. (1987). Some facts about ‘weapon focus’. Law and Human Behavior, 11, 55–62. Loftus, E. F., Miller, D. G., & Burns, H. J. (1978). Semantic integration of verbal information into a visual memory. Journal of Experimenal Psychology: Human Learning and Memory, 4, 19–31. Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behaviour, 13, 585–589. Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric Annals, 25(12), 720–725. Loftus, E. F., & Rosenwald, L. A. (1993) Buried memories, shattered lives. American Bar Association Journal, 79, 70–73. Loftus, E. F., Weingardt, J. W., & Wagenaar, W. A. (1985). The fate of memory: Comment on McCloskey and Zaragoza. Journal of Experimental Psychology: General, 114, 375–380. Loftus, E. F., & Zanni, G. (1975). Eyewitness testimony: The influence of the wording of a question. Bulletin of the Psychonomic Society, 5, 86–88. Loh, W.-Y. (2011). Classification and regression trees. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(1), 14–23. doi://10.1002/widm.8 Lonergan, M. C., Severin, E. J., Doleman, B. J., Beaber, S. A., Grubbs, R. H., & Lewis, N. S. (1996). Array-based vapor sensing using chemically sensitive, carbon black-polymer resistors. Chemistry of Materials, 8, 2298–2312. Longley, P. A., Goodchild, M. F., Maguire, D. J., & Rhind, D. W. (2001). Geographic information systems and science.NewYork:Wiley. Lönneker, B. (2005). Narratological knowledge for natural language generation. In Proceedings of the 10th European workshop on natural language generation, Aberdeen, Scotland, August 2005, pp. 91–100. Lönneker, B., & Meister, J. C. (2005). “Dream on”: Designing the ideal story generator algorithms. Short communication at the session on story generators: Approaches for the generation of lit- erary artefacts. At the ACH/ALLC 2005 Conference, of the Association for Computing and the Humanities and the Association for Linguistic and Literary Computing. Lönneker, B., Meister, J. C., Gervás, P., Peinado, F., & Mateas, G. (2005). Story generators: Approaches for the generation of literary artefacts. Session at the ACH/ALLC 2005 Conference, of the Association for Computing and the Humanities and the Association for Linguistic and Literary Computing. In Proceedings of the 17th joint international conference of the Association for Computers and the Humanities and the Association for Literary and Linguistic Computing (ACH/ALLC 2005 Conference Abstracts), Victoria, BC, Canada, June 15–18, 2005, pp. 126–133. Lonsdorf, R. G. (1995). Review of H. Bluestone, S. Travin, & D. Marlowe, Psychiatric-legal deci- sion making by the mental health practitioner: The clinician as de facto magistrate. (New York: Wiley, 1994). The Journal of Legal Medicine, 16(2), 319–324. Lord, J. (1971). Duty, honour, empire. London: Hutchinson. Louchart, S., & Aylett, R. (2003). Towards a narrative theory of virtual reality. 
Virtual Reality, 7(1), 2–9. Loui, R. P., & Norman, J. (1995). Rationales and argument moves. Artificial Intelligence and Law, 2(3), 159–190. Loui, R. P., Norman, J., Alpeter, J., Pinkard, D., Craven, D., Lindsay, J., et al. (1997). Progress on Room 5: A testbed for public interactive semi-formal legal argumentation. In Proceedings of the sixth International Conference on Artificial Intelligence and Law (ICAIL 1997). New York: ACM Press, pp. 207–214.

Louis, J.-H. (1987). L’engrenage de la violence. La guerre psychologique aux États-Unis pendant la Seconde Guerre Mondiale. Paris: Payot. Löwe, B., & Pacuit, E. (2008). An abstract approach to reasoning about games with mistaken and changing beliefs. Australasian Journal of Logic, 6, 162–181. http://www.philosophy.unimelb.edu.au/ajl/2008 Löwe, B., Pacuit, E., & Saraf, S. (2008). Analyzing stories as games with changing and mistaken beliefs. Technical report. ILLC Publications PP-2008-31. Amsterdam: Institute for Logic, Language and Computation of the University of Amsterdam. [This is the version we referred to.] Later published as: Identifying the structure of a narrative via an agent-based logic of preferences and beliefs: Formalizations of episodes from CSI: Crime Scene Investigation™. In M. Duvigneau & D. Moldt (Eds.), Proceedings of the fifth international workshop on Modelling of Objects Components and Agents, MOCA’09, Hamburg, 2009 [FBI-HH-B-290/09], pp. 45–63. Loyall, A. B. (1997). Believable agents: Building interactive personalities (Technical Report CMU-CS-97-123). Pittsburgh, PA: School of Computer Science, Carnegie Mellon University. Retrieved from http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/oz/web/papers/CMU-CS-97-123.ps Loyka, S. A., Faggiani, D. A., & Karchmer, C. (2005). The production and sharing of intelligence. Vol. 4: Protecting your community from terrorism. Washington, DC: Office of Community Oriented Policing Services and the Police Executive Research Forum. http://www.cops.usdoj.gov/mime/open.pdf?Item=1438 Lu, Q., Korniss, G., & Szymanski, B. K. (2009). The naming game on social networks: Community formation and consensus engineering. Journal of Economic Interaction and Coordination, 4(2), 221–235. Lucas, R. (1986). An expert system to detect burglars using a logic language and a relational database. Fifth British National Conference on Databases, Canterbury, Kent, England. Lucy, D. (2005). Introduction to statistics for forensic scientists. Chichester: Wiley. Luger, G. F., & Stubblefield, W. A. (1998). Artificial intelligence: Structures and strategies for complex problem solving (3rd ed.). Reading, MA: Addison Wesley Longman. Luhmann, T., Robson, S., Kyle, S., & Harley, I. (2006). Close range photogrammetry: Principles, techniques and applications. (Translated from the German.) Scotland: Whittles Publishing. Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2, 159–165. Lukáš, J., Fridrich, J.,65 & Goljan, M. (2006). Detecting digital image forgeries using sensor pattern noise. In Proceedings of the SPIE, Vol. 6072. Lutomski, L. S. (1989). The design of an attorney’s statistical consultant. In Proceedings of the second international conference of artificial intelligence and law. New York: ACM Press, pp. 224–233. Luus, C. A. E., & Wells, G. L. (1994). The malleability of eyewitness confidence: Co-witness and perseverance effects. Journal of Applied Psychology, 79, 714–723. Lykken, D. T. (1998). A tremor in the blood: Uses and abuses of the lie detector. Reading, MA: Perseus Books. Maas, A., & Köhnken, G. (1989). Eye-witness identification: Simulating the ‘weapon effect’. Law and Human Behavior, 13, 397–409. MacCormick, D. N. (1981). H. L. A. Hart. Stanford, CA: Stanford University Press. MacCormick, N. (1980). The coherence of a case and the reasonableness of doubt. The Liverpool Law Review, 2, 45–50. MacCormick, N. (1995). Argumentation and interpretation in law. Argumentation, 9, 467–480.

65 This is Jessica Fridrich. Also Juri Fridrich is at the Computer Science Department of Dartmouth College in Hanover, New Hampshire. They both work in the given domain of research.

MacCrimmon, M. (1989). Facts, stories and the hearsay rule. In A. A. Martino (Ed.), Pre-proceedings of the third international conference on “Logica, Informatica, Diritto: Legal Expert Systems”, Florence, 1989 (2 vols. + Appendix) (Vol. 1, pp. 461–475). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche. MacCrimmon, M., & Tillers, P. (Eds.). (2002). The dynamics of judicial proof: Computation, logic, and common sense. (Studies in Fuzziness and Soft Computing, Vol. 94). Heidelberg: Physica-Verlag. MacDonell, H. L. (1993). Bloodstain patterns. Corning, NY: Laboratory of Forensic Science. MacDonell, H. L., & Bialousz, L. F. (1979). Laboratory manual for the geometric interpretation of human bloodstain evidence (2nd ed.). Corning, NY: Laboratory of Forensic Science. MacDougall, K. A., Fenning, P. J., Cooke, D. A., Preston, H., Brown, A., Hazzard, J., et al. (2002). Non intrusive investigation techniques for groundwater pollution studies. Research & Development Technical Report P2-178/TR/10. Bristol, England: Environment Agency. MacLane, S., & Birkhoff, G. (1979). Algebra. London: Macmillan. Macneil, I. (1980). The new social contract: An inquiry into modern contractual relations. New Haven, CT: Yale University Press. Macrae, C. N., Stangor, C., & Hewstone, M. (Eds.). (1996). Stereotypes and stereotyping. New York: Guilford Press. Maedche, A., & Staab, S. (2001). Ontology learning for the Semantic Web. IEEE Intelligent Systems, 16(2), 72–79. Magerko, B., & Laifo, J. (2003). Building an interactive drama architecture with a high degree of interactivity. At the First International Conference on Technologies for Interactive Digital Storytelling and Entertainment (= TIDSE ’03), Darmstadt, Germany, March 2003. Magnenat-Thalmann, N., & Gilles, B. (2007). Towards an individualised physiological model of the musculoskeletal system. In The Digital Patient, special issue of ERCIM News, 69 (April), 25–26. Accessible at the webpage http://ercim-news.ercim.org/ of the European Research Consortium for Informatics and Mathematics. Magnenat Thalmann, N., & Thalmann, D. (1991a). Complex models for visualizing synthetic actors. IEEE Computer Graphics and Applications, 11(5), 32–44. Magnenat Thalmann, N., & Thalmann, D. (1991b). Animation of synthetic actors and 3D interaction. Laboratoire d’Infographie, École Polytechnique Fédérale, Lausanne, and Groupe MIRALab, Université de Génève, Geneva, Switzerland, pp. 27–49. Magnenat Thalmann, N., & Thalmann, D. (Eds.). (1996). Interactive computer animation. London: Prentice Hall. Magnenat Thalmann, N., & Thalmann, D. (Eds.). (2001). Deformable avatars. Dordrecht, The Netherlands: Kluwer. Magnenat Thalmann, N., & Thalmann, D. (Eds.). (2005). Virtual humans: Thirty years of research, what next? The Visual Computer, 21(12), 997–1015. Magnussen, S., Melinder, A., Stridbeck, U., & Raja, A. (2010). Beliefs about factors affecting the reliability of eyewitness testimony: A comparison of judges, jurors and the general public. Applied Cognitive Psychology, 24, 122–133. doi://10.1002/acp.1550 Maguire, M. (2000). Policing by risks and targets: Some dimensions and implications of intelligence-led crime control. In J. Sheptycki (Ed.), special issue on Surveillance and Intelligence-Led Policing, Policing and Society, 9, 315–336. Maguire, M., & John, T. (1995). Intelligence, surveillance and informants: Integrated approaches. Crime Detection and Prevention Series Paper 64. London: Home Office. Mahendra, B. (2007). Expert witness update.
In an Expert Witness Supplement to The New Law Journal, 157(7294) (London, 26 October 2007), 1490–1491. Mahesh, K. (1996). Ontology development for machine translation: Ideology and methodology. Memoranda in Computer and Cognitive Science, MCCS-96-292. Las Cruces, NM: New Mexico State University, Computing Research Laboratory. Maida, A. S. (1991). Maintaining mental models of agents who have existential misconceptions. Artificial Intelligence, 50, 331–383.

Maida, A. S. (1995). Review of Ballim & Wilks (1991). Minds and Machines, 5(2), 277–280. Maida, A. S., & Shapiro, S. C. (1982). Intensional concepts in propositional semantic networks. Cognitive Science, 6(4), 291–330. Maji, P., & Pal, S. K. (2007). RFCM: A hybrid clustering algorithm using rough and fuzzy sets. Fundamenta Informaticae, 80, 477–498. Maley, Y., & Fahey, R. (1991). Presenting the evidence: Constructions of reality in court. International Journal for the Semiotics of Law, 4(10), 3–17. Malinowski, E. R. (1991). Factor analysis in chemistry. New York: Wiley. Maloney, K., Carter, A. L., Jory, S., & Yamashita, B. (2005). Three-dimensional representation of bloodstain pattern analysis. Journal of Forensic Identification, 55(6), 711–725. Maloney, K., Killeen, J., & Maloney, A. (2009). The use of HemoSpat to include bloodstains located on nonorthogonal surfaces in area-of-origin calculations. Journal of Forensic Identification, 59(5), 513–524. http://hemospat.com/papers/pdf/JFI%20-%20HemoSpat%20Using%20Nonorthogonal%20Surfaces.pdf Maloney, A., Nicloux, C., Maloney, K., & Heron, F. (2001). One-sided impact spatter and area-of-origin calculations. Journal of Forensic Identification, 61(2), 123–135. http://hemospat.com/papers/pdf/JFI%20-%20One-Sided%20Impact%20Spatter%20and%20Area-of-Origin%20Calculations.pdf Malpass, R. S., & Devine, P. G. (1981). Eye-witness identification: Lineup instructions and the absence of the offender. Journal of Applied Psychology, 66, 482–489. Malsch, M., & Nijboer, J. F. (Eds.). (1999). Complex cases: Perspectives on the Netherlands criminal justice system. (Series Criminal Sciences). Amsterdam: THELA THESIS. Maltoni, D., Maio, D., Jain, A. K., & Prabhakar, S. (2009). Handbook of fingerprint recognition (2nd ed.). New York: Springer. The 1st edition was published in 2003. Mandler, J. M., & Johnson, N. S. (1977). Remembrance of things parsed: Story structure and recall. Cognitive Psychology, 9, 111–191. Mani, I. (2001). Automatic summarization. (Natural Language Processing, 3). Amsterdam: Benjamins. Mann, S., Vrij, A., & Bull, R. (2004). Detecting true lies: Police officers’ ability to detect suspects’ lies. Journal of Applied Psychology, 89, 137–149. Mannila, H., Toivonen, H., & Verkamo, A. I. (1997). Discovery of frequent episodes in event sequences. Data Mining and Knowledge Discovery, 1(3), 259–289. Manning, C., & Schutze, H. (1999). Foundations of statistical natural language processing. Cambridge, MA: The MIT Press. Manning, K., & Srihari, S. N. (2009). Computer-assisted handwriting analysis: Interaction with legal issues in U.S. courts. In Proceedings of the third international workshop on computational forensics, The Hague, Netherlands. Berlin: Springer. Manouselis, N., Salokhe, G., & Keizer, J. (2009). Agricultural metadata and semantics. Special issue of the International Journal of Metadata, Semantics and Ontologies, 4(1–2). Manschreck, T. C. (1983). Modeling a paranoid mind: A narrower interpretation of the results. [A critique of Colby (1981).] The Behavioral and Brain Sciences, 6(2), 340–341. [Answered by Colby (1983).] Marafioti, L. (2000). Scelte autodifensive dell’indagato e alternative al silenzio. Turin, Italy: Giappichelli. Marcus, P. (2000). The process of interrogating criminal suspects in the United States. In Proceedings of the second world conference on new trends in criminal investigation and evidence, Amsterdam, 10–15 December 1999; = C. M. Breur, M. M. Kommer, J. F. Nijboer, & J. M. Reijntjes. (Eds.). (2000).
New trends in criminal investigation and evidence (Vol. 2, pp. 447–456). Antwerp, Belgium: Intersentia. Mares, E. (2006). Relevance logic. Stanford Encyclopedia of Philosophy (entry revised from an original version of 1998). http://plato.stanford.edu/entries/logic-relevance/ Mares, E., & Meyer, R. K. (2001). Relevant logics. In L. Goble (Ed.), The Blackwell guide to philosophical logic (pp. 280–308). Oxford: Blackwell. Mares, E. D. (1992). Andersonian deontic logic. Theoria, 58, 3–20.

Mares, E. D. (1997). Relevant logic and the theory of information. Synthese, 109, 345–360. Mares, E. D. (2004). Relevant logic: A philosophical interpretation. Cambridge: Cambridge University Press. Marinai, S., & Fujisawa, H. (Eds.). (2008). Machine learning in document analysis and recogni- tion. (Studies on Computational Intelligence, 90). Berlin: Springer. Marineau, R. F. (1989). Jacob Levy Moreno, 1889–1974: Father of psychodrama, sociometry, and group psychotherapy. London: Routledge. Marshall, C. C. (1989). Representing the structure of legal argument. In Proceedings of the second international conference on artificial intelligence and law. New York: ACM Press, pp. 121–127. Martino, A. A. (1997). Quale logica per la politica. In A. A. Martino (Ed.), Logica delle norme (pp. 5–21). Pisa, Italy: SEU: Servizio Editoriale Universitario di Pisa, on behalf of Università degli Studi di Pisa, Facoltà di Scienze Politiche. English translation: A logic for politics. Accessible online at a site of his publications: http://www.antonioanselmomartino.it/index.php? option=com_content&task=view&id=26&Itemid=64 Martino, A. A., & Nissan, E. (Eds.). (1998). Formal models of legal time. Special issue, Information and Communications Technology Law, 7(3). Martino, A. A., & Nissan, E. (Eds.). (2001). Formal approaches to legal evidence. Special issue, Artificial Intelligence and Law, 9(2/3), 85–224. Martins, J. P. (1990). The truth, the whole truth, and nothing but the truth: An indexed bibliography to the literature of truth maintenance systems. AI Magazine, 11(5), 7–25. Martins, J. P., & Shapiro, S. C. (1983). Reasoning in multiple belief spaces. In Proceedings of the eighth International Joint Conference on Artificial Intelligence (IJCAI’83), Karlsruhe, Germany. San Mateo, CA: Morgan Kaufmann, pp. 370–373. http://ijcai.org/search.php Martins, J. P., & Shapiro, S. C. (1988). A model for belief revision. Artificial Intelligence, 35, 25–79. Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50, 370–396. Mateas, M. (2001). A preliminary poetics for interactive drama and games. Digital Creativity, 12(3), 140–152. Also in Proceedings of SIGGRAPH 2001: Art Gallery, art and culture papers, New York: Association for Computing Machinery, pp. 51–58. Mateas, M. (2004). A preliminary poetics for interactive drama and games [a longer version]. In N. Wardrip-Fruin & P. Harrigan (Eds.), First person: New media as story, performance, and game, Cambridge, MA: MIT Press. Mateas, M. (2005). Beyond story graphs: Story management in game worlds. Short communication at the Session on Story generators: Approaches for the generation of literary artefacts. At the ACH/ALLC 2005 Conference, of the Association for Computing and the Humanities and the Association for Linguistic and Literary Computing. Mateas, M., & Sengers, P. (Eds.). 2003. Narrative intelligence. Amsterdam: Benjamins. Mateas, M., Domike, S., & Vanouse, P. (1999). Terminal Time: An ideologically biased history machine. In a special issue on Creativity in the Arts and Sciences of the AISB Quarterly, 102, 36–43. Mateas, M., & Stern, A. (2003). Integrating plot, character and natural language processing in the interactive drama Façade. At the First International Conference on Technologies for Interactive Digital Storytelling and Entertainment (= TIDSE ’03), Darmstadt, Germany, March 2003. Mateas, M., & Stern, A. (2005). Build it to understand it: Ludology meets narratology in game design space. 
In Proceedings of the Digital Interactive Games Research Association conference (DiGRA 2005), Vancouver, BC, Canada, June 2005; included in the Selected Papers volume. Matthijssen, L. J. (1999). Interfacing between lawyers and computers: An architecture for knowledge based interfaces to legal databases. Dordrecht, The Netherlands: Kluwer Law International. Maxion, R. A., & Townsend, T. N. (2002). Masquerade detection using truncated command lines. In Proceedings of the International Conference on Dependable Systems and Networks (DSN-02), Washington, DC, June 2002. Los Alamitos, CA: IEEE Computer Society Press, pp. 219–228.

Mazzoni, G. A. L., Loftus, E. F., & Kirsch, I. (2001). Changing beliefs about implausible autobiographical events: A little plausibility goes a long way. Journal of Experimental Psychology: Applied, 7, 51–59. Posted at: http://faculty.washington.edu/eloftus/Articles/mazzloft.htm McAllister, H. A., & Bregman, N. J. (1989). Juror underutilization of eyewitness nonidentifications: A test of the disconfirmed expectancy explanation. Journal of Applied Social Psychology, 19, 20–29. McBurney, P., & Parsons, S. (2001). Intelligent systems to support deliberative democracy in environmental regulation. In D. M. Peterson, J. A. Barnden, & E. Nissan (Eds.), Artificial Intelligence and Law, special issue, Information and Communications Technology Law, 10(1), 79–89. McBurney, P., & Prakken, H. (2004). Argumentation in dialogues. In J. Fox (Ed.), Theoretical framework for argumentation (pp. 57–84). ASPIC Consortium. McCabe, S. (1988). Is jury research dead? In M. Findlay & P. Duff (Eds.), The Jury under attack (pp. 27–39). London: Butterworths. McCallum, A., Corrada-Emmanuel, A., & Wang, X. (2005). The author-recipient-topic model for topic and role discovery in social networks, with application to Enron and academic email. In Proceedings of the SIAM international conference on data mining, SIAM workshop on link analysis, counterterrorism and security. Philadelphia, PA: SIAM. McCann, D., Culshaw, M. G., & Fenning, P. J. (1997). Setting the standard for geophysical surveys in site investigations. In D. M. McCann, M. Eddleston, P. J. Fenning, & G. M. Reeves (Eds.), Modern geophysics in engineering geology (pp. 3–34). (Engineering Geology Special Publications, 12.) London: Geological Society. McClelland, J. L., & Rumelhart, D. E. (1989). Explorations in parallel distributed processing. Cambridge, MA: The MIT Press. McConville, M., Sanders, A., & Leng, R. (1991). The case for the prosecution. London: Routledge. McCormick, E. J. (1964). Human factors engineering. New York: McGraw-Hill. McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59(1), 1–16. McCornack, S. A., Levine, T. R., Solowczuk, K. A., & Torres, H. I. (1992). When the alteration of information is viewed as deception: An empirical test of information manipulation theory. Communication Monographs, 59(1), 17–29. McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics,66 5(4), 115–133. doi://10.1007/BF02478259 McCulloch, M., Jezierski, T., Broffman, M., Hubbard, A., Turner, K., & Janecki, T. (2006). Diagnostic accuracy of canine scent detection in early- and late-stage lung and breast cancers. Integrative Cancer Therapies, 5, 1–10. McGrath, C., Blythe, J., & Krackhardt, D. (1997). The effect of spatial arrangement on judgments and errors in interpreting graphs. Social Networks, 19(3), 223–242. McGuire, P. G. (2000). The New York Police Department COMPSTAT process. In V. Goldsmith, P. G. McGuire, J. H. Mollenkopf, & T. A. Ross (Eds.), Analyzing crime patterns: Frontiers of practice (pp. 11–22). Thousand Oaks, CA: Sage. McHugh, J. (2001). Intrusion and intrusion detection. International Journal of Information Security, 1(1), 14–35. Berlin: Springer. McLeod, J. A. (2011). Daughter of the empire state: The life of Judge Jane Bolin. Champaign, IL: University of Illinois Press. McLeod, M. (1991). Death on the doorstep (As the police searched for clues, they began to ask, Is some killer playing a game with us?). Reader’s Digest (U.S.
edition), September 1991, pp. 135–140. Condensed from Florida Magazine (the Sunday supplement of Orlando Sentinel) of 12 May 1991.

66 The current name of the journal is Bulletin of Mathematical Biology.

McMenamin, G. R. (Ed.). (1993). Forensic stylistics. Amsterdam: Elsevier. Also: special issue, Forensic Science International, 58(1/2), 1993. McNally, R. J. (2003). Remembering Trauma. Cambridge, MA: Harvard University Press. McNeal, G. S. (2007). Unfortunate legacies: Hearsay, ex parte affidavits and anonymous witnesses at the IHT [i.e., Iraqi High Tribunal]. In G. Robertson (Ed.), Fairness and evidence in war crimes trials. Special issue of International Commentary on Evidence, 4(1). The Berkeley Electronic Press (article accessible on the Web at this address: http://www.bepress.com/ice/ vol4/iss1/art5) McQuiston-Surret, D., Topp, L. D. & Malpass, R. S. (2006). Use of facial composite systems in US law enforcement agencies, Psychology, Crime & Law, 12, 505–517. Me, G. (2008). Investigation strategy for the small pedophiles world. In M. Quigley (Ed.), Encyclopedia of information ethics and security (pp. 418–425). Hershey, PA: IGI Global (formerly Idea Group). Meade, M. L., & Roediger, H. L., III. (2002). Explorations in the social contagion of memory. Memory & Cognition, 30, 995–1009. Meester, R. W. J., & Sjerps, M. (2004). Why the effect of prior odds should accompany the likeli- hood ratio when reporting DNA evidence (with discussion between A. P. Dawid, D. J. Balding, J. S. Buckleton and C. M. Triggs). Law, Probability and Risk, 3, 51–86. Mégret, M. (1956). La guerre psychologique. (Collection “Que sais-je?”, 713.) Paris: Presses Universitaires de France (PUF). Meehan, J. (1976). The metanovel: Writing stories by computer. Ph.D. Dissertation, Research Report #74 (now YALE/DCS/tr074). New Haven, CT: Computer Science Department, Yale University. Meehan, J. R. (1977). TALE-SPIN, an interactive program that writes stories. In Proceedings of the fifth International Joint Conference on Artificial Intelligence (IJCAI’77), Cambridge, MA, August 1977. San Mateo, CA: Morgan Kaufmann, Vol. 1, pp. 91–98. http://ijcai.org/search.php Meehan, J. (1981a). TALE-SPIN. Chapter 9 In R. C. Schank & C. K. Riesbeck (Eds.), Inside com- puter understanding: Five programs plus miniatures (pp. 197–226). Hillsdale, NJ: Lawrence Erlbaum Associates; cf. J. Meehan’s “Micro TALE-SPIN”, ch. 10, ibid., 227–258. (There is a consolidated bibliography at the end of the volume: pp. 373–377.) Meehan, J. (1981b). Micro TALE-SPIN. Chapter 10 In R. C. Schank & C. K. Riesbeck (Eds.), Inside computer understanding: Five programs plus miniatures (pp. 227–258). Hillsdale, NJ: Lawrence Erlbaum Associates. (There is a consolidated bibliography at the end of the volume: pp. 373–377.) Meikle, T., & Yearwood, J. (2000). A framework for designing a decision support system to support discretion. At Intelligent Decision Support for Legal Practice (IDS 2000).InProceedings of the International ICSC Congress “Intelligent Systems & Applications” (ISA 2000), Wollongong, NSW, Australia, December 2000. Wetaskiwin, AB, Canada: ICSC Academic Press, Vol. 1, pp. 101–108. Meissner, C., & Kassin, S. (2002). He’s guilty: Investigator bias in judgements of truth and deception. Law and Human Behavior, 26, 469–480. Meissner, C. A., & Brigham, J. C. (2001). Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review. Psychology, Public Policy, and Law, 7(1), 3–35. doi://10.1037/1076-8971.7.1.3 Meister, J. C. (2003). Computing action. Berlin: de Gruyter. Meldman, J. A. (1975). A preliminary study in computer-aided legal analysis. Dissertation. Technical Report MAC-TR-157. Cambridge, MA: Massachusetts Institute of Technology. 
Mellett, J. S. (1996). GPR in forensic and archaeological work: Hits and misses. In Symposium on the Application of Geophysics to Environmental Engineering Problems (SAGEEP), 1991. Environmental & Engineering Geophysical Society Co., USA, pp. 487–491. Melnik, M., & Alm, J. (2002). Does a seller’s ecommerce reputation matter? Evidence from eBay auctions. Journal of Industrial Economics, 50, 337–349. Menard, V. S. (1993). Admission of computer generated visual evidence: Should there be clear standards? Software Law Journal, 6, 325.

Memon, A. (2008). A field evaluation of the VIPER system in Scotland. http://www.sipr.ac.uk/downloads/Memon_%20VIPER%20Field%20study.pdf Memon, A., Bartlett, J. C., Rose, R., & Gray, C. (2003). The aging eyewitness: The effects of face-age and delay upon younger and older observers. Journal of Gerontology, 58, 338–345. Memon, A., & Bull, R. (Eds.). (1999). Handbook of the psychology of interviewing. Chichester: Wiley. Published in paperback, 2001. Memon, A., & Gabbert, F. (2003a). Unravelling the effects of a sequential lineup. Applied Cognitive Psychology, 6, 703–714. Memon, A., & Gabbert, F. (2003b). Improving the identification accuracy of senior witnesses: Do pre-lineup questions and sequential testing help? Journal of Applied Psychology, 88(2), 341–347. Memon, A., Hope, L., Bartlett, J., & Bull, R. (2002). Eyewitness recognition errors: The effects of mugshot viewing and choosing in young and old adults. Memory and Cognition, 30, 1219–1227. Memon, A., Hope, L., & Bull, R. H. C. (2003). Exposure duration: Effects on eyewitness accuracy and confidence. British Journal of Psychology, 94, 339–354. Memon, A., & Wright, D. (1999). The search for John Doe 2: Eyewitness testimony and the Oklahoma bombing. The Psychologist, 12, 292–295. Memon, A., Vrij, A., & Bull, R. (1998). Psychology and law: Truthfulness, accuracy and credibility. London: McGraw-Hill. Second edition: Psychology and law. Truthfulness, accuracy and credibility of victims, witnesses and suspects. Chichester: Wiley, 2003. Mena, J. (2003). Investigative data mining for security and criminal detection. Amsterdam & Boston (Newton, MA): Butterworth-Heinemann (of Elsevier). Mendelsohn, S. (1891). The criminal jurisprudence of the Jews. Baltimore, MD: M. Curlander; 2nd edn., New York: Sepher-Hermon Press, 1968. Merkl, D., & Schweighofer, E. (1997). The exploration of legal text corpora with hierarchical neural networks: A guided tour in public international law. In Proceedings of the sixth International Conference on Artificial Intelligence and Law (ICAIL’97), Melbourne, Australia. New York: ACM Press, pp. 98–105. Merkl, D., Schweighofer, E., & Winiwarter, W. (1999). Exploratory analysis of concept and document spaces with connectionist networks. Artificial Intelligence and Law, 7(2/3), 185–209. Merlino, A., Morey, D., & Maybury, M. T. (1997). Broadcast news navigation using story segments. In Proceedings of ACM Multimedia ’97, pp. 381–391. Merricks, T. (1995). Warrant entails truth. Philosophy and Phenomenological Research, 55(4), 841–855. Merrill, T. W., & Smith, H. E. (2000). Optimal standardization in the law of property: The numerus clausus principle. Yale Law Journal, 110, 1–70. Merton, R. K. (1948). The self-fulfilling prophecy. The Antioch Review, 8, 193–210. Mertz, E., & Yovel, J. (2005). Courtroom narrative. In D. Herman, M. Jahn, & M.-L. Ryan (Eds.), Routledge encyclopedia of narrative theory (pp. 86–88). London: Routledge, 2005 (hardcover), 2008 (paperback). Meudell, P. R., Hitch, G. J., & Boyle, M. M. (1995). Collaboration in recall: Do pairs of people cross-cue each other to produce new memories? The Quarterly Journal of Experimental Psychology, 48a, 141–152. Meyer, R. K., & Friedman, H. (1992). Whither relevant arithmetic? The Journal of Symbolic Logic, 57, 824–831. Michie, D., Spiegelhalter, D. J., & Taylor, C. C. (Eds.). (1994). Machine learning, neural and statistical classification. West Sussex, England: Ellis Horwood. Michon, J. A., & Pakes, F. J. (1995). Judicial decision-making: A theoretical perspective.
Chapter 6.2 In R. Bull & D. Carson (Eds.), Handbook of psychology in legal contexts (pp. 509–525). Chichester: Wiley.

Miikkulainen, R. (1993). Subsymbolic natural language processing. Cambridge, MA: MIT Press. The book is based on a dissertation posted at: ftp://ftp.cs.utexas.edu/pub/neural-nets/papers/ miikkulainen.diss.tar Miikkulainen, R., & Dyer, M. G. (1991). Natural language processing with modular PDP networks and distributed lexicon. Cognitive Science, 15, 343–400. Miller, F. (1969). Prosecution: The decision to charge a suspect with a crime. Boston: Little, Brown. Miller, L. S. (1984). Bias among forensic document examiners: A need for procedural changes. Journal of Police Science and Administration, 12(4), 407–411. Miller, L. S. (1987). Procedural bias in forensic science examinations of human hair. Law and Human Behavior, 11(2), 157–163. Miller, M. T. (2003). Crime scene investigation. Chapter 8 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (1st ed.). Boca Raton, FL: CRC Press. Also as Chapter 10 in 2nd edition, 2005. Also published in the 3rd edition, 2009. Miller, R. E. (1973). A comparison of some theoretical models of parallel computation. IEEE Transactions on Computers, C 22, 710–717. Milne, R., & Bull, R. (1999). Investigative interviewing: Psychology and practice. Chichester: Wiley. Minh, T. T. H. (2007). Approaches to XML schema matching. Ph.D. Thesis. Norwich, England: University of East Anglia. Minkov, E., & Cohen, W. W. (2006). An email and meeting assistant using graph walks. In Third Conference on Email and Anti-Spam CEAS 2006, Palo Alto, CA. New York: ACM, pp. 14–20. Minsky, M. (1975). A framework for representing knowledge. In P. Winston (Ed.), The psychology of computer vision. New York: McGraw-Hill. Minsky, M. (2002). The emotion machine (Part 6). New York: Pantheon. http://web.media.mit. edu/~minsky/E6/eb6.html Minsky, M., & Papert, S. (1969). Perceptrons: An introduction to computational geometry. Cambridge, MA: MIT Press. Mishler, E. G. (1995). Models of narrative analysis: A typology. Journal of Narrative and Life History, 5(2), 87–123. Mahwa, NJ: Lawrence Erlbaum Associates. Misra, S., Abraham, K. I., Obaidat, M. S., & Krishna, P. V. (2009). LAID: A learning automata- based scheme for intrusion detection in wireless sensor networks. Security in Wireless Sensor Networks,67 2(2), 105–115. Mitchell, H. B. (2010). Markov random fields. Chapter 17 In H. B. Mitchell, Image fusion: Theories, techniques and applications (pp. 205–209). Berlin: Springer. doi://10.1007/978-3- 642-11216-4_17 Mitchell, T. M. (1997). Machine learning. New York: McGraw-Hill. Mitra, S., Banka, H., & Pedrycz, W. (2006). Rough-fuzzy collaborative clustering. IEEE Transactions on Systems, Man & Cybernetics, B, 36, 795–805. MITRE. (2001). Stopping traffic: Anti drug network (ADNET). MITRE digest archives. http:// www.mitre.org/news/digest/archives/2001/adnet.html Mitschick, A., & Meissner, K. (2008). Metadata generation and consolidation within an ontology- based document management system. International Journal of Metadata, Semantics and Ontologies, 3(4), 249–259. Mittag, D. (2004). Evidentialism. Internet Encyclopedia of Philosophy. www.iep.utm.edu Mizanur Rahman, S. M., Nasser, N., Inomata, A., Okamoto, T., Mambo, M., & Okamoto, E. (2008). Anonymous authentication and secure communication protocol for wireless mobile ad hoc networks. Security and Communication Networks, 1(2), 179–189. Moens, M.-F. (2000). Automatic indexing and abstracting of document texts. Dordrecht, The Netherlands: Kluwer.

67 The journal Security in Wireless Sensor Networks is published by Wiley.

Moens, M.-F. (2001). Legal text retrieval. Artificial Intelligence and Law, 9(1), 29–57. Moens, M.-F., Uyttendaele, C., & Dumortier, J. (1997). Abstracting of legal cases: The SALOMON experience. In Proceedings of the sixth international conference on artificial intelligence and law. Melbourne, Australia. New York: ACM Press, pp. 114–122. Moens, M.-F., Uyttendaele, C., & Dumortier, J. (1999). Abstracting of legal cases: The potential of clustering based on the selection of representative objects. Journal of the American Society for Information Science, 50(2), 151–161. Moenssens, A. (1999). Is fingerprint identification a “science”? Forensic-Evidence.com. http:// www.forensicevidence.com/site/ID00042.html Moenssens, A. (2003). Fingerprint identification: A valid reliable “forensic science”? Criminal Justice, 18, 31–37. Moh, S.-K. (1950). The deduction theorems and two new logical systems. Methodos, 2, 56–75. Mokherjee, D., & Sopher, B. (1994). Learning behavior in an experimental matching pennies game. Games and Economic Behavior, 7(1), 62–91. Orlando, FL: Academic. Molina, D. K. (2009). Handbook of forensic toxicology for medical examiners. Boca Raton, FL: CRC Press. Mommers, L. (2003). Application of a knowledge-based ontology of the legal domain in col- laborative workspaces. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003 (pp. 70–76). New York: ACM Press. Monahan, J., & Loftus, E. F. (1982). The psychology of law. Annual Review of Psychology, 33, 441–475. Monmonier, M. S. (1996). How to lie with maps. Chicago: University of Chicago Press. Moore, D. S. (2007). Recent advances in trace explosives detection instrumentation. Sense Imaging, 8, 9–38. doi://10.1007/s11220-007-0029-8 Moorman, K. (1997). A functional theory of creative reading: Process, knowledge, and evaluation. Doctoral dissertation. Atlanta, GA: College of Computing, Georgia Institute of Technology. Moorman, K., & Ram, A. (1994). Integrating creativity and reading: A functional approach. At the Sixteenth annual conference of the cognitive science society. Moreno, J. L. (1953). Who shall survive: Foundations of sociometry, group psychotherapy, and sociodrama. Boston, MA: Beacon House. (Originally published in 1934 and later in 1953 and 1978) Morgan, J. E. (2008). Noncredible competence: How to handle “newbies”, “wannabes”, and forensic “experts” who know better or should know better. In R. L. Heilbronner (Ed.), Neuropsychology in the courtroom: Expert analysis of reports and testimony.NewYork: Guilford Press. Morris, R. N. (2000). Forensic handwriting identification: Fundamental concepts and principles. London & San Diego, CA: Academic. Morrison, R. D. (2002). Subsurface models used in environmental forensics. Chapter 8 In B. L. Murphy & R. D. Morrison (Eds.), Introduction to environmental forensics (pp. 311–367). San Diego, CA & London: Academic pp. 311–367. Mortera, J., & Dawid, A. P. (2006). Probability and evidence. Research Report 264, March. Department of Statistical Science, University College London. Mortera, J., Dawid, A. P., & Lauritzen, S. L. (2003). Probabilistic expert systems for DNA mixture profiling. Theoretical Population Biology, 63, 191–205. Morton, A. (2003). A guide through the theory of knowledge (3rd ed.). Oxford: Blackwell. Mørup, M. (2011). Applications of tensor (multiway array) factorizations and decompositions in data mining. 
Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(1), 24–40. doi://10.1002/widm.1 Morzy, M. (2008). New algorithms for mining the reputation of participants of online auctions. Algorithmica, 52, 95–112. Moulin, B. (1992). A conceptual graph approach for representing temporal information in discourse. Knowledge-Based Systems, 5(3), 183–192.

Moulin, B., & Rousseau, D. (1994). A multi-agent approach for modelling conversations. In Proceedings of the international avignon conference AI 94, Natural language processing sub-conference, Paris, France, June 1994, pp. 35–50. Mueller, E. T. (1987). Daydreaming and computation: A computer model of everyday creativity, learning, and emotions in the human stream of thought. Doctoral dissertation. Technical Report CSD-870017, UCLA-AI-87-8 Computer Science Department, University of California, Los Angeles. On microfilm, Ann Arbor, MI: UMI. Mueller, E. T. (1990). Daydreaming in humans and machines: A computer model of the stream of thought. Norwood, NJ: Ablex. Mueller, E. T. (1998). Natural language processing with ThoughtTreasure. New York: Signiform. Mueller, E. T. (1999a). A database and lexicon of scripts for ThoughtTreasure. CogPrints cog00000555. Mueller, E. T. (1999b). Prospects for in-depth story understanding by computer. CogPrints cog00000554. http://web.media.mit.edu/~mueller/papers/storyund.html Mueller, E. T. (2002). Story understanding. In Nature encyclopedia of cognitive science. London: Nature Publishing Group. Mueller, E. T. (2003). Story understanding through multi-representation model construction. In G. Hirst & S. Nirenburg (Eds.), Text meaning: Proceedings of the HLT-NAACL 2003 workshop (pp. 46–53). East Stroudsburg, PA: Association for Computational Linguistics. Mueller, E. T. (2004). Understanding script-based stories using commonsense reasoning. Cognitive Systems Research, 5(4), 307–340. Mueller, E. T. (2004). Event calculus reasoning through satisfiability. Journal of Logic and Computation, 14(5), 703–730. Mueller, E. T. (2006). Commonsense reasoning. San Francisco: Morgan Kaufmann. Mueller, E. T. (2007). Modelling space and time in narratives about restaurants. Literary and Linguistic Computing, 22(1), 67–84. Mueller, E. T., & Dyer, M. G. (1985a). Towards a computational theory of human daydreaming. In Proceedings of the seventh annual conference of the cognitive science society. Hillsdale, NJ: Lawrence Erlbaum, pp. 120–129. Mueller, E. T., & Dyer, M. G. (1985b). Daydreaming in humans and computers. In Proceedings of the ninth International Joint Conference on Artificial Intelligence (IJCAI’85), Los Angeles, CA, 18–24 August 1985. San Mateo, CA: Morgan Kaufmann. http://ijcai.org/search.php Mukherjee, I., & Schapire, R. E. (2011). A theory of multiclass boosting. Advances in Neural Information Processing Systems, 23. http://www.cs.princeton.edu/~schapire/papers/multiboost. pdf Munn, K., & Smith, B. (Eds.). (2008). Applied ontology: An introduction. Lancaster, (Metaphysical Research, 9.) Frankfurt/M, Germany: Ontos Verlag, & England: Gazelle. Murbach, R., & Nonn, E. (1991). Sentencing by artificial intelligence tools: Some possibilities and limitations. Paper presented at The Joint Meeting of the Law and Society Association and the Research Committee of the Sociology of Law of the International Sociological Association, Amsterdam, 1991. Murphy, B. L., & Morrison, R. D. (Eds.). (2002). Introduction to environmental forensics.San Diego, CA & London: Academic. Murray, R. C., & Tedrow, J. C. F. (1975). Forensic geology: Earth sciences and criminal investigations. New Brunswick, NJ: Rutgers University Press. Musatti, C. L. (1931). Elementi di psicologia della testimonianza (1st ed.). Padova, Italy: CEDAM, 1931. Second edition, with comments added by the author, Padova: Liviana Editrice, 1989. Na, H.-J., Yoon, D.-H., Kim, Ch.-S., & Hwang, H. S. (2005). 
Vulnerability evaluation tools of matching algorithm and integrity verification in fingerprint recognition. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 9th international conference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part IV (pp. 993–999). (Lecture Notes in Computer Science, Vol. 3684). Berlin: Springer.

Naess, E., Frincke, D. A., McKinnon, A. D., &. Bakken, D. E. (2005). Configurable middleware- level intrusion detection for embedded systems. At the Second International Workshop on Security in Distributed Computing Systems (SDCS),In:Proceedings of the 25th International Conference on Distributed Computing Systems Workshops (ICDCS 2005 Workshops), 6–10 June 2005, Columbus, OH, USA. IEEE Computer Society 2005, pp. 144–151. Nagel, I. H., & Hagan, J. (1983). Gender and crime: Offense patterns and criminal court sanctions. In M. Tonry & N. Morris (Eds.), Crime and justice: An annual review of research (Vol. 4, pp. 91–144). Chicago, IL: University of Chicago Press. Nagel, S. (1962). Judicial backgrounds and criminal cases. Journal of Criminal Law, Criminology and Police Science, 53, 333–339. Nagel, S. (1964). Testing empirical generalisations. In G. Schubert (Ed.), Judicial behaviour: A reader in theory and research (pp. 518–529). Chicago, IL: Rand McNally & Company. Nakhimovsky, A., & Myers, T. (2002). XML programming: Web applications and web services with JSP and ASP. (The Expert’s Voice Series.) Berkeley, CA: Apress. Nambiar, P., Bridges, T. E., & Brown, K. A. (1995). Quantitative forensic evaluation of bite marks with the aid of a shape analysis computer program. I: The development of SCIP and the similarity index. Journal of Forensic Odontostomatology, 13(2), 18–25. Nance, D. A., & Morris, S. B. (2002). An empirical assessment of presentation formats for trace evidence with a relatively large and quantifiable random match probability. Jurimetrics Journal, 42, 403–445. Nance, D. A., & Morris, S. B. (2005). Juror understanding of DNA evidence: An empirical assessmeng of presentation formats for trace evidence with a relatively small random-match probability. Journal of Legal Studies, 34, 395–443. Nanto, H., Sokooshi, H., & Kawai, T. (1993). Aluminum-doped ZnO thin film gas sensor capable of detecting freshness of sea foods. Sensors & Actuators, 14, 715–717. Napier, M. R., & Baker, K. P. (2005). Criminal personality profiling. Chapter 31 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. National Criminal Intelligence Service. (2000). The national criminal intelligence model. London: NCIS. National Research Council. (1996). The evaluation of forensic DNA evidence. Washington, DC: National Academy Press. Nebel, B. (1994). Base revision operations and schemes: semantics, representation, and complex- ity. In A. G. Cohn (Ed.), Proceedings of the 11th European conference on artificial intelligence. New York: Wiley. Neill, A. (1991). Fear, fiction and make-believe. The Journal of Aesthetics and Art Criticism, 49, 47–56. Neill, A. (1993). Fiction and the emotions. American Philosophical Quarterly, 30, 1–13. Neill, A. (1995). Emotional responses to fiction: Reply to Radford. The Journal of Aesthetics and Art Criticism, 53(1), 75–78. Neimark, J. (1996). The diva of disclosure, memory researcher Elizabeth Loftus. Psychology Today, 29(1). Article downloadable from: http://faculty.washington.edu/eloftus/Articles/ psytoday.htm Nenov, V. I., & Dyer, M. G. (1993). Perceptually grounded language learning: Part 1: A neural network architecture for robust sequential association. Connection Science, 5(2), 115–138. Nenov, V. I., & Dyer, M. G. (1994). Perceptually grounded language learning: Part 2: DETE: A neural/procedural model. Connection Science, 6(1), 3–41. 
Neville, J., Adler, M., & Jensen, D. (2003). Clustering relational data using attribute and link information. In Proceedings of the Text Mining and Link Analysis Workshop, 18th International Joint Conference on Artificial Intelligence. Neville, J., & Jensen, D. (2003). Collective classification with relational dependency networks. In S. Džeroski, L. De Raedt, & S. Wrobel (Eds.), Proceedings of the second Multi-Relational Data Mining workshop (MRDM-2003), Washington, DC, 27 August 2003, at the Ninth ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’03), pp. 77–91. http://www.cs.purdue.edu/homes/neville/papers/neville-jensen-mrdm2003.pdf Neville, J., Jensen, D., Friedland, L., & Hay, M. (2003). Learning relational probability trees. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 625–630. http://www.cs.purdue.edu/homes/neville/papers/neville-et-al- kdd2003.pdf Neville, J., Rattigan, M., & Jensen, D. (2003). Statistical relational learning: Four claims and a survey. In Proceedings of the Workshop on Learning Statistical Models from Relational Data, 18th International Joint Conference on Artificial Intelligence. Neville, J., Simsek, O., Jensen, D., Komoroske, J., Palmer, K., & Goldberg, H. (2005). Using relational knowledge discovery to prevent securities fraud. In Proceedings of the 11th ACM SIGKDD international conference on Knowledge Discovery and Data Mining (KDD’05), Chicago, IL, 21–24 August 2005. New York: ACM Press, pp. 449–458. http://www.cs.purdue. edu/homes/neville/papers/neville-et-al-kdd2005.pdf Newburn, T., Williamson, T., & Wright, A. (Eds.). (2007). Handbook of criminal investigation. Cullompton: Willan Publishing. Newell, A. (1962). Some problems of the basic organisation in problem solving programs. In M. C. Yovits, G. T. Jacobi & G. D. Goldstein (Eds.), Proceedings of the second conference on self-organizing systems (pp. 393–423). Washington, DC: Spartan Books. Newman, M. E. (2003). The structure and function of complex networks. SIAM Review, 45(2), 167–256. Newman, M. E. (2010). Networks: An introduction. Oxford: Oxford University Press. Ng, H. T., Teo, L. H., & Kwan, J. L. P. (2000). A machine learning approach to answering ques- tions for reading comprehension tests. In Proceedings of the 2000 joint SIGDAT conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC- 2000), pp. 124–132. http://www.comp.nus.edu.sg/~nght/pubs/emnlp_vlc00.pdf.gz Ng, T.-T., & Chang, S.-F. (2004). A model for image splicing. At the IEEE International Conference on Image Processing (ICIP), Singapore, October 2004. Ng-Thow-Hing, V. (1994). A biomechanical musculotendon model for animating articulated objects. MSc Thesis (supervised by E. Fiume). Toronto, Canada: University of Toronto, Department of Computer Science. Ng-Thow-Hing, V. (2001). Anatomically based models for physical and geometric reconstruction of animals. PhD Thesis (supervised by E. Fiume). Toronto, Canada: University of Toronto, Department of Computer Science. Nicolle, D. (1990). The Mongol Warlords: Genghis Khan, Kublai Khan, Hülegü, Tamerlane,with plates by R. Hook. Poole, Dorset, England: Firebird Books. Nicoloff, F. (1989). Threats and illocutions. Journal of Pragmatics, 13(4), 501–522. Nicolson, D. (1994). Truth, reason and justice: Epistemology and politics in evidence discourse. The Modern Law Review, 57(5), 726–744. Nielsen, L., & Nespor, S. (1993). Genetic test, screening, and use of genetic data by public author- ities: In criminal justice, social security, and alien and foreigners acts. Copenhagen: Danish Centre for Human Rights. Niesz, A. J., & Holland, N. (1984). Interactive fiction. Critical Inquiry, 11, 110–129. Nigro, H. O., González Císaro, S. E., & Xodo, D. H. (Eds.). (2008). Data mining with ontologies: implementations, findings and frameworks. Hershey, PA: Information Science Reference. Nijboer, J. F. (2000). Challenges for the law of evidence. In C. M. Breur, M. M. Kommer, J. F. 
Nijboer, & J. M. Reijntjes (Eds.), New trends in criminal investigation and evidence, Vol. 2 = Proceedings of the second world conference on new trends in criminal investigation and evidence, Amsterdam, 10–15 December 1999. Antwerp, Belgium: Intersentia, 2000, pp. 1–9. Nijboer, J. F. (2008). Current issues in evidence and procedure: Comparative comments from a Continental perspective. International Commentary on Evidence,68 6(2), Article 7. http://www.bepress.com/ice/vol6/iss2/art7

68 The e-journal International Commentary on Evidence is published in Berkeley, California.

Nijboer, H., & Sennef, A. (1999). Justification. Chapter 2 In M. Malsch & J. F. Nijboer (Eds.), Complex cases: Perspectives on the Netherlands criminal justice system (pp. 11–26). Amsterdam: THELA THESIS. Nijholt, A. (2002). Embodied agents: A new impetus for humor research. In O. Stock, C. Strapparava, & A. Nijholt (Eds.), The April Fools’ Day workshop on computational humour, April 2002 (Proceedings of the 20th Twente Workshop on Language Technology, TWLT 20) (pp. 101–112). Enschede, The Netherlands: University of Twente. Nirenburg, S., & Raskin, V. (1987). The subworld concept lexicon and the lexicon management system. Computational Linguistics, 13(3/4), 276–289. Nirenburg, S., & Raskin, V. (1996). Ten choices for lexical semantics. Memoranda in Computer and Cognitive Science, MCCS-96-304. Las Cruces, NM: New Mexico State University, Computing Research Laboratory. Nirenburg, S., & Raskin, V. (2004). Principles of ontological semantics. Cambridge, MA: MIT Press. Nissan, E. (1982). Proprietà formali nel progetto logico-concettuale di basi di dati.(Italian:Formal properties in the logical and conceptual design of databases.) 2 vols., 400+200 pages. Tesi di Laurea in Ingegneria Elettronica, Dipartimento di Elettronica. Milan: Politecnico di Milano (= Technical University of Milan). Awarded the Burroughs Italiana Prize. Nissan, E. (1983). The info-spatial derivative: A new formal tool for database design. In Proceedings of the AICA’83 conference, Naples, Vol. 2, pp. 177–182. Nissan, E. (1986). The frame-definition language for customizing the RAFFAELLO structure- editor in host expert systems. In Z. Ras´ & M. Zemankova (Eds.), Proceedings of the first International Symposium on Methodologies for Intelligent Systems (ISMIS’86), Knoxville, TN. New York: ACM SIGART Press, pp. 8–18. Nissan, E. (1987a). Nested-relation based frames in RAFFAELLO. Representation & meta- representation structure & semantics for knowledge engineering. In H. J. Schek & M. Scholl (Eds.), International workshop on theory and applications of nested relations and complex objects, Darmstadt, Germany, 1987. Report. Rocquencourt, France: INRIA, 1987, pp. 95–99. Nissan, E. (1987b). The wining and dining project. Part II: An expert system for gastronomy and terminal food-processing. In a special issue on information technology, International Journal of Hospitality Management, 6(4), 207–215. Nissan, E. (1987c). Data analysis using a geometrical representation of predicate calculus. Information Sciences, 41(3), 187–258. Nissan, E. (1987d). ONOMATURGE: An expert system for word-formation and morpho-semantic clarity evaluation (in two parts). In H. Czap & C. Galinski (Eds.), Terminology and knowl- edge engineering [Proceedings of the First International Conference], Trier, West Germany (pp. 167–176 and 177–189). Frankfurt/M, West Germany: Indeks Verlag. Nissan, E. (1988). ONOMATURGE: An expert system in word-formation. Ph.D. Dissertation (Computer Science). 3 vols., ca. 600 pages (in English). Beer-Sheva, Israel: Ben-Gurion University of the Negev. Project awarded the 1988 IPA Award in Computer Science. Nissan, E. (1991). Artificial intelligence as a dialectic of science and technology, and other aspects. Chapter 5 In M. Negrotti (Ed.), Understanding the artificial: On the future shape of artifi- cial intelligence (pp. 77–90). Heidelberg: Springer. Italian version: L’intelligenza artificiale come dialettica fra scienza e tecnologia. Chapter 5 In M. Negrotti (Ed.), Capire l’artificiale (pp. 119–140). 
Turin: Bollati-Boringhieri (1990); also in the 2nd edition of the Italian book, of 1993. Nissan, E. (1992). Deviation models of regulation: A knowledge-based approach. Informatica e Diritto, year 18 (= 2nd series, vol. 1), (1/2), 181–212. Nissan, E. (1995a). Meanings, expression, and prototypes. Pragmatics & Cognition, 3(2), 317–364. Nissan, E. (1995b). SEPPHORIS: An augmented hypergraph-grammar representation for events, stipulations, and legal prescriptions. Law, Computers, and Artificial Intelligence, 4(1), 33–77.

Nissan, E. (1996). From ALIBI to COLUMBUS. In J. Hulstijn & A. Nijholt (Eds.), Automatic interpretation and generation of verbal humor: Proceedings of the 12th Twente workshop on language technology, Twente (pp. 69–85). Enschede, The Netherlands: University of Twente. Nissan, E. (1997a). Notions of place: A few considerations. In A. A. Martino (Ed.), Logica delle norme (pp. 256–302). Pisa, Italy: SEU. Nissan, E. (1997b). Notions of place, II. In A. A. Martino (Ed.), Logica delle norme (pp. 303–361). Pisa, Italy: SEU. Nissan, E. (1997c). Emotion, culture, communication. Pragmatics & Cognition, 5(2), 355–369. Nissan, E. (1997d). Review of: N. Sharkey (Ed.), Connectionist natural language processing, Kluwer, Dordrecht & Intellect, Oxford, 1992. Pragmatics and Cognition, 5(2), 383–384. Nissan, E. (1998a). Advances in deontic logic (review). Computers and Artificial Intelligence, 17(4), 392–400. Nissan, E. (1998b). Review of: A. G. B. ter Meulen, Representing time in natural language: The dynamic interpretation of tense and aspect (Cambridge, MA: The MIT Press, 1997). Computers and Artificial Intelligence, 17(1), 98–100. Nissan, E. (1999). Using the CuProS metarepresentation language for defining flexible nested- relation structures for monolingual and multilingual terminological databases. [Proceedings of the EAFT] Conference on co-operation in the field of terminology in Europe, Paris, 17–19 May 1999. Paris: Union Latine, 2000, pp. 337–343. Nissan, E. (2000a). Artificial intelligence and criminal evidence: A few topics. In C. M. Breur, M. M. Kommer, J. F. Nijboer, & J. M. Reijntjes (Eds.), New trends in criminal investigation and evidence, Vol. 2 = Proceedings of the second world conference on new trends in crimi- nal investigation and evidence, Amsterdam, 10–15 December 1999 (pp. 495–521). Antwerp, Belgium: Intersentia. Nissan, E. (2000b). Computer-generated alternative coinages: An automated ranking model for their psychosemantic transparency. In Proceedings of the EAFT conference on co-operation in the field of terminology in Europe, Paris, May 17–19, 1999 (Union Latine, Paris, 2000), pp. 321–336. Nissan, E. (2000c). Registers of use, and ergolectal versus literary niches for neologizing creativity: What do the makers of technical terminology stand to learn from such contrastive analysis? In Proceedings of the EAFT conference on co-operation in the field of terminology in Europe, Paris, May 1999, pp. 227–239. Nissan, E. (2001a). The Bayesianism debate in legal scholarship. [Review article on Allen & Redmayne (1997).] Artificial Intelligence and Law, 9(2/3), 199–214. Nissan, E. (2001b). Can you measure circumstantial evidence? The background of probative for- malisms for law. [A review essay on I. Rosoni, Quae singula non prosunt collecta iuvant: la teoria della prova indiziaria nell’età medievale e moderna. Milan, Italy: Giuffrè, 1995.]. Information and Communications Technology Law, 10(2), 231–245. Nissan, E. (2001c). The Jama legal narrative. Part II: A foray into concepts of improbability. Information & Communications Technology Law, 10(1), 39–52. Part I is Geiger et al. (2001). Nissan, E. (2001d). An AI formalism for competing claims of identification: Capturing the “Smemorato di Collegno” amnesia case. Computing and Informatics, 20(6), 625–656. Nissan, E. (2001e). Review of: E. Harnon & A. Stein (Eds.), Rights of the Accused, Crime Control and Protection of Victims [special volume of the Israel Law Review, 31(1–3), 1997]. Information and Communications Technology Law, 10(2), 247–254. 
Nissan, E. (2001f). Review of: R. Bull and D. Carson (Eds.), Handbook of Psychology in Legal Contexts (Chichester, West Sussex, England: Wiley, 1995). Artificial Intelligence and Law, 9(2/3), 219–224. Nissan, E. (2001g). Modelling spatial relations in the traveller’s conditional divorce problem. In M. Koppel & E. Merzbach (Eds.), Higgaion: Studies in rabbinic logic (Vol. 5, pp. 8–21). Jerusalem: Aluma. Nissan, E. (2002a). The COLUMBUS Model (2 parts). International Journal of Computing Anticipatory Systems, 12, 105–120 and 121–136.

Nissan, E. (2002b). A formalism for misapprehended identities: Taking a leaf out of Pirandello. In O. Stock, C. Strapparava, & A. Nijholt (Eds.), The April Fools’ Day Workshop on Computational Humour, Proceedings of the Twentieth Twente Workshop on Language Technology (TWLT20), Trento, Italy, April 15–16, 2002 (pp. 113–123). Enschede, The Netherlands: University of Twente. Nissan, E. (2003a). Identification and doing without it, I: A situational classification of misap- plied personal identity, with a formalism for a case of multiple usurped identity in Marivaux. Cybernetics and Systems, 34(4/5), 317–358. Nissan, E. (2003b). Identification and doing without it, II: Visual evidence for pinpointing identity. How Alexander was found out: Purposeful action, enlisting support, assumed iden- tity, and recognition. A goal-driven formal analysis. Cybernetics and Systems, 34(4/5), 359–380. Nissan, E. (2003c). Identification and doing without it, III: Authoritative opinions, purposeful action, relabelled goods, and forensic examinations. The case of the stuffed birds: Its narrative dynamics set in formulae. Cybernetics and Systems, 34(6/7), 467–500. Nissan, E. (2003d). Identification and doing without it, IV: A formal mathematical analysis for the feveroles case, of mixup of kinds and ensuing litigation; and a formalism for the “Cardiff Giant” double hoax. Cybernetics and Systems, 34(6/7), 501–530. Nissan, E. (2003e). Facets of abductive reasoning. [Review essay on: Magnani, L. (2001). Abduction, reason, and science: Processes of discovery and explanation.NewYork: Kluwer/Plenum; Josephson, J. R., & Josephson, S. G. (Eds.). (1994). Abductive inference: Computation, philosophy, technology. Cambridge: Cambridge University Press; Bunt, H., & Black, W. (Eds.). (2000). Abduction, belief and context in dialogue: Studies in computational pragmatics. Amsterdam: Benjamins.] Cybernetics and Systems, 34(4/5), 381–399. Nissan, E. (2003f). Review of Hastie (1993). Cybernetics and Systems, 34(6/7), 551–558. Nissan, E. (2003g). Review of Murphy & Morrison (2002). Cybernetics & Systems, 34(6/7), 571–579. Nissan, E. (2003h). Review of Mani (2001). Cybernetics & Systems, 34(4/5), 559–569. Nissan, E. (2003i). Recollecting from abroad: Marco Somalvico (1941–2002). In the special sec- tion (pp. 36–81) “In memoria di Marco Somalvico”, AI∗IA Notizie: Periodico dell’Associazione Italiana per l’Intelligenza Artificiale, 16(3), 38–39. Nissan, E. (2004). Legal evidence scholarship meets artificial intelligence. [Reviewing MacCrimmon & Tillers (2002).] Applied Artificial Intelligence, 18(3/4), 367–389. Nissan, E. (2007a). Tools for representing and processing narratives. In M. Quigley (Ed.), Encyclopedia of information ethics and security (pp. 638–644). Hershey, PA: IGI Global (formerly Idea Group), 2008 (but available from June 2007). Nissan, E. (2007b). Goals, arguments, and deception: A formal representation from the AURANGZEB project. I: An Episode from the Succession War. II: A Formalism for the Capture of Murad. Journal of Intelligent & Fuzzy Systems, 18(3), 281–305 and 307–327. Nissan, E. (2007c). Three perspectives on pretexts: Seeking self-exoneration by hierarchical decomposition; making an -evoking claim; and rhetorical cover-up. In M. T. Turell, J. Cicres, & M. Spassova (Eds.), Proceedings of the second IAFL European conference on forensic linguistics/language and the law (IAFL’06), Barcelona, Spain, 14–16 September 2006 (pp. 293–303). Barcelona: Documenta Universitaria, 2008. Nissan, E. (2007d). 
Guest editorial of “Marco Somalvico Memorial Issue”. Issue edited by M. Colombetti, G. Gini & E. Nissan. Journal of Intelligent & Fuzzy Systems, 18(3), 211–215. Nissan, E. (2008a). Select topics in legal evidence and assistance by artificial intelligence techniques. Cybernetics and Systems, 39(4), 333–394. Nissan, E. (2008b). Tools from artificial intelligence for handling legal evidence. In M. Quigley (Ed.), Encyclopedia of information ethics and security (pp. 42–48). Hershey, PA: IGI Global. Nissan, E. (2008c). Argument structure models and visualization. In M. Pagani (Ed.), Encyclopedia of multimedia technology and networking (2nd ed., 3 vols.; Vol. 1, pp. 75–82). Hershey, PA: IGI Global. Nissan, E. (2008d). Argumentation and computing. In M. Quigley (Ed.), Encyclopedia of information ethics and security (pp. 30–35). Hershey, PA: IGI Global.

Nissan, E. (2008e). Argumentation with Wigmore Charts and computing. In M. Quigley (Ed.), Encyclopedia of information ethics and security (pp. 36–41). Hershey, PA: IGI Global. Nissan, E. (2008f). Tools for representing and processing narratives. In M. Quigley (Ed.), Encyclopedia of information ethics and security (pp. 638–644). Hershey, PA. Nissan, E. (2008g). Nested beliefs, goals, duties, and agents reasoning about their own or each other’s body in the TIMUR model: A formalism for the narrative of tamerlane and the three painters. Journal of Robotic and Intelligent and Robotic Systems, 52(3–4), 515–582 (68 pages) + this paper’s contents on pp. 340–341. Nissan, E. (2008h). From embodied agents or their environments reasoning about the body, to virtual models of the human body: A quick overview. Journal of Robotic and Intelligent and Robotic Systems, 52(3–4), 489–513 + contents of this paper (on p. 340). Nissan, E. (2008i). Medieval (and later) compulsory signs of group identity disclosure. Part I: The general pattern at the core of the social dynamics of the Jewish badge, set in episodic formulae and in systems & control block schemata. Journal of Sociocybernetics, 6(1), Summer 2008, 11–30. At www.unizar.es/sociocybernetics/ Nissan, E. (2008j). Epistemic formulae, argument structures, and a narrative on identity and deception: A formal representation from the AJIT subproject within AURANGZEB. Annals of Mathematics and Artificial Intelligence, 54(4), 2008 [2009], 293–362. Nissan, E. (2008k). Chance vs. causality, and a taxonomy of explanations. In M. Negrotti (Ed.), Natural chance, artificial chance, thematic volume of Yearbook of the Artificial: Nature, Culture & Technology, 5. Basel, Switzerland: Peter Lang, 2008, pp. 195–258. Also an Italian translation: Il caso in relazione alla causalità, ed una tassonomia delle eziologie. In: Lanzavecchia, G., & Negrotti, M. (Eds.). (2008). L’enigma del caso: Fatti, ipotesi e immagini (pp. 93–149). Milan: Edizioni Goliardiche. Nissan, E. (2008) [2010]. Un mistero risolto? Riflessioni in margine a Il serpente biblico di Valerio Marchi. Rassegna Mensile di Israel (Rome),74(1/2), 95–124. A different version, shorter but with an additional final section, is: La storia regionale come chiave per comprendere un para- dosso della storia d’Italia: Considerazioni su Il serpente biblico di Valerio Marchi. Stradalta: Rivista dell’Associazione Storica Gonarese, 2 (Gonars, Friuli, 2009), 73–80. Nissan, E. (2009a). Legal evidence, police intelligence, crime analysis or detection, forensic test- ing, and argumentation: An overview of computer tools or techniques. Journal of Law and Information Technology, 17(1), 1–82. Nissan, E. (2009b). Eude and Eglon, Eleazar the Maccabee, and two early modern Indian nar- ratives: Factors explaining the convergence of phylogenetically unconnected tales. Journal of Indo-Judaic Studies, 10, 81–92. Nissan, E. (2009c). Computational models of the emotions: from models of the emotions of the individual, to modelling the emerging irrational behaviour of crowds. AI & Society: Knowledge, Culture and Communication, 24(4), 403–414. Nissan, E. (2009d). Review of: A. Adamatzky, Dynamics of Crowd-Minds: Patterns of Irrationality in Emotions, Beliefs and Actions (World Scientific Series on Nonlinear Science, Series A, Vol. 54), Singapore, London, and River Edge, NJ: World Scientific, 2005. Pragmatics & Cognition, 17(2), 472–481. Nissan, E. (2009) [2010]. Medieval (and later) compulsory signs of group identity disclosure. 
Part II: The intervention of Joseph Cazès in Teheran in 1898, set in episodic formulae. Journal of Sociocybernetics, 7(1), 54–96. At www.unizar.es/sociocybernetics/ Nissan, E. (2010a). Wearing the badge of the Alliance, vs. having to wear a badge to be told apart: Joseph Cazès in Teheran in 1898. Cognitive analysis, and cultural aspects. In a special issue on “Knowledge and Cognitive Science” of the International Journal on Humanistic Ideology: Studies into the Nature and Origin of Humanistic Ideas, 3(1), 59–108. Nissan, E. (2010b). Revisiting Olender’s The Languages of Paradise, placed in a broader context. Quaderni di Studi Indo-Mediterranei, 3. Alessandria, Piedmont, Italy: Edizioni dell’Orso, pp. 330–360.

Nissan, E. (2010c). Multilingual lexis, semantics, and onomasiology. Terminological database modelling, by using the CuProS metarepresentation language: An XML-compatible XML- precursor enabling flexible nested-relation structures. In N. Dershowitz & E. Nissan (Eds.), Language, culture, computation: Essays in honour of Yaacov Choueka.Vol.2:Tools for text and language, and the cultural dimension (in press). Berlin: Springer. Nissan, E. (2010d). Narratives, formalism, computational tools, and nonlinearity. In N. Dershowitz & E. Nissan (Eds.), Language, culture, computation: Essays in Honour of Yaacov Choueka (2 vols.), Vol. 1: Theory, techniques, and applications to e-science, law, narratives, information retrieval, and the cultural heritage (in press). Berlin: Springer. Nissan, E. (2010e). Ethnocultural barriers medicalized: A critique of Jacobsen. Journal of Indo- Judaic Studies, 11, 75–119. Nissan, E. (2010) [2011]. Ancient Jewish ideas about the Ocean and about how the Mediterranean Sea originated. Part III (§§ 12–15) in: E. Nissan, Going west vs. going east: Ancient Greek, Roman, Carthaginian, Mauretanian, and Celtic conceptions about or involvement with the Ocean, what early rabbinic texts say about the Ocean and the formation of the Mediterranean, and beliefs about reaching the Antipodes. (Review article.) MHNH [μηνη]: revista interna- cional de investigación sobre magia y astrología antiguas (Málaga),10, 279–310. Nissan, E. (2011a). The rod and the crocodile: Temporal relations in textual hermeneutics: An application of Petri nets to semantics. Semiotica, 184(1/4), 187–227. Nissan, E. (2011b). Aspects of Italy’s Jewish experience, as shaped by local and global factors. In C. Gelbin & S. L. Gilman (Eds.), Jewish culture in the age of globalisation. Special issue in the European Review of History/Revue européenne d’histoire, 18(1), 131–142. Nissan, E. (2011c). The Paradox of the Italian Jewish Experience in 1990–2010. Changing Jewish Communities, no. 66, 15 March 2011 (online, refereed monthly of the Institute for Global Jewish Affairs/Jerusalem Center for Public Affairs). http://jcpa.org/JCPA/Templates/ ShowPage.asp?DRIT=4&DBID=1&LNGID=1&TMID=111&FID=623&PID=0&IID= 6194&TTL=The_Paradox_of_the_Italian_Jewish_Experience_in_1990-2010 Nissan, E. (2011d). Reflections on a New Edition of Martial’s Liber spectaculorum: Supplementary information from Jewish sources about the arena games. Ludica: annali di storia e civiltà del gioco, nos. 13/14 (pp. 224–240). Rome: Viella, for Treviso: Fondazione Benetton, 2007–2008 [March 2011]. Nissan, E. (2011e). Risks of ingestion: On eating tomatoes in Agnon, and on the water of Shittim. Revue européenne des études hébraïques (REEH), Paris, 14, 2009 [2011], 46–79. Nissan, E. (2011f). ate it: The fate of homework as a situational archetype for a pretext. Social context, medium, and formalism. The American Journal of Semiotics, 27(1–4), 115–162. Nissan, E. (forthcoming a). All the appearance of a pretext – In Courtroom examples, and in Gag cartoons. Submitted book. Contains four fairly autonomous essays: Cognitive states, arguments, and representing legal narratives: The pragmatics of a claim with little credibility, Part I: “The dog ate it, m’lud” – The Newcastle case. An analysis with Wigmore charts. Cognitive states, arguments, and representing legal narratives: The pragmatics of a claim with little credibility, Part II: Mice ate the evidence – The Sofri case: An analysis with episodic formulae. The pragmatics of a claim with little credibility. 
Part III: A typology for “My dog ate my homework”: (1) An analysis with decision tables of a topos in humour. The pragmatics of a claim with little credibility. Part IV: A typology for “My dog ate my homework”: (2) Taxonomy enrichment by devising further situations and analysing their cognitive features. Nissan, E. (forthcoming b). An Analysis with Wigmore Charts of Gulotta’s last speech in defence to the bench at the Bolzano trial. To appear in: G. Gulotta, M. Liberatore, & E. Nissan, Memories Under Trial. Nissan, E., Cassinis, R., & Morelli, L. M. (2008). Have computation, animatronics, and robotic art anything to say about emotion, compassion, and how to model them? The survivor project. Pragmatics & Cognition, 16(1), 3–37. As a continuation of 15(3) (2007), special issue
on “Mechanicism and autonomy: What can robotics teach us about human cognition and action?”, third in the series Cognition and Technology. Nissan, E., & Dragoni, A. F. (2000). Exoneration, and reasoning about it: A quick overview of three perspectives. Session on Intelligent Decision Support for Legal Practice (IDS 2000), In Proceedings of the international ICSC congress “Intelligent Systems & Applications” (ISA’2000), Wollongong, Australia, December 2000, Vol. 1, pp. 94–100. Nissan, E., & El-Sana, J. (2012). A retrospective of a pioneering project. Earlier than XML, other than SGML, still going: CuProS metadata for deeply nested relations, and navigating for retrieval in RAFFAELLO. In N. Dershowitz & E. Nissan (Eds.), Language, culture, com- putation: Essays in honour of Yaacov Choueka. Vol. 2: Tools for text and language, and the cultural dimension (in press). Berlin: Springer. Nissan, E., Galperin, A., Soper, A., Knight, B., & Zhao, J. (2001). Future states for a present- state estimate, in the contextual perspective of in-core nuclear fuel management. International Journal of Computing Anticipatory Systems, 9, 256–271. Nissan, E., Gini, G., & Colombetti, M. (2008) [2009]. Guest editorial: Marco Somalvico Memorial Issue. Annals of Mathematics and Artificial Intelligence, 54(4), 257–264. doi:10.1007/s10472- 008-9102-9 Nissan, E., Gini, G., & Colombetti, M. (2009a). Guest editorial: An artificial intelligence miscel- lanea, remembering Marco Somalvico. In: Marco Somalvico Memorial Issue. Applied Artificial Intelligence, 23(3), 197–185. Nissan, E., Gini, G., & Colombetti, M. (2009b). Guest editorial: Marco Somalvico Memorial Issue. In: Marco Somalvico Memorial Issue (Part I of II). Computational Intelligence, 25(2), 109–113. Nissan, E., Hall, D., Lobina, E., & de la Motte, R. (2004). A formalism for a case study in the WaterTime project: The city water system in Grenoble, from privatization to remunicipaliza- tion. Applied Artificial Intelligence, 18(3/4), 367–389. Nissan, E., & Martino, A. A. (Eds.). (2001). Software, Formal Models, and Artificial Intelligence for Legal Evidence, special issue of Computing and Informatics, 20(6), 509–656. Nissan, E., & Martino, A. A. (Eds.). (2003a). Building blocks for an artificial intelligence frame- work in the field of legal evidence, special issue (two parts), Cybernetics and Systems, 34(4/5), 233–411, 34(6/7), 413–583. Nissan, E., & Martino, A. A. (2003b). Guest editorial. Building blocks for an artificial intelli- gence framework in the field of legal evidence, Part I. In Nissan & Martino (2003a), Part I. Cybernetics and Systems, 34(4/5), 233–244. Nissan, E., & Martino, A. A. (Eds.). (2004a). The construction of judicial proof: A challenge for artificial intelligence modelling, special issue, Applied Artificial Intelligence, 18(3/4), 183–393. Nissan, E., & Martino, A. A. (2004b). Artificial intelligence and formalisms for legal evidence: An introduction. Applied Artificial Intelligence, 18(3/4), 185–229. Nissan, E., & Rousseau, D. (1997). Towards AI formalisms for legal evidence. In Z. W. Ras & A. Skowron (Eds.), Foundations of intelligent systems: Proceedings of the 10th international symposium, ISMIS’97 (pp. 328–337). Berlin: Springer. Nissan, E., & Shemesh, A. O. (2010). Saturnine traits, melancholia, and related conditions as ascribed to Jews and Jewish culture (and Jewish responses) from Imperial Rome to high moder- nity. In A. Grossato (Ed.), Umana, divina malinconia, special issue on melancholia, Quaderni di Studi Indo-Mediterranei, 3 (pp. 
97–128). Alessandria, Piedmont, Italy: Edizioni dell’Orso. Nissan, E., & Shimony, S. E. (1996). TAMBALACOQUE: For a formal account of the gist of a scholarly argument. Knowledge Organization, 23(3), 135–146. Nissan, E., & Shimony, S. E. (1997). VEGEDOG: Formalism, vegetarian dogs, and partonomies in transition. Computers and Artificial Intelligence, 16(1), 79–104. Nitta, K., Hasegawa, O., & Akiba, T. (1997). An experimental multimodal disputation system. In the Proceedings of the IJCAI workshop on intelligent multimodal systems, IJCAI’97, pp. 23–28. [The web page of ETL, with which the authors were affiliated, is http://www.etl.go.jp/welcome.html] Noon, R. K. (1992). Introduction to forensic engineering. (The Forensic Library). Boca Raton, FL: CRC Press.

Noon, R. K. (2002). Forensic engineering investigation. Boca Raton, FL: CRC Press. Noon, R. K. (2005a). Structural failures. Chapter 23 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Noon, R. K. (2005b). Vehicular accident reconstruction. Chapter 25 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Norman, D. A., & Rumelhart, D. E. (1975). Explorations in cognition. San Francisco: W. H. Freeman and Company. Norvig, P. (1987). A unified theory of inference for text understanding. Technical Report CSD-87- 339. Berkeley, CA: Computer Science Division, University of California. ftp://sunsite.berkeley. edu/pub/techreps/CSD-87-339.html Norvig, P. (1989). Marker passing as a weak method for text inferencing. Cognitive Science, 13, 569–620. Nourkova, V. V., Bernstein D. M., & Loftus, E. F. (2004). Altering traumatic memories. Cognition and Emotion, 18, 575–585. Novitz, D. (1980). Fiction, imagination and emotion. The Journal of Aesthetics and Art Criticism, 38, 279–288. Nowakowska, M. (1973a). A formal theory of actions. Behavioral Science, 18, 393–416. Nowakowska, M. (1973b). Language of motivation and language of actions. The Hague: Mouton. Nowakowska, M. (1976a). Action theory: Algebra of goals and algebra of means. Design Methods and Theories, 10(2), 97–102. Nowakowska, M. (1976b). Towards a formal theory of dialogues. Semiotica, 17(4), 291–313. Nowakowska, M. (1978). Formal theory of group actions and its applications. Philosophica, 21, 3–32. Nowakowska, M. (1984). Theories of research (2 Vols.). Seaside, CA: Intersystems Publications. Nowakowska, M. (1986). Cognitive sciences: Basic problems, new perspectives, and implications for artificial intelligence. Orlando, FL: Acedemic. Nowakowski [sic], M. (1980). Possibility distributions in the linguistic theory of actions. International Journal of Man-Machine Studies, 12, 229–239. NRC. (1995). National Review Council (U.S.) Committee on Declassification of Information for the Department of Energy Environmental Remediation and Related Programs 1995.AReview of the Department of Energy Classification Policy and Practice. Washington, DC: National Academic Press. NYT. (1995). Woman guilty of murdering husband no. 9. The New York Times, March 19 (Late Edn., Final): Sec. 1, p. 31, col. 1, National Desk. Oatley, G., & Ewart, B. (2003). Crimes analysis software: ‘Pins in Maps’, clustering and Bayes net prediction. Expert Systems with Applications, 25(4), 569–588. Oatley, G., & Ewart, B. (2011). Data mining and crime analysis. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(2), 147–153. doi://10.1002/widm.6 Oatley, G., Ewart, B., & Zeleznikow, J. (2006). Decision support systems for police: Lessons from the application of data mining techniques to ‘soft’ forensic evidence. Journal of Artificial Intelligence and Law, 14(1/2), 35–100. Oatley, G., Zeleznikow, J., & Ewart, B. (2004). Matching and predicting crimes. In A. Macintosh, R. Ellis, & T. Allen (Eds.), Applications and innovations in intelligent systems XII. Proceedings of AI2004, the 24th SGAI international conference on knowledge based systems and applica- tions of artificial intelligence (pp. 19–32). Berlin: Springer. Oatley, G., Zeleznikow, J., Leary, R., & Ewart, B. (2005). 
From links to meaning: A burglary data case study. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 9th international conference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part IV (pp. 813–822). (Lecture Notes in Computer Science, Vol. 3684). Berlin: Springer. Oatley, G. C., MacIntyre, J., Ewart, B. W., & Mugambi, E. (2002). SMART software for decision makers KDD experience. Knowledge-Based Systems, 15, 323–333.

O’Barr, W. M. (1982). Linguistic evidence: Language, power and strategy in the courtroom.New York: Academic. Ochaeta, K. E. (2008). Fraud detection for internet auctions: A data mining approach. PhD thesis. Hsinchu, Taiwan: National Tsing-Hua University. ODBASE. (2005). Ontologies, databases and applications of semantics (ODBASE) 2005 interna- tional conference. Berlin: Springer. Oehler, D. (2009). Rediscovered: Forest owlet. Bird Watcher’s Digest, November/December. Ofshe, R. J., & Leo, R. A. (1997a). The social psychology of police interrogation: The the- ory and classification of true and false confessions. Studies in Law, Politics, and Society, 16, 189–251. Ofshe, R. J., & Leo, R. A. (1997b). The decision to confess falsely: Rational choice and irrational action. Denver University Law Review, 74, 979–1122. Ogata, T. (2004). A computational approach to literary and narrative production: Toward compu- tational narratology. In Art and Science: Proceedings of the 18th congress of the international association of empirical aesthetics, Lisbon, September 2004, pp. 509–516. Ogden, J. (1992). Restoration jocularity at Othello’s expense. Notes and Queries, 39(4), 464. (Vol. 237, new series, Oxford: Oxford University Press.) Ogston, E., & Vassiliadis, S. (2002). Unstructured agent matchmaking: Experiments in timing and fuzzy matching. In Proceedings of the special track on coordination models, languages and applications of the 17th ACM symposium on applied computing, Madrid, Spain, pp. 300–305. Oinonen, K., Theune, M., Nijholt, A., & Heylen, D. (2005). Getting the story right: Making computer-generated stories more entertaining. In M. Maybury, O. Stock, & W. Wahlster (Eds.), Proceedings of intelligent technologies for interactive entertainment (INTETAIN’05) (pp. 264– 268). (Lecture Notes in Artificial Intelligence, 3814). Berlin: Springer. http://dx.doi.org/10. 1007/11590323_32 Oinonen, K., Theune, M., Nijholt, A., & Uijlings, J. (2006). Designing a story database for use in automatic story generation. In R. Harper, M. Rauterberg, & M. Combetto (Eds.), Proceedings of the fifth International Conference on Entertainment Computing (ICEC 2006), Cambridge, UK (pp. 298–301). (Lecture Notes in Computer Science, 4161). Berlin: Springer. http://dx.doi. org/10.1007/11872320_36 Okada, N., & Endo, Ts. (1992). Story generation based on dynamics of the mind. Computational Intelligence, 8(1), 123–160. Olderog, E.-R. (1991). Nets, terms and formulas: Three views of concurrent processes and their relationship. (Cambridge Tracts in Theoretical Computer Science, 23.) Cambridge: Cambridge University Press. O’Looney, J. (2000). Beyond maps: GIS and decision making in local government. Redlands, CA: ESRI Press. Olson, E. A., & Wells, G. L., (2002). What makes a good alibi? A proposed taxonomy. Ames, IA: Iowa State University, n.d. (but 2002). Portions of the data in this report were presented at the 2001 Biennial Meeting of the Society for Applied Research in Memory and Cognition. http:// www.psychology.iastate.edu/~glwells/alibi_taxonomy.pdf Olson, S. L. (2008). The Meinertzhagen mystery: The life and legend of a colossal fraud. The Wilson Journal of Ornithology, 120(4), 917–926. Reviewing Garfield (2007). Onega, S., & Garcia Landa, J. A. (1996). Narratology. London: Longman. Onyshkevych, B., & Nirenburg, S. (1995). A lexicon for knowledge-based MT. Machine Translation, 10(1/2), 5–57. Orgun, M. A., & Meyer, T. (Eds.). (2008). Advances in ontologies. Oxford: Blackwell. Ormerod, T. C., Barrett, E. C., & Taylor, P. J. 
(2008). Investigating sensemaking in criminal contexts. In J. M. Schraagen, L. G. Militello, T. C. Ormerod, & R. Lipshitz (Eds.), Naturalistic decision making and macrocognition (pp. 81–102). Farnham, England: Ashgate. O’Rorke, P., & Ortony, A. (1994). Explaining emotions. Cognitive Science, 18, 283–323. Osborn, A. S. (1929). Questioned documents (2nd ed.). Albany, NY: Boyd Printing Company. Reprinted, Chicago: Nelson-Hall Co. Osborne, C. (1997). Criminal litigation (5th ed.). London: Blackstone. O’Shea, C. (2005). Intrusion detection with honeypots. Course presentation [a student project, supervised by K. Jeffay], COMP 290, Spring 2005. Department of Computer Science. Chapel
Hill: The University of North Carolina. [A slideshow turned into .pdf] http://www.cs.unc. edu/~jeffay/courses/nidsS05/slides/12-Honeypots.pdf http://www.cs.unc.edu/~jeffay/courses/ nidsS05/slides/Student-Project-Summaries.pdf Oskamp, A., Walker, R. F., Schrickx, J. A., & van den Berg, P. H. (1989). PROLEXS divide and rule: A legal application. In J. C. Smith & R. T. Franson (Eds.), Proceedings of the second International Conference on Artificial Intelligence and Law (ICAIL’89) (pp. 54–62). New York: ACM Press. doi://10.1145/74014.74022 O’Sullivan, M., Ekman, P., & Friesen, W. V. (1988). The effect of comparisons on detecting deceit. Journal of Nonverbal Behavior, 12, 203–215. Osuna, R. G., & Nagle, H. T. (1999). A method for evaluating data preprocessing techniques for odour classification with an array of gas sensors. IEEE Transactions on Systems, Man and Cybernetics, B, 29(5), 626–632. Otgaar, H. (2009). Not all false memory paradigms are appropriate in court. In L. Strömwall & P.A. Granhag (Eds.), Memory: Reliability and personality (pp. 37–46). Göteborg, Sweden: Göteborg University. Otgaar, H., Candel, I., Memon, A., & Almerigogna, J. (2010a). Differentiating between chil- dren’s true and false memories using reality monitoring criteria. Psychology, Crime & Law, 16, 555–566. http://www.personeel.unimaas.nl/henry.otgaar/Otgaar_ChildrenFalseMemoriesRM_ inpress_PCL.pdf Otgaar, H., Candel, I., Scoboria, A., & Merckelbach, H. (2010c). Script knowledge enhances the development of children’s false memories. Acta Psychologica, 133, 57–63. http://www. personeel.unimaas.nl/henry.otgaar/Otgaar_Scriptfalsememories_2010_AP.pdf Otgaar, H., Candel, I., Smeets, T., & Merckelbach, H. (2010d). “You didn’t take Lucy’s skirt off”: The effect of misleading information on omissions and commissions in children’s memory reports. Legal & Criminological Psychology, 15, 229–241. http://www.personeel.unimaas.nl/ henry.otgaar/Otgaar_ChildrenOmissionsCommissionMisleading_2010_LCP.pdf Otgaar, H., Meijer, E. H., Giesbrecht, G., Smeets, T., Candel, I., & Merckelbach, H. (2010b). Children’s suggestion-induced omission errors are not caused by memory erasure. Consciousness and Cognition, 19, 265–269. http://www.personeel.unimaas.nl/henry.otgaar/ Otgaar_OmissionErrorserasure_2010_C&C.pdf Otgaar, H., & Smeets, T. (2010). Adaptive memory: Survival processing increases both true and false memory in adults and children. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 1010–1016. http://www.personeel.unimaas.nl/henry. otgaar/Otgaar_AdaptiveMemoryFalseMemory_2010_JEPLMC.pdf Otgaar, H. P., Candel, I., & Merckelbach, H. (2008). Children’s false memories: Easier to elicit for a negative than a neutral event. Acta Psychologica, 128, 350–354. http://www.personeel. unimaas.nl/henry.otgaar/Otgaar_Children’sFalseMemoriesNegativeNeutral3_2008_AP.pdf Otgaar, H. P., Candel, I., Merckelbach, H., & Wade, K. A. (2009). Abducted by a UFO: Prevalence information affects young children’s false memories for an implausible event. Applied Cognitive Psychology, 23, 115–125. http://www.personeel.unimaas.nl/henry.otgaar/ Otgaar_PrevalenceUFOChildren’sfalsememories3_2009_ACP.pdf Oudot, L. (2003). Fighting spammers with honeypots: Part 1. http://www.securityfocus.com/ Oudot, L., & Holz, T. (2004). Defeating honeypots: Network issues, Part 1. http://www. securityfocus.com/ Ouellette, J. (1999). Electronic noses sniff our new markets. Industrial Physics, 5, 26–29. Overill, R. E. (2009). 
Development of Masters modules in computer forensics and cybercrime for computer science and forensic science students. International Journal of Electronic Security & Digital Forensics, 2(2), 132–140. http://www.dcs.kcl.ac.uk/staff/richard/IJESDF_2009.pdf Overill, R. E., Silomon, J. A. M., Kwan, Y. K., Chow, K.-P., Law, Y. W., & Lai, K. Y. (2009). A cost-effective digital forensics investigation model. In Proceedings of the Fifth Annual IFIP WG 11.9 International Conference on Digital Forensics, Orlando, FL, 25–28 January 2009, Advances in Digital Forensics V. Berlin: Springer, pp. 193–202. Overill, R. E., & Silomon, J. A. M. (2010a). Digital meta-forensics: Quantifying the investigation. In Proceedings of the Fourth International Conference on Cybercrime Forensics Education &
Training (CFET 2010), Canterbury, Kent, England, 2–3 September 2010. http://www.dcs.kcl. ac.uk/staff/richard/CFET_2010.pdf Overill, R. E, Silomon, J. A. M., & Chow, K.-P. (2010b). A complexity based model for quantifying forensic evidential probabilities. In Proceedings of the Third International Workshop on Digital Forensics (WSDF 2010), Krakow, Poland, 15–18 February 2010, pp. 671–676. http://www.dcs. kcl.ac.uk/staff/richard/F2GC_2010.pdf Overill, R. E, Silomon, J. A. M., Kwan, Y. K., Chow, K.-P., Law, Y. W., & Lai, K. Y. (2010). Sensitivity analysis of a Bayesian network for reasoning about digital forensic evidence. In Proceedings of the Fourth International Workshop on Forensics for Future Generation Communication environments (F2GC-2010), Cebu, Philippines, 11–13 August 2010. http:// www.dcs.kcl.ac.uk/staff/richard/F2GC_2010.pdf Owen, G. (1995). Game theory (3rd ed.). San Diego, CA: Academic. Owens, C. C. (1990). Indexing and retrieving abstract planning knowledge. Doctoral dissertation. New Haven, CT: Computer Science Department, Yale University. Owens, C. C. (1994). Retriever and Anon: Retrieving structures from memory. In R. C. Schank, A. Kass, & C. K. Riesbeck (Eds.), Inside case-based explanation (pp. 89–126). Hillsdale, NJ: Erlbaum. Özsoyoglu,˘ Z. M. (Ed.). (1988). Nested relations. Special issue of The IEEE Data Engineering Bulletin, 11(3), September. New York: IEEE. Özsoyoglu,˘ Z. M., & Yuan, L. Y. (1987). A new normal form for nested relations. ACM Transactions on Database Systems, 12, 111–136. Pacuit, E. (2005). Topics in social software: Information in strategic situations. Doctoral disserta- tion. New York: City University of New York. Pacuit, E., & Parikh, R. (2007). Social interaction, knowledge, and social software. In D. Goldin, S. Smolka, & P. Wegner (Eds.), Interactive computation: The new paradigm (pp. 441–461). New York: Springer. Pacuit, P., Parikh, R., & Cogan, E. (2006). The logic of knowledge based obligation. Knowledge, Rationality and Action, a subjournal of Synthese, 149(2), 311–341. Paglieri, F. (2009). Ruinous arguments: Escalation of disagreement and the dangers of arguing. In H. Hansen, C. Tindale, R. Johnson, & A. Blair (Eds.), Argument cultures: Proceedings of OSSA 2009. CD-ROM. Windsor, ON: OSSA. Paglieri, F., & Castelfranchi, C. (2005). Revising beliefs through arguments: Bridging the gap between argumentation and belief revision in MAS. In I. Rahwan, P. Moraitis, & C. Reed (Eds.), Argumentation in multi-agent systems (pp. 78–94). Berlin: Springer. Paglieri, F., & Castelfranchi, C. (2010). Why argue? Towards a cost–benefit analysis of argumen- tation. Argument & Computation, 1(1), 71–91. Paley, B., & Geiselman, R. E. (1989). The effects of alternative photospread instructions on suspect identification performance. American Journal of Forensic Psychology, 7, 3–13. Pallotta, G. (1977). Dizionario storico della mafia. (Paperbacks società d’oggi, 8.) Rome: Newton Compton Editori. Palmer, M. S., Passonneau, R. J., Weir, C., & Finin, T. W. (1993). The KERNEL text understanding system. Artificial Intelligence, 63(1/2), 17–68. Pamplin, C. (2007a). Cross-examining the experts. In Expert Witness Supplement to The New Law Journal, 157(7294) (London, 26 October 2007), 1480–1481. Pamplin, C. (2007b). Limiting the evidence. In Expert Witness Supplement to The New Law Journal, 157(7294) (London, 26 October 2007), 1488–1489. Pamula, V. K. (2003). Detection of explosives. Chapter 23 In T. C. Pearce, S. S. Schiffman, H. T. Nagle, & J. W. 
Gardner (Eds.), Handbook of machine olfaction: Electronic nose technology (pp. 547–560). Weinheim, Baden-Württemberg: Wiley VCH Verlag. Published online: 2004. doi://10.1002/3527601597.ch23 Panangadan, A., Ho, Sh.-Sh., & Talukder, A. (2009). Cyclone tracking using multiple satellite image sources. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, 4–6 November 2009.

Pandit, S., Chau, D. H., Wang, S., & Faloutsos, C. (2007). NetProbe: A fast and scalable sys- tem for fraud detection in online auction networks. In WWW 2007: Proceedings of the 16th International Conference on World Wide Web, Banff, AB, Track: Data Mining, Session: Mining in Social Networks. New York: ACM, pp. 201–210. Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP 02, 7th Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Morristown, US, pp. 79–86. http://www.cs.cornell.edu/home/llee/papers/sentiment.pdf Pankanti, S., Prabhakar, S., & Jain, A. K. (2002). On the individuality of fingerprints. IEEE Transactions on Pattern Analysis & Machine Intelligence (IEEE PAMI), 24, 1010–1025. Pannu, A. S. (1995). Using genetic algorithms to inductively reason with cases in the legal domain. In Proceedings of Fifth International Conference on Artificial Intelligence and Law.NewYork: ACM Press, pp. 175–184. Papadimitriou, C. H. (1994). Computational complexity. Reading, MA: Addison-Wesley. Papageorgis, D., & McGuire, W. J. (1961). The generality of immunity to persuasion produced by pre-exposure to weakened counterarguments. Journal of Abnormal and Social Psychology, 62, 475–481. Papineau, D. (1991). Correlations and causes. British Journal for the Philosophy of Science, 42, 397–412. Pardo, M. S. (2005). The field of evidence and the field of knowledge. Law and Philosophy, 24, 321–391. Pardue, H. L. (Ed.). (1994). Analytical aspects of forensic science. Special issue, Analytica Chimica Acta, 288(1/2). Amsterdam: Elsevier. Parent, X. (2003). Remedial interchange, contrary-to-duty obligation and commutation. Journal of Applied Non-Classical Logics, 13(3/4), 345–375. Parikh, R. (2001). Language as social software. In J. Floyd & S. Shieh (Eds.), Future pasts: The analytic tradition in twentieth century philosophy (pp. 339–350). Oxford: Oxford University Press. Parikh, R. (2002). Social software. Synthese, 132, 187–211. Parkinson, B. (1995). Ideas and realities of emotion. London: Routledge. Parry, A. (1991). A universe of stories. Family Process, 30(1), 37–54. Parsons, S., & McBurney, P. (2003). Argumentation-based communication between agents. In M.-P. Huget (Ed.), Communication in multiagent systems: Agent communication languages and conversation policies. (Lecture Notes in Computer Science, 2650). Berlin: Springer. Parton, D. A., Hansel, M., & Stratton, J. R. (1991). Measuring crime seriousness: Lessons from the National Survey of Crime Severity. The British Journal of Criminology, 31, 72–85. Partridge, R. E. (1991). Battle scarred. [A two-paragraph item.] Reader’s Digest (U.S, edition), April 1991, p. 120. Parunak, H., Ward, A., Fleischer, M., & Sauter, J. (1997). A marketplace of design agents for distributed concurrent set-based design. In Proceedings of the Fourth ISPE International Conference on Concurrent Engineering: Research and Applications (ISPE/CE97),Troy,MI. Pattenden, R. (1993). Conceptual versus pragmatic approaches to hearsay. Modern Law Review,69 56(2), 138–156. Pawlak, Z. (1991). Rough sets: Theoretical aspects of reasoning about data. (Theory and Decision Library, 9. System Theory, Knowledge Engineering, and Problem Solving, Series D). Dorrdrecht, The Netherlands: Kluwer. PCMLP. (n.d.). Geographical links. 
Inside the website of the Programme in Comparative Media Law & Policy (PCMLP), Centre for Socio-Legal Studies, Wolfson College, University of Oxford. Retrieved ca. 2000; http://pcmlp.socleg.ox.ac.uk/regional.html

69 The journal Modern Law Review is published in Oxford by Blackwell.

Pearce, T. C., Schiffman, S. S., Nagle, H. T., & Gardner, J. W. (Eds.). (2002). Handbook of machine olfaction: Electronic nose technology. Weinheim, Baden-Württemberg: Wiley-VCH. doi://10.1002/3527601597 Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Mateo, CA: Morgan-Kaufmann. Pearl, J. (1993). From conditional oughts to qualitative decision theory. In Uncertainty in AI: Proceedings of the Ninth Conference,70 Washington, DC, July 1993, pp. 12–20. Pearl, J. (2001). Bayesianism and causality, and why I am only a half-Bayesian. In D. Corfield & J. Williamson (Eds.), Foundations of Bayesianism (pp. 19–36). (Kluwer Applied Logic Series, 24). Dordrecht, The Netherlands: Kluwer. http://ftp.cs.ucla.edu/pub/stat_ser/r284-reprint.pdf Pearman, D. A., & Walker, K. J. (2004). An examination of J. W. Heslop Harrison’s unconfirmed plant records from Rum. Watsonia, 25, 45–63. Pease, K., Ireson, J., Billingham, S., & Thorpe, J. (1977). The development of a scale of offence seriousness. International Journal of Criminology and Penology, 5, 17–29. Pei, J., Jiang, D., & Zhang, A. (2005). On mining cross-graph quasi-cliques. In Proceedings of the 2005 International Conference on Knowledge Discovery and Data Mining (KDD 2005), Chicago, IL, August 2005, pp. 228–238. Peinado, F., Cavazza, M., & Pizzi, D. (2008). Revisiting character-based affective storytelling under a narrative BDI framework. In U. Spierling & N. Szilas (Eds.), Proceedings of the first international conference on interactive digital storytelling, Erfurt, Germany, 26–29 November 2008 (pp. 83–88). (Lecture Notes in Computer Science, Vol. 5334.), Berlin: Springer. Peinado, F., & Gervás, P. (2004). Transferring game mastering laws to interactive digital story- telling. In S. Göbel, U. Spierling, A. Hoffmann, I. Iurgel, O. Schneider, J. Dechau, et al. (Eds.), Technologies for interactive digital storytelling and entertainment: Proceedings of the 2nd international conference on technologies for interactive digital storytelling and entertainment, TIDSE’04, Darmstadt, Germany, 24–26 June 2004 (pp. 48–54). (Lecture Notes in Computer Science, 3105). Berlin: Springer. Peinado, F., & Gervás, P. (2005a). Creativity issues in plot generation. In P. Gervás, T. Veale, & A. Pease (Eds.), Workshop on computational creativity, working notes. 19th international joint conference on artificial intelligence, Edinburgh, Scotland, 30 July–5 August 2005 (pp. 45– 52). Also: Technical Report 5-05. Departamento de Sistemas Informáticos y Programación, Universidad Complutense de Madrid. Peinado, F., & Gervás, P. (2005b). A Generative and Case-based Implementation of Proppian Morphology. In B. Lönneker, J. C. Meister, P. Gervás, F. Peinado, & M. Mateas (Eds.), Story generators: Models and approaches for the generation of literary artifacts.Atthe17th Joint International Conference of the Association for Computers and the Humanities and the Association for Literary and Linguistic Computing (ACH/ALLC), Victoria, BC, 15–18 June 2005 (pp. 129–133). Humanities Computing and Media Centre, University of Victoria. Peinado, F., & Gervás, P. (2006a). Minstrel reloaded: From the magic of Lisp to the formal semantics of OWL. In S. Göbel, R. Malkewitz, & I. Iurgel (Eds.), Proceedings of the third international conference on Technologies for Interactive Digital Storytelling and Entertainment (TIDSE), Darmstadt, Germany, 4–6 December 2006 (pp. 93–97). (Lecture Notes in Computer Science, 4326.) Berlin: Springer. 
Peinado, F., & Gervás, P. (2006b). Evaluation of automatic generation of basic stories. In a special issue on Computational Creativity, New Generation Computing, 24(3), 289–302. Peinado, F., & Gervás, P. (2007). Automatic direction of interactive storytelling: Formalizing the game master paradigm. In M. Cavazza & S. Donikian (Eds.), Proceedings of the fourth

70 The UAI conference has been held every year since 1985. Proceedings of some past conferences (most of those from the 2000s) can be viewed online at http://uai.sis.pitt.edu/ Hardcopy versions of the proceedings can be purchased through Brightdoc, at https://store.brightdoc.com/store/default.asp?clientid=212

International Conference on Virtual Storytelling: Using virtual reality technologies for sto- rytelling (ICVS), Saint-Malo, France, 5–7 December 2007 (pp. 196–201). (Lecture Notes in Computer Science, 4871.) Berlin: Springer. Peinado, F., & Navarro, A. (2007). RCEI: An API for remote control of narrative environments. In M. Cavazza & S. Donikian (Eds.), Proceedings of the fourth International Conference on Virtual Storytelling: Using virtual reality technologies for storytelling (ICVS), Saint-Malo, France, 5–7 December 2007 (pp. 181–186). (Lecture Notes in Computer Science, 4871). Berlin: Springer. Peinado, F., Gervás, P., & Díaz-Agudo, B. (2004). A Description Logic Ontology for Fairy Tale Generation. In T. Veale, A. Cardoso, F. Camara Pereira, & P. Gervás (Eds.), 4th international conference on Language Resources and Evaluation, Proceedings of the workshop on language resources for linguistic creativity, LREC’04, Lisbon, 29 May 2004 (pp. 56–61). ELRA. Peirce, C. S. (1903). Harvard lectures on pragmatism. In C. Hartshorne & P. Weiss (Eds.), Collected papers of Charles Sanders Peirce (Vol. 5). Cambridge, MA: Harvard University Press. (8 vols. published in 1931–1958 (vols. 7 and 8, ed. A. W. Burks).71 Volumes reissued as 8 vols. in 4 by the Belknap Press of Harvard University Press, ca. 1965–1967. The 1931–1958 edn. was reprinted as 8 vols. in Bristol, England: Thoemmes Press, 1998.) Peirce, C. S. [1901] (1955). Abduction and induction. In J. Buchler (Ed.), Philosophical writings of peirce (pp. 150–156). New York: Dover. Pelosi, P., & Persaud, K. C. (1988). Gas sensors: towards an artificial nose. In P. Dario (Ed.), Sensors and sensory systems for advanced robotics (pp. 361–381). Berlin: Springer. Pemberton, L. (1989). A modular approach to story generation. In Proceedings of the Fourth European Meeting of the Association for Computational Linguistics (EACL-89), Manchester, England, 10–12 April 1989, pp. 217–224. Pennec, X. (2007). From Riemannian geometry to computational anatomy of the brain. In The Digital Patient, special issue of ERCIM News, 69 (April), pp. 15–16. Article download- able from the webpage http://ercim-news.ercim.org/content/view/166/314/ of the European Research Consortium for Informatics and Mathematics. Pennington, N., & Hastie, R. (1981). Juror decision-making models: The generalization gap. Psychological Bulletin, 89, 146–287. Pennington, N., & Hastie, R. (1986). Evidence evaluation in complex decision making. Journal of Personality and Social Psychology, 51, 242–258. Pennington, N., & Hastie, R. (1988). Explanation-based decision making: Effects of memory struc- ture on judgment. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 521–533. Pennington, N., & Hastie, R. (1992). Explaining the evidence: Tests of the story model for juror decision making. Journal of Personality and Social Psychology, 62, 189–206. Pennington, N., & Hastie, R. (1993). The story model for juror decision making. In R. Hastie (Ed.), Inside the Juror: The psychology of juror decision making (pp. 192–221). Cambridge, England: Cambridge University Press. Pennington, D. C., & Lloyd-Bostock, S. (Eds.). (1987). The psychology of sentencing: Approaches to consistency and disparity. Oxford: Centre for Socio-Legal Studies. Penrod, S. (2005). Eyewitness identification evidence: How well are witnesses and police performing? Criminal Justice Magazine, 54, 36–47. Penrod, S., Loftus, E., & Winkler, J. (1982). The reliability of witness testimony: A psychological perspective. In N. L. 
Kerr & R. M. Bray (Eds.), The criminal justice system (pp. 119–168). New York: Academic. Penry, J. (1974). Photo-Fit. Forensic Photography, 3(7), 4–10.

71 Vol. 1: Principles of Philosophy. Vol. 2: Elements of Logic. Vol. 3: Exact Logic. Vol. 4: The Simplest Mathematics. Vol. 5: Pragmatism and Pragmaticism. Vol. 6: Scientific Metaphysics. Vol. 7: Science and Philosophy. Vol. 8: Reviews, Correspondence, and Bibliography.

Perdisci, R., Ariu, D., Fogla, P., Giacinto, G., & Lee, W. (2009). McPAD: A multiple classier system for accurate payload-based anomaly detection. In a special issue on Traffic Classification and Its Applications to Modern Networks of Computer Networks, 5(6), 864–881. http://3407859467364186361-a-1802744773732722657-s-sites.googlegroups.com/ site/robertoperdisci/publications/publication-files/McPAD-revision1.pdf Pérez y Pérez, R., & Sharples, M. (2001). MEXICA: A computer model of a cognitive account of creative writing. Journal of Experimental and Theoretical Artificial Intelligence, 13(2), 119– 139. http://www.eee.bham.ac.uk/sharplem/Papers/mexica_jetai.pdf Perloff, M. (2003). Taking agents seriously. Cybernetics and Systems, 34(4/5), 253–281. Peron C. S. J, & Legary, M. (2005). Digital anti-forensics: Emerging trends data transformation techniques. http://www.seccuris.com/documents/papers/Seccuris-Antiforensics.pdf Perrins, C. (1988). Obituary: Salim Moizuddin Abdul Ali (1896–1987). Ibis: Journal of the British Ornithologists’ Union, 130(2), 305–306. Oxford: Blackwell. Persaud, K. C. (1992). Electronic gas and odor detectors that mimic chemoreception in animals. TRAC Trends in Analytical Chemistry, 11, 61–67. Persaud, K. C. (2005). Medical applications of odor-sensing devices. International Journal of Lower Extremities Wounds, 4, 50–56. Persaud, K. C., Bartlett, J., & Pelosi, P. (1993). Design strategies for gas and odour sensors which mimic the olfactory system. In P. Dario, G. Sandini, & P. Aebisher (Eds.), Robots and biological systems: Towards a new bionics? (pp. 579–602). Berlin: Springer. Persaud, K. C., & Dodd, G. (1982). Analysis of discrimination mechanisms in the mammalian olfactory system using a model nose. Nature, 299, 352–355. Persaud, K. C., Qutob, A. A., Travers, P., Pisanelli, A. M., & Szyszko, S. (1994). Odor evaluation of foods using conducting polymer arrays and neural net pattern recognition. In K. Kurihara, N. Suzuki, & H. Ogawa (Eds.), Olfaction and taste XI (pp. 708–710). Tokyo & Berlin: Springer. Petacco, A. (1972). Joe Petrosino. (In Italian.) Milan: Arnoldo Mondadori Editore. Peter, R. (1999). Bird taxidermy. (Norman Cottage Pocket Book.) Oakham, Rutland, East Midlands, England: R. Merchant. Peters, G. A., & Peters, B. J. (1994). Automotive engineering and litigation. (Wiley Law Publications.) New York: Wiley. Peterson, D. M., Barnden, J. A., & Nissan, E. (Eds.) (2001). Artificial Intelligence and Law, special issue of Information & Communications Technology Law, 10(1). Peterson, J. L. (1981). Petri net theory and the modelling of systems. Englewood Cliffs, NJ: Prentice-Hall. Peterson, M. (2005). Intelligence-led policing: The new intelligence architecture. Washington, DC: Bureau of Justice Assistance. http://www.ojp.usdoj.gov/BJA/pdf/IntelLedPolicing.pdf Petri, C. A. (1966). Communication with automata. Supplement 1 to Technical Report RADC-TR- 65-377, Vol. 1. Rome, NY: Rome Air Development Center, Griffiths Air Force Base, January 1966. Translated by C. F. Greene, Jr., from: Kommunikation mit Automaten, University of Bonn, Bonn, West Germany, 1962. Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. New York: Springer. Petty R. E., Wegener, D. T., & White, P. H. (1998). Flexible correction processes in social judgment: implications for persuasion. Social Cognition, 16, 93–113. Peuquet, D. J., & Duan, N. (1995). 
An event-based spatiotemporal data model (ESTDM) for temporal analysis of geographical data. International Journal of Geographical Information Science, 9(1), 7–24. Pfeiffer III, J., & Neville, J. (2011). Methods to determine node centrality and clustering in graphs with uncertain structure. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media. http://www.cs.purdue.edu/homes/neville/papers/pfeiffer-icwsm2011.pdf Pharr, M., & Humphreys, G. (2004). Physically based rendering: From theory to implementation. San Francisco: Morgan Kaufmann.

Philipps, L. (1989). Are legal decisions based on the application of rules or prototype recogni- tion? Legal science on the way to neural networks. In Pre-Proceedings of the 3rd International Conference on Logica, Informatica, Diritto. Florence: Istituto per la Documentazione Giudiziaria, pp. 673–680. Philipps, L. (1991). Distribution of damages in car accidents through the use of neural networks. Cardozo Law Review, 13(2/3), 987–1001. Phillips, L. (1993). Vague legal concepts and fuzzy logic: An attempt to determine the required period of waiting after traffic accidents. In Proceedings of the Computer and Vagueness: Fuzzy Logic and Neural Nets, Munich. In Informatica e diritto (Florence), 2, 37–51. Philipps, L. (1999). Approximate syllogisms: On the logic of everyday life. Artificial Intelligence and Law, 7(2/3), 227–234. Phillips, M., & Huntley, C. (1993). Dramatica: A new theory of story. http://www.dramatica.com/ theory/theory_book/dtb.html Phillips, L., & Sartor, G. (1999). From legal theories to neural networks and fuzzy reasoning. Artificial Intelligence and Law, 7(2/3), 115–128. Philp, R. P. (2002). Application of stable isotopes and radioisotopes in environmental forensics. Chapter 5 In B. L. Murphy & R. D. Morrison (Eds.), Introduction to environmental forensics (pp. 99–136). San Diego, CA & London: Academic. Phua, C., Lee, V., Smith-Miles, K., & Gayler, R. (2005). A comprehensive survey of data-mining- based fraud detection research. Clayton, VIC: Clayton School of Information Technology, Monash University; 2005. In 2010, it was accessible at: http://clifton.phua.googlepages.com/ Phuoc, N. Q., Kim, S.-R., Lee, H.-K., & Kim, H. S. (2009). PageRank vs. Katz Status Index, a theoretical approach. In Proceedings of the Fourth International Conference on Computer Sciences and Convergence Information Technology (ICCIT’09), Seoul, South Korea, 24–26 November 2009, pp. 1276–1279. Pickel, D., Manucy, G., Walker, D., Hall, S., & Walker, J. (2004). Evidence for canine olfactory detection of melanoma. Applied Animal Behaviour Science, 89, 107–116. Pietroski, P. M. (1994). A “should” too many. Behavioral and Brain Sciences, 17(1), 26–27. Pildes, R. H. (1999). Forms of formalism. Chicago Law Review, 66, 607–621. Pisanelli, A. M., Qutob, A. A., Travers, P., Szyszko, S., & Persaud, K. C. (1994). Applications of multi-array polymer sensors to food industries. Life Chemistry Reports, 11, 303–308. Plamper, J. (2010). The history of emotions: An interview with William Reddy, Barbara Rosenwein, and Peter Stearns. History and Theory,72 49, 237–265. Plantinga, A. (1993a). Warrant: The current debate. Oxford: Oxford University Press. Plantinga, A. (1993b). Warrant and proper function. Oxford: Oxford University Press. Planty, M., & Strom, K. J. (2007). Understanding the role of repeat victims in the production of annual US victimization rates. Journal of Quantitative Criminology, 23(3), 179–200. Plewe, B. (1997). GIS online: Information retrieval, mapping, and the internet.SantaFe,NM: Onword Press. Poesio, M. (2005). Domain modelling and NLP: Formal ontologies? Lexica? Or a bit of both? Applied Ontology, 1(1), Amsterdam: IOS Press, pp. 27–33. Politis, D., Donos, G., Christou, G., Giannakopoulos, P., & Papapanagiotou-Leza, A. (2008). Implementing e-justice on a national scale: Coping with Balkanization and socio-economical divergence. Journal of Cases on Information Technology, 10(2), 41–59. http://www.igi-global. 
com/articles/details.asp?ID=7910 http://www.igi-global.com/journals/details.asp?id=202 Pollard, D. E. B. (1997). Logic of fiction. In P. V. Lamarque & R. E. Asher (Eds.), Concise encyclopedia of philosophy of language (pp. 264–265). Oxford: Pergamon. Pollock, J. (1989). How to build a person: A prolegomenon. Cambridge, MA: Bradford (MIT Press). Pollock, J. L. (2010). Defeasible reasoning and degrees of justification. Argument & Computation, 1(1), 7–22.

72 See fn. 141 in Chapter 8.

Poole, D. (1989). Explanation and prediction: An architecture for default and abductive reasoning. Computational Intelligence, 5(2), 97–110. Poole, D. L. (1988). A Logical framework for default reasoning. Artificial Intelligence, 36, 27–47. Poole, D. (2002) Logical argumentation, abduction and Bayesian decision theory: A Bayesian approach to logical arguments and its application to legal evidential reasoning. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 385–396). (Studies in Fuzziness and Soft Computing, Vol. 94). Heidelberg: Physical-Verlag. Pound, R. (1908). Mechanical jurisprudence. Columbia Law Review, 8, 605–623. Popescu, A.-M., & Etzioni, O. (2005). Extracting product features and opinions from reviews. In Proceedings of HLT-EMNLP, 2005, pp. 339–346. Popescu, A. C., & Farid, H. (2004). Exposing digital forgeries by detecting duplicated image regions. Technical Report TR2004-515. Hanover, NH: Department of Computer Science, Dartmouth College. Popescu, A. C., & Farid, H. (2005a). Exposing digital forgeries by detecting traces of re-sampling. IEEE Transactions on Signal Processing, 53(2), 758–767. Popescu, A. C., & Farid, H. (2005b). Exposing digital forgeries in color filter array interpolated images. IEEE Transactions on Signal Processing, 53(10), 3948–3959. www.cs.dartmouth.edu/ farid/publications/sp05a.html Popov, V. (2003). Social network analysis in decision making: A literature review. WaterTime Background Paper, PSIRU. London: University of Greenwich, January. Porat, A., & Stein, A. (2001). Tort liability under uncertainty. Oxford: Oxford University Press. Porter, S., Woodworth, M., Earle, J., Drugge, J., & Boaer, D. (2003). Characteristics of vio- lent behaviour exhibited during sexual homicides by psychopathic and non-psychopathic murderers. Law & Human Behavior, 27, 459–470. Porter, S., & Yuille, J. C. (1995). Credibility assessment of criminal suspects through statement analysis. Psychology, Crime, and Law, 1, 319–331. Porter, S., & Yuille, J. C. (1996). The language of deceit: An investigation of the verbal clues to deception in the interrogation context. Law and Human Behavior, 20, 443–459. Porter, A., & Prince, R. (2010). Lie detector tests on your taxes in Clegg’s ‘War on middle class’. London: The Daily Telegraph, 20 September, p. 1, bottom left. Porter, S., & Yuille, J. C. (1995). Credibility assessment of criminal suspects through statement analysis. Psychology, Crime, and Law, 1, 319–331. Posner, R. A. (1999). An economic approach to the law of evidence. Stanford Law Review, 51, 1477–1546. Pouget, F., & Holz, T. (2005). A pointillist approach for comparing honeypots. In K. Julisch & C. Krügel (Eds.), Detection of Intrusions and Malware, and Vulnerability Assessment: Proceedings of the Second International Conference (DIMVA 2005), Vienna, Austria, July 7–8, 2005 (pp. 51–68). Lecture Notes in Computer Science, Vol. 3548. Berlin: Springer. Poulin, D., Mackaay [sic], E., Bratley, P., & Frémont, J. (1989). Time server: A legal time special- ist. In A. A. Martino (Ed.), Pre-proceedings of the third international conference on “logica, Informatica, Diritto: Legal Expert Systems”, Florence, 1989 (2 vols. + Appendix) (Vol. 2, pp. 733–760). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche. Poulin, D., Mackaay [sic], E., Bratley, P., & Frémont, J. (1992). Time server: A legal time specialist. In A. Martino (Ed.), Expert systems in law (pp. 295–312). Amsterdam: North-Holland. 
Poulovassilis, A., & Levene, M. (1994). A nested-graph model for the representation and manipulation of complex objects. ACM Transactions on Information Systems, 12, 35–68. Pour Ebrahimi, B., Bertels, K., Vassiliadis, S., & Sigdel, K. (2004). Matchmaking within multiagent systems. In Proceedings of ProRisc2004, Veldhoven, The Netherlands, pp. 118–124. Prada, R., Machado, I., & Paiva, A. (2000). TEATRIX: Virtual environment for story creation. In Proceedings of the Fifth International Conference on Intelligent Tutoring Systems, pp. 464–473. Prag, J., & Neave, R. (1997). Making faces: Using forensic and archaeological evidence. London: Published for the Trustees of the British Museum by British Museum Press.

Prakken, H. (1993a). Logical tools for modelling legal argument. Ph.D. thesis. Amsterdam: Vrije University. Prakken, H. (1993b). A logical framework for modelling legal argument. In Proceedings of the Fourth International Conference on Artificial Intelligence and Law. New York: ACM Press, pp. 1–9. Prakken, H. (1997). Logical tools for modelling legal argument: A study of defeasible reasoning in law. Dordrecht, The Netherlands: Kluwer. Prakken, H. (2000). On dialogue systems with speech acts, arguments, and counterarguments. In M. Ojeda-Aciego, I. P. de Guzman, G. Brewka, & L. Moniz Pereira (Eds.), Proceedings of JELIA2000: The seventh European workshop on logic for artificial intelligence (pp. 239–253). (Springer Lecture Notes in Artificial Intelligence, 1919). Berlin: Springer. Prakken, H. (2001). Modelling reasoning about evidence in legal procedure. In Proceedings of the Eighth International Conference on Artificial Intelligence and Law (ICAIL 2001), St. Louis, MO. New York: ACM Press, pp. 119–128. Prakken, H. (2002). Incomplete arguments in legal discourse: A case study. In T. J. M. Bench- Capon, A. Daskalopulu, & R. Winkels (Eds.), Legal knowledge and information systems. JURIX 2002: The fifteenth annual conference (pp. 93–102). Amsterdam: IOS Press. Prakken, H. (2004). Analysing reasoning about evidence with formal models of argumentation. Law, Probability & Risk, 3, 33–50. Prakken, H. (2005). Coherence and flexibility in dialogue games for argumentation. Journal of Logic and Computation, 15, 1009–1040. Prakken, H. (2006). Formal systems for persuasion dialogue. The Knowledge Engineering Review, 21, 163–188. Prakken, H. (2008a). A formal model of adjudication dialogues. Artificial Intelligence and Law, 16, 305–328. Prakken, H. (2008b). Formalising ordinary legal disputes: A case study. Artificial Intelligence and Law, 16, 333–359. Prakken, H., & Renooij, S. (2001). Reconstructing causal reasoning about evidence: A case study. In B. Verheij, A. R. Lodder, R. P. Loui, & A. J. Muntjwerff (Eds.), Legal knowledge and information systems. Jurix 2001: The 14th annual conference (pp. 131–137). Amsterdam: IOS Press. Prakken, H., & Sartor, G. (1995a). On the relation between legal language and legal argument: Assumptions, applicability and dynamic priorities. In Proceedings of the Fifth International Conference on Artificial Intelligence and Law. New York: ACM Press, pp. 1–10. Prakken, H., & Sartor, G. (1995b). Argumentation framework: The missing link between argu- ments and procedures. European Journal of Law, Philosophy and Computer Science, 1/2, 379–396. Bologna, Italy: CLUEB. Prakken, H., & Sartor, G. (1996a). A dialectical model of assessing conflicting arguments in legal reasoning. Artificial Intelligence and Law, 4(3/4), 331–368. Alternative title: Rules about rules: Assessing conflicting arguments in legal reasoning; reprinted in H. Prakken & G. Sartor (Eds.), Logical models of legal argumentation (pp. 175–212). Dordrecht, The Netherlands: Kluwer, 1997. Prakken, H., & Sartor, G. (Eds.). (1996b). Logical models of legal argumentation, special issue of Artificial Intelligence and Law, 5 (1996), 157–372. Reprinted as Logical Models of Legal Argumentation, Dordrecht, The Netherlands: Kluwer, 1997. Prakken, H., & Sartor, G. (1998). Argumentation frameworks: The missing link between argu- ments and procedure. European Journal of Law, Philosophy and Computer Science, 1/2, 379–396. Prakken, H., & Sartor, G. (2002). 
The role of logic in computational models of legal argument: A critical survey. In A. Kakas & F. Sadri (Eds.), Computational logic: Logic programming and beyond. Essays in Honour of Robert A. Kowalski, Part II (pp. 342–380). (Lecture Notes in Computer Science, 2048). Berlin: Springer. Prakken, H., & Sergot, M. J. (1996). Contrary-to-duty obligations. Studia Logica, 57, 91–115.

Prakken, H., & Sergot, M. J. (1997). Dyadic deontic logic and contrary-to-duty obligations. In D. N. Nute (Ed.), Defeasible deontic logic: Essays in nonmonotonic normative reasoning (pp. 223–262). (Synthese Library, 263.) Dordrecht: Kluwer. Prakken, H., Reed, C., & Walton, D. N. (2003). Argumentation schemes and generalisations in rea- soning about evidence. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003 (pp. 32–41). New York: ACM Press. Prakken, H., Reed, C., & Walton, D. N. (2004). Argumentation schemes and burden of proof. In F. Grasso, C. Reed, & G. Carenini (Eds.), Proceedings of the fourth workshop on Computational Models of Natural Argument (CMNA IV) at ECAI 2004, Valencia, Spain, pp. 81–86. Prakken, H., & Vreeswijk, G. A. W. (2002). Encoding schemes for a discourse support system for legal argument. In G. Carenini, F. Grasso, & C. Reed (Eds.), Proceedings of the ECAI- 2002 workshop on computational models of natural argument,atECAI 2002, Lyon, France, pp. 31–39. Prendinger, H., & Ishizuka, M. (Eds.). (2004). Life-like characters: Tools, affective functions and applications. Berlin: Springer. Priebe, C. E., Conroy, J. M., Marchette, D. J., & Park, Y. (2005). Scan statistics on Enron graphs. In Proceedings of the SIAM International Conference on Data Mining, SIAM Workshop on Link Analysis, Counterterrorism and Security. Philadelphia, PA: SIAM. Principe, G., & Ceci, S. (2002). I saw it with my own ears: The effect of peer conversations on children’s reports of non-experienced events. Journal of Experimental Child Psychology, 83, 1–25. Principe, J. C., Euliano, N. R., & Lefebvre, W. C. (2000). Neural and adaptive systems: Fundamentals through simulations. New York: Wiley. Propp, V. (1928). Morfologija skazki. In Voprosy poetiki (Vol. 12). Leningrad: Gosudarstvennyi Institut Istorii Iskusstva. English editions: Morphology of the Folktale, edited by S. Pirkova- Jakobson, translated by L. Scott (Indiana University Research Center in Anthropology, Folklore and Linguistics, publication series, 10; Indiana University, Bloomington, IN, 1958). Reprinted in: International Journal of American Linguistics, Vol. 24, No. 4, Part 3 (Bibliographical and Special Series of the American Folklore Society, 9). New English translation: Morphology of the Folktale, 2nd edn., ed. by L.A. Wagner (Austin, TX: University of Texas Press, 1968.)73 Revised Russian edn., Leningrad: Nauka, 1969; whence French edn., Morphologie du conte (collection Poétique; Paris: Éditions du Seuil, 1970). Proth, J.-M., & Xie, X. (1996). Petri nets: A tool for design and management of manufacturing systems. Chichester: Wiley. Provos, N., & Holz, T. (2007). Virtual honeypots: From Botnet tracking to intrusion detection. Reading, MA: Addison-Wesley. Pu, D., & Srihari, S. N. (2010). A probabilistic measure for signature verification based on Bayesian learning. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, August 23–26, 1010. Pühretmair, F., & Wöβ, W. (2001). XML-based integration of GIS and heterogeneous tourism information. In K. Dittrich, A. Geppert, & M. Norrie (Eds.), Advanced information systems engineering (pp. 346–358). Berlin: Springer. Purchase, H. C., Cohen, R. F., & James, M. (1997). An experimental study of the basis for graph drawing algorithms. ACM Journal of Experimental Algorithmics, 2(4), 4-es. Pye, K. (2006). 
Evaluation of the significance of geological and soil trace evidence (abstract). In A. Ruffell (Ed.), Abstract book of geoscientists at crime scenes: First, inaugural meeting of the Geological Society of London, 20 December 2006 (pp. 24–15). Forensic Geoscience Group. http://www.geolsoc.org.uk/pdfs/FGtalks&abs_pro.pdf

73 American authors usually refer to the Austin, Texas editions of Propp’s book.

Pye, K. (2007). Geological and soil evidence: Forensic applications. Boca Raton, FL: CRC Press. Pye, K., & Croft, D. J. (Eds.). (2004). Forensic geoscience: Principles, techniques and applica- tions. (Special Publications, 232.) London: Geological Society. Pyle, D. (1999). Data preparation for data mining. San Francisco: Morgan Kaufmann. Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81–106. Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann. Quinlan, J. R. (1996). Bagging Boosting and C4.5. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI 96), Portland, OR. American Association for Artificial Intelligence, pp. 725–730. Rabinovich, A. (1997). A birdwatcher with an attitude. Jerusalem Post Internet Edition, June 9, 1997. http://www.jpost.com/com/Archive/09.Jun.1997/Features/Article-22.html Racter (1984). The policeman’s beard is half constructed.NewYork:Warner. Radev, D. R., Jing, H., & Budzikowska, M. (2000). Summarization of multiple documents: Clustering, sentence extraction, and evaluation. In Proceedings of the Workshop of Automatic Text Summarization, New Brunswick, NJ: Association for Computational Linguistics, pp. 21–30. Radford, C. (1975). How can we be moved by the fate of Anna Karenina? In Proceedings of the Aristotelian Society, Supplementary volume 49. Radford, C. (1995). Fiction, pity, fear, and jealousy. The Journal of Aesthetics and Art Criticism, 53(1), 71–75. Rahman, H. (2009). Prospects and scopes of data mining applications in society development activ- ities. Chapter 9 In H. Rahman (Ed.), Data mining applications for empowering knowledge societies (pp. 162–213). Hershey, PA: Information Science Reference (IGI Press). Rahwan, I. (2005). Guest editorial: Argumentation in multi-agent systems. (Special issue.) Journal of Autonomous Agents and Multi-Agent Systems, 11, 115–125. Rahwan, I., & McBurney, P. (2007). Guest editors’ introduction: Argumentation technology. (Special issue.) IEEE Intelligent Systems, 22, 21–23. Rahwan, I., & Simari, G. R. (Eds.). (2009). Argumentation in artificial intelligence. Berlin: Springer. Raja, A., & Goel, A. (2007). Introspective self-explanation in analytical agents. In Proceedings of AAMAS 2007 Workshop on Metareasoning in Agent-based Systems, Hawaii, May 2007, pp. 76–91. http://www.viscenter.uncc.edu/TechnicalReports/CVC Rakover, S. S., & Cahlon, B. (1989). To catch a thief with a recognition test: The model and some empirical results. Cognitive Psychology, 21, 423–468. Rakover, S. S., & Cahlon, B. (2001). Face recognition: Cognitive and computational processes. (Advances in Consciousness Research, Series B, Vol. 31.) Amsterdam: Benjamins. Ram, A. (1989). Question-driven understanding: An integrated theory of story understand- ing, memory, and learning. Technical Report YALE/DCS/tr710. New Haven, CT: Computer Science Department, Yale University. Ram, A. (1994). AQUA: Questions that drive the explanation process. In R. C. Schank, A. Kass, & C. K. Riesbeck (Eds.), Inside case-based explanation (pp. 207–261). Hillsdale, NJ: Erlbaum. Ramakrishnan, V., Malgireddy, M., & Srihari, S. N. (2008). Shoe-print extraction from latent images using CRFs. In Computational Forensics: Proceedings of the International Workshop, Washington D.C., 2008. (Lecture Notes in Computer Science, 5158.) Berlin: Springer, pp. 105–112. Ramakrishnan, V., & Srihari, S. N. (2008). Extraction of shoeprint patterns from impression evi- dence using conditional random fields. 
In Proceedings of the International Conference on Pattern Recognition, Tampa, FL, 2008. Ramamoorthi, R., & Hanrahan, P. (2001). An efficient representation for irradiance environment maps. In SIGGRAPH ’01: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM Press, 2001, pp. 497–500. Randell, D. A., & Cohn, A. G. (1992). Exploiting lattices in a theory of space and time. Computers and Mathematics with Applications, 23(6/9), 459–476. Also in: Lehmann, F. (Ed.). Semantic networks. Oxford: Pergamon Press. The book was also published as a special issue of Computers and Mathematics with Applications, 23(6–9).

Raskin, V. (1987). Semantics of lying. In R. Crespo, B. Dotson-Smith, & H. Schultink (Eds.), Aspects of language: Studies in honour of Mario Alinei, Vol. 2: Theoretical and applied semantics (pp. 443–469). Amsterdam: Rodopi. Raskin, V. (1993). Semantics of lying. Dordrecht, The Netherlands: Kluwer. Raskin, J.-F., Tan, Y.-H., & van der Torre, L. W. N. (1996). Modeling deontic states in Petri nets. Discussion Paper 111. Rotterdam, The Netherlands: Erasmus University Research Institute for Decision and Information Systems (EURIDIS). Raskin, V., Atallah, M. J., Hempelmann, C. F., & Mohamed, D. H. (2001). Hybrid data and text system for downgrading sensitive documents. Technical Report, Center for Education and Research in Information Assurance and Security. West Lafayette, IN: Purdue University. https:// www.cerias.purdue.edu/assets/pdf/bibtex_archive/2001-154.pdf Rasmussen, P. C. (1998). Rediscovery of an Indian enigma: The Forest Owlet. Bulletin of the Oriental Bird Club, 27. http://www.orientalbirdclub.org/publications/bullfeats/forowlet.html Rasmussen, P. C., & Ishtiaq, F. (1999). Vocalizations and behaviour of Forest Spotted Owlet Athene blewitti. Forktail, 15, 61–66. http://orientalbirdclub.org/publications/forktail/15pdfs/ Rasmussen-ForestOwlet.pdf Rasmussen, P. C., & King, B. F. (1998). The rediscovery of the Forest Owlet Athene (Heteroglaux) blewitti. Forktail, 14, 53–55. http://www.orientalbirdclub.org/publications/forktail/14pdfs/ King-Owlet.pdf Rasmussen, P. C., & Prys-Jones,ˆ R. P. (2003). History vs mystery: The reliability of museum specimen data. Bulletin of the British Ornithologists’ Club, 123A, 66–94. Ratcliffe, J. H. (2002). Intelligence-led policing and the problems of turning rhetoric into practice. Policing and Society, 12(1), 53–66. Ratcliffe, J. H. (2003). Intelligence-led policing. Trends and Issues in Crime and Criminal Justice, 248,6. Ratcliffe, J. H. (2004). Geocoding crime and a first estimate of an acceptable minimum hit rate. International Journal of Geographical Information Science, 18(1), 61–73. Ratcliffe, J. H. (2005). The effectiveness of police intelligence management: A New Zealand case study. Police Practice and Research, 6(5), 435–451. Ratcliffe, J. H. (2007). Integrated intelligence and crime analysis: Enhanced information management for law enforcement leaders (2nd ed.). Washington, DC: Police Foundation. COPS: Community Oriented Policing Services, U.S. Department of Justice. http://www. policefoundation.org/pdf/integratedanalysis.pdf Ratcliffe, J. H. (2008). Intelligence-led policing. Cullompton: Willan Publishing. Rattani, A., Mehrotra, H., & Gupta, P. (2008). Multimodal biometric systems. In M. Quigley (Ed.), Encyclopedia of information ethics and security (pp. 478–485). Hershey, PA: IGI Global (formerly Idea Group), 2008 (but available from June 2007). Rattner, K. (1988). Convicted but innocent: Wrongful conviction and the criminal justice system. Law and Human Behavior, 12, 283–293. Read, S. (1988). Relevant logic: A philosophical examination of inference. Oxford: Blackwell. Revised edition published online and freely accessible, 2010, at http://www.st-andrews.ac.uk/~ slr/Relevant_Logic.pdf Reddy, W. M. (1997). Against constructionism: The historical ethnography of emotions. Current Anthropology, 38(2), 327–351. Reddy, W. M. (2001). The navigation of feeling: A framework for the history of emotions. Cambridge: Cambridge University Press. Redlich, A., & Goodman, G. (2003). 
Taking responsibility for an act not committed: The effects of age and suggestibility. Law and Human Behavior, 27, 141–156. Redmayne, M. (1999). A likely story! [A review of W. A. Dembski, The Design Inference: Eliminating Chance Through Small Probabilities. Cambridge, England: Cambridge University Press, 1998.] Oxford Journal of Legal Studies, 19, 659–672. Redmayne, M. (2002). Appeals to reason. The Modern Law Review, 65(1), 19–35. Oxford: Blackwell. Redmond, M. A., & Blackburn, C. (2003). Empirical analysis of case-based reasoning and other prediction methods in a social science domain: Repeat criminal victimization. In
K. D. Ashley & D. G. Bridge (Eds.), Case-based reasoning research and development: Proceedings of the 5th International Conference on Case-Based Reasoning (ICCBR 2003), Trondheim, Norway, June 23–26, 2003. (Lecture Notes in Computer Science, 2689.) Berlin: Springer. Redsicker, D. R. (2005). Basic fire and explosion ivestigation. Chapter 24 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press, 2005. Also in 3rd edition, 2009. Reed, C., & Grasso, F. (Eds.). (2007). Recent advances in computational models of natural argument. Special issue in the International Journal of Intelligent Systems, 22. Reed, C., & Norman, T. J. (Eds.). (2003). Argumentation machines: New frontiers in argument and computation. Dordrecht, The Netherlands: Kluwer. Reed, C., & Norman, T. J. (Eds.). (2004). Argumentation machines: New frontiers in argument and computation. (Argumentation Library, 9.) Dordrecht, Netherlands: Kluwer. Reed, C. A., & Rowe, G. W. A. (2001). Araucaria: Software for puzzles in argument diagramming and XML. Technical report, Department of Applied Computing, University of Dundee. (The Araucaria software is in the public domain, and can be downloaded free of charge from the website http://www.computing.dundee.ac.uk/staff/creed/araucaria/). Reed, C. A., & Rowe, G. W. A. (2004). Araucaria: Software for argument analysis, diagramming and representation. International Journal on Artificial Intelligence Tools, 14(3/4), 961–980. Reeves, J. (1991). Computational morality: A process model of belief conflict and resolu- tion for story understanding. Technical Report 910017, Computer Science Department. Los Angeles, CA: University of California, Los Angeles. ftp://ftp.cs.ucla.edu/tech-report/1991- reports/910017.pdf Reichenbach, H. (1949). The theory of probability. Berkeley, CA: University of California Press. Reilly, W. S. N. (1996). Believable social and emotional agents. Technical Report CMU-CS-96- 138. Pittsburgh, PA: School of Computer Science, Carnegie Mellon University. http://www-2. cs.cmu.edu/afs/cs.cmu.edu/project/oz/web/papers/CMU-CS-96-138-1sided.ps Reiner, R. (2000). The politics of the police. Oxford: Oxford University Press. Reis, D., Melo, A., Coelho, A. L., & Furtado, V. (2006). GAPatrol: An evolutionary multia- gent approach for the automatic definition of hotsports and patrol routes. In J. S. Sichman, H. Coelho, & S. O. Rezende (Eds.), Advances in artificial intelligence – IBERAMIA-SBIA 2006, 2nd International Joint Conference, 10th Ibero-American Conference on AI, 18th Brazilian AI Symposium (pp. 118–127). (Lecture Notes in Computer Science, 4140). Berlin: Springer. Ren, A., Stakhanova, N., & Ghorbani, A. A. (2010). An online adaptive approach to alert cor- relation. In C. Kreibich & M. Jahnke (Eds.), Detection of Intrusions and Malware, and Vulnerability Assessment: Proceedings of the seventh international conference (DIMVA 2010), Bonn, Germany, July 8–9, 2010 (pp. 153–172). (Lecture Notes in Computer Science, Vol. 6201.) Berlin: Springer. Rendell, K. W. (1994). Forging history: The detection of fake letters & manuscripts. Norman, OK: University of Oklahoma Press. Resnick, P., Zeckhauser, R., Friedman, E., & Kuwabara. K. (2000). Reputation systems. Communications of the ACM, 43(12), 45–48. http://www.si.umich.edu/~presnick/papers/ cacm00/reputations.pdf Resnick, P., Zeckhauser, R., Swanson, J., & Lockwood, K. (2003). The value of reputation on eBay: A controlled experiment. 
Technical report. Restall, G. (1996). Information flow and relevant logics. In J. Seligman & D. Westerstahl (Eds.), Logic, language and computation (Vol. 1, pp. 463–478). Stanford, CA: Center for the Study of Language and Information (CSLI). Reutenauer, C. (1990). The mathematics of Petri nets. London: Prentice-Hall International. Ribaux, O., & Margot, P. (1999). Inference structures for crime analysis and intelligence: The example of burglary using forensic science data. Forensic Science International, 100, 193–210. Richards, W. D. (1999). MultiNet. [Software tool.] At http://www.sfu.ca/~richards/Multinet/ Richards, W. D., & Rice, R. E. (1981). The NEGOPY network analysis program. Social Networks, 3(3), 215–223.

Rickman, B. (2003). The Dr. K– project. In M. Mateas & P. Sengers (Eds.), Narrative intelligence (pp. 131–142). Amsterdam: Benjamins. Ricordel, P., & Demazeau, Y. (2000). From analysis to deployment: A multi-agent platform sur- vey. In A. Omicini, R. Tolksdorf, & F. Zambonelli (Eds.), Proceedings of the first international workshop on Engineering Societies in the Agents World (ESAW), ECAI2000 (pp. 93–105). Lectures Notes in Artificial Intelligence, Vol. 1972. Berlin: Springer. Riedl, M., Saretto, C. J., & Young, R. M. (2003). Managing interaction between users and agents in a multi-agent storytelling environment. In AAMAS ’03: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems.NewYork:ACM. Riedl, M., & Young, R. M. (2004). An intent-driven planner for multi-agent story generation. In Proceedings of the Third International Conference on Autonomous Agents and Multi-Agent Systems, July 2004. http://liquidnarrative.csc.ncsu.edu/papers.html Riedl, M. O. (2003). Actor conference: Character-focused narrative planning. Liquid Narrative Technical Report TR03-000. Raleigh, NC: North Carolina State University. http:// liquidnarrative.csc.ncsu.edu/pubs/tr03-000.pdf Riedl, M. O. (2004). Narrative generation: Balancing plot and character. PhD Dissertation. Raleigh, NC: Department of Computer Science, North Carolina State University. http://people. ict.usc.edu/~riedl/pubs/dissertation.pdf Riedl, M. O., Rowe, J. P., & Elson, D. K. (2008). Toward intelligent support of authoring Machinima media content: Story and visualization. In INTETAIN ’08: Proceedings of the 2nd International Conference on INtelligent TEchnologies for Interactive EnterTAINment. Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (ICST), Brussels, Belgium. New York: ACM. Riepert, T., Drechsler, T., Schild, H., Nafe, B., & Mattern, R. (1996). Estimation of sex on the basis of radiographs of the calcaneus. Forensic Science International, 77(3), 133–140. Riloff, E., & Thelen, M. (2000). A rule-based question answering system for reading comprehen- sion tests. In Proceedings of the ANLP/NAA CL 2000 Workshop on Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems. Ringle, M. (1979). Philosophy and artificial intelligence. In M. Ringle (Ed.), Philosophical perspectives in artificial intelligence. Atlantic Highlands, NJ: Humanities Press. Ringle, M. (1983). Psychological studies and artificial intelligence. The AI Magazine, 4(1), 37–43. Ripley, S. D. (1976). Reconsideration of Athene blewitti (Hume). Journal of the Bombay Natural History Society, 73, 1–4. Ripley, B. D. (1996). Pattern recognition and neural networks. Cambridge: Cambridge University Press. Risinger, D. M. (2007a). Goodbye to all that, or a fool’s errand, by one of the fools: How I stopped worrying about court responses to handwriting identification (and “forensic science” in general) and learned to love misinterpretations of Kumho Tire v. Carmichael. Tulsa Law Review, 43(2), 447–475. With an Appendix, being Risinger (2007b). Risinger, D. M. (2007b). Appendix: Cases involving the reliability of handwriting identifica- tion expertise since the decision in Daubert. Tulsa Law Review, 43(2), 477–596. http://www. bioforensics.com/sequential_unmasking/Risinger-Appendix.pdf Risinger, D. M., Saks, M. J., Thompson, W. C., & Rosenthal, R. (2002). The Daubert/Kumho implications of observer effects in forensic science: Hidden problems of expectation and suggestion. 
California Law Review, 90(1), 1–56. http://www.bioforensics.com/sequential_unmasking/observer_effects.pdf Rissland, E. L., & Friedman, M. T. (1995). Detecting change in legal concepts. In Proceedings of the Fifth International Conference on Artificial Intelligence and Law (ICAIL’95). New York: ACM Press, pp. 127–136. Rissland, E. L., & Skalak, D. B. (1991). CABARET: Statutory interpretation in a hybrid architecture. International Journal of Man-Machine Studies, 34, 839–887. Rissland, E. L., Skalak, D. B., & Friedman, M. T. (1996). BankXX: Supporting legal arguments through heuristic retrieval. Artificial Intelligence and Law, 4(1), 1–71.

Ritchie, G. (2004). The linguistic analysis of jokes. London: Routledge. Ritterband, P., & Wechsler, H. S. (1994). Jewish learning in American universities: The first century. Bloomington, IN: Indiana University Press. Roberts, A. (2008). Eyewitness identification evidence: Procedural developments and the ends of adjudicative accuracy. International Commentary on Evidence, 6(2), Article 3. http://www. bepress.com/ice/vol6/iss2/art3 Roberts, D. L., Elphick, C. S., & Reed, J. M. (2009). Identifying anomalous reports of putatively extinct species and why it matters. Conservation Biology, online publica- tion at doi://10.1111/j.1523-1739.2009.01292.x The paper was then published in print, in Conservation Biology, 24(1), 189–196, in Feb. 2010. Roberts, L. (1991). Fight erupts over DNA fingerprinting. Science, 254, 1721–1723. Robertson, B., & Vignaux, G. A. [T.] (1995). Interpreting evidence: Evaluating forensic science in the courtroom. Chichester: Wiley. Rogers, M. (2005). Anti-forensics. http://www.cyberforensics.purdue.edu/docs/Lockheed.ppt Rokach, L., & Maimon, O. Z. (2008). Data mining with decision trees: Theory and applications. (Series in Machine Perception and Artificial Intelligence, Vol. 69.) Singapore: World Scientific. Roscoe, A. W. (1998). The theory and practice of concurrency. (Prentice Hall Series in Computer Science.) Hemel Hempstead, Hertfordshire: Prentice Hall. Roscoe, B. A., & Hopke, P. K. (1981). Comparison of weighted and unweighted target transfor- mation rotations in factor analysis. Computers and Chemistry, 5, 1–7. Rosenberg, N. (1994). Hollywood on trials: Courts and films, 1930–1960. Law and History Review, 12(2), 342–367. Rosenberg, S. T. (1977). Frame-based text processing. Technical Report AIM-431. Cambridge, MA: Artificial Intelligence Laboratory, Massachusetts Institute of Technology. ftp:// publications.ai.mit.edu/ai-publications/pdf/AIM-431.pdf Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organi- zation in the brain. Psychological Review, 65, 386–408. Rosoni, I. (1995). Quae singula non prosunt collecta iuvant: la teoria della prova indiziaria nell’età medievale e moderna. Milan, Italy: Giuffrè. [Reviewed in Nissan (2001b).] Ross, A., & Jain, A. K. (2003). Information fusion in biometrics. Pattern Recognition Letters, 24(13), 2115–2125. Ross, A. A. (2003). Information fusion in fingerprint authentication. Ph.D. Dissertation. Department of Computer Science & Engineenring, Michigan State University. http://www.csee. wvu.edu/~ross/pubs/RossPhDThesis_03.pdf Ross, D. F., Read, J. D., & Toglia, M. P. (Eds.). (1994). Adult eyewitness testimony: Current trends and developments. Cambridge: Cambridge University Press. Ross, S., Spendlove, D., Bolliger, S., Christe, A., Oesterhelweg, L., Grabherr, S., et al. (2008). Postmortem whole-body CT angiography: Evaluation of two contrast media solutions. AJR: American Journal of Roentgenology, 190(5), 1380–1389. Ross, T. (1995). Fuzzy logic with engineering applications. New York: McGraw-Hill. Rossiter, B. N., Sillitoe, T. J., & Heather, M. A. (1993). Models for legal documentation: Using formal methods for quality assurance in hypertext systems. (Technical Report Series, 464.) Newcastle upon Tyne, England: University of Newcastle upon Tyne, Computing Science. Rousseau, D. (1995). Modelisation et simulation de conversations dans un univers multi-agent. Ph.D. Dissertation. 
Technical Report #993, Montreal, Canada: Department of Computer Science and Operational Research, University of Montreal. Rousseau, D. (1996). Personality in synthetic agents. Technical Report KSL-96-21, Knowledge Systems Laboratory, Stanford University. Rousseau, D., Moulin, B., & Lapalme, G. (1996). Interpreting communicative acts and building a conversational model. Journal of Natural Language Engineering, 2(3), 253–276. Rousseeuw, P. J., & Hubert, M. (2011). Robust statistics for outlier detection. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(1), 73–79. doi://10.1002/widm.2 Routley, R., Meyer, R. K., Plumwood, V., & Brady, R. (Eds.). (1983). Relevant logic and its rivals, I. Atascadero, CA: Ridgeview. Vol. 2 is Brady (2003).

Rubinstein, A. (1998). Modelling bounded rationality. Cambridge, MA: MIT Press. Rubinstein, R. (1997). Optimization of computer simulation models with rare events. European Journal of Operations Research, 99, 89–112. Rudman, J. (1997). The state of authorship attribution studies: Some problems and solutions. Computers and the Humanities, 31(4), 351–365. Dordrecht: Kluwer. Ruffell, A. (Ed.). (2006) Abstract book of geoscientists at crime scenes: First, inaugural meeting of the Geological Society of London, Forensic Geoscience Group, London, 20 December 2006. http://www.geolsoc.org.uk/pdfs/FGtalks&abs_pro.pdf Rumble, W. E., Jr. (1965). Legal realism, sociological jurisprudence and Mr. Justice Holmes. Journal of the History of Ideas, 26(4), 547–566. Rumelhart, D. E. (1975). Notes on a schema for stories. In D. G. Bobrow & A. Collins (Eds.), Representation and understanding: studies in cognitive science (pp. 185–210). New York: Academic. Rumelhart, D. E. (1977a). Toward an interactive model of reading. In S. Domic (Ed.), Attention and performance VI. Hillsdale NJ: Lawrence Erlbaum Associates. Rumelhart, D. E. (1977b). Understanding and summarizing brief stories. In D. La Berge & S. J. Samuels (Eds.), Basic processes in reading: Perception and comprehension. Hillsdale, NJ: Lawrence Erlbaum Associates. Rumelhart, D. E. (1980a). Schemata: The building blocks of cognition. In R. J. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 38–58). Hillsdale, NJ: Erlbaum. Rumelhart, D. E. (1980b). On evaluating story grammars. Cognitive Science, 4, 313–316. Rumelhart, D. E., Hinton, G. E., & Williams, R. (1986a). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press. Rumelhart, D. E., Hinton, G. E., & Williams, R. (1986b, October 9). Learning representations by back-propagating errors. Letters to Nature (Nature), 323, 533–536. Rumelhart, D. E., & Ortony, A. (1977). The representation of knowledge in memory. In R. C. Anderson, R. J. Spiro, & W. E. Montague (Eds.), Schooling and the acquisition of knowledge. Hillsdale, NJ: Lawrence Erlbaum Associates. Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986c). Schemata and sequential thought processes in PDP models. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2, pp. 7–57). Cambridge, MA: MIT Press. Russano, M. B., Meissner, C. A., Narchet, F. M., & Kassin, S. M. (2005). Investigating true and false confessions within a novel experimental paradigm. Psychological Science, 16, 481–486. Ryan, M.-L. (2005). Narrative. In D. Herman, M. Jahn, & M.-L. Ryan (Eds.), Routledge encyclopedia of narrative theory (pp. 344–348). London: Routledge, 2005 (hbk), 2008 (pbk). Ryan, P. Y. A, Schneider, S. A., Goldsmith, M., Lowe, G., & Roscoe, A. W. (2000). Modelling and analysis of security protocols. Harlow: Pearson Education. Sabater, J., & Sierra, C. (2005). Review on computational trust and reputation models. Artificial Intelligence Review, 24, 33–60. Saferstein, R. E. (1995). Criminalistics: An introduction to forensic science (5th ed.). Englewood Cliffs, NJ: Prentice-Hall. 6th edn., 1998. Sainsbury, R. M. (1990). Concepts without boundaries. Inaugural Lecture, King’s College London. Reprinted in: R. Keefe & P. Smith (Eds.), Vagueness: A reader. Cambridge, MA: MIT Press, 1996. 
Saks, M. J., & Koehler, J. J. (2008). The individualization fallacy in forensic science evidence. Vanderbilt Law Review, 61, 199–219. Sakurai, Y., & Yokoo, M. (2003). A false-name-proof double auction protocol for arbitrary evaluation values. In AAMAS 2003: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems. Salmon, W. C. (1967). The foundations of scientific inference. Pittsburgh, PA: University of Pittsburgh Press.

Salton, G. (1989). Automatic text processing: The transformation, analysis, and retrieval of information by computer. Reading, MA: Addison-Wesley Publishing Company. Salton, G., & Buckley, C. (1988). Term-weighting approaches in automatic text retrieval. Information Processing and Management, 24(5), 513–523. Sammes, T., & Jenkinson, B. (2000). Forensic computing: A practitioner’s guide. London & Heidelberg: Springer. Sanders, W. B. (1977). Detective work: A study of criminal investigation. New York: Free Press. Santos, E., Jr., & Shimony, S. E. (1994). Belief updating by enumerating high-probability independence-based assignments. In R. Lopez de Mántaras & D. Poole (Eds.), Uncertainty in artificial intelligence: Proceedings of the tenth conference (pp. 506–513). San Mateo, CA: Morgan Kaufmann. Santtila, P., Alkiora, P., Ekholm, M., & Niemi, P. (1999). False confessions to robbery: The role of suggestibility, anxiety, memory disturbance and withdrawal symptoms. The Journal of Forensic Psychiatry, 10, 399–415. Sappington, D. (1984). Incentive contracting with asymmetric and imperfect precontractual knowledge. Journal of Economic Theory, 34, 52–70. Saretto, C. J. (2001). Mediating user interaction in narrative-structured virtual environments. M.Sc. thesis (advisor: R. M. Young). Raleigh, NC: Computer Science, North Carolina State University. http://liquidnarrative.csc.ncsu.edu/papers.html Sartor, G. (1994). A formal logic for legal argumentation. Ratio Juris, 7, 212–226. Sartwell, C. (1992). Why knowledge is merely true belief. Journal of Philosophy, 89, 167–180. Sartwell, C. (1995). Radical externalism concerning experience. Philosophical Studies, 78, 55–70. Sattler, U. (2003). Description logics for ontologies. In Proceedings of the International Conference on Conceptual Structures (ICCS 2003). (Lecture Notes in AI, Vol. 2746.) Berlin: Springer. Savage, L. J. (1962). The foundations of statistical inference. London: Methuen and Co. Ltd. Sawday, J. (1996). The body emblazoned: Dissection and the human body in renaissance culture. London: Routledge. Sawyer, A. G. (1981). Repetition, cognitive responses and persuasion. In R. E. Petty, T. M. Ostrom, & T. C. Brock (Eds.), Cognitive responses in persuasion (pp. 237–261). Hillsdale, NJ: Erlbaum. Sbriccoli, M. (1991). “Tormentum id est torquere mentem”. Processo inquisitorio e interrogatorio per tortura nell’Italia comunale. In J.-C. Maire Vigeur & A. Paravicini Bagliani (Eds.), La parola all’accusato (Prisma, 139.) (pp. 17–33). Palermo: Sellerio. Scampicchio, M., Ballabio, D., Arecchi, A., Cosio, S. M., & Mannino, S. (2008). Amperometric electronic tongue for food analysis. Microchimica Acta, 163, 11–21. Schafer, B., & Keppens, J. (2007). Legal LEGO: Model based computer assisted teaching in evi- dence courses. Journal of Information, Law & Technology, Special Issue on Law, Education and Technology, http://www2.warwick.ac.uk/fac/soc/law/elj/jilt/2007_1/schafer_keppens/schafer_ keppens.pdf Schank, P., & Ranney, M. (1995). Improved reasoning with Convince Me. In CHI ’95: Conference Companion on Human Factors in Computing Systems. New York: ACM Press, pp. 276–277. Schank, R., & Abelson, R. (1977). Scripts, plans, goals and understanding. Hillsdale, NJ: Lawrence Erlbaum. Schank, R. C., Goldman, N., Rieger, C., & Riesbeck, C. K. (1973). MARGIE: Memory, analysis, response generation and inference in English. In Proceedings of the Third International Joint Conference on Artificial Intelligence, pp. 255–261. Schank, R. 
C., Goldman, N., Rieger, C., & Riesbeck, C. K. (1975). Inference and paraphrase by computer. Journal of the ACM, 22(3), 309–328. Schank, R. C., Kass, A., & Riesbeck, C. K. (Eds.). (1994). Inside case-based explanation. Hillsdale, NJ: Erlbaum. Schank, R. G. (1972). Conceptual dependency: A theory of natural language understanding. Cognitive Psychology, 3, 552–631.

Schank, R. G. (1986). Explanation patterns: Understanding mechanically and creatively. Hillsdale, NJ: Lawrence Erlbaum Associates. Schank, R. G., & Riesbeck, C. K. (Eds.). (1981). Inside computer understanding: Five programs plus miniatures. Hillsdale, NJ: Lawrence Erlbaum Associates. Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana, IL: University of Illinois Press. Schartum, D. W. (1994). Dirt in the machinery of government? Legal challenges connected to computerized case processing in public administration. International Journal of Law and Information Technology, 2, 327–354. Schild, U. J. (1995). Intelligent computer systems for criminal sentencing. In The Fifth International Conference on Artificial Intelligence and Law: Proceedings of the Conference, Washington, DC. New York: ACM Press, pp. 229–239. Schild, U. J. (1998). Criminal sentencing and intelligent decision support. Artificial Intelligence and Law, 6(2–4), 151–202. Schild, U. J., & Kerner, Y. (1994). Multiple explanation patterns. In S. Wess, K.-D. Althoff, & M. Richter (Eds.), Topics in case-based reasoning, Proceedings of the First European Workshop, EWCBR 93 (pp. 353–364). (Lecture Notes in Artificial Intelligence, 837.) Berlin: Springer. Shirley, S. G., & Persaud, K. C. (1990). The biochemistry of vertebrate olfaction and taste. Seminars Neuroscience, 2, 59–68. Schlesinger, P., & Tumber, H. (1994). Reporting crime: The media politics of criminal justice. Oxford: Clarendon Press. Schmid, N (2009). Handbuch des Schweizerischen Strafprozessrechts.Zürich&St.Gallen, Switzerland: Dike Verlag. Schneider, S. A. (1999). Concurrent and real time systems: The CSP approach. Chichester: Wiley. Schneider, S. A. (2001). The B-method: An introduction. Palgrave Cornerstones in Computer Science. London: Palgrave Macmillan. Schneider, V., Nagano, T., & Geserick, G. (Eds). (1994). Advances in legal medicine. Special issue, Forensic Science International, 69(3). Amsterdam: Elsevier. Schoenlein, R. W., Chattopadhyay, S., Chong, H. H. W., Glover, T. E., Heimann, P. A., Shank, C. V., et al. (2000). Generation of femtosecond pulses of synchrotron radiation. Science, 287, 2237. Schonlau, M., DuMouchel, W., Ju, W., Karr, A. F., Theus, M., & Vardi, Y. (2001). Computer intrusion: Detecting masquerades. Statistical Science, 16(1), 58–74. Schooler, J. W., Gerhard, D., & Loftus, E. F. (1986). Qualities of the unreal. Journal of Experimental Psychology: Learning, Memory and Cognition, 12, 171–181. Schraagen, J. M., & Leijenhorst, H. (2001). Searching for evidence: Knowledge and search strate- gies used by forensic scientists. In E. Salas & G. Klein (Eds.), Linking expertise and naturalistic decision making (pp. 263–274). Mahwah, NJ: LEA. Schreiber, F. A. (1991). State and time granularity in systems description: An example. IEEE Real-Time Systems Newsletter, 7(3), 12–17. http://home.dei.polimi.it/schreibe/papers/states2. ps (sic:/schreibe/not/schreiber/) Schreiber, F. A. (1994). Is time a real time? An overview of time ontology in informatics. In W. A. Halang & A. D. Stoyenko (Eds.), Real time computing (pp. 283–307). (NATO ASI, Vol. F 127.) Berlin: Springer. Schreiber, T. J., Akkermanis, A. M., Anjewierden, A. A., de Hoog, R., Shadbolt, A., Van de Velde, W., et al. (1999). Knowledge engineering and management: The common Kads methodology. Cambridge, MA: MIT Press. Schreiber, F. A., Belussi, A., De Antonellis, V., Fugini, M. G., Pozzi, G., Tanca, L., et al. (2003). 
The design of the DEAFIN web-geographical information system: An experience in the integration of territorial reclamation support services. In A. Dahanayake & W. Gerhardt (Eds.), Web-enabled systems integration: Practice and challenges (pp. 142–168). Hershey, PA: Idea Group Publishing.

Schroeder, J., Xu, J., Chen, H., & Chau, M. (2007). Automated criminal link analysis based on domain knowledge. Journal of the American Society for Information Science and Technology, 58(6), 842–855. doi://10.1002/asi.v58:6 Schubert, L. K., & Hwang, C. H. (1989). An episodic knowledge representation for narra- tive texts. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning. San Mateo, CA: Morgan Kaufmann, pp. 444–458. Schubert, L. K., & Hwang, C. H. (2000). Episodic logic meets Little Red Riding Hood: A compre- hensive natural representation for language understanding. In L. M. Iwanska & S. C. Shapiro (Eds.), Natural language processing and knowledge representation (pp. 111–174). Cambridge, MA: MIT Press. http://www.cs.rochester.edu/~schubert/papers/el-meets-lrrh.ps Schultz, M., Eskin, E., Zadok, E., & Stolfo, S. (2001). Data mining methods for detection of new malicious executables. At the 2001 IEEE Symposium on Security and Privacy, pp. 38–49. Schum, D. A. (1986). Probability and the processes of discovery, proof, and choice. Boston University Law Review, 66, 825–876. Schum, D. A. (1987). Evidence and inference for the intelligence analyst (2 Vols.). Lanham, MD: University Press of America. Schum, D. A. (1989). Knowledge, credibility, and probability. Journal of Behavioural Decision Making, 2, 39–62. Schum, D. A. (1993). Argument structuring and evidence evaluation. In R. Hastie (Ed.), Inside the Juror: The psychology of Juror decision making (pp. 175–191). Cambridge, England: Cambridge University Press. Schum, D. A. (1994). The evidential foundations of probabilistic reasoning.(WileySeriesin Systems Engineering.) New York: Wiley. Reprinted, Evanston, IL: Northwestern University Press, 2001. Schum, D. (2001). Evidence marshaling for imaginative fact investigation. Artificial Intelligence and Law, 9(2/3), 165–188. Schum, D. A., & Martin, A. W. (1982). Formal and empirical research on cascaded inference in jurisprudence. Law and Society Review, 17, 105–151. Schum, D., & Tillers, P. (1989). Marshalling evidence throughout the process of fact investigation: A simulation. Report Nos. 89-01 through 89-04, supported by NSF Grant No. SES 8704377. New York: Cardozo School of Law. Schum, D., & Tillers, P. (1990a). A technical note on computer-assisted Wigmorean argument structuring. Report No. 90-01 (Jan. 15, 1990), supported by NSF Grant No. SES 8704377. New York: Cardozo School of Law. Schum, D., & Tillers, P. (1990b). Marshalling thought and evidence about witness credibility (March 15, 1990), supported by NSF Grants Nos. SES 8704377 and 9007693.NewYork: Cardozo School of Law. Schum, D., & Tillers, P. (1991). Marshalling evidence for choice and inference in litigation. Cardozo Law Review, 13, 657–704. Also Report 91–03 (March 18, 1991), supported by NSF Grant Nos. SES 8704377 and 9007693. New York: Cardozo School of Law. Schunn, C. D., Okada, T., & Crowley, K. (1995). Is cognitive science truly interdisciplinary? The case of interdisciplinary collaborations. In J. D. Moore & J. F. Lehman (Eds.), Proceedings of the 17th annual conference of the cognitive science society (pp. 100–105). Mahwa, NJ: Elbaum. Schwartz, A., & Scott, R. E. (2003). Contract theory and the limits of contract law. Yale Law Journal, 113, 541–619. Schweighofer, E., & Merkl, D. (1999). A learning technique for legal document analysis. In Proceedings of the Seventh International Conference on Artificial Intelligence and Law (ICAIL’99), Oslo, Norway, 14–17 June 1999. 
New York: ACM Press, pp. 156–163. Schwikkard, P. J. (2008). The muddle of silence. International Commentary on Evidence, 6(2), Article 4. http://www.bepress.com/ice/vol6/iss2/art4 Scientific Working Group on Friction Ridge Analysis Study and Technology. (2002). Friction ridge examination methodology for latent print examiners. http://www.swgfast.org/

Scientific Working Group on Friction Ridge Analysis Study and Technology. (2003). Standards for conclusions. http://www.swgfast.org/ Scott, J. (2003). How to write for animation. Woodstock, NY and New York: The Overlook Press. Scott, J. (2006). Social network analysis: A handbook. London: Sage. [Previously: 2nd edition, 2000 (also cited).] Scott, M. S. (2000). Problem-oriented policing: Reflections on the first 20 years. Washington, DC: Office of Community Oriented Policing Services [COPS Office], U.S. Department of Justice. http://www.popcenter.org/Library/RecommendedReadings/Reflections.pdf Seabrook, J. (2006). The Meinertzhagen Ruse. New York: The New Yorker. Searle, J. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press. Sebastiani, F. (2002). Machine learning in automated text categorization. ACM Computing Surveys, 34(1), 1–47. Sebeok, T. A., & Umiker-Sebeok, J. (1979). “You know my method”: A juxtaposition of Sherlock Holmes and C. S. Peirce. In N. Baron & N. Bhattacharya (Eds.), Methodology in semiotics, special issue of Semiotica, 26(3/4), 203–250. Sebeok, T. A., & Umiker-Sebeok, J. (1980). “You know my method”: A juxtaposition of Sherlock Holmes and C. S. Peirce. Bloomington, IN: Gaslight Publications. Sebeok, T. A., & Umiker-Sebeok, J. (1981). Sherlock Holmes no Kogoron: C. S. Peirce to Holmes no Hikakukenkyn, translated into Japanese by T. Tomiyama. Tokyo: Iwanami Shoten. Sebeok, T. A., & Umiker-Sebeok, J. (1982a). “Du kennst meine Methode”: Charles S. Peirce und Sherlock Holmes. Frankfurt am Main, Germany: Suhrkamp. Sebeok, T. A., & Umiker-Sebeok, J. (1982b, March). Sherlock Holmes e le abduzioni. Alfabeta, 34, 15–17. Sebeok, T. A., & Umiker-Sebeok, J. (1983). “Voi conoscete il mio metodo”: un confronto fra Charles S. Peirce e Sherlock Holmes. In T. A. Sebeok & U. Eco (Eds.), The sign of three: Holmes, Dupin, Peirce (pp. 11–54). Bloomington, IN: Indiana University Press. Sebeok, T. A., & Umiker-Sebeok, J. (1989). Peirce and Holmes [In Chinese]. Beijin: Chinese Academy of Social Sciences. Sebeok, T. A., & Umiker-Sebeok, J. (1994). Din Nou Pe Urmele Lui Sherlock Holmes. Cluj: Editura Echinox. [Romanian translation of “You Know My Method”: A Juxtaposition of Sherlock Holmes and C.S. Peirce.] Sebok, A. (1998). Legal positivism in American jurisprudence. Cambridge: Cambridge University Press. Segal, U., & Stein, A. (2006). Ambiguity aversion and the criminal process. Notre Dame Law Review, 81(4), 1495–1551. Segal, M., & Xiao, Y. (2011). Multivariate random forests. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(1), 80–87. doi://10.1002/widm.12 Seidmann, D. J., & Stein, A. (2000). The right to silence helps the innocent: A game-theoretic analysis of the Fifth Amendment privilege. Harvard Law Review, 114, 430–510. Selbak, J. (1994). Digital litigation: The prejudicial effects of computer-generated animation in the courtroom. High Technology Law Journal, 9, 337. Sellier, K. G., & Kneubuehl, B. P. (1994). Wound ballistics and the scientific background. Amsterdam: Elsevier. Seltzer, M. (2006). True crime: Observations on violence and modernity. London: Routledge. Sergot, M. (2005). Modelling unreliable and untrustworthy agent behaviour. In B. Dunin Keplicz, A. Jankowski, A. Skowron, & M. Szczuka (Eds.), International workshop on monitor- ing, security, and rescue techniques in multiagent systems, Plock, Poland, 7–9 June 2004 (pp. 161–177). Berlin: Springer. Seto, Y. (2002). 
Development of personal authentication systems using fingerprint with smart cards and digital signature technologies. In Proceedings of the Seventh International Conference on Control, Automation, Robotics and Vision (ICARCV 2002), Singapore, 2–5 December 2002. IEEE, Vol. 2, pp. 996–1001.

Seto, Y. (2009). Retina recognition. In S. Z. Li & A. K. Jain (Eds.), Encyclopedia of biometrics (pp. 1128–1130). New York: Springer. Sgouros, N. M. (1999). Dynamic generation, management and resolution of interactive plots. Artificial Intelligence, 107(1), 29–62. Shafer, G. (1976). A mathematical theory of evidence. Princeton, NJ: Princeton University Press. Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana, IL: University of Illinois Press. Shapira, R. (1999). Fuzzy measurements in the Mishnah and Talmud. Artificial Intelligence and Law, 7(2/3), 273–288. Shapira, R. A. (2002). Saving Desdemona. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 419–435). Studies in Fuzziness and Soft Computing, Vol. 94. Heidelberg: Physical-Verlag. Shapiro, S. C., & Rapaport, W. J. (1995). An introduction to a computational reader of narratives. InJ.F.Duchan,G.A.Bruder,&L.E.Hewitt(Eds.),Deixis in narrative (pp. 79–105). Hillsdale, NJ: Erlbaum. Sharkey, N. (Ed.). (1992). Connectionist natural language processing. Dordrecht, The Netherlands: Kluwer, & Oxford: Intellect. Shebelsky, R. C. (1991). [Joke under the rubric ‘Laughter, the Best Medicine’.] Reader’s Digest (U.S. edition), November 1991, p. 103. Sheptycki, J. (2003). Review of the influence of strategic intelligence on organised crime policy and practice. London: Home Office, Police and Reducing Crime Unit. Sheptycki, J. (2004). Organizational pathologies in police intelligence systems: Some contri- butions to the lexicon of intelligence-led policing. European Journal of Criminology, 1(3), 307–332. Shereshevsky, B.-Z. (1960/61). Hoda’ah (Hoda’at beit-din). A. Lefi din-Torah.[inHebrew: ‘Confession: In Jewish law’]. S.v. Hoda’ah [‘Confession’], by B. Z. Shereshevsky & M. Ben-Porat. Encyclopaedia Hebraica, 13, cols. 665–668. Shetty, J., & Adibi, J. (2004). The Enron email dataset database schema and brief statistical report. Los Angeles, CA: University of Southern California, Information Sciences Institute. http:// www.isi.edu/adibi/Enron/Enron_Dataset_Report.pdf Shim, C.-B., & Shin, Y.-W. (2005). Spatio-temporal modeling of moving objects for content- and semantic-based retrieval in video data. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 9th international con- ference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part IV (pp. 343–351). (Lecture Notes in Computer Science, Vol. 3684.) Berlin: Springer. Shimony, S. E. (1993). The role of relevance in explanation. I: Irrelevance as statistical indepen- dence. International Journal of Approximate Reasoning, 8(4), 281–324. Shimony, S. E., & Charniak, E. (1990). A new algorithm for finding MAP assignments to belief networks. In P. P. Bonissone, M. Henrion, L. N. Kanal, & J. F. Lemmer (Eds.), Uncertainty in artificial intelligence: Proceedings of the sixth conference (pp. 185–193). Amsterdam: North- Holland. Shimony, S. E., & Domshlak, C. (2003). Complexity of probabilistic reasoning in directed-path singly connected Bayes networks. Artificial Intelligence, 151, 213–225. Shimony, S. E., & Nissan, E. (2001). Kappa calculus and evidential strength: A note on Åqvist’s logical theory of legal evidence. Artificial Intelligence and Law, 9(2/3), 153–163. Shiraev, E., & Levy, D. (2007). Cross-cultural psychology: Critical thinking and contemporary applications (3rd ed.). Boston: Allyn and Bacon. Shirani, B. (2002). 
Anti-forensics. High Technology Crime Investigation Association. http://www.aversion.net/presentations/HTCIA-02/anti-forensics.ppt Shoham, Y., & Leyton-Brown, K. (2009). Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge: Cambridge University Press. Shoham, Y., & McDermott, D. (1988). Problems in formal temporal reasoning. Artificial Intelligence, 36(1), 49–90.

Shortliffe, E. H. (1976). Computer based medical consultations: MYCIN. New York: Elsevier. Shortliffe, E. H., & Buchanan, B. G. (1975). A method of inexact reasoning, Mathematical Biosciences, 23, 351–379. Shuirman, G., & Slosson, J. E. (1992). Forensic engineering: Environmental case histories for civil engineers and geologists. San Diego, CA: Academic. Shurmer, H. V. (1990). An electronic nose: A sensitive and discrimination substitute for a mammalian olfactory system. International Electrical Engineering Proceedings, 137, 197–204. Shurmer, H. V., Gardner, J. W., & Chan, H. T. (1989). The application of discrimination techniques to alcohols and tobacco using tin oxide sensors. Sensors & Actuators, 18, 359–369. Shuy, R. W. (1993). Language crimes: The use and abuse of language evidence in the courtroom. Oxford: Blackwell. Shyu, C. H., Fu, C.-M., Cheng, T., & Lee, C. H. (1989). A heuristic evidential reasoning model. In A. A. Martino (Ed.), Pre-proceedings of the third international conference on “Logica, Informatica, Diritto: Legal Expert Systems”, Florence, 1989 (2 vols. + Appendix) (Vol. 1, pp. 661–670). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche. Siddiqui, M. A. (2008). Data mining methods for malware detection. Ph.D. dissertation in Modeling and Simulation (supervised by M.C. Wang). Orlando, FL: College of Sciences, University of Central Florida. http://etd.fcla.edu/CF/CFE0002303/Siddiqui_Muazzam_A_ 200808_PhD.pdf Siegel, J. A., Knupfer, G. C., & Saukko, P. J. (Ed.). (2000). Encyclopedia of forensic sciences (3 Vols.). London: Academic. Sigmund, W. (Ed.). (1995). Environmental poisoning and the law: Proceedings of the conference, 17 September 1994, Kings College, London. London: South West Environmental Protection Agency & Environmental Law Foundation, 1995. Sigurdsson, J. F., & Gudjonsson, G. H. (1996). The psychological characteristics of false con- fessors: A study among Icelandic prison inmates and juvenile offenders. Personality and Individual Differences, 20, 321–329. Sigurdsson, J. F., & Gudjonsson, G. H. (2001). False confessions: The relative importance of psychological, criminological and substance abuse variables. Psychology, Crime and Law, 7, 275–289. Silberman, C. E. (1978). Criminal violence, criminal justice. New York: Random House. Simhon, D., Nissan, E., & Zigdon, N. (1992). Resource evaluation and counterplanning with multiple-layer rulesets, in the BASKETBALL expert system. In G. Tenenbaum, Ts. Raz- Liebermann, & Tz. Artzi (Eds.), Proceedings of the international conference on computer applications in sport and physical education, Natania, Israel (pp. 60–80). Natania: The Wingate Institute. Simon, E., & Gaes, G. (1989). ASSYST: Computer support for guideline sentencing. In The Second International Conference on Artifical Intelligence and Law: Proceedings of the Conference, Vancouver, 1989, pp. 195–200. Simon, E., Gaes, G., & Rhodes, W. (1991). ASSYST: The design and implementation of computer assisted sentencing. Federal Probation, 55, 46–55. Sinai, J. (2006). Combating terrorism insurgency resolution software. In Proceedings of the IEEE International Conference on Intelligence and Security Informatics (ISI 2006), pp. 401–406. Singh, M. (1999). A social semantics for agent communication languages. In Proceedings of the IJCAI’99 Workshop on Agent Communication Languages, Stockholm, Sweden, pp. 75–88. http://ijcai.org/search.php Singh, M., & Huhns, M. (2005). 
Service-oriented computing: Semantics, processes, agents. New York: Wiley. Siroky, D. S. (2009). Navigating random forests and related advances in algorithmic modeling. Statistics Surveys, 3, 147–163. Accessible online by searching the journal’s site at http://www.i-journals.org/ss/search.php

Skabar, A., Stranieri, A., & Zeleznikow, J. (1997). Using argumentation for the decomposition and classification of tasks for hybrid system development. In N. Kasabov, R. Kozma, K. Ko, R. O’Shea, G. Coghill, & T. Gedeon (Eds.), Progress in connectionist based information sys- tems (pp. 814–818). Proceedings of the 1997 international conference on neural information processing and intelligent information systems, Singapore. Berlin: Springer. Skagerberg, E. M. (2007). Co-witness feedback in line-ups. Applied Cognitive Psychology, 21, 489–497. Skalak, D. B., & Rissland, E. L. (1992). Arguments and cases: An inevitable intertwining. Artificial Intelligence and Law, 1(1), 3–44. Skulsky, H. (1980). On being moved by fiction. The Journal of Aesthetics and Art Criticism, 39, 5–14. Smith, A. S. (2006). Geomaterials from civil to criminal law; One small step for the geoscientist (abstract). In A. Ruffell, (Ed.), Abstract book of geoscientists at crime scenes: First, inaugural meeting of the Geological Society of London, 20 December 2006 (p.12). London: Forensic Geoscience Group. http://www.geolsoc.org.uk/pdfs/FGtalks&abs_pro.pdf Smith, H. E. (2003). The language of property: Form, context, and audience. Stanford Law Review, 55, 1105–1191. Smith, J. C., Gelbart, D., MacCrimmon, K., Atherton, B., McClean, J., Shinehoft, M., et al. (1995). Artificial intelligence and legal discourse: The Flexlaw legal text management system. Artificial Intelligence and Law, 3, 55–95. Smith, J. M. (1992). SGML and related standards: Document description and processing lan- guages. Ellis Horwood Series in Computers and Their Applications. New York & London: Ellis Horwood. Smith, P. A., Baber, C., Hunter, J., & Butler, M. (2008). Measuring team skills in crime scene examination: Exploring ad hoc teams. Ergonomics, 51, 1463–1488. Smith, R. G. (1977). The CONTRACT NET: A formalism for the control of distributed problem solving. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence (IJCAI-77), Cambridge, MA. http://ijcai.org/search.php Smith, R. G. (1980a). The contract net protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, C-29(12), 1104–1113. Smith, R. G. (1980b). A framework for distributed problem solving. Ph.D. Dissertation, University of Stanford. Available from UMI Research Press. Smith, S., & Bates, J. (1989). Towards a theory of narrative for interactive fiction. Technical Report CMU-CS-89-121. Pittsburgh, PA: School of Computer Science, Carnegie Mellon University. http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/oz/web/papers/CMU-CS-89-121.ps Smith, T. C., & Witten, I. H. (1991). A planning mechanism for generating story text. Literary and Linguistic Computing, 6(2), 119–126. Also: Technical Report 1991-431-15). Calgary, Canada: Department of Computer Science, University of Calgary. http://pharos.cpsc.ucalgary.ca/Dienst/ Repository/2.0/Body/ncstrl.ucalgary_cs/1991-431-15/pdf Smith, T. F., & Waterman, M. S. (1981). Identification of common molecular subsequences. Journal of Molecular Biology, 147, 195–197. Smullyan, R. M. (1986). Logicians who reason about themselves. In Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning about Knowledge, Monterey, CA. San Francisco, CA: Morgan Kaufmann Publ., pp. 341–352. Snook, B., Taylor, P. J., & Bennell C. (2005). False confidence in computerised geographical profiling [a reply to Rossmo]. Applied Cognitive Psychology, 19, 655–661. Snow, P., & Belis, M. 
(2002). Structured deliberation for dynamic uncertain inference. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 397–416). (Studies in Fuzziness and Soft Computing, Vol. 94.) Heidelberg: Physica-Verlag. Söderström, C., Borén, H., Winquist, F., & Krantz-Rülcker, C. (2003). Use of an electronic tongue to analyze mold growth in liquid media. International Journal of Food Microbiology, 83, 253–261.

Solan, Z., Horn, D., Ruppin, E., & Edelman, S. (2005). Unsupervised learning of natural languages. Proceedings of the National Academy of Sciences, USA, 102(33), 11629–11634. Solka, J. L. (2008). Text data mining: Theory and methods. Statistics Surveys, 2, 94–112. Accessible online by searching the journal’s site at http://www.i-journals.org/ss/search.php Solow, A. R., Kitchener, A. C., Roberts, D. L., & Birks, J. D. S. (2006). Rediscovery of the Scottish polecat, Mustela putorius: Survival or reintroduction? Biological Conservation, 128, 574–575. Song, C. H., Koo, Y. H., Yoo, S. J., & Choi, B. H. (2005). An ontology for integrating multime- dia databases. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 9th international conference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part III (pp. 157–162). (Lecture Notes in Computer Science, Vol. 3684.) Berlin: Springer. Song, Q., Hu, W., & Xie, W. (2002). Robust support vector machine for bullet hole image classification. IEEE Transaction on Systems, Man and Cybernetics, Part C, 32(4), 440–448. Sorg, M. H. (2005). Forensic anthropology. Chapter 7 In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Sosa, E. (1991). Knowledge in perspective. Cambridge: Cambridge University Press. Sotomayor, S. (2002). A Latina judge’s voice. Judge Mario G. Olmos Memorial Lecture, University of California Berkeley Law School, 2001. Published in the Spring 2002 issue of Berkeley La Raza Law Journal as part of a Symposium entitled Raising the Bar: Latino and Latina Presence in the Judiciary and the Struggle for Representation. The full text of the speech is available at http://www.nytimes.com/2009/05/15/us/politics/15judge.text.html Sowa, J. F. (1984). Conceptual structures: Information processing in mind and machine. Reading, MA: Addison Wesley. Sowa, J. F. (Ed.). (1991). Principles of semantic networks: Explorations in the representation of knowledge. San Mateo, CA: Morgan Kaufmann Publishers. Sowa, J. (1994). Conceptual structures: Information processing in mind and machine. Reading, MA: Addison Wesley. Sowa, J. (1995). Top-level ontological categories. International Journal of Human-Computer Studies, 43(5–6), 669–686. Sowa, J. F. (2006). Semantic networks. Last revised in 2006 (posted at http://www.jfsowa.com/ pubs/semnet.htm). Revised and extended version of an article In: Shapiro, S. C. (Ed.). (1987). Encyclopedia of artificial intelligence. New York: Wiley; 2nd edn., 1992. Sparck Jones, K. (1993). What might be in a summary? In Proceedings of Information Retrieval ’93, Konstanz, Germany, Konstanz: Universitätsverlag, pp. 9–26. Sparrow, M. K. (1991). The application of network analysis to criminal intelligence: An assessment of the prospects. Social Networks, 13, 251–274. Spears, D. (1993). Providing computerised sentencing information to judicial officers: The New South Wales experience. Sydney, NSW: Judicial Commission of New South Wales. Specter, M. M. (1987). The national academy of forensic engineers. Forensic Engineering, 1(1), 61–63. Sperber, D., & Wilson, D. (1986). Loose talk. Proceedings of the Aristotelian Society, New Series, 86, 153–171. Reprinted in Davis, S. (Ed.). (1991). Pragmatics: A reader. Oxford: Oxford University Press. Sperber, D., & Wilson, D. (1990). Literalness looseness, metaphor. 
A section in their: Rhetoric and relevance. In D. Wellbery & J. Bender (Eds.), The ends of rhetoric: History, theory, practice (pp. 140–155). Stanford, CA: Stanford University Press. Spitzner, L. (2002). Honeypots: Tracking hackers. Reading, MA: Addison-Wesley Professional. Spitzner, L. (2003a). The honeynet project: Trapping the hackers. IEEE Security and Privacy, 1(2), 15–23. Spitzner, L. (2003b). Honeypots: Definitions and value of honeypots. http://www.tracking-hackers.com Spitzner, L. (2004). Problems and challenges with honeypots. http://www.securityfocus.com/ Spivak, J. (1996). The SGML primer. Cambridge, MA: CTI.

Spohn, W. (1988). A dynamic theory of epistemic states. In W. L. Harper & B. Skyrms (Eds.), Causation in decision, belief change, and statistics (pp. 105–134). Dordrecht, The Netherlands: Reidel (Kluwer). Spooren, W. (2001). Review of Lagerwerf (1998). Journal of Pragmatics, 33, 137–141. Srihari, R. K. (2009). Unapparent information revelation: Text mining for counter-terrorism. In S. Argamon & N. Howard (Eds.), Computational methods for counterterrorism.Berlin: Springer. Srihari, S. N., & Ball, G. R. (2008). Writer verification of handwritten Arabic. In Proceedings of the IEEE Eighth International Workshop on Document Analysis Systems (DAS 2008), Nara, Japan, pp. 28–34. Srihari, S. N., Ball, G. R., & Ramakrishnan, V. (2009). Identification of forgeries in handwritten petitions for ballot propositions. In Proceeedings of the SPIE 16th Conference on Document Recognition and Retrieval, San José, CA, January 2009, pp. 7247OS 1–8. Srihari, S. N., Ball, G. R., & Srinivasan, H. (2008). Versatile search of scanned arabic handwriting. In D. Doermann & S. Jaeger (Eds.), Arabic and chinese handwriting recognition. SACH 2006 Summit, College Park, MD, USA, September 27–28, 2006: Selected Papers (pp. 57–69). Lecture Notes in Computer Science, Vol. 4768. Berlin: Springer. Srihari, S. N., Collins, J., Srihari, R. K., Srinivasan, H., & Shetty, S. (2008). Automatic scoring of short handwritten essays in reading comprehension tests. Artificial Intelligence, 172(2/3), 300–324. Srihari, S. N., & Leedham, G. (2003). A survey of computer methods in forensic document exam- ination. In Proceedings of the International Graphonomics Society Conference, Phoenix, AZ, November 2003, pp. 278–282. Srihari, S. N., Srinivasan, H., & Beal, M. (2008). Machine learning for signature verification. In S. Marinai & H. Fujisawa (Eds.), Machine learning in document analysis and recognition (pp. 387–408). (Studies on Computational Intelligence, Vol. 90). Berlin: Springer. Srihari, S. N., Srinivasan, H., & Desai, K. (2007). Questioned document examination using CEDAR-FOX. Journal of Forensic Document Examination, 18(2), 1–20. Srihari, S. N., Srinivasan, H., & Fang, G. (2008). Discriminability of the fingerprints of twins. Journal of Forensic Identification, 58(1), 109–127. Srihari, S. N., & Su, C. (2008). Computational methods for determining individuality. In Computational Forensics: Proceedings of the International Workshop, Washington, DC. (Lecture Notes in Computer Science, Vol. 5158). Berlin: Springer, pp. 11–21. Srinivasan, H., & Srihari, S. N. (2009). Use of conditional random fields for signature-based retrieval of scanned documents. In S. Argamon & N. Howard (Eds.), Computational methods for counterterrorism. Berlin: Springer. Staab, S., & Studer, R. (Eds.). (2009). Handbook on ontologies. (International Handbooks on Information Systems.) Berlin: Springer, 2004; 2nd edn., 2009. Stærkeby, M. (2002). Forensic Entomology Pages, International (website). Division of Zoology, Department of Biology, University of Oslo, Oslo, Norway. http://www.uio.no/~mostarke/ forens_ent/forensic_entomology.html Staples, E. J. (1999). Electronic nose simulation of olfactory response containing 500 orthogonal sensors in 10 seconds. In Proceedings of the 1999 IEEE Ultrasonics Frequency Control and Ferroelectrics Symposium, Lake Tahoe, CA, 2000, pp. 307–313. Staples, E. J. (2000). Electronic nose simulation of olfactory response containing 500 orthogonal sensors in 10 seconds. 
In Proceedings of the 1999 IEEE Ultrasonics Frequency Control and Ferroelectrics Symposium, Lake Tahoe, CA, 2000, pp. 307–313. Stearns, C. Z., & Stearns, P. N. (1986). Anger: The struggle for emotional control in America’s history. Chicago: University of Chicago Press. Stearns, C. Z., & Stearns, P. N. (1988). Emotion and social change: Toward a new psychohistory. New York: Holmes & Meier. Stearns, P. N. (1989). Jealousy: The evolution of an emotion in American history. New York: New York University Press.

Stearns, P. N. (1994). American cool: Constructing a twentieth-century emotional style.(The History of Emotions, 3). New York: New York University Press. Stearns, P. N. (1995). Emotion. Chapter 2 In R. Harré & P. Stearns (Eds.), Discursive psychology in practice. London: Sage. Stearns, P. N. & Haggerty, T. (1991). The role of fear: Transitions in American emotional standards for children, 1850–1950. American Historical Review, 96, 63–94. Stearns, P. N., & Stearns, C. Z. (1985). Emotionality: Clarifying the history of emotions and emotional standards. American History Review, 90, 813–836. Stein, A. (1996). The refoundation of evidence law. Canadian Journal of Law & Jurisprudence, 9, 279–284 & 289–322. Stein, A. (2000). Evidential rules for criminal trials: Who should be in charge? In S. Doran & J. Jackson (Eds.), The judicial role in criminal proceedings (pp. 127–143). Oxford: Hart Publishing. Stein, A. (2001). Of two wrongs that make a right: Two paradoxes of the Evidence Law and their combined economic justification. Texas Law Review, 79, 1199–1234. Stein, A. (2005). Foundations of evidence law. Oxford:Oxford University Press. Stein, N. L., & Glenn, C. G. (1979). An analysis of story comprehension in elementary school children. In R. Freedle (Ed.), New directions in discourse processing II. Norwood, NJ: Ablex. Steingrimsdottir, G., Hreinsdottir, H., Gudjonsson, G. H., Sigurdsson, J. F, &. Nielsen, T. (2007). False confessions and the relationship with offending behaviour and personality among Danish adolescents. Legal and Criminological Psychology, 12, 287–296. Steinwart, I., & Christmann, A. (2008). Support vector machines. New York: Springer. Stenross, B., & Kleinman, S. (1989). The highs and lows of emotional labor: Detectives’ encounters with criminals and victims. Journal of Contemporary Ethnography, 17, 435–452. Stephen, J. F. (1863). General view of the criminal law (1st ed.). London: McMillan; 2nd edn., 1890. Reprint, 2nd edn., Littleton, Colorado: F. B. Rothman, 1985. Stephen, J. F. (1948). Adigestofthelawofevidence(12th ed.). Revision by H. L. Stephen & L. F. Sturge. London: McMillan and Co. Ltd. Reprint, with additions, of the 1936 edition. Stephenson, K., & Zelen, M. (1989). Rethinking centrality: Methods and examples. Social Networks, 11(1), 1–38. Sterling, L., & Shapiro, E. (1986). The art of Prolog: Advanced programming techniques. Cambridge, MA: The MIT Press. Stern, D. N. (1985). The interpersonal world of the infant: A view from psychoanalysis and . New York: Basic Books. Stevens, R., Wroe, C., Lord, P. W., & Goble, C. A. (2004). Ontologies in bioinformatics. In S. Staab & R. Studer (Eds.), Handbook on ontologies (pp. 635–658). (International Handbooks on Information Systems.) Berlin: Springer. Steyvers, M., & Tenenbaum, J. B. (2005). The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth. Cognitive Science, 29(1), 41–78. Stiegler, B. (1986). La faute d’Epiméthée. Technologos,74 3, 7–16. Stiff, J. B. (1994). Persuasive communication. New York: Guilford. St. John, M. F. (1992). The story gestalt: A model of knowledge-intensive processes in text comprehension. Cognitive Science, 16, 271–306. Stock, O., Strapparava, C., & Nijholt, A. (Eds.). (2002). The April Fools’ Day workshop on com- putational humour: Proceedings of the twentieth Twente Workshop on Language Technology (TWLT20), Trento, Italy, April 15–16, 2002. Enschede, The Netherlands: University of Twente. Stockmarr, A. (1999). 
Likelihood ratios for evaluating DNA evidence when the suspect is found through a database search. Biometrics, 55, 671–677. Stolfo, S. J., Creamer, G., & Hershkop, S. (2006). A temporal based forensic analysis of electronic communication. At the 2006 National Conference on Digital Government Research.

74 The French-language journal Technologos used to be published in Paris by the Laboratoire d’Informatique pour les Sciences de l’Homme.

Stone, M. (2009, August 22). Criminal trials: The reliability of evidence – Part I. CL&J: Criminal law & Justice Weekly, 173(34), 532–533. Stoney, D. A. (1997). Fingerprint identification: Scientific status. In D. L. Faigman, D. H. Kaye, M. J. Saks, & J. Sanders (Eds.), Modern scientific evidence: The law and science of expert testimony (Vol. 2). St. Paul, MN: West Publishing. Stoney, D. A. (2001). Measurement of fingerprint individuality. In H. C. Lee & R. E. Gaensslen (Eds.), Advances in fingerprint technology (pp. 327–387). Boca Raton, FL: CRC Press. Strange, D., Sutherland, R., & Garry, M. (2006). Event plausibility does not determine children’s false memories. Memory, 14, 937–951. Stranieri, A. (1999). Automating legal reasoning in discretionary domains.Ph.D.Thesis. Melbourne, Australia: La Trobe University. Stranieri, A., Yearwood, J., & Meikl, T. (2000). The dependency of discretion and consistency on knowledge representation. International Review of Law, Computers and Technology, 14(3), 325–340. Stranieri, A., & Zeleznikow, J. (2001a). WebShell: The development of web based expert system shells. At SGES British Expert Systems Conference ES’01. Cambridge: SGES. Stranieri, A., & Zeleznikow, J. (2001b). Copyright regulation with argumentation agents. In D. M. Peterson, J. A. Barnden, & E. Nissan (Eds.), Artificial intelligence and law, special issue of Information & Communications Technology Law, 10(1), 109–123. Stranieri, A., & Zeleznikow, J. (2005a). Knowledge discovery from legal databases. (Springer Law and Philosophy Library, 69.) Dordrecht, The Netherlands: Springer. Stranieri, A., & Zeleznikow, J. (2005b). Knowledge discovery from legal databases. Tutorial given at Tenth International Conference on Artificial Intelligence and Law (ICAIL 2005), in Bologna, Italy. Stranieri, A., Zeleznikow, J., Gawler, M., & Lewis, B. (1999). A hybrid rule–neural approach for the automation of legal reasoning in the discretionary domain of family law in Australia. Artificial Intelligence and Law, 7(2/3), 153–183. Stranieri, A., Zeleznikow, J., & Yearwood, J. (2001). Argumentation structures that integrate dialectical and non-dialectical reasoning. The Knowledge Engineering Review, 16(4), 331–348. Strömwall, L. A., & Granhag, P. A. (2003a). Affecting the perception of verbal cues to deception. Applied Cognitive Psychology, 17, 35–49. Strömwall, L. A., & Granhag, P. A. (2003b). How to detect deception? Asessing the beliefs of police officers, prosecutors and judges. Psychology, Crime and Law, 9(1), 19–36. Strömwall, L. A., & Granhag, P. A. (2007). Detecting deceit in pairs of children. Journal of Applied Social Psychology, 37, 1285–1304. Strömwall, L. A., Hartwig, M., & Granhag, P. A. (2006). To act truthfully: Nonverbal behavior and strategies during a police interrogation. Psychology, Crime & Law, 12, 207–219. Su, X., & Tsai, Ch.-L. (2011). Outlier detection. Wiley Interdisciplinary Reviews (WIREs): Data Mining and Knowledge Discovery, 1(3), 261–268. doi://10.1002/widm.19 Summers, R. S. (1978). Two types of substantive reasons: The core of a theory of common-law justification. Cornell Law Review, 63, 707–788. Sun, J., Tao, D., & Faloutsos, C. (2006). Beyond streams and graphs: Dynamic tensor anal- ysis. In Proceedings of KDD 2006, Philadelphia, PA. http://www.cs.cmu.edu/~christos/ PUBLICATIONS/kdd06DTA.pdf Sun, J., Xie, Y., Zhang, H., & Faloutsos, C. (2007). Less is more: Compact matrix decomposition for large sparse graphs. In Proceedings of SDM, Minneapolis, MN, April 2007. http://www.cs. 
cmu.edu/~jimeng/papers/SunSDM07.pdf Suprenant, B. A. (1988). Introduction to forensic engineering. Oxford: Pergamon. Sutton, P. T. (1998). Bloodstain pattern interpretation. Short Course Manual. Memphis, TN: University of Tennessee. Swartjes, I. (2009). Whose story is it anyway? How improv informs agency and authorship of emergent narrative. Ph.D. thesis. Enschede, The Netherlands: University of Twente. http://wwwhome.cs.utwente.nl/~swartjes/dissertation/

Swartjes, I., & Theune, M. (2006). A fabula model for emergent narrative. In S. Göbel, R. Malkewitz, & I. Iurgel (Eds.), Technologies for interactive digital storytelling and enter- tainment: Proceedings of the third international conference, Tidse 2006. (Lecture Notes in Computer Science, Vol. 4326.) Berlin: Springer. Sweetser, E. (1987). The definition of lie: An examination of the folk theories underlying a seman- tic prototype. In D. Holland & N. Quinn (Eds.), Cultural models in language and thought (pp. 43–66). Chicago: University of Chicago Press. Sycara, K. (1989a). Argumentation: Planning other agents’ plans. In Proceedings of the eleventh International Joint Conference on Artificial Intelligence (IJCAI’89), Detroit, MI, pp. 517–523. http://ijcai.org/search.php Sycara, K. (1989b). Multiagent compromise via negotiation. In L. Gasser & M. Huhns (Eds.), Distributed artificial intelligence, 2 (pp. 119–138). San Mateo, CA: Morgan Kaufmann, and London: Pitman. Sycara, K. (1990). Persuasive argumentation in negotiation. Theory and Decision, 28, 203–242. Sycara, K. (1992). The PERSUADER. In D. Shapiro (Ed.), The encyclopedia of artificial intelligence. Chichester: Wiley. Sycara, K. P. (1998). Multiagent systems. AI Magazine, Summer 1998, pp. 79–92. Szilas, N. (1999). Interactive Drama on Computer: Beyond Linear Narrative. In AAAI Fall Symposium on Narrative Intelligence, Falmouth, MA: AAAI Press, pp. 150–156. Szilas, N., & Rety, J.-H. (2004). Minimal structure for stories. In Proceedings of the First ACM Workshop on Story Representation, Mechanism, and Context, 12th ACM International Conference on Multimedia. New York: ACM, pp. 25–32. Szymanski, B. K., & Chung, M.-S. (2001). A method for indexing Web pages using Web bots. In Proceedings of the International Conference on Info-Tech & Info-Net, ICII’2001, Beijing, China, November 2001, IEEE Computer Society Press, pp. 1–6. Szymanski, B., & Zhang, Y. (2004). Recursive data mining for masquerade detection and author identification. In Proceedings of the Fifth IEEE System, Man and Cybernetics Information Assurance (SMC IA) Workshop, West Point, NY, June 2004, pp. 424–431. Taddei Elmi, G. (1992). Cultura informatica e cultura giuridica. Informatica e diritto (Florence), Year 18, 2nd Series, 1(1/2), 111–124. Talukder, A. (2010). Event-centric multisource stream processing and multimedia assimilation for geospatiotemporal phenomena. In Proceedings of the 2nd ACM international workshop on Events in multimedia (EiMM’10).NewYork:ACM. Tan, X., & Bhanu, B. (2006). Fingerprint matching by genetic algorithms. Pattern Recognition, 29(3), 465–477. Tang, Y., & Daniels, T. E. (2005). A simple framework for distributed forensics. At the Second International Workshop on Security in Distributed Computing Systems (SDCS),in:Proceedings of the 25th International Conference on Distributed Computing Systems Workshops (ICDCS 2005 Workshops), 6–10 June 2005, Columbus, OH. IEEE Computer Society, pp. 163–169. Tapiero, I., den Broek, P. V., & Quintana, M.-P. (2002). The mental representation of narrative texts as networks: The role of necessity and sufficiency in the detection of different types of causal relations. Discourse Processes, 34(3), 237–258. Taroni, F., Aitken, C., Garbolino, P., & Biedermann, A. (2006). Bayesian networks and probabilis- tic inference in forensic science. (Statistics in Practice Series.) Chichester: Wiley. Taruffo, M. (1998). Judicial decisions and artificial intelligence. Artificial Intelligence and Law, 6, 311–324. Tata, C., Wilson, J. 
N., & Hutton, N. (1996). Representations of knowledge and discretionary decision-making by decision-support systems: The case of judicial sentencing. Journal of Information Law & Technology, 2 (http://elj.warwick.ac.uk/jilt/artifint/2tata/pr2tata.htm and in Ascii format: 2tata.TXT). Tatti, N. (2009). Significance of episodes based on minimal windows. In Proceedings of the Ninth IEEE International Conference on Data Mining (ICDM-2009), 2009, pp. 513–522. Tatti, N., & Cule, B. (2010). Mining closed strict episodes. In Proceedings of the Tenth IEEE International Conference on Data Mining (ICDM-2010), pp. 501–510.

Taubes, G. (2002). An interview with Dr. Michael I. Miller. In-Cytes, ISI accessible at http://www. incites.com/scientists/DrMichaelIMiller.html Tavris, C. (2002). The high cost of skepticism. Skeptical Inquirer, 26(4), 41–44 (July/August 2002). Taylor, J. (1994a). A multi-agent planner for modelling dialogue. Ph.D. thesis, School of Cognitive and Computing Sciences, University of Edinburgh, Edinburgh, Scotland. Taylor, J. A. (1994b). Using hierarchical autoepistemic logic to model beliefs in dialogue. In J. R. Koza (Ed.), Artificial life at Stanford 1994. Stanford, CA: Stanford Bookstore. Also: Technical report HCRC/RP-60 (November), Human Communication Research Centre, University of Edinburgh, Edinburgh, Scotland. Tebbett, I. (1992). Gas chromatography in forensic science. (Ellis Horwood Series in Forensic Science.) London: Ellis Horwood. Templeman, Lord, & Reay, R. (1999). Evidence (2nd ed.). London: Old Bailey Press. The 1st edn. (1997) was by Lord Templeman & C. Bell. Terluin, D. (2008). From fabula to fabulous: Using discourse structure to separate paragraphs in automatically generated stories. Master’s thesis, supervised by R. Verbrugge, Institute of Artificial Intelligence, University of Groningen, Groningen, Netherlands. See at http://www.rinekeverbrugge.nl/PDF/Supervisor%20for%20Masters% 20Students/thesisDouweTerluin2008.pdf ter Meulen, A. G. B. (1995). Representing time in natural language: The dynamic interpretation of tense and aspect. Cambridge, MA: The MIT Press. Paperback, 1997. The paperback edition is augmented with an appendix. Tesauro, G., Kephart, J., & Sorkin, G. (1996). Neural networks for computer virus recognition. IEEE Expert, 11(4), 5–6. Thagard, P. (1989). Explanatory coherence. Behavioural and Brain Sciences, 12(3), 435–467. Commentaries and riposte up to p. 502. Thagard, P. (2000a). Coherence in thought and action. Cambridge, MA: The MIT Press. Thagard, P. (2000b). Probabilistic networks and explanatory coherence. Cognitive Science Quarterly, 1, 91–114. Thagard, P. (2004). Causal inference in legal decision making: Explanatory coherence vs. Bayesian networks. Applied Artificial Intelligence, 18(3/4), 231–249. Thagard, P. (2005). Testimony, credibility and explanatory coherence. Erkenntnis, 63, 295–316. Thali, M. J., Braun, M., & Dirnhofer, R. (2003). Optical 3D surface digitizing in forensic medicine. Forensic Science International, 137, 203–208. Thali, M. J., Braun, M., Wirth, J., Vock, P., & Dirnhofer, R. (2003). 3D surface and body documen- tation in forensic medicine: 3D/CAD photogrammetry merged with 3D radiological scanning. Journal of Forensic Science, 48(6), 1356–1365. Teran, J., Sifakis, E., Blemker, S., Ng-Thow-Hing, V., Lau, C., & Fedkiw, R. (2005). Creating and simulating skeletal muscle from the visible human data set. IEEE Transactions on Visualization and Computer Graphics, 11(3), 317–328. Teufel, S., & Moens, M. (2002). Summarising scientific articles: Experiments with relevance and rhetorical status. Computational Linguistics, 28(4), 409–445. Theune, M., Faas, S., Nijholt, A., & Heylen, D. (2003). The virtual storyteller: Story creation by intelligent agents. In S. Gömbel, N. Braun, U. Spierling, J. Dechau, & H. Diener (Eds.), Proceedings of TIDSE 2003: Technologies for interactive digital storytelling and entertainment (pp. 204–215). Fraunhofer IRB Verlag. Thomas, E. A. C., & Hogue, A. (1976). Apparent weight of evidence, decision criteria, and confidence ratings in juror decision-making. Psychological Review, 83, 442–465. Thomas, M. 
[but Anon.] (2004). Plot, story, screen: An introduction to narrativity. Deliverable (from the University of Cambridge) of NM2: New Media for a New Millennium (IST-004124), Version 1, 26 October 2004. Thompson, P. (2001). Automatic categorization of case law. In Proceedings of the Eighth International Conference on Artificial Intelligence and Law (ICAIL 2001), May 21–25, 2001, St. Louis, Missouri. New York: ACM Press, pp. 70–77.

Thorndyke, P. W. (1977). Cognitive structures in comprehension and memory of narrative discourse. Cognitive Psychology, 9, 111–191. Tidmarsh, J. (1992). Unattainable justice: The form of complex litigation and the limits of judicial power. George Washington University Law Review, 60, 1683. Tillers, P. (1983). Modern theories of relevancy. Boston: Little, Brown & Co. Tillers, P. (2005). If wishes were horses: Discursive comments on attempts to prevent individuals from being unfairly burdened by their reference classes. Law, Probability, and Risk, 4, 33–49. Tillers, P. (Ed.). (2007). Graphic and visual representations of evidence and inference in legal settings. Special issue. Law, Probability and Risk, 6(1–4). Oxford: Oxford University Press. Tillers, P., & Green, E. (Eds.). (1988). Probability and inference in the law of evidence: The uses and limits of bayesianism. (Boston Studies in the Philosophy of Science, 109). Boston & Dordrecht (Netherlands): Kluwer. Tillers, P., & Schum, D. (1992). Hearsay logic. Minnesota Law Review, 76, 813–858. Tillers, P., & Schum, D. (1998). A theory of preliminary fact investigation. In S. Brewer & R. Nozick (Eds.), The philosophy of legal reasoning: Scientific models of legal reasoning.New York: Garland. Tillers, P., & Schum, D. A. (1988). Charting new territory in judicial proof: Beyond Wigmore. Cardozo Law Review, 9(3), 907–966. Tilley, N. (2003). Community policing, problem-oriented policing and intelligence-led policing. In T. Newburn (Ed.), Handbook of policing (pp. 311–339). Cullompton: Willan Publishing. Tinsley, Y. (2001). Even better than the real thing? The case for reform of identification procedures. The International Journal of Evidence & Proof, 5(2), 99–110. Sark, Channel Islands: Vathek Publishing ([email protected]). Toland, J., & Rees, B. (2005). Applying case-based reasoning to law enforcement. International Association of Law Enforcement Intelligence Analysts Journal, 15. Tomberlin, J. E. (1981). Contrary-to-duty imperatives and conditional obligation. Noûs, 16, 357–375. Tonfoni, G. (Ed.). 1985. Artificial intelligence and text-understanding: Plot units and sum- marization procedures. (Quaderni di Ricerca Linguistica, Vol. 6.) Parma, Italy: Edizioni Zara. Tong, H., & Faloutsos, F. (2006). Center-piece subgraphs: Problem definition and fast solutions. Proceedings of KDD 2006, Philadelphia, PA. http://www.cs.cmu.edu/~christos/ PUBLICATIONS/kdd06CePS.pdf Tong, H., Faloutsos, C., & Jia-Yu Pan, J.-Y. (2006). Fast random walk with restart and its applications. In Proceedings of ICDM 2006, Hong Kong. http://www.cs.cmu.edu/~christos/ PUBLICATIONS/icdm06-rwr.pdf Toni, F., & Kowalski, R. (1995). Reduction of abductive logic programs to normal logic programs. In L. Sterling (Ed.), Proceedings of the 12th international conference on logic programming (pp. 367–381). Cambridge, MA: MIT Press. Toni, F., & Kowalski, R. (1996). An argumentation-theoretic approach to transformation of logic programs. In Proceedings of LOPSTR. (Lecture Notes in Computer Science, 1048.) Heidelberg, Germany: Springer, pp. 61–75. Tonini, P. (1997). La Prova Penale. Padua, Italy: CEDAM. Topolnicki, D. M., & MacDonald, E. M. (1991). How the IRS abuses taxpayers. Reader’s Digest (U.S. edn.), March 1991, pp. 83–86. Longer version in Money, October 1990. Toppano, E., Roberto, V., Giuffrida, R., & Buora, G. B. (2008). Ontology engineering: Reuse and integration. International Journal of Metadata, Semantics and Ontologies, 3(3), 233–247. Toulmin, S. E. (1958). The uses of argument. 
Cambridge, England: Cambridge University Press (reprints: 1974, 1999). Trankell, A. (1972). The reliability of evidence: Methods for analyzing and assessing witness statements. Stockholm: Beckmans. Travis, C. (2004). The silence of the senses. Mind, 113, 57–94. Tredoux, C. G., Nunez, D. T., Oxtoby, O., & Prag, B. (2006). An evaluation of ID: An eigenface based construction system. South African Computer Journal, 37, 1–9.

Tribe, L. H. (1971). Trial by mathematics: Precision and ritual in the legal process. Harvard Law Review, 84, 1329–1393. Tribondeau, N. (accessed in 2006). Glossaire de la police Technique et Scientifique. Salon du Polar (accessible on the Web at: http://www.salondupolar.com/pages/texte/glossaire.htm). Trithemius, J. (1500s). Latin treatise Steganographia, hoc est, ars per occultam scripturam animi sui voluntatem absentibus aperiendi certa. [Partly completed in 1503.] Frankfurt/Main: “ex officina typographica Matthiae Beckeri, sumptibus Joannis Berneri”, 1605, 1608, and 1621. Partial English edition, A. McLean (Ed.), F. Tait, C. Upton, & J. W. H. Walden (trans.), The Steganographia of Johannes Trithemius, Edinburgh, Scotland: Magnum Opus Hermetic Sourceworks, 1982. Tsai, F. S., & Chan, K. L. (2007). Detecting cyber security threats in weblogs using probabilis- tic models. In C. C. Yang, D. Zeng, M. Chau, K. Chang, Q. Yang, X. Cheng, et al. (Eds.), Intelligence and security informatics: Proceedings of the Pacific Asia workshop, PAISI 2007, Chengdu, China, April 11–12, 2007 (pp. 46–57). Lecture Notes in Computer Science, Vol. 4430. Berlin: Springer. Tschudy, R. H. (1961). Palynomorphs as indicators of facies environments in Upper Cretaceous and Lower Tertiary strata, Colorado and Wyoming. Wyoming Geological Association Guidebook 16. In Annual Field Conference, pp. 53–59. Tsiamyrtzis, P., Dowdall, J., Shastri, D., Pavlidis, I. T., Frank, M. G., & Ekman, P. (2005). Imaging facial physiology for the detection of deceit. International Journal of Computer Vision, 71(2), 197–214. Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 381–403). New York: Academic. Tupman, W. A. (1995). Cross-national criminal databases: The ongoing search for safeguards. Law, Computers and Artificial Intelligence, 4, 261–275. Turner, B. (1987). Forensic entomology: Insects against crime. Science Progress, 71(1) = #281, pp. 133–144. Abingdon, Oxfordshire: Carfax (Taylor & Francis). Turner, S. R. (1992). MINSTREL: A computer model of creativity and storytelling.Ph.D. dissertation, Computer Science, University of California, Los Angeles, December 1992, technical report CSD-920057/UCLA-AI-92-04. ftp://ftp.cs.ucla.edu/tech-report/1992-reports/ 920057.pdf Turner, S. R. (1994). The creative process: A computer model of storytelling and creativity., Mahwah, NJ: Erlbaum. Turney, P. D. (2002). Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of ACL-2002. Turvey, B. (1999). Criminal profiling: An introduction to behavioral evidence analysis. San Diego, CA: Academic. Twining, W. (1997). Freedom of proof and the reform of criminal evidence. In E. Harnon & A. Stein (Eds.), Rights of the accused, crime control and protection of victims, special volume of the Israel Law Review, 31(1–3), 439–463. Twining, W., & Miers, D. (1976). How to do things with rules: A primer of interpretation.(Lawin Context Series). London: Weidenfeld & Nicolson. Twining, W. L. (1984). Taking facts seriously. Journal of Legal Education, 34, 22–42. Twining, W. L. (1985). Theories of evidence: Bentham and Wigmore. London: Weidenfeld & Nicolson. Twining, W. L. (1989). Rationality and scepticism in judicial proof: Some signposts. International Journal for the Semiotics of Law, 2(4), 69–83. Twining, W. L. (1990). Rethinking evidence: Exploratory essays. Oxford: Blackwell; also, Evanston, IL: Northeastern University Press, 1994. Twining, W. 
L. (1999). Necessary but dangerous? Generalizations and narrative in argumentation about ‘facts’ in criminal process. Chapter 5 in M. Malsch & J. F. Nijboer (Eds.), Complex cases: Perspectives on the Netherlands criminal justice system (pp. 69–98). (Series Criminal Sciences). Amsterdam: THELA THESIS.

Ukpabi, P., & Peltron, W. (1995). Using the scanning electron microscope to identify the cause of fibre damage. Part I: A review of related literature. Journal of the Canadian Society of Forensic Science, 28(3), 181–187. Ulmer, S. (1969). The discriminant function and a theoretical context for the use in estimating the votes of judges. In J. N. Grossman & J. Tanenhaus (Eds.), Frontiers of judicial research: Shambaugh conference on judicial research, University of Iowa, October 1967 (pp. 335–369). New York: Wiley. Umaschi, M. (1996). SAGE storytellers: Learning about identity, language and technology. In Proceedings of the Second International Conference on the Learning Sciences (ICLS 96). Association for the Advancement of Computing in Education (AACE), 1996, pp. 526–531. Umaschi Bers, U. (2003). We are what we tell: Designing narrative environments for children. In M. Mateas & P. Sengers (Eds.), Narrative intelligence (pp. 113–128). Amsterdam: Benjamins. Undeutsch, U. (1982). Statement reality analysis. In A. Trankell (Ed.), Reconstructing the past: The role of psychologists in criminal trials (pp. 27–56). Deventer, The Netherlands: Kluwer (now Dordrecht & Berlin: Springer). Ursu, M. F., & Zimmer, R. (2002). On the notion of compliance in critiquing intelligent design assistants: Representing duty and contrary-to-duty statements. In Proceedings of the Sixth International Conference on Information Visualisation; Symposium on Computer Aided Design, London, 10–12 July 2002. IEEE Computer Society, pp. 644–649. Uschold, M. (2003). Where are the semantics in the Semantic Web? AI Magazine, 24(3), 25–36. Uschold, M. (2005). An ontology research pipeline. Applied Ontology, 1(1), 13–16. Amsterdam: IOS Press. Uschold, M., & Grüninger, M. (1996). Ontologies: Principles, methods and applications. Knowledge Engineering Review, 11(2), 93–136. We quoted from the previous version (accessed on the Web in 2009): Technical Report AIAI-TR-191. Edinburgh, Scotland: Artificial Intelligence Applications Institute (AIAI), University of Edinburgh, February 1996. Uschold, M., & Grüninger, M. (2004). Ontologies and semantics for seamless connectivity. In the special section on Semantic Integration. ACM SIGMOD Record, 33(4), 58–64. Uther, H.-J. (2004). The types of international folktales: A classification and bibliography. Based on the system of Antti Aarne and Stith Thompson.PartI:Animal Tales, Tales of Magic, Religious Tales, and Realistic Tales, with an Introduction. Part II: Tales of the Stupid Ogre, Anecdotes and Jokes, and Formula Tales. Part III: Appendices. (Folklore Fellows Communications, Vols. 284–286.) Helsinki, Finland: Suomalainen Tiedeakatemia = Academia Scientiarum Fennica. Uyttendaele, C., Moens, M.-F., & Dumortier, J. (1998). SALOMON: Automatic abstracting of legal cases for effective access to court decisions. Artificial Intelligence and Law, 6(1), 59–79. Vafaie, H., Abbott, D. W., Hutchins, M., & Matkovskly, I. P. (2000). Combining multiple models across algorithms and samples for improved results. At The Twelfth International Conference on Tools with Artificial Intelligence, Vancouver, BC, Canada, 13–15 November 2000. Valcour, L. (1997). Investigate B & E: Break & enter expert system. Technical Report TR-11-97, Canadian Police Research Centre. Valente, A. (1995). Legal knowledge engineering: A modeling approach. Amsterdam: IOS Press. Valente, A. (2005). Types and roles of legal ontologies. In V. R. Benajmins, P. Casanovas, J. Breuker, & A. 
Gangemi (Eds.), Proceedings of law and the semantic web [2005]: Legal ontologies, methodologies, legal information retrieval, and applications (pp. 65–76). (Lecture Notes in Computer Science, Vol. 3369.) Berlin: Springer. http://lib.dnu.dp.ua:8001/l/%D0%9A%D0%BE%D0%BF%D1%8C%D1%8E%D1%82%D0%B5%D1%80%D1%8B%D0%98%D1%81%D0%B5%D1%82%D0%B8/_Lecture%20Notes%20in%20Computer%20Science/semantic/Law%20and%20the%20Semantic%20Web..%20Legal%20Ontologies,%20Methodologies,%20Legal%20Information%20Retrieval,%20and%20Applications(LNCS3369,%20Springer,%202005)(ISBN%203540250638)(258s)_CsLn_.pdf#page=75 Valentine, T., Darling, S., & Memon, A. (2006). How can psychological science enhance the effectiveness of identification procedures? An international comparison. Public Interest Law Reporter, 11, 21–39.

Valentine, T., Darling, S., & Memon, A. (2007). Do strict rules and moving images increase the reliability of sequential identification procedures? Applied Cognitive Psychology, 21, 933–949. Valentine, T., Pickering, A., & Darling, S. (2003). Characteristics of eyewitness identification that predict the outcome of real lineups. Applied Cognitive Psychology, 17, 969–993. http://www.valentinemoore.fsnet.co.uk/trv/ Valette, R., & Pradin-Chézalviel, B. (1998). Time Petri nets for modelling civil litigation. In A. A. Martino & E. Nissan (Eds.), Formal models of legal time, special issue, Information and Communications Technology Law, 7(3), 269–280. Valeur, F., Mutz, D., & Vigna, G. (2005). A learning-based approach to the detection of SQL attacks. In K. Julisch & C. Krügel (Eds.), Detection of Intrusions and Malware, and Vulnerability Assessment: Proceedings of the second international conference (DIMVA 2005), Vienna, Austria, July 7–8, 2005 (pp. 123–140). (Lecture Notes in Computer Science, Vol. 3548.) Berlin: Springer. Valitutti, A., Strapparava, C., & Stock, O. (2005). Developing affective lexical resources. Psychnology Journal, 2(1), 61–83. http://www.psychnology.org/File/PSYCHNOLOGY_JOURNAL_2_1_VALITUTTI.pdf van Andel, P. (1994). Anatomy of the unsought finding. Serendipity: Origin, history, domains, traditions, appearances, patterns and programmability. British Journal of the Philosophy of Science, 45(2), 631–647. van Benthem, J. (1983). The logic of time (1st ed.). Dordrecht, The Netherlands: Kluwer. 2nd edition, 1991. van Benthem, J. (1995). Temporal logic. In D. M. Gabbay, C. J. Hogger, & J. A. Robinson (Eds.), Handbook of logic in artificial intelligence and logic programming (Vol. 4, pp. 241–350). Oxford: Clarendon Press. Van Cott, H. P., & Kinkade, R. G. (Eds.). (1972). Human engineering guide to equipment design. New York: McGraw-Hill. Vandenberghe, W., Schafer, B., & Kingston, J. (2003). Ontology modelling in the legal domain: Realism without revisionism. In P. Grenon, C. Menzel, & B. Smith (Eds.), Proceedings of the KI2003 Workshop on Reference Ontologies and Application Ontologies, Hamburg, Germany, September 16, 2003. (CEUR Workshop Proceedings, Vol. 94.) CEUR-WS.org. http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/s/Schafer:Burkhard.html van den Braak, S. W., van Oostendorp, H., Prakken, H., & Vreeswijk, G. A. W. (2006). A critical review of argument visualization tools: Do users become better reasoners? At the Sixth International Workshop on Computational Models of Natural Argument, held with ECAI’06, Riva del Garda, Italy, August 2006. van den Braak, S. W., & Vreeswijk, G. A. W. (2006). A knowledge representation architecture for the construction of stories based on interpretation and evidence. At the Sixth International Workshop on Computational Models of Natural Argument, held with ECAI’06, Riva del Garda, Italy, August 2006. van der Schoor, J. (2004). Brains voor de recherche [Brains for the criminal investigation department]. (In Dutch.) Justitiële Verkenningen, 30, 96–99. van der Torre, L. W. N., & Tan, Y.-H. (1999). Contrary-to-duty reasoning with preference-based dyadic obligations. Annals of Mathematics and Artificial Intelligence, 27(1–4), 49–78. van der Vet, P. E., & Mars, N. J. I. (1995). Ontologies for very large knowledge bases in materials science: A case study. In N. J. I. Mars (Ed.), Towards very large knowledge bases: Knowledge building and knowledge sharing 1995 (pp. 73–83). Amsterdam: IOS Press. van Dijk, T. A. (1979). Relevance assignment in discourse comprehension.
Discourse Processes, 2, 113–126. van Dijk, T. A. (1989). Relevance in logic and grammar. Chapter 2 In J. Norman & R. Sylvan (Eds.), Directions in relevant logic (pp. 25–57). Boston: Kluwer. http://www.discourses.org/OldArticles/Relevance%20in%20logic%20and%20grammar.pdf van Eemeren, E. H., & Grootendorst, R. (1995). Argumentation theory. In J. Verschueren, J.-O. Östman, & J. Blommaert (Eds.), Handbook of pragmatics (pp. 55–61). Amsterdam: John Benjamins. van Eemeren, E. H., Grootendorst, R., & Snoek Henkemans, F. (1996). Fundamentals of argumentation theory. Mahwah, NJ: Lawrence Erlbaum Associates. van Eemeren, E. H., Grootendorst, R., & Kruiger, T. (1987a). Handbook of argumentation theory: Pragmatics and discourse analysis. Amsterdam: Foris. van Engers, T. M., & Glasee, E. (2001). Facilitating the legislation process using a shared conceptual model. IEEE Intelligent Systems, 16, 50–57. van Gelder, T. J. (2002). Argument mapping with Reason!Able [sic]. The American Philosophical Association Newsletter on Philosophy and Computers, 2002, 85–90. van Gelder, T. J., & Rizzo, A. (2001). Reason!Able across the curriculum. In Is IT an Odyssey in Learning? Proceedings of the 2001 Conference of the Computing in Education Group of Victoria. Victoria, Australia. Van Koppen, P. J. (1995). Judges’ decision-making. Chapter 6.7 In R. Bull & D. Carson (Eds.), Handbook of psychology in legal contexts (pp. 581–610). Chichester: Wiley. van Kralingen, R. W. (1995). Frame based conceptual models of statute law. Dordrecht, The Netherlands: Kluwer Law International (now Springer). Van Reenen, P. Th., & van Mulken, M. J. P. (Eds.). (1996). Studies in stemmatology. Amsterdam: Benjamins. Vapnik, V. N. (1995). The nature of statistical learning theory. New York & Berlin: Springer. Vapnik, V. N. (1998). Statistical learning theory: Adaptive and learning systems for signal processing, communications, and control. New York: Wiley. Vellani, K., & Nahoun, J. (2001). Applied criminal analysis. Boston: Butterworth-Heinemann. Veloso, M. (1994). Planning and learning by analogical reasoning. Berlin: Springer. Veloso, M., & Aamodt, A. (Eds.). (1995). Case-based reasoning research and development: Proceedings of the first international conference on case-based reasoning. Berlin: Springer. Vendler, Z. (1975a). On what we know. In K. Gunderson (Ed.), Language, mind, and knowledge (pp. 370–390). (Minnesota Studies in the Philosophy of Science, 7). Minneapolis, MN: University of Minnesota Press. Vendler, Z. (1975b). Reply to Professor Aune. In K. Gunderson (Ed.), Language, mind, and knowledge (pp. 400–402). (Minnesota Studies in the Philosophy of Science, 7). Minneapolis, MN: University of Minnesota Press. Verheij, B. (1999). Automated argument assistance for lawyers. In Proceedings of the Seventh International Conference on Artificial Intelligence and Law (ICAIL 1999). New York: ACM Press, pp. 43–52. Verheij, B. (2000). Dialectical argumentation as a heuristic for courtroom decision-making. In P. J. van Koppen & N. H. M. Roos (Eds.), Rationality, information and progress in law and psychology: Liber Amicorum Hans F. Crombag (pp. 203–226). Maastricht, The Netherlands: Metajuridica Publications. Verheij, B. (2002). Dialectical argumentation with argumentation schemes: Towards a methodology for the investigation of argumentation schemes. In Proceedings of the Fifth International Conference on Argumentation, ISSA 2002. Amsterdam. Verheij, B. (2003).
Dialectical argumentation with argumentation schemes: An approach to legal logic. Artificial Intelligence and Law, 11, 167–195. Verheij, B. (2005). Virtual arguments: On the design of argument assistants for lawyers and other arguers. The Hague, The Netherlands: T. M. C. Asser Press. Vicard, P., & Dawid, A. P. (2004). A statistical treatment of biases affecting the estimation of mutation rates. Mutation Research, 547, 19–33. Vicard, P., & Dawid, A. P. (2006). Remarks on: ‘Paternity analysis in special fatherless cases without direct testing of alleged father’ [Forensic Science International 146S (2004) S159–S161]. Forensic Science International, 163(1–2), 158–160. http://tinyurl.com/pur8q Vicard, P., Dawid, A. P., Mortera, J., & Lauritzen, S. L. (2008). Estimating mutation rates from paternity casework. Forensic Science International: Genetics, 2, 9–18. doi:10.1016/j.fsigen.2007.07.002

Viegas, E., & Raskin, V. (1998). Computational semantic lexicon acquisition: Methodology and guidelines. Memoranda in Computer and Cognitive Science, MCCS-98-315. Las Cruces, NM: New Mexico State University, Computing Research Laboratory. Vila, L., & Yoshino, H. (1995). Temporal representation for legal reasoning. In: Proceedings of the Third International Workshop on Legal Expert Systems for the CISG, May 1995. Vila, L., & Yoshino, H. (1998). Time in automated legal reasoning. In A. A Martino & E. Nissan (Eds.), Law, computers and artificial intelligence: Special issue on Formal Models of Legal Time.InInformation and Communications Technology Law, 7(3), 173–197. Vila, L., & Yoshino, H. (2005). Time in automated legal reasoning. In M. Fisher, D. Gabbay, & L. Vila (Eds.), Handbook of temporal reasoning in artificial intelligence (electronic resource; Foundations of Artificial Intelligence, 1). Amsterdam: Elsevier. Visser, P. R. S. (1995). Knowledge specification for multiple legal tasks: A case study of the inter- action problem in the legal domain. Kluwer Computer/Law Series, Vol. 17. Dordrecht, The Netherlands: Kluwer (now Springer). Vossos, G., Zeleznikow, J., & Hunter, D. (1993). Building intelligent litigation support tools through the integration of rule based and case based reasoning. Law, Computers and Artificial Intelligence, 2(1), 77–93. Vreeswijk, G. (1993). Defeasible dialectics: A controversy-oriented approach towards defeasible argumentation. The Journal of Logic and Computation, 3(3), 3–27. Vreeswijk, G. A. W., Brewka, G., & Prakken, H. (2003). Special issue on computational dialectics: An introduction. Journal of Logic and Computation, 13(3). Vreeswijk, G. A. W., & Prakken, H. (2000). Credulous and sceptical argument games for pre- ferred semantics. In M. Ojeda-Aciego, I. P. de Guzman, G. Brewka, & L. Moniz Pereira (Eds.), Proceedings of JELIA’2000: The seventh European workshop on logic for artificial intelligence (pp. 239–253). Springer Lecture Notes in Artificial Intelligence, Vol. 1919. Berlin: Springer. Vrij, A. (1998a). Physiological parameters and credibility: The polygraph. Chapter 4 in Memon et al. Vrij, A. (1998b). Interviewing suspects. Chapter 6 in Memon et al. Vrij, A. (2000). Detecting lies and deceit: The psychology of lying and implications for profes- sional practice. Wiley Series on the Psychology of Crime, Policing and Law. Chichester, West Sussex, England: Wiley. Second edition: 2008. Vrij, A. (2001). Detecting the liars. Psychologist, 14, 596–598. Vrij, A. (2005). Co-operation of liars and truth-tellers. Applied Cognitive Psychology, 19, 39–50. Vrij, A., Akehurst, L., Soukara, S., & Bull, R. (2004). Let me inform you how to tell a convinc- ing story: CBCA and reality monitoring scores as a function of age, coaching, and deception. Canadian Journal of Behavioral Science, 36(2), 113–126. Vrij, A., Mann, S., Fisher, R., Leal, S., Milne, R., & Bull, R. (2008). Increasing cognitive load to facilitate lie detection: The benefit of recalling an event in reverse order. Law and Human Behavior, 32, 253–265. Vrij, A., & Semin, G. (1996). Lie experts’ beliefs about nonverbal indicators of deception. Journal of nonverbal behavior, 20(1), 65–80. Vrochidis, S., Doulaverakis, C., Gounaris, A., Nidelkou, E., Makris, L., & Kompatsiaris, I. (2008). A hybrid ontology and visual-based retrieval model for cultural heritage multimedia collections. International Journal of Metadata, Semantics and Ontologies, 3(3), 167–182. 
Wache, H., Voegele, T., Visser, U., Stuckenschmidt, H., Schuster, G., Neumann, H., et al. (2001). Ontology-based integration of information: A survey of existing approaches. In Proceedings of the IJCAI-01 Workshop on Ontologies and Information Sharing, Seattle, WA, August 4–5, 2001, pp. 108–118. Wade, K. A., Garry, M., Read, J. D., & Lindsay, D. S. (2002). A picture is worth a thousand lies: Using false photographs to create false childhood memories. Psychonomic Bulletin & Review, 9, 597–603. Wade, K. A., Sharman, S. J., Garry, M., Memon, A., Merckelbach, H., & Loftus, E. (2007). False claims about false memories. Consciousness and Cognition, 16, 18–28.

Waegel, W. B. (1981). Case routinization in investigative police work. Social Problems, 28, 263–275. Wagenaar, W. A. (1996). Anchored narratives: A theory of judicial reasoning and its consequences. In G. Davies, S. Lloyd-Bostock, M. McMurran, & C. Wilson (Eds.), Psychology, law, and criminal justice (pp. 267–285). Berlin: Walter de Gruyter. Wagenaar, W. A., van Koppen, P. J., & Crombag, H. F. M. (1993). Anchored narratives: The psychology of criminal evidence. Hemel Hempstead, Hertfordshire: Harvester Wheatsheaf, & New York: St. Martin’s Press. Wagenaar, W. A., & Veefkind, N. (1992). Comparison of one-person and many-person line ups: A warning against unsafe practices. In F. Lösel, D. Bender, & P. T. Bliesener (Eds.), Psychology and law: International perspectives. Berlin: De Gruyter. Wahab, M. S. (2004). E-commerce and internet auction fraud: The E-Bay community model. Computer Crime Research Center, 29 April 2004. http://www.crime-research.org/ Waismann, F. (1951). Verifiability. In A. Flew (Ed.), Logic and language. Oxford: Blackwell. Walker, C., & Starmer, K. (Eds.). (1999). Miscarriage of justice: A Review of justice in error (2nd ed.). London: Blackstone Press. Previously, C. Walker & K. Starmer (Eds.), Justice in error. London: Blackstone Press, 1993. Walker, D. P. (1958). Spiritual and demonic magic from Ficino to Campanella. London: Warburg Institute. Walker, R. F., Oskamp, A, Schrickx, J. A., Opdorp, G. J., & van den Berg P. H. (1991). PROLEXS: Creating law and order in a heterogeneous domain. International Journal of Man–Machine Studies, 35(1), 35–68. Wallsten, T. S., Budescu, D. V., Rapoport, A., Zwick, R. and Forsyth, B. (1986). Measuring the vague meanings of probability terms. Journal of Experimental Psychology: General, 115(4), 348–365. Walsh, W. F. (2001). Compstat: An analysis of an emerging police managerial paradigm. Policing: An International Journal of Police Strategies & Management, 24(3), 347–362. Walton, D. (1989). Informal logic. Cambridge: Cambridge University Press. Walton, D. (1996a). The witch hunt as a structure of argumentation. Argumentation, 10, 389–407. Walton, D. (2007). Character evidence: An abductive theory. Berlin: Springer. Walton, D. (2010). A dialogue model of belief. Argument & Computation, 1(1), 23–46. Walton, D. N. (1996b). Argumentation schemes for presumptive reasoning. Mahwah, NJ: Lawrence Erlbaum Associates. Walton, D. N. (1996c). Argument structure: A pragmatic theory. Toronto Studies in Philosophy. Toronto, ON: University of Toronto Press. Walton, D. N. (1997). Appeal to expert opinion. University Park, PA: Pennsylvania State University Press. Walton, D. N. (1998a). The new dialectic: Conversational contexts of argument. Toronto, ON: University of Toronto Press. Walton, D. N. (1998b). Ad Hominem arguments. Tuscaloosa: University of Alabama Press. Walton, D. N. (2002). Legal argumentation and evidence. University Park, PA: Pennsylvania State University Press. Walton, D. N. (2004). Abductive reasoning. Tuscaloosa, AL: University of Alabama Press. Walton, D. N. (2006a). Examination dialogue: An argumentation framework for critically ques- tioning an expert opinion. Journal of Pragmatics, 38, 745–777. Walton, D. N. (2006b). Character evidence: An abductive theory. Dordrecht, The Netherlands: Springer. Walton, D. N., & Krabbe, E. C. W. (1995). Commitment in dialogue: Basic concepts of interpersonal reasoning. Albany, NY: State University of New York Press. Walton, D., & Macagno, F. (2005). Common knowledge in legal reasoning about evidence. 
International Commentary on Evidence, 3(1), Article 1. http://www.bepress.com/ice/vol3/iss1/art1 Walton, D., Reed, C., & Macagno, F. (2008). Argumentation schemes. Cambridge: Cambridge University Press.

Walton, D., & Schafer, B. (2006). Arthur, George and the mystery of the missing motive: Towards a theory of evidentiary reasoning about motives. International Commentary on Evidence, 4(2), 1–47. Walton, K. (1978). Fearing fictions. The Journal of Philosophy, 75, 5–27. Walton, K. (1990). Mimesis as make-believe. Cambridge, MA: Harvard University Press. Wang, J. (2004). Microchip devices for detecting terrorist weapons. Analytica Chimica Acta, 507, 3–10. Wang, P., & Gedeon, T. D. (1995). A new method to detect and remove the outliers in noisy data using neural networks: Error sign testing. Systems Research and Information Science, 7(1), 55–67. Wang, W., Guo, W., Luo, Y., Wang, X., & Xu, Z. (2005). The study and application of crime emergency ontology event model. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 9th international conference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part IV (pp. 806–812). Lecture Notes in Computer Science, Vol. 3684. Berlin: Springer. Wang, W., Wang, C., Zhu, Y., Shi, B., Pei, J., Yan, X., et al. (2005). GraphMiner: a structural pattern-mining system for large disk-based graph databases and its applications. In Proceedings of the SIGMOD 2005 Conference: 24th ACM SIGMOD International Conference on Management of Data/Principles of Database Systems, Baltimore, MD, June 13–16, 2005. New York: ACM Press, pp. 879–881. Wansing, H. (2002). Diamonds are a philosopher’s best friends. Journal of Philosophical Logic, 31, 591–612. Ward, K. M., & Duffield, J. W. (1992). Natural resource damages: Law and economics. (Environmental Law Library, Wiley Law Publications.) New York: Wiley (with pocket supplements). Warner, D. (1994). A neural network-based law machine: The problem of legitimacy. Law, Computers & Artificial Intelligence, 2(2), 135–147. Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. Cambridge: Cambridge University Press. Watson, J. A. F. (1975). Nothing but the truth: Expert evidence in principle and practice for surveyors, valuers and others (2nd ed.). London: Estates Gazette. Watson, J. G., & Chow, J. C. (2002). Particulate pattern recognition. Chapter 11 In B. L. Murphy & R. D. Morrison (Eds.), Introduction to environmental forensics (pp. 429–460). San Diego, CA & London: Academic. Wavish, P., & Connah, D. (1997). Virtual actors that can perform scripts and improvise roles. In W. L. Johnson (Ed.), Autonomous agents ’97, Marina del Rey, CA. New York: ACM Press, pp. 317–322. Weatherford, M. (2002). Mining for fraud. IEEE Intelligent Systems, 17, 4–6. Webb, G. I. (2000). MultiBoosting: A technique for combining boosting and wagging. Machine Learning, 40(2), 159–196. Weidensaul, S. (2002). The ghost with trembling wings. Science, wishful thinking, and the search for lost species. New York: North Point Press. Weiss, G. (1999). Multiagent systems: A modern approach to distributed artificial intelligence. Cambridge, MA: The MIT Press. Weiss, S., & Kulikowski, C. (1992). Computer systems that learn: Classification and prediction methods from statistics, neural nets, machine learning and expert systems. San Mateo, CA: Morgan Kaufmann Publishers Inc. Wells, G. L. (1978). Applied eyewitness testimony research: System variables and estimator variables. Journal of Personality and Social Psychology, 36, 1546–1557. Wells, G. L. (1984). The psychology of lineup identifications. Journal of Applied Social Psychology, 14, 89–103. Wells, G. L. (1985).
Verbal descriptions of faces from memory: Are they diagnostic of identification accuracy? Journal of Applied Psychology, 70, 619–626. Wells, G. L. (1988). Eyewitness identification: A system handbook. Toronto, ON: Carswell Legal Publications.

Wells, G. L. (1993). What do we know about eyewitness identification? American Psychologist, 48, 553–571. Wells, G. L. (2000). From the lab to the police station: A successful application of eyewitness research. American Psychologist, 55, 581–598. Wells, G. L. (2006). Eyewitness identification: Systemic reforms. Wisconsin Law Review, 2006, 615–643. Wells, G. L., & Bradfield, A. L. (1998). ‘‘Good, you identified the suspect’’: Feedback to eyewit- nesses distorts their reports of the witnessing experience. Journal of Applied Psychology, 83, 360–376. Wells, G. L., & Bradfield, A. L. (1999). Distortions in eyewitnesses’ recollections: Can the postidentification feedback effect be moderated? Psychological Science, 10, 138–144. Wells, G. L., & Charman, S. D. (2005). Building composites can harm lineup identification performance. Journal of Experimental Psychology: Applied, 11, 147–156. Wells, G. L., Ferguson, T. J., & Lindsay, R. C. L. (1981). The tractability of eyewitness confidence and its implication for triers of fact. Journal of Applied Psychology, 66, 688–696. Wells, G. L., & Hryciw, B. (1984). Memory for faces: Encoding and retrieval operations. Memory and Cognition, 12, 338–344. Wells, G. L., & Leippe, M. R. (1981). How do triers of fact infer the accuracy of eyewitness identi- fications? Memory for peripheral detail can be misleading. Journal of Applied Psychology, 66, 682–687. Wells, G. L., Leippe, M. R., & Ostrom, T. M. (1979a). Guidelines for empirically assessing the fairness of a lineup. Law and Human Behavior, 3, 285–293. Wells, G. L., Lindsay, R. C. L., & Ferguson, T. J. (1979b). Accuracy, confidence, and juror perceptions in eyewitness identification. Journal of Applied Psychology, 64, 440–448. Wells, G. L., & Loftus, E. F. (1991). Commentary: Is this child fabricating? Reactions to a new assessment technique. In J. Doris (Ed.), The suggestibility of children’s recollections (pp. 168–171). Washington, DC: American Psychological Association. Wells, G. L., Malpass, R. S., Lindsay, R. C. L., Fisher, R. P., Turtle, J. W., & Fulero, S. (2000). From the lab to the police station: A successful application of eyewitness research. American Psychologist, 55, 581–598. Wells, G. L., Memon, A., & Penrod, S. (2006) Eyewitness evidence: Improving its probative value. Psychological Science in the Public Interest, 7, 45–75. Wells, G. L., & Murray, D. M. (1983). What can psychology say about the Neil vs. Biggers criteria for judging eyewitness identification accuracy? Journal of Applied Psychology, 68, 347–362. Wells, G. L., & Olson, E. A. (2001). The psychology of alibis or Why we are interested in the con- cept of alibi evidence. Ames, IA: Iowa State University, January 2001. http://www.psychology. iastate.edu/~glwells/alibiwebhtml.htm Wells, G. L., & Olson, E. A. (2003). Eyewitness testimony. Annual Review of Psychology, 54, 277–295. Wells, G. L., Olson, E. A., & Charman, S. (2003). Distorted retrospective eyewitness reports as functions of feedback and delay. Journal of Experimental Psychology: Applied, 9, 42–52. Wells, G. L., & Quinlivan, D. S. (2009). Suggestive eyewitness identification procedures and the Supreme Court’s reliability test in light of eyewitness science: 30 years later. Law and Human Behavior, 33, 1–24. http://www.psychology.iastate.edu/~glwells/Wells_articles_pdf/Manson_ article_in_LHB_Wells.pdf Wells, G. L., Rydell, S. M., & Seelau, E. P. (1993). On the selection of distractors for eyewitness lineups. Journal of Applied Psychology, 78, 835–844. Wells, G. 
L., Small, M., Penrod, S., Malpass, R. S., Fulero, S. M., & Brimacombe, C. A. E. (1998). Eyewitness identification procedures: Recommendations for lineups and photospreads. Law and Human Behavior, 22, 603–647. Werbos, P. (1974). Beyond regression: New tools for prediction and analysis in the behavioural sciences. Ph.D. dissertation. Cambridge, MA: Harvard University. Wertheim, K., Langenburg, G., & Moenssens, A. (2006). A report of latent print examiner accuracy during comparison training exercises. Journal of Forensic Identification, 56, 55–93.

Wertheim, K., & Maceo, A. (2002). The critical stage of friction ridge pattern formation. Journal of Forensic Identification, 52, 35–85. Westermann, G. U., & Jain, R. (2006a). Events in multimedia electronic chronicles (E Chronicles). International Journal of Semantic Web and Information Systems, 2(2), 1–27. Westermann, G. U., & Jain, R. (2006b). A generic event model for event-centric multimedia data management in eChronicle applications. In Proceedings of the 2006 IEEE International Workshop on Electronic Chronicles (eChronicle 2006),atthe22nd International Conference on Data Engineering Workshops (ICDEW’06), Atlanta, GA, April 2006. Los Alamitos, California: IEEE Computer Society Press, 2006. Westermann, G. U., & Jain, R. (2007, January). Toward a common event model for multimedia applications. IEEE MultiMedia, 19–29. Weyhrauch, P. (1997). Guiding interactive drama. Ph.D. Dissertation, Technical report CMU-CS- 97-109. Pittsburgh, PA: Carnegie Mellon University. White, J., Kauer, J. S., Dickinson, T. A., & Walt, D. R. (1996). Rapid analyte recognition in a device based on optical sensors and the olfactory system. Analytical Chemistry, 68, 2191–2202. White, M. (1957). Social thought in America: The revolt against formalism. Edition cited, 1957. First published, New York: Viking Press, 1949. Extended edn., (Beacon paperback, 41), Boston, MA: Beacon Press, 1957; 4th printing, 1963. With a new foreword, (A Galaxy Book), Oxford: Oxford University Press, 1976. White, W. S. (1989). Police trickery in inducing confessions. University of Pennsylvania Law Review, 127, 581–629. White, W. S. (1997). False confessions and the constitution: Safeguards against untrustworthy confessions. Harvard Civil Rights-Civil Liberties Law Review, 32, 105–157. Whitely, R. (1993). [A joke. Last item under the rubric] Laughter, the best medicine. Reader’s Digest, U.S. edition, 143(859), November 1993, p. 86. Previously in Executive Speechwriter Newsletter. Wierzbicka, A. (2000). The semantics of human facial expressions. In I. E. Dror & S. V. Stevenage (Eds.), Facial information processing: A multidisciplinary perspective. Special issue of Pragmatics & Cognition, 8(1), 147–183. Wigmore, J. H. (1913). The problem of proof. Illinois Law Review, 8(2), 77–103. Wigmore, J. H. (1937). The science of judicial proof as given by logic, psychology, and general experience, and illustrated judicial trials (3rd ed.). Boston: Little, Brown & Co. Previously: The Principles of Judicial Proof; or, the Process of Proof as Given by Logic, Psychology, and General Experience, and Illustrated Judicial Trials, Boston, 1931, 1934, 2nd edn.; The Principles of Judicial Proof: As Given by Logic (etc.), Boston, 1913, 1st edn. Wilder, H. H., & Wentworth, B. (1918). Personal identification: methods for the identification of individuals, living or dead. Boston: Gorham. Wilensky, R. (1978). Understanding goal-based stories. Technical Report YALE/DCS/tr140. New Haven, CT: Computer Science Department, Yale University. Wilensky, R. (1981). PAM; Micro PAM. Chapters 7 and 8 In R. C. Schank & C. K. Riesbeck (Eds.), Inside computer understanding (pp. 136–179 & 180–196). Hillsdale, NJ: Erlbaum. Wilensky, R. (1982). Points: A theory of the structure of stories in memory. In W. G. Lehnert & M. H. Ringle (Eds.), Strategies for natural language processing (pp. 345–374). Hillsdale, NJ: Erlbaum. Wilensky, R. (1980). Understanding goal-based stories. New York: Garland. Wilensky, R. (1983a). 
Planning and understanding: A computational approach to human reasoning. Reading, MA: Addison-Wesley. Wilensky, R. (1983b). Story grammar versus story points. The Behavioural and Brain Sciences, 6, 579–623. Wilkins, D., & Pillaipakkamnatt, K. (1997). The effectiveness of machine learning techniques for predicting time to case disposition. In Proceedings of Sixth International Conference on Artificial Intelligence and Law, Melbourne, Australia. New York: ACM Press, pp. 39–46.

Wilkinson, C. (2004). Forensic facial reconstruction. Cambridge: Cambridge University Press. Wilks, Y. (1975). A preferential, pattern-matching semantics for natural language understanding. Artificial Intelligence, 6, 53–74. Williams, D. R. (1996). Goodbye, my little ones (book review). New York Law Journal, April 30, Section “The Lawyer’s Bookshelf”, p. 2. Williams, K. D., & Dolnik, L. (2001). Revealing the worst first. In J. P. Forgas & K. D. Williams (Eds.), Social influence: Direct and indirect processes (pp. 213–231). Lillington, NC: Psychology Press. Williams, P., & Savona, E. (Eds.). (1995). Special issue on the united nations and transnational organized crime. Transnational Organized Crime, 1. Williams, S. (1992). Putting case-based learning into context: Examples from legal, business, and medical education. The Journal of the Learning Sciences, 2(4), 367–427. Williamson, T. (2007). Psychology and criminal investigations. In T. Newburn, T. Williamson, & A. Wright (Eds.), Handbook of criminal investigation (pp. 68–91). Cullompton: Willan Publishing. Willis, C. M., Church, S. M., Guest, C. M., Cook, W. A., McCarthy, N., Bransbury, A. J., et al. (2004). Olfactory detection of human bladder cancer by dogs: proof of principle study. British Medical Journal, 329, 712–715. Willmer, M. A. P. (1970). Crime and information theory. Edinburgh: Edinburgh University Press. Wilson, A. D., & Baietto, M. (2009). Applications and advances in electronic-nose technologies. Sensors, 9(7), 5099–5148. Open access. doi://10.3390/s90705099 http://www.mdpi.com/1424- 8220/9/7/5099/pdf Wilson, G., & Banzhaf, W. (2009). Discovery of email communication networks from the Enron corpus with a genetic algorithm using social network analysis. In Proceedings of the Eleventh Conference on Evolutionary Computation, May 18–21, 2009, Trondheim, Norway, pp. 3256–3263. Winer, D. (in press). Review of ontology based storytelling devices. In N. Dershowitz & E. Nissan (Eds.), Language, culture, computation: Essays in honour of Yaacov Choueka, Vol. 1: Theory, techniques, and applications to E-science, law, narratives, information retrieval, and the cultural heritage. Berlin: Springer. Winograd, T. (1972). Understanding natural language. New York: Academic. Winquist, F. (2008). Voltammetric electronic tongues: Basic principles and applications. Mikrochimica Acta, 163, 3–10. Winquist, F., Holmin, S., Krantz-Rülcker, C., Wide, P., & Lundström, I. (2000). A hybrid electronic tongue. Analytica Chimica Acta, 406, 147–157. Winston, P. H. (1984). Artificial intelligence (2nd ed.). Reading, MA: Addison-Wesley. Witten, I. H., & Frank, E. (2000). Data mining: Practical machine learning tools and techniques with java implementations. San Francisco: Morgan Kaufmann Publishers. Wogalter, M. S., & Marwitz, D. B. (1991). Face composite construction: In-view and from-memory quality and improvement with practice. Ergonomics, 34(4), 459–468. Wojtas, O. (1996). Forensics unmask dead poet. The Times Higher Education Supplement, London, August 23, p. 4. Woods, J. H. (1974). The logic of fiction. The Hague, The Netherlands: Mouton. Woods, W. A. (1975). What’s in a link: Foundations for semantic networks. In D. G. Bobrow & A. Collins (Eds.), Representation and understanding (pp. 35–82). New York: Academic. Wooldridge, M. (2000). Semantic issues in the verification of agent communication languages. Journal of Autonomous Agents and Multi-Agent Systems, 3(1), 9–31. Wooldridge, M. (2002). An introduction to multiagent systems. 
Chichester: Wiley. 2nd edition, 2009. [Page numbers as referred to in this book are to the 1st edition.] Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115–152. Wooldridge, M., & van der Hoek, W. (2005). On obligations and normative ability: Towards a logical analysis of the social contract. Journal of Applied Logic, 3, 396–420. Worboys, M. F., & Duckham, M. (2004). GIS: A computing perspective (2nd ed.). Boca Raton, FL: CRC Press.

Wright, D. B., & Skagerberg, E. M. (2007). Post-identification feedback affects real eyewitnesses. Psychological Science, 18, 172–178. Wright, R. K. (2005). Investigation of traumatic deaths. In S. H. James & J. J. Nordby (Eds.), Forensic science: An introduction to scientific and investigative techniques (2nd ed.). Boca Raton, FL: CRC Press. Also in 3rd edition, 2009. Wu, D. (1992). Automatic inference: A probabilistic basis for natural language interpretation. Technical Report CSD-92-692, Computer Science Division. Berkeley, CA: University of California. ftp://sunsite.berkeley.edu/pub/techreps/CSD-92-692.html Wu, J., & Chung, A. C. S. (2005a). Cross entropy: A new solver for markov random field modeling and applications to medical image segmentation. In J. S. Duncan & G. Gerig (Eds.), Medical image computing and computer-assisted intervention: MICCAI 2005, 8th international con- ference, Palm Springs, CA, USA, October 26–29, 2005, Proceedings, Part I (pp. 229–237). Lecture Notes in Computer Science, Vol. 3749. Berlin: Springer. Wu, J., & Chung, A. C. S. (2005b). A segmentation method using compound Markov random fields based on a general boundary model. In Proceedings of the 2005 International Conference on Image Processing (ICIP 2005), Genoa, Italy, September 11–14, 2005. Vol. 2: Display Algorithms: Image Processing for New Flat Panel Displays. New York: IEEE, pp. 1182–1185. Wu, F., Ng-Thow-Hing, V., Singh, K., Agur, A., & McKee, N. H. (2007). Computational repre- sentation of the aponeuroses as NURBS surfaces in 3-D musculoskeletal models. Computer Methods and Programs in Biomedicine, 88(2), 112–122. Würzbach, N. (2005). Motif. In D. Herman, M. Jahn, & M.-L. Ryan (Eds.), Routledge encyclopedia of narrative theory (pp. 322–323). London: Routledge, 2005 (hbk), 2008 (pbk). Xiang, Y., Chau, M., Atabakhsh, H., & Chen, H. (2005). Visualizing criminal relationships: Comparisons of a hyperbolic tree and a hierachical list. Decision Support System, 41, 69–83. doi://10.1016/j.dss.2004.02.006 XML. (2002). XML and information retrieval: Proceedings of the 2nd workshop,Tampere, Finland, August 2002. ACM Special Interest Group on Information Retrieval. New York: ACM, 2002. Xu, J. J., & Chen, H. (2004). Fighting organized crimes: using shortest-path algorithms to identify associations in criminal networks. Decision Support Systems, 38, 473–487. doi://10.1016/S0167-9236(03)00117-9 Xu, M., Kaoru, H., & Yoshino, H. (1999). A fuzzy theoretical approach to case-based representa- tion and inference in CISG. Artificial Intelligence and Law, 7(2/3), 115–128. Yager, R. R., & Zadeh, L. A. (1994). Fuzzy sets, neural networks and soft computing.NewYork: Van Nostrand Reinhold. Yan, X., & Han, J. (2002). gSpan: Graph-based substructure pattern mining. In Proceedings of the 2002 International Conference on Data Mining (ICDM 2002), pp. 721–724. Expanded Version, UIUC Technical Report, UIUCDCS-R-2002-2296. Department of Computer Science, University of Illinois at Urbana-Champaign. Yan, X., Zhou, X. J., & Han, J. (2005). Mining closed relational graphs with connectivity con- straints. In Proceedings of the 2005 International Conference on Knowledge Discovery and Data Mining (KDD 2005), Chicago, IL, August 2005, pp. 324–333. Yan, X., Zhu, F., Yu, P. S., & Jan, J. (2006). Feature-based substructure similarity search. ACM Transactions on Database Systems, 31(4), 1418–1453. Pre-final version posted at: http://www. cs.ucsb.edu/~xyan/papers/tods06_similarity.pdf Yang, C. C. (2008). 
Knowledge discovery and information visualization for terrorist social networks. In H. Chen & C. C. Yang (Eds.), Intelligence and security informatics (pp. 45–64). Studies in Computational Intelligence, Vol. 135. Berlin: Springer. doi://10.1007/978-3-540-69209-6 Yang, C. C., & Wang, F. L. (2008). Hierarchical summarization of large documents. Journal of the American Society for Information Science and Technology (JASIST), 59(6), 887–902. Yarmey, A. D. (1995). Eyewitness and evidence obtained by other senses. Chapter 3.7 In R. Bull & D. Carson (Eds.), Handbook of psychology in legal contexts (pp. 216–273). Chichester: Wiley.

Yazdani, M. (1983). Generating events in a fictional world of stories. Research Report R-113. Exeter: Department of Computer Science, University of Exeter. Ybarra, L. M. R., & Lohr, S. L. (2002). Estimates of repeat victimization using the national crime victimization survey. Journal of Quantitative Criminology, 18(1), 1–21. Yea, B., Konishi, R., Osaki, T., & Sugahara, K. (1994). The discrimination of many kinds of odor species using fuzzy reasoning and neural networks. Sensors & Actuators, 45, 159–165. Yearwood, J. (1997). Case-based retrieval of refugee review tribunal text cases. In Legal knowledge and information systems (pp. 67–83). JURIX 1997: The Tenth Annual Conference. Amsterdam: IOS Press. Yearwood, J., & Stranieri, A. (1999). The integration of retrieval, reasoning and drafting for refugee law: A third generation legal knowledge based system. In Proceedings of the Seventh International Conference on Artificial Intelligence and Law (ICAIL’99).NewYork:ACM Press, pp. 117–137. Yearwood, J., & Stranieri, A. (2000). An argumentation shell for knowledge based systems. In Proceedings of the IASTED International Conference on Law and Technology (LawTech 2000), 30 October – 1 November 2000. Anaheim: ACTA Press, pp. 105–111. Yearwood, J., & Stranieri, A. (2006). The generic/actual argument model of practical reasoning. Decision Support Systems, 41(2), 358–379. Yearwood, J., & Stranieri, A. (2009). Deliberative discourse and reasoning from generic argument structures. AI & Society, 23(3), 353–377. Yearwood, J., Stranieri, A., & Anjaria, C. (1999). The use of argumentation to assist in the generation of legal documents. At the Fourth Australasian Document Computing Symposium (ADCS’99). New South Wales: Southern Cross University Press. Yearwood, J., & Wilkinson, R. (1997). Retrieving cases for treatment advice in nursing using text representation and structured text retrieval. Artificial Intelligence in Medicine, 9(1), 79–99. Yedidia, J. S., Freeman, W. T., & Weiss, Y. (2003). Understanding belief propagation and its gen- eralizations. In G. Lakemeyer & B. Nebel (Eds.), Exploring artificial intelligence in the new millennium (pp. 239–269). San Francisco: Morgan Kaufmann Publishers. Previously, Technical Report TR 2001 22, Mitsubishi Electric Research, 2001. Yim, H. S., Kibbey, C. E., Ma, S. C., Kliza, D. M., Liu, D., Park, S. B., et al. (1993). Polymer membrane-based ion-, gas-, and bio-selective potentiometric sensors. Biosensors and Bioelectronics, 8, 1–38. Yinon, J. (2003). Detection of explosives by electronic noses. Analytical Chemistry, 75(5), 99A–105A. Young, A. W., & Ellis, H. D. (Eds.). (1989). Handbook of research on face processing. Amsterdam: Elsevier. Young, P., & Holmes, R. (1974). The English civil war: A military history of the three civil wars 1642–1651. London: Eyre Methuen; Ware, Hertfordshire: Wordsworth Editions, 2000. Young, R. M. (2007). Story and discourse: A bipartite model of narrative generation in virtual worlds. Interaction Studies, 8(2), 177–208. Amsterdam: Benjamins. http://liquidnarrative.csc. ncsu.edu/papers.html Yovel, J. (2003). Two conceptions of relevance. In A. A. Martino & E. Nissan (Eds.), Formal approaches to legal evidence. Special issue, Cybernetics and Systems, 34(4/5), 283–315. Yovel, J. (2007). Quasi-checks: An apology for a mutation of negotiable instruments. DePaul Journal of Business and Commercial Law, 5, 579–603. http://works.bepress.com/cgi/ viewcontent.cgi?article=1004&context=jonathan_yovel Yovel, J. (2010). 
Relational formalism, linguistic theory and legal construction. Yale Law School Faculty Scholarship Series. Paper 33. http://digitalcommons.law.yale.edu/fss_papers/33 Yu, F.-R., Tang, T., Leung, V.-C.-M., Liu, J., & Lung, C.-H. (2008). Biometric-based user authentication in mobile ad hoc networks. Security in Wireless Sensor Networks (a journal published by Wiley), 1(1), 5–16.

Yue, J., Raja, A., Liu, D., Wang, X., & Ribarsky, W. (2009). A blackboard-based approach towards predictive analytics. In Proceedings of AAAI Spring Symposium on Technosocial Predictive Analytics, Stanford University, CA, March 23–25, 2009, pp. 154–161. http://www.sis.uncc. edu/~anraja/PAPERS/TPA_JYue.pdf Yue, J., Raja, A., & Ribarsky, B. (2010). Predictive analytics using a blackboard-based reason- ing agent. Short Paper in Proceedings of 2010 IEEE/ WIC/ ACM International Conference on Intelligent Agent Technology (IAT-2010), Toronto, Canada, pp. 97–100. http://www.sis.uncc. edu/~anraja/PAPERS/IAT10Visual.pdf Yuille, J. C. (1993). We must study forensic eye-witnesses to know about them. American Psychologist, 48, 572–573. Yunhong, W., Tan, T., & Jain, A. K. (2003). Combining face and iris biometrics for identity ver- ification. In Proceedings of the Fourth International Conference on Audio and Video-Based Biometric Person Authentication (AVBPA), Guildford, UK, pp. 805–813. Zabell, S. L. (1988). The probabilistic analysis of testimony. Journal of Statistical Planning and Inference, 20, 327–354. Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8, 338–353. Zahedi, F. (1993). Intelligent systems for business. Belmont, CA: Wadsworth. Zander, M. (1979). The investigation of crime: A study of cases tried at the Old Bailey. Criminal Law Review, 1979, 203–219. Zarri, G. P. (1996). NKRL, a knowledge representation language for narrative natural language processing. In Proceedings of COLING 1996, pp. 1032–1035. http://acl.ldc.upenn.edu/C/C96/ C96-2181.pdf Zarri, G. P. (1998). Representation of temporal knowledge in events: The formalism, and its poten- tial for legal narratives. In A. A. Martino & E. Nissan (Eds.), Formal models of legal time, special issue, Information and Communications Technology Law, 7(3), 213–241. Zarri, G. P. (2009). Representation and management of narrative information: Theoretical princi- ples and implementation. (Series: Advanced Information and Knowledge Processing). Berlin: Springer. Zarri, G. P. (2011). Representation and management of complex ‘narrative’ information. In N. Dershowitz & E. Nissan (Eds.), Language, culture, computation: Studies in honour of Yaacov Choueka, Vol. 1: Theory, techniques, and applications to E-science, law, narratives, information retrieval, and the cultural heritage (in press). Berlin: Springer. Zeide, J. S., & Liebowitz, J. (1987). Using expert systems: The legal perspective. IEEE Expert, Spring issue, pp. 19–20. Zeleznikow, J. (2002a). Risk, negotiation and argumentation: A decision support system based approach. Law, Probability and Risk: A Journal for Reasoning Under Uncertainty, 1(1), 37–48. Oxford: Oxford University Press. Zeleznikow, J. (2002b). Using web-based legal decision support systems to improve access to justice. Information and Communications Technology Law, 11(1), 15–33. Zeleznikow, J. (2004). Building intelligent legal decision support systems: Past practice and future challenges. Chapter 7 In J. A. Fulcher & L. C. Jain (Eds.), Applied intelligent systems: New directions (pp. 201–254). Berlin: Springer. Zeleznikow, J., & Hunter, D. (1994). Building intelligent legal information systems: Knowledge representation and reasoning in law. Computer/Law Series, Vol. 13. Dordrecht: Kluwer. Zeleznikow, J., & Stranieri, A. (1995). The Split Up system: Integrating neural networks and rule based reasoning in the legal domain. In Proceedings of the Fifth International Conference on Artificial Intelligence & Law (ICAIL’95). 
New York: ACM Press, pp. 185–194. Zeleznikow, J., & Stranieri, A. (1998). Split Up: The use of an argument based knowledge representation to meet expectations of different users for discretionary decision making. In Proceedings of IAAI’98: Tenth Annual Conference on Innovative Applications of Artificial Intelligence. Cambridge, MA: AAAI/MIT Press, pp. 1146–1151. Zeleznikow, J., Vossos, G., & Hunter, D. (1994). The IKBALS project: Multimodal reasoning in legal knowledge based systems. Artificial Intelligence and Law, 2(3), 169–203.

Zeng, Y., Wang, R., Zeleznikow, J., & Kemp, E. (2005). Knowledge representation for the intelli- gent legal case retrieval. In R. Khosla, R. J. Howlett, & L. C. Jain (Eds.), Knowledge-based intelligent information and engineering systems: 9th international conference, KES 2005, Melbourne, Australia, September 14–16, 2005, Proceedings, Part I (pp. 339–345). Lecture Notes in Computer Science, Vol. 3684. Berlin: Springer. Zeng, Z., Wang, J., Zhou, L., & Karypis, G. (2006). Coherent closed quasi-clique discovery from large dense graph databases. In Proceedings of the 12th ACM Conference on Knowledge Discovery and Data Mining (SIG KDD 2006), Philadelphia, PA, August 20–23, 2006, pp. 797–802. Zhang, B., & Srihari, S.N. (2004). Handwriting identification using multiscale features. Journal of Forensic Document Examination, 16, 1–20. Zhang, L., Zhu, Z., Jeffay, K., Marron, S., & Smith, F. D. (2008). Multi-resolution anomaly detection for the internet. In Proceedings of the IEEE Workshop on Network Management, Phoenix, AZ, 13–18 April 2008. IEEE INFOCOM Workshops, 2008, pp. 1–6. doi://10.1109/INFOCOM.2008.4544618 Zhang, X., & Hexmoor, H. (2002). Algorithms for utility-based role exchange. In M. Gini, W. Shen, C. Torras, & H. Yuasa (Eds.), Intelligent autonomous systems 7 (pp. 396–403). Amsterdam: IOS Press. Zhang, Z., & Shen, H. (2004). Online training of SVMs for real-time intrusion detection. In Proceedings of the 18th International Conference on Advanced Information Networking and Applications, March 29–31, 2004, p. 568. Zhao, J., Knight, B., Nissan, E., Petridis, M., & Soper, A. J. (1998). The FUELGEN alternative: An evolutionary approach. The architecture. In E. Nissan (Ed.), Forum on refuelling techniques for nuclear power plants,inNew Review of Applied Expert Systems, 4, 177–183. Zhong Ren, P., & Ming Hsiang, T. (2003). Internet GIS. New York: Wiley. Zhou, W., Liu, H., & Cheng, H. (2010). Mining closed episodes from event sequences efficiently. In Proceedings of the 14th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Vol. 1, pp. 310–318. Zhu, S. C., Wu, Y., & Mumford, D. (1996). FRAME: Filters, random fields, and minimax entropy towards a unified theory for texture modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 6, pp. 686–693. Zhu, S. C., Wu, Y., & Mumford, D. (1998). Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. International Journal of Computer Vision, 27(2), 107–126. Zier, J. (1993). The expert accountant in civil litigation. Toronto, ON: Butterworths. Zimmer, A. C. (1984). A Model for the interpretation of verbal predictions. International Journal of Man-Machine Studies, 20, 121–134. Ziv, A. & Zajdman, A. (Eds.). (1993). Semites and stereotypes. New York: Greenwood Press. Zuckerman, M., & Driver, R. E. (1985). Telling lies: Verbal and nonverbal correlates of deception. In A. W. Siegman & S. Feldstein (Eds.), Multi-channel integrations of non-verbal behavior (pp. 129–147). Hillsdale, NJ: Erlbaum. Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and non-verbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental and social psychology (Vol. 14, pp. 1–59). New York: Academic. Zukerman, I., McConachy, R., & Korb, K. (1998). Bayesian reasoning in an abductive mechanism for argument generation and analysis. In Proceedings of the AAAI-98 Conference, Madison, WI, July 1998, pp. 833–838. Zuckermann, G. (2000). 
Camouflaged borrowing: Folk-etymological nativization in the service of puristic language engineering. D.Phil. Dissertation in Modern and Medieval Languages. Oxford, England: University of Oxford. Zuckermann, G. (2003). Language contact and lexical enrichment in Israeli Hebrew. (Palgrave Studies in Language History and Language Change.) London: Palgrave Macmillan. Zuckermann, G. (2006). “Etymythological othering” and the power of “lexical engineering” in Judaism, Islam and Christianity. A socio-philo(sopho)logical perspective. Chapter 16 In T. Omoniyi & J. A. Fishman (Eds.), Explorations in the sociology of language and religion (pp. 237–258). (Discourse Approaches to Politics, Society and Culture series.) Amsterdam: Benjamins. Zulueta Cebrián, C. (1996). Los Procuradores y su proyección informática en la justicia: presente y futuro [Court procurators and their informatics dimension in the justice system: Present and future]. Actas (Volumen I), II Congreso Internacional de Informática y Derecho, Mérida, Spain, April 1995 (Mérida: UNED, Centro Regional de Extremadura). Published as: Informática y Derecho, Vol. 9/10/11, Part 1, 1996, pp. 621–628. Zurada, J. M. (1992). Introduction to artificial neural networks. New York: West Publishing.

Author Index


Carr, C. S., 167–168, 1077, 1105 Chaudhary, M., 602 Carrier, B., 687 Chellas, B. F., 1042 Carroll, G., 353 Chen, C., 702 Carson, D., 864, 882–883 Chen, H., 484, 508, 511, 611, 723, 748, 759 Carter, A. L., 983 Chen, P., 620 Carter, D. L., 280 Chen, W., 490, 722 Caruso, S., 295 Chen, X., 490–491, 949 Casalinuovo,I.A.,923 Chen, Y.-C., 717, 722 Casanovas, P., 553 Chen, Z., 667, 960, 963 Casas-Garriga, G., 728 Cheng, H., 748 Case,T.I.,29 Cheng, T., 112 Casey, E., 560, 687 Cherry, M., 956 Cassel, J., 401–402 Cheswick, B., 706 Cassell, J., 396 Chilvers,D.R.,892 Cassinis, R., 61, 394 Chisholm, R. M., 304, 1042 Castelfranchi, C., 18, 79, 149–150, 173, Chiswick, D., 883 257–259 Chiu, C., 717, 722 Catania, B., 538 Cho, H., 717 Catarci, T., 546 Choi,B.H.,548 Catts, E. P., 884 Chong, F. T., 706 Cavazza, M., 342, 356, 381, 388 Choo, R. K. K., 714 Cawsey, A., 149 Choudhary, A. N., 483 Cayrol, C., 17, 170 Chow,J.C.,930 Ceci, S., 73, 1091 Chow,K.P.,260, 263, 692–694 Cellio, M. J., 588 Christie, D., 863 Ceserani, V., 539 Christie, G. C., 173, 191, 194, 411, 661, Céspedes, F., 924 1046–1047, 1090 Chadwick, D. W., 150 Christmann, A., 681 Chae, M., 717 Christopher, S., 280 Chaib-Draa, B., 172 Christou, G., 207 Chaiken, S., 28 Chua,C.H.,717 Chakrabarti, D., 741 Chung, A. C. S., 722–724 Champod, C., 948, 956 Chung, M.-S., 525 Chan, C.-Y., 706 Church,S.M.,317 Chan, H., 246, 273 Churchill, E., 396 Chan, J., 243, 490 Cialdini, R., 28 Chan,K.L.,536 Ciampolini, A., 45, 1023, 1085 Chance, J. E., 265, 283, 837–839, 859 Cina, S. J., 1007 Chaney, F. B., 506 Ciocoiu, M., 550 Chang, E., 246 Cios, K. J., 671 Chang, S.-F., 872 Clark, C., 864 Channell, R. C., 506 Clark, M., 885 Chaoji, V., 492, 678, 682–684 Clark,N.K.,282 Chapanis, A., 506 Clark, P., 182 Chapanond, A., 678 Clark,R.A.,28 Charles, F., 356, 381 Clark,S.E.,283, 293 Charles, J., 483 Clarke, P. H., 892 Charlton, D., 950, 1067 Clarke, R. V., 154, 281, 899, 1049 Charlton, K., 149 Clay, M., 92 Charman, S. D., 270, 283, 863 Clements, R. V., 885 Charniak, E., 117, 350, 352 Clifford, B., 292, 882 Chau,D.H.,720, 731, 742–745 Coady, W. F., 510 Chau, M., 508, 759 Cocker, M., 444, 446, 451–452 1274 Author Index

Coelho, A. L., 536 Coulthard, M., 618 Coenen, F., 246 Cox, M., 376, 884 Cogan, E., 419 Cox, M. T., 376–377 Cohen, D., 258 Cozman, F. J., 52 Cohen, F., 260–261 Crandall, D., 498 Cohen, F. S., 1085 Crandall, J. R., 706 Cohen, L. E., 154 Crawford, C., 386, 885 Cohen, P., 182 Creamer, G., 676 Cohen, P. R., 80 Crittendon, C., 340 Cohen, R. F., 503 Croft, D. J., 905 Cohen, W. W., 675, 678 Crombag, H. F. M., 292–293 Cohn, A. G., 84, 899 Cross, R., 298, 300 Colby, K. M., 87 Crowcroft, J., 709 Cole, D. J., 885 Crowley, K., 346 Cole,J.W.L.,888 Crump, D., 302 Cole,S.A.,940, 948, 952, 955–957, 1063, Cui, Z., 84, 899 1066 Cule, B., 728 Coleman, K. M., 611–612 Culhane, S. E., 95 Coletta, M., 923 Cullingford, R. E., 353, 359–361 Collins, J., 617 Cully, D., 95 Collins, J. F., 887, 935 Culshaw, M. G., 906 Colombetti, M., v, 546, 548–549, 551 Cummins, H., 957 Colwell, K., 18, 59, 150 Curzon, L. B., 1091–1092 Colwell, L. H., 150, 182 Cutler, B. L., 273, 283, 291, 293–294, Combrink-Kuiters, C. J. M., 245–246 882–883, 1120 Conan Doyle, A., vii, 340, 496, 781 Cybenko, G., 652 Conant, E., 876 Cone, E. J., 885 D Conklin, J., 167 D’Amico, A., 917 Conley, J. M., 334 Dagan, H., 306 Connah, D., 356 Dahlgren, K., 379 Console, L., 846, 853 Dalton, R., 450 Conte, R., 18 Dames, N., 495 Continanza, L., 524, 532 Danet, B., 618 Conway,J.V.P.,617 Daniels, J. J., 602 Conyers, L. B., 912 Daniels, T. E., 687 Cook, R., 855, 883 Darling, S., 292 Cook, T., 841 Darr, T., 533 Cooper, J., 514 Dauer, F. W., 341 Cope, N., 9 Daugherty, W. E., 150 Cope, R., 883 Dauglas, M., 341 Corcho, O., 546 Dave, K., 601 Corkill, D., 532 Davenport, G., 341–342, 384–385 Corley, J., 928 Davenport, G. C., 909, 915 Corney, M., 611 Davey, S. L., 283 Correira, A., 349–351 David, R., 901 Cortes, C., 608, 681, 729 Davide, F. A. M., 917 Cortiguera, H., 388 Davidson, R. J., 285 Cosio, S. M., 924 Davies, G. M., 859, 862–863, 882 Costa, M., 245 Davis, D., 367 Costello,B.D.,959 Davis, L., 675 Coull, S., 682, 697, 699, 703–705 Davis, O., 926–927 Coulson, S., 340 Davis, T., 149 Author Index 1275

Dawid, A. P., 887, 944–945, 1049 Dill, F., 1103 Dawson, L., 908 Dillon, T., 246 Daye,S.J.,413 Ding, Y., 550 De Antonellis, V., 513 Dintino, J. J., 279 de Cataldo Neuburger, L., 149 Dioso-Villa, R., 952 de Kleer, J., 20, 844, 853 Dirnhofer, R., 994, 1007 de la Motte, R., 343 Dix, J., 170 De Mulder, R. V., 246 Dixon, D., 884, 1069 De Nicola, A., 546 do Carmo Nicoletti, M., 490 de Rosis, F., 149–150 Dobosz, M., 944 de Vel, O., 611 Dodd, G., 917 De Vey Mestdagh, C. N. J., 525 Doering, N., 149 de Vey Mestdagh, K., 928 Dolan, C., 380 de Ville, B., 1090 Doležel, L., 344, 346 Debevec, P., 872–873 Dolnik, L., 29 Debreceny, R., 611, 675–678 Dolz, M. S., 1007 Dechter, R., 1030 Domike, S., 404 Deedrick, D. W., 886 Domshlak, C., 117 Deffenbacher, K. A., 38 Donders, K., 38 DeJong, G. F., 353, 594 Dong, M., 490, 949 Dekel, E., 153 Donikian, S., 342 Del Boca, A., 414 Donnelly, L. J., xiii, xiv, 406, 905, 908, Del Favero, B., 118 910–913 del Socorro Téllez-Silva, M., 1048 Donnelly, P., 947 del Valle, M., 924 Donos, G., 207 Delannoy, J. F., 173 Doob, A., 242, 252 Delaval, A., 550 Doob, A. N., 242 Delia, J. G., 28 Doran, S., 273 Demazeau, Y., 533 Dore, A., 446–447 Demelas-Bohy, M.-D., 593 Doutre, S., 157, 161, 170 den Broek, P. V., 378 Dowdall, J., 150 Denney, R. L., 150 Doyle, A. C., vii, 496, 781 Denning, D., 698 Doyle, J., 20 DePaulo, B. M., 149 Doyle, J. K., 15 Dershowitz, A. M., xiv, 50 Doyle, J. M., 38, 297 Dershowitz, N., 418 Dozier, C., 602 Desai, K., 616 Dragoni, A. F., xiii, 7, 16–31, 54, 111, 115, Devine, P. G., 294 403, 411, 416 Devlin, P., 264 Drechsler, T., 884 Dewey, J., 392 Dreger, H., 698, 709 Deyl, Z., 885 Drew, P., 334, 898 Di Battista, G., 498 Driver,R.E.,149 Di Francesco, P., 923 Drizin, S., 282 Di Natale, C., 917 Dror, I., 842, 949 Di Pierro, D., 923 Dror, I. E., 843, 871, 950, 1067 Diaz, R. M., 310 Droser, M. L., 448 Díaz-Agudo, B., 387 Drugge, J., 149 Dick, J. P., 182 Du, X., 723 Dickey, A., 631 Duan, N., 392 Dickinson, T. A., 924, 934 Dubois, D., 17 Diesner, J., 678 Duckham, M., 513 Dignum, F., 172 Duda, R. O., 489, 600 Dijkstra, P., 508, 525 Duff, P., 38 1276 Author Index

Duffield, J. W., 890 El-Sana, J., 538 Dulaunoy, A., 706 Elsayed, T., 682 Dumortier, J., 588, 596 Elsner, M., 353 Duncan, G. T., 306, 943 Elson, D. K., 385, 495–496 Dundes, A., 344, 346 Elvang-Goransson, M., 182 Dung, P. M., 161, 170–171, 182 Embrechts, M. J., 492, 706 Dunn, J. M., 317 Emiroglu, I., 958 Dunn, M. A., 1005–1006 Emslie, R., 949 Dunne, P. E., 157, 161, 170–172, 258, 660 Endo, Ts., 356 Dunning, D., 264 Endres-Niggermeyer, B., 589 Durandin, G., 150 Engel, M., 92 Durfee, E., 532–533 Engelmore, R., 528, 1034 Dworkin, R., 631, 1047 Epstein, J. A., 149 Dyer, M. G., 63–64, 81, 87, 349, 353, 356–357, Epstein, R., 955 359, 361–371, 374, 378, 380, 383, 397, Eraly, A., 431–432 399, 412, 415, 417, 429 Erickson, B., 618 Dysart,J.E.,264 Ericson, R. V., 176 Ernst, D. R., 305 E Erwin, D. H., 448 Eades, P., 498–499 Eshghi, K., 45 Eagly, A. H., 28 Eskin, E., 744 Earl,L.L.,596 Esmaili, M., 742 Earle, J., 149 Espar, T., 334 Easteal, S., 943 Esparza, J., 901 Ebbesen, E. B., 273 Espinosa-Duró, V., 1070, 1094 Ebert, J. I., 929 Esposito, A., 618 Ebert, L. C., 991, 993, 999 Esuli, A., 599, 861 Eck, J. E., 280 Etzioni, O., 601 Eckert, W. G., 973, 983, 985 Euliano, N. R., 488 Eco, U., 690, 779 Evangelista, P. F., 492, 939 Ecoff, N. L., 285 Evenchik, A., 164 Edelman, S., 489 Evett, I., 944 Edmundson, H. P., 596 Evett, I. W., 739, 855, 956 Edwards, D., 82 Ewart, B. W., ix, x, 479, 483, 620–621 Egashira, M., 918 Eyers, D. M., 550 Egeland, T., 944, 1097 Egeth, H. E., 293 F Egger, S. A., 276 Faas, S., 394 Eggert, K., 306 Faegri, K., 927 Eguchi, S., 1071 Faggiani, D. A., 280 Ehrlich, R., 931 Fagni, T., 599 Eigen, J. P., 883 Fahey, R., 324 Einhorn, H. J., 33 Fahlman, S. E., 653 Ekelöf, P. O., 117, 211, 1034, 1059, 1084 Fakher-Eldeen, F., 54, 62, 406 Ekholm, M., 282 Falcone, R., 18, 79 Ekman, P., 89, 150, 284–285, 882 Falkenhainer, B., 846 Elhadad, M., 590 Falkowski, M., 758 Elhag, A., 554 Faloutsos, C., xiii, 720, 722, 731, 735, Eliot, L. B., 629 740–742 Elisseeff, A., 492 Faloutsos, F., 741 Ellen, D., 617 Fan, G., 529 Ellis, H. D., 859, 862 Faratin, P., 205 Elphick, C. S., 447 Farber, P. L., 445 Author Index 1277

Farid, H., 872, 874, 1078 Fix, E., 674 Farina, A., 961 Fleischer, M., 533 Farley, A. M., 170, 174, 184–187, 1044 Flowe, H. D., 273, 1103 Farook, D. Y., 84, 901 Flowers, M., 164, 172, 371, 380, 403, Farrington, D., 882, 1055 1021–1022, 1098 Farzindar, A., 596–597 Flycht-Eriksson, A., 547 Fasel, I. R., 961 Follette, W. C., 367 Faught, W. S., 88 Fong, C. T., 282 Faulk, M., 883 Forbus, K., 846 Faust, K., 498, 593 Foresman, T. W., 515 Feeney, F., 1103 Forgas,J.P.,743 Fein, S., 332 Forrester, J. W., 1042 Feinbert, S., 285 Foskett, D., 539 Feldman, M. S., 32, 36, 324–325, 335 Foster, D., 614–615, 891 Feldman, R., 600–601 Foster, J. C., 687 Fellbaum, C., 547 Fowler, R. H., 861 Felson, M., 154 Fox, F., 68 Feng, Y., 490, 722 Fox, J., 182 Fenning, P. J., 906–908, 910–913 Fox, R., 40, 44–47 Fensel, D., 547, 550 Fox, S., 377 Fenton, N. E., 110, 1077, 1083 France, D. L., 909, 915 Ferber, J., 524 François, A. R. J., 392, 548 Ferguson, T. J., 283 Frank, E., 285, 491 Fernandez, C., 546, 901 Frank, M., 285 Fernández-López, M., 546 Frank, M. G., 150 Ferrario, R., 551 Frank, O., 496 Ferrentino, P., 88 Frank, P. B., 892 Ferrua, P., 216, 1075–1076 Frank, S. L., 380 Ferrucci, D. A., 150, 349, 384, 431 Franke, K., 879 Festinger, L., 27, 942 Frazzoni, M., 961 Feu Rosa, P. V., 486, 619 Freeman, J. B., 170, 177, 183–187 Fieguth, P., 722 Freeman, K., 170, 174, 185, 205, 1044 Field, D., 851 Freeman, L. C., 494–495, 498 Fikes, R. E., 83, 383 Freeman, W. T., 722 Fillmore, C. J., 62 Frémont, J., 84, 901, 1114 Findlay, M., 38 Fresko, M., 600 Findler,N.V.,545 Freund, M. S., 918 Finin, T. W., 379 Freund, Y., 702 Finkelstein, M. O., 886 Fridrich, J., 872, 1166, 1204 Finkelstein, M., 888 Friedman, E., 720 Finklea, K. M., 273 Friedman, H., 321 Fiorelli, P., 289 Friedman, J. H., 490 Fiorenza, E., 92 Friedman, M. T., 170, 622 Firestein, S., 916 Friedman,R.D.,367, 1039, 1119 Fischer, P. C., 538 Friedman, R., 698 Fischhoff, B., 32 Friesen, W. V., 285 Fisher, M., 898, 901, 1114 Frincke, D. A., 709 Fisher, R., 149 Frisch, A. M., 347 Fissell, M. E., 943–944 Frowd, C., xiii, 866–867, 1044, 1059 Fitts, P. M., 506 Frowd, C. D., 863–866 Fitzgerald, P. J., 615, 1093 Fu, C.-M., 112 Fitzmaurice, C., 214, 242, 245 Fu, X., 858 Fiume, E., 904 Fuhr, N., 537 1278 Author Index

Fujisawa, H., 616 Gemmell, J., 391 Fulford, R., 416 Gemperline, P. J., 934 Full, W. E., 931, 934 Geng, L., 490 Fung, T. H., 45 Genrich, H., 901 Furnam, A., 1114 Geradts, Z., 880 Furtado, V., 11, 516, 534–536, 1067 Gerard, S., 379 Gerasimov, M. M., 876–877 G Gerhard, D., 38 Gabbay, D., 259 Gerkey, B., 532–533 Gabbert, F., 898 Gerlofs, J.-M., 161 Gaensslen, R. E., 951 Gerritsen, C., 743 Gaes, G., 243 Gervás, P., 347, 380, 387–388 Gaffney, C., 909 Geserick, G., 885 Gagg, C., 936 Ghorbani, A. A., 701 Gaines, D. M., 14 Giannakopoulos, P., 207 Galindo, F., ii, 239 Giannotti, F., 762 Galitsky, B., 384, 404 Gibbons, B., 618 Gallagher, T., 450 Gibbons, J., 618 Galperin, A., 669 Gibson, J. J., 843 Galton, A. P., 898, 939–941 Gibson, S. J., 864 Gambini, R., 221, 1076 Gigerenzer, G., 259 Gan, H., 344 Gilbert, D. T., 99 Gangemi, A., 553 Gilbert, M., 161, 172 Gao, L., 525 Gilbreth,F.B.,274, 506, 1086 Garani, G., 538 Gilbreth,L.M.,274, 506, 1086 Garbolino, P., 7, 945 Gilles, B., 902 Garcia Landa, J. A., 344 Gillies, D., 1030, 1101–1102 Garcia-Rojas, A., 551 Gilman,S.L.,68, 332 Gärdenfors, P., 17, 92, 419 Gilmore, G., 306 Gardner, A. von der Lieth., 81 Gini, G., v Gardner, J. W., 917–918, 921 Giorgini, P., 17 Gardner, R. M., 841, 973, 978, 985 Giovagnoli, A., 414 Gargi, U., 391 Garland, A. N., 884 Giuffrida, R., 215, 546 Garnham, A., 347 Gladwell, M., 332 Garrett,R.E.,935–936 Glasee, E., 555 Garrioch, L., 292 Glass, R. T., 885 Garry, M., 38, 882 Glenn, C. G., 347–348 Garven, S., 38 Glover, T. E., 295–296 Gastwirth, J. L., 888 Goad, W., 704 Gater, J., 909 Göbel, J., 710 Gauthier, T. D., 930 Goble, C. A., 550 Gawler, M., 183, 620 Goble, L., 321 Gayler, R., 762 Goel, A., 377, 687 Gearey, A., 323 Goff, M. L., 884 Geddes, L., 8, 946–947 Golan, T., 561 Gedeon, T. D., 630, 633 Goldberg, H. G., 511, 1070 Geffner, H., 1030 Goldberg, M., 678–680, 682–685, Geiger, A., 408, 411–413 743 Geiselman, R. E., 294 Golden, R. M., 379 Gelbart, D., 596, 606 Goldenberg, J., 600 Gelfand, M., 704 Goldfarb, C. F., 538 Gellatt, D., 724 Goldin,H.E.,289 Author Index 1279

Goldman, A. I., 92, 250–252, 283, 300, 1057, Grootendorst, R., 170 1059, 1110 Grosz, B., 85, 91, 350 Goldman, N., 352 Grover, C., 596 Goldman, R. P., 353 Grubbs, F. E., 519 Goldman, S. R., 371–372 Gruber, T. R., 387, 545–546, 554 Goldsmith, M., 900 Grüninger, G., 545 Goldsmith, R. W., 53, 210, 212, 247 Grüninger, M., 545–546 Goldstein, A. G., 265, 283, 859 Gu, J., 718–719 Goldstein, C. J., 27, 283, 942 Gudjonsson, G. H., 281–283 Goljan, M., 872 Guest, A. G., 1087 Gómez-Gauchía, H., 388 Guha, R. V., 546, 754 Gómez-Pérez, A., 546, 550 Guidotti, P., 245 Gonçalves, T., 606 Gulotta, G., 28–29, 149, 174, 239, 1044 González Ballester, M. A., 904 Gunawardena, C. A., 959 González Císaro, S. E., 544, 550 Gunn, J., 883 González-Calero, P. A., 387 Guo, W., 550 Good, I. J., 302 Guo, X., 602 Gooday, J. M., 899 Gupta, H., 511 Goodchild, M. F., 515 Gupta, P., 700 Goode, G. C., 938 Gurr, C., 161 Goodman, G., 282 Gutebier, T., 445 Goodrich, P., 328, 336 Gutés, A., 924 Goranson, H. T., 547 Gutiérrez, M., 551, 902 Gordon, T. F., ii, xi, 165–166, 170, 182, 332, Güven, S., 390 1031, 1036, 1087, 1100 Guyon, I., 492 Gotts, N. M., 84 Gwadera, R., 728 Gouron, A., 1024 Gyongyi, Z., 727, 731 Governatori, G., ii, 1042 Grabherr, S., 999 H Granhag, P. A., 149 Haber, L., 956 Grant, J., 79, 306 Haber, R. N., 956 Grassano, R., 150 Habermas, J., 173, 303 Grasso, F., 149–150, 161, 170–172 Hachey, B., 596 Gravel,A.J.,928 HaCohen-Kerner, Y., 244–246, 405 Gray, C., 264 Hafstad, G., 292 Gray,G.L.,611, 675–678 Hagan, J., 1103 Green,A.D.P.,1071 Hage, J., 1042 Green, E., 6 Haggerty, T., 89 Greene, E., 38, 283 Haglund, W. D., 896 Greene, J. R., 1099 Hahn, U., 551, 594 Greening, C. P., 274, 506 Haïm, S., 338, 407 Greenwood [=Atkinson], K., 173 Hainsworth, S., 935 Greenwood, K., 161 Hall, B., 282 Greer, S., 845 Hall, D., 343 Gregg, D. G., 714–715, 717 Hall, J., 901 Gregory, F., 275 Hall,M.J.J.,268–270, 1023 Grey, T., 305–306 Hall, R. P., 1027 Grey,T.C.,305 Hall, S., 923 Grice, H. P., 159, 301, 303, 309 Halliwell, J., 104–106, 692–693 Grieve,M.C.,886 Halpern, J. Y., 1030 Grifantini,F.M.,1076 Halpin, H. R., 379 Griffiths, P. E., 89 Hamard, S., 842 Groarke, L., 161 Hamblin, C. L., 155–156 1280 Author Index

Hamill, J. T., 504 Heiser, J., 687 Hamilton, D. L., 332 Hektor, J., 710 Hamkins, J. D., 65 Hella, M., 247 Hamlin, C., 506 Hempelmann, C. F., 559 Hammersley, R., 883 Hemphill, M. R., 618 Hammon, W. S., 912 Henderson, Z., 865 Hamscher, W., 853 Hendler, J., ii, 547 Han, I., 100, 103, 600, 629, 638, 667, 673, 721, Hendrix,G.G.,350, 545 728, 748 Henkart, P. C., 912 Hanba,J.M.,292 Henrion, M., 118 Hancock P. J. B., 863–865 Henry, J., ix Hand, D. J., 483 Henry, R. C., 934–935, 940 Handler Miller, C., 342 Henzinger, T. A., 898 Hanlein, H., 613 Hepler, A. B., x Hanrahan, P., 872 Herold, J., 548 Hansel, M., 233 Herrero, S., 943 Hao, Y., 959, 962 Hershkop, S., 676 Harder, C., 515 Hervás, R., 387 Harley, E. M., 38 Heuston, R. F. V., 299, 1062 Harley, I., 994 Hewer, L., 1034, 1039–1040 Harman, G., 48, 173 Hewstone, M., 332, 478 Harman,G.H.,48 Heylen, D., 394 Harper, P., 1064 Heylighen, F., 305 Harper, W. R., 274, 505–506, 509–510 Hickey, L., 1060 Harris, D. H., 274, 505–506, 509–510 Hicks, P. J., 918 Harris, M. D., 62 Higuchi, T., 968, 972 Harris, R., 520, 687–688 Hildebrand, J. A., 912 Harrison, J. M., 617 Hill, C., 27, 911, 942, 1041 Hart,H.L.A.,302, 631 Hilton, J. L., 332 Hart, P. E., 489, 600 Hilton, O., 617 Hartley, J. R. M., 536, 1036 Hind, S.-L., 949 Hartwig, M., 149 Hines, D. C., 968 Hasan-Rokem, G., 408 Hinton, G. E., 346, 379, 489, 649, 724 Hasegawa, O., 335 Hinz, T., 291 Hasel, L. E., 283, 863 Hirschman, L., 378 Hashtroudi, S., 38 Hirst, G., 547 Haskell, N. H., 884 Hiscock-Anisman, C., 18, 150 Hastie, R., 14–15, 30–37, 51, 324–325, 335, Hitch, G. J., 73 1075, 1112 Hitchcock, D., 171 Hatfield, J. V., 419, 918 Ho, D., 89 Hauck, R. V., 511, 603, 748–750 Ho, Sh.-Sh., 392 Hauser, K., 904 Hoare, C. A. R., 899 Hawkins, K., 247 Hobbs, D., 275 Hayes, P. J., 588, 914 Hobbs, J., 379, 392, 548 Hayes-Roth, B., 82, 394, 396, 402, 526, 1034 Hobbs, J. R., 379 Haygood, R. C., 274, 506 Hobbs, P., 406 Haykin, S., 652 Hobson, J. B., 619–620, 626, 658 Hayvanovych, M., 680 Hobson, P., 548 Headrick A. M., 616–617 Hochberg, J., 560 Hearst, M., 621 Hodges, J. L., 674 Heather, M. A., 901 Hoen, P., 926 Heaton-Armstrong, A., 283, 1073 Hoffman, H. G., 38 Hecht-Nielson, R., 664 Hogarth, J., 242 Author Index 1281

Hogarth, R. M., 33 Hulstijn, J., 340 Hogue, A., 32 Humphreys, G., 873 Hohmann, H., 170 Hung, N. D., 171 Hohns, H. M., 935 Hunt, L., 1031 Holland, B., 1064 Hunter, A., 170 Holland, N., 356 Hunter, D., 170, 619–620, 626, 639, 656–657, Hollien, H. F., 883 659–660, 662 Holloway, R. T., 513 Hunter,J.R.,893 Holmes,O.W.,Jr.,1085, 1087 Hunter, J., 842 Holmes, R., 28 Huntley, C., 384 Holmin, S., 924 Hutchins, M., 752 Holmström-Hintikka, G., 171, 1063 Hutchinson, J. R., 904 Holstein, J. A., 32 Huttenlocher, D., 494 Holt, A. W., 901 Hutton, N., 242–243 Holz, T., 706, 710 Hwang, B.-Y., 544 Holzner, S., 537 Hwang, C. H., 379 Honbo, D., 483 Hwang, H. S., 958 Hong, L., 960 Hyde,H.A.,495, 926 Hoonlor, A., 492 Hope, L., 295, 859 I Hopke, P. K., 934 I¸˙scan, M. Y., 884 Hopper, T., 958 Iacoviello, F. M., 30 Horie, C. V., 445 Idika, N., 744 Horn, D., 489 Igaki, S., 1071 Horn, R., 511 Imbrie, J., 934 Horowitz, I., 328 Imielinski, T., 623 Horrocks, I., 547 Imwinkelried, E. J., 956, 1051–1052 Horry, R., 264, 283 Inbau, F. E., 282 Horsenlenberg, R., 282 Ineichen, M., 952 Horty, J. F., 439, 1042 Inman, K., 943 Horwich, P., 250 Ireland, C. G., 863 Hosch, H. M., 95 Ireson, J., 233 Houck, M. M., 889 Iscan, M. Y., 874 Hovy, E., 403, 601 Isenor, D. K., 960 Howe, C. J., 410 Ishikawa, H., 544, 723 Howe, M. L., 38 Ishizuka, M., 396 Howe, M., 38 Ishtiaq, F., 448–450 Howlett, J. B., 510 Ismail, R., 721 Hreinsdottir, H., 281 Isobe, Y., 1033, 1098 Hryciw, B., 283 Israel, D., 1110 Hu, M., 601 Israel,J.H.,1099 Hu, W., 607–610, 701, 1009 Ito, T., 717, 961, 967, 970–972 Huang, H., 529 Iversen, J., 927 Huang, R., 702 Ivkovic, S., 623 Huang, Y. P., 762 Izard, C. E., 88, 866 Huard, R. D., 402 Huber, R. A., 616–617 J Hubert, M., 519 Jackowski, C., 996 Hueske, E. E., 985 Jackson, B. S., 119, 259, 323–324, 328, 334, Hughes, P. A., 1071 1024, 1060, 1093, 1114, 1121 Hughson, I., 596 Jackson, G., 855 Huhns, M., 533 Jackson, P., 602, 607 Hulber, M. F., 525 Jacoby, J., 1103 1282 Author Index

Jacovides, M., 392–393 Jøsang, A., 111, 721 Jahnke, M., 696 Josephs, S., 282 Jain, A. K., 390, 1033 Josephson, J. R., 40, 44, 48, 1021 Jain, R., 390–391 Josephson, S. G., 40, 44, 48, 1021 Jajodia, S., 454, 899 Joshi, A., 490, 553 James, M., 503 Josselson, R., 351 James, S. H., 880, 935, 973, 978, 983, 985 Joyce, C., 884 Jameson, A., 29, 83, 384, 403 Juan, L., ii, 709, 940 Jamieson, A., 841, 884, 946 Judson, G., 1049 Jan, J., 76, 728 Julius, A., 331 Janaway, R. C., 884 Jung, C., 721 Janner, G., 1061 Junger, E. P., 885 Janowitz, M., 150 Junkin, T., 409 Jayne, B. C., 282 Jedrzejek, C., 758–761 K Jeffay, K., 697 Kadane, J., x, 7, 52 Jefferson, M., 1034–1036, 1092, 1096 Kahan, D. M., 315 Jenkins, R. V., 1026 Kahn, D., 689–690 Jenkinson, B., 687 Kakas, T., 45 Jennings, N., 205, 257 Kalender, W. A., 995 Jennings, N. R., 524, 592 Kalera, M. K., 616 Jensen, D., 727–728 Kalja, A., 546 Jerdon, T. C., 448 Kamber, M., 100, 103, 600, 629, 638, 667, 673 Jha, S. K., 921–923 Kamel, M., 961 Jiang, D., 728 Kamisar, Y., 1099 Jiang, X., 958, 1094 Kanehisa, M., 704 Jiao, Y., 692 Kanellis, P., 687 Jia-Yu Pan, J.-Y., 741 Kangas, L. J., 514, 517 Jin, F., 722 Kannai, R., 1048 Jin, Sh., 529 Kantrowitz, M., 356, 384 Jin, X., 702 Kaoru, H., 106, 668 Jing, H., 591 Kaptein, H., 8 Joachims, T., 605 Karasz,J.M.,513 Johannes Keizer, J., 551 Karchmer, C., 280 John, T., 274 Karttunen, L., 856 Johnson, B. C., 618 Karunatillake, N., 257 Johnson, G. W., 933–934 Karypis, G., 728 Johnson, M. K., 38, 183–184, 186–187, Kashy, D. A., 149 872–874 Kasiviswanathan, H., 616 Johnson, N. S., 347–348 Kass, A. M., 373 Johnson, P. E., 205 Kassin, S. M., 27, 65, 90, 281–283, 942, Johnson, P. N., 353 1005–1006, 1041 Johnson, S. L., 890 Katagiri, Y., 1110 Johnston, V. S., 859, 863–864, 1068 Katai, O., 428 Jones, A., 1042 Kataoka, M., 1033 Jones, A. J. I., 1046 Katayama, K., 544 Jones, C. A. G., 880, 1064 Kato, Z., 722 Jones, J. G., 927 Katz, L., 499–503, 721 Jones, P., 855 Kauer, J. S., 924 Jones, R., 149 Kaufman, L., 490, 877 Jones, R. E., 506 Kaufmann, M., 498, 622 Jones, S. S., 344 Kawai, T., 918 Jory, S., 983 Kawakami, H., 428 Author Index 1283

Kaye, B. H., 883 Klein, M. C. A., 382, 550, 743 Kaye,D.H.,367, 888, 955 Klein, S., 382 Kearey, P., 911 Kleinberg, J. M., 494, 498, 721, 726–727, 741 Keh, B., 884 Kleinman, S., 238 Keila, P. S., 678 Klimt, B., 677 Keller, E., 618 Klock, B. A., 921 Kelly, J., 675, 728, 794 Klotz, E., 995 Kelsen, H., 302, 304 Klovdahl, A. S., 498, 510 Kemp, K. N., 67 Klusch, M., 532–533 Kemp, R., 863 Knecht, L. E., 588 Kempe, D., 494 Kneller, W., 292 Kenis, P., 498 Kneubuehl, B. P., 890 Kenji, T., 968, 971 Knight, B., 84, 90, 669, 899, 1114 Kennedy, D., 306 Knott, A. W., 936 Kennedy, I., 1080, 1103 Knox, A. G., 445–446, 458 Kephart, J., 744 Knupfer, G. C., 883 Keppel, R. D., 514, 518–519 Kobayashi, K., 968, 972 Keppens, J., 49, 104, 844–848, 850–858, 1005, Kobayashi, M., 717 1044, 1079 Koehler, J., 955 Kerner, Y., 244 Koehler, J. J., 948, 952, 955 Kerr, N. L., 328–329 Kohavi, R., 702 Ketcham, K., 38 Köhnken, G., 294 Khemlani, S., 1110 Kohonen, T., 489–490, 517, 664 Khuwaja, G. A., 700, 958–959, 1033, Köller, N., 617 1070–1071 Kolmogorov, V., 725 Kibble, R., 173 Kolodner, J. L., 353, 383 Kibler,D.F.,1027 Komoroske, J., 728 Kiechel, K. L., 281 Kompatsiaris, Y., 548 Kieras, D., 694, 1040 Kong, A., 961 Killeen, J., 979 Konishi, R., 917 Kim, B. M., 934–935 Koo, Y. H., 548 Kim, Ch.-S., 958 Koppen, M., 380 Kim, D. S., 701 Koppenhaver, K., 617 Kim, P., 391 Korb, K., 149 Kim, R. K., 281 Korniss, G., 494 Kim, S.-M., 601 Kort, F., 657 Kim, S.-R., 503 Korycinski, C., 596 Kinder, J., 698 Kosala, R., 553 Kindermann, R., 722 Kothari, R., 490, 949 King, B. F., 450 Kou, Y., 762 King, N. J., 1099 Kovacs, D., 632 Kingsnorth, R., 1103 Kovács-Vajna, Z. M., 961 Kingston, J., 567, 755 Kowalski, K., 45 Kinkade, R. G., 506 Kowalski, R. A., 45, 170, 350, 899 Kinton, R., 539 Krabbe,E.C.W.,129–130 Kintsch, W., 349–350 Krackhardt, D., 498–499, 503–504 Kirkendol, S. E., 149 Krane, D., 943 Kirkpatrick, S., 724 Krantz-Rülcker, C., 924 Kirsch, I., 38 Kraus, S., 79, 85, 90, 164, 529 Kirschenbaum, A., 289 Krause, P., 182 Kirschner, P. A., 168 Krawczak, M., 943 Kitayama, Sh., 89 Kreibich, C., 696, 709 Kitchener, A. C., 447 Krishna, P. V., 703 1284 Author Index

Krishnamoorthy, M. S., 678, 680 Lapalme, G., 80, 596 Krishnapuram, R., 490, 553 Lara-Rosano, F., 1048 Kristensen, T., 959 Laskey,K.B.,115, 255, 1107, 1108 Krogman,W.M.,884 Lassila, O., 547 Kronman, A. T., 306 Lassiter, D., 282 Kruiger, T., 170 Lassiter, G. D., 149 Kruse, W., 687 Latendresse, M., 698 Kruskal, J. B., 495 Latour, B., 1027 Ku, Y., 717, 722 Latourette, K. S., 942 Kudenko, D., 385 Lau, W. H., 1071 Kuflik, T., 53, 62, 65, 406, 1023 Laufer, B., 942 Kuglin, C. D., 968 Laughery, K. R., 861 Kulikowski, C., 655 Laurel, B., 385 Kumar, A., 1070, 1097 Lauritsen, J. L., 244 Kumar, P., 483 Lauritzen,S.L.,943–944 Kuo, C. H., 960 Lautenbach, K., 901 Kupferman, O., 898 Law, F. Y. W., 260, 263, 692 Kurosawa, A., 342 Lawrence, S., 601 Kurzon, D., 324 Laxman, S., 728 Kuwabara, K., 720 Leach, A.-M., 148 Kvart, I., 118 Leake, D. B., 373, 377, 1036 Kwan,J.L.P.,379 Leal, S., 149 Kwan,M.Y.K.,260, 263, 692–693, 717–719, Leary, R., xi, xiii, 479, 511 736–740 Leary, R. M., 511, 550, 559, 566, 714, 767, Kyle, S., 994 1072 Lebbah, M., 517 L Lebowitz, M., 353, 383 La Moria, R. D., 514 Lee, B., 717 Labor, E., 409 Lee, C. H., 112 Lacis, A., 616 Lee, C.-J., 960 LaFave, W. R., 1099 Lee, G., 380 LaFond, T., 743 Lee, H. C., 883 Lagasquie-Schiex, M.-C., 170 Lee, H.-K., 503 Lagasquie-Schiex, S., 170 Lee, J., 181, 480, 546, 601 Lagerwerf, L., 48 Lee, J.-M., 544 Lai, K. Y., 260 Lee, K., 149 Lai, P., 263, 692 Lee, L., 601 Laifo, J., 386 Lee, M., 852 Lakoff, G. P., 344, 347, 614–615 Lee, R., 246, 755–756 Lam, S. C. J., 546 Lee, T.-K., 706 Lamarque, P. V., 340 Lee, V., 762 Lambert, J., 855 Leedham, G., 616 Lan, X., 494 Leedy, L. W., 511 Landman, F., 857 Lefebvre, W. C., 488 Lane, B., 883 Leff, L., 537 Lane, S. M., 38 Legary, M., 687 Lang, J., 17 Legosz, M., 273 Lang, R. R., 349 Legrand, J., 667 Langbein, J. H., 289 Lehmann, F., 545 Lange,T.E.,380 Lehnert, W. G., 353–354, 399 Langenburg, G. M., 956 Lehrer, K., 92 Langston, M. C., 380 Leijenhorst, H., 842 Lanier, K., 149 Leippe, M. R., 93, 283 Author Index 1285

Leith, P., 248, 662, 1048 Lind, A. E., 618 Lemar, C., 892 Linde, C., 82 LeMay, C. S., 149 Lindquist, R. J., 892 Lemons, J., 1034 Lindsay, D. F., 283 Lempert, R., 1119 Lindsay, D. S., 38 Lenat, D., 546, 754 Lindsay, J. L., 149 Lenci, A., 547 Lindsay,R.C.L.,95, 149, 264, 283, 295 Leng, P. H., 246 Linford, N., 907 Leng, R., 845 Lingras, P., 490 Lengers,R.J.C.,652 Lipske, M., 445–446, 449–451, 464, 468–470, Lennard, C., 948 474 Lenzi, V. B., 550, 559 Lipton, L., 48–49, 327 Leo, R. A., 282, 499, 1069 Lipton, P., 48 Leon, C., 27, 388 Lisetti, C. L., 871 Leone, A., 961 Liske, C., 152, 938 Leskovec, J., 721, 741 Liu, B., 601 Lesser, V., 532 Liu, D., 526 Lester, J. C., 203, 381, 384, 396 Liu, H., 385, 611 Leung, S. H., 1071 Liu, V., 687 Levene, M., 538 Lively, S., 109 Levesque, H. J., 80 Llewellyn, K. N., 323, 1085 Levi, J. N., 275 Lloyd, C., 883 Levi, M., 618 Lloyd-Bostock, S., 214 Levin, B., 886 Lo Piparo, A., 216 Levine, F. J., 283 Lobina, E., 343 Levine, T. R., 281 Locard, E., 407, 890, 905, 926, 941–942, 1061 Levinson, J., 617, 1073, 1104 Lockwood, K., 720 Levitt, T. S., 115, 255, 1107, 1108 Lodder, A. R., 168, 221, 252, 629 Levy, D., 88, 497 Loftus, E. F., 38, 63, 283, 297, 882 Lewis, B., 183, 620 Logie, R., 292 Lewis, C. M., 149, 161 Loh, W.-Y., 490 Lewis, C. W., 935 Lohr,S.L.,244 Lewis, D., 304, 310 Loiselle, C., 353 Lewis, N. S., 918 Loizou, G., 538 Lewis, P. R., 935–936 Lonergan, M. C., 917–918 Lewis, T., 519 Longley, P. A., 515–516 Leyton-Brown, K., 524 Lönneker, B., 380, 388–389 Li, Ch.-Ts., 689 Lönneker-Rodman, B., 388 Li, J., 611 Lonsdorf, R. G., 883 Li, M., 700 Lord, J., 444 Li, Q., 692, 949 Lord, P. W., 550 Li, S. Z., 722, 724, 1033 Louchart, S., 388 Li, Y., 723 Loui,R.P.,168, 170, 182 Li, Z., 391, 550 Louis, J.-H., 150 Liang, T., 629 Löwe, B., 65, 418–426 Liao, Y., 607 Lowe, G., 900 Liberman, A., 28 Lowes, D., 183 Lieblich, A., 351 Loyall, A. B., 384 Liebowitz, J., 1086 Loyek, C., 548 Light, M., 378 Loyka, S. A., 280 Light, R., 537 Lu,C.T.,762 Lim, R., 95 Lu, Q., 494 Lin, C. H., 709 Lucas, R., 483 1286 Author Index

Lucy, D., 887 Maley, Y., 324 Luger, G. F., 20, 25, 146, 169, 352, 480, 850, Malgireddy, M., 891 1045, 1095, 1119 Malinowski, E. R., 934 Luhmann, T., 994 Malone, B. E., 149 Luhn, H. P., 596 Malone, C., 38 Luk, A., 1071 Malone, D. S., 99 Lukáš, J., 872 Maloney, A., 982–983 Lundström, I., 924 Maloney, K., 976, 979–980, 983 Luo, Y., 550 Malpass, R. S., 38, 294–295, 863 Lutomski, L. S., 53 Malsch, M., 408 Luus,C.A.E.,73, 283, 1091 Maltoni, D., 948, 951 Lykken, D. T., 284 Manco, G., 762 Lynch, K. J., 749 Mandler, J. M., 347–348 Mani, I., 487, 588–594 M Mann, S., 149 Ma, J., 84, 90, 899 Mannila, H., 483, 728 Maas, A., 294 Manning, C., 38, 598 Macagno, F., 154, 156 Manning, K., 617 MacCormick, D. N., 1047 Mannino, S., 924 MacCormick, N., 129, 324, 1112 Manouselis, N., 551 MacCrimmon, M., 8, 30, 45, 115, 301, Manschreck, T. C., 87 323–324, 1074–1075 Manucy, G., 923 MacDonald, E. M., 1022 Marafioti, L., 219 MacDonald, T. K., 264 Marando, L., 95 MacDonell, H. L., 973, 975 Marcus, P., 237 MacDougall, K. A., 913 Mares, E. D., 317, 319–321 Maceo, A., 957 Margot, P., 371, 844, 948 Machado, I., 400 Marinai, S., 616 MacIntosh, R., 1103 Marinaro, M., 618 Mackaay [sic], E., 84, 901, 1114 Marineau, R. F., 497 MacKeith, J. A. C., 281 Markus, H. R., 89 Mackenzie, D. L., 1055 Marron, S., 697 MacLane, S., 532 Mars,N.J.I.,550 MacMillan, M., 263, 265, 298 Marshall, C. C., 182 Macneil, I., 306 Martens, F. T., 279 Macrae, C. N., 332 Martens, T. K., 294 Maedche, A., 547 Martin, A. W., xiv, 7, 32–33, 942 Magdon-Ismail, M., 680 Martin, P., 379 Mage, J. J., 285 Martino, A. A., xiii, 2–4, 8, 36, 54, 93, 152, Magerko, B., 386 207, 209, 212–213, 411, 1114 Magliano, J. P., 380 Martins, J. P., 20 Magnenat-Thalmann, N., 902 Marwitz, D. B., 862 Magnussen, S., 270, 283 Maslow,A.H.,383 Maguire, D. J., 515 Mataric, M., 532–533 Maguire, M., 274 Mateas, G., 380, 385–387, 400 Mahendra, B., 881 Mateas, M., 351, 356, 381, 386, 400–401, 404 Maher, M. J., 480 Mathur, A. P., 744 Mahesh, K., 560 Matkovskly, I. P., 752 Maida, A. S., 92, 152, 545 Mattern, R., 884 Maimon, O. Z., 702 Matthijssen, L. J., 182, 204 Maio, D., 951 Maudet, N., 259 Maji, P., 490 Maxion, R. A., 703 Makinson, D., 17 Maxwell-Scott, A., 283 Author Index 1287

Maybury, M. T., 594 Mellett, J. S., 909, 912 Maylin,M.I.S.,864 Mellon, L., 1103 Mays, S., 884 Melnik, M., 720 Mazzoni, G. A. L., 38 Melo, A., 535–536 McAllister, H. A., 95 Memon, A., 18, 27, 38–39, 73, 150, 264, 270, McBurney, P., 161, 170–171, 173, 928 282–283, 291–292, 295, 409, 859, 872, McCabe, S., 38 882, 1091 McCallum, A., 678 Memon, N., 73 McCann, D., 699, 906–907 Mena, J., xiv, 484, 486–489, 492–493, 505– McClelland, J. L., 42, 346, 379 509, 512, 514, 516, 519–521, 523, 525, McConachy, R., 149 534, 536–537, 547–548, 552, 601, 616, McConville, M., 845 682, 695–697, 699, 701, 708, 712–713, McCormick, E. J., 506 717, 752–753, 859–860, 1044, 1069, 1078, McCornack, S. A., 18, 150 1090, 1107, 1109 McCulloch, M., 923 Menard, V. S., 1006 McCulloch, W. S., 644 Mendelsohn, S., 289 McDermott, D., 898 Menezes, R., 535 McDowell, J., 379 Merckelbach, H., 38, 282 McEnery, A. M., 183 Merkl, D., 621, 664, 666 McGeorge, P., 27 Merlino, A., 594 McGrath, C., 498, 503 Merricks, T., 92 McGuinness, D. L., 547 Merrill, T. W., 305 McGuire, P. G., 9 Merton, R. K., 32 McGuire, R., 164, 172 Mertz, E., 334 McGuire, W. J., 28 Messo, J., 38 McHugh, J., 687, 695 Meudell, P. R., 73, 1091 McKee, N. H., 904 Meyer, R. K., 317, 320–321 McKellar, P., 263, 265 Meyer, T., 351, 546 McKeown, K. R., 495 Miao, W., 888 McKinnon, A. D., 709 Michie, D., 675 McLeod, M., 410–411 Michon, J. A., 32, 292 McLeod, N., 943 Midlo, C., 957 McMechan, G. A., 912 Miers, D., 1087–1088, 1093 McMenamin, G. R., 614 Miikkulainen, R., 380, 666 McNall, K., 65, 90, 282–283 Mildenhall, D. C., 927 McNally,R.J.,38 Miller,D.G.,38 McNeal, G. S., 30, 301 Miller, F., 1103 McQuiston-Surret, D., 863 Miller, L. S., 613 Me, G., 511–512 Miller, M. T., 690, 841, 883, 977 Mead,S.J.,381 Miller, P., 865 Meade, M. L., 73, 1091 Miller,R.E.,901 Meehan, J. R., 338–339, 376, 382–383 Milne, R., 149, 282, 927 Meester, R. W. J., 943 Milton, J. L., 506 Mégret, M., 150 Ming Hsiang, T., 513 Mehrotra, H., 700 Minh, T. T. H., 537 Meijer, E. H., 445 Minkov, E., 675 Meikl, T., 196, 248, 623, 662, 1048 Minsky, M., 24, 350, 394, 648 Meikle, T., 248, 662, 1048 Mironov, A., 704 Meissner, C. A., 27, 265, 281–283, 942, 1041 Mishler, E. G., 351 Meissner, K., 551 Misra, S., 483, 703 Meister, J. C., 380, 388–389 Missikoff, M., 546 Meldman, J. A., 81, 901 Mitchell, H. B., 722 Melinder, A., 270, 283 Mitchell, T. M., 379, 491, 853, 1069 1288 Author Index

Mitra, S., 490 Murtaugh, M., 341 Mitschick, A., 551 Musatti, C. L., 63 Mittag, D., 251, 1058 Musgrove, P. B., 483, 517 Mittal, V. O., 590 Musgrove, P., 84, 154, 483–484 Mizanur Rahman, S. M., 700–701 Myers, T., 537 Modgil, S., 170 Moens, M.-F., 588, 595–597, 602, 622 N Moenssens, A., 952, 956 Na, H.-J., 958 Moh, S.-K., 317 Nachenberg, C., 742 Mohamed, D. H., 559 Nadiadi, Y. M., 959 Mohay, G., 611 Naess, E., 709 Mokherjee, D., 153 Nafe, B., 884 Molina, H. G., 727 Nagano, T., 885 Molluzzo,J.C.,495 Nagel, I. H., 1103 Mommers, L., 1096 Nagel, S., 246, 657 Monachini, M., 547 Nagle, H. T., 917, 922 Monahan, J., 38 Nakhimovsky, A., 537 Monmonier, M. S., 516 Nambiar, P., 885 Moore, D. S., 921 Nance, D. A., 946 Moore, J., 172 Nanto, H., 918 Moorman, K., 376 Napier, M. R., 512 Mora, E., 334 Narayanan, A., 378, 616 Morelli, L. M., 61 Narchet, F. M., 281 Moreno, J. L., 497–498 Nattkemper, T. W., 548 Morey, D., 594 Nau, D. S., 550 Morgan, J. E., 150 Navarro, A., 388 Morgan, T., 528, 1034 Navigli, R., 546 Morris, C. G., 510, 1032 Neave, R., 875 Morris, J. R., 938 Neaves, P., 918 Morris, R. N., 617, 1073, 1104 Nebel, B., 17 Morris, S. B., 946 Neil, M., 110, 1077, 1083 Morrison, R. D., 928–933 Neill, A., 340–341 Morton, A., 7, 306 Neimark, J., 38 Mørup, M., 741 Nenov, V. I., 380 Morzy, M., 717 Nespor, S., 943 Moskowitz, H., 629 Netzer, O., 600 Mostad, P., 944, 1097 Neufeld, P., 282 Motoda, H., 611 Neukom, R., 952 Mott, B., 203 Neumann, K., 283 Moulin, B., 80, 82–84 Nevatia, R., 392, 548 Movellan, J. R., 961 Neves, J., 245 Mrvar, A., 498–499 Neville, J., 494, 727–730, 743 Mueller, E. T., 339, 378–379, 383 Newburn, T., 176 Muhlenbruck, L., 149 Newell, A., 528 Mukherjee, I., 702 Newman, C., 863 Müller, J., 396 Newman, M. E., 494 Mumford, D., 725 Ng,H.T.,379, 872 Munn, K., 545 Ng-Thow-Hing, V., 904 Murbach, R., 243 Nicolle, D., 876 Murphy, B. L., 928–933 Nicoloff, F., 65 Murphy, R. G., 445 Nicolson, D., 2, 1055, 1068 Murray, D. M., 283 Nielsen, L., 943 Murray, R. C., 905 Nielsen, T., 281 Author Index 1289

Niemi, P., 282 Okada, N., 356 Niesz, A. J., 356 Okada, T., 346 Nigro, H. O., 544, 550 Olaisen, B., 944, 1097 Nijboer, H., 408, 1062 Olderog, E.-R., 901 Nijboer, J. F., 237, 880 O’Looney, J., 516 Nijholt, A., 340, 394 Olsen, S. H., 340 Nilsson, N. J., 83, 383 Olshen, R. A., 490 Nirenburg, S., 560 Olson, E. A., 95–99, 270, 283 Nissan, E., ix, xii, xiii, 1–4, 6–8, 11, 16–19, Olson, S. L., 444 23–24, 26–28, 32, 36, 53–55, 57, 61, Oltramari, A., 551 67–68, 76–77, 80–84, 86, 89–91, 117, Omelayenko, B., 550 126, 137, 150, 152, 173–174, 207, 209, Onega, S., 344 212–213, 247, 289, 323, 331, 336–337, Ongvasith, P., 511 339–340, 343–344, 380, 394–395, Onyshczak, R. J., 958 403–404, 406–408, 411–413, 416, 418, Onyshkevych, B., 560 429–434, 458, 460–461, 483, 487, 494, Opdorp, G. J., 529, 1103 538, 553, 587, 603, 611, 668–670, 767, Orgun, M. A., 546 841, 859, 878, 881, 887, 899–901, 928, Ormerod, T. C., 842 1017, 1023, 1034, 1046, 1049, 1055, 1084, O’Rorke, P., 394 1114 Ortony, A., 340, 346, 394 Nissen, K., 617 Osaki, T., 917 Nitta, K., 335 Osborn, A. S., 617 Niu, X., 392, 692 Osborne, C., 282, 288, 294, 862, 1023, 1026, Noedel, M., 841 1029, 1037–1039, 1043, 1048, 1054, Nonn, E., 243 1056–1057, 1074, 1077–1079, 1100, 1104, Noon, R. K., 935–936 1109 Noordman, L. G. M., 380 O’Shea, C., 706 Nordby, J. J., 880, 935 Oskamp, A, ii, 529, 1103 Norman, D. A., 350 O’Sullivan, M., 150, 285 Norman, J., 170 Osuna, R. G., 922 Norman, T. J., 110 Otgaar,H.P.,38 Norvig, P., 353 Oudot, L., 706 Norwick, R. J., 281–282 Ouellette, J., 916–917 Notsu, A., 428 Overill, R. E., 260–263, 687, 692–694, 889 Novitz, D., 341 Owen, G., 253–255 Nowakowska, M., 86, 357 Owens, C. C., 373 Nunez, D. T., 864 Oxtoby, O., 864 Ozisikyilmaz, B., 483 O Özsoyoglu,˘ Z. M., 538 Oard,D.W.,682 Oatley, G. C., x, 479, 511, 620–621, 844 P Obaidat, M. S., 703 Pacuit, E., 418–421, 424 O’Barr, W. M., 316, 334, 618 Pacuit, P., 419 O’Brien, B., 328 Page, L., 503, 721, 726–727, 731 O’Brien, T. P., 149 Paglieri, F., 173, 257–259 Ochaeta, K. E., 717, 719 Paiva, A., 400 Oehler, D., 450 Pakes, F. J., 32, 392 Ofshe, R. J., 282 Pal, S. K., 490 Ogata, T., 388 Palagiri, C., 706 Ogden, J., 1068 Paley, B., 294 Ogston, E., 533 Pallotta, G., 96 Ohta, M., 544 Palmbach, T., 883 Oinonen, K., 394 Palmer,J.C.,38 1290 Author Index

Palmer, M. S., 379 Pennock, D. M., 601 Pamplin, C., 1064 Penrod, S. D., 38, 270, 283, 291, 293–294, Pamula, V. K., 921 297, 882–883, 1034, 1039–1040 Panangadan, A., 392 Penry, J., 862–863 Pandit, S., 720, 722–723, 725–728, 730–736 Perdisci, R., 697 Pang, B., 600–601 Perego, R., 961 Pankanti, S., 957, 960, 1033 Pérez y Pérez, R., 385 Pannu, A. S., 621, 674–675 Perlis, D., 79, 347, 377 Paolucci, M., 18 Perloff, M., 439 Papadimitriou, C. H., 694 Péron, A., 1067 Papageorgis, D., 28 Peron, C. S. J, 687, 949 Papapanagiotou-Leza, A., 207 Perrins, C., 449 Papert, S., 648 Persaud, K. C., 917–918 Papineau, D., 321 Petacco, A., 96 Pardo, M. S., 107–110, 256, 281, 288, 293, Peter, R., 445 326–327, 367, 886, 946, 1034, 1039, 1102, Peters,B.J.,890 1106 Peters, G., 490 Pardue,H.L.,885 Peters, S., 1110 Parent, X., 1042 Peterson, D, M., 8 Parikh, R., 418–419 Peterson, J. L., 901 Park,J.F.,935 Peterson, M., 280 Park,J.S.,701 Petkov, E., 554 Park, N., 242, 252 Petri, C. A., 901 Park, R. C., 367 Petridis, M., 669 Parkinson, B., 89 Petty, R. E., 28 Parry, A., 82 Peuquet, D. J., 392 Parsons, S., 170, 173, 182, 205, 928 Pevzner, P., 704 Parton, D. A., 233 Pezdek, K., 291 Partridge, R. E., 521 Pfeifer, R. L., 149 Parunak, H., 533 Pfeiffer III, J., 494 Pascali, V. L., 944 Pharr, M., 873 Passonneau, R. J., 379 Philipps, L., 104, 106, 659–660 Patel-Schneider, P. F., 547 Phillips, M., 384 Pattenden, R., 30, 301, 1073 Philp, R. P., 929 Pavlidis, I. T., 150 Phua, C., 762 Pawlak, Z., 490 Phuoc, N. Q., 503, 721 Paxson, V., 709 Pickel, D., 923 Pearce, T. C., 917 Pickering, A., 292 Pearl, J., 49, 102, 104, 118, 120, 127, 1030 Pickrell, J. E., 38 Pearman, D. A., 448 Pieprzyk, J., 742 Pease, K., 214, 233, 242, 245 Pietroski, P. M., 154 Pedersen, J., 727 Pike, G., 863 Pedreschi, D., 762 Pildes, R. H., 305, 317 Pedrycz, W., 490, 671 Pillaipakkamnatt, K., 618–619, 643 Pei, J., 104–105, 728 Pingali, G., 390 Peinado, F., 347, 380, 387–388 Pisanelli, A. M., 918 Peirce, C. S., 44, 48, 855 Pitts, W., 644 Pelosi, P., 917 Pizzi, D., 388 Peltron, W., 886 Plamper, J., 89 Pennec, X., 903–904 Plantinga, A., 7, 92 Pennington, D. C., 214, 245 Plewe, B., 515 Pennington, N., 32–33, 36–37, 51, 214, Plumwood, V., 320 324–325, 335, 1075, 1112 Podlaseck, M., 390 Author Index 1291

Poesio, M., 547 Pye, K., 889, 905 Poggi, I., 149 Pyle, D., 552 Politis, D., 207 Pollard, D. E. B., 340 Q Pollock, J. L., 25, 88, 146–148 Quaresma, P., 606 Pong, T.-C., 722 Quinlan, J. R., 490, 606, 639–640, 702, 949 Pons, S., 414 Quinlivan, D. S., 283, 295–297 Poole, D. L., 44, 112–115, 182, 1079 Quintana, M.-P., 378 Poosankam, P., 709 Quinteiro Uchõa, J., 490 Popescu, A. C., 872, 1078 Quochi, V., 547 Popescu, A.-M., 601 Qutob, A. A., 918 Popov, V., 494 Porat, A., 1057 R Porter, A., 286 Raab, J., 494, 498 Porter, S., 18, 149 Rabinovich, A., 452 Posner, R. A., 250 Rachel, A., 69, 150 Potter, J., 82 Radev, D. R., 591 Pouget, F., 706 Radford, C., 340–341 Poulin, D., 84, 901, 1114 Rahman, H., 700, 713 Poullet, Y., 247 Rahman, M., 701 Poulovassilis, A., 538 Rahwan, I., 171 Pound, R., 1090–1091 Raitt, F., 851 Pour Ebrahimi, B., 533 Raja, A., 270, 283, 376–377, 431, 526 Pozzi, G., 513 Rakover, S. S., 859 Prabhakar, S., 951, 960–961 Ram, A., 373–377 Prada, R., 400 Ramakrishnan, V., 617, 891 Prade, H., 17 Ramamoorthi, R., 872 Pradin-Chézalviel, B., 84, 901 Ramani, K., 550 Prag, B., 864 Ramkumar, M., 872 Prag, J., 875 Randell, D. A., 84, 899 Prakken, H., 8, 45, 131, 161, 165–166, Ranney, M., 166, 1042 168–170, 172, 174–175, 182, 185, 196, Rapaport, W. J., 379 208, 327, 333, 477–479, 525, 657, 843, Rarity, J., 863 1025, 1042, 1102 Raskin, J.-F., 901, 1042 Pregibon, D., 729 Raskin, V., 149, 559–561 Prendinger, H., 396 Rasmussen, P. C., 446, 448–451, 464–472, Preston, H., 865, 1044 474–475 Prevost, S., 396 Ratcliffe, J. H., 9, 280, 518 Price, C., 857 Ratledge, E., 1103 Priebe, C. E., 678 Rattani, A., 700, 1033 Prince, R., 286 Rattner, K., 287 Principe, G., 73, 1091 Read,J.D.,38, 283, 378, 883 Principe, J. C., 488, 523, 1097 Read, S., 317 Propp, V., 323, 344–349, 387, 424 Reddy,W.M.,90 Proth, J.-M., 901 Redlich, A., 281 Provan, G., 118 Redmayne, M., 6–7, 109, 246, 335, 858, Provos, N., 706 953–954, 1083 Prys-Jones,ˆ R. P., 446, 448, 451, 469, 471 Redmond, M. A., 244 Pu, D., 616 Redsicker, D. R., 935 Pueschel, J., 944 Reed, C. A., 156, 167–168, 170–171, 175, Pühretmair, F., 513 1025 Puni, G., 53, 62, 406 Reed,J.M.,447 Purchase,H.C.,503 Reed, K., 943 1292 Author Index

Rees, B., 844 Robson, S., 994 Reeves, J., 374, 379, 564 Roediger III, H. L., 73, 1091 Reichenbach, H., 108, 899 Rogers, M., 687 Reid,J.E.,36, 282 Rogers, S. P., 445 Reilly, W. S. N., 384 Rogovschi, N., 517 Reimers, N., 904 Rokach, L., 100, 490, 606, 702 Reiner, R., 279 Roscoe, A. W., 306, 899–900, 1085, 1090 Reis, D., 536 Roscoe,B.A.,934 Reiß, M., 617 Rose, R., 264 Ren, A., 701 Rose, T. L., 332 Renaud, M., 593 Rosenberg, N., 408 Rendell, K. W., 613 Rosenberg,S.T.,353, 1046–1047 Renooij, S., 45, 174, 208, 1102 Rosenblatt, F., 644, 647–648 Resnick, P., 720–721 Rosenthal, R., 149, 613, 950, 1067 Restall, G., 317, 320 Rosenwald, L. A., 38 Rety, J.-H., 341, 385 Rosoni, I., 407, 887 Reutenauer, C., 901 Ross, A., 317, 700, 961, 1033 Reyna, V. F., 38 Ross, A. A., 962–963 Reynolds, K., 936 Ross, D. F., 283, 882–883 Rhind, D. W., 515 Ross, S., 999 Rhodes, W., 243 Ross, T., 666 Ribarsky, B., 526 Rossiter, B. N., 901 Ribarsky, W., 526 Rotolo, A., 1042 Ribaux, O., 511, 844 Rousseau, D. M., 54, 67, 80, 82, 86, 91, Ribeiro-Neto, B., 598 394–395, 406 Rice, R. E., 496 Rousseeuw, P. J., 490, 519, 622 Richards, R. E., 283 Routley, R., 320 Richards, W. D., 496, 498 Rovatti, R., 961 Richman, K. D., 282 Rowe, G. W. A., 167, 175, 1025 Rickman, B., 400–401 Rowe, J. P., 385 Ricordel, P., 533 Rubinstein, R., 725 Riedl,M.O.,356, 377, 381, 385 Rudin, V., 943 Rieger, C., 352 Ruffell, A., xiii, xiv, 905 Riepert, T., 884 Rumble, W. E., Jr., 662, 1085 Riesbeck,C.K.,64, 352, 373, 428 Rumelhart, D. E., 42, 324, 346–348, 350, 379, Riloff, E., 379 649–650 Ringle, M., 1027 Ruppin, E., 489 Ripley, B. D., 674 Russano, M. B., 281 Ripley, S. D., 449–450, 468 Ryan, M.-L., 336–337, 349 Risinger, D. M., 613, 1066 Ryan,P.Y.A.,900 Rissland, E. L., 169–170, 529, 602, 622, 1036, Rydell, S. M., 283 1077 Ryokai, K., 401–402 Rist, T., 396 Ritchie, G., 340 S Ritterband, P., 592 Sabater, J., 18 Rizzo, A., 167 Sablayrolles, P., 84 Roberto, V., 546 Sadorf, E., 617 Roberts, A., 297 Safavi-Naini, R., 742 Roberts, D. L., 447 Saferstein, R. E., 883 Roberts, L., 943 Sagri, M. T., 553 Robertson, B., 7, 887, 1052 Sainsbury, R. M., 1118 Robertson, D., 181, 480 Saks, M. J., 613, 948, 952, 955 Robinson, P., 410 Sakurai, Y., 717 Author Index 1293

Salfati, R., 62, 406 Schoenlein, R. W., 914 Salmon, W. C., 108 Schonlau, M., 703 Salokhe, G., 551 Schooler, J. W., 38 Salton, G., 596 Schraagen, J. M., 842 Sammes, T., 687 Schreiber, F. A., 513, 898–899 Sanders, G., 118 Schreiber, T. J., 266, 554 Sanders, W. B., 176 Schrickx, J. A., 529, 1103 Sanger, J., 600–601 Schroeder, J., 511, 748, 759 Sankur, B., 872 Schubert, L. K., 379 Sansonnet, J.-P., 379 Schulten, E., 550 Santini, F., 950, 963 Schultz, M., 744 Santos, E., Jr., 117 Schulz, S., 551 Santtila, P., 282 Schum,D.A.,ix–x, 7, 19, 32–33, 52–53, 102, Sappington, D., 90 111, 131, 174–175, 208, 210, 240, 477, Saraf, S., 418, 420, 424 495, 782, 785, 800, 806–807, 855, 1076, Saravanan, R., 548 1090 Saretto,C.J.,381 Schunn, C. D., 346 Sartor, G., 104, 106, 168–170, 182, 667, Schutze, H., 598 1025 Schwartz, A., 305 Sartwell, C., 92, 392 Schweighofer, D., 621, 664, 666 Sasaki, Y., 968 Schweighofer, E., 621, 664 Sastry, P. S., 728 Schwikkard, P. J., 287 Sattler, U., 546 Scoboria, A., 38 Saukko, P. J., 883 Scott, J., 342 Saunders, A., 845 Scott, J. E., 714, 717 Sauter, J., 533 Scott, M. S., 9, 280, 498 Savage, L. J., 127 Scott, R. E., 305 Savitsky, K., 27, 283, 942 Seabrook, J., 450 Savona, E., 275 Searle, J., 80 Sawday, J., 902 Sebastiani, F., 599–600 Sawyer, A. G., 28 Sebeok, T. A., 409 Sbriccoli, M., 289 Sebok, A., 305 Scampicchio, M., 924 Seelau, E. P., 283 Schafer, B., 160, 566, 755, 844–848, 851, Segal, M., 491 855–857, 1005, 1044 Segal, U., 1023 Schank, P., 166 Seidmann, D. J., 171, 288 Schank, R., 350, 383 Seissler, W., 995 Schank, R. C., 352 Sejnowski, T. J., 489, 724 Schank, R. G., 33, 64 Selbak, J., 1006 Schapire, R. E., 702 Sellier, K. G., 890 Schartum, D. W., 238 Selten, R., 259 Schek, H.-J., 538 Seltzer, M., 408 Scherer, K. R., 285 Semin, G., 149 Schiano, D. J., 871 Sengers, P., 351, 356, 381 Schiffman, S. S., 917 Sennef, A., 237 Schild, H., 884 Ser, W., 958 Schild, U. J., 244, 633, 637, 1048 Sergot,M.J.,149, 1042, 1046 Schlesinger, P., 412 Seto, Y., 1033, 1098 Schmid, N., 1014–1015 Sgouros, N. M., 381 Schmidt, M. A., 445 Shafer, B., 850–851, 856 Schmidtke, J., 943 Shafer, G., 1045 Schneider, D. K., 353 Shannon, C. E., 642, 694 Schneider, V., 498, 900 Shapira, R. A., 30, 301 1294 Author Index

Shapira, R., 668 Sinai, J., 495 Shapiro, E., 360 Singh, A., 721 Shapiro, S. C., 20, 379, 545 Singh, K., 904 Sharkey, N., 380 Singh, M., 173, 533 Sharman, S. J., 38 Siroky, D. S., 949 Sharples, M., 385 Sirwongwattana, S., 762 Shastri, D., 150 Sjerps, M., 943 Shaul, Y., 62, 406 Skabar, A., 673 Shaw III, J., 38 Skagerberg, E. M., 73, 292, 1091 Shebelsky, R. C., 76 Skalak, D. B., 169–170, 529, 1034, 1036 Shemesh, A. O., 68 Skillicorn, D. B., 678 Shen,H.C.,104, 701, 856 Skulsky, H., 341 Shen, Q., 846, 852, 855, 857–858, 1070 Slabbert,J.H.,874 Shennum, W., 285 Slee, D., 619–620, 626, 658 Shepherd, J. W., 859, 862 Slosson, J. E., 890 Sheptycki, J., 9, 273–279 Smeets, T., 38 Shereshevsky, B.-Z., 289 Smith, A. S., 1049 Sherman, L., 1055 Smith, B., 545 Sherman, S. J., 38 Smith, F. D., 842 Shetty, J., 677 Smith, H. E., 305 Shetty, S., 617 Smith, J. C., 596, 606 Shilliday, A., 1110 Smith, J. M., 538 Shim, C.-B., 548 Smith, P. A., 842 Shim, S., 717 Smith, R. G., 529–532 Shimizu, Y., 918 Smith, R., 1007 Shimony, S. E., xiii, 28, 32, 86, 117, 173, 1034, Smith, S., 384 1084 Smith, T. C., 356, 384 Shinzaki, T., 1071 Smith-Miles, K., 762 Shiose, T., 428 Smolenski, M., 758 Shiraev, E., 88 Smolensky, P., 346, 379 Shirani, B., 687 Smullyan,R.M.,1053 Shirley, S. G., 917, 954–955 Smyth, P., 483 Shoham, Y., 524, 898 Snell, J. R., 722 Shortliffe, E. H., 106, 193 Snoek Henkemans, F., 171 Shuirman, G., 890 Snook, B., 511 Shurmer, H. V., 917–918 Snow, P., 16, 115–116, 272, 1054 Shuy, R. W., 618 Socia, C., 547 Shyu, C. H., 112 Söderström, C., 924 Siddiqui, M. A., 742 Sokooshi, H., 918 Siegel,J.A.,883 Solan, Z., 489 Sierra, C., 18, 205 Solka, J. L., 598–599 Sigdel, K., 533 Solomon, C. J., 864 Sigfusdottir, I. D., 281 Solon, M., 1064 Sigmund, W., 497, 890 Solow, A. R., 447 Sigurdsson, J. F., 281 Sommer, R., 709 Silberman, C. E., 1103 Song, C. H., 548 Sillitoe, T. J., 901 Song, Q., 701 Silomon, J. A. M., 260–261, 693–694, 889 Song, S. X., 709 Simari, G., 170 Soper, A. J., 669 Simari, G. R., 171 Soper, A., 669 Simhon, D., 152 Sopher, B., 153 Simmons, R. F., 382, 445 Sorg,M.H.,896 Simon, E., 243 Sörgel, D., 547 Author Index 1295

Sorkin, G., 744 Stevens, R., 409 Sosa, E., 92 Steyvers, M., 545 Sotomayor, S., 329, 332–333 Stickel, M. E., 379 Soukal, D., 872 Stiegler, B., 1026 Sousa, O., 245 Stiff, J. B., 29 Sowa, J., 83–84 Stilovic, M., 948 Sowa,J.F.,545 Stittmeyer, R., 616 Sparck Jones, K., 595 Stock, O., 340, 729 Sparrow, M. K., 494–497, 509–510, 735, 752, Stockmarr, A., 943 1110 Stolfo, S. J., 676 Spears, D., 243 Stolfo, S., 744 Specter, M. M., 935 Stollman, A., 408, 412 Spelman, W., 280 Stone,B.A.,396 Spencer, M., 410 Stone,C.J.,490 Sperber, D., 309, 521, 1088 Stone, J. I., 149 Spiegelhalter, D. J., 675 Stone, M., 271 Spiegelman,C.H.,935 Stoney, D. A., 948, 953, 957 Spisanti, L., 762 Stork, D. G., 489, 600 Spitzner, L., 706 Strange, D., 38 Spivak, J., 538 Stranieri, A., xiii, 6, 18, 99, 104, 112, 161, Spohn, W., 117–118, 1084 170, 177, 179, 183, 196–197, 202–205, Spooren, W., 48 321, 378, 484–485, 492, 529, 553, 595, Sprugnoli, R., 550 602–603, 618–620, 623, 628–629, 637, Srihari, R. K., 616–617, 879, 891, 957 643, 662, 666, 668, 673, 702, 1031, Srihari, S. N., 879, 943, 1041 1033–1034 Srikant, R., 246, 1028 Strapparava, C., 340 Srinivasan, H., 616–617, 957 Stratton, J. R., 233 St. John, M. F., 380 Stridbeck, U., 270, 283 Staab, S., 546–547 Strömwall, L. A., 149 Stabler, E. P., Jr., 379 Stubblefield, W. A., 20, 25, 146, 169, 352, 480, Stærkeby, M., 884 850, 1045, 1095 Stakhanova, N., 701 Studer, R., 546 Stangor, C., 332 Su, C., 943 Staniford, G., 170, 248, 1048 Su, X., 519 Staples, E. J., 918, 923 Sugahara, K., 917 Starmer, K., 1069 Suh, B., 721 Stauffer, E., 943 Sullivan, J. P., 396 Stearns, C. Z., 89 Sullivan, J., 150 Stearns, P. N., 89 Summers, R. S., 129 Stein, A., 349 Sun, J., 741 Stein, N. L., 2, 30–31, 137, 171, 250, 288, Sun, Sh. H., 692, 699 301, 347–348, 1023, 1072 Suprenant, B. A., 936 Steingrimsdottir, G., 281 Sutherland, R., 38 Steinwart, I., 681 Sutherland, S., 1103 Stenne, P., 247 Sutton, P. T., 795 Stenross, B., 238 Sutton, T. P., 985 Stephen, J. F., 137, 298, 317, 412, 858, Swami, A., 623 928 Swartjes, I., 342, 394 Stephenson, K., 495 Sweetser, E., 149 Sterling, L., 360, 754–756 Swenson-Wright, J., 1055 Stern, A., 386, 400 Swiniarski, R., 671 Stern, D. N., 90 Sycara, K., ix, 80, 149, 151, 161–164, Stevenage, S. V., 871 532–533, 546 1296 Author Index

Subject Index



Individual identification (cont.) Inquest, 1079 Kronecker delta function, 970–971 Inquiry, fingerprint, 954–955 live-scan, 950 Inquisitorial, 1080–1082 normalised cross-phase spectrum, 969 Inquisitorial criminal procedure system, 412, phase-only correlation (POC), 967–968 415 scanning Kelvin probe (SKP), 951 Insecurity management, 689, 1082 Tohoku algorithm, 967 Institutional friction in policing, 278–279 graphometrics, 941 Insurance crimes, 521 history of, 937 Intelligence gaps in policing, 277 Purkinje cells, 939 Intelligence-led policing, 9, 273, 278, 518 Indo-European invasion of Europe (supposed), See also Policing 340 Intelligent legal decision-support systems, 5 Induced polarisation method, 911–912 Interactive digital storytelling (IDS), 388 Induction, 637–643 Interactive story generation, 380–389 algorithm, 619, 627, 639–640, 642–643, adaptive dilemma-based narratives, 385 673 agent stories, 384–385 benefits of, 639 author/story book, 384 data mining techniques, 638 automated novel writer, 382 decision tree, 639 believable embodied agents, 381 difficulties, 639 BRUTUS, 384 enumerative, 48 character-and author centric techniques, examples, 642 381 inductive reasoning, 637, 1079 DAYDREAMER, 383 pattern interestingness, 638 Dramatica project, 384 Inference, 31, 47–49, 115, 127, 146–147, 165, Façade, 386 167, 188–189, 200–202, 336, 801, GADIN, 385 850, 944, 1079 IDtension project, 385 Inference engine, 1079 Machinima, 385 Inference-graph, 25–26, 146–149 MEXICA, 385 defeat-links diagram, 147 MINSTREL, 384–385, 397–398 inference/defeat loops, 147 narrative mediation, 381 support-links, 146 ProtoPropp, 387 Inference network, 49, 263, 1079 ReQUEST, 385 Inference to the best explanation (IBE), 49, scenario synthesizer, 384 127, 326 StoryBook, 381 Inference, uncertain dynamic, 1054 TAILOR, 384 Information Extraction Component (IEC), 756 TALE-SPIN, 382 See also Sterling Software UNIVERSE, 383 Information extraction (IE), 755 Interactive visualisation, SIGHTS text mining, Information Extraction Tools, 754–758 680 financial fraud ontology, 755 Interesting case, 1082 information extraction (IE), 755 Interestingness, concept, 627 NLToolset, 754 Interesting pattern, 1082 Information manipulation theory (IMT), 18 Internal forum, 214 Information processing theory, 31 International Conferences on Virtual Information retrieval, 525, 603 Storytelling (ICVS), 342 See also Text mining International tribunals, legacy of, 30, 301 Information retrieval models, 1101 Internet auction fraud, 714–739 Inland Revenue, 275 accumulation fraud, 715 Inland Revenue Service (IRS), 1022 bid shielding, 717 Innocence, presumption of, 15, 33 bid shilling, 717 Inoculation (in psychology of juries), 29 decreasing bid auctions, 714 tactics of, 28 eBay, 715–717 Subject Index 1323

FADE (fraud and abuse detection engine), network-based IDSs, 699 717 signature, 697 fraud by buyers, 718 strict anomaly detection, 698 fraud by sellers, 718 TCPdump, 699 fraudulent bidding, 716 domain name service (DNS), 695 graph mining algorithm, 728 hostnames, 695 increasing bid auctions, 714 learning techniques, 701–703 NetProbe, 721–734 AdaBoost, 702 authority propagation, 726 bullet hole image classification, 701 belief propagation, 724–725 multi-boosting, 701 cross entropy, 724–725 multiclass boosting, 702 genetic algorithm, 724 robust support vector machines, 701 HITS, 726 wagging, 702 Markov random field (MRF), 722, 725 wireless sensor networks (WSNs), 703 overview, 723 mapping addresses, 695 PageRank, 726 masquerading, 703–706 propagation matrix, 724, 731–734 modus operandi, 695–696 relational dependency networks Nmap, 696 (RDNs), 727 ping sweeps, 696 simulated annealing, 724 port scans, 696 trust propagation, 726 probing, 696 TrustRank, 727 scanning, 696 workings of, 730 Intrusion prevention system, 700 non-mining model, 736–739 Inverse discrete Fourier transform, 969 Bayesian network model, 737 InvestigAide B&, 513, 1082 evidential traces, 738 Investigation, 207, 406, 451, 470, 472, 476, forged trademarks, 737 483, 508, 511, 517, 560, 767, 841, investigation model, 737 882, 892, 905, 992 prosecution hypotheses, 738–739 Investigative analysis software, 477 PayPal fraud, 718 Investment and securities fraud, 578 price quantity pair auction, 715 Ip-cycle, 367 reputation systems, 720–721 IRS, 1022 shield bidding, 716 Itaca, 209, 236, 241, 1083 shill bidding, 716 Italian criminal procedure code, 216, 221, 246 steps in a safe Internet auction, 719–720 Italian judiciary, 236, 340 TradeMe, 715 Internet child pornography, 511 J Interpersonal relations, theory, 497 Java Card, 963 Interpretation problem, 52 JTMS, 20 Interrogations, 238, 282 Judge for the preliminary inquiry, 218 Introspective Meta-eXplanation Pattern The Judges Apprentice, 243–247 (IMXP), 377 Judges on Wheels, 486, 619 Intrusion detection, 695–706 See also Traffic accident disputes administrator privileges, 696 Judicial opinion formation, 13–39 classification, 697–701 belief revision, 16–26 anomaly intrusions, 697–698 considerations and suggestions, 26–28 artificial intelligence, 698 focus on juries, 36–39 Biometric fusion, 700 manipulation devices, 28–29 firewalls, 699 procedures and jurisdictions, 29–31 host-based, 698 quantitative models, 31–36 malicious intermediate nodes, 700 Judicial sentencing, 214, 242, 245, 248 Malicious traffic analysis, 700 Juridical fact-finding, 111 misuse intrusions, 697 Juridical proof, probabilistic account, 107–111 1324 Subject Index

Jurimetrics, 246 Kidnapping, faked, 425–426 Jury Killer, serial, 25 blue ribbon, 1034 K-NN algorithm, 674 decision making, formal analysis of, 50 Knowledge acquisition, 266, 1084 jurimetrics, 246 See also CommonKADS observation fallacy, 110, 1083–1087 Knowledge-based system, 1084 research, 13, 1077 See also Expert system; Artificial Justice information systems, 239–249 intellifence CaseMap, 240 Knowledge discovery, 485, 554, 618, 628, 658, Daedalus, 240 1084 high data concentration, 240 Knowledge discovery from databases (KDD), cross-border criminal databases, 240 658 data mining techniques, 240 Knowledge engineering, 1084 MarshalPlan, 240 Knowledge interchange format (KIF), 569 meta-documentary automation, 239 Knowledge, misindexing, 275 past and new cases, 243–246 Knowledge representation, 1085 prosecutorial and judicial discretion, Kohonen networks, 489, 517, 664 246–249 Kripke frames, 319–320 evidentiary value prescriptive model, Kronecker delta function, 970–971 247 Kvart’s theory, 118 judicial sentencing, 248 legal discretion, 247 L plea bargaining, 247 Labelling algorithm, 427 polarisation test, 247 LAILA, 45, 1085 theme probability mode, 247 Landmark case, 625, 1085 user communities, 241–243 Language for AbductIve Logic Agents Justification based truth maintenance system (LAILA), 45, 1085 (JTMS), 20 Laplacian graph, 599 See also Truth maintenance system Laplacian matrix, 499 Justification simpliciter, 25, 146 Latent print examiners (LPEs), 956 Latent print individualization, 956–957 K Latent semantic analysis (LSA), 379 Kappa calculus, 117–119, 120–121, 126–127, Layered-abduction machine, 46 1084 Lay factfinders, 1083 Åqvist’s scheme, 119–120 Laypersons comparison of schemes, 121–125 lay factfinder, 1083 considerations, 117–119 medical laypersons, 1010–1011 contextual assessment, 126–127 Learning equivalence to grading mechanisms, machine learning, 1090 121–124 Learning statistical pattern, 598 probabilities reintroduction, 124–125 Legal argumentation, see Argumentation relative plausibility, 126–128 Legal arguments, four layers of, 168–174 review of, 119–120 dialectical, 168 suggested solution, 125 logical, 168 Katz centrality, 499, 721 procedural, 168 Katz choice matrix, 502 strategic or heuristic, 168 Katz status index, 499–500, 502–503, 721 Legal database, classification technique, KDE, 556 618–621 Kelvin probe, scanning, 951 algorithm ID3, 619 Keyframing, in medical imaging, 1005–1006 electronic judge, 619 polygon models, 1005 IF-THEN rules, 618 3D studio MAX, 1005 KDD techniques, 619, 620 traffic accident reconstruction, 1006 neural networks tools, 619 Subject Index 1325

OVER project, 620 Linguistics, forensic, 618 Split Up project, 620 Linkage blindness in policing, 276 Legal evidence, theory, 880 Link analysis, 483, 493–494, 504, 508, 675, Legal formalism, 305–308 748, 754, 758, 1086 classical formulation, 305 algorithms, 621 aconsequential morality, 305 application of, 506 Negotiable instruments, 305–306 call detail records, 507 New formalism, 306 child pornography, 511 purposive rule-following, 305 counter-drug analysts, 507 relational contract theory, 307 dialed-digit analysis, 507 relational formalism, 307 entity-to-event associations, 505 Legal formalists, see Proceduralists network, 8, 511 Legal Information Network for Scotland tools applied to criminal intelligence, 511 (LINETS), 242 Coplink, 511 Legal knowledge discovery algorithms, FLINTS, 511 625–628 tools for criminal investigation, 508 Legal positivism, 1085 Anacapa charting, 510 Legal realism, 662, 1085 Big Floyd, 510 Legal reasoning, 47, 50, 944 COPLINK Criminal Relationship See also Argumentation; Relevance; Visualizer, 508 Split up Crime Link, 508 LegalXML, 537 Crime Workbench, 508 Leibnizian spatio-temporal representation, 428 ORIONLink, 509 Lexons, 569 use of, 506–507 Lex posterior, 1086 visualising, 505 Lex superior, 1086 what-if scenarios, 506 Liability, 1086 Link analysis in policing, 274 Liber spectaculorum, 612 Link detection, 778–780 Lie detectors, 285–286 fingerprint, 778 See also Polygraph Tests footwear impressions, 778 Light probe images, 872–873 tool mark, 778 Likelihood ratio, 86, 294 Link Discovery Tool, 511 LINDA, 615 Liquids, ontology for, 914 Linearly separable, 605 LISP programming language, 39, 546 Linear regression, 1086 Litigation risk analysis, 252, 1087 LINETS, 242 Liverpool, 246 Lineup, 8, 290–292, 294–295, 1077, 1086 Live-scan, 950 diagnostic value, 292 Löb’s theorem, 1053 facial composites, 290 Local stare decisis, 626, 634, 1087 innocents, called foils or look-alikes, 290 Locard’s Principle, 926 lineup instruction bias, 294 Location errors, labeling of, 447 mistaken identification, 296 Logic, 1087–1088 multiple-witness identifications, 293 independent choice, 111, 1079 suggestive eyewitness identification problem solvers, 850 procedures, 295 Loose talk, 1088–1089 video identification parade electronic Low Copy Number DNA, 784–787 recording (VIPER), 291 Wigmorean, neo- (approach), 802 Lineup instructions, 1086 LPEs, 956 Line-ups, 863, 1086 Lunacy, 68–69 See also Identity parade Morgan Hinchman case, 69 Linguistic probabilities, 106 Warder Cresson case, 67–68 Linguistics, computational, 62, 406, 547, Lund Procedure, 210–212 598–599 evidentiary relationships, 211 1326 Subject Index

Lund Procedure (cont.) Map evidentiary value, 211 geo-mapping tools, 513 list of evidentiary facts, 211 Mapping addresses, 695 structured list of themes, 211 Mapping, crime subordinate decisions, 212 geographical analysis, 816 Lying, 149, 155, 282 geographic information systems for, indicators of, 280 513–518 See also Deception; Fraud ArcIMS, 515 ArcInfo, 515 M ArcSDE, 515 Mac-a-Mug Pro, 1089 AREST, 512 MACE alterations, 688 AutoCarto conference, 514 Machine learning, 244, 356, 377, 379, 487, CATCH (Computer Aided Tracking 489–493, 507, 598–599, 606, 626, and Characterization of Homicides), 632, 667–678, 681, 684, 701–703, 517 713, 729, 743–744, 753, 853, 933, DIME (Dual Independent Map 1085, 1090 Encoding, 515 EDS Project, 752–754 FLINTS, 513 classification algorithms, 753 geo-mapping tools, 513 data balancing, 753 geosimulation, 517 data mining, 754 HITS (Homicide Investigation Tracking splits, 753 System), 517 Machine learning algorithm, 606 MapQuest, 515 Machine olfaction, 917, 921 multi-agent systems (MAS), 516 MacroGA, 669 TIGER (Topologically Integrated Magnetic method, 911 Geographic Encoding and Magnetic resonance imaging (MRI), 996 Referencing), 515 Magnetometry, 910 MapQuest, 515 Malicious intermediate nodes, 700 Maps Malicious traffic analysis, 700 rarth resistivity, 910 Malingered neurocognitive dysfunction, geological, 906 150 Marching cubes algorithm, 1007 Malingering actors, 150 Marker passing, 352–353 Malware, 708, 710–711, 742–745 Markov Chain Monte Carlo methods, 109 anomaly-based detection, 744 Markov models, 517 densification, 741 Markov random field (MRF), 378, 722–725, detection, 740–747 730 errors possibility, 744 MARPOL, 556 homophilic machine–file relation, 742, MarshalPlan, 53, 131, 207–210, 240, 1090 745–746 Masking, 519 network forensics, 741 Masquerading, 703–706 node potential function, 746 Mass spectrometry, 885, 920, 929 overview of the Polonium technology, 747 Matches, forensic, 780 PeGaSus, 741 Matching, fingerprint, 952, 957–963, 971 Polonium, 742 Maternal imagination theory, 944 randomisation tests, 743 Mathematical logic, 169, 318 reputation computation, 743 Mathematical modelling, 13 scalability of Polonium, 747 Matrix signature-based detection, 744 adjacency matrix, 500–501 virus signatures, 744 Laplacian matrix, 499 Manipulation theory, information, 18 payoff matrix, 254–255 Mano Nera, 96 propagation matrix, 378, 724–725, Manson test, 296 731–732, 734 Subject Index 1327

Katz choice matrix, 502 682–685, 697–698, 701–702, 710, weight matrix, 653 713, 717, 720–722, 728, 737–738, MAVERICK, 769 740–743, 744, 748–754, 759–760, Maximum intensity projection (MIP), 762–763, 844, 892–893, 916 1005 algorithm, 627, 632 MDITDS, 755 application, 486, 491 Mechanical Jurisprudence, 306, 1090 bagging, 491 Mediation, 406, 1091 behavioural profiles, 491 Medical imaging, 1002–1011 classification trees, 490 animation, 1005 clustering analysis, 490 keyframing, 1005–1006 data preparation, 487 polygon models, 1005 data warehousing, 487 3D studio MAX, 1005 decision trees, 490 traffic accident reconstruction, 1006 feature extraction, 492 data storage, 1002 goals for financial institution, 519 dental radiology, 1009 link analysis, 486 image fusion, 1007 machine learning, 491 2D imaging, 1002–1003 multiagent technology, 495 3D imaging, 1003–1004 NETMAP, 486–487 dental reconstruction, 1004 offender profiling, 493 gunshot to the head, 1004 pattern recognition, 488 maximum intensity projection (MIP), performance of, 633 1005 precrime, 512 volume rendering, 1005 predictive, 484 Polygon models, 1005 recursive, 682–685 Medical laypersons, 1010–1011 advantages, 684 Meinertzhagen bird collection controversy, algorithm 1, 683 451–454 author identification, 682 Memories, false, 38 InFilter, 682 Memory conformity, 27, 73, 150, 1091 intrusion detection, 682 Memory organisation packages (MOPs), 362 masquerade detection, 682 Mens rea, 1091–1093 mimicking, 682 Message-based persuasion, 28 processing stages, 684 Meta-documentary automation, 239 spoofed internet protocol (IP), 682 Metaforensics, digital, 260, 889 spoofing, 682 Metal detector methods, 912 segmentation, 485 Meter-models, 13, 1093 self-organising maps, 489 Micro-level analysis of events, 801 statistical modelling, 488 Microscopic analysis, 812 statistical prediction, 488 Middleware-level IDSs, 709 summarisation, 485 Migration Defense Intelligence, 755 Mining, graph, 728 Migration Defense Intelligence Threat Data Minority opinion, 215 System (MDITDS), 755 Minutiae detection, 1094 MILE, 555 MIP, 1005 Military espionage, 475 MiPai algorithm, 861 MIMIC, 863, 1094 Miscarriages of justice, 263 Mining, data, 9, 100, 106, 154, 179, 195, 200, Misindexing, knowledge, 375 205, 240, 274, 277, 390, 483–493, Misuse intrusions, 697 506–507, 511–513, 516, 519–524, Mock-trial, 335 534–537, 544, 547, 550, 552, Modal logic of forcing, 65 582, 598–600, 602, 606, 619–621, Modal operators for political action, 93 623, 625–628, 630, 632, 638, 664, Model fragments, 846 667–668, 671–673, 675–676, 679, Modus ponens, 185, 478, 1094 1328 Subject Index

Momentum in neural network Nash equilibrium, 153 momentum and bias, 654 National Association of Document Examiners Monster of Foligno-serial killer, 25 (NADE), 1105 Monte Carlo methods, 109 National Association of Securities Dealers Moorov doctrine, 1121 (NASD), 728–730, 743 MOPs, 362 National Crime Squad, 275, 277 Moral luck, 86 National Crime Victimization Survey (U.S.), Mosquito anatomy morphology, 552 244 Motivational calculus, 86 National Criminal Intelligence Service (NCIS), MRF, 724–725 275, 769, 783 Mug shots, 860 National DNA Database, 773, 775, 780, 786, Multi-agent, 28, 77, 173, 516, 524, 534, 1094 809, 810–811, 813–824, 945 Multi-boosting, 701 National Intelligence Model (NIM), 274–275 Multiclass boosting, 702 National Law Enforcement Multidimensional scaling, 116 Telecommunications System Multifluorescence imaging, 548–549 (NLETS), 536 See also Virtopsy National Museum of Natural History (U.S.), Multimedia forensics, 689, 1094 450, 462 Multimedia, semantic, 548 National Science Foundation (U.S.), 208 Multimedia units, theory, 86 Natural-language analysis, 593 Multiple-case analysis, 801 See also Story-understanding Multiple Image-Maker and Identification Natural-language processing (NLP), 63, 81, Compositor (MIMIC), 863 87, 257, 415, 487, 521, 588, 598, Multiple linear regression, 931 756 Multivariate image mining, 548 See also Linguistics, computational; MURAD subproject of the Aurangzeb model, Computational linguistics 431 NCIS, 275, 769, 783 Murder Nearest neighbour algorithm, 1095 HITS (Homicide Investigation Tracking Nearest neighbours approaches, 674 System), 517 NEGOPY program, 496 Monster of Foligno-serial killer, 25 Negotiation, 161, 164, 1095 Museum of Natural History (U.S.), 448, 462 Neo-Kantian jurisprudence, 304 Mutual recursion, 458, 461 Nepenthes system, 710–711 MYCIN expert system, 106, 193 Nested beliefs, 152–153, 422, 520 Nested relations (in database design), 761 N NetProbe, 720–723, 725, 728, 730–736, 740, NADE, 1098 742 Naïve Bayes, 97, 99, 606, 744 Network-based IDSs, 699 Naïve Bayesian classifier, 100, 1094 Network centrality, concepts, 510 NAPDToSpread, 928 Network forensics, malware, 741 Narrative Network representation, 1095 analyst’s Notebook, 477 Network topology, 1095 automated understanding, 374 Neural network clustering, 844 AV E R s , 477 Neural networks, 40–41, 51–52, 84, 100, 104, HOLMES2, 477 106, 189, 195, 321, 467, 483–486, intelligence, 351 488–489, 492, 507, 517, 523, plausibility, 323–324 525, 619–621, 626–627, 633, 638, Narrative reporting, 403–404 643–667, 675, 698, 701, 706, 744, ABDUL/ILANA, 403 753–754, 860, 920, 922, 931, 933, IMP, 403 1094 PAULINE, 403 algorithms, 488 Terminal Time, 404 application to law, 656–661 NASD, 728–730, 743 backpropagation algorithm, 658 Subject Index 1329

knowledge discovery from databases micro analysis of evidence, 325 (KDD), 658 “not guilty”, 325–326 Kort’s method, 657 plausible story of innocence, 327 PROLEXS, 657 relative plausibility, 326 application to rule defeasibility, 659 Rumelhart approach, 324–325 back-propagation of errors, 659 theory of anchored narratives, 325, 327 connectionism, 661 New Technology File System (NTFS), 688 ECHO program, 660 NIM, 274–275 brain structure, 644 NLETS, 536 classification application, 658 NLToolset, 756 designing of, 646 Nmap, 695 discretionary domains, 662 Nodal points, 860 errors back propagation, 649 Node-defeater, 147 feed forward networks, 645–649 Node potential function, malware, 746 architecture, 647 Noncredible, cognitive performance, 150 perceptron, 647–649 Non-invasive imaging techniques, 895 training data, 648 Nonmonotonic reasoning, 25, 146, 169, 480, input activation, 645 1095–1096 learning rate, 653 Normalised cross-phase spectrum, 969 QuickProp, 653 Normative ability, 249 weight matrix, 653 Nose, electronic, 916–924 weight space, 654 computing system, 920 momentum and bias, 654 detection system, 920 output activation, 645 fast gas chromatography (FGC), 923 overtraining of, 1097 gas chromatography, 916 perceptron network topology, 644 sample delivery system, 920 performance measurement, 656 trace vapour detection, 921 propagations of, 42 vapour concentration, 921 resemblance with brain, 644 NoteMap, 208 self-organising maps, 664–666 NTFS, 688 hierarchical, 666 Nuremberg Tribunal, 30, 301 neighbourhood function, 664 winning node, 665 O setting of, 650 Objective function, 607 symbolic computing, 643 Oblazione procedure in Italy, 221 training, 652 Obligations, reparational, 1107 training stopping criteria, 654 Occam’s Razor, principle of, 642 cross-validation resampling, 655 OCU, 793, 828 overfitting, 655 Odorology, 915–916 over-generalisation, 655 See also Olfaction overtraining, 655 Odour Undertraining, 654 Machine olfaction, 917, 921 unsupervised networks, 663 OEPs, 918 vagueness, 661 OET, 586 Neuropsychology, forensic, 150 Offender New Evidence scholarship, 324–328 crime results and suspects photograph, 833 anchored narratives approach, 328 graphical depiction, 832 explanationism, 326 offence types, 829 external anchoring, 327 operational command units (OCUs), 828 inference to the best explanation (IBE), 326 persistent offenders, 774 internal anchoring, 327 profiling, 493 likelihood ratio, 326 prolific, 827 macro structure of proof, 325 query definition dialog box, 829 1330 Subject Index

Offender (cont.) Orthopantomograms, 1002, 1009 search, 827–828 OSCAR, 88, 146 unknown offenders, 773–774 Outlier, 519, 629–630, 1097 DNA evidence, 773 strict anomaly detection, 698 virtual offender, 773 Out-of-court witness statements, 283 Olfaction Overfitting, 608, 1097 dynamic dilution, 918 OWL, 543 machine olfaction, 917, 921 Owlet species Athene blewitti, 468 See also Scent-detection, Cadaver dogs; Electronic nose P Olfactometer, 918 PACS, 1002 See also Olfaction PageRank, 503, 721, 726, 731 Olfactometerics, dynamic dilution, 918 Palaeoecological pollen, 925 Olfactory Evoked Potentials (OEPs), 918 Palaeontology, computational, 904 See also Olfaction Paleoclimatology, 927 On-line directory of expert witnesses, 602 Palmprints, 1097 Ontology, 387, 428, 544–548, 555, 560, 566, Palynodata Table Maker 1.0, 928 568, 581, 586–587, 758–762, 1096 Palynology, forensic, 893, 924–928 database relational notation, 761 actuopalynology, 926 engineering, 546 biostratigraphic services, 928 extraction tool (OET), 586 environmental palynology, 926 for liquids, 914 NAPDToSpread, 928 FuelFlowVis, 759, 761 palaeoecological pollen, 925 link analysis, 762 Palynodata Table Maker 1.0, 928 minimal ontology, 760 PARADOX software, 927 nested relations, 761 PAZ Software, 928 roles of person’s diagrams, 762 pollen data search engine, 928 transfer diagrams, 762 stratigraphic palynology, 926 See also Fuel laundering scam Webmapper, 924 Onus of proof, 1096 Web tools, 927–928 See also Burden of proof See also Forensic palynology Open multiagent computation, 481 Palynomorph, 925–927 Open-textured legal predicate, 1097 Paperwork burden in policing, 277 Operational command unit (OCU), 793, 828 PARADOX software, 927 Opinion appeal to expert, 1062–1063 Para-mortem trauma, 897 Opinion, evidence of, 1057, 1063 Paranoia, symptoms of, 87 Opinion-forming in judicial factfinding, 13–39 Parol Evidence Rule, 307 belief revision, 16–26 PARRY program, 87 considerations and suggestions, 26–28 Partitioned semantic networks, 542 focus on juries, 36–39 Part-simple retrieval, 244 manipulation devices, 28–29 PATER, 944, 1097 procedures and jurisdictions, 29–31 Paternity claims, 943 quantitative models, 31–36 Pattern analysis, 246 Opinion mining of text mining, 600 Pattern learning, statistical, 598 Opinion question, 1097 Pattern matching algorithm, 949 Opinions and beliefs, 16 Pattern recognition, 488, 922, 1097 Opposition identification module, SIGHTS PayPal fraud, 718 text mining, 680 PAZ Software, 928 Order-of-magnitude approximation, 120 PCA, 489, 673, 1100 Organisational problems in policing, 273–281 Pearl, Judea Organised crime, 275 probabilistic belief networks, 118 ORIONLink, 509 Pedagogical agent, 534–535 Ornithological fraud, 449 Pedophilia, 512 Subject Index 1331

Pedo-ring, 512 PMCTA, 991 Peer-to-peer (P2P), 693 PMMR, 992 PeGaSus, 741 POC, phase-only correlation, 967–968, 970 Peg unification, 856–857 Poisson process model, 35–36 PEIRCE-IGTT, 44, 47, 1098 Polarisation Pena alternativa (alternative penalty), 221 induced, 911–912 Pension planning, 106 test, 247 Pentitismo, 1098 Police and Criminal Evidence Act, 817 Perceptron network topology, 644 Police areas, 783–785 Peripheral inconsistency, 26 benefits, 784 Persistence, principle of, 17 feedback loop, 784 Personal authentication, 1097 incident handling, 783 Personal injury & products liability claims, Police National Computer, 8, 10 253 Police-oriented query, 557–558, 586 Personality traits, 86, 88, 395, 550 Police questioning, 156 Personal stare decisis, 626, 1098 Police science, 1099 Person authentication, see Verification Policing, 273–297 Personnel shortage in policing, 279 organisational problems, 273–281 PERSUADER, 161–164 communications divide, 275 Persuasion compulsive data demand, 277 persuasion argument, 172, 1098 criminality, pyramid, 277 persuasion machine, 161, 172 cross-border issues, 275 persuasion stories vs. arguments, 481 data mining, 274, 277 persuasion, studies, 28 defensive data concentration, 278 Peta Graph Mining library, 741 digital divide, 275 Petition efforts duplication, 278 forgeries in handwritten petitions, 616 information silos, 278 Petri nets, 899 institutional friction, 278–279 Phase-only correlation (POC), 967–968 intelligence gaps, 277 Philosophical ontology, 545 intelligence-led, 273, 278 Photoarray, 1098 intra-agency subculture divide, 279 Photofit, 862–863, 1099 linkage blindness, 276 photogrammetry, close range, 994 link analysis, 274 Photographs, use of, 294 local issues, 275 Physical countermeasures, 284 occupational subcultures, 279 Picture archiving and communication system organised crime, 275 (PACS), 1002 paperwork burden, 277 Piece-by-piece information, 32 personnel shortage, 279 Ping sweeps, 696 record-screening, 276 Pirate tracing software, 1099 spatio-temporal crime activity, 281 Pisa, Italy, 762 unit beat, 273–274 Pisa SNIPER Project, 762 self-incriminating confessions, 287–290 Plaintiff, 1099 identity parades (lineups), 290–297 Plausibility, 6–7, 45, 105, 110, 118, 126–128, right to silence, 287–288 159, 167, 323–326, 328–329, 336, wrongful convictions, 287 1099 suspects handling, 281–298 Plausible inference, 1099 control question test (CQT), 284 Plea custodial interrogation, 279 bargain, 117, 221, 224–225, 247–248, 253, eyewitness reliability, 283 256, 334, 1099 false confessions, 281 guilty, 1073 guilty knowledge test (GKT), 285 PLOTINUS umbrella project, 431 lying indicators, 283 PMCT, 991, 995, 998 out-of-court witness statements, 283 1332 Subject Index

Policing (cont.) Principled negotiation, 1100 polygraph Tests, 281–286 3D printing, 1007 voice-based lie detectors, 285 Prior convictions (evidence of), 1100 Polizia Giudiziaria, 216–217 Prisoners Dilemma, 254–255 Pollen data search engine, 928 Privacy, Electronic Privacy Information Center Pollution, air, 930 (EPIC), 240 Polonium, 742 Privacy International, 240 Polonium scalability, 747 Privacy of belief, 156 Polonium software, 742, 745 Private privilege, 1100–1101 Polonium technology in malware, 747 Privilege, 696–699, 1100 Polygraph Protection Act (1988), 284 administrator privileges, 696 Polygraph tests, 281–287, 1100 classification, 697–701 Polynomial regression models, 629, 1086 anomaly intrusions, 697–698 Polytopic vector analysis (PVA), 933–934 artificial intelligence, 698 Pooled techniques of text mining, 598 biometric fusion, 700 Pornography, 512 firewalls, 699 Port scans, 696 host-based, 698 Positivism malicious intermediate nodes, 700 fact, 1068 malicious traffic analysis, 700 legal, 1085 misuse intrusions, 697 Post-charge questioning, 1100 network-based IDSs, 699 Post-mortem computed tomography (PMCT), signature, 697 991, 995, 998 strict anomaly detection, 698 See also Virtopsy TCPdump, 699 Post-mortem magnetic resonance imaging Probabilistic and statistical reasoning, 887 (PMMR), 991–992 Probabilistic applications, 111–117 POWER program, 555 logic and reasoning, 111–115 Poznan ontology, 758–762 multidimensional scoring, 115–117 database relational notation, 761 Probabilistic belief networks, 118 FuelFlowVis, 759, 761 Probabilistic information retrieval models, link analysis, 762 1101 minimal ontology, 760 Probabilistic networks, 102 nested relations, 761 Probabilistic reasoning, 126 roles of person’s diagrams, 762 Probabilistic uncertainty, 106 transfer diagrams, 762 Probability, 5, 7, 33, 115, 167, 944, 955, 1101 See also Fuel laundering scam objective, 1101 P2P, 693 prior and posterior, 1101–1102 Pragmatic subjective, 1102 implication, 65, 90 Probability theory, 18, 31, 103–104, 107, narrativisation, 334 111–112, 120–121, 131, 174, 267, Prediction 693, 886, 944 statistical prediction, 488 Probative value, 288, 1102 Preferred Extension (PE), 161 Probe images, light, 872–873 Preliminary fact investigation, theory, 111 Probing, 696 Preponderance of the evidence, 336, 1100 Problem formulation, critical questions, 160 Preponderance, principle of, 120, 124–125 Proceduralists, 316 Presentation bias, 294 Procedural representation scheme, 1102 Preserved Fish, 561, 564 Procedural-support systems, 53, 208, 1102 Presley, Elvis, 395 Proceeds of Crime Act, 769 Prevention, intrusion, 700 Production rule, 1102 Price quantity pair auction, 715 See also IF-Then rule Principal component analysis (PCA), 489, 673, PROLEXS, 529, 657, 1103 1100 Prolific offenders, 827 Subject Index 1333

Prolog or Lisp, 546 non-demonstrative, 48 Prolog progamming language, 546 Rebutter, 1106 Prolog, reflective, 245 Receptor modeling problem, 933 Proof, free, 1072 Recognition, fingerprint, 1071 Propagation Record-screening in policing, 276 authority propagation, 726 Recoverability, principle of, 17, 24 belief propagation, 724–725 Recursion, 116, 682–685 See also Back propagation advantages, 684 Propagation matrix, 378, 724–726, 731–734 algorithm 1, 683 Prosecution author identification, 682 prosecution hypotheses, 39, 51, 737–739 InFilter, 682 prosecutorial discretion, 248, 1103 intrusion detection, 682 Pseudo-scientific claims, 68 masquerade detection, 682 PSICO, 80–83 mimicking, 682 Psychodrama, 497 processing stages, 684 Psychological profiling, see Offender, profiling segmentation, 485 Psychology self-organising maps, 489 eyewitness psychology, 415 spoofed internet protocol (IP), 682 cross-cultural, 88 spoofing, 682 Psychotherapist, 38 statistical modelling, 488 Public inquiry, 1103–1104 statistical prediction, 488 Public interest privilege, 1104 summarisation, 485 Public Prosecutor, 218 Recursive data mining, 682–685 Purkinje cells, 939 advantages, 684 Purposive-cognitive-affective algorithm, 87 algorithm 1, 683 Putative legitimate defence, 92 author identification, 682 PVA, polytopic vector analysis, 933–934 InFilter, 682 Pythagoras theorem, 989 intrusion detection, 682 masquerade detection, 682 Q mimicking, 682 Quack technology, 286 processing stages, 684 Quadratic programming (QP), 610 spoofed internet protocol (IP), 682 Quantitative methods, 889 spoofing, 682 Query Red Brigades, 25 analyser, 847 Reference-class problem, 108, 128, 293, 367, formulation, 606 886, 946, 1106 Questioned documents evidence, 1104–1105 Referential fallacy, 341 QuestMap, 167, 1105 Reflective Prolog, 245 QuickProp, 653 Refugee Review Tribunal (Australia), 201 Regional Forensic Science Group, 783 R Registration depository, central, 729 Radar, ground-penetrating radar (GPR), 910, Regression, 490–491, 1106 912–913 Regression, linear, 1086 Radio-frequency identification tags, 539 Rejoinder, 1106 Randomisation tests, 743 Relational contract theory, 307 Ratio decidendi, 183, 625–627, 1105 Relational database technology, 512, 540 RCMP, 983 Relational dependency networks (RDNs), 727 RDF-mapped Semantic Web resource, 557 Relative plausibility, 110, 127, 326, 336, RDNs, 727 694–695, 1099 Realism, Legal, 662, 1085 Relevance, 26, 298–322, 374, 888, 1106 Reason!Able, 167, 1105 argument refutation, 315–316 Reasoning old syllogistic formalism, 317 legal, 47, 50 bias-pluralism, 316 1334 Subject Index

Relevance (cont.) sequent calculus, 319 cultural cognition, 315 split up knowledge, 321 legal formalism, 317 strict implication, 317 relational formalism, 317 variable sharing principle, 319 rule-breach-remedy, 316 rule of recognition, 302 skeletal relevance, 315 rules of auxiliary probative policy, 299 character, 300 rules of extrinsic policy, 299 curative admissibility, 300 social epistemics, 300 definitions, 298–301 Reliability of eyewitness testimony, 283 epistemic paternalism, 300 Religious identities, 68 evidence and beyond, 308–314 Remote sensing, 906–907 conversational relevance, 309 Removing/wiping files, 688 legal decision-making, 309 Reparational obligations, 1107 normative relevance (NR), 313 Replication, 1107 relevant logic (RL), 310 Representational sophistication, 118 semantic considerations, 311 Reputation computation, 743 truth table TFTT, 310 Reputation systems, 720–721 visual recordings of the re-enactments, Resistivity 308 earth resistivity, 910 hearsay, 300 electrical resistivity method, 911 legal admissibility, 302 Resolution, 1107 legal formalism, 305–306 Resources-for-action, 843 classical formulation, 305 Respondent, 1107 aconsequential morality, 305 Retinal scan, 99 Negotiable instruments, 305–306 Rhetoric New formalism, 306 Inoculation (in psychology of juries), 29 purposive rule-following, 305 tactics of, 28 relational contract theory, 307 Rhetoric, forensic, 335 relational formalism, 307 Ridge bifurcation in fingerprints, 958 Leviathan, 303 Rights-based legal systems, 250 logic relevance, 303 Right to silence, 171, 219, 287–290 object of regulation, 301 Ring or line topology, 6 opinion, 300 Risk perlocutionary, 305 brokers, higher-risk, 729 previous convictions, 300 risk-allocating, non-utilitiarian, 250 Ravens paradox, 302 Victor’s Litigation Risk AnalysisTM, 252 relevance logic, 314–315, 317–322 Robust support vector machines, 701 computational resources, 321 Role-playing games (RPGs), 388 discursive importance, 314 Room 5, 168 epistemic logic, 321 Rough clustering, 490 Gentzen sequent calculi, 318 Rough set theory, 490 Kripke frames, 319–320 Routine activity theory, 154 legal proceduralists, 316 Routley-Meyer semantics, 320 linear logics, 321 Royal Canadian Mounted Police (RCMP), 983 material conditional, 318 RPGs, 388 material implication, 317 Rule-based expert systems, 846, 1107 “minimal relevance” problem, 314 Rule-extracting tools, 1107 natural deduction, 318 Rule induction, 637–643 ontologies, 322 algorithm, 619, 627, 639–643, 673 paraconsistent logics, 319 benefits of, 639 practitioners of, 317 data mining techniques, 638 Routley-Meyer semantics, 320 decision tree, 639 semantics of, 319 difficulties, 639 Subject Index 1335

examples, 642 scalability of Polonium, 747 inductive reasoning, 637 signature-based detection, 744 pattern interestingness, 638 virus signatures, 744 Ruleset, 1107 Segmentation, in medical imaging, 1006–1007 Self-exoneration structure, 58–59 S Self-incriminating confessions, 287–290 Safe Internet auction, 719–720 identity parades (lineups), 290–297 SAFER method, 933 right to silence, 287–288 SALOMAN project, 602 wrongful convictions, 287 Sample delivery system, 920 Self-Incriminating Confessions SARA, 280 Jewish law, 289 SAW, 920, 922 Roman law, 289 Scanning, 696 Self-organising maps (SOM), see Kohonen Scanning, Analysis, Response, and Assessment networks (SARA), 280 Self-training receptor modeling, 933 Scanning, fingerprint, 1071 Semantic multimedia, 548 Scanning Kelvin probe (SKP), 951 Semantic networks, 544–545 Scenario Space Builder, 850–852 Semantic Web, 165, 538, 547, 550–551, 557, backward chaining, 850–851 586 consistency phase, 851 Sensing, electronic, 917 forward chaining, 850–851 Sensitivity analysis, 1106–1107 initialisation phase, 850 Sensor Scent-detection, 915–916 fingerprint, 1071 See also Olfaction wagging, 702 Schema theory, 859 Sentencing, 214, 242, 245, 248 Scientific computing, 880 Sentencing Information System (SIS), 242–243 Scientific evidence, 50, 885, 954 Sentenza suicida, 1109 Scintilla of evidence, 1108 Sentiment analysis, 600–601 Scottish Crime Squad, 241 Sentiment-quantification methods, 600 SCP, 512 SEPPHORIS, 83 Search based drama manager (SBDM), Sequencing, 896 386–387 Sequential identification procedures, 293 Second-Generation FLINTS, see FLINTS 2 Serial killer, 25 Securities Dealers, National Association of, Settlement out of court, 1109 728–730, 743 Sex estimation from skeletal remains, 884 Securities fraud, 578 Sexual offender model’s labels, 518 Security SGA, 388–389 computer, 1041 Shallow extraction, 601 malware, 708, 710–711, 742–745 Sharp-weapon trauma, 898 anomaly-based detection, 744 Shield bidding, 716, 1109 densification, 741 Shill bidding, 716 detection, 740–747 Shilling, 1109 errors possibility, 744 Shilling, bid, 717 homophilic machine–file relation, 742, Shoes 745–746 footwear evidence, 890–891 network forensics, 741 Shooting, 58, 67, 327, 408, 433, 445, 556, 841, node potential function, 746 878, 883 overview of the Polonium technology, SIGHTS text mining, 679–682 747 algorithm modules, 680 PeGaSus, 741 interactive visualisation, 680 Polonium, 742 opposition identification module, 680 randomisation tests, 743 support vector machines (SVMs), 681 reputation computation, 743 temporal group algorithms, 680 1336 Subject Index

Signature-based detection of malware, 744 Spohn’s minimum equation, 121, 124 Silence, right to, 171, 219, 287–290 Spreading disambiguation, 45 Similar fact evidence, 1109 Spring-embedder, network, 499 Similarity, striking, 1112 Spurting, arterial, 974 SIMPLE, 547 SQL3, 546 Simpliciter, justification, 25, 146 Srinkling/aging algorithms, 868–871 Simulated annealing, 724 SSA, 615 Simulated kidnapping, 425–426 Standard Generalized Markup Language SIS, 242–243 (SGML), 538 Situational crime prevention (SCP), 512 Standard historiography, 474 Situation theory, 1109 STARE, 371–372 Skeletal ramains Stare decisis, 625, 634, 1110–1111 facial reconstruction from skeletal remains, local stare decisis, 626, 628, 631–632, 874–877 634–635, 1087 categories, 875 State witness, 1098, 1111 computer-graphic techniques, 875 Statistical learning theory, 609 Skeletal Trauma Mapping, 896 Statistical pattern learning, 598 SKP, 951 Statistical reasoning, 1111 Slot-machine model, 1110 Statistical syllogism, 147–148 Smell, machine olfaction, 917, 921 Statutory law, 1111–1112 Smithsonian Institution Research Report, 450, Steganography, 689–692, 948 464 applications, 691 Smith-Waterman algorithm, 704 blog-steganography, 692 Smoking simulation, 87 chaffing and winnowing, 691 Smurfing, 1110 ciphertext, 692 Sniffing, machine olfaction, 917, 921 content-aware, 692 Social epistemics, 251, 300, 1110 cultural context, 692 Social networks analysis, 494–512 network, 692 application of, 496 printed, 692 business, notion, 495 Stereolithography, 1007 computational linguistics, 495 Stereotyped plan, 83 NEGOPY program, 496 Sterling Software, 754 Sociatry, 497 See also Information Extraction Tools Sociodrama, 497 Stevie, 176, 1105 Sociomatrix, 500 I-node, 176 Sociometry, 497 S-node, 176 Software engineering, 173, 550, 564, 1007 Stipendiary magistrate, 1068 Soil fingerprinting, 908, 948 Stories and arguments, 476–481 SoilFit project, 908 See also AURANGZEB project Soil gas surveying, 915 Story generator algorithm (SGA), 388–389 SOM algorithm, 484 Story management, 385–386 Sotomayor controversy, 332 Story model, 1112 Spatial-temporal Storytelling, 399–402 correlation eigenvectors, 931 BORIS story understanding program, 399 crime activity, 281 Dr. K–, 400–401 modeling, 548 kaleidostories, 400 Spectral Comparator Video, 612 puppets system, 402 Spectrometry SAGE, 399 fluorescence, 905 StoryMat, 401–402 mass, 885, 920, 929 TEATRIX, 400 X-ray, 905 Stratigraphic palynology, 926 Split Up, 195–200, 626–627, 636, 650–651, Strict anomaly detection, 698 656, 661, 673, 1110 Striking similarity, 1112 Subject Index 1337

Structural risk evaluation, 937 evidence law, 1013 Structure risk minimisation, 609 legal basis for, 1013 3D studio MAX, 1005 See also Virtopsy Stylometrics, 611–618 Symptom fabrication, 150 Subjective labelling, 427 Synergism, 5 Subjective probabilities, 105–106, 111, 127, Syntax, syntactic equivalence, 26 250, 693 Substance-blind evidence, 785 T Suicide by hanging, 848 TACITUS, 379 Summarisation, automatic, 587–598 Tags, radio-frequency identification, 539 Supergrass, 1111 TALE-SPIN, 337–339, 372, 382 Supervisory special agent (SSA), 615 Taphonomics, 895–896 Support vector machines, 603–611 Target transformation factor analysis (TTFA), ability of, 605 934 boosting techniques, 606 Tarpits, 708 bullet hole image classification, 608 Task announcement processing, 530 computational learning theory, 605 Taste threshold, 918–919 exclusive-Or function, 604 Taxfarming, 286 Flexlaw, 606 Taxidermic examinations, 470 idea of, 609 Taxonomy information gain, 606 factorial, 98 information retrieval, 607 full-page, 98 linearly separable, 605 TCPdump, 699 machine learning algorithm, 606 Telecommunication, National Law objective function, 607 Enforcement Telecommunications over-fitting problem, 608 System (NLETS), 536 quadratic programming (QP), 610 Template matching, 510 query formulation, 606 Temporal and spatial correlation eigenvectors, sequential minimal optimisation, 605 931 Support vector machine (SVM), 681, 701 Temporal group algorithms, 680 Surface acoustic wave (SAW), 920, 922 Terrorism Act, 769 Surface scanning, 994 Testimony Surrebutter, 1112 out-of-court witness statements, 283 Surrejoinder, 1112 state witness, 1098, 1111 Survival analysis, 944 Text mining, 487, 525, 553, 557, 587–588, 593, Suspects, criminal, 537 598–611, 621, 664, 675, 677–682, Suspects handling, 281–298 685, 754, 1112–1114 control question test (CQT), 284 application areas, 603 custodial interrogation, 282 applications of, 600–601 eyewitness reliability, 283 automated coding, 601 false confessions, 281 clustering, 621 guilty knowledge test (GKT), 285 COPLINK project, 603 identification of, 810 filtering, 601 lying indicators, 283 information extraction, 601 out-of-court witness statements, 283 link detection, 598 polygraph tests, 281–286 natural-language querying, 601 voice-based lie detectors, 285 opinion mining, 600 SVM, 681, 701 pooled techniques, 598 SWALE, 373 SALOMAN project, 602 Swamping, 519 sentiment-quantification methods, 600 Switzerland (Justice System), 1011–1015 shallow extraction, 601 advantages, 1011–1012 statistical pattern learning, 598 criminal procedure, 1012–1015 summarisation, 601 1338 Subject Index

Text mining (cont.) Tomography support vector machines, 603–611 post-mortem computed angiography ability of, 605 (PMCTA), 991 boosting techniques, 606 post-mortem computer (PMCT), 991, 995, computational learning theory, 605 998 exclusive-Or function, 604 Tongue biting, 284 Flexlaw, 606 Tongues, electronic, 923–924 idea of, 609 Tool marks, 814 information gain, 606 Topkapi Museum, 409 information retrieval, 607 Topology linearly separable, 605 network, 1095 machine learning algorithm, 606 ring or line, 6 objective function, 607 Topsoil magnetic susceptibility, 910 over-fitting problem, 608 Toulmin argument structure, 131–132, 178, quadratic programming (QP), 608 183, 186, 198, 660, 1118 query formulation, 606 TPD, 748 sequential minimal optimisation, Trace Meta-eXplanation Pattern (TMXP), 606 377 tasks, 599 Trace vapour detection, 921 test grading, 601 Trademarks, forged, 737 text analytics, 598 TradeMe, 715 tools, 593, 600–601 Traffic accident WESTLAW, 597 disputes, 486, 619 See also Email mining reconstruction, 1006 Thagard’s ECHO algorithm, 42–43 Traitor tracing, 689, 1118 Thagard, Paul, 42–43 TRAITS, 86 Theft and relabeling, 448 Transactional theories of emotions, 89 Thematic abstraction units (TAUs), Transvaluationism, 661, 1088, 1118–1119 362, 399 TreeAge Pro, 252, 1119 Theme probability model, 212, 247 Tree diagram of betrayal, 423–424 Thermal imaging, 895 Tree, dynamic aspect, (DAT), 899 Thomson data analyzer, 595 Tree-like representation of formulae, 166 ThoughtTreasure, 374 Trend Hunter, 507 Threat data system, 755 Trial-and-error excavations, 906 Threat Image Projection (TIP), 921 Trial by mathematics, 1119 TIGER, 515 Triangle, golden, 860 Time Triangulation, 716, 1119 correlation eigenvectors, temporal and True state of affairs, 422 spatial, 931 Trust propagation, 726 formal models of, 898–901 TrustRank, 727 fraudulent actions, temporal awareness, Trustworthiness question, 1119 578 Truth maintenance system, 18, 20, 25, 146, granularity, 899 480, 844, 1119 Petri nets, 900 TTFA, 934 slicing, 910, 913–915 Tucson Police Department (TPD), 748 temporal analysis, 826–827 Turnbull rules, 463, 265–266 temporal group algorithms, 680 Two-witness rule, 1121–1122 TimeMap chronology-graphing software, 208 U TIMUR model, 433 U.S. National Museum of Natural History, 450, Tissue/liquid sampling, 999 464 TMS, see Truth maintenance system U.S. National Science Foundation, 208 Tohoku algorithm, 967 Ultimate evaluation phase, 97 Subject Index 1339

Uncertain inference, dynamic, 1054 for medical laypersons, 1010–1011 Uncharged conduct or uncharged misconduct, orthopantomograms, 1002, 1009 1119 picture archiving and communication Uncovered essentials, phenomenon, 44 system (PACS), 1002 Undeutch hypothesis, 18 post-mortem vs. ante-mortem, 1008 Uniform Commercial Code, 306–307 for radiologists and pathologists, Uniqueness claim about footprints, 955 1008–1009 United Nations Convention relating to the rapid prototyping, 1007 Status of Refugees, 180 segmentation, 1006–1007 United States’ National Crime Victimization windowing, 1002–1011 Survey, 244 post-mortem computed tomography Upper ontologies, 546 (PMCT), 991 post-mortem computed tomography V angiography (PMCTA), 991 VA F, 160–161 post-mortem magnetic resonance imaging Vagueness, 106, 304–305, 556, 661–662, (PMMR), 991–992 667 Swiss Justice System, 1011–1015 Validation Phase, 97 advantages, 1011–1012 Value-based Argumentation Framework criminal procedure, 1012–1015 (VAF), 160–161 evidence law, 1013 Vapour concentration, 921 legal basis for, 1013 VAT @ , 567 technical aspects, 993–1001 VAT fraud, 578, 581–582, 765 close range photogrammetry, 994 See also Fiscal fraud detection magnetic resonance imaging (MRI), Vehicle searching, 835–834 996 options, 835 photogrammetry, 994 results, 836 post-mortem computer tomography vehicle search dialog box, 835 (PMCT), 995, 998 Verification, fingerprint, 1071–1072 surface scanning, 994 ViCLASS, 517 tissue/liquid sampling, 999 Victimization Survey, United States’ National virtobot system, 993–994 Crime, 244 virtopsy workflow, 1000–1001 Victor’s Litigation Risk AnalysisTM, 252 Virtopsy workflow, 1000–1001 Video identification parade electronic Virtual embodied agents, 394–396 recording (VIPER), 291 Cybercafe, 395 Video ID system, 291 embodied conversational agents (ECAs), Video Spectral Comparator (VSC), 612 394 Videotaped interviews, 156 spark of life (SoL), 396 Violence, domestic, 196–200, 635–636 Virtual theater project, 80, 394–395 Violent Crime Linkage Analysis System Virus signatures, 744 (ViCLASS), 517 Visualizer, COPLINK, 508, 511, 748–754, VIPER, 291 1043 Virtobot system, 993–994 Visual surveillance devices, 274 Virtopsy, 991–1015 Vitality affects, 90 indications for, 992–993 Voice-based lie detectors, 285 medical imaging, 1002–1011 Voice identification, 883 animation, 1005 Voir dire, 1120 data storage, 1002 Volume crimes and suspects, 785 dental radiology, 1009 Volume rendering, 1005 image fusion, 1007 Von Bülow trials, 49, 52 2D imaging, 1002–1003 Von Neumann theorem, 115 3D imaging, 1003–1004 VSC, 612 1340 Subject Index

W Witness Wagging, 702 ear witness Warehousing, data, 536 expert witness, 109, 171, 522, 602, Warrant, concept, 92 881–892, 928, 935, 1063–1067, Warrant eavesdropping, 219 1082 Web bots, 525 on-line directory of expert witnesses, Web bugs, 537 602 Web geographical information systems linguist’s role, 618 (Web-GISs), 512 reliance on, 881 See also Geographical information role of, 894 systems eyewitness Webmapper, 928 accuracy of identification by, 262 Web mining, 490, 553 admissibility and sufficiency of, 267 Web tools, 927–928 identification, 38, 95, 263–265, 270, Weight matrix, 653 273, 283, 293, 295–297, 882–883 Weight space, 654 psychology, 415 Wells and Olson’s taxonomy, 95–99 reliability of, 266, 270, 283 What not guilty, conceptualization, 335 testimony, 50, 266, 270, 273, 415, 611, Wheels, Judges on, 486, 619 859, 1068 See also Traffic accident disputes multiple-witness identifications, 293 White collar crime two-witness rule, 1121–1122 criteria, 764–765 Out-of-court witness statements, 283 objective functions, 764 state witness, 1098, 1111 rule-based classification, 765 compellability in ADVOKATE project, sample selection bias, 763 269–270 Wigmore chart, 130–145, 168, 175, 209, 434, WizRule, 521 843, 1121 WizSoft, 521 analysis, 133–137, 843 WordNet, 547 background generalisation, 136 World Data Center for Paleoclimatology, 927 factum probandum, 135 Worms, 339, 708–710, 742, 926 factum probans, 135 Writeprint characteristics, 611 Newcastel embarrassing situation, 137–145 Wrongful convictions, 287 archetypal situation, 138 WSNs, 703 argument-structure, 140 missing evidence, 144 X notation of, 132–133 XML, LegalXML, 537 factum probandum, 132 factum probans, 132 Y Windowing, in medical imaging, 1004–1011 Yale School of natural-language processing, Wireless sensor networks (WSNs), 703 339 Wiretapping, 219 Yugoslavia Tribunal, 30, 301