ARCHIVED CONTENT
You are viewing ARCHIVED CONTENT released online between 1 April 2010 and 24 August 2018 or content that has been selectively archived and is no longer active. Content in this archive is NOT UPDATED, and links may not function.
Extract from article by Ralph Losey. This is from part two of a two-part article. (Read Part One)
The natural shift in relevance understanding is well known in the field of text retrieval. Consider, for instance, the previously cited classic study by Ellen M. Voorhees, the computer scientist at the National Institute of Standards and Technology in charge of TREC, in which she noted:
Test collections represent a user’s interest as a static set of (usually binary) decisions regarding the relevance of each document, making no provision for the fact that a real user’s perception of relevance will change as he or she interacts with the retrieved documents, or for the fact that “relevance” is idiosyncratic.
Voorhees, Variations in Relevance Judgments and the Measurement of Retrieval Effectiveness, 36 Info. Processing & Mgmt 697 (2000) at page 714 (emphasis added). (The somewhat related term query drift in information science refers to a different phenomenon in machine learning. In query drift, the concept of document relevance unintentionally changes from the use of indiscriminate pseudo-relevance feedback. Büttcher, Clarke & Cormack, Information Retrieval: Implementing and Evaluating Search Engines (MIT Press 2010) at pg. 277. This can lead to severe negative relevance feedback loops.)
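To make the query drift idea concrete, here is a minimal sketch (not from the cited book; the corpus, function names, and parameters are hypothetical) of indiscriminate pseudo-relevance feedback: the system assumes its top-ranked documents are relevant and expands the query with their frequent terms, so one off-topic document can pull later iterations away from the original intent.

```python
from collections import Counter

# Hypothetical mini-corpus: each document is a short bag of words.
docs = [
    "contract breach damages contract",
    "contract signing ceremony photos",
    "ceremony photos wedding venue",
    "wedding venue catering photos",
]

def score(query_terms, doc):
    """Score a document by how often it contains the query terms."""
    words = doc.split()
    return sum(words.count(t) for t in query_terms)

def prf_expand(query_terms, docs, top_k=2, add_n=2):
    """Pseudo-relevance feedback: blindly assume the top_k retrieved
    documents are relevant and add their add_n most frequent new
    terms to the query, with no human check on relevance."""
    ranked = sorted(docs, key=lambda d: score(query_terms, d), reverse=True)
    counts = Counter()
    for d in ranked[:top_k]:
        counts.update(d.split())
    new_terms = [t for t, _ in counts.most_common()
                 if t not in query_terms][:add_n]
    return query_terms + new_terms

# Start from a focused query and iterate the blind feedback loop.
query = ["contract"]
for _ in range(3):
    query = prf_expand(query, docs)
print(query)
```

After a few rounds the query has absorbed terms like "ceremony" and "photos" from a marginally matching document, illustrating how the effective concept of relevance drifts without any deliberate change by the searcher.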
In concept drift, by contrast, the understanding of what is relevant changes as a result of:
- Trying to apply the abstract concepts of relevance to the particular documents reviewed, and
- Changes in the case itself over time from new evidence, stipulations and court orders.