ARCHIVED CONTENT
You are viewing ARCHIVED CONTENT released online between 1 April 2010 and 24 August 2018 or content that has been selectively archived and is no longer active. Content in this archive is NOT UPDATED, and links may not function.
Extract from article by Herbert L. Roitblat, Ph.D.

Even though many predictive coding tools yield respectable results, they do differ. Among these differences is their sensitivity to class noise (inconsistency) in the training set. As you might expect, the greater the inconsistency in the coding of the training documents, the poorer the performance of the machine learning system. This inconsistency has rarely been examined in eDiscovery, but we do have enough information to say that the greater the number of people categorizing the training documents, the higher the expected level of inconsistency in their judgments (i.e., the higher the noise). Truth discovery methods could be used to reduce the class noise, but these methods can become expensive.
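The simplest truth-discovery method is majority voting over multiple reviewers' judgments. The sketch below, using hypothetical reviewer decisions (the article does not specify a method or data), shows how consensus labels and a rough noise estimate could be derived before training:

```python
from collections import Counter

def consensus_label(votes):
    """Majority-vote truth discovery: the most common label wins."""
    return Counter(votes).most_common(1)[0][0]

def noise_rate(all_votes):
    """Fraction of individual judgments that disagree with the consensus."""
    disagreements = sum(
        1 for votes in all_votes for v in votes if v != consensus_label(votes)
    )
    total = sum(len(votes) for votes in all_votes)
    return disagreements / total

# Hypothetical coding decisions from three reviewers on four documents
# ("R" = responsive, "N" = not responsive).
votes_per_doc = [
    ["R", "R", "R"],  # unanimous
    ["R", "R", "N"],  # one dissent
    ["N", "N", "R"],  # one dissent
    ["N", "N", "N"],  # unanimous
]

labels = [consensus_label(v) for v in votes_per_doc]
print(labels)                     # ['R', 'R', 'N', 'N']
print(noise_rate(votes_per_doc))  # 2 dissents / 12 judgments ≈ 0.167
```

More sophisticated methods (e.g., weighting reviewers by estimated reliability) can outperform simple voting, but even this baseline illustrates the cost the article notes: every training document must be judged multiple times.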
Not every eDiscovery case merits a detailed examination of the accuracy of the search process, but knowing the variables that affect that accuracy can help in selecting the right tools and methods. To the variables described in the introduction to this article, we should add (5) class noise in the training set.