From descriptions to discussions to diatribes, many individuals and organizations have attempted to inform and influence opinion regarding the recent predictive coding-related transcripts, objections, declarations, opinions, and orders in Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y.).
To help readers form their own opinions on predictive coding in this matter from the original court documents, provided below is a single PDF that consolidates the key documents into one source for ease of study and consideration.
Document Index: Combined PDF of Key Documents Highlighting Judicial Consideration of Predictive Coding through the Lens of Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y.).
PDF Download (Updated 04/11/2012)
Source: Public Domain
This entry was posted on Friday, March 2nd, 2012.
Drawn from a combination of public market-sizing estimates shared over time in leading electronic discovery reports, publications, and posts, the following eDiscovery Market Size Mashup presents general worldwide market-sizing estimates for both the software and services segments of the electronic discovery market for the years 2013 through 2018.
Provided as a non-comprehensive overview of key publicly announced eDiscovery-related mergers, acquisitions, and investments to date in 2014, the following listing highlights key industry activity by announcement date, acquired company, acquiring or investing company, and acquisition amount (if known).
The results presented here do not support the commonly advanced position that seed sets, or entire training sets, must be randomly selected [19, 28] [contra 11]. Our primary implementation of SPL (simple passive learning), in which all training documents were randomly selected, yielded dramatically inferior results to our primary implementations of CAL (continuous active learning) and SAL (simple active learning), in which none of the training documents were randomly selected.
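As a rough illustration of the difference between these protocols (this is not the authors' evaluation toolkit, and the function names and the stand-in `score` function are hypothetical), the training-document selection step of each protocol can be sketched as follows: SPL draws training documents at random, SAL uses uncertainty sampling near the classifier's decision boundary, and CAL repeatedly trains on the documents the current model ranks as most likely relevant.

```python
import random

def spl_select(doc_ids, k, rng):
    # Simple Passive Learning (SPL): training documents drawn at random.
    return rng.sample(doc_ids, k)

def sal_select(doc_ids, score, k):
    # Simple Active Learning (SAL): uncertainty sampling -- pick the
    # documents whose model score is closest to the 0.5 decision boundary.
    return sorted(doc_ids, key=lambda d: abs(score(d) - 0.5))[:k]

def cal_select(doc_ids, score, k):
    # Continuous Active Learning (CAL): relevance feedback -- pick the
    # documents the current model scores as most likely relevant.
    return sorted(doc_ids, key=score, reverse=True)[:k]

# Toy example: five documents with stand-in model scores (in a real TAR
# workflow these would come from a classifier retrained each round).
scores = {"d1": 0.95, "d2": 0.80, "d3": 0.52, "d4": 0.30, "d5": 0.05}
docs = list(scores)

print(cal_select(docs, scores.get, 2))        # highest-scored: ['d1', 'd2']
print(sal_select(docs, scores.get, 2))        # nearest 0.5:    ['d3', 'd4']
print(spl_select(docs, 2, random.Random(0)))  # a random sample of 2
```

In CAL the selection and retraining loop continues throughout the review, which is why none of its training documents need to be randomly chosen; the sketch shows only a single selection step under those simplifying assumptions.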
Cormack and Grossman set up an ingenious experiment to test the effectiveness of three machine learning protocols. It is ingenious for several reasons, not the least of which is that they created what they call an "evaluation toolkit" to perform the experiment. They have even made this toolkit freely available for use by any other qualified researchers, inviting other scientists to run the experiment for themselves and openly test their results. They invite vendors to do so too, but so far there have been no takers.
I want to talk about an issue that is attracting attention at the moment: how to select documents for training a predictive coding system. The catalyst for this current interest is “Evaluation of Machine Learning Protocols for Technology Assisted Review in Electronic Discovery”, recently presented at SIGIR by Gord Cormack and Maura Grossman.