Based on a compilation of research from five leading analyst firms in the electronic discovery arena, the following “Top 50” list identifies fifty electronic discovery providers.
This listing is taken directly from eDiscovery provider mentions in selected key formal industry reports published between August 2011 and December 2012. Where appropriate, the list has been adjusted for industry mergers and acquisitions, with the primary company listed as the recognized eDiscovery provider.
Reports considered in the compilation include:
This listing is certainly not all-inclusive of either research firms or capable vendors, but it does provide an aggregate overview of notable vendors that several industry researchers consider leading technology providers in the field of eDiscovery.
The “Aggregated List” of eDiscovery Providers
Source: Public Domain Research
This entry was posted on Friday, August 17th, 2012 at 10:05 pm. It is filed under chronology, discover and tagged with archiving, electronic discovery, research, social media, storage, vendors.
Taken from a combination of public market-sizing estimates shared in leading electronic discovery reports, publications, and posts over time, the following eDiscovery Market Size Mashup presents general worldwide market-sizing considerations for both the software and services segments of the electronic discovery market for the years 2013 through 2018.
Provided as a non-comprehensive overview of key publicly announced eDiscovery-related mergers, acquisitions, and investments to date in 2014, the following listing highlights key industry activity by announcement date, acquired company, acquiring or investing company, and acquisition amount (if known).
Cormack and Grossman set up an ingenious experiment to test the effectiveness of three machine learning protocols. It is ingenious for several reasons, not the least of which is that they created what they call an “evaluation toolkit” to perform the experiment. They have even made this toolkit, the software itself, freely available for use by any other qualified researchers. They invite other scientists to run the experiment for themselves and to test it openly. They invite vendors to do so too, but so far there have been no takers.
I want to talk about an issue that is attracting attention at the moment: how to select documents for training a predictive coding system. The catalyst for this current interest is “Evaluation of Machine Learning Protocols for Technology Assisted Review in Electronic Discovery”, recently presented at SIGIR by Gord Cormack and Maura Grossman.
Maura Grossman and Gordon Cormack just released another blockbuster article, “Comments on ‘The Implications of Rule 26(g) on the Use of Technology-Assisted Review,’” 7 Federal Courts Law Review 286 (2014). The article was in part a response to an earlier article in the same journal by Karl Schieneman and Thomas Gricks, in which they asserted that Rule 26(g) imposes “unique obligations” on parties using TAR for document productions and suggested using techniques we associate with TAR 1.0.