Based on a review of the websites of leading providers in the electronic discovery arena, the following list offers a quick, non-comprehensive reference of eDiscovery and eDiscovery-related (from business intelligence to social media archiving) vendors who have ventured into the “Twittersphere” and maintain a company-specific Twitter feed.
Updated – 4/29/2013
Click here to provide vendor-specific additions, corrections, and/or updates.
This entry was posted on Sunday, June 3rd, 2012 at 8:36 pm. It is filed under uncategorized and tagged with electronic discovery, research, social media.
Taken from a combination of public market-sizing estimates shared in leading electronic discovery reports, publications, and posts over time, the following eDiscovery Market Size Mashup presents general worldwide market-sizing considerations for both the software and services segments of the electronic discovery market for the years 2013 through 2018.
Provided as a non-comprehensive overview of key, publicly announced eDiscovery-related mergers, acquisitions, and investments to date in 2014, the following listing highlights key industry activity by announcement date, acquired company, acquiring or investing company, and acquisition amount (if known).
Maura Grossman and Gordon Cormack just released another blockbuster article, “Comments on ‘The Implications of Rule 26(g) on the Use of Technology-Assisted Review,’” 7 Federal Courts Law Review 286 (2014). The article was in part a response to an earlier article in the same journal by Karl Schieneman and Thomas Gricks, in which they asserted that Rule 26(g) imposes “unique obligations” on parties using TAR for document productions and suggested using techniques we associate with TAR 1.0.
There has been ongoing debate in information governance and e-discovery circles about the significance of documents that do not contain searchable text, with evidence that half or more of the documents in some collections cannot be analyzed or managed because the tools used for those purposes require textual representations. How important is this limitation in any given collection? Without sampling, nobody knows – it’s all conjecture – and conjecture is a poor foundation for sound information governance. Sampling provides a way to estimate the impact of using text-restricted document management tools in any given collection. [...]
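The sampling approach described above can be sketched in a few lines: draw a random sample from the collection, count the documents lacking searchable text, and report the estimate with a confidence interval. The `has_text` flag and the toy collection below are illustrative assumptions, not any vendor's API.

```python
import math
import random

def estimate_nontext_fraction(collection, sample_size, seed=0):
    """Estimate the share of documents lacking searchable text.

    `collection` is a sequence of dicts with a hypothetical
    `has_text` flag; a real pipeline would substitute its own
    text-detection check.  Returns (estimate, low, high), where
    low/high bound a 95% normal-approximation confidence interval.
    """
    rng = random.Random(seed)
    sample = rng.sample(collection, sample_size)
    p = sum(1 for doc in sample if not doc["has_text"]) / sample_size
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Toy collection in which roughly 40% of documents lack text
docs = [{"has_text": i % 5 >= 2} for i in range(10_000)]
p, low, high = estimate_nontext_fraction(docs, sample_size=400)
```

At small sample sizes or extreme proportions, a Wilson or Clopper-Pearson interval is a better choice; the normal approximation simply keeps the sketch short.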
The hype cycle around Predictive Coding/Technology Assisted Review (PC/TAR) has focused on court acceptance and actual review cost savings. The last couple of weeks have seen a bit of a blogging kerfuffle over the conclusions, methods, and implications of the new study by Gordon Cormack and Maura Grossman, “Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery”. Pioneering analytics guru Herbert L. Roitblat of OrcaTec has published two blogs (first and second links) critical of the study and its conclusions. As much as I love a spirited debate and have my own history of ‘speaking truth’ in the public forum, I can’t help wondering if this tussle over Continuous Active Learning (CAL) vs. Simple Active Learning (SAL) has lost sight of the forest while looking for the tallest tree in it.
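For readers unfamiliar with the protocols at issue: CAL retrains on every new reviewer judgment and always serves up the documents the current model ranks highest, whereas SAL stops training at some point. A deliberately toy sketch of the CAL loop follows; the term-weight “model,” the `labeler` oracle, and the stopping rule are illustrative assumptions, not the study’s actual method.

```python
def cal_review(docs, labeler, batch_size=10, max_rounds=50):
    """Toy Continuous Active Learning loop.

    `docs` maps doc_id -> set of words; `labeler` stands in for a
    human reviewer and returns True for relevant documents.  A real
    TAR system would use a trained classifier instead of the naive
    term-weight scorer below.
    """
    reviewed = {}   # doc_id -> bool (relevant?)
    weights = {}    # word -> running relevance weight

    def score(doc_id):
        return sum(weights.get(w, 0) for w in docs[doc_id])

    unreviewed = list(docs)
    for _ in range(max_rounds):
        if not unreviewed:
            break
        # Always review the documents the current model ranks highest
        unreviewed.sort(key=score, reverse=True)
        batch, unreviewed = unreviewed[:batch_size], unreviewed[batch_size:]
        found = 0
        for doc_id in batch:
            relevant = labeler(doc_id)
            reviewed[doc_id] = relevant
            found += relevant
            # The "continuous" part: update the model after every judgment
            for w in docs[doc_id]:
                weights[w] = weights.get(w, 0) + (1 if relevant else -1)
        if found == 0:  # illustrative stopping rule: a batch with no hits
            break
    return reviewed

# Toy collection: every fourth document is relevant
docs = {i: {"fraud", "invoice"} if i % 4 == 0 else {"lunch", "memo"}
        for i in range(200)}
labels = cal_review(docs, labeler=lambda i: i % 4 == 0, batch_size=20)
```

In this contrived setup the loop surfaces all the relevant documents after reviewing only part of the collection; real-world results depend entirely on the classifier, the collection, and the stopping criteria, which is precisely what the debate above is about.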
The Venus de Milo was once a block of solid marble until an ancient Greek sculptor negated some of the block, leaving what is now one of the most famous statues in the world. Without the tools to remove the unwanted marble, Venus would still be hidden inside the solid block. In the document-analysis world, once BR has clustered documents with its visual classification technology, BR’s negation technology lets users suppress what they don’t want to see so they can focus on what they do.