By William Webber
In my previous post, I found that relevance and uncertainty selection needed similar numbers of document relevance assessments to achieve a given level of recall. I summarized this by saying the two methods had similar cost. The number of documents assessed, however, is only a very approximate measure of the cost of a review process, and richer cost models might lead to a different conclusion.
One distinction that is sometimes made is between the cost of assessing a document during training and the cost of assessing it during review. It is often assumed that training is performed by a subject-matter expert, whereas review is done by more junior reviewers. The subject-matter expert costs more than the junior reviewers—let's say, five times as much. Under this assumption, assessing a document for relevance during training costs more than doing so during review.
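The effect of this two-tier cost model can be sketched in a few lines of code. The function and the rates below are illustrative assumptions, not figures from the post; they only make concrete the point that, once expert time is priced higher, two protocols assessing the same total number of documents can have very different costs depending on how many of those assessments fall in the training phase.

```python
def review_cost(n_training, n_review, junior_rate=1.0, expert_multiplier=5.0):
    """Total assessment cost when training documents are coded by a
    subject-matter expert who costs `expert_multiplier` times as much
    per document as a junior reviewer.

    Rates are hypothetical, chosen only to illustrate the model.
    """
    expert_rate = junior_rate * expert_multiplier
    return n_training * expert_rate + n_review * junior_rate

# Two hypothetical protocols, each assessing 10,000 documents in total,
# differing only in how many of those assessments occur during training:
cost_heavy_training = review_cost(2000, 8000)  # 2000*5 + 8000*1 = 18000
cost_light_training = review_cost(500, 9500)   #  500*5 + 9500*1 = 12000
```

Under these assumed rates, the protocol with the lighter training phase is a third cheaper despite assessing exactly as many documents, which is why counting assessments alone can misstate relative cost.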
This entry was posted on Friday, October 17th, 2014 at 1:04 pm. It is filed under chronology, industry, Technology-Assisted Review and tagged with electronic discovery, predictive coding.