Based near New York City, e-Lucidata is a consultancy focused on delivering solutions in information governance, e-Discovery and professional services. One unique offering from e-Lucidata is its Engineered Review. Simply stated, Engineered Review is a more proactive version of Managed Review that applies automation to manual processes, leading to improved quality control (QC) and increased client satisfaction. That increased satisfaction stems from a more predictable and consistent litigation budget for both time and spend.
McKinnon and her colleagues suggest that we view information governance “as a corporate objective, enabled by programs, projects, priorities, people and technology.” This aligns well with my recommendation to take a strategic approach to information governance.
It’s when you combine the advantages of continuous learning with the flexibility of non-expert training that TAR really starts to come alive. CAL means a lower total number of documents reviewed. Non-expert training means flexibility about how and when you can start the process, not to mention the ability to be massively parallel and cut down total elapsed clock time. Instead of having to wait, as you do in SAL and SPL, for your expert to have free time to train on documents, with these two busted Myths you can hit the ground running, and be done long before an SPL or SAL process might even have started.
Does the emperor have any clothes on? Thoughts on the EDRM – This is the first in a three-part series of “Throwback Thursday” posts that revisit ideas and opinions from announcements and reports of times past. These posts give legal technology professionals an opportunity to consider current eDiscovery activities through the lens of yesterday’s thoughts.
BeyondRecognition will be highlighting its visual classification technology during LegalTech throughout the week of February 3-5, 2015, in New York City. Provided in this post is a selected group of questions for technology-assisted review vendors and experts, designed to highlight the challenges of text-based systems.
Daily we read, see and hear more and more about the technologies, tactics and terms of eDiscovery. This week’s cartoon and clip highlight the importance for eDiscovery practitioners and commentators of sharing their facts, ideas and opinions in an understandable manner (cartoon) and provide a link to one of the most informative and understandable blogs on eDiscovery by industry expert Craig Ball (clip).
Large document review projects can maximize efficiency by employing a two-filter method to cull documents from costly manual review. This method helps reduce costs and maximize recall.
Lexbe will be highlighting its complete portfolio of eDiscovery offerings during LegalTech throughout the week of February 3-5, 2015, at the London Hotel in New York City. Provided in this post is one unique differentiator the Lexbe Team will be sharing with potential clients during LegalTech. That differentiator is the speed and effectiveness of the Lexbe Processing Systems.
BeyondReview will be highlighting its recently announced eDiscovery offering during LegalTech throughout the week of February 3-5, 2015, at the London Hotel in New York City. Provided below is a short update on the unique approach and differentiation the BeyondReview Team will be sharing with potential clients during LegalTech.
By Gibson Dunn In our Mid-Year E-Discovery Update, we reported that 2014 was shaping up to be the “year of technology” in e-discovery. The remainder of the year more than lived up to those expectations. Powerful new data analytics tools have become available for search and review, predictive coding pricing is becoming more accessible […]
Daily we read, see and hear more and more about the many technology approaches, quality control measures and defensibility risks associated with technology-assisted review. This week’s cartoon and clip highlight the importance of agreement among industry leaders on key technology-assisted review issues (cartoon) and provide a short list of recent articles that advise caution in blindly agreeing with vendor and thought leader assertions (clip).
First Example of How to Calculate Recall Using the ei-Recall Method: Here we assume a review project of 100,000 documents. By the end of the search and review, when we could no longer find any more relevant documents, we decided to stop and run our ei-Recall quality assurance test. We had by then found and verified 8,000 relevant documents, the True Positives.
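The mechanics of an elusion-based recall interval can be sketched in a few lines of Python. This is a minimal illustration, not the author's exact protocol: only the 100,000-document corpus and the 8,000 True Positives come from the example above; the null set size, sample size, number of false negatives found in the sample, and the use of a Wilson score interval are all illustrative assumptions.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a sample proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))) / denom
    return max(0.0, center - margin), min(1.0, center + margin)

true_positives = 8_000          # verified relevant documents (from the example)
null_set_size = 92_000          # hypothetical: documents presumed irrelevant (the elusion set)
sample_size = 1_534             # hypothetical: random sample drawn from the null set
fn_in_sample = 5                # hypothetical: relevant documents found in that sample

# Project the sampled false-negative rate onto the whole null set.
lo_p, hi_p = wilson_interval(fn_in_sample, sample_size)
fn_low = lo_p * null_set_size   # optimistic projection of missed documents
fn_high = hi_p * null_set_size  # pessimistic projection of missed documents

# More projected false negatives means lower recall, so the bounds cross over.
recall_low = true_positives / (true_positives + fn_high)
recall_high = true_positives / (true_positives + fn_low)
```

The output is a recall range rather than a single point estimate, which is the basic idea behind reporting recall as an interval.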
A TAR is Born: Continuous Active Learning Brings Increased Savings While Solving Real-World Review Problems
In July 2014, attorney Maura Grossman and professor Gordon Cormack introduced a new protocol for Technology Assisted Review that they showed could cut review time and costs substantially. Called Continuous Active Learning (“CAL”), this new approach differed from traditional TAR methods because it employed continuous learning throughout the review, rather than the one-time training used by most TAR technologies.
As we move closer to LegalTech New York 2015, the largest legal technology event of the year, eDiscovery practitioners and consumers will read, see and hear more and more about how specific vendors can help them solve discovery and review challenges. This week’s cartoon and clip highlight one important vendor qualification question (cartoon) and provide an alphabetical listing of some of the leading eDiscovery providers in the market today (clip).
A review of some of the basic concepts and terminology used in this article may be helpful before going further. It is also important to remember that ei-Recall is a method for measuring recall, not attaining recall. There is a fundamental difference. Many of my other articles have discussed search and review methods to achieve recall, but this one does not.
Everyone should know that in legal search analysis False Negatives are documents predicted to be irrelevant that are in fact relevant. They are mistakes. Conversely, documents predicted irrelevant that are in fact irrelevant are called True Negatives. Documents predicted relevant that are in fact relevant are called True Positives, and documents predicted relevant that are in fact irrelevant are called False Positives.
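These four categories can be tallied directly from a set of predicted and actual labels, and recall and precision fall out of the counts. Here is a minimal sketch with made-up labels (the five documents and their labels are purely illustrative):

```python
# Each pair is (predicted, actual) for one document in the review set.
labels = [
    ("relevant", "relevant"),      # True Positive
    ("relevant", "irrelevant"),    # False Positive
    ("irrelevant", "relevant"),    # False Negative
    ("irrelevant", "irrelevant"),  # True Negative
    ("relevant", "relevant"),      # True Positive
]

tp = sum(1 for pred, act in labels if pred == "relevant" and act == "relevant")
fp = sum(1 for pred, act in labels if pred == "relevant" and act == "irrelevant")
fn = sum(1 for pred, act in labels if pred == "irrelevant" and act == "relevant")
tn = sum(1 for pred, act in labels if pred == "irrelevant" and act == "irrelevant")

recall = tp / (tp + fn)     # share of truly relevant documents that were found
precision = tp / (tp + fp)  # share of predicted-relevant documents that really are
```

Note that the False Negatives appear only in the recall denominator, which is why recall is the measure most sensitive to documents a search has missed.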
A colleague buttonholed me at the American Bar Association’s recent TechShow and asked if I’d visit with a company selling concept search software to electronic discovery vendors. Concept searching allows electronic documents to be found based on the ideas they contain instead of particular words. A concept search for “exploding gas tank” should also flag documents that address fuel-fed fires, defective filler tubes and the Ford Pinto. An effective concept search engine “learns” from the data it analyzes and applies its own language intelligence, allowing it to, e.g., recognize misspelled words and explore synonymous keywords.
Daily we read, see and hear more and more about the many cutting edge technologies and business models associated with the delivery of eDiscovery solutions. This week’s cartoon and clip highlight one historical example of an aggressive investment approach to technology (cartoon) and provide a chronological listing of some of the major merger, acquisition and investment events in eDiscovery companies between 2001 and today (clip).
It’s time for our annual review of eDiscovery case law! We had more than our share of sanctions granted and denied, as well as disputes over admissibility of electronically stored information (ESI), eDiscovery cost reimbursement, and production formats, even disputes regarding eDiscovery fees. So, as we did last year and the year before that and also the year before that, let’s take a look back at 2014!
I have uncovered a new method for calculating recall in legal search projects that I call ei-Recall, which stands for elusion interval recall. I offer this to everyone in the e-discovery community in the hope that it will replace the hodgepodge of methods currently used, most of which are statistically invalid. My goal is to standardize a new best practice for calculating recall.
Daily we read, see and hear more and more about the challenges and concerns associated with predictive coding. This week’s cartoon and clip highlight one consideration for the use of predictive coding (cartoon) and provide a chronological listing of some of the most interesting industry posts on technology-assisted review in 2014 (clip).
Published on November 19, 2014, The Radicati Group’s new eDiscovery Market Quadrant 2014 report provides information technology and business professionals with information and competitive analysis of thirteen vendors focused on the delivery of electronic discovery solutions.
Published on December 24, 2014, the new Gartner Critical Capabilities for Enterprise Information Archiving report (G00262937) provides information technology and business professionals with information and insight into vendors, critical capabilities and use cases for enterprise archiving systems.
Since our days as cavemen, people and companies have been tasked with what seems like an impossible job: Take in massive amounts of data, process that data, and make decisions based on that data for our benefit and the benefit of others. And while the types of data have changed and the tools used to analyze data have grown exponentially more sophisticated, the process is as old as our species itself. Over the past few years, the idea of harnessing data has been in the spotlight.