Extract from an article by John Tredennick and Jeremy Pickens
Validation is one of the more challenging parts of technology-assisted review (TAR). We have written about it, and about the attendant difficulty of proving recall, several times:
- Is Recall A Fair Measure Of The Validity Of A Production Response?
- In TAR, What Is Validation And Why Is It Important?
- A Discussion About Dynamo Holdings: Is 43% Recall Enough?
- How Can You Validate Without A Control Set?
- Measuring Recall in E-Discovery Review, Part Two: No Easy Answers
- Measuring Recall in E-Discovery Review, Part One: A Tougher Problem Than You Might Realize
The fundamental question is whether a party using TAR has found a sufficient number of responsive documents to meet its discovery obligations. For reasons discussed in our earlier articles, proving that you have attained a sufficient level of recall to justify stopping the review can be a difficult problem, particularly when richness is low.
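To see why low richness makes recall so hard to prove, consider estimating recall from a simple random sample of the collection. The sketch below is illustrative only: the function name, the numbers, and the normal-approximation confidence interval are our assumptions for demonstration, not part of the article or of any court-approved protocol.

```python
import math

def estimate_recall(found_responsive, sample_size, sample_responsive, population_size):
    """Estimate recall = responsive documents found / total responsive documents,
    where the total is extrapolated from a simple random sample of the collection.

    Illustrative sketch: assumes simple random sampling and uses a 95%
    normal-approximation interval on sample richness.
    """
    richness = sample_responsive / sample_size            # sampled proportion responsive
    est_total = richness * population_size                # point estimate of responsive docs
    se = math.sqrt(richness * (1 - richness) / sample_size)
    # Interval on richness; the lower bound cannot fall below what was actually found
    rich_lo = max(richness - 1.96 * se, found_responsive / population_size)
    rich_hi = richness + 1.96 * se
    recall_point = found_responsive / est_total
    # Uncertainty in richness translates directly into uncertainty in recall
    recall_lo = found_responsive / (rich_hi * population_size)
    recall_hi = found_responsive / (rich_lo * population_size)
    return recall_point, (recall_lo, recall_hi)

# Hypothetical matter: 1,000,000 documents, 1% richness, 7,500 responsive found.
# A 1,500-document sample yields only ~15 responsive hits, so the recall
# interval is very wide (roughly 0.50 to 1.0) even though the point
# estimate is 0.75 -- the low-richness problem the article describes.
recall, (lo, hi) = estimate_recall(7500, 1500, 15, 1_000_000)
```

The point is not the particular formula but the shape of the problem: with few responsive documents in the sample, the denominator of recall is itself highly uncertain, so any recall claim carries a wide margin of error.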
Special Master Maura Grossman recently issued an Order crafting a new validation protocol in In Re Broiler Chicken Antitrust Litigation (Jan. 3, 2018), which is currently pending in the Northern District of Illinois. You can download a copy of the Order here.
While the Order was issued in the context of what appears to be document-intensive litigation, the validation method it offers is important because it could work for other matters, whether the review is based on TAR 1.0, TAR 2.0, or even a simple linear review.
Read the complete article at TAR for Smart Chickens
Additional Reading:
- Iterated Four-Step Work Flow for Active Machine Training to Help Attorneys Locate Relevant Evidence
- Analysis of Important New Case on Predictive Coding by a Rising New Judicial Star: “Winfield v. City of New York”