With the increased focus within the discipline of eDiscovery on Technology-Assisted Review, three references are provided to help legal professionals establish a solid base of definitional and contextual information for considering machine learning.
Reference #1: Book: New Advances in Machine Learning. Chapter: Types of Machine Learning Algorithms. Author: Taiwo Oldipupo Ayodele (University of Portsmouth, United Kingdom).
Reference #2: Video: Lectures on Machine Learning. Lecturer: Andrew Ng (Director, Stanford Artificial Intelligence Lab, Stanford University).
Available via Video Series Link: https://class.coursera.org/ml/lecture/preview
Reference #3: Video: Lectures on Machine Learning. Lecturer: Pedro Domingos (Professor of Computer Science & Engineering, University of Washington).
Available via Video Series Link: https://class.coursera.org/machlearning-001/lecture/preview
As lawyers, we hear a lot about the technological advances in e-discovery and information governance. How do you describe the current state of e-discovery from an opportunity and growth perspective, and how does this market opportunity impact the pulse rate of mergers, acquisitions, and investments? For lawyers purchasing e-discovery packages, there are several types of vendors and pricing models, and they need to be asking the right questions. What does the data governance solution need to do, how much does it cost, what are the time constraints, and how complex is the system?
Since its 2007 introduction, kCura’s Relativity product has become one of the world’s leading attorney review platforms. One element of Relativity’s strong growth and marketplace acceptance has been kCura’s focus on and support of partnerships. As a by-product of review platform research, the following simple, sortable table aggregates kCura Premium Hosting Partners and Consulting Partners.
By Bill Dimm This article shows that it is often possible to find the vast majority of the relevant documents in a collection by starting with a single relevant seed document and using continuous active learning (CAL). This has important implications for making review efficient, making predictive coding practical for smaller document sets, and putting […]
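The CAL workflow summarized above — start from a single relevant seed document, rank the remaining collection by similarity to what has been judged relevant, review the top of the ranking, and re-rank after each batch — can be illustrated with a toy simulation. This is a hypothetical sketch only: the documents are invented, and a simple term-overlap ranker stands in for the real classifier a TAR tool would train.

```python
# Toy sketch of continuous active learning (CAL) -- hypothetical data,
# not any vendor's actual implementation.
def cal_review(docs, relevant, seed, batch_size=2, max_rounds=10):
    """docs: {doc_id: set of terms}; relevant: set of truly relevant ids
    (standing in for human judgments); seed: one known-relevant doc id."""
    reviewed = {seed}
    found = {seed} if seed in relevant else set()
    positive_terms = set(docs[seed])  # terms learned from judged-relevant docs
    for _ in range(max_rounds):
        unreviewed = [d for d in docs if d not in reviewed]
        if not unreviewed:
            break
        # Rank unreviewed docs by overlap with terms seen in relevant docs.
        ranked = sorted(unreviewed, key=lambda d: -len(docs[d] & positive_terms))
        for d in ranked[:batch_size]:
            reviewed.add(d)
            if d in relevant:            # a human judgment in a real workflow
                found.add(d)
                positive_terms |= docs[d]  # re-train on the new judgment
    return found, reviewed

# Hypothetical six relevant-and-junk documents; seed with doc 1.
docs = {
    1: {"contract", "breach", "payment"},
    2: {"contract", "damages"},
    3: {"breach", "notice"},
    4: {"lunch", "menu"},
    5: {"holiday", "party"},
    6: {"payment", "contract", "terms"},
    7: {"weather"},
    8: {"sports"},
    9: {"recipes"},
    10: {"travel"},
}
found, reviewed = cal_review(docs, relevant={1, 2, 3, 6}, seed=1, max_rounds=3)
```

In this toy run the process finds all four relevant documents while reviewing only part of the collection, which is the practical point of the article: the ranking keeps steering reviewers toward likely-relevant material, so most junk is never touched.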
F-scores are often inappropriately interpreted as measures of review quality when evaluating predictive coding results. To get a better understanding of how an application of predictive coding has performed, the component elements of the f-score — precision and recall — should be reviewed. But what do precision and recall scores indicate and how do they relate?
Technology assisted review (TAR), also known as predictive coding and computer assisted review, has become a frequently used tool to complete large document reviews quickly and cost efficiently. The promise of fast, accurate computer-assisted coding as a practical solution to increasingly massive collections is encouraging, but understanding various vendor approaches can be confusing and overwhelming. In many cases, there is little, if any, information about how a specific TAR methodology works, creating potential defensibility blind spots and jeopardizing the progress of your case. How can you trust or account for the results of a mystery process? Alternatively, if a methodology is fully disclosed, case teams can evaluate, explain, and justify outcomes with confidence.
ComplexDiscovery | Creative Commons Attribution 4.0 International
The Actionable Intelligence (@ActionableINT) Weekly "Quick 10" Corporate Risk Review provides in-house counsel with a weekly overview of ten significant...