Ricoh eDiscovery

Five Reasons Active Learning is Better than TAR 1.0

Posted by Roland von Borstel | 2 minute read

Feb 27, 2018 10:56:23 AM


Relativity has just released Active Learning (“AL”) – its next-generation technology-assisted review application that simplifies the process of categorizing and prioritizing documents for review. We have had the privilege of piloting the system in Canada, and through our initial testing we believe Relativity's Active Learning is going to transform document review.

Here are our initial findings:

1. Administration

Setup of the system is simple and fast: a review can be launched in less than 15 minutes, and progress is easily monitored in real time via the Active Learning console.

2. Speed and Accuracy

Our pilot project demonstrated that 95% of the documents served up to legal reviewers were relevant. Further, we found the simplified process expedited review by 20% over Relativity's Latent Semantic Indexing (TAR 1.0) workflow, largely because reviewers are no longer required to examine extracted text and send excerpts to the system to train it.

3. Workflow Scenarios

Our evaluation also included a scenario in which pre-coded documents from other workflows were fed to the Active Learning system. This resulted in the quick categorization of documents with no manual review required beyond elusion and recall sampling to validate the results.

4. Project Completeness

To test project completeness, we began elusion testing once the system was no longer serving up relevant documents. An elusion test manually reviews a random sample of the documents the system categorized as not relevant in order to estimate recall (completeness) and elusion (the rate of relevant documents that were missed).
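For readers who want the underlying arithmetic, here is a minimal Python sketch of the estimates an elusion test produces. The function names and the figures are our own illustrative assumptions, not part of Relativity.

```python
# A minimal sketch of the validation arithmetic behind an elusion test.
# All names and figures here are illustrative assumptions, not Relativity's API.

def estimate_elusion(relevant_in_sample: int, sample_size: int) -> float:
    """Elusion: estimated fraction of relevant documents remaining in the
    set the system categorized as not relevant (the 'discard pile')."""
    return relevant_in_sample / sample_size

def estimate_recall(relevant_found: int, elusion: float, discard_size: int) -> float:
    """Recall: relevant documents already found, divided by the estimated
    total (found + projected to remain in the discard pile)."""
    projected_missed = elusion * discard_size
    return relevant_found / (relevant_found + projected_missed)

# Example: 3 relevant documents turn up in a 600-document random sample of a
# 40,000-document discard pile, and review has already found 4,500 relevant docs.
elusion = estimate_elusion(3, 600)              # 0.5%
recall = estimate_recall(4500, elusion, 40000)  # ~95.7%
print(f"Elusion: {elusion:.2%}  Recall: {recall:.1%}")
```

Because these figures come from a sample, a production validation would also report a confidence interval around the elusion estimate to support defensibility.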

An important factor in determining project completeness is choosing an appropriate relevance cutoff. This threshold can now be defined by the user rather than being hard-coded as it is in LSI, and the right setting can be subjective depending on the richness of the data set. Our team developed a script that calculated precision, recall, elusion and F1 (the harmonic mean of precision and recall) for any cutoff value. By comparing the results, we could adjust the threshold and confirm completeness against target precision, recall and elusion values.
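As a rough illustration of the approach (not the script itself), the sketch below computes all four metrics for any candidate cutoff over a coded validation sample. The Doc structure, the 0-100 score scale and the tiny sample data are assumptions made for the example.

```python
# Hedged sketch: evaluating candidate relevance cutoffs over a coded sample.
# The Doc structure, score scale and data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Doc:
    score: float    # model relevance rank (assumed 0-100 scale)
    relevant: bool  # human coding decision

def metrics_at_cutoff(sample: list[Doc], cutoff: float) -> dict[str, float]:
    # Documents at or above the cutoff are treated as system-relevant.
    tp = sum(1 for d in sample if d.score >= cutoff and d.relevant)
    fp = sum(1 for d in sample if d.score >= cutoff and not d.relevant)
    fn = sum(1 for d in sample if d.score < cutoff and d.relevant)
    below = sum(1 for d in sample if d.score < cutoff)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    elusion = fn / below if below else 0.0  # relevant docs hiding below the cutoff
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "elusion": elusion, "f1": f1}

# Sweep candidate cutoffs to find one that meets target recall/elusion values.
sample = [Doc(90, True), Doc(75, True), Doc(60, False), Doc(40, True), Doc(20, False)]
for cutoff in (30, 50, 70):
    print(cutoff, metrics_at_cutoff(sample, cutoff))
```

Sweeping cutoffs this way makes the trade-off explicit: raising the threshold typically improves precision at the cost of recall, and vice versa.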

5. Document Categorization

Feeding TAR 1.0 results into Active Learning, we found that we could improve elusion and recall and reduce the number of documents requiring relevance review by an average of 25%. This is because Active Learning categorizes every document, whereas LSI always leaves a universe of uncategorized documents that typically require eyes-on review.

Overall, we found Active Learning has a more intuitive workflow and yields quality results sooner. We are eager to bring this advanced functionality to our clients to help reduce their time and costs while improving quality and defensibility.

Topics: Intelligent Review, Roland von Borstel
