Lessons from our Partners – 5 Factors for TAR
Some of our own service-provider partners and users are fast becoming experts on technology-assisted review, and the lessons they’ve learned are valuable for others who may be looking to implement TAR. As the TAR movement has picked up steam and comfort has grown with court decisions reflecting judicial acceptance, the use of TAR tools such as Conduent’s own Viewpoint Assisted Review (VAR) and CategoriX has also grown this year.
Here’s what some of our partners have taught us about five factors to take into account when considering TAR:
1) Number of documents versus number of attorneys available: Smaller cases with under about 50,000 records generally do not benefit from TAR because of the amount of time required to review the sample and seed sets. That said, TAR can be very powerful when a small team of attorneys faces 50,000 to 100,000 or more records subject to e-discovery, and it is even more powerful where data volumes are larger.
2) Timeframe: TAR timeframes vary depending on the data volume, nature of the case and the bandwidth of attorneys available for seed set review. If discovery timeframes are tight, TAR may be a good option. It’s important to make sure that the most knowledgeable attorneys on the case are engaged throughout the process, especially during seed set review, which can take days to weeks to complete, sometimes longer. The more time devoted up front, the faster the review can be. Combining TAR with other advanced analytics can also help your team get a faster view into the production.
3) The intention of the review: Is the main intent to prove your case or to respond to discovery? The TAR process is driven by binary decisions, so a case that needs to be reviewed for a diverse set of issues may not be a good candidate for TAR. However, if the review is intended mainly to respond to discovery requests, i.e., deciding between responsive vs. non-responsive or privileged vs. not privileged, TAR is very useful.
4) Data types in the population: Assess what types of data you’re dealing with. Most TAR tools make text-based assessments, so first check the quality of the OCR. Poor OCR makes it difficult to leverage the power of TAR, so those documents would need to be set aside and handled in a separate workflow; a rough sketch of that kind of triage appears after this list. Images, numbers and spreadsheets may also need separate handling, depending on your TAR tool’s capabilities.
5) Data richness: Sample the population to estimate what percentage of documents is likely to be responsive; a simple sampling sketch also follows below. The richer the population, the more information you will likely have from seed sets to properly train the system.
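To illustrate the OCR triage mentioned in factor 4, here is a minimal sketch of how a team might flag text-poor documents for a separate workflow. It is not part of VAR or CategoriX; the looks_text_poor helper, its thresholds and the sample documents are hypothetical and purely illustrative.

# Minimal sketch (not part of any Conduent product): flag documents whose
# extracted text is too thin or too garbled to feed a text-based TAR tool.
# The threshold values below are illustrative assumptions, not recommendations.
def looks_text_poor(text, min_chars=50, min_alpha_ratio=0.6):
    stripped = text.strip()
    if len(stripped) < min_chars:
        return True  # little or no usable text, e.g. a scan where OCR failed
    usable = sum(ch.isalnum() or ch.isspace() for ch in stripped)
    return usable / len(stripped) < min_alpha_ratio  # mostly OCR noise

documents = {
    "DOC-001": "Quarterly forecast discussion with the finance team about the merger terms.",
    "DOC-002": "#@!  9v..x%  ~~@@",  # a scanned page where OCR produced almost nothing
}
set_aside = [doc_id for doc_id, text in documents.items() if looks_text_poor(text)]
print("Route to a separate workflow:", set_aside)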
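For factor 5, one common way to gauge richness is to review a simple random sample and compute the observed responsive rate along with a rough confidence interval. The sketch below assumes a sample of 400 documents with 38 coded responsive; those figures, and the normal-approximation interval, are illustrative assumptions rather than a prescribed protocol.

import math

# Minimal sketch: estimate data richness (the proportion of responsive
# documents) from a simple random sample that has already been reviewed.
def richness_estimate(responsive_in_sample, sample_size, z=1.96):
    p = responsive_in_sample / sample_size                 # observed responsive rate
    margin = z * math.sqrt(p * (1 - p) / sample_size)      # normal-approximation margin
    return p, max(0.0, p - margin), min(1.0, p + margin)

p, low, high = richness_estimate(responsive_in_sample=38, sample_size=400)
print(f"Estimated richness: {p:.1%} (95% CI roughly {low:.1%} to {high:.1%})")

A higher estimate here generally means the seed sets will surface responsive examples more quickly, which is exactly what the training process needs.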
Tony Reyes is a Director at Lateral Data, a Conduent company. He can be reached at info@conduent.com.