
Defensible Technology-Assisted Review in Six Key Steps

How does one build an effective and defensible process around Technology-Assisted Review (TAR)? Although each matter requires its own customized methodology, building a foundation for a repeatable, defensible TAR process should entail the following six steps:

  1. Designate the key players. A TAR team should consist of one or more subject-matter experts, project managers, and consultants. Consultants may include experts on the software being used, linguists, or statisticians. Having the most knowledgeable people involved as early as possible increases defensibility and lowers costs by allowing for efficiency of process, consistency of assessments, and simplification of complex concepts and protocols.
  2. Get everyone on the same page. The project manager(s) assigned to the TAR matter should take the lead early and align the entire team on project goals and strategies. The team should agree on milestones, target metrics, and standards, and be prepared to change course as necessary, since goals may shift based on statistical measurements throughout the TAR cycle. All decisions and changes in course should be documented and communicated to the team at large.
  3. Generate and confirm measurable results. After a sample of documents has been assessed for responsiveness by the subject-matter expert(s), a quality control process should be conducted to ensure accuracy and consistency, especially before feeding these exemplar documents to the system for training, since the success of the TAR process depends on the accuracy of these assessments. Once the model is finalized and applied across the entire population, scoring each document on its likely responsiveness, the resulting metrics should be assessed. A common strategy is to send only documents scoring above an upper threshold for human review; documents scoring below a lower threshold are not reviewed but should be sampled in a statistically valid fashion to verify that unreasonable numbers of responsive documents are not being missed. Documents in the middle range, where the system is unsure about responsiveness, can be reviewed manually or sampled in a statistically valid fashion. If significant numbers of responsive documents are found in this middle population, they can be incorporated into the model as exemplar documents and the scores recalculated.
  4. Evaluate and improve the process continually. The team should evaluate the results at each stage of the process and fine-tune its approach as necessary. Such evaluations include the responsive vs. non-responsive composition of the seed set, the sample size necessary to attain the desired level of statistical certainty, and the determination of when to stop training the system based on precision and recall calculations compared against the statistical goals.
  5. Perform thorough quality control on final results. No discovery process is perfect, and TAR is no exception. Therefore, in addition to the quality control measures applied throughout the process, for final acceptance testing the team should choose a reasonable quality threshold and verify that the results meet it by sampling a subset of documents. Each project team will need to determine what rate of missed responsive documents is acceptable; as long as the actual rate in a statistically valid sample is at or below that target, the results can be deemed acceptable.
  6. Maintain thorough documentation. The team should choose a TAR tool that provides comprehensive reporting so that metrics and review progress can be tracked and used to support a defensible process. All decisions, the dates on which they were made, and any corresponding statistics should be recorded as well.
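For teams that want to reason about the numbers behind steps 3 through 5, the core calculations can be sketched in a few lines of code: the sample size needed for a desired confidence level, score-threshold routing of documents, precision and recall, and an acceptance check on the rate of missed responsive documents. This is an illustrative Python sketch; the function names, score thresholds, and target rates are assumptions chosen for demonstration, not values prescribed by any particular TAR tool or protocol.

```python
import math

def sample_size(confidence_z=1.96, margin=0.05, p=0.5):
    """Minimum simple-random-sample size to estimate a proportion at the
    given confidence level (z-score) and margin of error. p=0.5 is the
    conservative worst case when the true proportion is unknown."""
    return math.ceil((confidence_z ** 2) * p * (1 - p) / margin ** 2)

def route_documents(scores, high=0.7, low=0.3):
    """Split scored documents into three buckets: send for human review,
    uncertain (review or sample), and low-score (sample only).
    The 0.7/0.3 cutoffs are arbitrary illustrative thresholds."""
    review = [doc for doc, s in scores.items() if s >= high]
    uncertain = [doc for doc, s in scores.items() if low <= s < high]
    unreviewed = [doc for doc, s in scores.items() if s < low]
    return review, uncertain, unreviewed

def precision_recall(true_pos, false_pos, false_neg):
    """Precision: share of retrieved documents that are responsive.
    Recall: share of responsive documents that were retrieved."""
    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else 0.0
    recall = true_pos / (true_pos + false_neg) if true_pos + false_neg else 0.0
    return precision, recall

def elusion_acceptable(sampled_labels, target_rate=0.05):
    """Acceptance test for step 5: estimate the rate of missed responsive
    documents (1 = responsive, 0 = not) in a statistically valid sample
    of the unreviewed population and compare it to the agreed target."""
    missed_rate = sum(sampled_labels) / len(sampled_labels)
    return missed_rate <= target_rate
```

With the defaults above, `sample_size()` returns 385, the familiar sample size for a 95% confidence level and a ±5% margin of error; tightening the margin or raising the confidence level grows the required sample quickly.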

By following these six steps, the case team on a given TAR matter can demonstrate that its review included the right documents, validate that it produced accurate results, and defend its processes with thorough documentation.

To learn more about constructing a defensible TAR strategy, please review our white paper.

Elizabeth Roberts is a strategic search consultant at Conduent. She can be reached at