
Test Your TAR IQ: See If You Can Answer These Smart Questions About TAR

Monday, May 1, 2017
Posted by: Mary Mack

By John Tredennick, Esq.

“There is no such thing as a dumb question,” said the astronomer Carl Sagan. At Catalyst, that’s a principle we take to heart. We receive a lot of very smart questions from a lot of very smart people about discovery technology, and particularly about technology assisted review and our “TAR 2.0” platform Insight Predict.

No matter how a question comes in, we make a point to answer it as best we can. Questions come in from our clients, of course, but also through our webinars, blog, website and email. They come from lawyers, corporate counsel, litigation support professionals and others. We haven’t seen a dumb one yet.

A year ago, we got the idea of posting some of these questions and answers on our blog. Over time, we realized we’d answered enough of them to fill a book. So that’s exactly what we did. We compiled a selection of the best of them and published A User’s Guide to TAR: Your Questions Answered About Technology Assisted Review, the second book in our “For Smart People” series.

The process of preparing the book only further underscored our belief that there are no dumb questions when it comes to TAR. To the contrary, legal professionals have many smart questions about how the process works and how best to employ it. Questions ranged from conceptual to practical, from “What is … ?” to “How do I … ?” Some were generic while others focused on our TAR 2.0 platform and its continuous active learning (CAL) algorithm.

Test Your TAR Smarts

So we thought we’d have a little fun with it and give you the chance to test your own TAR smarts. Below is a sampling of some of the questions we answer in the book. As you read through them, take a moment and consider whether you could confidently provide the answer.

  1. What are the document thresholds at which it is appropriate to use TAR? Would the answer be case dependent or just a percentage of documents?
  2. With continuous active learning, how do I know when I can stop the review? Wouldn’t it be easier if I just knew in advance?
  3. What is the difference between an initial richness sample and a control set?
  4. TAR 2.0 seems to discourage using randomly selected documents for training. Doesn’t this bias the results? How do I know what I don’t know?
  5. What is validation and why is it important? (One simple way to picture it is sketched just after this list.)
  6. What is contextual diversity and why is it important to a TAR process?
  7. If I am the producing party, on what basis do I decide the percentage at which I’m cutting off the search for relevant documents? Does that have to be agreed upon by the parties?
  8. We are halfway through a document review, but it is taking us longer than we anticipated and we are running short on time. Would it make sense to start using TAR at this stage?
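
To give a flavor of questions 2 and 5, here is a purely illustrative Python sketch of one common idea behind stopping and validation: draw a random validation sample, estimate how many relevant documents the whole collection holds, and compare that estimate to what the review has already found. The names and numbers are invented for the example and do not describe Catalyst’s own protocol.

    import random

    # Illustrative only: estimating recall from a random validation sample.
    # All figures below are made up for the example.

    def estimate_recall(reviewed_relevant, sample_labels, collection_size):
        # Richness observed in the sample, extrapolated to the whole collection.
        sample_richness = sum(sample_labels) / len(sample_labels)
        estimated_total_relevant = sample_richness * collection_size
        return reviewed_relevant / max(estimated_total_relevant, 1)

    # Example: the review has found 9,000 relevant documents; a 400-document
    # random sample suggests roughly 10% richness in a 100,000-document
    # collection, i.e. about 10,000 relevant documents overall.
    sample = [random.random() < 0.10 for _ in range(400)]
    print(f"Estimated recall: {estimate_recall(9000, sample, 100000):.0%}")

Real validation protocols are more careful about sample sizes and confidence intervals, but the underlying arithmetic is that simple: estimated recall is the number of relevant documents found divided by the estimated total number of relevant documents in the collection.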

Taking It to the Next Level

How did you do with answering those? Believe it or not, they were some of the more basic questions we received. In the second half of the book, we tackle more advanced issues. See how you do answering these:

  1. What is supervised machine learning and what does it have to do with TAR? (A toy illustration follows this list.)
  2. Is recall a fair measure of the validity of a production response?
  3. How can I prove a negative—that a document does not exist in a collection?
  4. If I use documents from other matters to help train the algorithm, do I run the risk of exposing that data if opposing counsel requests the training set?
  5. If the parties are collaborating on what is a responsive or nonresponsive document in order to train the system, is there a fail-safe that keeps one party from inappropriately skewing the results?
  6. How do TAR 2.0 and CAL handle “bad” decisions by reviewers? Does the system base its learning on those bad decisions?
  7. I understand that TAR 2.0 processes do not use a control set. If so, how can you validate your results?
  8. How does a TAR 2.0 system handle synonyms? For example, if document 1 has “car” but not “automobile” and document 2 has “automobile” but not “car,” and if the reviewer gives the thumbs-up on document 1, does the system know how to rank document 2?
  9. In ranking documents, does TAR 2.0 use metadata information or is the ranking based solely on the document text?
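
For readers who want a concrete picture of the first question (and of why, in question 8, a document containing “automobile” can still rank well after a thumbs-up on a “car” document), here is a minimal, hypothetical sketch of a continuous-active-learning-style loop. It assumes scikit-learn and four invented toy documents, and it illustrates only the general idea of supervised learning applied to review, not Insight Predict’s actual algorithm.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Four tiny invented "documents"; in a real matter these would be full texts.
    documents = [
        "car recall defect brakes sedan",       # doc 0
        "automobile defect brakes sedan",       # doc 1 -- no "car", but shares terms with doc 0
        "quarterly earnings report finance",    # doc 2
        "cafeteria lunch menu schedule",        # doc 3
    ]
    labels = {0: 1, 2: 0}  # reviewer judgments so far: doc 0 relevant, doc 2 not

    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(documents)

    while len(labels) < len(documents):
        # Retrain on every judgment made so far, then rank the unreviewed documents.
        judged = sorted(labels)
        model = LogisticRegression().fit(features[judged], [labels[i] for i in judged])
        unreviewed = [i for i in range(len(documents)) if i not in labels]
        scores = model.predict_proba(features[unreviewed])[:, 1]
        next_doc = unreviewed[int(scores.argmax())]
        print(f"Next document to review: doc {next_doc} (score {scores.max():.2f})")
        # A human reviewer would supply the next judgment; here we hard-code it.
        labels[next_doc] = 1 if next_doc == 1 else 0

In this toy loop, document 1 outranks the off-topic documents because it shares “defect,” “brakes” and “sedan” with the relevant training example, even though the word “car” never appears in it. In this simple sketch, that co-occurrence effect, not a thesaurus, is what carries relevance across synonyms.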

How Did You Do?

OK, how many of the questions were you able to answer confidently? Now is the point in the article where I am supposed to give you all the answers so you can check how you did. The problem, however, is that answering all those questions would take a book. So here is the point where I shamelessly suggest you follow this link and download the book for yourself. Don’t worry, it won’t cost you anything.

The truth is, some of these questions are susceptible to multiple answers and multiple approaches. We’ve given them our best shots, based on our experience and training. Whether you agree or disagree, we’d love to hear your thoughts on any of these issues.

After all, just as there are no dumb questions regarding TAR, there are no dumb answers.

John Tredennick, Esq., is the founder and CEO of Denver-based Catalyst Repository Systems (www.catalystsecure.com), which designs, hosts and services the world's fastest and most powerful document repositories for large-scale discovery, regulatory investigations and compliance. He is a former litigation partner with a large law firm in Colorado.

