
ACEDS Commentary: Bill Dimm Responds to Gordon Cormack

Thursday, August 18, 2016
Posted by: Jason Krause

NOTE: The following letter is from Dr. Bill Dimm, the founder and CEO of Hot Neuron LLC. He developed the algorithms for conceptual clustering, near-duplicate detection, and predictive coding used in the company’s Clustify software. He is responding to commentary from information retrieval expert Gordon Cormack, published here following the ACEDS webinar "How Automation is Revolutionizing E-Discovery."

In the recent ACEDS webinar "How Automation is Revolutionizing E-Discovery," our goal was to deliver a large amount of information that is useful to a broad e-discovery audience within the confines of a 60-minute webinar. Inevitably, many things had to be left out, either because of time constraints or because they would interest only a small subset of the audience. While there was some discussion of e-discovery automation in general, a fair amount of time was devoted to technology-assisted review (TAR).

We talked about the history of the industry, but also about judicial decisions and methodologies that are less than a year old. We talked a lot about the state of judicial acceptance, which is of clear importance to practitioners and is not well understood -- the results from our poll at the beginning of the webinar illustrated this point. We talked about the problem of vendors putting overly optimistic spin on reporting about judicial decisions.

We talked about the Cormack and Grossman JOLT study because it is the study that judicial opinions rely upon, and we're not aware of any subsequent study comparing the quality (not merely the cost) of TAR results to those of human review. In spite of its importance in judicial decisions, the JOLT study does not seem to be well understood. We pointed out that it studied only the systems that had the best performance, not systems that were in any sense average or typical. Should courts be relying on a study of the best systems to make rulings on the use of systems bearing little resemblance to the systems studied? This is not an issue that is resolved by a Bonferroni correction, which merely ensures that statistical significance claims about the systems studied are valid for those systems (see the note below). It is important for litigants to know how strong the evidence and judicial support for TAR really is. We pointed out that a mere month ago Judge Hedges proclaimed that TAR has not really been challenged yet.
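To make the statistical point concrete: a Bonferroni correction addresses the risk of false positives when a single study makes many comparisons. If m comparisons are tested and the family-wide error rate is to be held at a significance level α, each individual comparison is simply held to the tighter threshold

    α per comparison = α / m

That adjustment keeps the significance claims honest for the systems actually measured; it says nothing about systems that were never part of the study.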

We gave several tips for improving results with TAR. After describing TAR 1.0, 2.0, and 3.0, we showed how wildly different their performance can be. No single approach was best in all scenarios analyzed -- we talked about the factors that should be considered to pick the right tool for the job. We also warned against deciding which documents to produce based on "give up if it gets hard" metrics like the F1 score (see the sketch below). We warned against assuming that recall tells the whole story when it comes to adequacy of production.
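A minimal sketch of why F1 rewards giving up (the document counts here are invented purely for illustration): F1 is the harmonic mean of precision and recall, so a review that stops early, while precision is still high, can outscore one that digs much deeper into the collection.

# Hypothetical numbers, for illustration only: assume the collection
# contains 1,000 relevant documents in total.

def f1(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Review A stops early: 1,000 documents produced, 700 of them relevant.
review_a = f1(precision=700 / 1000, recall=700 / 1000)    # 0.70

# Review B pushes on: 3,000 documents produced, 950 of them relevant.
review_b = f1(precision=950 / 3000, recall=950 / 1000)    # roughly 0.48

print(f"Review A: F1 = {review_a:.2f}; Review B: F1 = {review_b:.2f}")

Optimizing F1 favors Review A even though Review B surfaces far more of the responsive material. Recall has the mirror-image limitation: it counts how many relevant documents were found without regard to which ones were missed, which is why neither number by itself tells the whole story about the adequacy of a production.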

We did not attempt to solve every problem or shortcoming that we mentioned. There is value in being aware of shortcomings and risks even if they cannot be resolved. We did not attempt to suggest methods for assessing tools or areas for research because, even if we had such suggestions to offer, a webinar aimed at non-researchers would not be the appropriate venue.


