
ACEDS Interview: Maura Grossman’s Big Move

Wednesday, July 20, 2016
Posted by: Jason Krause

When Maura Grossman speaks, people listen. In 2011, she was already known as a leading e-discovery attorney and litigator. But her influence exploded when she released research with co-author Gordon Cormack, a computer science professor at the University of Waterloo in Ontario, that concluded software using predictive-coding technology can do as good a job of sifting through documents as human reviewers. The paper, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, published in the Richmond Journal of Law and Technology, has been widely cited in case law, helping to pave the way for predictive coding in litigation.

In her work with Cormack, she helped popularize the term Technology-Assisted Review (TAR), which she now fears may be losing its meaning as vendors scramble to join the predictive-coding revolution. She has also been a leader with the National Institute of Standards and Technology’s Text Retrieval Conference (“TREC”), studying the effectiveness of computer-assisted review, and was instrumental in bringing artificial intelligence into use at her firm, Wachtell Lipton. She and Cormack have since developed a process called Continuous Active Learning, in which a computer uses machine learning techniques to find responsive documents in a collection, for which they have earned several patents.

When we reported that Grossman was leaving Wachtell Lipton to join Cormack (to whom she is engaged) as a professor at the University of Waterloo in Ontario, more than a few people in our industry were interested in what she had planned. In addition to research work, she has launched an e-discovery law and consulting practice, offering Continuous Active Learning technology, as well as other services. We talked to Maura about her new careers, the state of computer-assisted review in litigation, the appropriate measures for success in TAR, the role of humans in the process, and whether she ever stops thinking about predictive coding.


What brought about or inspired this career change?

I’ve always been a person who likes to tackle new challenges. In 1996, I went from being a clinical psychologist and hospital administrator to a full-time law student at Georgetown. In late 2006, I went from being a general litigator to being an e-discovery lawyer. Entering 2016, I was ready for new challenges. I wanted to continue to advance the technology, practice, and law of e-discovery, and TAR in particular, but at a whole new level. At the University of Waterloo – which is home to one of the top computer science schools in the world – I'll be able to conduct experiments on collections of over a billion tweets or web pages, and collaborate with very smart people who think about Big Data problems every day. As a solo e-discovery practitioner, I'll be able to accept high-profile engagements that may not have been possible while at a large corporate firm, due to potential conflicts or other business considerations.

What will you be doing in your new jobs?

As a Research Professor, my primary responsibility is to advance knowledge and to generate scholarship in my chosen field, although I will also teach an occasional course for upper-class computer science majors or graduate students. Mostly, I will work with Gordon Cormack, as well as other faculty members and students at Waterloo, to advance the state of the art in what we have dubbed “high-stakes information retrieval.” E-discovery is one important instance of high-stakes information retrieval, but it’s not the only one. Evidence-based medicine – where the object is to find all of the studies in a particular area – is another example. Government archives – which are statutorily required to identify and store open records, and to make them available to the public – are a third example. Gordon and I have been working with the Library of Virginia and the Illinois State Archives to help classify their gubernatorial records.

In my private practice, I am available to work on individual e-discovery matters (using Gordon's and my Continuous Active Learning (CAL) software), and also to serve as a special master, mediator/neutral, expert, or consultant. But mostly, I am excited to have more freedom to develop technology and best practices – both for the use and evaluation of TAR – as well as to influence the governing case law in these areas.

How do you look back on the e-discovery practice you created at Wachtell? Any lessons for other firms to follow?

I was very fortunate to be at Wachtell Lipton as e-discovery swept the legal field over the last decade. Among Big Law firms, Wachtell is unusual in that it has low leverage, with only one or two associates per partner. What that meant was that we did not have vast associate resources to undertake massive manual review processes – particularly under time pressure – and there was some reluctance to completely outsource this task. This set of circumstances presented me with the opportunity – and the firm afforded me the latitude – to explore technological solutions. As a consequence, in 2008, I became involved in the National Institute of Standards and Technology’s Text Retrieval Conference, where I met Gordon Cormack. The rest is history.

As for advice for other firms, I would say that seeking out innovative and efficient solutions to the challenges facing you and your clients is an imperative for all law firms today.

Your paper Technology-Assisted Review in E-Discovery… has been influential. What do you feel the legacy or impact has been?

We are thrilled that so many practitioners – and the courts in the U.S. and elsewhere – have cited and relied on our work, and that we have been able to make the search and review process more effective and efficient. That said, it is sometimes irksome that the terms “technology-assisted review” and “TAR” – which we coined in that paper – have been stretched beyond recognition, and have been applied to any number of methods that bear no resemblance to those we evaluated in our study. It is unfortunate that some people have misused our work as authority to suggest that these other methods are effective, when that may not be the case. But, like parents who want their children to grow up to be one thing, only to see them become another, you cannot always control what happens to your progeny. Overall, though, it has been very gratifying to be able to impact the law and legal practice in the U.S. and abroad.

You were very careful to qualify the claims made in the paper. Do you feel the profession has understood your position, or is there a sense that people have overstated the effectiveness of technology-assisted review? Or have they understated it?

I think I may have anticipated this question with my previous answer. Our work showed that two methods – one that we subsequently named “Continuous Active Learning” or “CAL,” and one that involved a careful, hand-crafted rule base – were able to do at least as well as, if not better than, manual review, with a tiny fraction of the effort. As we said in our paper, not all technology-assisted review – and not all manual review – is created equal. Some TAR methods have been overestimated and others have been underestimated; you can’t just lump them all together.

In the JOLT article, F1, precision and recall were the standards used to measure effectiveness. How have they held up as generally accepted quantitative measures? Do we have a gold standard for measuring the effectiveness of review today?

In information retrieval research, recall, precision, and F1 have typically been used to measure the relative effectiveness of several different search methods, not as an absolute standard for measuring a single method. Like all summary measures, they do not tell the whole story, and in my view, should generally not be relied on as the sole measure of the adequacy of a search or review effort. Other factors, such as the importance of the documents that were missed (or found), as well as the completeness of the coverage of different aspects of the relevant subject matter, are also important to consider. Many of these factors are not captured by an easy-to-calculate summary measure, such as precision or elusion. Recall is, in fact, exceedingly difficult to calculate accurately. Gordon and I are currently investigating more nuanced evaluation measures; for example, the quality and reliability measures set forth in our SIGIR 2016 paper (http://dl.acm.org/citation.cfm?doid=2911451.2911510).  But no single measure, alone, can substitute for a reasonable consideration of all available evidence, as we argued in our 2014 Federal Courts Law Review article, Comments on “The Implications of Rule 26(g) in the Use of Technology-Assisted Review.”
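To make the summary measures discussed above concrete, here is a minimal sketch of how recall, precision, and F1 are computed for a review. The document sets and values are invented for illustration; a real matter would, of course, involve sampling and estimation, since the full set of relevant documents is unknown.

```python
# Toy illustration of recall, precision, and F1 for a document review.
# "relevant" stands in for the (normally unknown) set of truly responsive
# documents; "retrieved" is the set the review process actually produced.
relevant = {"d1", "d2", "d3", "d4", "d5"}
retrieved = {"d1", "d2", "d3", "d6"}

true_positives = len(relevant & retrieved)

recall = true_positives / len(relevant)      # fraction of relevant docs found
precision = true_positives / len(retrieved)  # fraction of retrieved docs that are relevant
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"recall={recall:.2f} precision={precision:.2f} f1={f1:.2f}")
```

As the interview notes, these single numbers compress away which documents were missed and whether all aspects of the subject matter were covered, which is why they compare methods better than they certify any one result.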

In that article, Comments on “The Implications of Rule 26(g)”…, there was an analogy between cooking a turkey and the process of technology-assisted review. Are we close to having a recipe? Is there a successful formula for doing TAR right?

In our experience – involving both actual matters and laboratory experiments – the CAL recipe works better than any other recipe we are aware of. Certainly, it is more effective and efficient than keyword culling, followed by manual review. Our research has also shown that CAL does a good job of covering various aspects of the subject matter, with reasonable review effort, and very limited oversight. See, for example, our SIGIR 2015 paper on this issue (http://plg.uwaterloo.ca/~gvcormac/facet/). 

We’ve also found our autonomous version of TAR (“AutoTAR”) to be remarkably effective, without any human intervention, other than coding documents. Short of intentional miscoding of relevant documents as non-relevant (or vice versa), there is little that a human can do to manipulate the eventual outcome of the process. In most situations, a single relevant “seed document” is sufficient; additional seed documents that illustrate different kinds of relevant documents can accelerate the review in its early stages, and can offer the parties reassurance of comprehensive coverage. We know of no way to manipulate the AutoTAR process (short of consciously miscoding many documents), so as to avoid particular subject matters.

All of that said, we are confident there will be further improvements in TAR, hopefully by us and by others. Gordon and I are continuing to investigate methods to measure how well CAL works, to improve how well it works, and to improve the reliability and predictability of the CAL process.

What is the role of humans in technology-assisted review? In that same (Comments…) article, you talked about “TAR Whisperers” … people who create seed sets and make the machines run, but who can potentially manipulate the process. When and how do human reviewers need to get involved to create seed sets, fine tune the process, or other steps?

In Continuous Active Learning, humans play three roles: first, they suggest likely-relevant documents or develop search terms that will uncover likely-relevant documents; second, they code the documents suggested by the TAR software as relevant or non-relevant; and finally, they decide, based on evidence presented by the software, as well as statistical evidence, when it would be reasonable to discontinue the review, because the effort to find more relevant documents would be disproportionate to their value in resolving the issues in dispute.
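The three human roles described above can be sketched as a simple loop. This is a hedged illustration only: the keyword-overlap scorer below is a trivial stand-in for the machine-learning classifier a real CAL system would use, and the document collection, function names, and stopping rule are all invented for the example.

```python
def score(doc, terms):
    """Crude relevance score: how many learned terms appear in the document."""
    return len(set(doc.lower().split()) & terms)

def cal_review(collection, seed_terms, is_relevant, batch_size=2, patience=2):
    """Sketch of a CAL-style loop: repeatedly show the top-scoring unreviewed
    documents to a human coder (is_relevant), learn from documents coded
    relevant, and stop after `patience` consecutive fruitless batches."""
    terms = set(seed_terms)          # role 1: human-supplied seed terms
    unreviewed = dict(collection)    # id -> text
    found, dry_batches = [], 0
    while unreviewed and dry_batches < patience:
        batch = sorted(unreviewed,
                       key=lambda d: score(unreviewed[d], terms),
                       reverse=True)[:batch_size]
        hits = 0
        for doc_id in batch:
            text = unreviewed.pop(doc_id)
            if is_relevant(doc_id):  # role 2: human codes each suggested doc
                found.append(doc_id)
                terms |= set(text.lower().split())  # model "learns" from it
                hits += 1
        # role 3: deciding when further effort is disproportionate --
        # here crudely modeled as a run of batches with no relevant docs
        dry_batches = 0 if hits else dry_batches + 1
    return found

docs = {
    "a": "merger agreement draft",
    "b": "lunch menu today",
    "c": "merger closing checklist",
    "d": "holiday schedule",
}
found = cal_review(docs, seed_terms={"merger"},
                   is_relevant=lambda d: d in {"a", "c"})
print(found)
```

The design point the sketch captures is that the machine only ranks; the relevance judgments and the stopping decision remain human, which is exactly where the "TAR whisperer" concerns of the next question arise.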

So are robots going to take all of the review attorneys’ jobs?

I think those concerns are largely overblown. For three centuries, automation has supplanted human effort for many repetitive tasks. As a result, certain kinds of jobs have disappeared, while others have emerged. We still need dishwashers (in the human sense) even though dishwashers (in the mechanical sense) now do much of their work. Whether automation will reduce the number of document reviewers by an order of magnitude, or simply allow an order of magnitude more ESI to be reviewed, remains to be seen.

Are you frustrated or disappointed with the acceptance of TAR in litigation? Why are law firms reluctant to adopt it? Is it to avoid the risk of missing smoking gun documents? To avoid losing profitable work manually reviewing records? How do you convince them they are wrong?

Of course, I wish that the adoption of TAR would be faster; I have been pushing the same boulder uphill for over five years now! I believe that we are still awaiting the explosive growth of TAR, which will only occur when the current barriers to adoption are overcome. Those include a series of complex but unproven rituals; prohibitively expensive pricing; and fear, skepticism, and doubt. Once lawyers – and the public at large – have more experience using TAR and observing that it just works, these barriers will gradually disappear. Gordon and I are looking forward to continuing to help to eliminate those barriers.

How will your academic work and research co-exist with your consulting and commercial efforts?  Is there a potential for a conflict of interest?

My efforts with Gordon operate in what I will call a virtuous cycle. First, we observe an issue or challenge; next, we invent a solution; then we evaluate the solution in the laboratory. After that, we deploy the solution in the real world; and finally, we observe a new issue or challenge that we need to study. My position at Waterloo will serve as an incubator for inventing and evaluating new solutions. My practice offers me the chance to deploy the solutions and to observe new issues and challenges to take back to the lab.

Research ethics is a valid concern and a topic of considerable public interest. Whenever there is a profit motive involved, it is important to make sure that the research and commercial interests are transparent and aligned. While my objective in all endeavors is to improve the state of the art, the standard of reasonableness in the lab is very different from the one in actual matters. As a researcher, you are always striving to achieve the best that can possibly be achieved, but in practice, that may not always be reasonable under the circumstances. Looking at the issue from another angle, my ultimate success – whether as an academic or as an e-discovery practitioner – depends upon my maintaining my reputation as an advocate for sensible best practices, as a person of integrity, and as a “straight shooter” who calls it like she sees it.

You hold several patents in this area. What role do they play in your work? Will you license them to other providers or directly to clients?

The primary purpose of our patents is defensive; that is, if we don't patent our work, someone else will, and that could inhibit us from being able to use it. Similarly, if we don't protect the marks “Continuous Active Learning” and “CAL” from being diluted or misused, they may go the same route as technology-assisted review and TAR. So, while securing and protecting IP isn’t high on our list of favorite activities, it’s a necessity.

What is the need for further research in this area? Do we need a new TREC-like effort? What is needed to help settle open questions around TAR?

Gordon's and my primary research focus this past year has been on the question of “when to stop?” In fact, Gordon will be presenting a new paper we wrote next week at SIGIR 2016 in Italy. The paper is titled “Engineering Quality and Reliability in Technology-Assisted Review,” which, we think, aptly captures our present research focus. We are proud of this paper, and hope it will have the same kind of impact as our JOLT and SIGIR 2014 CAL papers, but it only scratches the surface of the complicated problem of what constitutes a reasonable (or proportionate) search.

At the same time as we are focusing on evaluation methods, we are continuing to pursue our goal of developing a self-driving TAR, where the only input is a few search terms to start the process, followed by human coding of documents selected by the system, until the system can no longer find any more relevant documents. Your readers can play with our free, on-line prototype of this technology (http://cormack.uwaterloo.ca/caldemo/), which we discussed in an article we published in the April/May edition of Practical Law Journal—Litigation.

Since 2015, there has also been a new TREC effort: The Total Recall Track, now in its second year. It involves a high-recall information retrieval task. The most significant result from 2015 was that fully automated TAR methods worked quite well; so did other methods that were more (human) labor-intensive.

What is it like working with Gordon? Is it hard to stop talking about TAR, or can you turn off that part of your brain after work?

Well, anyone who has ever met Gordon knows that he is unique and special. We both have a tendency to eat, drink, and sleep TAR. When we want to get away from it, we go on long, unplanned road trips. Moose, elk, mountain sheep, bears, kangaroos, and wombats are all worthy diversions.


©2016 Association of Certified E-Discovery Specialists
All Rights Reserved