Whither Predictive Coding?


In my previous post, I identified the principal reasons I believe Predictive Coding or Technology Assisted Review (“TAR”) has not yet caught on in mainstream litigation.  Let me summarize very briefly: complexity, opacity, and cost.  That is, most TAR systems are difficult to set up, difficult to use, difficult to understand, and usually expensive.  

So, what is DISCO’s approach and how is it different?     

DISCO is a backwards company 

By that I mean we do not believe in building software that requires users to substantially modify their workflow.  Instead, DISCO starts with users’ preferred workflows and works backwards, building software to match users’ needs so they can focus more on the merits of the case (our unofficial motto is “We automate … You litigate.”).

In that way, DISCO’s “backwards” development of TAR began with observing users’ needs across reviews (coupled with our in-house counsels’ varied experiences).  The first thing that became apparent when analyzing reviews across varying jurisdictions, case types, sizes, deadlines, and lead counsel is that there is no one uniform method of handling a review.  But we did detect trends that allowed us to generalize and identify several areas where we thought TAR might help our users.

We then prepared a requirements list for our engineers and data scientists.  We wanted a system for our users that was:

  • Accurate.  Obviously, without accuracy there is no real “A” (“assist”) in TAR. To achieve state-of-the-art quality, DISCO hired a head of data science with a doctorate in artificial intelligence (AI) and expertise in the most current machine learning techniques to create the best possible algorithm on the market. As a result, DISCO is the first to market in the legal space with breakthrough deep learning methods, combining Google’s Word2Vec technology with convolutional neural networks to yield highly accurate tag scores (a sketch of this style of architecture appears after this list).
  • Simple.  Our goal was to use software to make reviews easier, not harder.  DISCO’s system would require no complex setup: no seed sets, no continuous setting adjustments, no charts and graphs to master, and no new terms to learn.  Most litigators have never used TAR, so the goal was to serve the needs of the majority, not just expert-level predictive coding users.
  • Flexible.  No two reviews are identical, nor does a review typically remain static over the lifespan of a case. Therefore, DISCO’s design would have to work seamlessly with our existing ediscovery software, such that it could be turned on, off, or on again for any need our users might have at any time.
  • Sortable.  Our algorithm produces a “score” for every tag as applied to every document.  This score lets users predictively “sort” each document by each tag (or review decision).  Sorting is critical for all manner of workflow management, including instances when the review project timeline is tight (which on some level is almost always true).
  • Searchable.  We wanted the scores that predict the likelihood of each tag to be searchable, immediately returning results for any tag (or combination of tags) across any document set or subset, such as only those documents scored particularly highly for “Responsive.”  This is useful for numerous workflow needs but flexible enough for occasional use as well.
  • Adaptable.  Because issues change throughout the course of a case or a review, and because we wanted the most precise system we could get for our users (without complex setup costs), we needed a system that “learns” continuously.  This has the obvious advantage of getting better and better as a review progresses.
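For readers who want a concrete picture of the “Accurate” point above, here is a minimal sketch, in PyTorch, of the kind of architecture described: Word2Vec-style word embeddings feeding a convolutional network that emits a score for every tag on every document. This is a hypothetical illustration, not DISCO’s code; the class name, layer sizes, and vocabulary size are all assumptions.

```python
# Hypothetical sketch only -- not DISCO's actual code. A classic text-CNN:
# word embeddings (Word2Vec-style) feed parallel convolutions over word
# windows, and the network emits one score per tag for each document.
import torch
import torch.nn as nn

VOCAB_SIZE = 50_000   # assumed vocabulary size
EMBED_DIM = 300       # typical Word2Vec embedding width
NUM_TAGS = 8          # one output score per review tag (assumed)

class TagScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # In practice this table would be initialized from Word2Vec
        # vectors trained on the review corpus.
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        # Convolutions over word windows of width 3, 4, and 5.
        self.convs = nn.ModuleList(
            nn.Conv1d(EMBED_DIM, 100, kernel_size=k) for k in (3, 4, 5)
        )
        self.out = nn.Linear(3 * 100, NUM_TAGS)

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)   # (batch, dim, seq_len)
        # Max-pool each convolution's output over the sequence dimension.
        feats = [conv(x).relu().max(dim=2).values for conv in self.convs]
        # Sigmoid turns logits into independent per-tag scores in [0, 1].
        return torch.sigmoid(self.out(torch.cat(feats, dim=1)))

# Score two 128-token documents against all tags:
scores = TagScorer()(torch.randint(0, VOCAB_SIZE, (2, 128)))  # shape (2, 8)
```

In a live system, such a network would also be retrained continuously as reviewers apply (or decline to apply) tags, which is the “Adaptable” requirement above.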

After more than a year of development, we are now approaching a general release of these new features. Selected clients are already using an early version of DISCO ML, and their feedback is very promising. When the system is opened to all of our customers later this year or early next, we believe it will have a transformative impact on the use of machine learning in our industry.

How do we anticipate DISCO ML will be used by a typical case team?  

We have some ideas outlined below, but first a caveat:  DISCO didn’t design its TAR to replace lawyers in a document review; that is, it’s not intended to actually tag documents.  Our survey of reviews suggested that the (often silent) vast majority of document review teams neither need nor want that type of software, at least in the near term.

So, how could DISCO’s system of tag predictions help a case team (who likely have little or no predictive coding experience or AI training) do what they do better? And more importantly, how can tag predictions be used in virtually every case of any size? Here are some of our thoughts:

  • Prioritizing the order of a review.  The order of a review can be important.  Reviews are often of opposing-party productions with looming depositions, essentially creating a triage situation.  Prioritizing review by issue prediction can often assist more than a simple random or linear review order.  Similarly, a document review for a production might be most effective when the “likely responsive” group is placed first, maximizing the opportunity to review that set early (see the sketch after this list).
  • Finding critical facts fast.  DISCO’s tag system allows users to create any number of tags.  Once a tag is applied to documents, the algorithm begins analyzing the characteristics of those documents against every other document in the database to find those that seem to share them.  That is, the algorithm asks why the user applied the tag and what other documents the user might think should also have it.  The name of the tag does not matter, so you could create a tag “Black” to apply to any documents (such as those received from an opposing party) that mention a company’s profits or any financial transactions.  As you start finding those documents and applying the tag by word searching or linear review, the software will “learn” the pattern and score every document by the likelihood that it does (or does not) include the relevant facts (i.e., those that should receive the “Black” tag).  After a short period of review, users can then search or sort the entire set by the highest likelihood to receive the “Black” tag. The more documents the reviewer tags (or declines to tag), the more the machine learns about the “Black” tag, and its predictions improve over time. In this way, reviewers can find facts faster, without trying to manually search for every way a transaction can be described or profits reported.
  • Suggestions of each “tag” for every document.  Because DISCO’s algorithm scores every document for every tag, the system essentially provides “suggestions” of what (and what not) to tag.  These can offer guidance for each document, picking up on subtle clues that a cursory review might miss. For example, a strong privilege tag prediction acts as a “red flag,” keeping reviewers mindful that a document might be privileged.
  • Managing a document review workflow.  Since all documents are scored and predicted, review managers can use those estimates to assign work more effectively.  For example, documents predicted to correlate highly with Issue A can be routed to reviewers with Issue A specialty (e.g., a particular subject matter, such as a relevant technology in a patent case, or a specific element of damages).
  • Early case analysis of unreviewed sets of documents.  If every document is scored for every tag (and the entirety is searchable), the predictions reveal what is in the “unreviewed” portion.  Coupled with something like a visualization of the data set or particular search terms, one can learn quickly what is “likely” in the set. Think of it as a report from a junior associate who has skimmed thousands of documents to provide a preliminary summary.  This can help set up a review (e.g., how many reviewers and the “level” of reviewer necessary) or provide estimates for other purposes.
  • Locating documents after keyword and other “traditional” methods have been exhausted.  The fact is, most of us are not human thesauri (don’t worry, I looked up the plural form), and even if we were, we would not be able to capture all the possible ways our clients and their employees refer to various and sundry subjects in emails.  A good predictive algorithm can supplement these efforts by finding patterns in previously categorized documents that humans might miss.  In short, the DISCO algorithm suggests additional documents that might meet the relevance criteria (think of it as a “more like this” button applied to a review decision, such as “Responsive”).
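To make the prioritization and score-search ideas above concrete, here is a minimal sketch of how per-document tag scores could support those workflows. It assumes each document carries a score in [0, 1] for every tag; the names `documents`, `prioritize`, and `search_by_score` are illustrative, not DISCO’s API.

```python
# A hypothetical illustration of the score-driven workflows above. It assumes
# each document carries a dict of per-tag scores in [0, 1]; all names here
# are illustrative, not DISCO's actual API.

documents = [
    {"id": "DOC-001", "scores": {"Responsive": 0.92, "Privileged": 0.08}},
    {"id": "DOC-002", "scores": {"Responsive": 0.31, "Privileged": 0.67}},
    {"id": "DOC-003", "scores": {"Responsive": 0.88, "Privileged": 0.02}},
]

def prioritize(docs, tag):
    """Order a review queue by predicted likelihood of a tag (triage)."""
    return sorted(docs, key=lambda d: d["scores"][tag], reverse=True)

def search_by_score(docs, tag, threshold):
    """Treat scores as searchable fields: keep documents above a cutoff."""
    return [d for d in docs if d["scores"][tag] >= threshold]

# Review the likely-responsive group first (prioritizing review order):
for doc in prioritize(documents, "Responsive"):
    print(doc["id"], doc["scores"]["Responsive"])

# Red-flag possibly privileged documents for a second look (suggestions):
print([d["id"] for d in search_by_score(documents, "Privileged", 0.5)])
```

The same scores drive the other ideas on the list: assignment by issue (group documents by their highest-scoring tag), privilege red flags (a threshold search on the privilege tag), and early case analysis (aggregate score distributions over the unreviewed set).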

Of course, users can adopt some or all of these ideas in conjunction with a traditional review, if and when they meet a review’s needs at any particular time.  But they can be a tremendous assistance to any review.  And not one of these ideas would necessarily change anything users have done in the past from the standpoint of a “traditional” review: they are all simply aids to a more thorough document review, which is certainly a factor in deciding to what extent opposing-counsel agreement or judicial blessing is warranted.  All in all, we think DISCO ML will change legal teams’ collective perception of “predictive coding.”

Scott Upchurch

Scott Upchurch is the Senior Director of Product Strategy and an Associate General Counsel at DISCO. Prior to joining DISCO, Scott was a civil litigator for 15 years, representing both plaintiffs and defendants in various federal and state courts, as well as before arbitration panels. Scott earned his J.D. from the University of Chicago.
