With recent events highlighting the indispensability of lawyers, Koli Mitra, our Global Editor, chats with attorney Cathryn Zahn to probe the issue of automation and its impact on the performance of legal tasks.
KM: Recently there was a lawyers’ strike in Kolkata that raised some significant issues. What’s the lesson? Is it that lawyers’ demands should be met, or that businesses should think seriously about filling their legal needs with Artificial Intelligence (AI)? You have done extensive work with eDiscovery systems and that is an area where the automation threat is frequently touted as imminent. Do you think AI will displace human lawyers in certain practice areas, like eDiscovery and document review?
CZ: Not any time soon. Eventually, of course, automation will displace all workers, including document reviewers and other attorneys, but right now the technology is just not ready. These tasks require a higher level of interpretive intelligence. Machines can’t replicate that yet. Lawyers are not about to lose their jobs to technology in the near future.
KM: So, we’re a couple of lawyers talking shop, but we should provide a little background for our readers. “Discovery” is the legal mechanism by which parties in a lawsuit compel each other to produce documents into evidence – including all forms of recorded communications. Many attorney-hours go into reviewing these documents. Over the last couple of decades, document review has become almost entirely electronic, as most litigants (individuals, businesses, and other organizations) have switched to electronic systems for generating and storing documents. An entire industry called “eDiscovery” has emerged to provide law firms with the technological and administrative support for document review and production. But at first, didn’t some people worry that streamlining the review process through digitization would put a lot of lawyers out of work?
CZ: Absolutely. But that never happened. In the past, there were far fewer documents to get through. It was mostly just official memos, letters, contracts, and the like. These were usually saved in paper form, and lawyers had to sift through them by hand. But even as eDiscovery has simplified and sped up document review, discovery attorneys are busier than ever, because the digital revolution has also caused an explosion in the sheer volume of documents that are generated (and need to be reviewed): emails, texts, chat messages, transcribed voice memos, and more.
KM: But isn’t AI different from simply computerizing previously analog processes? It’s not just “automation” in the sense of being tools that know their functions and don’t need repeated instructions. It’s machine learning, which is more about pattern recognition and judgment and mimicking human cognitive behavior.
CZ: Those might be the goals of AI, but the capacity isn’t there in eDiscovery. In my experience, the biggest impact of AI on eDiscovery is the move toward predictive coding, which is a machine learning process used to cull the most potentially relevant information out of the gigantic databases that every organization has these days. It’s really a tool to assist lawyers in their work.
KM: Not an artificial lawyer that will replace them?
CZ: Not quite!
KM: Tell us about how predictive coding works.
CZ: It works by having lawyers do a first-pass review on a small sample set of documents and using the results to train the software for the next level of review. The software analyzes how the lawyers have “coded” the documents (meaning they have marked each document as relevant or not relevant to a particular issue and added pertinent information they want the software to learn to recognize). Based on that human input, the software teaches itself to recognize patterns in what makes a document potentially interesting to the humans. It then applies what it has learned to the whole database and pulls the documents it “predicts” will most likely be of interest to the human lawyers. Next, the lawyers do a higher-level review of the documents the software has identified for them. Sometimes the technology is more basic and simply identifies all documents that hit certain search terms. That means the volume of documents lawyers have to read is much higher, but there is less risk of error, since we don’t have to worry about whether the machine learning process (and therefore its predictions) went wrong in some way. In any case, an actual lawyer’s eyes must be on the document at some point.
KM: And really, there is no way to check the AI’s learning process, because it happens at the machine level, hidden from view. We can’t verify its logical soundness without dedicating massive human hours to replicating and studying the process, which defeats the purpose of training the AI in the first place. The key problem is that software “thinks” differently from the way we think. No matter how “smart” a machine is, it is (at least currently) a vastly different kind of intelligence from human intelligence. Machines are great at computations and quantitative analytics on enormous amounts of data at very high speeds. People find this impressive because we can’t do it. But when it comes to qualitative analysis, the interpretation of words and thoughts, the things that seem very simple and intuitive to us are next to impossible for technology to tackle. Machines don’t have instinct or flashes of insight. They have architecture. They have algorithms. They are still very pre-structured in many ways, although AI is supposed to evolve beyond that. And it might be well on its way. But quite possibly, there are just limits to what machines can do.
CZ: And it doesn’t help that tech developers are not lawyers. They often don’t understand how legal analysis works. And lawyers don’t fully understand how machine logic works. So, there is a gap between what lawyers do and what the technology is being designed to do.
KM: There are, of course, apps that can research and answer a host of legal questions, like the one powered by IBM’s Watson, called ROSS. But that’s also more like a research tool than a true dispenser of legal advice, despite being hailed as the “world’s first artificially intelligent attorney” back when it was new. And it’s rather telling that apps like these are usually not backed by assurances or assumption of liability, the way a human lawyer’s work would be. Are you skeptical about whether law firms will ever accept liability on behalf of some software, especially since they can’t fully test or vouch for the underlying decision-making process?
CZ: It will be a long time, even after the technology is ready. Law practice is a conservative field. It’s slow to adapt and even somewhat resistant to certain kinds of change. Imagine a 70-plus-year-old senior partner – a stodgy old man – trying to decide whether to let AI handle some basic legal tasks. He doesn’t trust it. He wants an intelligent, legally trained human to read a document and tell him if it’s relevant. That’s the process he knows and trusts and will stand by. And that matters to clients. Automation is always a good cost-cutting measure, but legal services is an area where clients are focused on quality and expect their lawyers to stand behind the quality of their service.