I’ve previously gushed about my love for stone tools and this post is an expansion on some practical matters regarding how AI will affect my job.
I already use ChatGPT-4 in my work, but only in a limited fashion so far. Sometimes I feed it text and ask it to revise it, and sometimes I treat it as a superior version of Wikipedia and ask it questions about DNA analysis (I know not to trust its answers at face value, but it's invaluable as a starting foundation). When it comes to playing around with AI, I'm already way ahead of any of my colleagues, and I was flabbergasted to meet a few who were my age and had somehow never played around with ChatGPT or its ilk.
There are a lot of tasks I expect to fully outsource to ChatGPT. The ones I'm most thrilled about are using it to look up cases and synthesize caselaw from disparate scenarios, and using it to write briefs directly applicable to the fact scenario I give it. That alone will save me countless tedious hours. But I'm not at all worried about my entire job being replaced, and not because I'm deluded enough to think I'm irreplaceable.
There's a scene in the 1959 movie Anatomy of a Murder showing the defense attorney perusing the shelves of a law library. Back in the day, if you wanted to look up cases, you had to crack open heavy tomes (called case law reporters) where individual decisions were catalogued. One of the perennially vexing issues with legal research in a Common Law system is keeping track of which cases are still considered “good law”, as in whether or not they've been abrogated, overturned, reaffirmed, questioned, or distinguished by a later opinion or a higher court. Back then, this was impossible to do on your own. If you found a case from 20 years ago, it was flatly not possible to read through every court decision from every appellate level over the following 20 years to see if any of them pruned the case you're interested in.
The solution was created by the salesman and non-lawyer Frank Shepard in 1873, when he started cataloguing every citation used by any given court case. These indexes would be periodically reviewed, and Shepard would sell sticky perforated sheets that you could tear off and stick on top of the relevant case inside a reporter compilation. These lists would tell you at a glance where else the case had been cited, and whether it was treated positively or negatively. The procedure back then was, whenever you found a relevant case, to consult Shepard's index and ensure it was still “good law.” Every legal database has this basic feature nowadays, but to this day the act of checking whether a case is still good is referred to as Shepardizing.
Consider also what transpired before computer search was a thing.[1] Here too, legal publishers rushed to fill the gap and created their own indexes of topics known as “headnotes”, typically prepared by lawyers who were experts in their respective fields. The indexes they created were sometimes nonsensically organized and often missed issues, but overall, if you wanted to find all cases that addressed, say, “damages from missed payments in the fishing industry”, looking up headnotes was obviously much better than just sifting through a random tome.
Legal research has gotten way easier with searchable databases available to everyone, and job expectations have gone up in proportion. This tracks developments elsewhere. I don't know what explains the rapid rise of serial killers throughout the '70s and '80s, but the decline isn't that surprising: it's just so much harder to commit a crime and get away with it nowadays. A murder investigation in the 1950s might get lucky with a fingerprint but would otherwise be heavily reliant on eyewitness testimony and alibi investigations (this is part of a long tradition, and explains why trials and rules of evidence revolve so much around witness testimony). Now, a relatively simple case generates a fuckton of discovery for me to sift through: dozens of cameras, hundreds of hours of footage, tons of photographs, a laser scan of the entire scene, the contents of entire cell phones, audio recordings of the computer-aided dispatch for the previous 12 hours, and on and on.
All of this can fit nicely on my laptop, and though I can ask for help, I'm generally expected to have the tools to pursue the case on my own. After all, I don't rely on a secretary to type up the briefs I dictate, nor would I need a paralegal to organize hundreds of VHS tapes. The development that seems obvious to me is that our workload expectations will just go up, with the accurate understanding that modern tools make it easier to handle more.[2]
I’ve commented before on how averse courtrooms are to technology, and that’s true of the legal field in general (most likely a result of a traditional hierarchy in which the oldest are generally endowed with the most power over the profession). When I worked at the ACLU, one of our tasks was to monitor every single proposed bill introduced at the legislature, see if it had a civil liberties component, and grade it accordingly. We’d get bombarded with endless amendments and revisions, and the ACLU staff attorneys reviewed each iteration by opening the PDFs side by side and going through them manually, line by line.
There was a day when I was literally marched around the office and announced via the dramatic “You won’t believe what Yassine just showed me!” That day was when I told everyone about Adobe Acrobat’s file comparison feature.
I'm already encountering some panic among local public defense leadership, who want to ban ChatGPT completely. I've had to patiently explain to them that this is a reflexive overreaction, completely unenforceable, and likely to be moot as big tech continues to jump on the bandwagon with products like Microsoft Copilot. I don't think the aversion will last long, though: the benefits are too blatant and way too valuable to pass up, and part of the argument I made to local leadership is that prosecutors and law enforcement are definitely already using LLMs to assist with the tedium.
I’m not a Luddite, neither practically nor philosophically. The Luddites were English textile workers who, between 1811 and 1816, were so concerned about burgeoning industrial competition that they started destroying textile factory machinery. This culminated in a region-wide rebellion that was only suppressed with legal and military force. Nowadays the movement is the target of general derision, mocked for its quixotic and delusional belief that it could somehow stop technological progress. To put things in context, though, many of the Luddites were destitute artisans who literally knew no other way to make a living. Their entire being was honed towards the singular purpose of turning strands into cloth, and now some inanimate object had demonstrated how useless their life’s efforts were.
In contrast, I’m speaking from a position of material privilege when I declare that if any technology renders all my skills obsolete, then so be it. I’m certainly attached to how I make a living and to the niche expertise I’ve developed within my chosen profession, but my overall skillset is versatile enough to deal with potential changes. I’m not and shouldn’t be the poster boy for the costs of technological progress, especially when I think my profession’s very existence is a societal failure.
Normal people should not need an ordained specialist like myself to navigate basic aspects of the world, like starting a business, getting divorced, or just avoiding jail. My hope is that LLMs make legal issues dramatically more accessible. The legal code is currently written by lawyers for other lawyers, but normal people are expected to know and abide by it. I already plug statutes into ChatGPT and ask it to explain them to me, because not even I can be bothered to machete-chop through dense legalese, and it does an amazing job. To the extent that I anticipate having any residual value, it would be as a measure of accountability. The ability to say “I consulted with a lawyer” will continue to carry weight in ways that “I asked ChatGPT” won't. But again, the day my profession becomes obsolete would be a happy day for society at large.
I do wonder what equilibrium we'd settle into: would the law become more understandable thanks to LLMs' ability to explain and summarize it, or would it become even more complicated thanks to legislatures going hog-wild with LLMs' ability to generate infinite spools of text?
[1] Fun/sad fact: at least as of 2011, 90% of people did not know about CTRL+F.
[2] I can’t prove it, but I would basically guarantee that plenty of judges and their newly graduated clerks are already using ChatGPT to cut down on their workload while keeping quiet about it.
I would be very careful about things like that. ChatGPT frequently hallucinates information that sounds completely believable.
For example, I recently asked it for a simple JavaScript function to take in a list of objects and split it into N smaller lists. It returned a function that it claimed could do this, along with a long explanation of exactly how it worked, which looked reasonable. I tested the function on lists of length 1, 2, 3, and 4; it seemed to work fine. The next day I discovered a bug in my program; after spending quite a while tracking it down, it turned out ChatGPT's code fails when asked to divide a list into 20 smaller lists. I looked into what the function was actually doing, and under the hood it was doing something completely different from what I had asked for, something that only happened to return correct answers for certain small inputs.
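For contrast, here is one way the requested behavior could be written correctly. This is my own sketch, not anything ChatGPT produced; the function name and the choice to spread leftover elements across the earliest sublists are my assumptions. The point is that it behaves consistently for any n, including the n = 20 case that tripped up the generated code:

```javascript
// Split `items` into `n` sublists whose lengths differ by at most one.
// Example: splitIntoN([1, 2, 3, 4, 5], 2) yields [[1, 2, 3], [4, 5]].
function splitIntoN(items, n) {
  const base = Math.floor(items.length / n); // minimum length of each sublist
  let extra = items.length % n; // the first `extra` sublists get one more element
  const result = [];
  let start = 0;
  for (let i = 0; i < n; i++) {
    const size = base + (extra > 0 ? 1 : 0);
    if (extra > 0) extra--;
    result.push(items.slice(start, start + size));
    start += size;
  }
  return result;
}
```

Note that testing only tiny inputs, as in the anecdote above, would not distinguish this from a subtly wrong version; something like splitting 100 items into 20 lists is exactly the kind of case worth checking.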
This sort of behavior is common; it's trying to generate text that sounds realistic, not text that's actually true. Here's a similar story from an acquaintance trying to use it to find a physics paper: https://imgur.com/a/zNafwRy
My general approach is to only use ChatGPT for things I can verify myself afterwards. E.g. I might give it a description of a concept or event and ask it what it's named, but I'll never trust the name it gives me without googling it and verifying that it's actually correct. I would be really hesitant to use it to summarize a longer text, because I have no way to verify that the summary is correct, and I think it's quite likely to contain errors.
I think you are right that LLMs (probably specialised tools, not vanilla ChatGPT) can automate away a hell of a lot of what lawyers do. I wonder about the protectionist response.
On the one hand, American lawyers have the most powerful guild on earth. Nothing will topple them without violence. On the other hand, that might not save the lowly lawyer. The leaders of big firms have a lot of profit they could capture by automating away the drudge work of their juniors. And that's where the political power is.