Stop fantasising about replacing juniors with AI
And the top 3 reasons why it's not happening anytime soon
Newsletter #2
Hang in there until the end for Finnish Enlightenment
A word from the person behind the laptop
We’re back! Thanks for the great feedback on my first newsletter – 500 (!) readers and a 100% open rate. I guess that means I’ll continue to grace your inbox each week.
This week, we're diving headfirst into people's bizarre obsession with replacing legal juniors. We’ll also discuss the legal advisor from hell and the unravelling of the enigma that is Stability AI, which (surprise!) is not as stable as it sounds.
Hang around for a Finnish twist at the end. And no, it doesn’t include saunas or karaoke, I promise.
Kiitos!
Stop fantasising about replacing juniors with AI
What’s up with this strange fantasy of getting rid of juniors? I keep hearing this line echoed everywhere in legal circles: “AI's first victims in law will be the interns and junior lawyers”. It's like a broken record by now. Whether in conversations about giant law firms, in-house legal teams, or government legal departments – they all say the same.
I was even talking to an acquaintance the other day with no relation to either tech or the legal sector, and the same mantra was being repeated again and again: “The legal juniors are f****, right?”
So let’s have a look at just how close the juniors are to being on the chopping block.
To be fair: juniors are pretty intimidating
Despite the hysteria, it's not exactly time for the interns to start packing their briefcases. There are several reasons for this, but let me start with the most obvious one: juniors are young and will adapt faster.
Law graduates bring a tech-savvy, change-embracing approach that could reshape legal practice with efficient, cost-effective processes. Their fresh perspectives might innovate legal services and shift market expectations, challenging the notorious billable-hour model. This presents a significant challenge to traditional practices, prompting senior lawyers to question whether they can match the juniors' pace of adaptation.
Many are trying to frame it the other way around, though: juniors won't be needed because AI can solve their tasks better and cheaper – best exemplified by a legaltech CEO in this Bloomberg article:
“With AI replacing junior associates on legal research, document review, and document management tasks, partners will be freed to hone business strategies with their clients, deepen their relationships, and leverage their unique capabilities.”
Partners and management want small, nimble teams, so AI will supposedly take over the legal tasks that juniors used to handle. I might be old, but back in the day you'd bring in juniors to join the conversation with clients – and eventually even to take over the dialogue.
Clearly these people need to spend some more time working with ChatGPT as a colleague before they bring the hallucinating bot to the client meeting.
You can’t blame the bot
Another fact: being human has its perks, especially when you need someone to point a finger at after an error in legal analysis. This is called accountability, which is crucial in law, as even small mistakes can lead to significant consequences – hence why people pay so much for legal services.
As a new lawyer straight outta uni, you're bound to make mistakes – and you'll be held accountable by people more senior than you. Hopefully you'll learn through experience, trial, and error. Establishing the same kind of accountability and trust with AI used for legal tasks is far more challenging. Still, such software could serve as a supportive tool for junior lawyers, aiding their development and enhancing their work.
Investing in training juniors (including in how to use AI tools) offers surprising economic benefits over replacing them, and more companies are catching on. For now, though, juniors often have to navigate AI learning on their own.
Still waiting for the revolution
By now, we should grasp that OpenAI's GPT-3.5 and GPT-4 – and even Claude 3 – aren't ready to replace junior lawyers. They struggle with privacy, reliability, and explainability, to name a few issues. Legal-sector companies are recognising and addressing these gaps.
What tools are refining AI performance? Here’s a brief rundown of three of the most talked about:
1. Retrieval Augmented Generation (RAG): This is a key method for addressing issues of inaccuracy or 'hallucinations' in Large Language Models (LLMs).
RAG operates by referencing external documents to give the model a more grounded output. The process, however, isn't straightforward. It involves intricate steps such as ingesting, segmenting, compressing, storing, indexing, retrieving, ranking, and updating data for queries. Even when the right data processing and technologies are employed – think vector databases, indexing, and effective search and ranking systems – RAG can still fail. The entire process (surprisingly) takes a lot of time and resources, which most organisations don't prioritise.
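To make those steps concrete, here's a deliberately toy sketch of the pipeline in Python. Plain word overlap stands in for embeddings and a vector database, and the sample contract text is invented for illustration – a real system would use far more sophisticated retrieval:

```python
# Toy sketch of a RAG pipeline: ingest, segment, retrieve, and build a
# grounded prompt. Word overlap stands in for vector similarity here.
from collections import Counter

def segment(document: str, chunk_size: int = 30) -> list[str]:
    """Split a document into fixed-size word chunks (the 'segmenting' step)."""
    words = document.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def score(query: str, chunk: str) -> int:
    """Rank chunks by crude word overlap (stand-in for embedding similarity)."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    return sum((q & c).values())

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the best-matching chunks (the 'retrieving and ranking' steps)."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:top_k]

def build_prompt(query: str, context_chunks: list[str]) -> str:
    """Ground the model's answer in retrieved text instead of its memory."""
    context = "\n---\n".join(context_chunks)
    return (f"Answer using ONLY the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# Invented sample document for illustration:
contract = ("The tenant shall pay rent on the first day of each month. "
            "Late payment incurs a fee of 2 percent. "
            "The landlord must give 30 days notice before termination.")
chunks = segment(contract, chunk_size=10)
print(build_prompt("What notice must the landlord give?",
                   retrieve("notice landlord give", chunks)))
```

The point of the exercise: the model answers from retrieved text, not memory – which is exactly where hallucinations are supposed to be reined in, and exactly where each of those pipeline steps can quietly go wrong.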
2. Context length: Significant progress has been made in the so-called “context length” capabilities of LLMs. What does that mean?
It's basically about how much text an AI model can chew on at once. In a legal context, where people are often swimming in oceans of text, the ability to process vast amounts of information is crucial: the longer the context length, the more useful the AI becomes as a tool for lawyers. Context length is expected to double roughly every two years.
Anthropic has long been at the forefront of this development with its model Claude, which currently offers a context window of 200k tokens (roughly 150,000 words).
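As a rough illustration of what those numbers mean in practice, here's a back-of-the-envelope check of whether a document fits in a context window. The 0.75 words-per-token ratio is a common rule of thumb for English text, not an exact tokenizer count:

```python
# Back-of-the-envelope context-window check. The ratio is a heuristic:
# real tokenizers produce different counts depending on the text.
WORDS_PER_TOKEN = 0.75  # rough rule of thumb for English prose

def estimated_tokens(text: str) -> int:
    """Estimate token count from word count."""
    return round(len(text.split()) / WORDS_PER_TOKEN)

def fits_in_context(text: str, context_window: int = 200_000) -> bool:
    """Check whether the text fits in a given context window."""
    return estimated_tokens(text) <= context_window

# A 150,000-word brief lands right at the edge of a 200k-token window:
brief = "word " * 150_000
print(estimated_tokens(brief))   # → 200000
print(fits_in_context(brief))    # → True
```

This is also why context length matters so much for legal work: a due-diligence data room easily blows past any current window, so documents still have to be chunked and retrieved rather than pasted in whole.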
3. Fine-tuning: Fine-tuning was initially hailed as the secret sauce for getting LLMs to perform perfectly in a legal context. You'd take an LLM, tune a couple of parameters, and – BOOM! – all of a sudden you'd have this amazing model spitting out legal analysis without problems.
It kind of reminds me of the tough Fila-wearing boys from my hometown, talking about tuning their Yamaha Jog scooters: “I swear it will go from 0 to 100 in 5 seconds after I tune it with this new technique.” Often, the result was a scooter barely reaching 37 km/h, sounding like a sick seagull, with no way to slow down again.
Fine-tuning, as a standalone solution, isn’t the panacea it was hoped to be. Integrating it with other methods like RAG is essential for creating an effective internal AI tool – assuming you have sufficient data. However, the final outcome might still not meet high expectations.
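For the curious: fine-tuning starts with preparing training data, and that's where the "assuming you have sufficient data" caveat bites. Here's a minimal sketch of assembling examples into the chat-style JSONL format that several fine-tuning APIs accept – the Q&A pairs are invented for illustration, and a real run would need thousands of carefully vetted examples:

```python
# Minimal sketch: turn Q&A pairs into chat-style JSONL fine-tuning data
# (one JSON record per line). The example pairs below are invented.
import json

examples = [
    {"question": "What is the notice period in clause 4?",
     "answer": "Clause 4 requires 30 days written notice."},
    {"question": "Who bears the late-payment fee?",
     "answer": "The tenant, at 2 percent per overdue month."},
]

def to_jsonl(pairs: list[dict]) -> str:
    """Serialise Q&A pairs as one chat-format JSON record per line."""
    lines = []
    for pair in pairs:
        record = {"messages": [
            {"role": "system", "content": "You are a careful legal assistant."},
            {"role": "user", "content": pair["question"]},
            {"role": "assistant", "content": pair["answer"]},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

print(to_jsonl(examples))
```

Two invented examples obviously won't teach a model anything – which is the scooter-tuning problem in a nutshell: the technique is simple to describe, but the quality and volume of the data decide whether you get a race bike or a sick seagull.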
Advanced techniques are not advanced enough
Advanced techniques in LLMs are impressive but haven't reached the level where they can replace junior lawyers. They may speed up some tasks, but we're still in the early stages. These techniques will of course improve rapidly, so I'll revisit them all in future newsletters.
In law, clarity of legal reasoning is essential, and currently AI's explainability and reliability still fall short. I don't know a single lawyer who wants to be accountable for the decisions made by an LLM, and the tech is surely not quite there yet – but it is getting closer.
So, while there's buzz about juniors becoming obsolete, they're here to stay, perhaps with an AI tool to ease their workload and let them focus on more complex matters.
In other news…
Stability AI is not stable anymore
Stability AI's recent turmoil – unpaid dues, contractual mishaps, and a challenging encounter with Nvidia's Jensen Huang – along with an “acquisition” by Microsoft, reveals the first cracks in the supercharged AI growth curve, signalling potential legal entanglements in the coming months.
Data on wheels
It's one thing for car AI systems to use your data, but sharing it with insurers? That's a new legal battleground. As carmakers shift gears into collecting and sharing more and more data, they're navigating a complex web of privacy and consent issues. Fasten your seatbelts and read the full story here.
Outlaw AI legal advisor
NYC's government, in a bid to modernise, rolled out an AI chatbot for entrepreneurs. This digital guide was meant to ease the pain of navigating the business world in the city. But, plot twist: it's been leading businesses astray – not only giving wrong legal advice, but actively urging them to break the law.
Startup story of the week
In my research for this week's newsletter, I stumbled upon YC-backed legaltech company Leya, an AI assistant aimed at improving legal research and workflow efficiency. Tbh, I have no idea how it works, but I like the idea of integrating the AI functionality into a well-designed workflow instead of a Word plugin. How much customisation this will need is a good question, but kudos for the approach and design.
Learn something new today
Back in my uni days, I kicked off the first-ever coding course for law students. The resources were decent, but nothing to write home about – the first many hours were spent just setting everything up, and it was kind of a hassle.
The University of Helsinki saves the day with its free online Python course. "Introduction to Python" is a game-changer for beginners aspiring to learn the fundamentals of software development. Many AI applications are built with Python, so it might be a good way to set up your own bot.
Start coding straight away here.
Extra toppings
The latest episode of The Ezra Klein Show, “How should I be using A.I. right now?”, features Ethan Mollick, who teaches about the effects of AI on work and education. I really recommend listening to this podcast to get an idea of how to make AI useful, and what some of the near-future prospects are. Probably not as crazy as many people think – but crazy enough.



