Newsletter #13
Hang in there for some online Harvey drama
A word from the person behind the laptop
Like a proper Continental, I’ve spent my July on a boat in Greece, guzzling Retsina and pretending Greek salad is a balanced meal, as long as it’s got a rooftop of feta.
Relatable video
With AI news flooding in like my inbox after a week off, I’ve decided to grace you with a rundown of the biggest AI updates from the summer. Yes, you’re welcome.
Next week, we’ll return to our regularly scheduled programming. Until then, endure!
1# Harvey drama on Twitter
We've discussed the actual value of an AI product like Harvey before in this newsletter: an AI legal tech product with a bunch of hard-to-find customers. Despite their numerous partnerships and truckloads of cash, trying to find a genuine customer review of Harvey is like hunting for Mew in the first couple of Pokémon games.
This summer, they raked in another $100 million, prompting even non-legal folks to raise their eyebrows. With a valuation like that, you'd think there'd be at least one review floating around somewhere. Even some anonymous profile talking shit on Reddit. But no.
Anyway. New drama kicked off with a tweet predicting Harvey's demise as "roadkill." The author later backtracked after Harvey's fundraising success (because money apparently equals credibility).
Yet quite a few people ran with the first statement - and surprisingly, many agreed. The real question is: how long can Harvey keep riding this gravy train if the product is simply not good enough? The error rate in the LLMs powering these products (GPT-4, as far as I know) might simply be too high for lawyers to accept the results.
In my experience, these products take a whole lot of customisation, which takes time and resources. If you add that to the cost of using the models, profitability takes another hit.
Anyway, some people say it’s all about the “land grab” and being first to market. Makes perfect sense if you’re into that sort of thing. The real entertainment will be watching whether big law firms end up saddled with hallucinating software, forcing lawyers to “delve” into the messy aftermath.
Time will tell.
If you - or anyone you know - have experience with Harvey and want to talk, please reach out to me. I’m curious to hear more.
2# A slow summer for Sam
Speaking of the models powering Harvey, OpenAI has had a pretty sluggish summer. They did a delayed release of the voice assistant and a prototype of SearchGPT, which I’ll dive into in a future newsletter, but the revolution is not exactly around the corner with these two updates.
But besides that, OpenAI seems to be accumulating new problems. I’ve mentioned the slew of pending legal cases, but now there’s a potential profitability issue on the horizon.
The burn rate is through the roof, and generative AI demands more energy than the power grid can handle. Training these models is equally untenable, thanks to ongoing legal woes (courtesy of some alleged theft) and the sheer volume of training data required.
This is all laid out and well-argued in Ed Zitron's newsletter.
On top of that, Microsoft - the sugar daddy to end all sugar daddies - is starting to consider OpenAI a competitor rather than a friend. This spells bad news for OpenAI, while probably not having much of an effect on Microsoft, which has access to the IP powering OpenAI’s models.
Personally, I still believe there must be some way to make this work. As I always say: this product is already useful as it is. But the path to profitability and a sustainable business model might look different from what we see today.
3# Who’s the big bad regulator now?
There are some potential new constraints on AI development - and they're NOT coming from the EU (written while enjoying the sun on a random Greek island).
California State Senator Scott Wiener is behind SB 1047, a bill mandating that developers of large AI models must test and certify their creations to ensure they won't cause significant harm. This legislation has received endorsements from Geoffrey Hinton and Yoshua Bengio, two of the most prominent and frequently cited researchers in the AI field.
Vox has the story on SB 1047
Critics worry this heightened liability could discourage companies like Meta (who just released Llama 3.1) from releasing open-weight models in the future. That would be a particular blow to startups, many of which use open-weight models because they cannot afford to train their own models from scratch.
SB 1047 wouldn’t apply to the Llama models Meta released - not even the model with 405 billion parameters. Meta says that training this model required 3.8x10^25 floating point operations (FLOPs). That’s less than half the 10^26 FLOP threshold that triggers potential liability under SB 1047.
But given the exponential growth of large language models in recent years, it’s easy to imagine Meta’s next generation of language models exceeding the 10^26 FLOP limit. If SB 1047 is the law in California at that point, Meta could face a new set of legal requirements.
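To put those numbers side by side, here’s a back-of-the-envelope sketch. The 3.8x10^25 figure is Meta’s reported training compute quoted above, the 10^26 threshold is the one named in the bill, and the “3x larger next run” is purely a hypothetical for illustration, not anything Meta has announced:

```python
# Back-of-the-envelope comparison of Llama 3.1 405B's reported training
# compute against the SB 1047 threshold (both figures taken from the text above).
llama_405b_flops = 3.8e25   # Meta's reported training compute for the 405B model
sb1047_threshold = 1e26     # compute threshold that triggers SB 1047 obligations

ratio = llama_405b_flops / sb1047_threshold
print(f"Llama 3.1 405B used {ratio:.0%} of the threshold")      # -> 38%
print(f"Headroom before the bill kicks in: {1 / ratio:.1f}x")   # -> ~2.6x

# Hypothetical: if a next-generation run scales compute by roughly 3x,
# a single training run could already cross the line.
next_gen_flops = llama_405b_flops * 3
status = "over" if next_gen_flops > sb1047_threshold else "under"
print(f"A ~3x larger run: {next_gen_flops:.1e} FLOPs ({status} the threshold)")
```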
With smaller models improving daily, even they could soon face the same scrutiny. While I’m skeptical that the new California bill will be a cure-all, it might at least inject some much-needed accountability into the mix - and that’s not necessarily a bad thing.
See you next week!