Simon Willison’s Weblog


30 posts tagged “law”

2025

AI Hallucination Cases (via) Damien Charlotin maintains this database of cases around the world where a legal decision has confirmed that hallucinated content from generative AI was presented by a lawyer.

That's an important distinction: this isn't just cases where AI may have been used, it's cases where a lawyer was caught in the act and (usually) disciplined for it.

It's been two years since the first widely publicized incident of this, which I wrote about at the time in Lawyer cites fake cases invented by ChatGPT, judge is not amused. At the time I naively assumed:

I have a suspicion that this particular story is going to spread far and wide, and in doing so will hopefully inoculate a lot of lawyers and other professionals against making similar mistakes.

Damien's database has 116 cases from 12 different countries: United States, Israel, United Kingdom, Canada, Australia, Brazil, Netherlands, Italy, Ireland, Spain, South Africa, Trinidad & Tobago.

20 of those cases happened just this month, May 2025!

I get the impression that researching legal precedent is one of the most time-consuming parts of the job. I guess it's not surprising that increasing numbers of lawyers are turning to LLMs for this, even in the face of this mountain of cautionary stories.

# 25th May 2025, 3:56 pm / ai-ethics, ethics, generative-ai, hallucinations, ai, llms, law

Maybe Meta’s Llama claims to be open source because of the EU AI act


I encountered a theory a while ago that one of the reasons Meta insist on using the term “open source” for their Llama models, despite the Llama license not actually conforming to the terms of the Open Source Definition, is that the EU’s AI act includes special rules for open source models without requiring OSI compliance.

[... 852 words]

I’ve disabled the pending geoblock of the UK because I now think the risks of the Online Safety Act to this site are low enough to change strategies to only geoblock if directly threatened by the regulator. [...]

It is not possible for a hobby site to comply with the Online Safety Act. The OSA is written to censor huge commercial sites with professional legal teams, and even understanding one's obligations under the regulations is an enormous project requiring expensive legal advice.

The law is 250 pages and the mandatory "guidance" from Ofcom is more than 3,000 pages of dense, cross-referenced UK-flavoured legalese. To find all the guidance you'll have to start here, click through to each of the 36 pages listed, and expand each page's collapsible sections that might have links to other pages and documents. (Though I can't be sure that leads to all their guidance, and note you'll have to check back regularly for planned updates.)

Peter Bhat Harkins, site administrator, lobste.rs

# 20th March 2025, 4:26 pm / politics, uk, moderation, law

I Went To SQL Injection Court (via) Thomas Ptacek talks about his ongoing involvement as an expert witness in an Illinois legal battle led by Matt Chapman over whether a SQL schema (e.g. for the CANVAS parking ticket database) should be accessible to Freedom of Information (FOIA) requests against the Illinois state government.

They eventually lost in the Illinois Supreme Court, but there's still hope in the shape of IL SB0226, a proposed bill that would amend the FOIA act to ensure "that the public body shall provide a sufficient description of the structures of all databases under the control of the public body to allow a requester to request the public body to perform specific database queries".
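To make concrete what is actually being fought over: a schema is structural metadata, not the records themselves. Here's a hypothetical Python/SQLite sketch (the file name and table definition are invented for illustration, not taken from the CANVAS system) that prints a database's structure without reading a single row of its data:

```python
# Hypothetical example: dump a database's schema without touching its contents.
# The file name and table definition below are invented for illustration only.
import sqlite3

conn = sqlite3.connect("parking_tickets.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS tickets ("
    "  id INTEGER PRIMARY KEY,"
    "  plate TEXT,"
    "  issued_at TEXT,"
    "  violation TEXT,"
    "  fine_cents INTEGER)"
)

# sqlite_master stores the CREATE statements - the structure, not the data.
for (ddl,) in conn.execute("SELECT sql FROM sqlite_master WHERE type = 'table'"):
    print(ddl)
```

A release along these lines would describe the tables and columns, which is exactly what a requester needs in order to ask the public body to run specific queries.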

Thomas posted this comment on Hacker News:

Permit me a PSA about local politics: engaging in national politics is bleak and dispiriting, like being a gnat bouncing off the glass plate window of a skyscraper. Local politics is, by contrast, extremely responsive. I've gotten things done --- including a law passed --- in my spare time and at practically no expense (drastically unlike national politics).

# 25th February 2025, 10:45 pm / thomas-ptacek, sql, politics, government, databases, data-journalism, law

There are contexts in which it is immoral to use generative AI. For example, if you are a judge responsible for grounding a decision in law, you cannot rest that on an approximation of previous cases unknown to you. You want an AI system that helps you retrieve specific, well-documented cases, not one that confabulates fictional cases. You need to ensure you procure the right kind of AI for a task, and the right kind is determined in part by the essentialness of human responsibility.

Joanna Bryson, Generative AI use and human agency

# 20th February 2025, 1:14 pm / llms, ai, ethics, generative-ai, ai-ethics, law

Baroness Kidron’s speech regarding UK AI legislation (via) Barnstormer of a speech by UK film director and member of the House of Lords Baroness Kidron. This is the Hansard transcript but you can also watch the video on parliamentlive.tv. She presents a strong argument against the UK's proposed copyright and AI reform legislation, which would provide a copyright exemption for AI training with a weak-toothed opt-out mechanism.

The Government are doing this not because the current law does not protect intellectual property rights, nor because they do not understand the devastation it will cause, but because they are hooked on the delusion that the UK's best interests and economic future align with those of Silicon Valley.

She throws in some cleverly selected numbers:

The Prime Minister cited an IMF report that claimed that, if fully realised, the gains from AI could be worth up to an average of £47 billion to the UK each year over a decade. He did not say that the very same report suggested that unemployment would increase by 5.5% over the same period. This is a big number—a lot of jobs and a very significant cost to the taxpayer. Nor does that £47 billion account for the transfer of funds from one sector to another. The creative industries contribute £126 billion per year to the economy. I do not understand the excitement about £47 billion when you are giving up £126 billion.

Mentions DeepSeek:

Before I sit down, I will quickly mention DeepSeek, a Chinese bot that is perhaps as good as any from the US—we will see—but which will certainly be a potential beneficiary of the proposed AI scraping exemption. Who cares that it does not recognise Taiwan or know what happened in Tiananmen Square? It was built for $5 million and wiped $1 trillion off the value of the US AI sector. The uncertainty that the Government claim is not an uncertainty about how copyright works; it is uncertainty about who will be the winners and losers in the race for AI.

And finishes with this superb closing line:

The spectre of AI does nothing for growth if it gives away what we own so that we can rent from it what it makes.

According to Ed Newton-Rex the speech was effective:

She managed to get the House of Lords to approve her amendments to the Data (Use and Access) Bill, which among other things requires overseas gen AI companies to respect UK copyright law if they sell their products in the UK. (As a reminder, it is illegal to train commercial gen AI models on ©️ work without a licence in the UK.)

What's astonishing is that her amendments passed despite @UKLabour reportedly being whipped to vote against them, and the Conservatives largely abstaining. Essentially, Labour voted against the amendments, and everyone else who voted voted to protect copyright holders.

(Is it true that in the UK it's currently "illegal to train commercial gen AI models on ©️ work"? From points 44, 45 and 46 of this Copyright and AI: Consultation document it seems to me that the official answer is "it's complicated".)

I'm trying to understand if this amendment could make existing products such as ChatGPT, Claude and Gemini illegal to sell in the UK. How about usage of open weight models?

# 29th January 2025, 5:25 pm / politics, ethics, generative-ai, training-data, ai, copyright, deepseek, ai-ethics, law

2024

As an independent writer and publisher, I am the legal team. I am the fact-checking department. I am the editorial staff. I am the one responsible for triple-checking every single statement I make in the type of original reporting that I know carries a serious risk of baseless but ruinously expensive litigation regularly used to silence journalists, critics, and whistleblowers. I am the one deciding if that risk is worth taking, or if I should just shut up and write about something less risky.

Molly White

# 26th October 2024, 10:07 pm / law, molly-white, blogging, journalism

But increasingly, I’m worried that attempts to crack down on the cryptocurrency industry — scummy though it may be — may result in overall weakening of financial privacy, and may hurt vulnerable people the most. As they say, “hard cases make bad law”.

Molly White

# 24th May 2024, 1:19 am / blockchain, privacy, molly-white, law

LLMs may offer immense value to society. But that does not warrant the violation of copyright law or its underpinning principles. We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process. There is compelling evidence that the UK benefits economically, politically and societally from upholding a globally respected copyright regime.

UK House of Lords report on Generative AI

# 2nd February 2024, 3:54 am / politics, ethics, generative-ai, ai, llms, ai-ethics, law

2023

Microsoft announces new Copilot Copyright Commitment for customers. Part of an interesting trend where some AI vendors are reassuring their paying customers by promising legal support in the face of future legal threats:

“As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

# 31st October 2023, 3:35 pm / ai, microsoft, law

And the notion that security updates, for every user in the world, would need the approval of the U.K. Home Office just to make sure the patches weren’t closing vulnerabilities that the government itself is exploiting — it boggles the mind. Even if the U.K. were the only country in the world to pass such a law, it would be madness, but what happens when other countries follow?

John Gruber

# 24th August 2023, 6:16 am / uklaw, cryptography, uk, john-gruber, law

An Iowa school district is using ChatGPT to decide which books to ban. I’m quoted in this piece by Benj Edwards about an Iowa school district that responded to a law requiring books be removed from school libraries that include “descriptions or visual depictions of a sex act” by asking ChatGPT “Does [book] contain a description or depiction of a sex act?”.

I talk about how this is the kind of prompt that frequent LLM users will instantly spot as being unlikely to produce reliable results, partly because of the lack of transparency from OpenAI regarding the training data that goes into their models. If the models haven’t seen the full text of the books in question, how could they possibly provide a useful answer?

# 16th August 2023, 10:33 pm / ethics, generative-ai, openai, chatgpt, ai, llms, arstechnica, benj-edwards, ai-ethics, law

Mandatory Certification Regarding Generative Artificial Intelligence (via) From the Judge Specific Requirements for Judge Brantley Starr in Austin, TX:

“All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being. [...]”

# 31st May 2023, 3:31 am / chatgpt, llms, ai, generative-ai, law

Lawyer cites fake cases invented by ChatGPT, judge is not amused


Legal Twitter is having tremendous fun right now reviewing the latest documents from the case Mata v. Avianca, Inc. (1:22-cv-01461). Here’s a neat summary:

[... 2,844 words]

Beyond these specific legal arguments, Stability AI may find it has a “vibes” problem. The legal criteria for fair use are subjective and give judges some latitude in how to interpret them. And one factor that likely influences the thinking of judges is whether a defendant seems like a “good actor.” Google is a widely respected technology company that tends to win its copyright lawsuits. Edgier companies like Napster tend not to.

Timothy B. Lee

# 3rd April 2023, 3:38 pm / generative-ai, ai, copyright, law

Stable Diffusion copyright lawsuits could be a legal earthquake for AI. Timothy B. Lee provides a thorough discussion of the copyright lawsuits currently targeting Stable Diffusion and GitHub Copilot, including subtle points about how the interpretation of “fair use” might be applied to the new field of generative AI.

# 3rd April 2023, 3:34 pm / stable-diffusion, generative-ai, github-copilot, ai, copyright, text-to-image, law

2013

Is it possible to run a successful company without being unethical or operating on the fringes of the law?

There is nothing inherently unethical about entrepreneurship. Find a problem people have. Figure out how much money solving it will save them (or help them make). Charge them less than that.

[... 108 words]

How should two equal startup founders formalize a cliff?

A company is a legal entity. The person who leaves the company is the person who resigns from that legal entity.

[... 36 words]

2009

Years ago, Alex Russell told me that Django ought to be collecting CLAs. I said "yeah, whatever" and ignored him. And thus have spent more than a year gathering CLAs to get DSF's paperwork in order. Sigh.

Jacob Kaplan-Moss

# 21st September 2009, 6:35 pm / alex-russell, jacob-kaplan-moss, clas, django, law

2008

Free licenses upheld by US “IP” court. Free software and CC licenses which dictate conditions that, when violated, turn you into a copyright infringer now have precedent in US law.

# 14th August 2008, 9:33 am / law, uslaw, creativecommons, freesoftware, open-source, licenses, copyright, lawrence-lessig

Draconian failure on error is not the answer to the problems of Postel's law. Draconian error handling creates an unstable equilibrium in Game Theory terms - it only lasts until one player breaks the rule. One non-Draconian XML5 implementation in a key client product and the Draconian XML ranks would break. Well-specified error recovery is the right way to implement the liberal part of Postel's law.

Henri Sivonen

# 20th March 2008, 2:43 pm / draconian, html5, postelslaw, xml, henri-sivonen, law
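Sivonen's point above is about parser behaviour: a draconian parser aborts on the first well-formedness error, while a spec with defined error recovery keeps going predictably. Here's a minimal sketch of that contrast using only Python's standard library (html.parser is not a full HTML5 parser, but it illustrates the lenient approach):

```python
# Hypothetical illustration: draconian vs. recovering parsers on the same input.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

broken = "<p>unclosed <b>tag</p>"  # mismatched tags

try:
    ET.fromstring(broken)          # draconian: any well-formedness error aborts
except ET.ParseError as err:
    print("XML parser gave up:", err)

class Collector(HTMLParser):       # lenient: keeps parsing past the mistake
    def handle_data(self, data):
        print("recovered text:", data)

Collector().feed(broken)
```

The XML parser raises and produces nothing; the lenient parser still recovers the text content, which is the behaviour HTML5 pins down with explicit error-handling rules.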

Principles and Legality. Eric Meyer notes that language about legality in Microsoft’s recent IE announcement suggests that Opera’s much criticised EU threat may have helped positively influence the result.

# 4th March 2008, 7:45 pm / opera, eric-meyer, ie8, standards, microsoft, law

2007

The logo is still evolving, say designers. The Olympics logo is designed to be “hackable”—which is actually a great idea, but lawyers advised against unveiling that concept at the same time as the abstract shapes.

# 11th June 2007, 10:22 am / olympicslogo, design, law

New Dutch accessibility law. Sounds extremely forward thinking, designed by people who really understand the field. Just one problem: the guidelines are only available in Dutch!

# 16th January 2007, 12:59 pm / dutch, ppk, accessibility, guidelines, law

2006

’National interest’ halts arms corruption inquiry. “It has been necessary to balance the need to maintain the rule of law against the wider public interest.”

# 15th December 2006, 2:09 pm / uk, politics, scandal, law

UK copyright law petition. New official petition system from MySociety.

# 14th November 2006, 11:38 am / law

2005

Naked Law (via) A blog about technology law, written by actual lawyers.

# 5th July 2005, 3:33 pm / law

2004

FOXSports.com’s ludicrous link policy. In brief: send our legal department a letter first.

# 17th August 2004, 5:44 pm / law