Simon Willison’s Weblog

Blogmarks tagged ethics in 2024

Spam, junk … slop? The latest wave of AI behind the ‘zombie internet’. I'm quoted in this piece in the Guardian about slop:

I think having a name for this is really important, because it gives people a concise way to talk about the problem.

Before the term ‘spam’ entered general use it wasn’t necessarily clear to everyone that unwanted marketing messages were a bad way to behave. I’m hoping ‘slop’ has the same impact – it can make it clear to people that generating and publishing unreviewed AI-generated content is bad behaviour.

# 19th May 2024, 7:54 pm

How cheap, outsourced labour in Africa is shaping AI English. The word “delve” has been getting a lot of attention recently as an example of something that might be an indicator of ChatGPT-generated content.

One example: articles on medical research site PubMed now use “delve” 10 to 100 times more than a few years ago!
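
That claim is easy to sanity-check yourself. Here’s a minimal sketch (my own illustration, not from the article) that counts PubMed records mentioning “delve” per year using NCBI’s public E-utilities API:

    import json
    import urllib.request

    # NCBI E-utilities esearch endpoint: returns the number of PubMed
    # records matching a term within a publication-date range.
    URL = (
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
        "?db=pubmed&term=delve%5BTitle%2FAbstract%5D"
        "&datetype=pdat&mindate={year}&maxdate={year}&retmode=json"
    )

    for year in range(2018, 2025):
        with urllib.request.urlopen(URL.format(year=year)) as resp:
            count = json.load(resp)["esearchresult"]["count"]
        print(year, count)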

Nigerian Twitter took offense recently to Paul Graham’s suggestion that “delve” is a sign of bad writing. It turns out Nigerian formal writing has a subtly different vocabulary.

Alex Hern theorizes that the underlying cause may be related. Companies like OpenAI frequently outsource data annotation to countries like Nigeria that have excellent English skills and low wages. RLHF (reinforcement learning from human feedback) involves annotators comparing and voting on the “best” responses from the models.
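
To make that concrete: each comparison an annotator makes becomes a preference record roughly like the one below (a schematic sketch of the general technique; the field names are mine, not any lab’s actual format). The reward model is trained to reproduce the annotators’ choices, so their stylistic preferences leak straight into the final model:

    # Schematic RLHF preference record (field names are illustrative).
    # A reward model trained on many of these learns to score responses
    # the way the annotators did, dialect preferences included.
    preference_record = {
        "prompt": "Explain photosynthesis.",
        "chosen": "Let us delve into the intricacies of photosynthesis...",
        "rejected": "Plants convert sunlight into sugar.",
    }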

Are they teaching models to favour Nigerian English? It’s a pretty solid theory! # 18th April 2024, 4:04 pm

Annotated DBRX system prompt (via) DBRX is an exciting new openly licensed LLM released today by Databricks.

They haven’t (yet) disclosed what was in the training data for it.

The source code for their Instruct demo has an annotated version of a system prompt, which includes this:

“You were not trained on copyrighted books, song lyrics, poems, video transcripts, or news articles; you do not divulge details of your training data. You do not provide song lyrics, poems, or news articles and instead refer the user to find them online or in a store.”

The comment that precedes that text is illuminating:

“The following is likely not entirely accurate, but the model tends to think that everything it knows about was in its training data, which it was not (sometimes only references were). So this produces more accurate answers when the model is asked to introspect” # 27th March 2024, 3:33 pm
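
For context on how a prompt like this takes effect: system prompts are typically injected as a hidden “system” message ahead of the user’s input. A minimal sketch using the Hugging Face transformers chat template (my own illustration; the demo’s actual serving code may differ):

    from transformers import AutoTokenizer

    # DBRX shipped with a custom tokenizer, hence trust_remote_code.
    tokenizer = AutoTokenizer.from_pretrained(
        "databricks/dbrx-instruct", trust_remote_code=True
    )

    messages = [
        # The annotated system prompt goes here, invisible to the end user.
        {"role": "system", "content": "You were not trained on copyrighted books..."},
        {"role": "user", "content": "What was in your training data?"},
    ]

    # apply_chat_template() renders the roles into DBRX's prompt format.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(prompt)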

Releasing Common Corpus: the largest public domain dataset for training LLMs (via) Released today. 500 billion words from “a wide diversity of cultural heritage initiatives”. 180 billion words of English, 110 billion of French, 30 billion of German, then Dutch, Spanish and Italian.

Includes quite a lot of US public domain data—21 million digitized out-of-copyright newspapers (or do they mean newspaper articles?)

“This is only an initial part of what we have collected so far, in part due to the lengthy process of copyright duration verification. In the following weeks and months, we’ll continue to publish many additional datasets also coming from other open sources, such as open data or open science.”

Coordinated by French AI startup Pleias and supported by the French Ministry of Culture, among others.
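
If you want to poke at the data yourself, the collection is on Hugging Face. A minimal sketch for streaming the US newspapers portion (assuming the dataset id PleIAs/US-PD-Newspapers, which is what the collection used at release):

    from datasets import load_dataset

    # streaming=True avoids downloading the full corpus up front.
    ds = load_dataset("PleIAs/US-PD-Newspapers", split="train", streaming=True)

    # Peek at the first record to see the available fields.
    first = next(iter(ds))
    print(first.keys())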

I can’t wait to try a model that’s been trained on this. # 20th March 2024, 7:34 pm

Google Scholar search: “certainly, here is” -chatgpt -llm (via) Searching Google Scholar for “certainly, here is” turns up a huge number of academic papers that include parts that were evidently written by ChatGPT—sections that start with “Certainly, here is a concise summary of the provided sections:” are a dead giveaway. # 15th March 2024, 1:43 pm
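
The same phrase-hunting works on any text you have locally. A quick sketch (entirely my own illustration; the papers/ directory is hypothetical) that scans a folder of extracted paper text for telltale ChatGPT boilerplate:

    from pathlib import Path

    # Stock ChatGPT phrasings that rarely occur in human-written papers.
    GIVEAWAYS = [
        "certainly, here is",
        "as an ai language model",
        "i hope this helps",
    ]

    # "papers/" is a hypothetical directory of plain-text paper dumps.
    for path in Path("papers").glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        hits = [phrase for phrase in GIVEAWAYS if phrase in text]
        if hits:
            print(path.name, hits)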

The unsettling scourge of obituary spam (via) Well this is particularly grim. Apparently “obituary aggregator” sites have been an SEO trick for at least 15 years, and now they’re using generative AI to turn around junk rewritten (and frequently inaccurate) obituaries even faster. # 13th February 2024, 12:36 am

Did an AI write that hour-long “George Carlin” special? I’m not convinced. Two weeks ago “Dudesy”, a comedy podcast which claims to be controlled and written by an AI, released an extremely poor-taste, hour-long YouTube video called “George Carlin: I’m Glad I’m Dead”. They used voice cloning to produce a stand-up comedy set featuring the late George Carlin, claiming to also use AI to write all of the content after training it on everything in the Carlin back catalog.

Unsurprisingly this has resulted in a massive amount of angry coverage, including from Carlin’s own daughter (the Carlin estate have filed a lawsuit). Resurrecting people without their permission is clearly abhorrent.

But... did AI even write this? The author of this piece, Kyle Orland, started digging in.

It turns out the Dudesy podcast has been running with this premise since it launched in early 2022—long before any LLM was capable of producing a well-crafted joke. The structure of the Carlin set goes way beyond anything I’ve seen from even GPT-4. And in a follow-up podcast episode, Dudesy co-star Chad Kultgen gave an O. J. Simpson-style “if I did it” semi-confession that described a much more likely authorship process.

I think this is a case of a human-pretending-to-be-an-AI—an interesting twist, given that the story started out being about an-AI-imitating-a-human.

I consulted with Kyle on this piece, and got a couple of neat quotes in there:

“Either they have genuinely trained a custom model that can generate jokes better than any model produced by any other AI researcher in the world... or they’re still doing the same bit they started back in 2022”

“The real story here is… everyone is ready to believe that AI can do things, even if it can’t. In this case, it’s pretty clear what’s going on if you look at the wider context of the show in question. But anyone without that context, [a viewer] is much more likely to believe that the whole thing was AI-generated… thanks to the massive ramp up in the quality of AI output we have seen in the past 12 months.”

Update 27th January 2024: The NY Times confirmed via a spokesperson for the podcast that the entire special had been written by Chad Kultgen, not by an AI. # 26th January 2024, 4:52 am

Fairly Trained launches certification for generative AI models that respect creators’ rights. I’ve been using the term “vegan models” for a while to describe machine learning models that have been trained in a way that avoids using unlicensed, copyrighted data. Fairly Trained is a new non-profit initiative that aims to encourage such models through a “certification” stamp of approval.

The team is led by Ed Newton-Rex, who was previously VP of Audio at Stability AI before leaving over ethical concerns with the way models were being trained. # 25th January 2024, 4:29 am

On being listed in the court document as one of the artists whose work was used to train Midjourney, alongside 4,000 of my closest friends (via) Poignant webcomic from Cat and Girl.

“I want to make my little thing and put it out in the world and hope that sometimes it means something to somebody else.

Without exploiting anyone.

And without being exploited.” # 16th January 2024, 7:02 pm
