Simon Willison’s Weblog

Items tagged ai in 2023

I’ve been thinking about generative AI tools as “bicycles for the mind” (to borrow an old Steve Jobs line), but I think “electric bicycles for the mind” might be more appropriate

They can accelerate your natural abilities; you have to learn how to use them; they can give you a significant boost that some people might feel is a bit of a cheat; and they’re also quite dangerous if you’re not careful with them!

Me # 13th February 2023, 6:52 pm

ChatGPT Is a Blurry JPEG of the Web. Science fiction author Ted Chiang offers a brilliant analogy for ChatGPT in this New Yorker article: it’s a highly lossy compression algorithm for a vast amount of information, which works like a JPEG and uses grammatically correct interpolation to fill back in the missing gaps. “ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a ‘blur’ tool for paragraphs instead of photos, and are having a blast playing with it.” # 9th February 2023, 9:28 pm

The 21st century is being delayed: We’re stuck with corporations building these incredible artifacts and then staring at them and realizing the questions they encode are too vast and unwieldy to be worth the risk of tackling. The future is here – and it’s locked up in a datacenter, experimented with by small groups of people who are aware of their own power and fear to exercise it. What strange times we are in.

Jack Clark (on MusicLM) # 5th February 2023, 5:51 pm

The most dramatic optimization to nanoGPT so far (~25% speedup) is to simply increase vocab size from 50257 to 50304 (nearest multiple of 64). This calculates added useless dimensions but goes down a different kernel path with much higher occupancy. Careful with your Powers of 2.

Andrej Karpathy # 4th February 2023, 12:08 am
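The fix generalizes to any model whose embedding and output-projection shapes are driven by vocabulary size: round up to the next multiple of 64 so the matrix dimensions hit the fast CUDA kernel paths. A minimal sketch of the rounding arithmetic (nanoGPT just hard-codes 50304 in its config; this helper is my own illustration):

```python
def padded_vocab_size(vocab_size: int, multiple: int = 64) -> int:
    # Round up to the nearest multiple: 50257 -> 50304. The extra rows in
    # the embedding/output matrices are never used by real tokens, but the
    # rounder shapes let the GPU pick a much faster matmul kernel.
    return ((vocab_size + multiple - 1) // multiple) * multiple

assert padded_vocab_size(50257) == 50304
```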

Exploring MusicCaps, the evaluation data released to accompany Google’s MusicLM text-to-music model

Google Research just released MusicLM: Generating Music From Text. It’s a new generative AI model that takes a descriptive prompt and produces a “high-fidelity” music track. Here’s the paper (and a more readable version using arXiv Vanity).

[... 1323 words]

It is very important to bear in mind that this is what large language models really do. Suppose we give an LLM the prompt “The first person to walk on the Moon was ”, and suppose it responds with “Neil Armstrong”. What are we really asking here? In an important sense, we are not really asking who was the first person to walk on the Moon. What we are really asking the model is the following question: Given the statistical distribution of words in the vast public corpus of (English) text, what words are most likely to follow the sequence “The first person to walk on the Moon was ”? A good reply to this question is “Neil Armstrong”.

Murray Shanahan # 23rd January 2023, 12:30 pm
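You can watch a model answer exactly that question by inspecting its output distribution directly. A minimal sketch using GPT-2 via the Hugging Face transformers library, a small stand-in model chosen purely for illustration (Shanahan doesn’t name one):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer(
    "The first person to walk on the Moon was", return_tensors="pt"
).input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape (1, sequence_length, vocab_size)

# The statistical question: which tokens are most likely to come next?
probs = torch.softmax(logits[0, -1], dim=-1)
values, indices = probs.topk(5)
for p, token_id in zip(values, indices):
    print(f"{tokenizer.decode(token_id)!r}: {p:.4f}")
```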

Generate a comprehensive and informative answer (but no more than 80 words) for a given question solely based on the provided web Search Results (URL and Summary). You must only use information from the provided search results. Use an unbiased and journalistic tone. Use this current date and time: Wednesday, December 07, 2022 22:50:56 UTC. Combine search results together into a coherent answer. Do not repeat text. Cite search results using [${number}] notation. Only cite the most relevant results that answer the question accurately. If different results refer to different entities with the same name, write separate answers for each entity.

Perplexity AI, via a prompt injection leak attack # 22nd January 2023, 7:47 pm
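The ${number} placeholder and the hard-coded timestamp suggest a template that gets filled in per query. A purely hypothetical sketch of that assembly step; the field names and layout here are guesses for illustration, not part of the leak:

```python
def build_prompt(instructions: str, question: str, results: list[dict]) -> str:
    # Each result is assumed to look like {"url": ..., "summary": ...};
    # the [n] numbering lines up with the [${number}] citation notation.
    sources = "\n\n".join(
        f"[{i}] {result['url']}\n{result['summary']}"
        for i, result in enumerate(results, start=1)
    )
    return f"{instructions}\n\nSearch Results:\n\n{sources}\n\nQuestion: {question}"
```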

OpenAI Cookbook: Techniques to improve reliability (via) “Let’s think step by step” is a notoriously successful way of getting large language models to solve problems, but it turns out that’s just the tip of the iceberg: this article includes a wealth of additional examples and techniques that can be used to trick GPT-3 into being a whole lot more effective. # 21st January 2023, 5:15 am
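The headline trick is as simple as it sounds: append the magic phrase to your prompt. A sketch assuming the openai Python package as it worked at the time, using the since-deprecated Completion API and text-davinci-003:

```python
import openai  # the pre-1.0 package; expects OPENAI_API_KEY in the environment

question = (
    "A juggler has 16 balls. Half are golf balls and half of the "
    "golf balls are blue. How many blue golf balls are there?"
)

response = openai.Completion.create(
    model="text-davinci-003",
    # Appending this phrase nudges the model into spelling out intermediate
    # steps, which measurably improves accuracy on multi-step problems.
    prompt=question + "\n\nLet's think step by step.",
    max_tokens=256,
    temperature=0,
)
print(response["choices"][0]["text"])
```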

Weeknotes: AI hacking and a SpatiaLite tutorial

Short weeknotes this time because the key things I worked on have already been covered here:

[... 477 words]

How to implement Q&A against your documentation with GPT3, embeddings and Datasette

If you’ve spent any time with GPT-3 or ChatGPT, you’ve likely thought about how useful it would be if you could point them at a specific, current collection of text or documentation and have them use it as part of their input for answering questions.

[... 3491 words]
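The pattern the post describes boils down to: embed the incoming question, find the most similar documents by cosine similarity, and paste them into the prompt as context. A compressed sketch of that shape using OpenAI’s era-appropriate APIs; the full post builds this properly with Datasette:

```python
import numpy as np
import openai  # the pre-1.0 openai package, as used at the time

def embed(text: str) -> np.ndarray:
    # text-embedding-ada-002 was OpenAI's embedding model of choice then
    response = openai.Embedding.create(
        model="text-embedding-ada-002", input=text
    )
    return np.array(response["data"][0]["embedding"])

def answer(question: str, documents: list[str]) -> str:
    q = embed(question)

    def similarity(document: str) -> float:
        d = embed(document)  # in practice, embed documents once, up front
        return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))

    # Paste the three most relevant documents into the prompt as context
    context = "\n\n".join(sorted(documents, key=similarity, reverse=True)[:3])
    prompt = (
        f"Answer the question using only this context:\n\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    completion = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=256, temperature=0
    )
    return completion["choices"][0]["text"].strip()
```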

You will not use the Software for any act that may undermine China’s national security and national unity, harm the public interest of society, or infringe upon the rights and interests of human beings.

The GLM-130B License # 10th January 2023, 10:45 pm

Petals (via) The challenge with large language models in the same scale ballpark as GPT-3 is that they’re large—really large. Far too big to run on a single machine at home. Petals is a fascinating attempt to address that problem: it works a little bit like BitTorrent, in that each user of Petals runs a subset of the overall language model on their machine and participates in a larger network to run inference across potentially hundreds of distributed GPUs. I tried it just now in Google Colab and it worked exactly as advertised, after downloading an 8GB subset of the 352GB BLOOM-176B model. # 2nd January 2023, 11:29 pm
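Based on the Petals README at the time, the Colab experiment boils down to roughly this (the import path and model name may have changed since):

```python
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bigscience/bloom-petals"
tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
# The layers you hold run locally; activations for the rest of the
# 176B-parameter model are streamed through other peers' GPUs.
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```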

nanoGPT. “The simplest, fastest repository for training/finetuning medium-sized GPTs”—by Andrej Karpathy, in about 600 lines of Python. # 2nd January 2023, 11:27 pm