Simon Willison’s Weblog

943 items tagged “ai”

2023

bloomz.cpp (via) Nouamane Tazi adapted the llama.cpp project to run against the BLOOM family of language models, which were released in July 2022 and trained in France on 46 natural languages and 13 programming languages using the Jean Zay public supercomputer, provided by the French government and powered mostly by nuclear energy.

It’s under the RAIL license, which allows (limited) commercial use, unlike LLaMA.

Nouamane reports getting 16 tokens/second from BLOOMZ-7B1 running on an M1 Pro laptop.

# 16th March 2023, 12:24 am / llama, open-source, ai, generative-ai, edge-llms, llms, bloom

"AI" has for recent memory been a marketing term anyway. Deep learning and variations have had a good run at being what people mean when they refer to AI, probably overweighting towards big convolution based computer vision models.

Now, "AI" in people's minds means generative models.

That's it; it doesn't mean generative models are replacing CNNs, just like CNNs don't replace SVMs or regression or whatever. It's just that pop culture has fallen in love with something else.

version_five

# 15th March 2023, 9:05 pm / ai, generative-ai, llms

We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms. Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell

# 15th March 2023, 3:30 pm / ai, ethics, generative-ai, llms

GPT-4 Developer Livestream. 25 minutes of live demos from OpenAI co-founder Greg Brockman at the GPT-4 launch. These demos are all fascinating, including code writing and multimodal vision inputs. The one that really struck me is when Greg pasted in a copy of the tax code and asked GPT-4 to answer some sophisticated tax questions, involving step-by-step calculations that cited parts of the tax code it was working with.

# 15th March 2023, 12:20 am / openai, gpt-3, ai, generative-ai, gpt-4, llms

GPT-4 Technical Report (PDF). 98 pages of much more detailed information about GPT-4. The appendices are particularly interesting, including examples of advanced prompt engineering as well as examples of harmful outputs before and after tuning attempts to suppress them.

# 14th March 2023, 9:39 pm / openai, gpt-3, ai, generative-ai, gpt-4, llms

We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. [...] We’ve spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

OpenAI

# 14th March 2023, 5:02 pm / openai, gpt-3, ai, generative-ai, gpt-4, chatgpt, llms

Int-4 LLaMa is not enough—Int-3 and beyond (via) The Nolano team are experimenting with reducing the size of the LLaMA models even further than the 4-bit quantization popularized by llama.cpp.

# 13th March 2023, 11:55 pm / llama, ai, generative-ai, edge-llms, llms

Stanford Alpaca, and the acceleration of on-device large language model development

On Saturday 11th March I wrote about how Large language models are having their Stable Diffusion moment. Today is Monday. Let’s look at what’s happened in the past three days.

[... 2,055 words]

We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. Alpaca behaves similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce (<$600).

Alpaca: A Strong Open-Source Instruction-Following Model

# 13th March 2023, 6:18 pm / llama, stanford, ai, generative-ai, llms, fine-tuning

I've successfully run the LLaMA 7B model on my 4GB RAM Raspberry Pi 4. It's super slow, about 10 sec/token, but it looks like we can run powerful cognitive pipelines on cheap hardware.

Artem Andreenko

# 12th March 2023, 6:22 pm / llama, raspberrypi, ai, generative-ai, llms

ChatGPT’s API is So Good and Cheap, It Makes Most Text Generating AI Obsolete (via) Max Woolf on the quite frankly weird economics of the ChatGPT API: it’s 1/10th the price of GPT-3 Da Vinci and appears to be equally (if not more) capable. “But it is very hard to economically justify not using ChatGPT as a starting point for a business need and migrating to a more bespoke infrastructure later as needed, and that’s what OpenAI is counting on. [...] I don’t envy startups whose primary business is text generation right now.”

# 11th March 2023, 11:05 pm / gpt-3, generative-ai, openai, chatgpt, ai, max-woolf, llms

Large language models are having their Stable Diffusion moment

The open release of the Stable Diffusion image generation model back in August 2022 was a key moment. I wrote how Stable Diffusion is a really big deal at the time.

[... 1,815 words]

Running LLaMA 7B on a 64GB M2 MacBook Pro with llama.cpp. I got Facebook’s LLaMA 7B to run on my MacBook Pro using llama.cpp (a “port of Facebook’s LLaMA model in C/C++”) by Georgi Gerganov. It works! I’ve been hoping to run a GPT-3 class language model on my own hardware for ages, and now it’s possible to do exactly that. The model itself ends up being just 4GB after applying Georgi’s script to “quantize the model to 4-bits”.
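
As a rough illustration of why 4-bit quantization shrinks a model so dramatically, here's a simplified Python sketch of the general idea (my own block size and symmetric scaling, not the exact scheme llama.cpp implements):

```python
import numpy as np

def quantize_4bit(weights, block_size=32):
    # Store one float scale per block and map each weight into the
    # signed 4-bit integer range -7..7.
    blocks = weights.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(blocks / scales), -7, 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q, scales):
    # Recover approximate float weights. Four bits per weight plus a
    # small per-block scale is how a ~13GB float16 model ends up
    # closer to 4GB on disk.
    return (q.astype(np.float32) * scales).reshape(-1)

weights = np.random.randn(1024).astype(np.float32)
q, scales = quantize_4bit(weights)
print(np.abs(weights - dequantize_4bit(q, scales)).max())  # quantization error
```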

# 11th March 2023, 4:19 am / facebook, gpt-3, generative-ai, ai, llama, edge-llms, llms

What could I do with a universal function — a tool for turning just about any X into just about any Y with plain language instructions?

Robin Sloan

# 10th March 2023, 6:02 pm / robin-sloan, gpt-3, ai, generative-ai, llms

ChatGPT couldn’t access the internet, even though it really looked like it could

A really common misconception about ChatGPT is that it can access URLs. I’ve seen many different examples of people pasting in a URL and asking for a summary, or asking it to make use of the content on that page in some way.

[... 1,745 words]

apple-notes-to-sqlite (via) With the help of ChatGPT I finally figured out just enough AppleScript to automate the export of my notes to a SQLite database. AppleScript is a notoriously read-only language, which it turns out makes it a killer app for LLM-assisted coding.
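
For anyone curious what that pattern looks like, here's a minimal Python sketch of the general approach (not the actual apple-notes-to-sqlite implementation; the AppleScript one-liner and single-column schema are my own simplifications):

```python
import sqlite3
import subprocess

# Simplified AppleScript: ask Notes.app for the name of every note.
# osascript prints the resulting list comma-separated, so note titles
# containing commas would need a smarter delimiter in practice.
script = 'tell application "Notes" to get name of every note'
output = subprocess.check_output(["osascript", "-e", script], text=True)
titles = [title.strip() for title in output.split(",")]

db = sqlite3.connect("notes.db")
db.execute("CREATE TABLE IF NOT EXISTS notes (title TEXT)")
db.executemany("INSERT INTO notes (title) VALUES (?)", [(t,) for t in titles])
db.commit()
```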

# 9th March 2023, 6:04 am / apple, sqlite, generative-ai, projects, chatgpt, ai, dogsheep, applescript

Weeknotes: NICAR, and an appearance on KQED Forum

I spent most of this week at NICAR 2023, the data journalism conference hosted this year in Nashville, Tennessee.

[... 1,941 words]

Online gradient descent written in SQL (via) Max Halford trains an online gradient descent model against two years of AAPL stock data using just a single advanced SQL query. He built this against DuckDB—I tried to replicate his query in SQLite and it almost worked, but it gave me a “recursive reference in a subquery” error that I was unable to resolve.
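
For anyone unfamiliar with the technique itself, online gradient descent updates the model one observation at a time instead of fitting against the whole dataset at once. A few lines of Python show the idea (my own sketch of the general algorithm, not a translation of Max's query):

```python
def online_gradient_descent(stream, lr=0.1):
    # Fit y = w * x + b one (x, y) pair at a time, stepping along the
    # gradient of the squared error for that single observation.
    w, b = 0.0, 0.0
    for x, y in stream:
        error = (w * x + b) - y
        w -= lr * error * x
        b -= lr * error
    return w, b

# Stream points drawn from y = 2x + 1: the estimates drift towards
# w ≈ 2 and b ≈ 1 as more observations arrive.
xs = [i / 1000 for i in range(1000)] * 10
print(online_gradient_descent([(x, 2 * x + 1) for x in xs]))
```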

# 7th March 2023, 6:56 pm / machine-learning, sql, sqlite, duckdb, ai

Since November, OpenAI has already updated ChatGPT several times. The researchers are using a technique called adversarial training to stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against each other: one chatbot plays the adversary and attacks another chatbot by generating text to force it to buck its usual constraints and produce unwanted responses. Successful attacks are added to ChatGPT’s training data in the hope that it learns to ignore them.

The inside story of how ChatGPT was built

# 5th March 2023, 10:04 pm / openai, chatgpt, ai, generative-ai, llms

How to Wrap Our Heads Around These New Shockingly Fluent Chatbots. I was a guest on KQED Forum this morning, a live radio documentary and call-in show hosted by Alexis Madrigal. Ted Chiang and Claire Leibowicz were the other guests: we talked about ChatGPT and the new generation of AI-powered tools.

# 3rd March 2023, 4:59 am / gpt-3, radio, talks, chatgpt, generative-ai, ai, llms, ted-chiang

OpenAI: Introducing ChatGPT and Whisper APIs. The ChatGPT API is a new model called “gpt-3.5-turbo” and is priced at 1/10th of the price of text-davinci-003, previously the most powerful GPT-3 model. Whisper (speech to text transcription) is now available via an API as well, priced at 36 cents per hour of audio.
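
The new endpoint takes a list of chat-style messages rather than a single prompt string. A minimal sketch using the openai Python package as it worked at the time (with a placeholder API key):

```python
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in French."},
    ],
)
print(response["choices"][0]["message"]["content"])
```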

# 1st March 2023, 7:36 pm / gpt-3, generative-ai, openai, whisper, chatgpt, ai, llms

Indirect Prompt Injection on Bing Chat (via) “If allowed by the user, Bing Chat can see currently open websites. We show that an attacker can plant an injection in a website the user is visiting, which silently turns Bing Chat into a Social Engineer who seeks out and exfiltrates personal information.” This is a really clever attack against the Bing + Edge browser integration. Having language model chatbots consume arbitrary text from untrusted sources is a huge recipe for trouble.

# 1st March 2023, 5:29 am / prompt-engineering, bing, prompt-injection, security, generative-ai, ai, llms

I think now of all the kids coming up who are learning to write alongside ChatGPT, just as I learned to write with spell-check. ChatGPT isn’t writing for them; it’s producing copy. For plenty of people, having a robot help them produce serviceable copy will be exactly enough to allow them to get by in the world. But for some, it will lower a barrier. It will be the beginning of their writing career, because they will learn that even though plenty of writing begins with shitty, soulless copy, the rest of writing happens in edits, in reworking the draft, in all the stuff beyond the initial slog of just getting words down onto a page.

Ryan Bradley

# 27th February 2023, 6:10 pm / chatgpt, writing, ai, generative-ai, llms

New AI game: role playing the Titanic. Fantastic Bing prompt from Ethan Mollick: “I am on a really nice White Star cruise from Southampton, and it is 14th April 1912. What should I do tonight?”—Bing takes this very seriously and tries to help out! Works for all sorts of other historic events as well.

# 26th February 2023, 3:53 am / bing, ai, generative-ai, llms, ethan-mollick

Tech’s hottest new job: AI whisperer. No coding required. (via) I’m quoted in this Washington Post article about prompt engineering by Drew Harwell. “There are people who belittle prompt engineers, saying, ‘Oh lord, you can get paid for typing things into a box.’ But these things lie to you. They mislead you. They pull you down false paths to waste time on things that don’t work. You’re casting spells—and, like in fictional magic, nobody understands how the spells work and, if you mispronounce them, demons come to eat you.”

# 25th February 2023, 2:14 pm / washington-post, prompt-engineering, ai, generative-ai, llms

Thoughts and impressions of AI-assisted search from Bing

It’s been a wild couple of weeks.

[... 1,763 words]

Introducing LLaMA: A foundational, 65-billion-parameter large language model (via) From the paper: “For instance, LLaMA-13B outperforms GPT-3 on most benchmarks, despite being 10× smaller. We believe that this model will help democratize the access and study of LLMs, since it can be run on a single GPU.”

# 24th February 2023, 5:34 pm / facebook, gpt-3, ai, generative-ai, llama, llms

Hallucinations = creativity. It [Bing] tries to produce the highest probability continuation of the string using all the data at its disposal. Very often it is correct. Sometimes people have never produced continuations like this. You can clamp down on hallucinations, but then it is super-boring: it answers “I don’t know” all the time or only reads out what is in the search results (which are also sometimes incorrect). What is missing is the tone of voice: it shouldn’t sound so confident in those situations.

Mikhail Parakhin

# 24th February 2023, 3:37 pm / bing, ai, generative-ai, llms

ControlNet (via) A spectacular step forward in image generation—using “conditional control” to steer models like Stable Diffusion. The README here is full of examples of what this enables: extremely fine-grained control of generated images based on a sketch or an input image—including tricks like using Canny edge detection (an algorithm from 1986) to convert any image into an outline which can then be used as input to the model.
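
The Canny step is easy to try on its own. Here's a minimal sketch of just the edge-extraction half using OpenCV (the thresholds are arbitrary; feeding the resulting outline into a ControlNet pipeline is a separate step):

```python
import cv2

# Extract a black-and-white edge map that could serve as the
# conditioning outline for a ControlNet-guided generation.
image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, threshold1=100, threshold2=200)
cv2.imwrite("outline.png", edges)
```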

# 22nd February 2023, 5:45 pm / stable-diffusion, generative-ai, ai, text-to-image

FlexGen (via) This looks like a very big deal. FlexGen is a paper and accompanying code that massively reduces the resources needed to run some of the current top performing open source GPT-style large language models. People on Hacker News report being able to use it to run models like opt-30b on their own hardware, and it looks like it opens up the possibility of running even larger models on hardware available outside of dedicated research labs.

# 21st February 2023, 6:41 pm / gpt-3, ai, generative-ai, llms