Simon Willison’s Weblog

Blogmarks

Sheepy-T—an LLM running on an iPhone. Kevin Kwok has a video on Twitter demonstrating Sheepy-T—his iPhone app which runs a full instruction-tuned large language model, based on EleutherAI’s GPT-J, entirely on an iPhone 14. I applied for the TestFlight beta and I have this running on my phone now: it works!

# 11th April 2023, 5:54 pm / iphone, local-llms, llms

How we’re building a browser when it’s supposed to be impossible (via) Andreas Kling: “The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.” The Ladybird project is such an inspiration, and really demonstrates the enormous value of the work put in by web standards spec authors over the last twenty years.

# 11th April 2023, 10:18 am / browsers, web-standards, andreas-kling, ladybird

The AI singularity is here. Can’t say I’m a fan of the headline, but the subhead “The time to figure out how to use generative AI and large language models in your code is now” is much more illustrative of the story. I’m referred to in this one as “One of the most outspoken advocates for LLM-enhanced development” which is a bit of a surprise!

# 10th April 2023, 7:17 pm / ai, llms

AI is flooding the workplace, and workers love it. The microwave kiln pottery project I helped Natalie with gets a mention in this story about people who are putting AI tools to use.

# 10th April 2023, 7:15 pm / ai, generative-ai, llms

Floor796 (via) “An ever-expanding animation scene showing the life of the 796th floor of the huge space station” by Russian artist 0x00, who built their own custom browser-based pixel animation tool with which they are constructing this project. Absolutely crammed with pop culture references and easter eggs. The “Changes” link at the top shows almost daily updates, with links to jump to the latest content.

# 10th April 2023, 4:59 pm / art, pixelart

On Endings: Why & How We Retired Elm at Culture Amp (via) Culture Amp made extensive use of Elm—an ML-like functional language that compiles to JavaScript—between 2016 and 2020 while building their company’s frontend. They eventually decided to move away from it, for reasons described at length in this post, primarily relating to its integration with React. This piece is worth reading mainly as a thoughtful approach to the engineering management challenge of deprecating a well-loved piece of technology from the recommended stack at a company.

# 10th April 2023, 2:11 am / functional-programming, javascript, kevin-yank, management, react

Database “sharding” came from UO? (via) Raph Koster coined the term “shard” back in 1996 in a design document proposing a way of scaling Ultima Online: “[...] we realized we would need to run multiple whole copies of Ultima Online for users to connect to, we needed to come up with a fiction for it. [...] the evil wizard Mondain had attempted to gain control over Sosaria by trapping its essence in a crystal. When the Stranger at the end of Ultima I defeated Mondain and shattered the crystal, the crystal shards each held a refracted copy of Sosaria.”

# 7th April 2023, 1:56 pm / scaling, sharding

Why ChatGPT and Bing Chat are so good at making things up. I helped review this deep dive by Benj Edwards for Ars Technica into the hallucination/confabulation problem with ChatGPT and other LLMs, which is attracting increasing attention thanks to stories like the recent defamation complaints against ChatGPT. This article explains why this is happening and talks to various experts about potential solutions.

# 7th April 2023, 3:33 am / ai, generative-ai, chatgpt, llms, benj-edwards

The different uses of Python type hints (via) Luke Plant describes five different categories for how Python optional types are being used today: IDE assistants, type checking, runtime behavior changes via introspection (e.g. Pydantic), code documentation, compiler instructions (à la mypyc)—and a bonus sixth, dependency injection.
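
As a quick sketch of the first few categories (my own example, not Luke’s), the same annotations here serve IDE autocomplete, a type checker such as mypy, and Pydantic’s runtime validation:

```python
from pydantic import BaseModel


def greet(name: str, times: int = 1) -> str:
    # IDE assistants and type checkers (mypy, pyright) both read these hints
    return ", ".join([f"Hello {name}"] * times)


class Event(BaseModel):
    # Pydantic inspects the same annotations at runtime to coerce and validate
    title: str
    attendees: int


event = Event(title="PyCon", attendees="250")  # "250" is coerced to int 250
print(greet(event.title, 2), event.attendees)
```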

# 7th April 2023, 2:17 am / luke-plant, python, pydantic

image-to-jpeg (via) I built a little JavaScript app that accepts an image, then displays that image as a JPEG with a slider to control the quality setting, plus a copy and paste textarea to copy out that image with a data-uri. I didn't actually write a single line of code for this: I got ChatGPT/GPT-4 to generate the entire thing with some prompts.
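
The underlying idea—re-encode the image at a chosen JPEG quality, then emit it as a data: URI—is simple enough. Here’s a minimal Python/Pillow analogue purely for illustration (the actual app is client-side JavaScript; the filename and quality value below are placeholders):

```python
import base64
import io

from PIL import Image

# Re-encode an image as JPEG at a chosen quality, then build a data: URI
img = Image.open("input.png").convert("RGB")
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=60)
data_uri = "data:image/jpeg;base64," + base64.b64encode(buf.getvalue()).decode("ascii")
print(len(data_uri), data_uri[:50])
```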

Here's the full transcript.

# 5th April 2023, 10:10 pm / projects, tools, ai, generative-ai, chatgpt, llms, ai-assisted-programming

Blinded by Analogies (via) Ethan Mollick discusses how many of the analogies we have for AI right now are hurting rather than helping our understanding, particularly with respect to LLMs.

# 5th April 2023, 5 am / ai, generative-ai, llms, ethan-mollick

Eight Things to Know about Large Language Models (via) This unpublished paper by Samuel R. Bowman is succinct, readable and dense with valuable information to help understand the field of modern LLMs.

# 5th April 2023, 3:36 am / ai, gpt-3, generative-ai, llms

From Deep Learning Foundations to Stable Diffusion. Brand new free online video course from Jeremy Howard: 30 hours of content, covering everything you need to know to implement the Stable Diffusion image generation algorithm from scratch. I previewed parts of this course back in December and it was fascinating: this field is moving so fast that some of the lectures covered papers that had been released just a few days before.

# 5th April 2023, 1:13 am / ai, fastai, stable-diffusion, generative-ai, jeremy-howard, text-to-image

Guess we could start calling this a ‘hallucitation’? Kate Crawford coins an excellent neologism for hallucinated citations in LLMs like ChatGPT.

# 4th April 2023, 10:21 pm / chatgpt, llms

trurl manipulates URLs. Brand new command-line tool from curl creator Daniel Stenberg: the tr stands for translate or transpose, and the tool provides various mechanisms for normalizing URLs, adding query strings, changing the path or hostname and other similar modifications. I’ve tried designing APIs for this kind of thing in the past—Datasette includes some clumsily named functions such as path_with_removed_args()—and it’s a deceptively deep set of problems.
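
As a hint of why it’s deceptively deep, here’s a minimal sketch of just the set-a-query-parameter case using Python’s standard library (illustrative only—this is not trurl or Datasette’s actual code):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit


def set_query_param(url: str, key: str, value: str) -> str:
    # Replace (or add) a single query string parameter, leaving the rest intact
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True) if k != key]
    params.append((key, value))
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))


print(set_query_param("https://example.com/search?q=llms&page=2", "page", "3"))
# https://example.com/search?q=llms&page=3
```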

# 4th April 2023, 10:08 pm / curl, urls, daniel-stenberg

ROOTS search tool (via) BLOOM is one of the most interesting completely openly licensed language models. The ROOTS corpus is the training data that was collected for it, and this tool lets you run searches directly against that corpus. I tried searching for my own name and got an interesting insight into what it knows about me.

# 3rd April 2023, 8:40 pm / ai, generative-ai, llms, bloom, training-data

Closed AI Models Make Bad Baselines (via) The NLP academic research community are facing a tough challenge: the state-of-the-art in large language models, GPT-4, is entirely closed which means papers that compare it to other models lack replicability and credibility. “We make the case that as far as research and scientific publications are concerned, the “closed” models (as defined below) cannot be meaningfully studied, and they should not become a “universal baseline”, the way BERT was for some time widely considered to be.”

Anna Rogers proposes a new rule for this kind of research: “That which is not open and reasonably reproducible cannot be considered a requisite baseline.”

# 3rd April 2023, 7:57 pm / nlp, ai, openai, generative-ai, gpt-4

Stable Diffusion copyright lawsuits could be a legal earthquake for AI. Timothy B. Lee provides a thorough discussion of the copyright lawsuits currently targeting Stable Diffusion and GitHub Copilot, including subtle points about how the interpretation of “fair use” might be applied to the new field of generative AI.

# 3rd April 2023, 3:34 pm / copyright, law, ai, stable-diffusion, generative-ai, github-copilot, text-to-image

Django 4.2 released. “This version has been designated as a long-term support (LTS) release, which means that security and data loss fixes will be applied for at least the next three years.” Some neat new async features, including improvements to async streaming responses.
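
A minimal sketch of the new async streaming support—StreamingHttpResponse fed by an async generator under ASGI (my own example, see the release notes for specifics):

```python
import asyncio

from django.http import StreamingHttpResponse


async def ticks():
    # Yield a line of output every second without blocking the event loop
    for i in range(5):
        yield f"tick {i}\n"
        await asyncio.sleep(1)


async def stream_view(request):
    # Django 4.2 can stream from an async iterator when served via ASGI
    return StreamingHttpResponse(ticks(), content_type="text/plain")
```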

# 3rd April 2023, 2:14 pm / async, django

AI photo sorter (via) Really interesting implementation of machine learning photo classification by Alexander Visheratin. This tool lets you select as many photos as you like from your own machine, then provides a web interface for classifying them into labels that you provide. It loads a 102MB quantized CLIP model and executes it in the browser using WebAssembly. Once classified, a “Generate script” button produces a copyable list of shell commands for moving your images into corresponding folders on your own machine. Your photos never get uploaded to a server—everything happens directly in your browser.
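
As a rough Python analogue of the same zero-shot classification idea (not the tool’s actual in-browser code), the Hugging Face transformers CLIP implementation can score an image against arbitrary text labels like this—the labels and filename are placeholders:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder labels and filename; swap in your own categories and photos
labels = ["a photo of a dog", "a photo of food", "a screenshot"]
image = Image.open("photo.jpg")

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2f}")
```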

# 2nd April 2023, 4:27 am / machine-learning, webassembly, openai, clip

textual-mandelbrot (via) I love this: run “pipx install textual-mandelbrot” and then “mandelexp” to get an interactive Mandelbrot fractal exploration interface right there in your terminal, built on top of Textual. The code for this is only 250 lines of Python and delightfully easy to follow.
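
The underlying computation is the classic escape-time algorithm; here’s a minimal sketch of that core loop (my own, not the project’s actual Textual code):

```python
def escape_time(c: complex, max_iter: int = 50) -> int:
    # Count iterations of z -> z*z + c before |z| exceeds 2 (i.e. escapes)
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter


# Crude ASCII rendering of the set in the terminal
for y in range(20):
    row = ""
    for x in range(60):
        c = complex(-2.0 + x * 0.05, -1.0 + y * 0.1)
        row += "#" if escape_time(c) == 50 else " "
    print(row)
```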

# 1st April 2023, 7:23 pm / mandelbrot, python, textual

How to use AI to do practical stuff: A new guide (via) Ethan Mollick’s guide to practical usage of large language model chatbots like ChatGPT 3.5 and 4, Bing, Claude and Bard is the best I’ve seen so far. He includes useful warnings about common traps and things that these models are both useful for and useless at.

# 31st March 2023, 6:17 am / bing, ai, chatgpt, bard, llms, ethan-mollick, claude

Downloading and converting the original models (Cerebras-GPT) (via) Georgi Gerganov added support for the Apache 2 licensed Cerebras-GPT language model to his ggml C++ inference library, as used by llama.cpp.

# 31st March 2023, 4:28 am / llama, local-llms, llms, cerebras, llama-cpp

Schillace Laws of Semantic AI (via) Principles for prompt engineering against large language models, developed by Microsoft’s Sam Schillace.

# 30th March 2023, 12:20 am / ai, prompt-engineering, generative-ai, llms

Making SQLite extensions npm install’able for Node.js, and on deno.land/x for Deno (via) Alex Garcia figured out how to get his “pip install X” trick for distributing compiled SQLite extensions to work for Node too! Now you can “npm install” 10 of his extensions, including sqlite-regex and sqlite-xsv and sqlite-http and sqlite-html and more, and attach them to a node-sqlite3 or better-sqlite3 connection. He’s bundled them for Deno too!
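
For comparison, the Python side of that trick ultimately comes down to loading a compiled extension into a sqlite3 connection—a minimal standard-library sketch, with a placeholder path standing in for the bundled extension file:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
# Path to a compiled loadable extension; placeholder value for illustration
conn.load_extension("./regex0")
conn.enable_load_extension(False)
# The extension's SQL functions are now available on this connection
```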

# 29th March 2023, 10:13 pm / nodejs, sqlite, npm, deno, alex-garcia

gpt4all. Similar to Alpaca, here’s a project which takes the LLaMA base model and fine-tunes it on instruction examples generated by GPT-3—in this case, it’s 800,000 examples generated using the ChatGPT GPT 3.5 turbo model (Alpaca used 52,000 generated by regular GPT-3). This is currently the easiest way to get a LLaMA derived chatbot running on your own computer: the repo includes compiled binaries for running on M1/M2, Intel Mac, Windows and Linux and provides a link to download the 3.9GB 4-bit quantized model.

# 29th March 2023, 6:03 pm / open-source, ai, generative-ai, llama, local-llms, llms, fine-tuning

Quicker serverless Postgres connections. Neon provide “serverless PostgreSQL”—autoscaling, managed PostgreSQL optimized for use with serverless hosting environments. A neat capability they provide is the ability to connect to a PostgreSQL server via WebSockets, which means their database can be used from environments such as Cloudflare workers which don’t have the ability to use a standard TCP database connection. This article describes some clever tricks they used to make establishing new connections via WebSockets more efficient, using the least possible number of network round-trips.

# 28th March 2023, 10:09 pm / postgresql, websockets

Cerebras-GPT: A Family of Open, Compute-efficient, Large Language Models (via) The latest example of an open source large language model you can run on your own hardware. This one is particularly interesting because the entire thing is under the Apache 2 license. Cerebras are an AI hardware company offering a product with 850,000 cores—this release was trained on their hardware, presumably to demonstrate its capabilities. The model comes in seven sizes from 111 million to 13 billion parameters, and the smaller sizes can be tried directly on Hugging Face.
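
To try one of the smaller sizes locally, a minimal transformers sketch looks something like this—note the Hugging Face repo name here is my assumption, so verify it before running:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo name assumed to be cerebras/Cerebras-GPT-111M; verify on Hugging Face
model_id = "cerebras/Cerebras-GPT-111M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Large language models are", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```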

# 28th March 2023, 10:05 pm / open-source, ai, gpt-3, generative-ai, local-llms, llms, cerebras, llm-release

Announcing Open Flamingo (via) New from LAION: “OpenFlamingo is a framework that enables training and evaluation of large multimodal models (LMMs)”. Multimodal here means it can answer questions about images—their interactive demo includes tools for image captioning, animal recognition, counting objects and visual question answering. They’ve released the OpenFlamingo-9B model built on top of LLaMA 7B and CLIP ViT/L-14—the model checkpoint is a 5.24 GB download from Hugging Face, and is available under a non-commercial research license.

# 28th March 2023, 9:59 pm / ai, generative-ai, llama, laion, llms, clip

LLaMA voice chat, with Whisper and Siri TTS. llama.cpp author Georgi Gerganov has stitched together the LLaMA language model, the Whisper voice to text model (with his whisper.cpp library) and the macOS “say” command to create an entirely offline AI agent that he can talk to with his voice and that can speak replies straight back to him.
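
Georgi’s version is all C++, but the pipeline shape—speech to text, text to an LLM, reply to speech—is easy to sketch in Python. Here generate_reply is a hypothetical stand-in for whatever local LLaMA binding you use, and question.wav is a placeholder recording:

```python
import subprocess

import whisper  # the openai-whisper package

model = whisper.load_model("base")


def generate_reply(prompt: str) -> str:
    # Hypothetical stand-in: call out to llama.cpp or any local LLM binding here
    return f"You said: {prompt}"


text = model.transcribe("question.wav")["text"]
reply = generate_reply(text)
subprocess.run(["say", reply])  # macOS built-in text to speech
```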

# 27th March 2023, 9:06 pm / macos, text-to-speech, ai, generative-ai, whisper, llama, local-llms, llms, llama-cpp, speech-to-text
