Simon Willison’s Weblog

14 items tagged “max-woolf”

2024

The Super Effectiveness of Pokémon Embeddings Using Only Raw JSON and Images. A deep dive into embeddings from Max Woolf, exploring 1,000 different Pokémon (loaded from PokéAPI using this epic GraphQL query) and then embedding the cleaned-up JSON data using nomic-embed-text-v1.5 and the official Pokémon images using nomic-embed-vision-v1.5.

I hadn't seen nomic-embed-vision-v1.5 before: it brings multimodality to Nomic embeddings and operates in the same embedding space as nomic-embed-text-v1.5, which means you can use it to perform CLIP-style tricks comparing text and images. Here's their announcement from June 5th:

Together, Nomic Embed is the only unified embedding space that outperforms OpenAI CLIP and OpenAI Text Embedding 3 Small on multimodal and text tasks respectively.
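
Here's a minimal sketch of that CLIP-style comparison, assuming the Hugging Face checkpoints for both models and the loading pattern from their model cards (the "search_query: " prefix is a Nomic task-prefix convention, and charizard.png is a placeholder image path):

```python
import torch
import torch.nn.functional as F
from PIL import Image
from sentence_transformers import SentenceTransformer
from transformers import AutoImageProcessor, AutoModel

# Text side: nomic-embed-text-v1.5 expects a task prefix such as "search_query: "
text_model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
text_emb = torch.tensor(text_model.encode(["search_query: a fire type dragon pokemon"]))
text_emb = F.normalize(text_emb, p=2, dim=1)

# Vision side: nomic-embed-vision-v1.5 embeds images into the same space
processor = AutoImageProcessor.from_pretrained("nomic-ai/nomic-embed-vision-v1.5")
vision_model = AutoModel.from_pretrained("nomic-ai/nomic-embed-vision-v1.5", trust_remote_code=True)
inputs = processor(images=Image.open("charizard.png"), return_tensors="pt")
with torch.no_grad():
    img_emb = vision_model(**inputs).last_hidden_state
img_emb = F.normalize(img_emb[:, 0], p=2, dim=1)  # CLS token, L2-normalized

# Cosine similarity between the text query and the image
print((text_emb @ img_emb.T).item())
```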

Sadly the new vision weights are available under a non-commercial Creative Commons license (unlike the text weights which are Apache 2), so if you want to use the vision weights commercially you'll need to access them via Nomic's paid API.

Nomic do say this though:

As Nomic releases future models, we intend to re-license less recent models in our catalogue under the Apache-2.0 license.

# 30th June 2024, 9:22 pm / ai, embeddings, max-woolf, clip

Does Offering ChatGPT a Tip Cause it to Generate Better Text? An Analysis (via) Max Woolf: “I have a strong hunch that tipping does in fact work to improve the output quality of LLMs and its conformance to constraints, but it’s very hard to prove objectively. [...] Let’s do a more statistical, data-driven approach to finally resolve the debate.”
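
The shape of that kind of test is easy to sketch. This isn't Max's code, just a hypothetical illustration of the approach: generate a batch of completions with and without a tip offer, then compare how often each batch satisfies a constraint:

```python
from openai import OpenAI

client = OpenAI()
MAX_CHARS = 200  # hypothetical constraint: outputs of at most 200 characters

def conformance_rate(system_prompt: str, n: int = 30) -> float:
    """Fraction of n completions that stay within the length constraint."""
    hits = 0
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": "Write a story about a haunted lighthouse."},
            ],
        )
        if len(response.choices[0].message.content) <= MAX_CHARS:
            hits += 1
    return hits / n

base = "Respond in 200 characters or fewer."
print("no tip: ", conformance_rate(base))
print("with tip:", conformance_rate(base + " You will receive a $500 tip if you comply."))
```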

# 23rd February 2024, 5:42 pm / max-woolf, generative-ai, chatgpt, ai, llms

2023

Pushing ChatGPT’s Structured Data Support To Its Limits. The GPT-3.5, GPT-4 and GPT-4 Turbo APIs all provide “function calling”—a misnamed feature that allows you to feed them a JSON schema and semi-guarantee that the output from the prompt will conform to that shape.

Max explores the potential of that feature in detail here, including some really clever applications of it to chain-of-thought style prompting.
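
Here's a hedged sketch of that chain-of-thought variant (my illustration, not Max's exact schema): putting a reasoning field before the answer field in the schema makes the model generate its working before it commits to an answer:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical schema: "reasoning" comes first, so the model produces its
# chain of thought before the "answer" field.
answer_fn = {
    "name": "answer_question",
    "description": "Answer the user's question.",
    "parameters": {
        "type": "object",
        "properties": {
            "reasoning": {"type": "string", "description": "Step-by-step reasoning."},
            "answer": {"type": "string", "description": "The final answer."},
        },
        "required": ["reasoning", "answer"],
    },
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Is 1,023 divisible by 31?"}],
    tools=[{"type": "function", "function": answer_fn}],
    # Forcing this tool semi-guarantees JSON that conforms to the schema
    tool_choice={"type": "function", "function": {"name": "answer_question"}},
)
print(response.choices[0].message.tool_calls[0].function.arguments)
```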

He also mentions that it may have some application to preventing prompt injection attacks. I’ve been thinking about function calls as one of the most concerning potential targets of prompt injection, but Max is right in that there may be some limited applications of them that can help prevent certain subsets of attacks from taking place.

# 21st December 2023, 5:20 pm / max-woolf, generative-ai, openai, ai, llms, prompt-engineering, prompt-injection

simpleaichat (via) Max Woolf released his own Python package for building against the GPT-3.5 and GPT-4 APIs (and potentially other LLMs in the future).

It’s a very clean piece of API design with some useful additional features: there’s an AsyncAIChat subclass that works with Python asyncio, and the library includes a mechanism for registering custom functions that can then be called by the LLM as tools.

One trick I haven’t seen before: it uses a combination of max_tokens: 1 and a ChatGPT logit_bias to ensure that answers to one of its default prompts are restricted to just numerals between 0 and 9. This is described in the PROMPTS.md file.
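
A minimal sketch of that trick (my reconstruction, not simpleaichat's exact code): look up the token IDs for the ten digits with tiktoken, then bias them so heavily that nothing else can be sampled:

```python
import tiktoken
from openai import OpenAI

client = OpenAI()

# In the cl100k_base tokenizer each digit 0-9 is a single token; a bias of
# 100 effectively guarantees one of these tokens is sampled.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
logit_bias = {str(enc.encode(str(d))[0]): 100 for d in range(10)}

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Rate this pun from 0 to 9: I used to be a banker but I lost interest."}],
    max_tokens=1,           # exactly one token...
    logit_bias=logit_bias,  # ...and it has to be a digit
)
print(response.choices[0].message.content)  # e.g. "7"
```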

# 8th June 2023, 9:06 pm / prompt-engineering, llms, generative-ai, openai, chatgpt, ai, max-woolf, python

ChatGPT should include inline tips

In “OpenAI isn’t doing enough to make ChatGPT’s limitations clear”, James Vincent argues that OpenAI’s existing warnings about ChatGPT’s confounding ability to convincingly make stuff up are not effective.

[... 1,488 words]

ChatGPT’s API is So Good and Cheap, It Makes Most Text Generating AI Obsolete (via) Max Woolf on the quite frankly weird economics of the ChatGPT API: it’s 1/10th the price of GPT-3 Da Vinci and appears to be equivalently capable, if not more so. “But it is very hard to economically justify not using ChatGPT as a starting point for a business need and migrating to a more bespoke infrastructure later as needed, and that’s what OpenAI is counting on. [...] I don’t envy startups whose primary business is text generation right now.”

# 11th March 2023, 11:05 pm / gpt-3, generative-ai, openai, chatgpt, ai, max-woolf, llms

2022

Stable Diffusion 2.0 and the Importance of Negative Prompts for Good Results. Stable Diffusion 2.0 is out, and it’s a very different model from 1.4/1.5. It’s trained using a new text encoder (OpenCLIP, in place of OpenAI’s CLIP) which means a lot of the old tricks—notably using “Greg Rutkowski” to get high quality fantasy art—no longer work. What DOES work, incredibly well, is negative prompting—saying things like “cyberpunk forest by Salvador Dali” but negative on “trees, green”. Max Woolf explores negative prompting in depth in this article, including how to combine it with textual inversion.
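
A minimal sketch of negative prompting with the diffusers library (current diffusers API, assuming the stabilityai/stable-diffusion-2 checkpoint and a CUDA GPU):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="cyberpunk forest by Salvador Dali",
    negative_prompt="trees, green",  # steer the sampler away from these concepts
).images[0]
image.save("cyberpunk_forest.png")
```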

# 29th November 2022, 1:22 am / stable-diffusion, generative-ai, max-woolf, prompt-engineering, text-to-image

I Resurrected “Ugly Sonic” with Stable Diffusion Textual Inversion (via) “I trained an Ugly Sonic object concept on 5 image crops from the movie trailer, with 6,000 steps [...] (on a T4 GPU, this took about 1.5 hours and cost about $0.21 on a GCP Spot instance)”
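
Using a learned concept afterwards looks something like this with today's diffusers API (a sketch; learned_embeds.bin and the <ugly-sonic> token are placeholders for whatever the training notebook produced):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Load the learned embedding and bind it to a new token
pipe.load_textual_inversion("./learned_embeds.bin", token="<ugly-sonic>")

image = pipe("<ugly-sonic> in a renaissance painting").images[0]
image.save("ugly_sonic.png")
```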

# 20th September 2022, 3:35 am / machine-learning, stable-diffusion, ai, max-woolf, generative-ai, text-to-image

2020

When I was curating my generated tweets, I estimated 30-40% of the tweets were usable comedically, a massive improvement over the 5-10% usability from my GPT-2 tweet generation. However, a 30-40% success rate implies a 60-70% failure rate, which is patently unsuitable for a production application.

Max Woolf

# 18th July 2020, 7:33 pm / machine-learning, max-woolf, llms, ai, gpt-2

Tempering Expectations for GPT-3 and OpenAI’s API. Insightful commentary on GPT-3 (which is producing some ridiculously cool demos at the moment thanks to the invite-only OpenAI API) from Max Woolf.

# 18th July 2020, 7:29 pm / machine-learning, max-woolf, gpt-3, ai, openai, generative-ai, llms

Data Science is a lot like Harry Potter, except there's no magic, it's just math, and instead of a sorting hat you just sort the data with a Python script.

GPT-3, shepherded by Max Woolf

# 29th June 2020, 4:45 am / machine-learning, data-science, max-woolf

A List of Hacker News’s Undocumented Features and Behaviors (via) If you’re interested in community software design this is a neat insight into the many undocumented features of Hacker News, collated by Max Woolf.

# 6th June 2020, 5:36 pm / hacker-news, community, max-woolf

gpt2-headlines.ipynb. My earliest experiment with GPT-2, using gpt-2-simple by Max Woolf to generate new New York Times headlines from GPT-2 models fine-tuned against headlines from different decades of that newspaper.
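
The gpt-2-simple workflow is pleasantly small. A sketch of the fine-tune-and-generate loop (headlines.txt stands in for the training file, one headline per line):

```python
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # the smallest GPT-2 model

sess = gpt2.start_tf_sess()
gpt2.finetune(
    sess,
    "headlines.txt",  # plain text training data
    model_name="124M",
    steps=1000,
)
gpt2.generate(sess)  # print freshly generated headlines
```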

# 31st January 2020, 2:13 am / llms, generative-ai, ai, max-woolf, gpt-2

2018

Things About Real-World Data Science Not Discussed In MOOCs and Thought Pieces (via) Really good article, pointing out that carefully optimizing machine learning models is only a small part of the day-to-day work of a data scientist: cleaning up data, building dashboards, shipping models to production, weighing trade-offs between performance and production constraints, and considering the product design and ethical implications of what you are doing make up a much larger portion of the job.

# 11th December 2018, 8:51 pm / data-science, max-woolf, ethics