770 items tagged “llms”
Large Language Models (LLMs) are the class of technology behind generative text AI systems like OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude.
2024
Zoom CEO envisions AI deepfakes attending meetings in your place. I talked to Benj Edwards for this article about Zoom's terrible science-fiction concept of having "digital twins" attend meetings on your behalf:
When we specifically asked Simon Willison about Yuan's comments about digital twins, he told Ars, "My fundamental problem with this whole idea is that it represents pure AI science fiction thinking—just because an LLM can do a passable impression of someone doesn't mean it can actually perform useful 'work' on behalf of that person. LLMs are useful tools for thought. They are terrible tools for delegating decision making to. That's currently my red line for using them: any time someone outsources actual decision making authority to an opaque random number generator is a recipe for disaster."
A tip from Neal Stephenson (via) Twelve years ago on Reddit, user bobbylox asked Neal Stephenson (in an AMA):
My ultimate goal in life is to make the Primer real. Anything you want to make sure I get right?
Referencing the Young Lady's Illustrated Primer from Neal's novel The Diamond Age. Stephenson replied:
Kids need to get answers from humans who love them.
(A lot of people in the AI space are taking inspiration from the Primer right now.)
computer scientists: we have invented a virtual dumbass who is constantly wrong
tech CEOs: let's add it to every product
GPT-2 five years later. Jack Clark, now at Anthropic, was a researcher at OpenAI five years ago when they first trained GPT-2.
In this fascinating essay, Jack revisits their decision not to release the full model, based on their concerns around potentially harmful ways the technology could be used.
(Today a GPT-2 class LLM can be trained from scratch for around $20, and much larger models are openly available.)
There's a saying in the financial trading business which is 'the market can stay irrational longer than you can stay solvent' - though you might have the right idea about something that will happen in the future, your likelihood of correctly timing the market is pretty low. There's a truth to this for thinking about AI risks - yes, the things we forecast (as long as they're based on a good understanding of the underlying technology) will happen at some point but I think we have a poor record of figuring out a) when they'll happen, b) at what scale they'll happen, and c) how severe their effects will be. This is a big problem when you take your imagined future risks and use them to justify policy actions in the present!
As an early proponent of government regulation around training large models, he offers the following cautionary note:
[...] history shows that once we assign power to governments, they're loathe to subsequently give that power back to the people. Policy is a ratchet and things tend to accrete over time. That means whatever power we assign governments today represents the floor of their power in the future - so we should be extremely cautious in assigning them power because I guarantee we will not be able to take it back.
Jack stands by the recommendation from the original GPT-2 paper for governments "to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems."
A look at Apple’s new Transformer-powered predictive text model. Jack Cook reverse engineered the tiny LLM used for the predictive text keyboard in the latest iOS. It appears to be a GPT-2 style custom model with 34M parameters and a 15,000 token vocabulary.
Turns out that LLMs learn a lot better and faster from educational content as well. This is partly because the average Common Crawl article (internet pages) is not of very high value and distracts the training, packing in too much irrelevant information. The average webpage on the internet is so random and terrible it's not even clear how prior LLMs learn anything at all.
Experimenting with local alt text generation in Firefox Nightly (via) The PDF editor in Firefox (confession: I did not know Firefox ships with a PDF editor) is getting an experimental feature that can help suggest alt text for images for the human editor to then adapt and improve on.
This is a great application of AI, made all the more interesting here because Firefox will run a local model on-device for this, using a custom trained model they describe as "our 182M parameters model using a Distilled version of GPT-2 alongside a Vision Transformer (ViT) image encoder".
The model uses WebAssembly with ONNX running in Transformers.js, and will be downloaded the first time the feature is put to use.
Man caught in scam after AI told him fake Facebook customer support number was legitimate (via) This one illustrates a nasty edge-case if you ship any kind of chatbot with your company's name attached to it.
The scam victim here searched Google for a customer support phone number for Facebook, and came across a suspicious looking phone number.
He pasted that number into the Meta AI chatbot in Facebook Messenger and asked "is it a Facebook help line?" - and Meta AI answered:
The phone number 1-xxx-xxx-xxxx is indeed a legitimate Facebook support number. Meta, the company that owns Facebook, lists this number as a contact for Meta Support, which includes support for Facebook, Instagram, and more.
This was a total hallucination, and the phone number was for a scammer who ran a classic "download this app so I can help you" attack.
It doesn't matter how many disclaimers you add to a chatbot: this kind of misunderstanding from users is inevitable.
What does the public in six countries think of generative AI in news? (via) Fascinating survey by the Reuters Institute for the Study of Journalism at Oxford that asked ~12,000 people across six countries for their opinions on AI usage in journalism.
It’s also being interpreted as evidence that few members of the general public actually use these tools, because the opening survey questions ask about personal usage.
I don’t think the numbers support that narrative, personally. For survey participants in the USA 7% used ChatGPT daily and 11% used it weekly, which is higher than I would expect for those frequencies. For the UK those were 2% daily and 7% weekly.
The 18-24 group were the heaviest users of these tools. Lots of other interesting figures to explore.
The realization hit me [when the GPT-3 paper came out] that an important property of the field flipped. In ~2011, progress in AI felt constrained primarily by algorithms. We needed better ideas, better modeling, better approaches to make further progress. If you offered me a 10X bigger computer, I'm not sure what I would have even used it for. GPT-3 paper showed that there was this thing that would just become better on a large variety of practical tasks, if you only trained a bigger one. Better algorithms become a bonus, not a necessity for progress in AGI. Possibly not forever and going forward, but at least locally and for the time being, in a very practical sense. Today, if you gave me a 10X bigger computer I would know exactly what to do with it, and then I'd ask for more.
Codestral: Hello, World! Mistral's first code-specific model, trained to be "fluent" in 80 different programming languages.
The weights are released under a new Mistral AI Non-Production License, which is extremely restrictive:
3.2. Usage Limitation
- You shall only use the Mistral Models and Derivatives (whether or not created by Mistral AI) for testing, research, Personal, or evaluation purposes in Non-Production Environments;
- Subject to the foregoing, You shall not supply the Mistral Models or Derivatives in the course of a commercial activity, whether in return for payment or free of charge, in any medium or form, including but not limited to through a hosted or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
To Mistral's credit, at least they don't misapply the term "open source" in their marketing around this model - they consistently use the term "open-weights" instead. They also state that they plan to continue using Apache 2 for other model releases.
Codestral can be used commercially when accessed via their paid API.
Training is not the same as chatting: ChatGPT and other LLMs don’t remember everything you say
I’m beginning to suspect that one of the most common misconceptions about LLMs such as ChatGPT involves how “training” works.
[... 1,543 words]
What We Learned from a Year of Building with LLMs (Part I). Accumulated wisdom from six experienced LLM hackers. Lots of useful tips in here. On providing examples in a prompt:
If n is too low, the model may over-anchor on those specific examples, hurting its ability to generalize. As a rule of thumb, aim for n ≥ 5. Don’t be afraid to go as high as a few dozen.
There's a recommendation not to overlook keyword search when implementing RAG - tricks with embeddings can miss results for things like names or acronyms, and keyword search is much easier to debug.
Plus this tip on using the LLM-as-judge pattern for implementing automated evals:
Instead of asking the LLM to score a single output on a Likert scale, present it with two options and ask it to select the better one. This tends to lead to more stable results.
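A minimal sketch of that pairwise judging pattern, with a hypothetical llm() callable standing in for whatever client you use to call the judge model:

```python
import random

def pairwise_judge(question, answer_a, answer_b, llm):
    """Ask a judge model which of two answers is better; returns 'a' or 'b'.

    llm is a hypothetical callable: prompt string in, model text response out.
    """
    # Present the answers in random order to reduce position bias
    pair = [("a", answer_a), ("b", answer_b)]
    random.shuffle(pair)
    prompt = (
        f"Question: {question}\n\n"
        f"Response A:\n{pair[0][1]}\n\n"
        f"Response B:\n{pair[1][1]}\n\n"
        "Which response answers the question better? Reply with exactly A or B."
    )
    verdict = llm(prompt).strip().upper()
    return pair[0][0] if verdict.startswith("A") else pair[1][0]
```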
Reproducing GPT-2 (124M) in llm.c in 90 minutes for $20 (via) GPT-2 124M was the smallest model in the GPT-2 series released by OpenAI back in 2019. Andrej Karpathy's llm.c is an evolving 4,000 line C/CUDA implementation which can now train a GPT-2 model from scratch in 90 minutes against an 8X A100 80GB GPU server. This post walks through exactly how to run the training, using 10 billion tokens of FineWeb.
Andrej notes that this isn't actually that far off being able to train a GPT-3:
Keep in mind that here we trained for 10B tokens, while GPT-3 models were all trained for 300B tokens. [...] GPT-3 actually didn't change too much at all about the model (context size 1024 -> 2048, I think that's it?).
Estimated cost for a GPT-3 ADA (350M parameters)? About $2,000.
Why Google’s AI might recommend you mix glue into your pizza. My advice to “distrust and verify” when using LLMs made it into this Washington Post piece by Shira Ovide.
Golden Gate Claude. This is absurdly fun and weird. Anthropic's recent LLM interpretability research gave them the ability to locate features within the opaque blob of their Sonnet model and boost the weight of those features during inference.
For a limited time only they're serving a "Golden Gate Claude" model which has the feature for the Golden Gate Bridge boosted. No matter what question you ask it the Golden Gate Bridge is likely to be involved in the answer in some way. Click the little bridge icon in the Claude UI to give it a go.
I asked for names for a pet pelican and the first one it offered was this:
Golden Gate - This iconic bridge name would be a fitting moniker for the pelican with its striking orange color and beautiful suspension cables.
And from a recipe for chocolate covered pretzels:
Gently wipe any fog away and pour the warm chocolate mixture over the bridge/brick combination. Allow to air dry, and the bridge will remain accessible for pedestrians to walk along it.
UPDATE: I think the experimental model is no longer available, approximately 24 hours after release. We'll miss you, Golden Gate Claude.
Nilay Patel reports a hallucinated ChatGPT summary of his own article (via) Here's a ChatGPT bug that's a new twist on the old issue where it would hallucinate the contents of a web page based on the URL.
The Verge editor Nilay Patel asked for a summary of one of his own articles, pasting in the URL.
ChatGPT 4o replied with an entirely invented summary full of hallucinated details.
It turns out The Verge blocks ChatGPT's browse mode from accessing their site in their robots.txt:
User-agent: ChatGPT-User
Disallow: /
Clearly ChatGPT should reply that it is unable to access the provided URL, rather than inventing a response that guesses at the contents!
I just left Google last month. The "AI Projects" I was working on were poorly motivated and driven by this panic that as long as it had "AI" in it, it would be great. This myopia is NOT something driven by a user need. It is a stone cold panic that they are getting left behind.
The vision is that there will be a Tony Stark like Jarvis assistant in your phone that locks you into their ecosystem so hard that you'll never leave. That vision is pure catnip. The fear is that they can't afford to let someone else get there first.
Some goofy results from ‘AI Overviews’ in Google Search. John Gruber collects two of the best examples of Google’s new AI overviews going horribly wrong.
Gullibility is a fundamental trait of all LLMs, and Google’s new feature apparently doesn’t know not to parrot ideas it picked up from articles in the Onion, or jokes from Reddit.
I’ve heard that LLM providers internally talk about “screenshot attacks”—bugs where the biggest risk is that someone will take an embarrassing screenshot.
In Google search’s case this class of bug feels like a significant reputational threat.
A Grand Unified Theory of the AI Hype Cycle. Glyph outlines the pattern of every AI hype cycle since the 1960s: a new, novel mechanism is discovered and named. People get excited, and non-practitioners start hyping it as the path to true “AI”. It eventually becomes apparent that this is not the case, even while practitioners quietly incorporate this new technology into useful applications while downplaying the “AI” branding. A new mechanism is discovered and the cycle repeats.
What is prompt optimization? (via) Delightfully clear explanation of a simple automated prompt optimization strategy from Jason Liu. Gather a selection of examples and build an evaluation function to return a numeric score (the hard bit). Then try different shuffled subsets of those examples in your prompt and look for the example collection that provides the highest averaged score.
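A rough sketch of that loop in Python - score() and build_prompt() are hypothetical stand-ins for your own evaluation function and prompt template:

```python
import random

def optimize_examples(examples, eval_cases, score, build_prompt,
                      subset_size=5, trials=20):
    """Try random subsets of few-shot examples, keep the best-scoring one.

    score(prompt, case) must return a numeric score; build_prompt(subset)
    assembles the prompt. Both are hypothetical, supplied by you.
    """
    best_subset, best_avg = None, float("-inf")
    for _ in range(trials):
        subset = random.sample(examples, subset_size)
        prompt = build_prompt(subset)
        # Average the evaluation score across every eval case
        avg = sum(score(prompt, case) for case in eval_cases) / len(eval_cases)
        if avg > best_avg:
            best_subset, best_avg = subset, avg
    return best_subset, best_avg
```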
Mastering LLMs: A Conference For Developers & Data Scientists (via) I’m speaking at this 5-week (maybe soon 6-week) long online conference about LLMs, presenting about “LLMs on the command line”.
Other speakers include Jeremy Howard, Sophia Yang from Mistral, Wing Lian of Axolotl, Jason Liu of Instructor, Paige Bailey from Google, my former co-worker John Berryman and a growing number of fascinating LLM practitioners.
It’s been fun watching this grow from a short course on fine-tuning LLMs to a full-blown multi-week conference over the past few days!
New Phi-3 models: small, medium and vision. I couldn't find a good official announcement post to link to about these three newly released models, but this post on LocalLLaMA on Reddit has them in one place: Phi-3 small (7B), Phi-3 medium (14B) and Phi-3 vision (4.2B) (the previously released model was Phi-3 mini - 3.8B).
You can try out the vision model directly here, no login required. It didn't do a great job with my first test image though, hallucinating the text.
As with Mini these are all released under an MIT license.
UPDATE: Here's a page from the newly published Phi-3 Cookbook describing the models in the family.
Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet (via) Big advances in the field of LLM interpretability from Anthropic, who managed to extract millions of understandable features from their production Claude 3 Sonnet model (the mid-point between the inexpensive Haiku and the GPT-4-class Opus).
Some delightful snippets in here such as this one:
We also find a variety of features related to sycophancy, such as an empathy / “yeah, me too” feature 34M/19922975, a sycophantic praise feature 1M/847723, and a sarcastic praise feature 34M/19415708.
Fast groq-hosted LLMs vs browser jank (via) Groq is now serving LLMs such as Llama 3 so quickly that JavaScript which attempts to render Markdown strings on every new token can cause performance issues in browsers.
Taras Glek's solution was to move the rendering to a requestAnimationFrame() callback, effectively buffering the rendering to the fastest rate the browser can support.
A Plea for Sober AI. Great piece by Drew Breunig: “Imagine having products THIS GOOD and still over-selling them.”
AI counter app from my PyCon US keynote. In my keynote at PyCon US this morning I ran a counter at the top of my screen that automatically incremented every time I said the words "AI" or "artificial intelligence", using vosk, pyaudio and Tkinter. I wrote it in a few minutes with the help of GPT-4o - here's the code I ran as a GitHub repository.
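The repository has the real thing; here's a simplified sketch of the core idea - stream microphone audio through vosk and bump a counter whenever the transcript mentions AI (assumes a vosk model unpacked to ./model):

```python
import json
import pyaudio
from vosk import Model, KaldiRecognizer

# Simplified sketch, not the code from the repo: transcribe the microphone
# with vosk and count mentions of "AI" / "artificial intelligence".
model = Model("model")  # assumes a vosk model downloaded to ./model
recognizer = KaldiRecognizer(model, 16000)

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=16000,
                    input=True, frames_per_buffer=8000)

count = 0
while True:
    data = stream.read(4000, exception_on_overflow=False)
    if recognizer.AcceptWaveform(data):
        text = json.loads(recognizer.Result()).get("text", "").lower()
        count += text.split().count("ai") + text.count("artificial intelligence")
        print(f"AI mentions so far: {count}")
```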
I'll publish full detailed notes from my talk once the video is available on YouTube.
Understand errors and warnings better with Gemini (via) As part of Google's Gemini-in-everything strategy, Chrome DevTools now includes an opt-in feature for passing error messages in the JavaScript console to Gemini for an explanation, via a lightbulb icon.
Amusingly, this documentation page includes a warning about prompt injection:
Many of LLM applications are susceptible to a form of abuse known as prompt injection. This feature is no different. It is possible to trick the LLM into accepting instructions that are not intended by the developers.
They include a screenshot of a harmless example, but I'd be interested in hearing if anyone has a theoretical attack that could actually cause real damage here.
But where the company once limited itself to gathering low-hanging fruit along the lines of “what time is the super bowl,” on Tuesday executives showcased generative AI tools that will someday plan an entire anniversary dinner, or cross-country-move, or trip abroad. A quarter-century into its existence, a company that once proudly served as an entry point to a web that it nourished with traffic and advertising revenue has begun to abstract that all away into an input for its large language models.
PaliGemma model README (via) One of the more overlooked announcements from Google I/O yesterday was PaliGemma, an openly licensed VLM (Vision Language Model) in the Gemma family of models.
The model accepts an image and a text prompt. It outputs text, but that text can include special tokens representing regions on the image. This means it can return both bounding boxes and fuzzier segment outlines of detected objects, behavior that can be triggered using a prompt such as "segment puffins".
You can try it out on Hugging Face.
It's a 3B model, making it feasible to run on consumer hardware.