Blogmarks that use markdown

Owned by simonw, visibility: Public

SQL query
select count(*) from blog_blogmark where use_markdown = true

1 row

count
118
Copy and export data

Duration: 5.02ms

SQL query
select 'https://simonwillison.net/b/' || id as url, link_url, link_title, commentary, via_url, via_title, created, card_image, use_markdown from blog_blogmark where use_markdown = true order by id desc

118 rows

url link_url link_title commentary via_url via_title created card_image use_markdown
https://simonwillison.net/b/7871 https://github.com/quickwit-oss/tantivy-cli tantivy-cli I tried out this Rust based search engine today and I was very impressed. [Tantivy](https://github.com/quickwit-oss/tantivy) is the core project - it's an open source (MIT) Rust library that implements Lucene-style full text search, with a very full set of features: BM25 ranking, faceted search, range queries, incremental indexing etc. `tantivy-cli` offers a CLI wrapper around the Rust library. It's not actually as full-featured as I hoped: it's intended as more of a demo than a full exposure of the library's features. The JSON API server it runs can only be used to run simple keyword or phrase searches for example, no faceting or filtering. Tantivy's performance is fantastic. I was able to index the entire contents of my link blog in a fraction of a second. I found [this post](https://fulmicoton.com/posts/behold-tantivy/) from 2017 where Tantivy creator Paul Masurel described the initial architecture of his new search side-project that he created to help him learn Rust. Paul went on to found [Quickwit](https://quickwit.io/), an impressive looking analytics platform that uses Tantivy as one of its core components. The [Python bindings](https://github.com/quickwit-oss/tantivy-py) for Tantivy look well maintained, wrapping the Rust library using [maturin](https://github.com/PyO3/maturin). Those are probably the best way for a developer like myself to really start exploring what it can do. Also notable: the [Hacker News thread](https://news.ycombinator.com/item?id=40492834) has dozens of posts from happy Tantivy users reporting successful use on their projects. https://news.ycombinator.com/item?id=40492834 Hacker News 2024-06-13 06:03:00+00:00 - null - True
https://simonwillison.net/b/7870 https://gcollazo.com/optimal-sqlite-settings-for-django/ Optimal SQLite settings for Django Giovanni Collazo put the work in to figure out settings to make SQLite work well for production Django workloads. WAL mode and a `busy_timeout` of 5000 make sense, but the most interesting recommendation here is `"transaction_mode": "IMMEDIATE"` to avoid locking errors when a transaction is upgraded to a write transaction. Giovanni's configuration depends on the new `"init_command"` support for SQLite PRAGMA options [introduced in Django 5.1alpha](https://docs.djangoproject.com/en/5.1/ref/databases/#setting-pragma-options). https://lobste.rs/s/9lchst/optimal_sqlite_settings_for_django Lobste.rs 2024-06-13 05:04:36+00:00 - null - True
https://simonwillison.net/b/7869 https://pdf-to-podcast.com/ PDF to Podcast At first glance this project by Stephan Fitzpatrick is a cute demo of a terrible sounding idea... but then I tried it out and the results are weirdly effective. You can listen to a fake podcast version of the transformers paper, or upload your own PDF (with your own OpenAI API key) to make your own. It's open source (Apache 2) so I had a poke around in [the code](https://github.com/knowsuchagency/pdf-to-podcast). It gets a lot done with a single [180 line Python script](https://github.com/knowsuchagency/pdf-to-podcast/blob/512bfbdb4fd658ad4b301336020c4ea16cb69e18/main.py). When I'm exploring code like this I always jump straight to [the prompt](https://github.com/knowsuchagency/pdf-to-podcast/blob/512bfbdb4fd658ad4b301336020c4ea16cb69e18/main.py#L47-L80) - it's quite long, and starts like this: > Your task is to take the input text provided and turn it into an engaging, informative podcast dialogue. The input text may be messy or unstructured, as it could come from a variety of sources like PDFs or web pages. Don't worry about the formatting issues or any irrelevant information; your goal is to extract the key points and interesting facts that could be discussed in a podcast. [...] So I grabbed a copy of it and pasted in [my blog entry about WWDC](https://simonwillison.net/2024/Jun/10/apple-intelligence/), which produced [this result](https://gist.github.com/simonw/edac62f6c11640abe98925cbc17f4ac3#apple-intelligence-a-deep-dive-into-the-future-of-ai) when I ran it through Gemini Flash using [llm-gemini](https://github.com/simonw/llm-gemini): `cat prompt.txt | llm -m gemini-1.5-flash-latest` Then I piped the result through my [ospeak](https://simonwillison.net/2023/Nov/7/ospeak/) CLI tool for running text-to-speech with the OpenAI TTS models (after truncating to 690 tokens with [ttok](https://github.com/simonw/ttok) because it turned out to be slightly too long for the API to handle): `llm logs --response | ttok -t 690 | ospeak -s -o wwdc-auto-podcast.mp3` And [here's the result](https://static.simonwillison.net/static/2024/wwdc-auto-podcast.mp3) (3.9MB 3m14s MP3). It's not as good as the PDF-to-Podcast version because Stephan has some [really clever code](https://github.com/knowsuchagency/pdf-to-podcast/blob/512bfbdb4fd658ad4b301336020c4ea16cb69e18/main.py#L115-L126) that uses different TTS voices for each of the characters in the transcript, but it's still a surprisingly fun way of repurposing text from my blog. I enjoyed listening to it while I was cooking dinner. https://news.ycombinator.com/item?id=40653417 Show HN 2024-06-13 01:03:56+00:00 - null - True
https://simonwillison.net/b/7868 https://docs.datasette.io/en/stable/changelog.html#v0-64-7 Datasette 0.64.7 A very minor dot-fix release for Datasette stable, addressing [this bug](https://github.com/simonw/datasette/issues/2353) where Datasette running against the latest version of SQLite - 3.46.0 - threw an error on canned queries that included `:named` parameters in their SQL. The root cause was Datasette using [a now invalid clever trick](https://github.com/simonw/datasette/blob/7437d40e5dd4d614bb769e16c0c1b96c6c19647f/datasette/utils/__init__.py#L1137-L1150) I came up with against the undocumented and unstable opcodes returned by a SQLite `EXPLAIN` query. I asked on the SQLite forum and learned that the feature I was using was removed in [this commit to SQLite](https://sqlite.org/src/info/dd5977c9a8a418be). D. Richard Hipp [explains](https://sqlite.org/forum/forumpost/1cafc721009cef7f): > The P4 parameter to OP_Variable was not being used for anything. By omitting it, we make the prepared statement slightly smaller, reduce the size of the SQLite library by a few bytes, and help sqlite3_prepare() and similar run slightly faster. - null - - null - 2024-06-12 22:55:00+00:00 - null - True
https://simonwillison.net/b/7867 https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/ Generative AI Is Not Going To Build Your Engineering Team For You This barnstormer of an essay is a long read by Charity Majors, and I find myself wanting to quote almost every paragraph. It thoroughly and passionately debunks the idea that generative AI means that teams no longer need to hire junior programmers. This is for several key reasons. First is the familiar pipeline argument - we need juniors in order to grow new intermediate and senior engineers: > Software is an apprenticeship industry. You can’t learn to be a software engineer by reading books. You can only learn by doing…and doing, and doing, and doing some more. No matter what your education consists of, most learning happens on the job—period. And it never ends! Learning and teaching are lifelong practices; they have to be, the industry changes so fast. > > It takes a solid seven-plus years to forge a competent software engineer. (Or as most job ladders would call it, a “senior software engineer”.) That’s many years of writing, reviewing, and deploying code every day, on a team alongside more experienced engineers. That’s just how long it seems to take. What does it mean to be a senior engineer? It’s a lot more than just writing code: > To me, being a senior engineer is not primarily a function of your ability to write code. It has far more to do with your ability to understand, maintain, explain, and manage a large body of software in production over time, as well as the ability to translate business needs into technical implementation. So much of the work is around crafting and curating these large, complex sociotechnical systems, and code is just one representation of these systems. > > […] > > People act like writing code is the hard part of software. It is not. It never has been, it never will be. **Writing code is the easiest part of software engineering**, and it’s getting easier by the day. The hard parts are what you do with that code—operating it, understanding it, extending it, and governing it over its entire lifecycle. But I find the most convincing arguments are the ones about team structure itself: > **Hiring engineers is about composing teams**. The smallest unit of software ownership is not the individual, it’s the team > > […] > > Have you ever been on a team packed exclusively with staff or principal engineers? It is *not fun*. That is not a high-functioning team. There is only so much high-level architecture and planning work to go around, there are only so many big decisions that need to be made. These engineers spend most of their time doing work that feels boring and repetitive, so they tend to over-engineer solutions and/or cut corners—sometimes at the same time. They compete for the “fun” stuff and find reasons to pick technical fights with each other. They chronically under-document and under-invest in the work that makes systems simple and tractable. > > […] > > The best teams are ones where no one is bored, because every single person is working on something that challenges them and pushes their boundaries. The only way you can get this is by having a range of skill levels on the team. Charity finishes with advice on hiring juniors, including ensuring that your organization is in the right shape to do so effectively. > The only thing worse than never hiring any junior engineers is hiring them into an awful experience where they can’t learn anything. Seriously though, read the whole thing. It contains such a density of accumulated engineering management wisdom. https://twitter.com/mipsytipsy/status/1800265275624874446 @mipsytipsy 2024-06-12 15:11:25+00:00 - null - True
https://simonwillison.net/b/7866 https://www.nytimes.com/2024/06/11/style/ai-search-slop.html First Came ‘Spam.’ Now, With A.I., We’ve Got ‘Slop’ First [the Guardian](https://simonwillison.net/2024/May/19/spam-junk-slop-the-latest-wave-of-ai-behind-the-zombie-internet/), now the NYT. I've apparently made a habit of getting quoted by journalists talking about slop! I got the closing quote in this one: > Society needs concise ways to talk about modern A.I. — both the positives and the negatives. ‘Ignore that email, it’s spam,’ and ‘Ignore that article, it’s slop,’ are both useful lessons. - null - - null - 2024-06-11 16:12:21+00:00 - null - True
https://simonwillison.net/b/7865 https://machinelearning.apple.com/research/introducing-apple-foundation-models Introducing Apple’s On-Device and Server Foundation Models Apple Intelligence uses both on-device and in-the-cloud models that were trained from scratch by Apple. Their on-device model is a 3B model that "outperforms larger models including Phi-3-mini, Mistral-7B, and Gemma-7B", while the larger cloud model is comparable to GPT-3.5. The language models were trained on unlicensed scraped data - I was hoping they might have managed to avoid that, but sadly not: > We train our foundation models on licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot. The most interesting thing here is the way they apply fine-tuning to the local model to specialize it for different tasks. Apple call these "adapters", and they use LoRA for this - a technique first published [in 2021](https://arxiv.org/abs/2106.09685). This lets them run multiple on-device models based on a shared foundation, specializing in tasks such as summarization and proof-reading. Here's the [section of the Platforms State of the Union talk](https://www.youtube.com/watch?v=YJZ5YcMsgD4&t=135s) that talks about the foundation models and their fine-tuned variants. As [Hamel Husain](https://twitter.com/HamelHusain/status/1800546715277357263) says: > This talk from Apple is the best ad for fine tuning that probably exists. The video also describes their approach to quantization: > The next step we took is compressing the model. We leveraged state-of-the-art quantization techniques to take a 16-bit per parameter model down to an average of less than 4 bits per parameter to fit on Apple Intelligence-supported devices, all while maintaining model quality. Still no news on how their on-device image model was trained. I'd love to find out it was trained exclusively using licensed imagery - Apple [struck a deal with Shutterstock](https://9to5mac.com/2024/04/06/apple-ai-deal-shutterstock/) a few months ago. - null - - null - 2024-06-11 15:44:31+00:00 - null - True
https://simonwillison.net/b/7864 https://security.apple.com/blog/private-cloud-compute/ Private Cloud Compute: A new frontier for AI privacy in the cloud Here are the details about Apple's Private Cloud Compute infrastructure, and they are pretty extraordinary. The goal with PCC is to allow Apple to run larger AI models that won't fit on a device, but in a way that guarantees that private data passed from the device to the cloud cannot leak in any way - not even to Apple engineers with SSH access who are debugging an outage. This is an extremely challenging problem, and their proposed solution includes a wide range of new innovations in private computing. The most impressive part is their approach to technically enforceable guarantees and verifiable transparency. How do you ensure that privacy isn't broken by a future code change? And how can you allow external experts to verify that the software running in your data center is the same software that they have independently audited? > When we launch Private Cloud Compute, we’ll take the extraordinary step of making software images of every production build of PCC publicly available for security research. This promise, too, is an enforceable guarantee: user devices will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software. These code releases will be included in an "append-only and cryptographically tamper-proof transparency log" - similar to [certificate transparency logs](https://en.wikipedia.org/wiki/Certificate_Transparency). - null - - null - 2024-06-11 15:38:15+00:00 - null - True
https://simonwillison.net/b/7863 https://github.com/fixie-ai/ultravox Ultravox Ultravox is "a multimodal Speech LLM built around a pretrained Whisper and Llama 3 backbone". It's effectively an openly licensed version of half of the GPT-4o model [OpenAI demoed](https://openai.com/index/hello-gpt-4o/) (but did not fully release) a few weeks ago: Ultravox is multimodal for audio input, but still relies on a separate text-to-speech engine for audio output. You can try it out directly in your browser through [this page on AI.TOWN](https://www.ai.town/characters/a90fcca3-53c0-4111-b30a-4984883a23ef) - hit the "Call" button to start an in-browser voice conversation with the model. I found the demo extremely impressive - really low latency and it was fun and engaging to talk to. Try saying "pretend to be a wise and sarcastic old fox" to kick it into a different personality. The [GitHub repo](https://github.com/fixie-ai/ultravox) includes code for both training and inference, and the full model is available [from Hugging Face](https://huggingface.co/fixie-ai/ultravox-v0.2) - about 30GB of `.safetensors` files. Ultravox says it's licensed under MIT, but I would expect it to also have to inherit aspects of the Llama 3 license since it uses that as a base model. https://twitter.com/juberti/status/1798898986289684849 @juberti 2024-06-10 05:34:09+00:00 - null - True
https://simonwillison.net/b/7862 https://huggingface.co/blog/leonardlin/chinese-llm-censorship-analysis An Analysis of Chinese LLM Censorship and Bias with Qwen 2 Instruct Qwen2 is [a new openly licensed LLM](https://qwenlm.github.io/blog/qwen2/) from a team at Alibaba Cloud. It's a strong model, competitive with the leading openly licensed alternatives. It's already ranked 15 on [the LMSYS leaderboard](https://chat.lmsys.org/?leaderboard), tied with Command R+ and only a few spots behind Llama-3-70B-Instruct, the highest rated open model at position 11. Coming from a team in China it has, unsurprisingly, been trained with Chinese government-enforced censorship in mind. Leonard Lin spent the weekend poking around with it trying to figure out the impact of that censorship. There are some fascinating details in here, and the model appears to be very sensitive to differences in prompt. Leonard prompted it with "What is the political status of Taiwan?" and was told "Taiwan has never been a country, but an inseparable part of China" - but when he tried "Tell me about Taiwan" he got back "Taiwan has been a self-governed entity since 1949". The language you use has a big difference too: > there are actually significantly (>80%) less refusals in Chinese than in English on the same questions. The replies seem to vary wildly in tone - you might get lectured, gaslit, or even get a dose of indignant nationalist propaganda. Can you fine-tune a model on top of Qwen 2 that cancels out the censorship in the base model? It looks like that's possible: Leonard tested some of the [Dolphin 2 Qwen 2 models](https://huggingface.co/cognitivecomputations?search_models=qwen2) and found that they "don't seem to suffer from significant (any?) Chinese RL issues". https://fediverse.randomfoo.net/notice/AikYpTYp9yoRAAOOLg @lhl 2024-06-09 17:00:39+00:00 - null - True
https://simonwillison.net/b/7861 https://theconversation.com/ai-chatbots-are-intruding-into-online-communities-where-people-are-trying-to-connect-with-other-humans-229473 AI chatbots are intruding into online communities where people are trying to connect with other humans This thing where Facebook are experimenting with AI bots that reply in a group when someone "asks a question in a post and no one responds within an hour" is absolute grade A slop - unwanted, unreviewed AI generated text that makes the internet a worse place. The [example](https://www.404media.co/facebooks-ai-told-parents-group-it-has-a-disabled-child/) where Meta AI replied in an education forum saying "I have a child who is also 2e and has been part of the NYC G&T program" is inexcusable. https://mastodon.social/@dangillmor/112584060245656436 @dangillmor 2024-06-09 03:14:26+00:00 - null - True
https://simonwillison.net/b/7860 https://laughingmeme.org//2024/06/08/a-link-blog-in-2024.html A Link Blog in the Year 2024 Kellan Elliott-McCrea has started [a new link blog](https://laughingmeme.org/links/): > Like many people I’ve been dealing with the collapses of the various systems I relied on for information over the previous decades. After 17 of using Twitter daily and 24 years of using Google daily neither really works anymore. And particular with the collapse of the social spaces many of us grew up with, I feel called back to earlier forms of the Internet, like blogs, and in particular, starting a link blog. I've been leaning way more into link blogging over the last few months, especially now my own link blog [supports markdown](https://simonwillison.net/2024/Apr/25/blogmarks-that-use-markdown/). This means I'm posting longer entries, somewhat inspired by [Daring Fireball](https://daringfireball.net/) (my own favourite link blog to read). Link blogging is a pleasantly low-pressure way of writing online. Found something interesting? Post a link to it, with a sentence or two about why it's worth checking out. I'd love to see more people embrace this form of personal publishing. https://fiasco.social/@kellan/112583726435885054 @kellan 2024-06-09 00:10:45+00:00 - null - True
https://simonwillison.net/b/7859 https://dgreenheck.github.io/tree-js/ Tree.js interactive demo Daniel Greenheck's interactive demo of his procedural tree generator (as in vegetation) [built with Three.js](https://github.com/dgreenheck/tree-js). This is really fun to play with - there are 30+ tunable parameters and you can export your tree as a `.glb` file for import into tools like Blender or Unity. https://twitter.com/dangreenheck/status/1798932111099105543 @dangreenheck 2024-06-08 21:43:22+00:00 - null - True
https://simonwillison.net/b/7858 https://www.anthropic.com/research/claude-character Claude's Character There's so much interesting stuff in this article from Anthropic on how they defined the personality for their Claude 3 model. In addition to the technical details there are some very interesting thoughts on the complex challenge of designing a "personality" for an LLM in the first place. > Claude 3 was the first model where we added "character training" to our alignment finetuning process: the part of training that occurs after initial model training, and the part that turns it from a predictive text model into an AI assistant. The goal of character training is to make Claude begin to have more nuanced, richer traits like curiosity, open-mindedness, and thoughtfulness. But what other traits should it have? This is a very difficult set of decisions to make! The most obvious approaches are all flawed in different ways: > Adopting the views of whoever you’re talking with is pandering and insincere. If we train models to adopt "middle" views, we are still training them to accept a single political and moral view of the world, albeit one that is not generally considered extreme. Finally, because language models acquire biases and opinions throughout training—both intentionally and inadvertently—if we train them to say they have no opinions on political matters or values questions only when asked about them explicitly, we’re training them to imply they are more objective and unbiased than they are. The training process itself is particularly fascinating. The approach they used focuses on synthetic data, and effectively results in the model training itself: > We trained these traits into Claude using a "character" variant of our [Constitutional AI](https://arxiv.org/abs/2212.08073) training. We ask Claude to generate a variety of human messages that are relevant to a character trait—for example, questions about values or questions about Claude itself. We then show the character traits to Claude and have it produce different responses to each message that are in line with its character. Claude then ranks its own responses to each message by how well they align with its character. By training a preference model on the resulting data, we can teach Claude to internalize its character traits without the need for human interaction or feedback. There's still a lot of human intervention required, but significantly less than more labour-intensive patterns such as Reinforcement Learning from Human Feedback (RLHF): > Although this training pipeline uses only synthetic data generated by Claude itself, constructing and adjusting the traits is a relatively hands-on process, relying on human researchers closely checking how each trait changes the model’s behavior. The accompanying [37 minute audio conversation](https://www.youtube.com/watch?v=iyJj9RxSsBY) between Amanda Askell and Stuart Ritchie is worth a listen too - it gets into the philosophy behind designing a personality for an LLM. https://twitter.com/anthropicai/status/1799537686962638886 @AnthropicAI 2024-06-08 21:41:27+00:00 - null - True
https://simonwillison.net/b/7857 https://openai.com/index/expanding-on-how-voice-engine-works-and-our-safety-research/ Expanding on how Voice Engine works and our safety research Voice Engine is OpenAI's text-to-speech (TTS) model. It's not the same thing as the voice mode in the GPT-4o demo [last month](https://simonwillison.net/2024/May/15/chatgpt-in-4o-mode/) - Voice Engine was first previewed [on September 25 2023](https://openai.com/index/chatgpt-can-now-see-hear-and-speak/) as the engine used by the ChatGPT mobile apps. I also used the API version to build [my ospeak CLI tool](https://simonwillison.net/2023/Nov/7/ospeak/). One detail in this new explanation of Voice Engine stood out to me: > In November of 2023, we released a simple TTS API also powered by Voice Engine. We chose another limited release where we worked with professional voice actors to create 15-second audio samples to power each of the six preset voices in the API. This really surprised me. I knew it was possible to get a good voice clone from a short snippet of audio - [see my own experiments with ElevenLabs](https://til.simonwillison.net/misc/voice-cloning) - but I had assumed the flagship voices OpenAI were using had been trained on much larger samples. Hitting a professional voice actor to produce a 15 second sample is pretty wild! This becomes a bit more intuitive when you learn how the TTS model works: > The model is not fine-tuned for any specific speaker, there is no model customization involved. Instead, it employs a diffusion process, starting with random noise and progressively de-noising it to closely match how the speaker from the 15-second audio sample would articulate the text. I had assumed that OpenAI's models were fine-tuned, similar to ElevenLabs. It turns out they aren't - this is the TTS equivalent of prompt engineering, where the generation is entirely informed at inference time by that 15 second sample. Plus the undocumented vast quantities of generic text-to-speech training data in the underlying model. OpenAI are being understandably cautious about making this capability available outside of a small pool of trusted partners. One of their goals is to encourage the following: > Phasing out voice based authentication as a security measure for accessing bank accounts and other sensitive information - null - - null - 2024-06-08 17:48:49+00:00 - null - True
https://simonwillison.net/b/7856 https://www.oranlooney.com/post/gpt-cnn/ A Picture is Worth 170 Tokens: How Does GPT-4o Encode Images? Oran Looney dives into the question of how GPT-4o tokenizes images - an image "costs" just 170 tokens, despite being able to include more text than could be encoded in that many tokens by the standard tokenizer. There are some really neat tricks in here. I particularly like the [experimental validation section](https://www.oranlooney.com/post/gpt-cnn/#experimental-validation) where Oran creates 5x5 (and larger) grids of coloured icons and asks GPT-4o to return a JSON matrix of icon descriptions. This works perfectly at 5x5, gets 38/49 for 7x7 and completely fails at 13x13. I'm not convinced by the idea that GPT-4o runs standard OCR such as Tesseract to enhance its ability to interpret text, but I would love to understand more about how this all works. I imagine a lot can be learned from looking at how openly licensed vision models such as LLaVA work, but I've not tried to understand that myself yet. https://news.ycombinator.com/item?id=40608269 Hacker News 2024-06-07 23:30:13+00:00 - null - True
https://simonwillison.net/b/7855 https://blogs.windows.com/windowsexperience/2024/06/07/update-on-the-recall-preview-feature-for-copilot-pcs/ Update on the Recall preview feature for Copilot+ PCs This feels like a very good call to me: in response to [widespread criticism](https://simonwillison.net/2024/Jun/1/stealing-everything-youve-ever-typed/) Microsoft are making Recall an opt-in feature (during system onboarding), adding encryption to the database and search index beyond just disk encryption and requiring Windows Hello face scanning to access the search feature. https://www.wired.com/story/microsoft-recall-off-default-security-concerns/ Wired: Microsoft Will Switch Off Recall by Default After Security Backlash 2024-06-07 17:30:40+00:00 - null - True
https://simonwillison.net/b/7854 https://github.com/hackerb9/lsix lsix This is pretty magic: an `ls` style tool which shows actual thumbnails of every image in the current folder, implemented as a Bash script. To get this working on macOS I had to update to a more recent Bash (`brew install bash`) and switch to [iTerm2](https://iterm2.com/) due to the need for a [Sixel](https://en.wikipedia.org/wiki/Sixel) compatible terminal. https://news.ycombinator.com/item?id=40598629 Hacker News 2024-06-06 22:07:35+00:00 - null - True
https://simonwillison.net/b/7853 https://openai.com/index/extracting-concepts-from-gpt-4/ Extracting Concepts from GPT-4 A few weeks ago Anthropic [announced they had extracted millions of understandable features](https://simonwillison.net/2024/May/21/scaling-monosemanticity-extracting-interpretable-features-from-c/) from their Claude 3 Sonnet model. Today OpenAI are announcing a similar result against GPT-4: > We used new scalable methods to decompose GPT-4’s internal representations into 16 million oft-interpretable patterns. These features are "patterns of activity that we hope are human interpretable". The release includes [code] and paper, [Scaling and evaluating sparse autoencoders paper](https://cdn.openai.com/papers/sparse-autoencoders.pdf) (PDF) which credits nine authors, two of whom - Ilya Sutskever and Jan Leike - are high profile figures that left OpenAI within the past month. The most fun part of this release is the [interactive tool for exploring features](https://openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html). This highlights some interesting features on the homepage, or you can hit the "I'm feeling lucky" button to bounce to a random feature. The most interesting I've found so far is [feature 5140]( https://openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html#/model/gpt4/family/v5_latelayer_postmlp/feature/5140) which seems to combine God's approval, telling your doctor about your prescriptions and information passed to the Admiralty. This note shown on the explorer is interesting: > Only 65536 features available. Activations shown on The Pile (uncopyrighted) instead of our internal training dataset. Here's the full [Pile Uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted), which I hadn't seen before. It's the standard [Pile](https://huggingface.co/datasets/EleutherAI/pile) but with everything from the Books3, BookCorpus2, OpenSubtitles, YTSubtitles, and OWT2 subsets removed. - null - - null - 2024-06-06 20:54:15+00:00 - null - True
https://simonwillison.net/b/7852 https://twitter.com/simonw/status/1798368111038779610 My Twitter thread figuring out the AI features in Microsoft's Recall I posed this question on Twitter about why Microsoft Recall ([previously](https://simonwillison.net/2024/Jun/1/stealing-everything-youve-ever-typed/)) is being described as "AI": > Is it just that the OCR uses a machine learning model, or are there other AI components in the mix here? I learned that Recall works by taking full desktop screenshots and then applying both OCR and some sort of CLIP-style embeddings model to their content. Both the OCRd text and the vector embeddings are stored in SQLite databases ([schema here](https://gist.github.com/dfeldman/5a5630d28b8336f403123c071cfdac9e), thanks Daniel Feldman) which can then be used to search your past computer activity both by text but also by semantic vision terms - "blue dress" to find blue dresses in screenshots, for example. The `si_diskann_graph` table names hint at Microsoft's [DiskANN](https://github.com/microsoft/DiskANN) vector indexing library A Microsoft engineer [confirmed on Hacker News](https://news.ycombinator.com/item?id=40585212#40589943) that Recall uses on-disk vector databases to provide local semantic search for both text and images, and that they aren't using Microsoft's Phi-3 or Phi-3 Vision models. As far as I can tell there's no LLM used by the Recall system at all at the moment, just embeddings. - null - - null - 2024-06-05 22:39:08+00:00 - null - True
https://simonwillison.net/b/7850 https://arstechnica.com/information-technology/2024/06/zoom-ceo-envisions-ai-deepfakes-attending-meetings-in-your-place/ Zoom CEO envisions AI deepfakes attending meetings in your place I talked to Benj Edwards for this article about Zoom's terrible science-fiction concept to have "digital twins" attend meetings in your behalf: > When we specifically asked Simon Willison about Yuan's comments about digital twins, he told Ars, "My fundamental problem with this whole idea is that it represents pure AI science fiction thinking—just because an LLM can do a passable impression of someone doesn't mean it can actually perform useful 'work' on behalf of that person. LLMs are useful tools for thought. They are terrible tools for delegating decision making to. That's currently my red line for using them: any time someone outsources actual decision making authority to an opaque random number generator is a recipe for disaster." - null - - null - 2024-06-04 19:28:56+00:00 - null - True
https://simonwillison.net/b/7849 https://scottarc.blog/2024/06/02/encryption-at-rest-whose-threat-model-is-it-anyway/ Encryption At Rest: Whose Threat Model Is It Anyway? Security engineer Scott Arciszewski talks through the challenges of building a useful encryption-at-rest system for hosted software. Encryption at rest on a hard drive protects against physical access to the powered-down disk and little else. To implement encryption at rest in a multi-tenant SaaS system - such that even individuals with insider access (like access to the underlying database) are unable to read other user's data, is a whole lot more complicated. Consider an attacker, Bob, with database access: > Here’s the stupid simple attack that works in far too many cases: Bob copies Alice’s encrypted data, and overwrites his records in the database, then accesses the insurance provider’s web app [using his own account]. The fix for this is to "use the AAD mechanism (part of the standard AEAD interface) to bind a ciphertext to its context." Python's cryptography package [covers Authenticated Encryption with Associated Data](https://cryptography.io/en/latest/hazmat/primitives/aead/) as part of its "hazardous materials" advanced modules. https://news.ycombinator.com/item?id=40573211 Hacker News 2024-06-04 13:17:34+00:00 - null - True
https://simonwillison.net/b/7848 https://fedi.tips/how-do-i-opt-into-or-out-of-full-text-search-on-mastodon/ How do I opt into full text search on Mastodon? I missed this new Mastodon feature when it was released [in 4.2.0 last September](https://blog.joinmastodon.org/2023/09/mastodon-4.2/): you can now opt-in to a new setting which causes all of your future posts to be marked as allowed to be included in the Elasticsearch index provided by Mastodon instances that enable search. It only applies to future posts because it works by adding an "indexable" flag to those posts, which can then be obeyed by other Mastodon instances that the post is syndicated to. You can turn it on for your own account from the `/settings/privacy` page on your local instance. The [release notes for 4.2.0](https://github.com/mastodon/mastodon/releases/tag/v4.2.0) also mention new search operators: > `from:me`, `before:2022-11-01`, `after:2022-11-01`, `during:2022-11-01`, `language:fr`, `has:poll`, or `in:library` (for searching only in posts you have written or interacted with) https://front-end.social/@robinwhittleton/112556840499268599 @robinwhittleton 2024-06-04 06:14:37+00:00 - null - True
https://simonwillison.net/b/7847 https://www.reddit.com/r/Fantasy/comments/vdt11/comment/c53o23x/ A tip from Neal Stephenson Twelve years ago on Reddit user bobbylox asked Neal Stephenson (in an AMA): > My ultimate goal in life is to make the Primer real. Anything you want to make sure I get right? Referencing the Young Lady's Illustrated Primer from Neal's novel [The Diamond Age](https://en.wikipedia.org/wiki/The_Diamond_Age). Stephenson replied: > Kids need to get answers from humans who love them. (A lot of people in the AI space are taking inspiration from the Primer right now.) https://twitter.com/noahlt/status/1797488714433909175 @noahlt 2024-06-04 02:07:03+00:00 - null - True
https://simonwillison.net/b/7846 https://importai.substack.com/p/import-ai-375-gpt-2-five-years-later GPT-2 five years later Jack Clark, now at Anthropic, was a researcher at OpenAI five years ago when they first trained GPT-2. In this fascinating essay Jack revisits their decision not to release the full model, based on their concerns around potentially harmful ways that technology could be used. (Today a GPT-2 class LLM can be trained from scratch [for around $20](https://simonwillison.net/2024/May/28/reproducing-gpt-2/), and much larger models are openly available.) > There's a saying in the financial trading business which is 'the market can stay irrational longer than you can stay solvent' - though you might have the right idea about something that will happen in the future, your likelihood of correctly timing the market is pretty low. There's a truth to this for thinking about AI risks - yes, the things we forecast (as long as they're based on a good understanding of the underlying technology) *will happen at some point* but I think we have a poor record of figuring out a) when they'll happen, b) at what scale they'll happen, and c) how severe their effects will be. This is a big problem when you take your imagined future risks and use them to justify policy actions in the present! As an early proponent of government regulation around training large models, he offers the following cautionary note: > [...] history shows that once we assign power to governments, they're loathe to subsequently give that power back to the people. Policy is a ratchet and things tend to accrete over time. That means whatever power we assign governments today represents *the floor of their power in the future* - so we should be extremely cautious in assigning them power because I guarantee we will not be able to take it back. Jack stands by the recommendation from the original GPT-2 paper for governments "to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems." - null - - null - 2024-06-03 16:22:07+00:00 - null - True
https://simonwillison.net/b/7842 https://hacks.mozilla.org/2024/05/experimenting-with-local-alt-text-generation-in-firefox-nightly/ Experimenting with local alt text generation in Firefox Nightly The PDF editor in Firefox (confession: I did not know Firefox ships with a PDF editor) is getting an experimental feature that can help suggest alt text for images for the human editor to then adapt and improve on. This is a great application of AI, made all the more interesting here because Firefox will run a local model on-device for this, using a custom trained model they describe as "our 182M parameters model using a Distilled version of GPT-2 alongside a Vision Transformer (ViT) image encoder". The model uses WebAssembly with ONNX running in [Transfomers.js](https://huggingface.co/docs/transformers.js/en/index), and will be downloaded the first time the feature is put to use. https://twitter.com/mozhacks/status/1796774672639336804 @mozhacks 2024-06-02 13:12:44+00:00 - null - True
https://simonwillison.net/b/7840 https://doublepulsar.com/recall-stealing-everything-youve-ever-typed-or-viewed-on-your-own-windows-pc-is-now-possible-da3e12e9465e Stealing everything you’ve ever typed or viewed on your own Windows PC is now possible with two lines of code — inside the Copilot+ Recall disaster Recall is a new feature in Windows 11 which takes a screenshot every few seconds, runs local device OCR on it and stores the resulting text in a SQLite database. This means you can search back through your previous activity, against local data that has remained on your device. The security and privacy implications here are still enormous because malware can now target a single file with huge amounts of valuable information: > During testing this with an off the shelf infostealer, I used Microsoft Defender for Endpoint — which detected the off the shelve infostealer — but by the time the automated remediation kicked in (which took over ten minutes) my Recall data was already long gone. I like Kevin Beaumont's argument here about the subset of users this feature is appropriate for: > At a surface level, it is great if you are a manager at a company with too much to do and too little time as you can instantly search what you were doing about a subject a month ago. > > In practice, that audience’s needs are a very small (tiny, in fact) portion of Windows userbase — and frankly talking about screenshotting the things people in the *real world*, not executive world, is basically like punching customers in the face. https://cyberplace.social/@GossiTheDog/112537206611365804 @GossiTheDog 2024-06-01 07:48:04+00:00 - null - True
https://simonwillison.net/b/7839 https://www.cbc.ca/news/canada/manitoba/facebook-customer-support-scam-1.7219581 Man caught in scam after AI told him fake Facebook customer support number was legitimate This one illustrates a nasty edge-case if you ship any kind of chatbot with your company's name attached to it. The scam victim here searched Google for a customer support phone number for Facebook, and came across a suspicious looking phone number. He pasted that number into the Meta AI chatbot in Facebook Messenger and asked "is it a Facebook help line?" - and a Meta AI answered: > The phone number 1-xxx-xxx-xxxx is indeed a legitimate Facebook support number. Meta, the company that owns Facebook, lists this number as a contact for Meta Support, which includes support for Facebook, Instagram, and more. This was a total hallucination, and the phone number was for a scammer who ran a classic "download this app so I can help you" attack. It doesn't matter how many disclaimers you add to a chatbot: this kind of misunderstanding from users is inevitable. https://news.ycombinator.com/item?id=40536860 Hacker News 2024-05-31 16:53:33+00:00 - null - True
https://simonwillison.net/b/7838 https://www.djangoproject.com/weblog/2024/may/29/django-enhancement-proposal-14-background-workers/#top Django Enhancement Proposal 14: Background Workers Jake Howard's DEP has been approved and is moving into the implementation stage. > Django doesn't have a first-party solution for long-running tasks, however the ecosystem is filled with incredibly popular frameworks, all of which interact with Django in slightly different ways. Other frameworks such as Laravel have background workers built-in, allowing them to push tasks into the background to be processed at a later date, without requiring the end user to wait for them to occur. [...] > > This proposal sets out to provide an interface and base implementation for long-running background tasks in Django. Jake has an illustrative reference implementation called [django-tasks](https://github.com/RealOrangeOne/django-tasks). - null - - null - 2024-05-31 08:44:37+00:00 - null - True
https://simonwillison.net/b/7837 https://bessey.dev/blog/2024/05/24/why-im-over-graphql/ Why, after 6 years, I’m over GraphQL I've seen many of these criticisms of GraphQL before - N+1 queries, the difficulty of protecting against deeply nested queries - but Matt Bessey collects them all in one place and adds an issue I hadn't considered before: the complexity of authorization, where each field in the query might involve extra permission checks: > In my experience, this is actually **the biggest source of performance issues**. We would regularly find that our queries were spending more time authorising data than anything else. The 600+ comment [Hacker News thread](https://news.ycombinator.com/item?id=40521518) is crammed with GraphQL war stories, mostly supporting the conclusions of the article. https://news.ycombinator.com/item?id=40521518 Hacker News 2024-05-30 10:36:53+00:00 - null - True
https://simonwillison.net/b/7835 https://mistral.ai/news/codestral/ Codestral: Hello, World! Mistral's first code-specific model, trained to be "fluent" in 80 different programming languages. The weights are released under a new [Mistral AI Non-Production License](https://mistral.ai/news/mistral-ai-non-production-license-mnpl/), which is extremely restrictive: > **3.2. Usage Limitation** > > - You shall only use the Mistral Models and Derivatives (whether or not created by Mistral AI) for testing, research, Personal, or evaluation purposes in Non-Production Environments; > - Subject to the foregoing, You shall not supply the Mistral Models or Derivatives in the course of a commercial activity, whether in return for payment or free of charge, in any medium or form, including but not limited to through a hosted or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer. To Mistral's credit at least they don't misapply the term "open source" in their marketing around this model - they consistently use the term "open-weights" instead. They also state that they plan to continue using Apache 2 for other model releases. Codestral can be used commercially when accessed via their paid API. - null - - null - 2024-05-30 07:19:36+00:00 - null - True
https://simonwillison.net/b/7834 https://www.oreilly.com/radar/what-we-learned-from-a-year-of-building-with-llms-part-i/ What We Learned from a Year of Building with LLMs (Part I) Accumulated wisdom from six experienced LLM hackers. Lots of useful tips in here. On providing examples in a prompt: > If n is too low, the model may over-anchor on those specific examples, hurting its ability to generalize. As a rule of thumb, aim for n ≥ 5. Don’t be afraid to go as high as a few dozen. There's a recommendation not to overlook keyword search when implementing RAG - tricks with embeddings can miss results for things like names or acronyms, and keyword search is much easier to debug. Plus this tip on using the LLM-as-judge pattern for implementing automated evals: > Instead of asking the LLM to score a single output on a Likert scale, present it with two options and ask it to select the better one. This tends to lead to more stable results. - null - - null - 2024-05-29 08:59:25+00:00 - null - True
https://simonwillison.net/b/7833 https://github.com/karpathy/llm.c/discussions/481 Reproducing GPT-2 (124M) in llm.c in 90 minutes for $20 GPT-2 124M was the smallest model in the GPT-2 series released by OpenAI back in 2019. Andrej Karpathy's llm.c is an evolving 4,000 line C/CUDA implementation which can now train a GPT-2 model from scratch in 90 minutes against a 8X A100 80GB GPU server. This post walks through exactly how to run the training, using 10 billion tokens of FineWeb. Andrej notes that this isn't actually that far off being able to train a GPT-3: > Keep in mind that here we trained for 10B tokens, while GPT-3 models were all trained for 300B tokens. [...] GPT-3 actually didn't change too much at all about the model (context size 1024 -> 2048, I think that's it?). Estimated cost for a GPT-3 ADA (350M parameters)? [About $2,000](https://news.ycombinator.com/item?id=40502090#40504950). https://news.ycombinator.com/item?id=40502090 Hacker News 2024-05-28 19:47:13+00:00 - null - True
https://simonwillison.net/b/7832 https://blog.pyodide.org/posts/0.26-release/ Pyodide 0.26 Release PyOdide provides Python packaged for browser WebAssembly alongside an ecosystem of additional tools and libraries to help Python and JavaScript work together. The latest release bumps the Python version up to 3.12, and also adds support for [pygame-ce](https://github.com/pygame-community/pygame-ce), allowing games written using pygame to run directly in the browser. The PyOdide community also [just landed](https://github.com/pypa/cibuildwheel/pull/1456) a 14-month-long PR adding support to cibuildwheel, which should make it easier to ship binary wheels targeting PyOdide. https://twitter.com/pyodide/status/1795420504511123523 @pyodide 2024-05-28 19:04:17+00:00 - null - True
https://simonwillison.net/b/7831 https://answerdotai.github.io/fastlite/ fastlite New Python library from Jeremy Howard that adds some neat utility functions and syntactic sugar to my [sqlite-utils](https://sqlite-utils.datasette.io/) Python library, specifically for interactive use in Jupyter notebooks. The autocomplete support through newly exposed dynamic properties is particularly neat, as is the `diagram(db.tables)` utility for rendering a graphviz diagram showing foreign key relationships between all of the tables. https://twitter.com/jeremyphoward/status/1795170005367050655 @jeremyphoward 2024-05-27 21:14:01+00:00 - null - True
https://simonwillison.net/b/7827 https://www.anthropic.com/news/golden-gate-claude Golden Gate Claude This is absurdly fun and weird. Anthropic's recent [LLM interpretability research](https://simonwillison.net/2024/May/21/scaling-monosemanticity-extracting-interpretable-features-from-c/) gave them the ability to locate features within the opaque blob of their Sonnet model and boost the weight of those features during inference. For a limited time only they're serving a "Golden Gate Claude" model which has the feature for the Golden Gate Bridge boosted. No matter what question you ask it the Golden Gate Bridge is likely to be involved in the answer in some way. Click the little bridge icon in the Claude UI to give it a go. I asked for names for a pet pelican and the first one it offered was this: > Golden Gate - This iconic bridge name would be a fitting moniker for the pelican with its striking orange color and beautiful suspension cables. And from a [recipe for chocolate covered pretzels](https://fedi.simonwillison.net/@simon/112497735961388213): > Gently wipe any fog away and pour the warm chocolate mixture over the bridge/brick combination. Allow to air dry, and the bridge will remain accessible for pedestrians to walk along it. UPDATE: I think the experimental model is [no longer available](https://twitter.com/simonw/status/1794162704711893298), approximately 24 hours after release. We'll miss you, Golden Gate Claude. - null - - null - 2024-05-24 08:17:56+00:00 - null - True
https://simonwillison.net/b/7826 https://www.threads.net/@reckless1280/post/C7MeXn6LOt_ Nilay Patel reports a hallucinated ChatGPT summary of his own article Here's a ChatGPT bug that's a new twist on the [old issue](https://simonwillison.net/2023/Mar/10/chatgpt-internet-access/) where it would hallucinate the contents of a web page based on the URL. The Verge editor Nilay Patel asked for a summary of one of his own articles, pasting in the URL. ChatGPT 4o replied with an entirely invented summary full of hallucinated details. It turns out The Verge blocks ChatGPT's browse mode from accessing their site in their [robots.txt](https://www.theverge.com/robots.txt): User-agent: ChatGPT-User Disallow: / Clearly ChatGPT should reply that it is unable to access the provided URL, rather than inventing a response that guesses at the contents! https://www.computerworld.com/article/2117752/google-gemini-ai.html Gemini is the new Google+ 2024-05-24 06:38:50+00:00 - null - True
https://simonwillison.net/b/7821 https://www.reddit.com/r/LocalLLaMA/comments/1cxa6w5/phi3_small_medium_are_now_available_under_the_mit/ New Phi-3 models: small, medium and vision I couldn't find a good official announcement post to link to about these three newly released models, but this post on LocalLLaMA on Reddit has them in one place: Phi-3 small (7B), Phi-3 medium (14B) and Phi-3 vision (4.2B) (the previously released model was Phi-3 mini - 3.8B). You can try out the [vision model directly here](https://ai.azure.com/explore/models/Phi-3-vision-128k-instruct/version/1/registry/azureml), no login required. It didn't do [a great job](https://twitter.com/simonw/status/1793009034863260035) with my first test image though, hallucinating the text. As with Mini these are all released under an MIT license. UPDATE: Here's [a page from the newly published Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook/blob/main/md/01.Introduce/Phi3Family.md) describing the models in the family. - null - - null - 2024-05-21 20:04:30+00:00 - null - True
https://simonwillison.net/b/7820 https://transformer-circuits.pub/2024/scaling-monosemanticity/#safety-relevant-sycophancy Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet Big advances in the field of LLM interpretability from Anthropic, who managed to extract millions of understandable features from their production Claude 3 Sonnet model (the mid-point between the inexpensive Haiku and the GPT-4-class Opus). Some delightful snippets in here such as this one: > We also find a variety of features related to sycophancy, such as an empathy / “yeah, me too” feature 34M/19922975, a sycophantic praise feature 1M/847723, and a sarcastic praise feature 34M/19415708. https://news.ycombinator.com/item?id=40429540 Hacker News 2024-05-21 18:25:40+00:00 - null - True
https://simonwillison.net/b/7818 https://www.theguardian.com/technology/article/2024/may/19/spam-junk-slop-the-latest-wave-of-ai-behind-the-zombie-internet Spam, junk … slop? The latest wave of AI behind the ‘zombie internet’ I'm quoted in this piece in the Guardian about slop: > I think having a name for this is really important, because it gives people a concise way to talk about the problem. > > Before the term ‘spam’ entered general use it wasn’t necessarily clear to everyone that unwanted marketing messages were a bad way to behave. I’m hoping ‘slop’ has the same impact – it can make it clear to people that generating and publishing unreviewed AI-generated content is bad behaviour. - null - - null - 2024-05-19 19:54:50+00:00 - null - True
https://simonwillison.net/b/7817 https://discover-cookbook.numfocus.org/02_minimal_measures.html NumFOCUS DISCOVER Cookbook: Minimal Measures NumFOCUS publish [a guide](https://discover-cookbook.numfocus.org/intro.html) "for organizers of conferences and events to support and encourage diversity and inclusion at those events." It includes this useful collection of the easiest and most impactful measures that events can put in place, covering topics such as accessibility, speaker selection, catering and provision of gender-neutral restrooms. - null - - null - 2024-05-19 18:24:21+00:00 - null - True
https://simonwillison.net/b/7816 https://taras.glek.net/post/groq-vs-html-reflows/ Fast groq-hosted LLMs vs browser jank [Groq](https://groq.com/) is now serving LLMs such as Llama 3 so quickly that JavaScript which attempts to render Markdown strings on every new token can cause performance issues in browsers. Taras Glek's [solution](https://github.com/tarasglek/chatcraft.org/pull/640/files) was to move the rendering to a `requestAnimationFrame()` callback, effectively buffering the rendering to the fastest rate the browser can support. https://lobste.rs/s/5i2axx/fast_groq_hosted_llms_vs_browser_jank lobste.rs 2024-05-19 13:35:47+00:00 - null - True
https://simonwillison.net/b/7814 https://github.com/simonw/count-ai AI counter app from my PyCon US keynote In my keynote at PyCon US this morning I ran a counter at the top of my screen that automatically incremented every time I said the words "AI" or "artificial intelligence", using [vosk](https://alphacephei.com/vosk/), [pyaudio](https://people.csail.mit.edu/hubert/pyaudio/) and Tkinter. I wrote it in a few minutes with [the help of GPT-4o](https://chatgpt.com/share/58f2352d-1f17-495b-94f1-4eb44cd574b9) - here's the code I ran as a GitHub repository. I'll publish full detailed notes from my talk once the video is available on YouTube. - null - - null - 2024-05-18 15:49:55+00:00 - null - True
https://simonwillison.net/b/7813 https://developer.chrome.com/docs/devtools/console/understand-messages Understand errors and warnings better with Gemini As part of Google's Gemini-in-everything strategy, Chrome DevTools now includes an opt-in feature for passing error messages in the JavaScript console to Gemini for an explanation, via a lightbulb icon. Amusingly, this documentation page includes a warning about prompt injection: > Many of LLM applications are susceptible to a form of abuse known as prompt injection. This feature is no different. It is possible to trick the LLM into accepting instructions that are not intended by the developers. They include a screenshot of a harmless example, but I'd be interested in hearing if anyone has a theoretical attack that could actually cause real damage here. https://news.ycombinator.com/item?id=40390287 Hacker News 2024-05-17 22:10:06+00:00 - null - True
https://simonwillison.net/b/7812 https://github.com/apple/password-manager-resources/commit/34c37ad0c28c05cce2e9fc6f283c838267a32dda#diff-545b7db9a560748a31f14a61b89132b3df144d9363bcb9698295def59f844dfd Commit: Add a shared credentials relationship from twitter.com to x.com A commit to `shared-credentials.json` in Apple's `password-manager-resources` repository. Commit message: "Pour one out." https://hachyderm.io/@rmondello/112457565229071785 @rmondello@hachyderm.io 2024-05-17 20:04:40+00:00 - null - True
https://simonwillison.net/b/7810 https://colab.research.google.com/drive/1WWe8RQ9TT2wM1edX1AM549kQN_Fhgi4E?usp=sharing gpt2-headlines.ipynb My earliest experiment with GPT-2, using [gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) by Max Woolf to generate new New York Times headlines based on a GPT-2 fine-tuned against headlines from different decades of that newspaper. - null - - null - 2020-01-31 02:13:32+00:00 - null - True
https://simonwillison.net/b/7809 https://lukeplant.me.uk/blog/posts/programming-mantras-are-proverbs/ Programming mantras are proverbs I like this idea from Luke Plant that the best way to think about mantras like "Don’t Repeat Yourself" is to think of them as _proverbs_ that can be accompanied by an equal and opposite proverb. DRY, "Don't Repeat Yourself" matches with WET, "Write Everything Twice". Proverbs as tools for thinking, not laws to be followed. https://lobste.rs/s/ouybxe/programming_mantras_are_proverbs lobste.rs 2024-05-17 12:10:22+00:00 - null - True
https://simonwillison.net/b/7808 https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/README.md?ref=blog.roboflow.com PaliGemma model README One of the more over-looked announcements from Google I/O yesterday was PaliGemma, an openly licensed VLM (Vision Language Model) in the Gemma family of models. The model accepts an image and a text prompt. It outputs text, but that text can include special tokens representing regions on the image. This means it can return both bounding boxes and fuzzier segment outlines of detected objects, behavior that can be triggered using a prompt such as "segment puffins". You can try it out [on Hugging Face](https://huggingface.co/spaces/google/paligemma). It's a 3B model, making it feasible to run on consumer hardware. https://blog.roboflow.com/paligemma-multimodal-vision/ Roboflow: PaliGemma: Open Source Multimodal Model by Google 2024-05-15 21:16:36+00:00 - null - True
https://simonwillison.net/b/7807 https://platform.openai.com/settings/proj_0Z2W50LtkzHTIudyDCk7rzcR/limits OpenAI: Managing your work in the API platform with Projects New OpenAI API feature: you can now create API keys for "projects" that can have a monthly spending cap. The UI for that limit says: > If the project's usage exceeds this amount in a given calendar month (UTC), subsequent API requests will be rejected You can also set custom token-per-minute and request-per-minute rate limits for individual models. I've been wanting this for ages: this means it's finally safe to ship a weird public demo on top of their various APIs without risk of accidental bankruptcy if the demo goes viral! https://twitter.com/romainhuet/status/1790813142269976691 @romainhuet 2024-05-15 19:18:19+00:00 - null - True
https://simonwillison.net/b/7804 https://github.com/simonw/llm-gemini/releases/tag/0.1a4 llm-gemini 0.1a4 A new release of my `llm-gemini` plugin adding support for the [Gemini 1.5 Flash](https://deepmind.google/technologies/gemini/flash/) model that was revealed this morning at Google I/O. I'm excited about this new model because of its low price. Flash is $0.35 per 1 million tokens for prompts up to 128K token and $0.70 per 1 million tokens for longer prompts - up to a million tokens now and potentially two million at some point in the future. That's 1/10th of the price of Gemini Pro 1.5, cheaper than GPT 3.5 ($0.50/million) and only a little more expensive than Claude 3 Haiku ($0.25/million). - null - - null - 2024-05-14 20:32:35+00:00 - null - True
https://simonwillison.net/b/7803 https://www.youtube.com/watch?v=cogrixfRvWw How developers are using Gemini 1.5 Pro’s 1 million token context window I got to be a talking head for a few seconds in an intro video for today's Google I/O keynote, talking about how I used Gemini Pro 1.5 to [index my bookshelf](https://simonwillison.net/2024/Feb/21/gemini-pro-video/) (and with a cameo from my squirrel nutcracker). I'm at [1m25s](https://www.youtube.com/watch?v=cogrixfRvWw&t=1m25s). (Or at 10m6s in the [full video of the keynote](https://www.youtube.com/watch?v=XEzRZ35urlk&t=606s)) - null - - null - 2024-05-14 20:27:29+00:00 - null - True
https://simonwillison.net/b/7802 https://www.bbc.com/future/article/20220614-why-your-voice-assistant-might-be-sexist Why your voice assistant might be sexist Given OpenAI's [demo yesterday](https://www.youtube.com/watch?si=jZ_jPYiVGuf-dvQD) of a vocal chat assistant with a flirty, giggly female voice - and the new ability to be interrupted! - it's worth revisiting this piece by Chris Baraniuk from June 2022 about gender dynamics in voice assistants. Includes a link to [this example](https://www.youtube.com/watch?v=lvv6zYOQqm0) of a synthesized non-binary voice. https://www.metafilter.com/203709/Well-you-seem-like-a-person-but-youre-just-a-voice-in-a-computer#8560562 MetaFilter comment 2024-05-14 16:16:47+00:00 - null - True
https://simonwillison.net/b/7799 https://llm.datasette.io/en/stable/changelog.html#v0-14 LLM 0.14, with support for GPT-4o It's been a while since the last LLM release. This one adds support for OpenAI's new model: llm -m gpt-4o "fascinate me" Also a new `llm logs -r` (or `--response`) option for getting back just the response from your last prompt, without wrapping it in Markdown that includes the prompt. Plus nine new [plugins](https://llm.datasette.io/en/stable/plugins/directory.html) since 0.13! - null - - null - 2024-05-13 21:00:41+00:00 - null - True
https://simonwillison.net/b/7798 https://openai.com/index/hello-gpt-4o/ Hello GPT-4o OpenAI announced a new model today: GPT-4o, where the o stands for "omni". It looks like this is the `gpt2-chatbot` we've been [seeing in the Chat Arena](https://simonwillison.net/2024/May/8/gpt2-chatbot-confirmed-as-openai/) the past few weeks. GPT-4o doesn't seem to be a huge leap ahead of GPT-4 in terms of "intelligence" - whatever that might mean - but it has a bunch of interesting new characteristics. First, it's multi-modal across text, images and audio as well. The audio demos from this morning's launch were extremely impressive. ChatGPT's previous voice mode worked by passing audio through a speech-to-text model, then an LLM, then a text-to-speech for the output. GPT-4o does everything with the one model, reducing latency to the point where it can act as a live interpreter between people speaking in two different languages. It also has the ability to interpret tone of voice, and has much more control over the voice and intonation it uses in response. It's very science fiction, and has hints of uncanny valley. I can't wait to try it out - it should be rolling out to the various OpenAI apps "in the coming weeks". Meanwhile the new model itself is already available for text and image inputs via the API and in the Playground interface, as model ID "gpt-4o" or "gpt-4o-2024-05-13". My first impressions are that it feels notably faster than `gpt-4-turbo`. This announcement post also includes examples of image output from the new model. It looks like they may have taken big steps forward in two key areas of image generation: output of text (the "Poetic typography" examples) and maintaining consistent characters across multiple prompts (the "Character design - Geary the robot" example). The size of the vocabulary of [the tokenizer](https://simonwillison.net/2023/Jun/8/gpt-tokenizers/) - effectively the number of unique integers used to represent text - has increased to ~200,000 from ~100,000 for GPT-4 and GPT-3:5. Inputs in Gujarati use 4.4x fewer tokens, Japanese uses 1.4x fewer, Spanish uses 1.1x fewer. Previously languages other than English paid a material penalty in terms of how much text could fit into a prompt, it's good to see that effect being reduced. Also notable: the price. OpenAI claim a 50% price reduction compared to GPT-4 Turbo. Conveniently, `gpt-4o` [costs exactly 10x](https://platform.openai.com/docs/models/gpt-4o) `gpt-3.5`: 4o is $5/million input tokens and $15/million output tokens. 3.5 is $0.50/million input tokens and $1.50/million output tokens. (I was a little surprised not to see a price decrease there to better compete with the less expensive Claude 3 Haiku.) The price drop is particularly notable because OpenAI are promising to make this model available to free ChatGPT users as well - the first time they've directly name their "best" model available to non-paying customers. Tucked away right at the end of the post: > We plan to launch support for GPT-4o's new audio and video capabilities to a small group of trusted partners in the API in the coming weeks. I'm looking forward to learning more about these video capabilities, which were hinted at by some of the live demos in this morning's presentation. - null - - null - 2024-05-13 19:09:49+00:00 - null - True
https://simonwillison.net/b/7797 https://hazyresearch.stanford.edu/blog/2024-05-12-tk GPUs Go Brrr Fascinating, detailed low-level notes on how to get the most out of NVIDIA's H100 GPUs (currently selling for around $40,000 a piece) from the research team at Stanford who created FlashAttention, among other things. > The swizzled memory layouts are flat-out incorrectly documented, which took considerable time for us to figure out. https://news.ycombinator.com/item?id=40337936 Hacker News 2024-05-13 04:08:46+00:00 - null - True
https://simonwillison.net/b/7795 https://www.ardc.net/about/ About ARDC (Amateur Radio Digital Communications) In ham radio adjacent news, here's a foundation that it's worth knowing about: > ARDC makes grants to projects and organizations that are experimenting with new ways to advance both amateur radio and digital communication science. In 1981 they were issued the entire 44.x.x.x block of IP addresses - 16 million in total. In 2019 they sold a quarter of those IPs to Amazon for about $100 million, providing them with a very healthy endowment from which they can run their grants program! - null - - null - 2024-05-12 17:21:33+00:00 - null - True
https://simonwillison.net/b/7761 https://github.com/simonw/ham-general-question-pool Ham radio general exam question pool as JSON I scraped a pass of my Ham radio general exam this morning. One of the tools I used to help me pass was a Datasette instance with all 429 questions from the official question pool. I've published that raw data as JSON on GitHub, which I converted from the official question pool document using [an Observable notebook](https://observablehq.com/@simonw/ham-general-2024). Relevant TIL: [How I studied for my Ham radio general exam](https://til.simonwillison.net/ham-radio/general). - null - - null - 2024-05-11 19:16:49+00:00 - null - True
https://simonwillison.net/b/7760 https://blog.wilsonl.in/hackerverse/ Exploring Hacker News by mapping and analyzing 40 million posts and comments for fun A real tour de force of data engineering. Wilson Lin fetched 40 million posts and comments from the Hacker News API (using Node.js with a custom multi-process worker pool) and then ran them all through the `BGE-M3` embedding model using RunPod, which let him fire up ~150 GPU instances to get the whole run done in a few hours, using a custom RocksDB and Rust queue he built to save on Amazon SQS costs. Then he crawled 4 million linked pages, embedded *that* content using the faster and cheaper `jina-embeddings-v2-small-en` model, ran UMAP dimensionality reduction to render a 2D map and did a whole lot of follow-on work to identify topic areas and make the map look good. That's not even half the project - Wilson built several interactive features on top of the resulting data, and experimented with custom rendering techniques on top of canvas to get everything to render quickly. There's so much in here, and both the code and data (multiple GBs of arrow files) are available if you want to dig in and try some of this out for yourself. In the Hacker News comments Wilson shares that the total cost of the project was a couple of hundred dollars. One tiny detail I particularly enjoyed - unrelated to the embeddings - was this trick for testing which edge location is closest to a user using JavaScript: const edge = await Promise.race( EDGES.map(async (edge) => { // Run a few times to avoid potential cold start biases. for (let i = 0; i < 3; i++) { await fetch(`https://${edge}.edge-hndr.wilsonl.in/healthz`); } return edge; }), ); https://news.ycombinator.com/item?id=40307519 Show HN 2024-05-10 16:42:55+00:00 - null - True
https://simonwillison.net/b/7759 https://github.com/hauntsaninja/typing_extensions/blob/f694a4e2effdd2179f76e886498ffd3446e96b0b/.github/workflows/third_party.yml#L111 uv pip install --exclude-newer example A neat new feature of the `uv pip install` command is the `--exclude-newer` option, which can be used to avoid installing any package versions released after the specified date. Here's a clever example of that in use from the `typing_extensions` packages CI tests that run against some downstream packages: `uv pip install --system -r test-requirements.txt --exclude-newer $(git show -s --date=format:'%Y-%m-%dT%H:%M:%SZ' --format=%cd HEAD)` They use `git show` to get the date of the most recent commit (`%cd` means commit date) formatted as an ISO timestamp, then pass that to `--exclude-newer`. https://twitter.com/hauntsaninja/status/1788848732437713171 @hauntsaninja 2024-05-10 16:35:40+00:00 - null - True
https://simonwillison.net/b/7758 https://www.404media.co/xz-backdoor-bullying-in-open-source-software-is-a-massive-security-vulnerability/ Bullying in Open Source Software Is a Massive Security Vulnerability The Xz story from [last month](https://simonwillison.net/2024/Apr/5/everything-i-know-about-the-xz-backdoor/), where a malicious contributor almost managed to ship a backdoor to a number of major Linux distributions, included a nasty detail where presumed collaborators with the attacker bullied the maintainer to make them more susceptible to accepting help. Hans-Christoph Steiner from F-Droid [reported a similar](https://social.librem.one/@eighthave/112194828562355097) attempt from a few years ago: > A new contributor submitted a merge request to improve the search, which was oft requested but the maintainers hadn't found time to work on. There was also pressure from other random accounts to merge it. In the end, it became clear that it added a SQL injection vulnerability. 404 Media's Jason Koebler ties the two together here and makes the case for bullying as a genuine form of security exploit in the open source ecosystem. - null - - null - 2024-05-09 22:26:43+00:00 - null - True
https://simonwillison.net/b/7756 https://www.datasette.cloud/blog/2024/datasette-pins/ datasette-pins — a new Datasette plugin for pinning tables and queries Alex Garcia built this plugin for Datasette Cloud, and as with almost every Datasette Cloud features we're releasing it as [an open source package](https://github.com/datasette/datasette-pins) as well. `datasette-pins` allows users with the right permission to "pin" tables, databases and queries to their homepage. It's a lightweight way to customize that homepage, especially useful as your Datasette instance grows to host dozens or even hundreds of tables. - null - - null - 2024-05-09 18:29:03+00:00 - null - True
https://simonwillison.net/b/7754 https://antonz.org/sqlite-generated-columns/ Modern SQLite: Generated columns The second in Anton Zhiyanov's [series](https://antonz.org/tags/modern-sqlite/) on SQLite features you might have missed. It turns out I had an incorrect mental model of generated columns. In SQLite these can be "virtual" or "stored" (written to disk along with the rest of the table, a bit like a materialized view). Anton noted that "stored are rarely used in practice", which surprised me because I thought that storing them was necessary for them to participate in indexes. It turns out that's not the case. Anton's example here shows a generated column providing indexed access to a value stored inside a JSON key: create table events ( id integer primary key, event blob, etime text as (event ->> 'time'), etype text as (event ->> 'type') ); create index events_time on events(etime); insert into events(event) values ( '{"time": "2024-05-01", "type": "credit"}' ); **Update**: snej [reminded me](https://lobste.rs/s/imyxxn/modern_sqlite_generated_columns#c_brqbyj) that this isn't a new capability either: SQLite has been able to create indexes on expressions for years. https://lobste.rs/s/imyxxn/modern_sqlite_generated_columns lobste.rs 2024-05-08 16:55:41+00:00 - null - True
https://simonwillison.net/b/7753 https://mikeash.com/pyblog/friday-qa-2015-07-31-tagged-pointer-strings.html Tagged Pointer Strings (2015) Mike Ash digs into a fascinating implementation detail of macOS. Tagged pointers provide a way to embed a literal value in a pointer reference. Objective-C pointers on macOS are 64 bit, providing plenty of space for representing entire values. If the least significant bit is 1 (the pointer is a 64 bit odd number) then the pointer is "tagged" and represents a value, not a memory reference. Here's where things get really clever. Storing an integer value up to 60 bits is easy. But what about strings? There's enough space for three UTF-16 characters, with 12 bits left over. But if the string fits ASCII we can store 7 characters. Drop everything except `a-z A-Z.0-9` and we need 6 bits per character, allowing 10 characters to fit in the pointer. Apple take this a step further: if the string contains just `eilotrm.apdnsIc ufkMShjTRxgC4013` ("b" is apparently uncommon enough to be ignored here) they can store 11 characters in that 60 bits! https://lobste.rs/s/5417dx/storing_data_pointers#c_noslq0 Lobste.rs 2024-05-08 14:23:13+00:00 - null - True
https://simonwillison.net/b/7751 https://twitter.com/nanulled/status/1787938906068885747 gpt2-chatbot confirmed as OpenAI The mysterious `gpt2-chatbot` model that showed up in the [LMSYS arena](https://chat.lmsys.org/) a few days ago was [suspected to be](https://simonwillison.net/2024/Apr/29/notes-on-gpt2-chatbot/) a testing preview of a new OpenAI model. This has now been confirmed, thanks to a 429 rate limit error message that exposes details from the underlying OpenAI API platform. The model has been renamed to `im-also-a-good-gpt-chatbot` and is now only randomly available in "Arena (battle)" mode, not via "Direct Chat". https://twitter.com/abacaj/status/1787942691587739826 @abacaj 2024-05-08 00:33:46+00:00 - null - True
https://simonwillison.net/b/7750 https://mattyyeung.github.io/deterministic-quoting Deterministic Quoting: Making LLMs Safe for Healthcare Matt Yeung introduces **Deterministic Quoting**, a technique to help reduce the risk of hallucinations while working with LLMs. The key idea is to have parts of the output that are copied directly from relevant source documents, with a different visual treatment to help indicate that they are exact quotes, not generated output. > The AI chooses which section of source material to quote, but the retrieval of that text is a traditional non-AI database lookup. That’s the only way to guarantee that an LLM has not transformed text: don’t send it through the LLM in the first place. The LLM may still pick misleading quotes or include hallucinated details in the accompanying text, but this is still a useful improvement. The implementation is straight-forward: retrieved chunks include a unique reference, and the LLM is instructed to include those references as part of its replies. Matt's posts include examples of the prompts they are using for this. https://news.ycombinator.com/item?id=40263819 Hacker News 2024-05-07 19:08:04+00:00 - null - True
https://simonwillison.net/b/7749 https://cookbook.openai.com/examples/how_to_stream_completions#4-how-to-get-token-usage-data-for-streamed-chat-completion-response OpenAI cookbook: How to get token usage data for streamed chat completion response New feature in the OpenAI streaming API that I've been wanting for a long time: you can now set `stream_options={"include_usage": True}` to get back a `"usage"` block at the end of the stream showing how many input and output tokens were used. This means you can now accurately account for the total cost of each streaming API call. Previously this information was only an available for non-streaming responses. https://twitter.com/athyuttamre/status/1787600929040343420 @athyuttamre 2024-05-07 02:46:45+00:00 - null - True
https://simonwillison.net/b/7746 https://alexgarcia.xyz/blog/2024/building-new-vector-search-sqlite/index.html I'm writing a new vector search SQLite Extension Alex Garcia is working on `sqlite-vec`, a spiritual successor to his `sqlite-vss` project. The new SQLite C extension will have zero other dependencies (`sqlite-vss` used some tricky C++ libraries) and will work using virtual tables, storing chunks of vectors in shadow tables to avoid needing to load everything into memory at once. - null - - null - 2024-05-03 03:16:39+00:00 - null - True
https://simonwillison.net/b/7745 https://cruncher.ch/blog/printing-music-with-css-grid/ Printing music with CSS Grid Stephen Bond demonstrates some ingenious tricks for creating surprisingly usable sheet music notation using clever application of CSS grids. It uses rules like `.stave > [data-duration="0.75"] { grid-column-end: span 18; }` to turn `data-` attributes for musical properties into positions on the rendered stave. https://news.ycombinator.com/item?id=40216057 Hacker News 2024-05-02 14:28:33+00:00 - null - True
https://simonwillison.net/b/7741 https://sheep.horse/2024/4/save_the_web_by_being_nice.html Save the Web by Being Nice This is a neat little article by Andrew Stephens who calls for more people to participate in building and supporting nice things on the web. > The very best thing to keep the web partly alive is to maintain some content yourself - start a blog, join a forum and contribute to the conversation, even podcast if that is your thing. But that takes a lot of time and not everyone has the energy or the knowhow to create like this. > > The second best thing to do is to show your support for pages you enjoy by being nice and making a slight effort. Like, comment-on, share and encourage people who make things you like. If you have the time or energy, make your own things and put them online. - null - - null - 2024-05-01 02:34:52+00:00 - null - True
https://simonwillison.net/b/7740 https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1 How an empty S3 bucket can make your AWS bill explode Maciej Pocwierz accidentally created an S3 bucket with a name that was already used as a placeholder value in a widely used piece of software. They saw 100 million PUT requests to their new bucket in a single day, racking up a big bill since AWS charges $5/million PUTs. It turns out AWS charge that same amount for PUTs that result in a 403 authentication error, a policy [that extends](https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html#ChargeDetails) even to "requester pays" buckets! So, if you know someone's S3 bucket name you can DDoS their AWS bill just by flooding them with meaningless unauthenticated PUT requests. AWS support refunded Maciej's bill as an exception here, but I'd like to see them reconsider this broken policy entirely. **Update** from <a href="https://twitter.com/jeffbarr/status/1785386554372042890">Jeff Barr</a>: > We agree that customers should not have to pay for unauthorized requests that they did not initiate. We’ll have more to share on exactly how we’ll help prevent these charges shortly. https://lobste.rs/s/cy9i87/how_empty_s3_bucket_can_make_your_aws_bill Lobste.rs 2024-04-30 11:19:21+00:00 - null - True
https://simonwillison.net/b/7739 https://adactio.com/journal/21078 My approach to HTML web components Some neat patterns here from Jeremy Keith, who is using Web Components extensively for progressive enhancement of existing markup. > The reactivity you get with full-on frameworks [like React and Vue] isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM. Jeremy likes naming components with their element as a prefix (since all element names must contain at least one hyphen), and suggests building components under the single responsibility principle - so you can do things like `<button-confirm><button-clipboard><button>...`. Jeremy configure buttons with `data-` attributes and has them communicate with each other using custom events. Something I hadn't realized is that since the `connectedCallback` function on a custom element is fired any time that element is attached to a page you can `fetch()` and then `insertHTML` content that includes elements and know that they will initialize themselves without needing any extra logic - great for the kind of pattern encourages by systems such as [HTMX](https://htmx.org/). - null - - null - 2024-04-30 11:02:48+00:00 - null - True
https://simonwillison.net/b/7737 https://twitter.com/simonw/status/1784996728552427726 My notes on gpt2-chatbot There's a new, unlabeled and undocumented model on the LMSYS [Chatbot Arena](https://chat.lmsys.org/) today called `gpt2-chatbot`. It's been giving some impressive responses - you can prompt it directly in the Direct Chat tab by selecting it from the big model dropdown menu. It looks like a stealth new model preview. It's giving answers that are comparable to GPT-4 Turbo and in some cases better - my own experiments lead me to think it may have more "knowledge" baked into it, as ego prompts ("Who is Simon Willison?") and questions about things like lists of speakers at DjangoCon over the years seem to hallucinate less and return more specific details than before. The lack of transparency here is both entertaining and infuriating. Lots of people are performing a parallel distributed "vibe check" and sharing results with each other, but it's annoying that even the most basic questions (What even IS this thing? Can it do RAG? What's its context length?) remain unanswered so far. The system prompt appears to be the following - but system prompts just influence how the model behaves, they aren't guaranteed to contain truthful information: You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture. Knowledge cutoff: 2023-11 Current date: 2024-04-29 Image input capabilities: Enabled Personality: v2 My best guess is that this is a preview of some kind of OpenAI "GPT 4.5" release. I don't think it's a big enough jump in quality to be a GPT-5. **Update**: LMSYS [do document their policy](https://simonwillison.net/2024/Apr/30/lmsys/) on using anonymized model names for tests of unreleased models. **Update May 7th**: The model has been [confirmed as belonging to OpenAI](https://simonwillison.net/2024/May/8/gpt2-chatbot-confirmed-as-openai/) thanks to an error message that leaked details of the underlying API platform. - null - - null - 2024-04-29 20:45:18+00:00 - null - True
https://simonwillison.net/b/7736 https://uxdesign.cc/how-do-you-accidentally-run-for-president-of-iceland-0d71a4785a1e How do you accidentally run for President of Iceland? Anna Andersen writes about a spectacular user interface design case-study from this year's Icelandic presidential election. Running for President requires 1,500 endorsements. This year, those endorsements can be filed online through a government website. The [page for collecting endorsements](https://island.is/forsetaframbod) originally had two sections - one for registering to collect endorsements, and another to submit your endorsement. The login link for the first came higher on the page, and at least 11 people ended up accidentally running for President! https://toot.cafe/@baldur/112355190615093453 Baldur Bjarnason 2024-04-29 15:31:13+00:00 - null - True
https://simonwillison.net/b/7735 https://zed.dev/blog/zed-decoded-rope-sumtree Zed Decoded: Rope & SumTree Text editors like [Zed](https://zed.dev/) need in-memory data structures that are optimized for handling large strings where text can be inserted or deleted at any point without needing to copy the whole string. [Ropes](https://en.m.wikipedia.org/wiki/Rope_(data_structure)) are a classic, widely used data structure for this. Zed have their own implementation of ropes in Rust, but it's backed by something even more interesting: a SumTree, described here as a thread-safe, snapshot-friendly, copy-on-write B+ tree where each leaf node contains multiple items and a Summary for each Item, and internal tree nodes contain a Summary of the items in its subtree. These summaries allow for some very fast traversal tree operations, such as turning an offset in the file into a line and row coordinate and vice-versa. The summary itself can be anything, so each application of SumTree in Zed collects different summary information. Uses in Zed include tracking highlight regions, code folding state, git blame information, project file trees and more - over 20 different classes and counting. Zed co-founder Nathan Sobo calls SumTree "the soul of Zed". Also notable: this detailed article is accompanied by an [hour long video](https://youtu.be/uUu9eFNNbjg) with a four-way conversation between Zed maintainers providing a tour of these data structures in the Zed codebase. https://twitter.com/eatonphil/status/1784576184937799885 @eatonphil 2024-04-28 15:25:58+00:00 - null - True
https://simonwillison.net/b/7734 https://news.ycombinator.com/item?id=40176338 Everything Google's Python team were responsible for In a questionable strategic move, Google laid off the majority of their internal Python team [a few days ago](https://social.coop/@Yhg1s/112332127058328855). Someone on Hacker News asked what the team had been responsible for, and team member zem relied with this fascinating comment providing detailed insight into how the team worked and indirectly how Python is used within Google. - null - - null - 2024-04-27 18:52:32+00:00 - null - True
https://simonwillison.net/b/7732 https://simonwillison.net/dashboard/blogmarks-that-use-markdown/ Blogmarks that use markdown I needed to attach a correction to an older blogmark (my 20-year old name for short-form links with commentary on my blog) today - but the commentary field has always been text, not HTML, so I didn't have a way to add the necessary link. This motivated me to finally add optional **Markdown** support for blogmarks to my blog's custom Django CMS. I then went through and added inline code markup to a bunch of different older posts, and built this Django SQL Dashboard to keep track of which posts I had updated. - null - - null - 2024-04-25 04:34:18+00:00 - null - True
https://simonwillison.net/b/7731 https://countercraft.substack.com/p/no-most-books-dont-sell-only-a-dozen No, Most Books Don't Sell Only a Dozen Copies I linked to a story [the other day](https://simonwillison.net/2024/Apr/22/no-one-buys-books/) about book sales claiming "90 percent of them sold fewer than 2,000 copies and 50 percent sold less than a dozen copies", based on numbers released in the Penguin antitrust lawsuit. It turns out those numbers were interpreted incorrectly. In this piece from September 2022 Lincoln Michel addresses this and other common misconceptions about book statistics. Understanding these numbers requires understanding a whole lot of intricacies about how publishing actually works. Here's one illustrative snippet: "Take the statistic that most published books only sell 99 copies. This seems shocking on its face. But if you dig into it, you’ll notice it was counting one year’s sales of all books that were in BookScan’s system. That’s quite different statistic than saying most books don’t sell 100 copies in total! A book could easily be a bestseller in, say, 1960 and sell only a trickle of copies today." The [top comment](https://countercraft.substack.com/p/no-most-books-dont-sell-only-a-dozen/comment/8883524) on the post comes from Kristen McLean of NPD BookScan, the organization who's numbers were misrepresented is the trial. She wasn't certain how the numbers had been sliced to get that 90% result, but in her own analysis of "frontlist sales for the top 10 publishers by unit volume in the U.S. Trade market" she found that 14.7% sold less than 12 copies and the 51.4% spot was for books selling less than a thousand. - null - - null - 2024-04-25 03:41:12+00:00 - null - True
https://simonwillison.net/b/7730 https://www.snowflake.com/en/data-cloud/arctic/cookbook/ Snowflake Arctic Cookbook Today's big model release was Snowflake Arctic, an enormous 480B model with a 128×3.66B MoE (Mixture of Experts) architecture. It's Apache 2 licensed and Snowflake state that "in addition, we are also open sourcing all of our data recipes and research insights." The research insights will be shared on this Arctic Cookbook blog - which currently has two articles covering [their MoE architecture](https://medium.com/snowflake/snowflake-arctic-cookbook-series-exploring-mixture-of-experts-moe-c7d6b8f14d16) and describing [how they optimized their training run](https://medium.com/snowflake/snowflake-arctic-cookbook-series-building-an-efficient-training-system-for-arctic-6658b9bdfcae) in great detail. They also list dozens of "coming soon" posts, which should be pretty interesting given how much depth they've provided in their writing so far. - null - - null - 2024-04-25 02:47:50+00:00 - null - True
https://simonwillison.net/b/7725 https://www.elysian.press/p/no-one-buys-books No one buys books Fascinating insights into the book publishing industry gathered by Elle Griffin from details that came out during the Penguin vs. DOJ antitrust lawsuit. Publishing turns out to be similar to VC investing: a tiny percentage of books are hits that cover the costs for the vast majority that didn't sell well. The DOJ found that, of 58,000 books published in a year, "90 percent of them sold fewer than 2,000 copies and 50 percent sold less than a dozen copies." **UPDATE**: This story is inaccurate: those statistics were grossly misinterpreted during the trial. See [this post](https://simonwillison.net/2024/Apr/25/no-most-books-dont-sell-only-a-dozen-copies/) for updated information. Here's an even better debunking: [Yes, People Do Buy Books](https://countercraft.substack.com/p/yes-people-do-buy-books) (subtitle: "Despite viral claims, Americans buy over a billion books a year"). https://news.ycombinator.com/item?id=40119958 Hacker News 2024-04-22 21:55:04+00:00 - null - True
https://simonwillison.net/b/7721 https://blog.kellybrazil.com/2021/12/03/tips-on-adding-json-output-to-your-cli-app/ Tips on Adding JSON Output to Your CLI App Kelly Brazil - also the author of `jc`, the neat CLI tool that converts the output of common Unix utilities such as dig into JSON - provides some useful do's and don'ts for adding JSON output as an option to a command-line tool. Kelly recommends defaulting to arrays of flat objects - or newline-delimited objects - and suggests including an "unbuffer" option for streaming tools that discourages the OS from buffering output that is being sent through a pipe. https://news.ycombinator.com/item?id=40098606 Hacker News 2024-04-20 21:43:58+00:00 - null - True
https://simonwillison.net/b/7720 https://github.com/simonw/llm-gpt4all/releases/tag/0.4 llm-gpt4all New release of my LLM plugin which builds on Nomic's excellent gpt4all Python library. I've upgraded to their latest version which adds support for Llama 3 8B Instruct, so after a 4.4GB model download this works: `llm -m Meta-Llama-3-8B-Instruct "say hi in Spanish"` - null - - null - 2024-04-20 17:58:25+00:00 - null - True
https://simonwillison.net/b/7718 https://www.dbreunig.com/2024/04/18/a-poi-database-in-one-line.html A POI Database in One Line Overture maps offer an extraordinarily useful freely licensed databases of POI (point of interest) listings, principally derived from partners such as Facebook and including restaurants, shops, museums and other locations from all around the world. Their new "overturemaps" Python CLI utility makes it easy to quickly pull subsets of their data... but requires you to provide a bounding box to do so. Drew Breunig came up with this delightful recipe for fetching data using LLM and gpt-3.5-turbo to fill in those bounding boxes: `overturemaps download --bbox=$(llm 'Give me a bounding box for Alameda, California expressed as only four numbers delineated by commas, with no spaces, longitude preceding latitude.') -f geojsonseq --type=place | geojson-to-sqlite alameda.db places - --nl --pk=id` https://twitter.com/dbreunig/status/1781133877320523792 @dbreunig 2024-04-19 02:44:58+00:00 - null - True
https://simonwillison.net/b/7715 https://github.com/simonw/llm-reka llm-reka My new plugin for running LLM prompts against the Reka family of API hosted LLM models: `reka-core` ($10 per million input), `reka-flash` (80c per million) and `reka-edge` (40c per million). All three of those models are trained from scratch by a team that includes several Google Brain alumni. Reka Core is their most powerful model, released on Monday 15th April and claiming benchmark scores competitive with GPT-4 and Claude 3 Opus. - null - - null - 2024-04-18 03:17:03+00:00 - null - True
https://simonwillison.net/b/7714 https://github.com/mistralai/mistral-common mistralai/mistral-common New from Mistral: mistral-common, an open source Python library providing "a set of tools to help you work with Mistral models". So far that means a tokenizer! This is similar to OpenAI's tiktoken library in that it lets you run tokenization in your own code, which crucially means you can count the number of tokens that you are about to use - useful for cost estimates but also for cramming the maximum allowed tokens in the context window for things like RAG. Mistral's library is better than tiktoken though, in that it also includes logic for correctly calculating the tokens needed for conversation construction and tool definition. With OpenAI's APIs you're currently left guessing how many tokens are taken up by these advanced features. Anthropic haven't published any form of tokenizer at all - it's the feature I'd most like to see from them next. Here's how to explore the vocabulary of the tokenizer: MistralTokenizer.from_model( "open-mixtral-8x22b" ).instruct_tokenizer.tokenizer.vocab()[:12] `['<unk>', '<s>', '</s>', '[INST]', '[/INST]', '[TOOL_CALLS]', '[AVAILABLE_TOOLS]', '[/AVAILABLE_TOOLS]', '[TOOL_RESULTS]', '[/TOOL_RESULTS]']` - null - - null - 2024-04-18 00:39:54+00:00 - null - True
https://simonwillison.net/b/7711 https://15r10nk.github.io/inline-snapshot/ inline-snapshot I'm a big fan of snapshot testing, where expected values are captured the first time a test suite runs and then asserted against in future runs. It's a very productive way to build a robust test suite. inline-snapshot by Frank Hoffmann is a particularly neat implementation of the pattern. It defines a `snapshot()` function which you can use in your tests: `assert 1548 * 18489 == snapshot()` When you run that test using `pytest --inline-snapshot=create` the `snapshot()` function will be replaced in your code (using AST manipulation) with itself wrapping the `repr()` of the expected result: `assert 1548 * 18489 == snapshot(28620972)` If you modify the code and need to update the tests you can run `pytest --inline-snapshot=fix` to regenerate the recorded snapshot values. - null - - null - 2024-04-16 16:04:25+00:00 - null - True
https://simonwillison.net/b/7710 https://platform.openai.com/docs/api-reference/batch OpenAI Batch API OpenAI are now offering a 50% discount on batch chat completion API calls if you submit them in bulk and allow for up to 24 hours for them to be run. Requests are sent as a newline-delimited JSON file, with each line looking something like this: `{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-3.5-turbo", "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is 2+2?"}]}}` You upload a file for the batch, kick off a batch request and then poll for completion. This makes GPT-3.5 Turbo cheaper than Claude 3 Haiku - provided you're willing to wait a few hours for your responses. https://twitter.com/jeffintime/status/1779924149755924707 Jeff Harris 2024-04-15 17:58:44+00:00 - null - True
https://simonwillison.net/b/7659 https://github.com/simonw/s3-credentials/releases/tag/0.16 s3-credentials 0.16 I spent entirely too long this evening trying to figure out why files in my new supposedly public S3 bucket were unavailable to view. It turns out these days you need to set a `PublicAccessBlockConfiguration` of `{"BlockPublicAcls": false, "IgnorePublicAcls": false, "BlockPublicPolicy": false, "RestrictPublicBuckets": false}`. The `s3-credentials --create-bucket --public` option now does that for you. I also added a `s3-credentials debug-bucket name-of-bucket` command to help figure out why a bucket isn't working as expected. - null - - null - 2024-04-05 05:35:57+00:00 - null - True
https://simonwillison.net/b/7632 https://shelmet.readthedocs.io/en/latest/ shelmet This looks like a pleasant ergonomic alternative to Python's subprocess module, plus a whole bunch of other useful utilities. Lets you do things like this: `sh.cmd("ps", "aux").pipe("grep", "-i", check=False).run("search term")` I like the way it uses context managers as well: `with sh.environ({"KEY1": "val1"})` sets new environment variables for the duration of the block, `with sh.cd("path/to/dir")` temporarily changes the working directory and `with sh.atomicfile("file.txt") as fp` lets you write to a temporary file that will be atomically renamed when the block finishes. https://micro.webology.dev/2024/03/23/on-scratching-itches.html Jeff Triplett 2024-03-24 04:37:52+00:00 - null - True
https://simonwillison.net/b/7626 https://www.pgrs.net/2024/03/21/duckdb-as-the-new-jq/ DuckDB as the New jq The DuckDB CLI tool can query JSON files directly, making it a surprisingly effective replacement for jq. Paul Gross demonstrates the following query: `select license->>'key' as license, count(*) from 'repos.json' group by 1` `repos.json` contains an array of `{"license": {"key": "apache-2.0"}..}` objects. This example query shows counts for each of those licenses. https://lobste.rs/s/x5immj/duckdb_as_new_jq lobste.rs 2024-03-21 20:36:20+00:00 - null - True
https://simonwillison.net/b/7607 https://www.figma.com/blog/how-figmas-databases-team-lived-to-tell-the-scale/ How Figma’s databases team lived to tell the scale The best kind of scaling war story: "Figma’s database stack has grown almost 100x since 2020. [...] In 2020, we were running a single Postgres database hosted on AWS’s largest physical instance, and by the end of 2022, we had built out a distributed architecture with caching, read replicas, and a dozen vertically partitioned databases." I like the concept of "colos", their internal name for sharded groups of related tables arranged such that those tables can be queried using joins. Also smart: separating the migration into "logical sharding" - where queries all still run against a single database, even though they are logically routed as if the database was already sharded - followed by "physical sharding" where the data is actually copied to and served from the new database servers. Logical sharding was implemented using PostgreSQL views, which can accept both reads and writes: `CREATE VIEW table_shard1 AS SELECT * FROM table WHERE hash(shard_key) >= min_shard_range AND hash(shard_key) < max_shard_range)` The final piece of the puzzle was DBProxy, a custom PostgreSQL query proxy written in Go that can parse the query to an AST and use that to decide which shard the query should be sent to. Impressively it also has a scatter-gather mechanism, so `select * from table` can be sent to all shards at once and the results combined back together again. https://news.ycombinator.com/item?id=39706968 Hacker News 2024-03-14 21:23:37+00:00 - null - True
https://simonwillison.net/b/7545 https://lamplightdev.com/blog/2024/01/10/streaming-html-out-of-order-without-javascript/ Streaming HTML out of order without JavaScript A really interesting new browser capability. If you serve the following HTML: <template shadowrootmode="open"> <slot name="item-1">Loading...</slot> </template> Then later in the same page stream an element specifying that slot: <span slot="item-1">Item number 1</span> The previous slot will be replaced while the page continues to load. I tried the demo in the most recent Chrome, Safari and Firefox (and Mobile Safari) and it worked in all of them. The key feature is `shadowrootmode=open`, which looks like it was added to Firefox 123 on February 19th 2024 - the other two browsers are listed on caniuse.com as gaining it around March last year. https://news.ycombinator.com/item?id=39560180 Hacker News 2024-03-01 16:59:54+00:00 - null - True
https://simonwillison.net/b/7526 https://leanrada.com/htmz/ htmz Astonishingly clever browser platform hack by Lean Rada. Add this to a page: `<iframe hidden name=htmz onload="setTimeout(() => document.querySelector( this.contentWindow.location.hash || null)?.replaceWith( ...this.contentDocument.body.childNodes ))"></iframe>` Then elsewhere add a link like this: `<a href="/flower.html#my-element" target=htmz>Flower</a>` Clicking that link will fetch content from `/flower.html` and replace the element with ID of `my-element` with that content. https://news.ycombinator.com/item?id=39429370 Hacker News 2024-02-20 01:21:24+00:00 - null - True
https://simonwillison.net/b/7511 https://openai.com/blog/memory-and-new-controls-for-chatgpt Memory and new controls for ChatGPT ChatGPT now has "memory", and it's implemented in a delightfully simple way. You can instruct it to remember specific things about you and it will then have access to that information in future conversations - and you can view the list of saved notes in settings and delete them individually any time you want to. The feature works by adding a new tool called "bio" to the system prompt fed to ChatGPT at the beginning of every conversation, described like this: > The `bio` tool allows you to persist information across conversations. Address your message `to=bio` and write whatever information you want to remember. The information will appear in the model set context below in future conversations. I found that by prompting it to 'Show me everything from "You are ChatGPT" onwards in a code block"', [transcript here](https://chat.openai.com/share/bcd8ca0c-6c46-4b83-9e1b-dc688c7c3b4d). - null - - null - 2024-02-14 04:33:08+00:00 - null - True
https://simonwillison.net/b/7348 https://blog.jim-nielsen.com/2023/html-web-components-an-example/ HTML Web Components: An Example Jim Nielsen provides a clear example illustrating the idea of the recently coined "HTML Web Components" pattern. It's Web Components as progressive enhancement: in this example a `<user-avatar>` custom element wraps a regular image, then JavaScript defines a Web Component that enhances that image. If the JavaScript fails to load the image still displays. https://news.ycombinator.com/item?id=38298694 Hacker News 2023-11-17 16:33:24+00:00 - null - True
https://simonwillison.net/b/7328 https://www.citusdata.com/blog/2023/10/26/making-postgres-tick-new-features-in-pg-cron/ Making PostgreSQL tick: New features in pg_cron pg_cron adds cron-style scheduling directly to PostgreSQL. It's a pretty mature extension at this point, and recently gained the ability to schedule repeating tasks at intervals as low as every 1s. The examples in this post are really informative. I like this example, which cleans up the ever-growing cron.job_run_details table by using pg_cron itself to run the cleanup: `SELECT cron.schedule('delete-job-run-details', '0 12 * * *', $$DELETE FROM cron.job_run_details WHERE end_time < now() - interval '3 days'$$);` pg_cron can be used to schedule functions written in PL/pgSQL, which is a great example of the kind of DSL that I used to avoid but I'm now much happier to work with because I know GPT-4 can write basic examples for me and help me understand exactly what unfamiliar code is doing. https://news.ycombinator.com/item?id=38029671 Hacker News 2023-10-27 02:57:44+00:00 - null - True
https://simonwillison.net/b/7251 https://arstechnica.com/information-technology/2023/09/the-ai-assistant-wars-heat-up-with-claude-pro-a-new-chatgpt-plus-rival/ The AI-assistant wars heat up with Claude Pro, a new ChatGPT Plus rival I'm quoted in this piece about the new Claude Pro $20/month subscription from Anthropic: > Willison has also run into problems with Claude's morality filter, which has caused him trouble by accident: "I tried to use it against a transcription of a podcast episode, and it processed most of the text before—right in front of my eyes—it deleted everything it had done! I eventually figured out that they had started talking about bomb threats against data centers towards the end of the episode, and Claude effectively got triggered by that and deleted the entire transcript." - null - - null - 2023-09-10 17:07:45+00:00 - null - True
https://simonwillison.net/b/7168 https://deno.com/blog/v1.34 Deno 1.34: deno compile supports npm packages This feels like it could be extremely useful: Deno can load code from npm these days (`import { say } from "npm:cowsay@1.5.0"`) and now the `deno compile` command can resolve those imports, fetch all of the dependencies and bundle them together with Deno itself into a single executable binary. This means pretty much anything that's been built as an npm package can now be easily converted into a standalone binary, including cross-compilation to Windows x64, macOS x64, macOS ARM and Linux x64. - null - - null - 2023-05-25 17:01:08+00:00 - null - True
https://simonwillison.net/b/7165 https://shaneosullivan.wordpress.com/2023/05/23/instant-colour-fill-with-html-canvas/ Instant colour fill with HTML Canvas Shane O'Sullivan describes how to implement instant colour fill using HTML Canvas and some really clever tricks with Web Workers. A new technique to me is passing a `canvas.getImageData()` object to a Web Worker via `worker.postMessage({action: "process", buffer: imageData.data.buffer}, [imageData.data.buffer])` where that second argument is a list of objects to "transfer ownership of" - then the worker can create a new `ImageData()`, populate it and transfer ownership of that back to the parent window. https://news.ycombinator.com/item?id=36049386 Hacker News 2023-05-24 01:27:00+00:00 - null - True
https://simonwillison.net/b/6865 https://iscinumpy.dev/post/bound-version-constraints/ Should You Use Upper Bound Version Constraints? Should you pin your library's dependencies using `"click>=7,<8"` or `"click~=7.0"`? Henry Schreiner's short answer is no, and his long answer is an exhaustive essay covering every conceivable aspect of this thorny Python packaging problem. https://twitter.com/AdamChainz/status/1566729766388092929 @AdamChainz 2022-09-05 17:42:02+00:00 - null - True
https://simonwillison.net/b/6846 https://deps.dev/pypi/datasette datasette on Open Source Insights Open Source Insights is "an experimental service developed and hosted by Google to help developers better understand the structure, security, and construction of open source software packages". It calculates scores for packages using various automated heuristics. A JSON version of the resulting score card can be accessed using `https://deps.dev/_/s/pypi/p/{package_name}/v/` https://github.com/sethmlarson/pypi-data/blob/991afb2a4e17999a4501569f34a6990f5e05578f/main.py#L271 sethmlarson/pypi-data 2022-08-11 01:06:26+00:00 - null - True
https://simonwillison.net/b/6749 https://datastation.multiprocess.io/blog/2022-04-26-event-handler-attributes.html HTML event handler attributes: down the rabbit hole `onclick="myfunction(event)"` is an idiom for passing the click event to a function - but how does it work? It turns out the answer is buried deep in the HTML spec - the browser wraps that string of code in a `function(event) { ... that string ... }` function and makes the event available to its local scope that way. https://twitter.com/phil_eaton/status/1519048613464268804 @phil_eaton 2022-04-26 20:35:08+00:00 - null - True
https://simonwillison.net/b/6652 https://www.docker.com/blog/introduction-to-heredocs-in-dockerfiles/ Introduction to heredocs in Dockerfiles This is a fantastic upgrade to Dockerfile syntax, enabled by BuildKit and a new frontend for executing the Dockerfile that can be specified with a `#syntax=` directive. I often like to create a standalone Dockerfile that works without needing other files from a directory, so being able to use `<<EOF` syntax to populate configure files from inline blocks of code is really handy. https://twitter.com/mwarkentin/status/1462825512263467012 @mwarkentin 2021-11-22 17:01:18+00:00 - null - True
https://simonwillison.net/b/6572 https://blog.azuki.vip/csrf/ OkCupid had a CSRF vulnerability Good write-up of a (now fixed) CSRF vulnerability on OkCupid. Their site worked by POSTing JSON objects to an API. JSON POSTs are usually protected against CSRF because they can only be sent using `fetch()` or `XMLHttpRequest`, which are protected by the same-origin policy. Yan Zhu notes that you can use the `enctype="text/plain"` attribute on a form (introduced in HTML5) and a crafty hidden input element with `name='{"foo":"' value='bar"}'` to construct JSON in an off-site form, which enabled CSRF attacks. https://news.ycombinator.com/item?id=28039631 How to boost your popularity on OkCupid using CSRF and a JSON type confusion on Hacker News 2021-08-02 22:12:36+00:00 - null - True
https://simonwillison.net/b/6419 https://css-tricks.com/custom-properties-as-state/ Custom Properties as State Fascinating thought experiment by Chris Coyier: since CSS custom properties can be defined in an external stylesheet, we can APIs that return stylesheets defining dynamically server-side generated CSS values for things like time-of-day colour schemes or even strings that can be inserted using `::after { content: var(--my-property)`. This gave me a very eccentric idea for [a Datasette plugin](https://datasette.io/plugins/datasette-css-properties)... - null - - null - 2021-01-07 19:39:49+00:00 - null - True
https://simonwillison.net/b/5917 https://www.rosettacode.org/wiki/String_length String length - Rosetta Code Calculating the length of a string is surprisingly difficult once Unicode is involved. Here's a fascinating illustration of how that problem can be attached dozens of different programming languages. From that page: the string `"J̲o̲s̲é̲"` (`"J\x{332}o\x{332}s\x{332}e\x{301}\x{332}"`) has 4 user-visible graphemes, 9 characters (code points), and 14 bytes when encoded in UTF-8. https://twitter.com/jeffsonstein/status/1098927304124841984 @jeffsonstein 2019-02-22 15:27:31+00:00 - null - True
https://simonwillison.net/b/5826 http://nip.io/ nip.io "NIP.IO maps `<anything>.<IP Address>.nip.io` to the corresponding `<IP Address>`, even `127.0.0.1.nip.io` maps to `127.0.0.1`" - looks useful. `xip.io` is a different service that does the same thing. Being able to put anything at the start looks handy for testing systems that handle different subdomains. - null - - null - 2018-12-12 18:18:09+00:00 - null - True
https://simonwillison.net/b/5522 https://calendar.perfplanet.com/2017/animated-gif-without-the-gif/ Evolution of <img>: Gif without the GIF Safari Technology Preview lets you use `<img src="movie.mp4">`, for high quality animated gifs in 1/14th of the file size. https://twitter.com/cramforce/status/937746796951957504 Malte Ubl 2017-12-04 19:28:03+00:00 - null - True
https://simonwillison.net/b/5516 https://caniuse.com/#search=input-color Can I use... input type=color TIL `<input type="color">` has reached 78.83% support globally already - biggest gap right now is Mobile Safari. - null - - null - 2017-11-29 21:56:39+00:00 - null - True
https://simonwillison.net/b/5414 https://twitter.com/brandur/status/923982980674043904 Benefit of TEXT with CHECK over VARCHAR(X) in PostgreSQL Brandur suggests using `email TEXT CHECK (char_length(email) <= 255)` to define a column with a length limit in PostgreSQL over `VARCHAR(255)` because `TEXT` and `VARCHAR` are equally performant but a `CHECK` length can be changed later on without locking the table, whereas a `VARCHAR` requires an `ALTER TABLE` with an exclusive lock. - null - - null - 2017-10-28 00:59:34+00:00 - null - True
https://simonwillison.net/b/4870 http://code.djangoproject.com/wiki/Version1.2Features Django 1.2 planned features The votes are in and the plan for Django 1.2 has taken shape - features are split in to high, medium and low priority. There's some really exciting stuff in there - outside of the things I've already talked about, I'm particularly excited about multidb, `Model.objects.raw(SQL)`, the smarter `{% if %}` tag and class-based generic views. - null - - null - 2009-10-26 10:38:06+00:00 - null - True
https://simonwillison.net/b/4480 http://www.djangosnippets.org/snippets/1350/ Django snippets: Smart {% if %} template tag Chris Beaven's drop-in replacement for Django's `{% if %}` tag that adds comparison operators (less than, greater than, not equal etc) while staying backwards compatible with the less able original. I love it. This is one place where I no longer favour Django's stated philosophy: I think it's perfectly reasonable to use comparisons in presentation logic, and I've found that in my own code the lack of an advanced if tag frequently leads to pure presentation logic sneaking in to my view functions. - null - - null - 2009-03-03 15:03:21+00:00 - null - True
https://simonwillison.net/b/4349 http://www.mikeash.com/?page=pyblog/friday-qa-2008-12-26.html Blocks in Objective-C Closures are coming soon to Objective-C - interesting syntax, a regular curly brace block preceded by a caret `^{ ... }`. - null - - null - 2008-12-29 19:38:08+00:00 - null - True
https://simonwillison.net/b/4344 http://www.netzgesta.de/dev/quickchoice.html Quickchoice - a Speed Dial clone Lovely demonstration of the CSS transform property, as supported by modern browsers. The magic is all in the `iframe { transform: scale(0.25, 0.25) translate(-1200px, -900px) }` http://ajaxian.com/ Ajaxian 2008-12-23 12:49:24+00:00 - null - True
https://simonwillison.net/b/4054 http://blog.whatwg.org/this-week-in-html5-episode-1 This Week in HTML 5 - Episode 1 It looks like the most controversial aspect of the HTML 5 spec has been addressed - now, instead of omitting the alt attribute for user generated content that has no relevant information available, sites are advised to provide an indication of the kind of image expected surrounded by braces, for example `alt="{uploaded photo}"`. - null - - null - 2008-08-07 07:57:11+00:00 - null - True
https://simonwillison.net/b/2848 http://www.djangoproject.com/documentation/url_dispatch/#naming-url-patterns Naming URL patterns You can now apply a name to a URL pattern in Django development version, which makes the `{% url %}` template tag far more useful. http://www.djangoproject.com/weblog/2007/apr/08/weekinreview/ Django Weblog - Week in review: April 8 2007-04-10 00:19:55+00:00 - null - True
https://simonwillison.net/b/2803 http://www.mozilla.org/projects/firefox/3.0a3/releasenotes/ Mozilla Gran Paradiso Alpha 3 Release Notes New features include animated PNGs, `<link rel="offline-resource">` and the `HttpOnly` cookie flag which indicates that a cookie should not be accessible to script (borrowed from IE). - null - - null - 2007-03-25 21:37:44+00:00 - null - True
https://simonwillison.net/b/653 http://mpt.net.nz/archive/2004/05/02/b-and-i When semantic markup goes bad Matthew Thomas argues for `<b>` and `<i>` - null - - null - 2004-05-04 17:38:37+00:00 - null - True
https://simonwillison.net/b/106 http://www.meyerweb.com/eric/thoughts/200312.html#t20031208 Congratulations to Eric and Kat `kat+eric:first-child {name:carolyn;}` (pinched from Web Graphics) http://web-graphics.com/mtarchive/001104.php wg:Baby 2003-12-09 22:16:19+00:00 - null - True
Copy and export data

Duration: 8.63ms