Friday, 11th July 2025
Grok: searching X for “from:elonmusk (Israel OR Palestine OR Hamas OR Gaza)”
If you ask the new Grok 4 for opinions on controversial questions, it will sometimes run a search to find out Elon Musk’s stance before providing you with an answer.
Postgres LISTEN/NOTIFY does not scale (via) I think this headline is justified. Recall.ai, a provider of meeting transcription bots, noticed that their PostgreSQL instance was being bogged down by heavy concurrent writes.
After some spelunking they found this comment in the PostgreSQL source explaining that transactions with a pending notification take out a global lock against the entire PostgreSQL instance (represented by database 0) to ensure "that queue entries appear in commit order".
Moving away from LISTEN/NOTIFY to trigger actions on changes to rows gave them a significant performance boost under high write loads.
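To make the pattern concrete, here's a minimal sketch of a LISTEN consumer using psycopg2 (the driver is my choice for illustration, not something from the article). The scaling problem is on the NOTIFY side: every committing transaction that has queued a notification must serialize through that global lock.

import select

import psycopg2
import psycopg2.extensions

# Hypothetical connection string and channel name, for illustration only
conn = psycopg2.connect("dbname=app")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
# The NOTIFY side is typically a row-level trigger calling pg_notify()
cur.execute("LISTEN row_changes;")

while True:
    # Wait up to five seconds for the socket to become readable,
    # then drain any pending notifications
    if select.select([conn], [], [], 5) != ([], [], []):
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print(f"change on {notify.channel}: {notify.payload}")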
Generationship: Ep. #39, Simon Willison. I recorded this podcast episode with Rachel Chalmers a few weeks ago. We talked about the resurgence of blogging, the legacy of Google Reader, learning in public, LLMs as weirdly confident interns, AI-assisted search, prompt injection, and human augmentation over replacement, and we finished with this delightful aside about pelicans which I'll quote here in full:
Rachel: My last question, my favorite question. If you had a generation ship, a star ship that takes more than a human generation to get to Alpha Centauri, what would you call it?
Simon: I'd call it Squadron, because that is the collective noun for pelicans. And I love pelicans.
Rachel: Pelicans are the best.
Simon: They're the best. I live in Half Moon Bay. We have the second largest mega roost of the California brown pelican in the world, in our local harbor [...] last year we had over a thousand pelicans diving into the water at the same time at peak anchovy season or whatever it was.
The largest mega roost, because I know you want to know, is in Alameda, over by the aircraft carrier.
Rachel: The Hornet.
Simon: Yeah. It's got the largest mega roost of the California brown pelican at certain times of the year. They're so photogenic. They've got charisma. They don't look properly shaped for flying.
Rachel: They look like the Spruce Goose. They've got the big front. And they look like they're made of wood.
Simon: That's such a great comparison, because I saw the Spruce Goose a couple of years ago. Up in Portland, there's this museum that has the Spruce Goose, and I went to see it. And it's incredible. Everyone makes fun of the Spruce Goose until you see the thing. And it's this colossal, beautiful wooden aircraft. Until recently it was the largest aircraft in the world. And it's such a stunning vehicle.
So yeah, pelicans and the Spruce Goose. I'm going to go with that one.
Following the widespread availability of large language models (LLMs), the Django Security Team has received a growing number of security reports generated partially or entirely using such tools. Many of these contain inaccurate, misleading, or fictitious content. While AI tools can help draft or analyze reports, they must not replace human understanding and review.
If you use AI tools to help prepare a report, you must:
- Disclose which AI tools were used and specify what they were used for (analysis, writing the description, writing the exploit, etc.).
- Verify that the issue describes a real, reproducible vulnerability that otherwise meets these reporting guidelines.
- Avoid fabricated code, placeholder text, or references to non-existent Django features.
Reports that appear to be unverified AI output will be closed without response. Repeated low-quality submissions may result in a ban from future reporting.
— Django’s security policies, on AI-Assisted Reports
moonshotai/Kimi-K2-Instruct (via) Colossal new open weights model release today from Moonshot AI, a two-year-old Chinese AI lab with a name inspired by Pink Floyd’s album The Dark Side of the Moon.
My HuggingFace storage calculator says the repository is 958.52 GB. It's a mixture-of-experts model with "32 billion activated parameters and 1 trillion total parameters", trained using the Muon optimizer as described in Moonshot's joint paper with UCLA, "Muon is Scalable for LLM Training".
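Muon is interesting in its own right: as I understand it, the optimizer runs SGD-with-momentum but orthogonalizes each weight matrix's update with a few Newton-Schulz iterations before applying it. Here's a minimal NumPy sketch (coefficients from the original Muon write-up; this is illustrative, not Moonshot's actual training code):

import numpy as np

def newton_schulz5(G, steps=5, eps=1e-7):
    # Quintic Newton-Schulz iteration that pushes G toward the nearest
    # semi-orthogonal matrix
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + eps)  # scale so the spectral norm is <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

def muon_step(W, grad, buf, lr=0.02, momentum=0.95):
    # Standard momentum buffer; the applied update is the orthogonalized
    # version, scaled to keep update magnitude consistent across shapes
    buf = momentum * buf + grad
    update = newton_schulz5(buf)
    return W - lr * update * max(1, W.shape[0] / W.shape[1]) ** 0.5, buf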
I think this may be the largest ever open weights model? DeepSeek v3 is 671B.
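As a sanity check on that 958.52 GB figure: it works out to roughly one byte per parameter, which is consistent with the weights being stored in an 8-bit format (an assumption on my part):

total_params = 1_000_000_000_000  # "1 trillion total parameters"
repo_bytes = 958.52 * 10**9       # repository size, reading GB as decimal gigabytes
print(repo_bytes / total_params)  # ~0.96 bytes per parameter, i.e. ~8-bit weights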
I created an API key for Moonshot, added some dollars and ran a prompt against it using my LLM tool. First I added this to the extra-openai-models.yaml file:
- model_id: kimi-k2
  model_name: kimi-k2-0711-preview
  api_base: https://api.moonshot.ai/v1
  api_key_name: moonshot
Then I set the API key:
llm keys set moonshot
# Paste key here
And ran a prompt:
llm -m kimi-k2 "Generate an SVG of a pelican riding a bicycle" \
  -o max_tokens 2000
(The default max tokens setting was too short.)
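(If your LLM version is recent enough to support per-model default options you should be able to persist that instead of passing -o every time. I believe the command looks like the following, but treat it as an assumption and check llm --help on your install:)

llm models options set kimi-k2 max_tokens 2000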
This is pretty good! The spokes are a nice touch. Full transcript here.
This one is open weights but not open source: they're using a modified MIT license with this non-OSI-compliant section tacked on at the end:
Our only modification part is that, if the Software (or any derivative works thereof) is used for any of your commercial products or services that have more than 100 million monthly active users, or more than 20 million US dollars (or equivalent in other currencies) in monthly revenue, you shall prominently display "Kimi K2" on the user interface of such product or service.
Update: MLX developer Awni Hannun reports:
The new Kimi K2 1T model (4-bit quant) runs on 2 512GB M3 Ultras with mlx-lm and mx.distributed.
1 trillion params, at a speed that's actually quite usable