<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: jason-liu</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/jason-liu.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2025-09-06T17:20:27+00:00</updated><author><name>Simon Willison</name></author><entry><title>Quoting Jason Liu</title><link href="https://simonwillison.net/2025/Sep/6/jason-liu/#atom-tag" rel="alternate"/><published>2025-09-06T17:20:27+00:00</published><updated>2025-09-06T17:20:27+00:00</updated><id>https://simonwillison.net/2025/Sep/6/jason-liu/#atom-tag</id><summary type="html">
    &lt;blockquote cite="https://twitter.com/jxnlco/status/1964050092312211636"&gt;&lt;p&gt;I am once again shocked at how much better image retrieval performance you can get if you embed highly opinionated summaries of an image, a summary that came out of a visual language model, than using CLIP embeddings themselves. If you tell the LLM that the summary is going to be embedded and used to do search downstream. I had one system go from 28% recall at 5 using CLIP to 75% recall at 5 using an LLM summary.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/jxnlco/status/1964050092312211636"&gt;Jason Liu&lt;/a&gt;&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/embeddings"&gt;embeddings&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vision-llms"&gt;vision-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jason-liu"&gt;jason-liu&lt;/a&gt;&lt;/p&gt;



</summary><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="embeddings"/><category term="vision-llms"/><category term="jason-liu"/></entry><entry><title>What is prompt optimization?</title><link href="https://simonwillison.net/2024/May/22/what-is-prompt-optimization/#atom-tag" rel="alternate"/><published>2024-05-22T16:02:10+00:00</published><updated>2024-05-22T16:02:10+00:00</updated><id>https://simonwillison.net/2024/May/22/what-is-prompt-optimization/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://jxnl.co/writing/2024/05/22/what-is-prompt-optimization/"&gt;What is prompt optimization?&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Delightfully clear explanation of a simple automated prompt optimization strategy from Jason Liu. Gather a selection of examples and build an evaluation function that returns a numeric score (the hard bit). Then try different shuffled subsets of those examples in your prompt and look for the example collection that produces the highest average score.&lt;/p&gt;
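&lt;p&gt;A minimal sketch of that loop, assuming a caller-supplied &lt;code&gt;evaluate&lt;/code&gt; function that scores a candidate subset of few-shot examples (all names here are hypothetical, not from Jason's post):&lt;/p&gt;

```python
import random

def optimize_examples(examples, evaluate, subset_size=4, trials=20, seed=0):
    """Sample shuffled subsets of few-shot examples and keep the best scorer.

    `evaluate(subset)` must return a numeric score, e.g. an average over an
    evaluation set; building that function is the hard bit, per the post.
    """
    rng = random.Random(seed)
    best_subset, best_score = None, float("-inf")
    for _ in range(trials):
        # rng.sample varies both which examples are picked and their order,
        # and ordering matters for in-context examples.
        subset = rng.sample(examples, subset_size)
        score = evaluate(subset)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```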

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://twitter.com/jxnlco/status/1793279761696895047"&gt;@jxnlco&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/prompt-engineering"&gt;prompt-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jason-liu"&gt;jason-liu&lt;/a&gt;&lt;/p&gt;



</summary><category term="ai"/><category term="prompt-engineering"/><category term="generative-ai"/><category term="llms"/><category term="jason-liu"/></entry></feed>