Search my code examples

Run searches against all of the code examples I have ever included on my blog.

Owned by simonw, visibility: Public

Query parameters: search

SQL query
with results_stripped as (
  select
    id,
    title,
    -- Extract each <pre>...</pre> block, strip any remaining tags, then
    -- decode HTML entities; chr(59) is ';', written this way so the query
    -- text itself contains no literal entities
    replace(replace(replace(replace(replace(
      regexp_replace(
        (regexp_matches(body, '<pre>(.*?)</pre>', 'g'))[1],
        E'<[^>]+>', '', 'gi'
      ),
      '&quot' || chr(59), '"'),
      '&gt' || chr(59), '>'),
      '&lt' || chr(59), '<'),
      '&#039' || chr(59), chr(39)),
      '&amp' || chr(59), '&'
    ) as code
  from
    blog_entry
  where
    body ~ '<pre>.*</pre>'
)
select
  id,
  title,
  code,
  'https://simonwillison.net/e/' || id as link
from results_stripped
where code like '%%' || %(search)s || '%%'
order by id desc
limit 10
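A hedged sketch (not the tool's actual implementation) of what this query does to each code block: the Python equivalent of the tag-stripping and entity-decoding, plus how the psycopg-style `%(search)s` parameter ends up as a LIKE pattern.

```python
import html
import re

# Strip tags first, then decode entities, mirroring the SQL's ordering
raw = "<pre>print(&quot;hi&quot;)</pre>"
code = re.sub(r"<[^>]+>", "", raw)
code = html.unescape(code)
# code == 'print("hi")'

# In the SQL, '%%' is a literal '%' (escaped for pyformat parameters),
# so the bound LIKE pattern wraps the search term in wildcards:
params = {"search": "json_group_object"}
pattern = "%" + params["search"] + "%"
```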

10 rows

id title code link
8811 Structured data extraction from unstructured content using LLM schemas sqlite3 "$(llm logs path)" 'select count(*) from responses' https://simonwillison.net/e/8811
8684 Trying out QvQ - Qwen's new visual reasoning model ========== Image: ['pelicans-on-bicycles-veo2.jpg'] Prompt: <|im_start|>system You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step.<|im_end|> <|im_start|>user describe this<|vision_start|><|image_pad|><|vision_end|><|im_end|> <|im_start|>assistant Alright, I've got this "Text to Video" tool to explore. It seems pretty advanced, allowing me to turn text descriptions into actual videos. The interface looks user-friendly, with a dark theme that's easy on the eyes. On the left side, there's a panel where I can input my text prompt. It already has an example filled in: "A pelican riding a bicycle along a coastal path overlooking a harbor." That sounds interesting! I can see four different video thumbnails on the right side, each showing variations of this scene. One of them even has the pelican wearing a helmet, which adds a humorous touch. Another shows the pelican standing next to a bicycle, almost as if it's ready to hop on and start pedaling. Below the text input, there are some style options like "photorealistic," "35mm film," "minimal," "sketchy," "handmade," and "abstract." These probably let me choose the visual style of the video. For now, I'll stick with the default settings. At the bottom of the left panel, there are two buttons: "Start over" and "Create videos." I guess "Create videos" is the one I need to press to generate the videos based on my text prompt. Let me think about this. I want to see what kind of videos this tool can produce based on my description. The example prompt is already filled in, so maybe I should just go ahead and click "Create videos" to see what happens. Wait a minute, the prompt mentions a coastal path overlooking a harbor. In the thumbnails, I can see a background with water and boats, which matches the description. It's cool how the tool has interpreted the text into visual elements. 
I should also pay attention to the "Visual Prompt" section, which says "View only." That probably means I can't edit it directly, but I can view the generated videos based on this prompt. Hmm, maybe I should try changing the prompt to see different results. For example, instead of a pelican riding a bicycle, maybe I could describe a different scene. But for now, I'll stick with the given example to understand how it works. After clicking "Create videos," I expect to see the videos being generated. There might be a loading process, depending on the complexity of the prompt and the tool's processing speed. Once the videos are ready, I can preview them to see how well they match my description. If I'm not satisfied, I can always go back and adjust the prompt or try different style options. Overall, this seems like a powerful tool for creating custom videos based on textual descriptions. It could be really useful for storytelling, presentations, or even just for fun, experimenting with different scenes and styles. I think I'll spend some time exploring the different style options to see how they affect the final videos. For instance, selecting "35mm film" might give the videos a vintage, cinematic look, while "abstract" could produce more artistic and less realistic visuals. Also, I wonder if there's a way to customize more specific aspects, like the camera angle, lighting, or movement. Maybe after generating the initial videos, there are additional editing options available. In summary, I'm excited to use this Text to Video tool and see what kind of creative possibilities it opens up. With a bit of practice, I could probably create some impressive visual content just by describing what I want to see. **Final Answer** To proceed with generating videos based on the provided text prompt, I need to click on the "Create videos" button. 
This action will initiate the process of converting the textual description into visual content, resulting in multiple video thumbnails that can be previewed for further use or adjustments. ========== Prompt: 0.870 tokens-per-sec Generation: 7.694 tokens-per-sec https://simonwillison.net/e/8684
8613 Ask questions of SQLite databases and CSV/JSON files in your terminal System prompt: You will be given a SQLite schema followed by a question. Generate a single SQL query to answer that question. Return that query in a ```sql ... ``` fenced code block. Example: How many repos are there? Answer: ```sql select count(*) from repos ``` Prompt: ... CREATE TABLE [stats] ( [package] TEXT, [date] TEXT, [downloads] INTEGER, PRIMARY KEY ([package], [date]) ); ... how many sqlite-utils pypi downloads in 2024? https://simonwillison.net/e/8613
8586 Visualizing local election results with Datasette, Observable and MapLibre GL // Select the contest viewof contest = Inputs.select(contests, { label: "Choose a contest" }) // --- // And the candidate viewof candidate = Inputs.radio( candidates, { label: "Choose a candidate", value: candidates[0] } ) // --- // Show the map itself Plot.plot({ width, height: 600, legend: true, color: { scheme: "blues", legend: true }, projection: { type: "mercator", domain: data2 }, marks: [ Plot.geo(data2, { strokeOpacity: 0.1, fill: "ratio", tip: true }) ] }) # --- data2 = ({ type: "FeatureCollection", features: raw_data2.map((d) => ({ type: "Feature", properties: { precinct: d.Precinct_name, total_ballots: d.total_ballots, ratio: JSON.parse(d.votes_by_candidate)[candidate] / d.total_ballots }, geometry: JSON.parse(d.geometry) })) }) // --- raw_data2 = query( `select Precinct_name, precincts.geometry, total_ballots, json_grop_object( candidate_name, total_votes ) as votes_by_candidate from election_results join precincts on election_results.Precinct_name = precincts.precinct_id where Contest_title = :contest group by Precinct_name, precincts.geometry, total_ballots;`, { contest } ) // --- raw_data2 = query( `select Precinct_name, precincts.geometry, total_ballots, json_group_object( candidate_name, total_votes ) as votes_by_candidate from election_results join precincts on election_results.Precinct_name = precincts.precinct_id where Contest_title = :contest group by Precinct_name, precincts.geometry, total_ballots;`, { contest } ) // --- // Fetch the available contests contests = query("select distinct Contest_title from election_results").then( (d) => d.map((d) => d.Contest_title) ) // --- // Extract available candidates for selected contest candidates = Object.keys( JSON.parse(raw_data2[0].votes_by_candidate) ) // --- function query(sql, params = {}) { return fetch( `https://datasette-public-office-hours.datasette.cloud/data/-/query.json?${new URLSearchParams( { sql, _shape: 
"array", ...params } ).toString()}`, { headers: { Authorization: `Bearer ${secret}` } } ).then((r) => r.json()); } https://simonwillison.net/e/8586
8586 Visualizing local election results with Datasette, Observable and MapLibre GL sql = ` select Precinct_name, precincts.geometry, Split_name, Reporting_flag, Update_count, Pct_Id, Pct_seq_nbr, Reg_voters, Turn_Out, Contest_Id, Contest_seq_nbr, Contest_title, Contest_party_name, Selectable_Options, candidate_id, candidate_name, Candidate_Type, cand_seq_nbr, Party_Code, total_ballots, total_votes, total_under_votes, total_over_votes, [Vote Centers_ballots], [Vote Centers_votes], [Vote Centers_under_votes], [Vote Centers_over_votes], [Vote by Mail_ballots], [Vote by Mail_votes], [Vote by Mail_under_votes], [Vote by Mail_over_votes] from election_results join precincts on election_results.Precinct_name = precincts.precinct_id where "Contest_title" = "Granada Community Services District Members, Board of Directors" limit 101;` https://simonwillison.net/e/8586
8586 Visualizing local election results with Datasette, Observable and MapLibre GL select Precinct_name, precincts.geometry, total_ballots, json_group_object( candidate_name, total_votes ) as votes_by_candidate from election_results join precincts on election_results.Precinct_name = precincts.precinct_id where Contest_title = "Granada Community Services District Members, Board of Directors" group by Precinct_name, precincts.geometry, total_ballots; https://simonwillison.net/e/8586
8480 Optimizing Datasette (and other weeknotes) select count(*) from ( select * from libfec_SA16 limit 10001 ) https://simonwillison.net/e/8480
8480 Optimizing Datasette (and other weeknotes) select date(column_to_test) from ( select * from mytable ) where column_to_test glob "????-??-*" limit 100; https://simonwillison.net/e/8480
8480 Optimizing Datasette (and other weeknotes) select date(column_to_test) from ( select * from mytable limit 100 ) where column_to_test glob "????-??-*" https://simonwillison.net/e/8480
8382 Building search-based RAG using Claude, Datasette and Val Town select blog_entry.id, blog_entry.title, blog_entry.body, blog_entry.created from blog_entry join blog_entry_fts on blog_entry_fts.rowid = blog_entry.rowid where blog_entry_fts match :search order by rank limit 10 https://simonwillison.net/e/8382
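The entry 8586 results above lean on SQLite's `json_group_object()` aggregate to pivot one row per candidate into a single JSON object per precinct. A minimal self-contained sketch of that technique, with made-up table and data (the names and values here are illustrative only, and it assumes a SQLite build with the JSON functions compiled in):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table election_results (
        Precinct_name text, candidate_name text, total_votes integer
    );
    insert into election_results values
        ('P1', 'Alice', 120), ('P1', 'Bob', 80);
""")
# One row per precinct, with candidates folded into a JSON object
row = conn.execute("""
    select Precinct_name,
           json_group_object(candidate_name, total_votes) as votes_by_candidate
    from election_results
    group by Precinct_name
""").fetchone()
votes = json.loads(row[1])  # a dict mapping candidate name to vote count
```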

Duration: 184.01ms