Simon Willison’s Weblog

Quotations in Apr, 2023

The Consumer Financial Protection Bureau (CFPB) supervises, sets rules for, and enforces numerous federal consumer financial laws and guards consumers in the financial marketplace from unfair, deceptive, or abusive acts or practices and from discrimination [...] the fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.

The Consumer Financial Protection Bureau (PDF) # 26th April 2023, 12:36 am

A lot of people who claim to be doing prompt engineering today are actually just blind prompting. “Blind Prompting” is a term I am using to describe the method of creating prompts with a crude trial-and-error approach paired with minimal or no testing and a very surface-level knowledge of prompting. Blind prompting is not prompt engineering. [...] In this blog post, I will make the argument that prompt engineering is a real skill that can be developed based on real experimental methodologies.

Mitchell Hashimoto # 23rd April 2023, 4:08 am
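
To make “experimental methodologies” concrete: the post walks through building a small labeled test set and scoring each candidate prompt against it. Here's a minimal sketch of that loop; the test cases and the run_model callable are hypothetical stand-ins, not code from the post:

```python
# Hypothetical labeled test set for an intent-classification prompt.
TEST_CASES = [
    {"input": "Dinner with Alice at 7pm tomorrow", "expected": "create_event"},
    {"input": "What's on my calendar today?", "expected": "list_events"},
]

def evaluate(prompt_template: str, run_model) -> float:
    """Return the accuracy of a prompt template over the labeled test cases."""
    hits = 0
    for case in TEST_CASES:
        completion = run_model(prompt_template.format(input=case["input"]))
        hits += completion.strip() == case["expected"]
    return hits / len(TEST_CASES)

# usage: evaluate("Classify the request: {input}\nIntent:", my_model_fn)
```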

Other tech-friendly journalists I know have been going through something similar: Suddenly, we’ve got something like a jetpack to strap to our work. Sure, the jetpack is kinda buggy. Yes, sometimes it crashes and burns. And the rules for its use aren’t clear, so you’ve got to be super careful with it. But sometimes it soars, shrinking tasks that would have taken hours down to mere minutes, sometimes minutes to seconds.

Farhad Manjoo # 21st April 2023, 8:41 pm

The AI Writing thing is just pivot to video all over again, a bunch of dead-eyed corporate types willing to listen to any snake oil salesman who offers them higher potential profits. It’ll crash in a year but scuttle hundreds of livelihoods before it does.

Dan Sheehan # 21st April 2023, 4:38 pm

Although fine-tuning can feel like the more natural option—training on data is how GPT learned all of its other knowledge, after all—we generally do not recommend it as a way to teach the model knowledge. Fine-tuning is better suited to teaching specialized tasks or styles, and is less reliable for factual recall. [...] In contrast, message inputs are like short-term memory. When you insert knowledge into a message, it’s like taking an exam with open notes. With notes in hand, the model is more likely to arrive at correct answers.

Ted Sanders, OpenAI # 15th April 2023, 1:44 pm
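
A minimal sketch of the “open notes” pattern, using the openai Python library as it existed at the time (pre-1.0); the notes string here stands in for whatever retrieval step supplies the context:

```python
import openai  # openai-python < 1.0; reads OPENAI_API_KEY from the environment

notes = "Acme's refund window is 30 days from delivery."  # retrieved context

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the notes provided."},
        {"role": "user", "content": f"Notes:\n{notes}\n\nHow long is the refund window?"},
    ],
)
print(response.choices[0].message.content)
```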

One way to avoid unspotted prediction errors is for the technology in its current state to have early and frequent contact with reality as it is iteratively developed, tested, deployed, and all the while improved. And there are creative ideas people don’t often discuss which can improve the safety landscape in surprising ways — for example, it’s easy to create a continuum of incrementally-better AIs (such as by deploying subsequent checkpoints of a given training run), which presents a safety opportunity very unlike our historical approach of infrequent major model upgrades.

Greg Brockman # 14th April 2023, 6:08 pm

Before we scramble to deeply integrate LLMs everywhere in the economy, can we pause and think whether it is wise to do so?

This is quite immature technology and we don’t understand how it works.

If we’re not careful we’re setting ourselves up for a lot of correlated failures.

Jan Leike, Alignment Team lead, OpenAI # 13th April 2023, 7:08 pm

Graphic designers had a similar sea change ~20-25 years ago.

Flyers, restaurant menus, wedding invitations, price lists... That sort of thing was bread and butter work for most designers. Then desktop publishing happened and a large fraction of designers lost their main source of income as the work shifted to computer-assisted unskilled labor.

The field still thrives today, but that simple work is gone forever.

Janne Moren # 12th April 2023, 3:28 am

I literally lost my biggest and best client to ChatGPT today. This client is my main source of income, he’s a marketer who outsources the majority of his copy and content writing to me. Today he emailed saying that although he knows AI’s work isn’t nearly as good as mine, he can’t ignore the profit margin. [...] Please do not think you are immune to this unless you are the top 1% of writers. I just signed up for Doordash as a driver. I really wish I was kidding.

u/Ashamed_Apricot6626 # 11th April 2023, 6:20 pm

My strong hunch is that the GIL does not need removing, if a) subinterpreters have their own GILs and b) an efficient way is provided to pass (some) data between subinterpreters lock free and c) we find good patterns to make working with subinterpreters work.

Armin Ronacher # 11th April 2023, 4:47 pm
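
Subinterpreters are already reachable today through CPython's experimental, non-public _xxsubinterpreters module, though per-interpreter GILs don't exist yet; a quick taste, with the caveat that none of this is a stable API:

```python
# CPython's experimental internal module (not a stable public API);
# as of this writing, subinterpreters still share the single GIL.
import _xxsubinterpreters as interpreters

interp = interpreters.create()
interpreters.run_string(interp, "print('hello from a subinterpreter')")
interpreters.destroy(interp)
```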

The progress in AI has allowed things like taking down hate speech more efficiently—and this is due entirely to large language models. Because we have large language models [...] we can do a better job than we ever could in detecting hate speech in most languages in the world. That was impossible before.

Yann LeCun # 7th April 2023, 7:32 pm

For example, if you prompt GPT-3 with “Mary had a,” it usually completes the sentence with “little lamb.” That’s because there are probably thousands of examples of “Mary had a little lamb” in GPT-3’s training data set, making it a sensible completion. But if you add more context in the prompt, such as “In the hospital, Mary had a,” the result will change and return words like “baby” or “series of tests.”

Benj Edwards # 7th April 2023, 3:36 am
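
You can watch this happen with the completions API of the era (openai Python library pre-1.0); temperature 0 makes the comparison roughly deterministic:

```python
import openai  # openai-python < 1.0; reads OPENAI_API_KEY from the environment

for prompt in ["Mary had a", "In the hospital, Mary had a"]:
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=8,
        temperature=0,  # pick the most likely continuation each time
    )
    print(repr(prompt), "->", response.choices[0].text.strip())
```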

Several libraries let you declare objects with type-hinted members and automatically derive validation rules and serialization/deserialization from the type hints – Pydantic is the most popular, but alternatives like msgspec are out there too. There’s also a whole new generation of web frameworks like FastAPI and Starlite which use type hints at runtime to do not just input validation and serialization/deserialization but also things like dependency injection.

Personally, I’ve seen more significant gains in productivity from those runtime usages of Python’s type hints than from any static ahead-of-time type checking, which mostly is only useful to me as documentation.

James Bennett # 7th April 2023, 2:19 am
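
A minimal Pydantic (v1) example of type hints doing runtime work, with validation, coercion and serialization all derived from the same annotations:

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

# The type hints drive both coercion and validation at runtime.
user = User.parse_obj({"name": "Ada", "age": "36"})  # "36" coerced to int
print(user.json())  # serialization derived from the same hints

try:
    User.parse_obj({"name": "Ada", "age": "not a number"})
except ValidationError as exc:
    print(exc)
```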

Projectories have power. Power for those who are trying to invent new futures. Power for those who are trying to mobilize action to prevent certain futures. And power for those who are trying to position themselves as brokers, thought leaders, controllers of future narratives in this moment of destabilization. But the downside to these projectories is that they can also veer way off the railroad tracks into the absurd. And when the political, social, and economic stakes are high, they can produce a frenzy that has externalities that go well beyond the technology itself. That is precisely what we’re seeing right now.

danah boyd # 7th April 2023, 2:04 am

[On AI-assisted programming] I feel like I got a small army of competent hackers to both do my bidding and to teach me as I go. It’s just pure delight and magic.

It’s riding a bike downhill and playing with legos and having a great coach and finishing a project all at once.

Matt Bateman # 5th April 2023, 11:50 pm

My guess is that MidJourney has been doing a massive-scale reinforcement learning from human feedback (“RLHF”)—possibly the largest ever for text-to-image.

When human users choose to upscale an image, it’s because they prefer it over the alternatives. It’d be a huge waste not to use this as a reward signal—cheap to collect, and *exactly* aligned with what your user base wants.

The more users you have, the better RLHF you can do. And then the more users you gain.

Jim Fan # 5th April 2023, 4:45 am
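
This is speculation about MidJourney's internals, but the objective Jim describes is the standard pairwise reward-modeling loss from RLHF. A generic sketch, where reward_model, chosen and rejected are placeholders for a scoring network and batches of images:

```python
import torch.nn.functional as F

def reward_loss(reward_model, chosen, rejected):
    """Pairwise (Bradley-Terry) objective: the image the user chose to
    upscale should score higher than the alternatives shown alongside it."""
    return -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
```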

More capable models can better recognize the specific circumstances under which they are trained. Because of this, they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated.

Sam Bowman # 5th April 2023, 3:44 am

Scaling laws allow us to precisely predict some coarse-but-useful measures of how capable future models will be as we scale them up along three dimensions: the amount of data they are fed, their size (measured in parameters), and the amount of computation used to train them (measured in FLOPs). [...] Our ability to make this kind of precise prediction is unusual in the history of software and unusual even in the history of modern AI research. It is also a powerful tool for driving investment since it allows R&D teams to propose model-training projects costing many millions of dollars, with reasonable confidence that these projects will succeed at producing economically valuable systems.

Sam Bowman # 5th April 2023, 3:32 am
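
For a concrete example of such a prediction, the parametric fit from the Chinchilla paper (Hoffmann et al. 2022) estimates final training loss directly from parameter count and token count:

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss L(N, D) = E + A/N**alpha + B/D**beta,
    using the coefficients fitted in Hoffmann et al. (2022)."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# e.g. a 70B-parameter model trained on 1.4 trillion tokens
print(chinchilla_loss(70e9, 1.4e12))
```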

Beyond these specific legal arguments, Stability AI may find it has a “vibes” problem. The legal criteria for fair use are subjective and give judges some latitude in how to interpret them. And one factor that likely influences the thinking of judges is whether a defendant seems like a “good actor.” Google is a widely respected technology company that tends to win its copyright lawsuits. Edgier companies like Napster tend not to.

Timothy B. Lee # 3rd April 2023, 3:38 pm