Simon Willison’s Weblog

Quotations in Nov

Filters: Type: quotation × Month: Nov × Sorted by date


This is what I constantly tell my students: The hard part about doing a tech product for the most part isn’t what beginners think makes tech hard — the hard part is wrangling systemic complexity in a good, sustainable and reliable way.

Many non-tech people, for example, look at programmers and think the hard part is knowing what this garble of weird text means. But this is the easy part. And if you are a person who would think it is hard, you probably don’t know about all the demons out there that will come to haunt you if you don’t build a foundation that helps you actively keep them away.

atoav # 30th November 2023, 9:18 pm

This is nonsensical. There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs’ books.

U.S. District Judge Vince Chhabria # 26th November 2023, 4:13 am

To some degree, the whole point of the tech industry’s embrace of “ethics” and “safety” is about reassurance. Companies realize that the technologies they are selling can be disconcerting and disruptive; they want to reassure the public that they’re doing their best to protect consumers and society. At the end of the day, though, we now know there’s no reason to believe that those efforts will ever make a difference if the company’s “ethics” end up conflicting with its money. And when have those two things ever not conflicted?

Lucas Ropek # 23rd November 2023, 8:41 pm

We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.

@OpenAI # 22nd November 2023, 6:04 am

I remember that they [Ev and Biz at Twitter in 2008] very firmly believed spam was a concern, but, “we don’t think it’s ever going to be a real problem because you can choose who you follow.” And this was one of my first moments thinking, “Oh, you sweet summer child.” Because once you have a big enough user base, once you have enough people on a platform, once the likelihood of profit becomes high enough, you’re going to have spammers.

Del Harvey # 22nd November 2023, 4:59 am

Sam Altman expelling Toner with the pretext of an inoffensive page in a paper no one read would have given him a temporary majority with which to appoint a replacement director, and then further replacement directors. These directors would, naturally, agree with Sam Altman, and he would have a full, perpetual board majority—the board, which is the only oversight on the OA CEO. Obviously, as an extremely experienced VC and CEO, he knew all this and how many votes he (thought he) had on the board, and the board members knew this as well—which is why they had been unable to agree on replacement board members all this time.

Gwern # 22nd November 2023, 3:53 am

The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing.

Ilya Sutskever # 21st November 2023, 6:59 pm

And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.

Matt Levine, in a hypothetical # 20th November 2023, 9:12 pm

The company pressed forward and launched ChatGPT on November 30. It was such a low-key event that many employees who weren’t directly involved, including those in safety functions, didn’t even realize it had happened. Some of those who were aware, according to one employee, had started a betting pool, wagering how many people might use the tool during its first week. The highest guess was 100,000 users. OpenAI’s president tweeted that the tool hit 1 million within the first five days. The phrase low-key research preview became an instant meme within OpenAI; employees turned it into laptop stickers.

Inside the Chaos at OpenAI # 20th November 2023, 4:38 am

The EU AI Act now proposes to regulate “foundational models”, i.e. the engine behind some AI applications. We cannot regulate an engine devoid of usage. We don’t regulate the C language because one can use it to develop malware. Instead, we ban malware and strengthen network systems (we regulate usage). Foundational language models provide a higher level of abstraction than the C language for programming computer systems; nothing in their behaviour justifies a change in the regulatory framework.

Arthur Mensch, Mistral AI # 16th November 2023, 11:29 am

I’ve resigned from my role leading the Audio team at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’.

[...] I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is “the effect of the use upon the potential market for or value of the copyrighted work”. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.

But setting aside the fair use argument for a moment — since ‘fair use’ wasn’t designed with generative AI in mind — training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works.

Ed Newton-Rex # 15th November 2023, 9:31 pm

[On Meta’s Galactica LLM launch] We did this with an 8-person team, which is an order of magnitude fewer people than other LLM teams at the time.

We were overstretched and lost situational awareness at launch by releasing a demo of a *base model* without checks. We were aware of what the potential criticisms would be, but we lost sight of the obvious in the workload we were under.

One of the considerations for a demo was we wanted to understand the distribution of scientific queries that people would use for LLMs (useful for instruction tuning and RLHF). Obviously this was a free goal we gave to journalists who instead queried it outside its domain. But yes we should have known better.

We had a “good faith” assumption that we’d share the base model, warts and all, with four disclaimers about hallucinations on the demo—so people could see what it could do (openness). Again, obviously this didn’t work.

Ross Taylor # 15th November 2023, 1:15 am

Two things in AI may need regulation: reckless deployment of certain potentially harmful AI applications (same as any software really), and monopolistic behavior on the part of certain LLM providers. The technology itself doesn’t need regulation any more than databases or transistors. [...] Putting size/compute caps on deep learning models is akin to putting size caps on databases or transistor count caps on electronics. It’s pointless and it won’t age well.

François Chollet # 13th November 2023, 1:46 am

Did you ever wonder why the 21st century feels like we’re living in a bad cyberpunk novel from the 1980s?

It’s because these guys read those cyberpunk novels and mistook a dystopia for a road map. They’re rich enough to bend reality to reflect their desires. But we’re [sci-fi authors] not futurists, we’re entertainers! We like to spin yarns about the Torment Nexus because it’s a cool setting for a noir detective story, not because we think Mark Zuckerberg or Andreessen Horowitz should actually pump several billion dollars into creating it.

Charles Stross # 11th November 2023, 1:09 am

One of my fav early Stripe rules was from incident response comms: do not publicly blame an upstream provider. We chose the provider, so own the results—and use any pain from that as extra motivation to invest in redundant services, go direct to the source, etc.

Michael Schade # 5th November 2023, 10:53 pm

If posts in a social media app do not have URLs that can be linked to and viewed in an unauthenticated browser, or if there is no way to make a new post from a browser, then that program is not a part of the World Wide Web in any meaningful way.

Consign that app to oblivion.

JWZ # 28th November 2022, 6:22 am

... it [ActivityPub] is crucially good enough. Perfect is the enemy of good, and in ActivityPub we have a protocol that has flaws but, crucially, that works, and has a standard we can all mostly agree on how to implement—and eventually, I hope, agree on how to improve.

Andrew Godwin # 19th November 2022, 4:02 pm

These kinds of biases aren’t so much a technical problem as a sociotechnical one; ML models try to approximate biases in their underlying datasets and, for some groups of people, some of these biases are offensive or harmful. That means in the coming years there will be endless political battles about what the ‘correct’ biases are for different models to display (or not display), and we can ultimately expect there to be as many approaches as there are distinct ideologies on the planet. I expect we’ll move into a fractal ecosystem of models, and I expect model providers will ‘shapeshift’ a single model to display different biases depending on the market it is being deployed into. This will be extraordinarily messy.

Jack Clark # 16th November 2022, 11:04 pm

htmlspecialchars was a very early function. Back when PHP had less than 100 functions and the function hashing mechanism was strlen(). In order to get a nice hash distribution of function names across the various function name lengths, names were picked specifically to make them fit into a specific length bucket. This was circa late 1994 when PHP was a tool just for my own personal use and I wasn’t too worried about not being able to remember the few function names.

Rasmus Lerdorf # 22nd November 2021, 7:23 pm
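
A toy sketch of the scheme Lerdorf describes, in TypeScript (the real table lived in PHP’s C source; all names here are illustrative): when the hash of a function name is just its length, every same-length name collides into one bucket, so spreading names across distinct lengths is what keeps lookups fast.

```typescript
// Toy model of a strlen()-based function table (not PHP's actual code).
// The "hash" of a name is its length, so all names of the same length
// share a bucket and lookup degrades to a linear scan within it.
const buckets = new Map<number, string[]>();

function register(name: string): void {
  const bucket = buckets.get(name.length) ?? [];
  bucket.push(name);
  buckets.set(name.length, bucket);
}

function lookup(name: string): boolean {
  return (buckets.get(name.length) ?? []).includes(name);
}

// Picking names of different lengths keeps the buckets balanced:
["echo", "strlen", "sprintf", "htmlspecialchars"].forEach(register);
console.log(lookup("htmlspecialchars")); // true
```

Under this scheme a 16-character name like htmlspecialchars is a feature, not an accident: it fills an otherwise empty length bucket.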

Many Web3 boosters see themselves as disruptors, but “tokenize all the things” is nothing if not an obedient continuation of “market-ize all the things”, the campaign started in the 1970s, hugely successful, ongoing. I think the World Wide Web was the real rupture — “Where … is the money?” — which Web 2.0 smoothed over and Web3 now attempts to seal totally.

Robin Sloan # 18th November 2021, 9:55 pm

One could never price a thirty year mortgage in bitcoin because its volatility makes it completely unpredictable and no sensible bank could calculate the risk of covering that debt. A world in which Elon Musk can tweet two emojis and your home depreciates 80% in value is a dystopia.

Stephen Diehl # 10th November 2021, 7:45 am

The open secret Jennings filled me in on is that OpenStreetMap (OSM) is now at the center of an unholy alliance of the world’s largest and wealthiest technology companies. The most valuable companies in the world are treating OSM as critical infrastructure for some of the most-used software ever written. The four companies in the inner circle — Facebook, Apple, Amazon, and Microsoft — have a combined market capitalization of over six trillion dollars.

Joe Morrison # 20th November 2020, 9:11 pm

In general, reviewers should favor approving a CL [changelist] once it is in a state where it definitely improves the overall code health of the system being worked on, even if the CL isn’t perfect.

Google Standard of Code Review # 28th November 2019, 5:40 am

With a sufficient number of users of an API,
it does not matter what you promise in the contract:
all observable behaviors of your system
will be depended on by somebody.

Hyrum’s Law # 21st November 2019, 10:45 pm
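
A minimal TypeScript sketch of how this plays out (names hypothetical): the contract promises only a list of tags, but the implementation happens to return them sorted, and with enough users someone will depend on that.

```typescript
// Contract: "returns the user's tags". Nothing is promised about order.
function getTags(userId: string): string[] {
  // Implementation detail: the backing store happens to iterate in
  // sorted order, so callers *observe* sorted output.
  return ["admin", "beta", "editor"];
}

// Per Hyrum's Law, somebody depends on the observable behavior:
const primaryRole = getTags("u1")[0]; // silently assumes sorted order
// If the store switches to insertion order, this code breaks even
// though the documented contract never changed.
console.log(primaryRole);
```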

I have sometimes wondered how I would fare with a problem where the solution really isn’t in sight. I decided that I should give it a try before I get too old.

I’m going to work on artificial general intelligence (AGI).

I think it is possible, enormously valuable, and that I have a non-negligible chance of making a difference there, so by a Pascal’s Mugging sort of logic, I should be working on it.

John Carmack # 14th November 2019, 1:18 am

Whether you like it or not, whether you approve it or not, people outside of your design team are making significant design choices that affect your customers in important ways. They are designing your product. They are designers.

Daniel Burka # 25th November 2018, 7:03 pm

React is “value UI”. Its core principle is that UI is a value, just like a string or an array. You can keep it in a variable, pass it around, use JavaScript control flow with it, and so on. That expressiveness is the point — not some diffing to avoid applying changes to the DOM.

Dan Abramov # 24th November 2018, 5:58 pm
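
A minimal sketch of that idea in TypeScript/React (component and data names are illustrative): the element is an ordinary value you can store in a variable, branch on with plain control flow, and map over like any array element.

```tsx
import React from "react";

// UI as a value: keep an element in a variable, reuse it, map over it.
function Inbox({ unread }: { unread: number }) {
  // An element held in a variable, chosen with ordinary control flow:
  const badge = unread > 0 ? <strong>{unread}</strong> : <em>empty</em>;
  // Elements compose like values; the same badge is reused in each row:
  const rows = ["work", "personal"].map((label) => (
    <li key={label}>{label}: {badge}</li>
  ));
  return <ul>{rows}</ul>;
}
```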

The premise of “The Good Place” is absurdly high concept. It sounds less like the basis of a prime-time sitcom than an experimental puppet show conducted, without a permit, on the woodsy edge of a large public park.

Sam Anderson # 10th November 2018, 9:48 pm

If you stop thinking in terms of MVC you might notice that at its core, React is a runtime for effectful functions that don’t execute “once”, but run continuously while being anchored to a call tree.

Dan Abramov # 3rd November 2018, 9:51 pm
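
A hedged sketch of that framing (illustrative names): the component’s function body re-executes on every render, while its state stays anchored to the component’s position in the tree rather than to any single execution.

```tsx
import React, { useState, useEffect } from "react";

// The function body runs again on every render; the useState slot is
// anchored to this component's position in the call tree, not to one call.
function Clock() {
  const [now, setNow] = useState(() => new Date());

  useEffect(() => {
    // The effect keeps the "continuously running" function up to date.
    const id = setInterval(() => setNow(new Date()), 1000);
    return () => clearInterval(id); // cleaned up on unmount
  }, []);

  return <time>{now.toLocaleTimeString()}</time>;
}
```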

Every bitcoin proof-of-work mined is an incremental addition to a vast distributed summoning ritual powering the demon-soul at the heart of the maze, the computational equivalent of a Buddhist prayer wheel spinning in a Himalayan breeze.

Charles Stross # 3rd November 2018, 5:47 pm