The problem you face is that it's relatively easy to take a model and make it look like it's aligned. You ask GPT-4, "how do I end all of humanity?" and the model says, "I can't possibly help you with that." But there are a million and one ways to take that exact same question - pick your favorite - and still get the model to answer it, even though it initially would have refused. This reminds me a lot of the situation in adversarial machine learning. There, we had a very simple objective: classify the image correctly according to the original label. And yet, even though it was essentially trivial in principle to find the bugs, the community had a very hard time coming up with actually effective defenses. We wrote over 9,000 papers in ten years and have made very, very limited progress on this one small problem. You all have a harder problem and maybe less time.