11th July 2025
Following the widespread availability of large language models (LLMs), the Django Security Team has received a growing number of security reports generated partially or entirely using such tools. Many of these contain inaccurate, misleading, or fictitious content. While AI tools can help draft or analyze reports, they must not replace human understanding and review.
If you use AI tools to help prepare a report, you must:
- Disclose which AI tools were used and specify what they were used for (analysis, writing the description, writing the exploit, etc).
- Verify that the issue describes a real, reproducible vulnerability that otherwise meets these reporting guidelines.
- Avoid fabricated code, placeholder text, or references to non-existent Django features.
Reports that appear to be unverified AI output will be closed without response. Repeated low-quality submissions may result in a ban from future reporting.
— Django’s security policies, on AI-Assisted Reports
Recent articles
- My fireside chat about agentic engineering at the Pragmatic Summit - 14th March 2026
- Perhaps not Boring Technology after all - 9th March 2026
- Can coding agents relicense open source through a “clean room” implementation of code? - 5th March 2026