Note: heavily inspired by https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md
Poppy has strict rules for AI usage:

- Non-code AI usage in any form must be disclosed. The vast majority of code contributions are assumed to be AI-assisted by default. If you produce documentation, Notion pages, or Slack messages that were AI-assisted, you must state the tool you used (e.g., Claude, Gemini), the extent to which the work was AI-assisted, and the prompt.
- The human in the loop must fully understand all AI-produced output. You must be able to explain what your changes do and how they interact with the greater system without the aid of AI tools.
Please remember that Poppy is operated by humans.
Every discussion and issue is read and reviewed by humans (and sometimes machines, too). Each is a boundary point at which people interact with each other and with the work being done. It is rude and disrespectful to approach this boundary with low-effort, unqualified work, because it shifts the burden of validation onto the maintainer.
In a perfect world, AI would produce high-quality, accurate work every time. But today, that reality depends on the driver of the AI.
Poppy now operates with plenty of AI assistance, and we embrace AI tools as a productive part of our workflow. Our strict AI policy is not born of an anti-AI stance, but of the growing number of sloppy contributions. It's the people, not the tools, that are the problem.
Feel free to open a PR to contribute to this discussion. The tech evolves rapidly, and so should our stance on these matters.