Conversation
Signed-off-by: Ludovic Ortega <ludovic.ortega@adminafk.fr>
📝 Walkthrough

Adds a new AGENTS.md documentation file that provides guidelines for AI agents contributing to the project, covering project overview, development setup, build and test procedures, database migrations, code style, translations, pull request conventions, security reporting, documentation structure, and the code of conduct.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~5 minutes
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Line 72: Clarify that the example DB credentials in the AGENTS.md command
(DB_TYPE, DB_USER, DB_PASS) are only for local development and must not be used
in production; update the line containing "DB_TYPE=\"postgres\" DB_USER=postgres
DB_PASS=postgres pnpm migration:generate
server/migration/postgres/YourMigrationName" to include a short caveat or note
that these are local-dev defaults and recommend using secure,
environment-specific credentials (or a secrets manager) for non-local
environments.
- Around line 54-60: Update the CI workflow description to mention the project's
type-checking commands: add `pnpm typecheck` (and note the available variants
`typecheck:client` and `typecheck:server`) to the list of checks agents should
run locally and in CI (refer to the CI workflow summary and command list in
AGENTS.md); also rephrase the hedged sentence about tests to a direct statement
such as "This project does not include unit tests." so readers know there are no
unit test scripts to run.
I've personally never used an agent file. I have the feeling that it's very hard not to write things that are counterproductive. Not that I have a lot of experience. Recently there has been a study that seems to support this intuition: https://news.ycombinator.com/item?id=47034087 Sometimes it helps, often it doesn't. I leave this as a comment; it's just an intuition of mine, and I have especially little experience with agents. Feel free to ignore.
I do agree with the sentiment; however, people will continue to use agents to contribute regardless of whether or not we publish the AGENTS.md file. I think adding this in would at least enhance clarity on AI usage, style, and contribution standards, so that at a minimum contributions will follow a style and standard shaped by us.
I think the main question is: do we want to allow AI agents to author the PR code? I'm fine with using them to explain the codebase, for things like inline suggestions, or to ask specific questions. But from what I reviewed here, most of the outputs produced by agents are low-quality, very verbose, and touch a lot of things they shouldn't.
> I do agree with the sentiment; however, people will continue to use agents to contribute regardless of whether or not we publish the AGENTS.md file. I think adding this in would at least enhance clarity on AI usage, style, and contribution standards, so that at a minimum contributions will follow a style and standard shaped by us.

Agreed. Then should we publish an update to our docs to state as much? That any PR from AI is prohibited, but AI can be used to explain the codebase for those who wish to contribute?
I'd say something like: "PRs fully authored by AI agents are forbidden." The rest is fine.
For me it's already the case and covered by the first line of our contribution guide:
I think this is too harsh, or maybe said too broadly. Using AI to explain general concepts or some snippet of code should not need AI disclosure, similar to using a general search engine or other analytical tools for that. And grasping context and explaining things is what AIs excel at, anyway. I also don't think it is that helpful to know whether the person used AI to search the codebase or did it manually; both approaches are prone to misinterpretation or overlooking things. The only exception I could see is an agentic AI crawling the entire codebase and drawing conclusions from there (compared to generative/assistive AI that requires continuous user input). But this file is intended for agents anyway, so the wording might not matter that much here.
Fully authored AI-generated code with bad human review is still garbage. That's often what is happening. |
Description
The purpose of this file is to provide specific, actionable instructions for AI agents and automated tooling to follow when contributing to this repository. This helps ensure that AI-assisted contributions align with the project's standards, conventions, and procedures.
This file was generated by the Gemini CLI under my supervision.
How Has This Been Tested?
Screenshots / Logs (if applicable)
Checklist:
pnpm build
pnpm i18n:extract