- Is anyone else tired of AI generated blog posts about AI generated code? What does the author even get out of it? Upvotes?
- This seems to be focused on Python, but for all the TS devs out there, the equivalent you'll see is implicit `any` errors. A quick word of warning on having LLMs fix those: they love to use explicit `any`s or perform `as any` casts. That makes the lint error disappear, but keeps the actual logic bug in the code.
Even if you ask it not to use `any` at all, it'll cast the type to `unknown` and "narrow" it by performing checks. The problem is that this may be syntactically correct but completely meaningless, since it'll narrow it down to a type that doesn't match the actual runtime data.
The biggest problem here is that all of these are valid code patterns, but LLMs tend to abuse them more often than they use them correctly.
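To make that failure mode concrete, here's a minimal sketch (the `User` interface and the JSON payload are invented for illustration) of how an `as any` cast silences the compiler while the underlying logic bug survives:

```typescript
interface User {
  name: string;
}

// Hypothetical API payload whose real shape does NOT match User:
const payload: unknown = JSON.parse('{"username": "alice"}');

// The LLM-style "fix": a double cast makes the type error disappear...
const user = payload as any as User;

// ...but the logic bug is still there: the real field is "username",
// so user.name is undefined at runtime.
console.log(user.name); // prints: undefined
```

The compiler (and the linter, once `no-explicit-any` is silenced) is perfectly happy with this, which is exactly why the bug ships.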
- Use a linter that can auto-fix some of the problems, and use an automatic formatter. Ruff can do both, which will decrease your cleanup workload.
Don't get too hung up on typing. Python's duck typing is a feature, not a bug. It's OK to have loose types.
On duplicate code: in general you should see at least two examples of a pattern before trying to abstract it. Make sure the duplication/similarity is semantic and not incidental. If you abstract away incidental duplication, you will very quickly find yourself in a situation where the cases diverge and your abstraction gets in your way.
In general coding agents are technical debt printers. But you can still pay it off.
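A small TypeScript sketch of incidental vs. semantic duplication (the label helpers are hypothetical):

```typescript
// These two helpers are textually identical today:
const formatUserLabel = (name: string): string => name.trim().toLowerCase();
const formatSkuLabel = (sku: string): string => sku.trim().toLowerCase();

// But the duplication is incidental: user labels and SKU labels change
// for different reasons. If SKUs later need an uppercase prefix, a shared
// formatLabel() abstraction would grow flag parameters or have to be
// unwound. Keeping them separate costs one duplicated line; a premature
// abstraction can cost a refactor.
```

The duplicated line is cheap insurance here; the abstraction only earns its keep once the two call sites are known to change together.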
- I am somewhat confused by this post. If the AI assistant is doing such a bad job that it lights up the linting tool, and further, is incapable of processing the lint output to fix the issues, then... maybe the AI tool is the problem?
If I hired a junior dev and had to give them explicit instructions to not break the CI/lint, and they found NEW ways to break the CI/lint again that were outside of my examples, I'd hopefully be able to just let them go before their probation period expired.
Has the probation period for AI already expired? Are we stuck with it? Am I allowed to just write code anymore?
- Part of my pattern now is forcing lint before push, requiring code coverage to stay above a certain threshold, and requiring all tests to pass. Sometimes this goes awry, but honestly I have the same problem with dev teams; the same thing should be done there too. I've had devs fix lint errors in these same bad ways, and "fix" tests in bad ways, just like LLMs. The LLM actually listens to my rules a bit better than human devs do, and the pre-commit and pre-merge checks enforce it.
- Would `PostToolUse` be a better place to do this than pre-commit? (Trigger on `"^(Edit|Write|MultiEdit)$"`.)
For lint issues that are auto-fixable, the tool-use hook can trigger formatting on that file and fix it right away.
For type issues (ts, pyright), you can return something like `{"hookSpecificOutput":{"additionalContext":$escaped},"continue":true}` to let the edit complete but let Claude know that there are errors to fix next turn.
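For reference, a sketch of what that hook wiring might look like in a Claude Code `settings.json` (the script path is hypothetical, and the exact schema should be checked against the hooks documentation):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "^(Edit|Write|MultiEdit)$",
        "hooks": [
          { "type": "command", "command": "scripts/lint-hook.sh" }
        ]
      }
    ]
  }
}
```

The command script would run the formatter/type checker on the edited file and emit the JSON above on stdout when there are remaining errors.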
- If AI is writing and fixing all code, does linting even matter?
- Linting and proper tests are the reason why I can use even simple models to get a lot done—preferably writing the tests with a second model.
- TL;DR: Enable strict linting on CI, don't allow AI to change linting configuration.
