While AI tools can assist in maintaining coding standards, identifying potential bugs, and suggesting improvements that align with our established best practices and patterns, code approvals are performed by Bitwarden team members.
This doesn’t address the underlying issues. By using these tools, you lend them validation and contribute to the larger systemic problems: they are trained unethically on datasets used without the consent or permission of the authors, and they require massive amounts of resources.
So you’ve said yourself here that a colleague made a mistake that you would never have caught. Who’s to say Copilot didn’t also make an error that you didn’t catch? You’ve proven by example that even with human review, errors go undetected, and AI is error-prone. And unlike your colleague, whom you can ask directly, you will never fully understand the reasoning behind an AI’s decisions.
How many small bugs have these AI tools already left littered throughout the codebase that the marlins of the world have failed to catch? When these small issues begin to compound into larger ones, and you’re unable to ask the AI why such-and-such code was written weeks, months, or a year ago, because AI does not work that way, what then?