Chapter 6 - Testing with AI
Testing matters more than ever with AI. AI hallucinates, and sooner or later it will break your existing code, so the real question is when you will catch it. Hopefully before your users do. Because AI is not fully predictable and has precision issues, you need tests to compensate. Some tests are harder to write than others, but we need them anyway, in different shapes and forms.
Without tests, AI will break your code; it's just a matter of time. Your mocks might fool you, and then you will find out in production, which is not the best way to find out. As long as you have good test diversity and decent coverage, you will be fine, because tests are the ultimate guardrails for AI.
Before AI, aiming for 100% test coverage sounded crazy; now it is a must-have. Without it, AI will introduce P0 bugs more often than you can imagine. So before we move on to more advanced use cases for coding agents, we need more tests, and we need good, reliable tests in place.
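To make that concrete, here is a minimal sketch of the kind of guardrail test this chapter is asking for: a plain unit test that pins down today's behavior before an agent is allowed to touch the code. The module pricing and the function calculate_discount are hypothetical names, used only for illustration.

```python
# test_pricing.py -- a minimal regression guardrail (hypothetical example).
# The module `pricing` and the function `calculate_discount` are assumed
# names, used only to illustrate the idea.
import pytest

from pricing import calculate_discount


def test_gold_tier_discount_is_pinned():
    # Pin the behavior we rely on today, so an AI edit that changes it fails CI.
    assert calculate_discount(price=100.0, tier="gold") == 85.0


def test_unknown_tier_raises():
    # Pin the error path too; silent changes here are easy to miss.
    with pytest.raises(ValueError):
        calculate_discount(price=100.0, tier="platinum-ish")
```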
Testing is a spectrum. You test left and you test right (yes, you test in production too). The point is to test all the things; there is no such thing as too many tests anymore:
- plain assertions and property-based testing
- every language, including lower-level ones like C
- DevOps code such as Terraform
- the frontend
- contract-based testing and REST APIs
- lambdas and anything that talks to AWS
- even hard-to-test things like static/final members
- our own tests (yes, we test the tests)
- and we mock whatever we need to mock
Two quick sketches below show what a couple of these look like in practice.
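Property-based testing deserves a closer look because it probes inputs you would never think to write by hand, which is exactly where AI-introduced regressions like to hide. Below is a minimal sketch using the hypothesis library; normalize_username is a hypothetical function, defined inline only so there is something to test.

```python
# A property-based testing sketch using the hypothesis library.
# `normalize_username` is a hypothetical function, defined inline for the example.
from hypothesis import given, strategies as st


def normalize_username(raw: str) -> str:
    # Toy implementation: trim surrounding whitespace and lowercase.
    return raw.strip().lower()


@given(st.text())
def test_normalize_is_idempotent(raw):
    # Property: normalizing an already-normalized value changes nothing.
    once = normalize_username(raw)
    assert normalize_username(once) == once


@given(st.text())
def test_normalize_strips_surrounding_whitespace(raw):
    # Property: the result never starts or ends with whitespace.
    result = normalize_username(raw)
    assert result == result.strip()
```

The value of this style is that the library generates the weird inputs (empty strings, emoji, combining characters) for you, instead of relying on whoever wrote the test to imagine them.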
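And for code that talks to AWS, the usual trick is to keep the network out of the unit test entirely. One common pattern is to inject the client and hand the test a mock, as in this sketch; count_report_objects, the bucket, and the prefix are hypothetical names.

```python
# Sketch: testing code that talks to AWS by injecting a mocked client.
# `count_report_objects`, the bucket, and the prefix are hypothetical examples.
from unittest.mock import Mock


def count_report_objects(s3_client, bucket: str, prefix: str) -> int:
    # In production this receives a real boto3 S3 client.
    response = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return response.get("KeyCount", 0)


def test_count_report_objects_queries_the_expected_bucket():
    fake_s3 = Mock()
    fake_s3.list_objects_v2.return_value = {"KeyCount": 3}

    assert count_report_objects(fake_s3, "reports-bucket", "2024/") == 3
    # Also verify we asked S3 for the right bucket and prefix.
    fake_s3.list_objects_v2.assert_called_once_with(
        Bucket="reports-bucket", Prefix="2024/"
    )
```

This keeps the test fast and deterministic, but remember the warning above: mocks can fool you, so back them up with a thin layer of contract or integration tests against the real service.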