Is there some formal way of quantifying potential flaws or risk, and of ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure? Or a risk assessment of some kind?
Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.
But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
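To make the kind of thing I'm imagining concrete, here's a toy sketch in the spirit of risk-based testing: score each area by likelihood and impact, then spend testing effort top-down. The component names, scores, and the 1-5 scales are all invented for illustration; this isn't any standard's official method.

```python
# Illustrative only: a toy likelihood x impact risk score used to
# prioritise where to concentrate tests. All names and numbers are made up.

RISK_AREAS = {
    # component: (likelihood of defect 1-5, impact if it fails 1-5)
    "user_input_parsing":   (4, 4),
    "payment_processing":   (3, 5),
    "report_rendering":     (2, 2),
    "batch_data_migration": (3, 5),
}

def risk_score(likelihood: int, impact: int) -> int:
    """Classic risk-matrix product; higher means test it harder."""
    return likelihood * impact

def prioritise(areas: dict[str, tuple[int, int]]) -> list[tuple[str, int]]:
    """Return components ordered by descending risk score."""
    scored = [(name, risk_score(l, i)) for name, (l, i) in areas.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for name, score in prioritise(RISK_AREAS):
        print(f"{score:>2}  {name}")
```

Something like this at least gives you an argument for *where* the deep tests should go, even if the scores themselves are subjective.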
So true lol. Mgmt just announced a directive at my work last week that code must have 95-100% coverage.
Meanwhile they hire contractors from India who write the dumbest, most useless tests possible. I've worked with many great Indian devs, but the contractors we use today all seem like a step down in quality. More work for me, I guess.
It's always fun to hear management pushing code coverage. On its own it's a fairly useless metric: it's easy to rack up coverage without actually testing anything. I've seen unit tests that consisted of nothing but starting the whole program and letting it run, without asserting anything or checking any outputs.
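As a concrete (made-up) illustration, both of these tests produce identical coverage numbers, but only the second one can ever catch a bug:

```python
# Hypothetical function under test, invented for the example.
def discount(price: float, is_member: bool) -> float:
    """Apply a 10% member discount."""
    return price * 0.9 if is_member else price

def test_discount_coverage_only():
    # Executes every line of discount(), so coverage tools count it as
    # fully covered -- but it asserts nothing and can never fail.
    discount(100.0, True)
    discount(100.0, False)

def test_discount_actually_checks_something():
    # Same coverage, but this one fails if the behaviour regresses.
    assert discount(100.0, True) == 90.0
    assert discount(100.0, False) == 100.0
```

A 95-100% coverage mandate treats both of those as equally valuable, which is exactly the problem.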