A couple of things this week reminded me how important it is to have a decent set of tests for your code.
The first came out of me being asked to add a piece of functionality to an application. It turned out that said functionality had always existed, but had been completely broken since launch. Further investigation showed a stored procedure with a long chain of IF ... ELSE IF ... ELSE IF blocks. Near the bottom, a developer had added a new feature, guarded by an IF statement. This broke the existing logic, meaning a large number of cases, including the default, could never be executed.
I could picture immediately what had happened. The new feature had been added, a quick functional test that the new feature worked was performed, and everyone involved went away happy without realising that they'd broken almost every other path through the code. If only there had been a suite of tests to go red at the point where defenceless code met careless programming.
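The failure mode translates to any language with branch chains. Here is a minimal sketch of it in Python rather than T-SQL, with invented condition names and handlers, showing how a carelessly inserted branch can shadow everything below it:

```python
def route_order(order_type):
    """Dispatch on order_type; the final else is meant to be the default path."""
    if order_type == "standard":
        return "standard handler"
    elif order_type == "priority":
        return "priority handler"
    # New feature inserted here: its condition is far too broad, so every
    # order that falls through to this point is swallowed by the new branch...
    elif order_type is not None:
        return "new feature handler"
    # ...leaving the remaining branches, including the default, as dead code.
    elif order_type == "bulk":
        return "bulk handler"
    else:
        return "default handler"
```

A quick manual check that the new feature works would pass, while a test asserting that `route_order("bulk")` still reaches the bulk handler would have gone red immediately.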
The second was a team who fell behind on their sprint because an application they were working on had some unexpected, undocumented functionality, which didn't come to light until they deployed a new version to their staging environment and began end-to-end regression testing. (Yes, this team was a lot more conscientious than the previous example.) Which is the other side of having a decent set of tests: they don't merely indicate whether something is broken. A good set of tests will reveal intention, tell you what the application is supposed to do, and warn of any awkward corner cases.
(It's also likely to be more current than a wiki page. Unlike documentation, it's harder for a test to fall out of date as the feedback is immediate - your test suite fails when the original criteria no longer match those expressed in the code.)
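As a concrete illustration of tests acting as living documentation, a handful of well-named tests can read as a specification. This is a hypothetical sketch (the pricing function and discount codes are invented for the example, in pytest-style plain assertions):

```python
def apply_discount(total, code):
    """Hypothetical pricing rule, here only to give the tests something to document."""
    if code == "SAVE10":
        return round(total * 0.9, 2)
    return total

# Each test name states one piece of intended behaviour; together they
# describe what the code is supposed to do, and fail the moment it drifts.
def test_valid_code_takes_ten_percent_off():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100.0, "TYPO") == 100.0

def test_discount_applies_to_zero_total():
    # The awkward corner case, pinned down rather than left to a wiki page.
    assert apply_discount(0.0, "SAVE10") == 0.0
```

Unlike a wiki page, these statements of intent are checked on every run.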
In both cases the time lost investigating the problem was far greater than the time it would have taken to write the tests in the first place, and there was no certainty that the fixes wouldn't themselves create or expose some other unexpected problem. Which is why I'm not convinced by the idea of skipping even the most basic level of test coverage: it's not so much saving time as borrowing it from the future.