Coders hate writing unit tests. They hate having to think about how their code is used by other modules, and they really hate having to cover all the bases of what might go wrong because, as they say, "no one will ever use my module in that way". I know this because I'm as guilty as any other coder on this point.
GUI developers are often the worst offenders, which is not helped by the fact that many GUI frameworks do not lend themselves easily to automated testing. Often they're not designed to run in a headless server environment, which is where your tests will run if you practise continuous integration, so GUI developers can struggle to get coverage above 50% in some cases. That doesn't mean a decent level of code coverage should be abandoned, however.
But unit testing matters: even if it proves nothing else, it proves that your code basically does what it says on the tin. If you freelance, this will cover you the next time there is a major issue in your software: that piece of your code fundamentally works, and you can prove it to your client or boss at the touch of a button. You can run all your tests at the touch of a button, right?
But how do you prove how much you tested? You need a test coverage tool for that. In the Java world, that might be Cobertura, JCoverage or something similar, but fundamentally they all do the same job: produce a report on how much of your code (lines, branches, etc.) is actually exercised by your tests, preferably in HTML or some other human-readable format.
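As a sketch of how this is wired up in practice, here is a Maven `pom.xml` fragment using the Cobertura plugin; the plugin coordinates are the standard ones, but the version number is illustrative, so check for the current release:

```xml
<!-- pom.xml build-plugin fragment: running `mvn cobertura:cobertura`
     generates a coverage report under target/site/cobertura.
     The version shown is illustrative. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <version>2.7</version>
  <configuration>
    <formats>
      <!-- HTML for humans, XML for the CI server -->
      <format>html</format>
      <format>xml</format>
    </formats>
  </configuration>
</plugin>
```

The XML format is worth enabling alongside HTML so a continuous integration server can track coverage over time.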
Coverage Alone is Useless
Of course, coverage by itself is useless. No one can prevent a lazy (or overworked) developer from just writing "pass-through" unit tests that never actually, er, test anything except that the code doesn't crash. Unless your opinion of the code under test is so low that not crashing counts as a success, that's a pretty worthless test. So coverage alone does not tell you how well a module is tested. A lack of coverage, however, tells you that the module is not tested at all.
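To make the point concrete, here is a minimal sketch of such a pass-through test. `PriceCalculator` is a hypothetical class invented for illustration; the test calls the code and checks nothing, yet a coverage tool will happily report every line it touches as covered:

```java
// A hypothetical class under test.
class PriceCalculator {
    double totalWithTax(double net, double rate) {
        return net + net * rate;
    }
}

class PassThroughTest {
    // A "pass-through" test: it runs the code but asserts nothing,
    // so it only proves the method doesn't throw. Coverage: 100%.
    // Value: close to zero.
    static void testTotalWithTax() {
        PriceCalculator calc = new PriceCalculator();
        calc.totalWithTax(100.0, 0.2); // result silently ignored!
    }
}
```

This test will go green no matter what `totalWithTax` actually returns, which is exactly the problem.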
A well-written unit test checks expectations after the code has run and fails the test if those expectations are not met. This is what we use assertions for (in JUnit or TestNG). That way we know the right things happened when the code was executed. We should also expect that changing an algorithm will cause some tests to fail; if not, then maybe the original algorithm wasn't that well tested after all. I would go so far as to say that I actually want tests to fail when I reimplement a piece of code (or change what it does, of course), because that proves the original tests were doing something useful.
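Here is the same hypothetical calculator tested properly. In JUnit you would write `assertEquals(expected, actual, delta)`; to keep this sketch self-contained, a tiny hand-rolled helper stands in for the framework assertion:

```java
// The same hypothetical class under test.
class TaxCalculator {
    double totalWithTax(double net, double rate) {
        return net + net * rate;
    }
}

class RealTest {
    // Stand-in for JUnit's assertEquals(expected, actual, delta).
    static void assertClose(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError(
                "expected " + expected + " but got " + actual);
        }
    }

    static void testTotalWithTax() {
        TaxCalculator calc = new TaxCalculator();
        // The expectation is stated up front. If someone reimplements
        // totalWithTax and changes its behaviour, this line fails --
        // which is exactly what we want.
        assertClose(120.0, calc.totalWithTax(100.0, 0.2));
    }
}
```

Unlike the pass-through version, this test pins down what the code is supposed to do, so a behavioural change cannot slip through unnoticed.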