Why I Don’t Do TDD as a Java Developer
by Shai Almog, November 2022

The good, the bad and the worse

I recently gave a talk about debugging to the London Java Community. During the Q&A portion, someone asked about my approach to Test Driven Development. In the past, I viewed the practice in a fairly positive light: it gets you writing a lot of tests, and how bad can that be?

But as time goes on, I see it differently: as a very limited tool with very specific use cases. It doesn’t fit the type of projects I build, and it often hinders the fluid process it’s supposed to foster. But let’s step back for a second. I really liked this post, which separates the different flavors of TDD and their respective problems. Let’s simplify it a bit and clarify one thing up front: every PR should have good test coverage. That isn’t TDD. That’s just good programming.

TDD is much more than that. With TDD, we define the constraints first and then solve the problem. Is that approach better than solving the problem first and then verifying the constraints? That’s the basic premise of TDD versus just writing good test coverage.

TDD is an interesting approach. It’s especially useful when working with loosely typed languages, where it fills the role of a strict compiler and linter.

There are other cases where it makes sense: when we’re building a system with very well-defined inputs and outputs. I’ve encountered many of these cases while creating courses and training materials. It sometimes happens with real-world data too, for example when we have middleware that processes the data and outputs it in a predefined format.

The idea is to construct an equation with an unknown in the middle: the tests pin down both ends, and the coding fills in the blanks. In such cases, it’s very convenient.
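To make that concrete, here’s a minimal sketch in Java with JUnit 5 of what “filling in the blanks” can look like. The CurrencyFormatter class and its format method are hypothetical placeholders for a middleware component with a predefined output format; in the TDD flow, the test exists first and the implementation is written until it passes.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Test-first: the input/output pairs are fixed by the spec,
// so the test can be written before any implementation exists.
class CurrencyFormatterTest {

    @Test
    void formatsCentsAsDollarsAndCents() {
        // The constraint: 1999 cents must render as "$19.99".
        assertEquals("$19.99", CurrencyFormatter.format(1999));
    }

    @Test
    void formatsZeroWithoutSpecialCasing() {
        assertEquals("$0.00", CurrencyFormatter.format(0));
    }
}

// The implementation that "fills in the blank" the tests leave open.
class CurrencyFormatter {
    static String format(int cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}
```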

“Test Driven Development is double-entry bookkeeping. Same discipline. Same logic. Same result.” - Uncle Bob Martin

I would argue that testing in general is like double-entry bookkeeping, and yes, we should get testing done. The question is whether we should build our code based on our tests or vice versa. Here the answer is not so simple.

If we have a pre-existing system with tests, TDD makes all the sense in the world. But writing tests for a system that hasn’t even been built yet? There are cases where that makes sense, but not nearly as often as one would think.

The big claim for TDD is the design aspect: the tests effectively become the system design, and then we implement that design. The problem is that we can’t debug a design. In the past, I worked on a project for a large Japanese company. This company had the largest, most comprehensive set of design books, and based on those specifications it created thousands of tests. Our system had to pass a huge number of those tests. Note that most of them weren’t even automated.

There were bugs in the tests. There were several competing implementations, yet none of them had surfaced those bugs. Why? They all used the same reference implementation source code. We were the first team to leave it out and do a clean-room implementation. That shared code allowed the bugs to persist, some of them serious performance bugs that affected all previous releases.

But the real problem was slow progress; the company simply couldn’t move fast enough. TDD proponents will be quick to point out that a TDD project is easier to refactor because the tests guarantee we won’t have regressions. But that applies just as much to projects where the tests were written after the fact.

TDD focuses heavily on fast unit tests. It’s impractical to run slow integration tests, or long-running tests that take all night, inside a TDD loop. So how do you verify scale and integration with a major system?
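For what it’s worth, the common workaround (a sketch of a convention, not something TDD itself prescribes) is to tag the slow tests so the fast unit suite stays in the inner loop and the integration suite runs separately, say in a nightly build. The tag name and the OrderServiceIT class below are illustrative assumptions:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Slow end-to-end checks are tagged so the build can exclude them from
// the fast feedback loop and run them in a separate (e.g. nightly) job.
@Tag("integration")
class OrderServiceIT {

    @Test
    void orderSurvivesTheFullPipeline() {
        // Talks to a real database and message broker, so it takes
        // seconds or minutes rather than milliseconds.
        // ...setup, execution and assertions omitted in this sketch...
    }
}
```

Build tools such as Maven Surefire/Failsafe or Gradle can then include or exclude tests by tag, which keeps the fast inner loop intact while the scale and integration questions get answered elsewhere.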

In a perfect world, everything would just fall into place like Lego bricks. I don’t live in that world. In my world, integration tests fail miserably, and those are the worst failures, with hard-to-track-down bugs. I’d much rather fail in the unit tests, since those are easy to fix. But unit tests, even with perfect coverage, don’t properly exercise the interconnects. We need integration tests, and they’re the ones that find the most hideous bugs.

As a result, TDD places more emphasis on the “nice to have” unit tests than on the essential integration tests. Yes, you should have both. But if I had to pick, I must have integration tests, and they don’t fit cleanly into the TDD process.

I choose how to test on a case-by-case basis. If there’s a case where it’s natural to write the test beforehand, I’ll do that. But in most cases, it just feels more natural to me to write the code first. Reviewing coverage numbers is very helpful when writing tests, and it’s something I do after the fact.

As I mentioned earlier, I check coverage after the fact, and I do it mostly for integration tests. I still like unit tests and monitor coverage there too, because I want good coverage there as well. But for quality, it’s the integration tests that matter. A PR needs unit tests; I don’t care whether we wrote them before or after the implementation. We should judge by the results.

When Tesla was ramping up Model 3 production, the company went through “production hell.” A major source of the problems was Musk’s attempt to automate everything. The Pareto principle applies perfectly to automation: some things are just too resistant to it, and forcing the issue makes the whole process that much worse.

The one area where this really falls apart is UI testing. Solutions like Selenium have made huge strides in testing web front ends. Still, the complexity is tremendous and the tests are very fragile. We end up with hard-to-maintain tests. Worse, we become reluctant to refactor the UI because we don’t want to rewrite the tests.
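As a small illustration of that fragility (a hedged sketch; the URL and selectors are made up), a Selenium test like this is coupled to the exact DOM structure, so an innocent markup change breaks it even though the feature still works for users:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// The test is tied to the page's DOM structure and CSS classes.
// Renaming a class or wrapping the button in a new div breaks it,
// even though nothing changed from the user's perspective.
public class CheckoutUiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://shop.example.com/cart");
            driver.findElement(By.cssSelector("div.cart > ul > li:nth-child(1) button.buy"))
                  .click();
            String total = driver.findElement(By.id("total-price")).getText();
            if (!total.equals("$19.99")) {
                throw new AssertionError("Unexpected total: " + total);
            }
        } finally {
            driver.quit();
        }
    }
}
```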

We can probably cover more than 80% of the functionality, but there’s a point of diminishing returns for automation. TDD is problematic in those environments: the functionality itself is easy to build, but the tests we create become unstable.

I’m not against TDD, but I don’t recommend it, and in practice I don’t use it. When it makes sense to start with a test, I do that, but that isn’t really TDD. I judge code by its results. TDD can provide good results, but it often puts more emphasis on unit tests, while integration tests are more important for quality in the long run.

Automation is great, until it isn’t. There’s a point where automated tests make little sense. Accepting that and focusing our efforts in a productive direction would save us a lot of time and effort.

This comes from my bias as a Java developer who prefers type-safe, strict languages. Languages such as JavaScript and Python can benefit from a larger amount of testing because of their flexibility. That’s why TDD makes more sense in those environments.

In short, testing is good. TDD doesn’t make testing better, though. It’s an interesting methodology if it works for you, and in some cases it’s a huge win. But the idea that TDD is necessary, or even that it will significantly improve the resulting code, doesn’t hold up.
