
I’ve remained silent on this topic for far too long. But now I’m about to go off. Buckle up…

In the last few weeks I’ve had some experiences with unit tests during coding evaluations that have left me exasperated and infuriated. This isn’t the first time that I’ve run into these types of issues. But I’m finally fed up enough to proclaim loudly and proudly that:

Unit tests should have no place in the coding evaluations that you foist upon prospective candidates.

That may sound like heresy to some of you. (HINT: I don’t care.) But if it really bothers you that much, there’s a good chance that you’re part of the problem.


I’m not a heretic

Before I explain exactly why unit tests should play no part in your coding evaluations, I want to make it clear that I’m not taking a stand against unit testing in general. Of course you should be using unit tests in your dev environment. Of course you can vastly improve the quality of your coding efforts through copious unit testing. Of course you want all of your devs to be comfortable with unit testing.

So of course you should be including it in all of your coding evaluations… Right???


Allow me to explain…


Writing tests wastes time and can be disrespectful to the candidate

During the normal day-to-day operations in your dev environment, it’s typically unacceptable for someone to say, “I finished this task and submitted the work – but I didn’t write any unit tests because I simply ran out of time.” During normal “dev operations”, a task isn’t actually done until you’ve accounted for proper testing. But this standard should not apply during coding evaluations.

Most coding evaluations are timed. They give a set period in which you must fix a bug or build a feature or alter an existing feature in some way. Often, these time constraints can feel daunting – even to senior devs.

Imagine you have one hour to build some feature in React. Maybe it’s not that hard. Maybe you’re feeling pretty confident that you can knock this out. But in the middle of building it, you run into some kinda bug. You know… the kind where you sit there for a few minutes and just think, “Wait… What the heck’s going on?” Maybe you forgot to hook up a key handler and it takes you some time to realize what you’ve overlooked. Maybe you just made some stupid typo that’s not immediately apparent in your IDE. Regardless, the point is that, even in the simplest of tasks, sometimes you can “burn” 10-15 minutes rectifying something that you screwed up.

Eventually, you fix it. And you go on to build the complete feature right under the hour deadline. In fact, you built it well. You’re feeling pretty confident about the code that you cranked out.

But you didn’t get time to add any tests…

If your code is solid, and you completed the task, and you demonstrated a solid understanding of React, there’s no way in hell you should ever be marked down (or eliminated from contention) merely because you didn’t also have the time to slap unit tests onto your solution.

Let me be absolutely clear here. The mere act of writing a unit test is generally quite easy. Most of the testing frameworks have very similar syntaxes and they’re designed to help you write tests in a way that makes semantic sense. They use verbiage like:

it('Should enqueue the items to the queue', (done) => {...});

So it should feel fairly natural (to any established dev) to write tests in this manner.
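To make that concrete, here’s roughly what such a test looks like in full. This is a sketch, not code from any actual challenge: the toy Queue class and the bare-bones it() shim are my own stand-ins, so the snippet runs without installing Mocha or Jest (which normally provide it() for you).

const assert = require('node:assert');

// A toy queue, assumed purely for illustration.
class Queue {
  constructor() { this.items = []; }
  enqueue(item) { this.items.push(item); }
  print() { return [...this.items]; }
}

// Minimal stand-in for a test runner's it().
function it(description, testFn) {
  testFn(() => console.log('ok - ' + description));
}

it('Should enqueue the items to the queue', (done) => {
  const queue = new Queue();
  queue.enqueue(1);
  queue.enqueue(2);
  assert.deepStrictEqual(queue.print(), [1, 2]);
  done();
});

The shape is the same in every mainstream framework: a description string, a callback, a few assertions, and a done() signal for async work.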

But even though they can be syntactically self-explanatory, it can still take a little… nuance (and nuance equates to: time) to add unit tests that are actually meaningful and properly account for edge cases. The time it takes to implement these should not be a burden in your normal dev cycle. But it’s absolutely a burden when you’re staring down a timer during a coding evaluation.

A few weeks ago I completed a coding test where they wanted me to add a lot of features to an existing React codebase. And they told me that it should all be done… in 45 minutes. It wasn’t just that I had to add new components and get all the event handlers hooked up, but I had to be careful to do it in the exact style that already existed in the rest of the codebase. Furthermore, there were numerous CSS requirements that dictated precisely how the solution should look. So it wasn’t enough just to get the logic working. I also had to get everything matching the design spec. And again, I had to do that all in 45 minutes.

But of course, this wasn’t all they wanted. The requirements also said that all existing tests should pass and I should write new tests for any additional features that were added. And I was supposed to do all of that in 45 minutes. It was patently ridiculous.

If I’ve coded up a fully-functioning solution that meets the task’s requirements, but I didn’t get time to put proper unit tests on my new features, and you still want to eliminate me from contention, then… good. Believe me when I say that I don’t wanna work on your crappy dev team anyway.

But these aren’t the only problems with unit tests in coding challenges…


Broken tests

So maybe you’re not asking me to write tests. Maybe you just have a buncha unit tests in the codebase that need to pass before I can submit my solution? Sounds reasonable, right?


On numerous occasions, I’ve opened up a new codebase, in which I’m supposed to write some new solution, and found that the tests don’t pass out-of-the-box. Granted, if the “test” is that the codebase contains a bug, and the unit tests are failing because of that bug, and you now want me to fix that bug, then… OK. Fine. I get it.

But I’ve encountered several scenarios where I was supposed to be building brand new functionality. Yet when I open the codebase and run the tests that only exist to verify the legacy functionality – they fail. So then I spend the first 15 minutes of my precious evaluation time figuring out how to fix the tests on the base functionality, before I’ve even had a chance to write a single line of my new solution.

If this is the kinda test you wanna give me, then I don’t wanna work on your crappy dev team anyway.


Secret requirements

Here’s another <sarcasm>delightful</sarcasm> headache I’ve run into from your embedded unit tests: You’re not asking me to write new unit tests, but you’ve loaded a whole bunch of tests into the codebase that are designed to determine whether the new feature I’ve built performs according to the spec. So I carefully read through all the instructions, and I crank out a solution that satisfies all of the instructions, and it runs beautifully in a browser, but then I run the tests…

And they fail.

Then I go back and re-read all of the instructions, even more carefully than I did the first time. And – lo and behold – I’ve followed the instructions to a tee. But the unit tests still FAIL.

How do I remedy this? Well, I’ve gotta open up the unit test files and start working backward from the failures, trying to figure out why my instructions-compliant solution still fails your unit tests. That’s when I realize that the unit tests contain secret requirements.

For example, I ran into this scenario just yesterday. The React task had many features that should only display conditionally. Like, when the API returns no results, you should display a “No results found” <div>. But that div should not display if the API did in fact return results. And I coded it up to comply with that requirement. But the test still failed.

Why did it fail? Because the test was looking, hyper-specifically, for the “No results” <div> to be NULL. I coded it to use display: none. The original requirement merely stated that the <div> should not be displayed. It never stated that the resulting <div> must in fact be NULL. So to get the test to pass, I had to go back into my solution (the one that perfectly complied with the written instructions), and change the logic so it would NULL-out the <div>.

I had to do the same for several other elements that had similar logic. Because those elements also had their own unit tests – that were all expecting an explicit value of NULL.

If this had been made clear to me in the instructions, then I would’ve coded it that way from the beginning. But it was never stated as such in the instructions. So I had to waste valuable time in the coding test going back and refactoring my solution. I had to do this because the unit tests contained de facto “secret requirements”.
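Here’s a framework-free sketch of the two interpretations. The helper names are hypothetical (the real task used React components), but the distinction is exactly this:

const assert = require('node:assert');

// What the written spec allowed: keep the element, hide it with CSS.
function renderNoResultsCss(results) {
  return {
    tag: 'div',
    style: results.length > 0 ? 'display: none' : '',
    text: 'No results found',
  };
}

// What the hidden tests demanded: when there are results, render
// nothing at all, so a query for the element comes back null.
function renderNoResultsNull(results) {
  if (results.length > 0) return null;
  return { tag: 'div', text: 'No results found' };
}

Both satisfy “the div should not be displayed.” Only one satisfies a test that asserts the element is null — and nothing in the instructions told you which one the test expected.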

If you can’t be bothered to ensure that the unit tests in your coding challenge don’t contain secret requirements, then I absolutely have no desire to work on your crappy dev team anyway.


Illogical unit tests

Maybe you’re not asking me to crank out new unit tests, and maybe you’re not hiding “secret requirements” in your unit tests, and maybe all of the tests tied to the legacy code work just fine out-of-the-box. So that shouldn’t be any problem for me to comply with, right??


Yesterday I was coding an asynchronous queue in Node and I ran into this gem of a unit test:

it('Should dequeue items every 250ms from the queue', (done) => {
  queue.enqueue(1);
  queue.enqueue(2);
  queue.start();
  queue.getCurrentInterval().should.eql(250);
  var onDequeueSpy = sinon.spy();
  queue.on('dequeued', onDequeueSpy);
  setTimeout(() => {
    onDequeueSpy.firstCall.args[0].should.eql(1);
  }, 260);
  setTimeout(() => {
    onDequeueSpy.getCall(1).args[0].should.eql(2);
    done();
  }, 510);
});

Here’s what this unit test does:

  1. It adds two items to the queue with queue.enqueue().

  2. It starts the dequeuing process with queue.start().

  3. It ensures that the interval is set to the default value of 250 milliseconds.

  4. It then waits 260 milliseconds and checks that the 'dequeued' handler has fired with the first item. (This PASSES.)

  5. It then waits exactly 250 additional milliseconds.

  6. It then checks that the 'dequeued' handler has fired a second time, with the second item.

By setting the wait time to be exactly 250 milliseconds after the first check, it sets up a race condition whereby this test intermittently fails roughly 50% of the time.

You may be thinking, “But, the second check should happen 510 milliseconds after the queue has been started, which should allow the queue to have been fired twice.” But since this only allows 10 milliseconds of leeway, it leads to intermittent scenarios where sometimes the test passes – and sometimes it fails.

Of course, one easy way to “fix” this issue is to give the second call a little more breathing room. But you can’t update the test files. If you try to do so, your git push is blocked.

I played around with this for a long time, trying to get it to consistently work without altering the test files. To no avail.
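For what it’s worth, the standard fix for this whole class of flakiness is to take real time out of the equation entirely — sinon’s useFakeTimers() does exactly this. Here’s a hand-rolled sketch of the idea; the FakeClock class and all names are mine, not the challenge’s API:

// A minimal fake clock -- same idea as sinon.useFakeTimers().
class FakeClock {
  constructor() {
    this.now = 0;
    this.timers = [];
  }
  setInterval(fn, ms) {
    this.timers.push({ fn, ms, next: this.now + ms });
  }
  // Advance virtual time, firing any interval whose deadline falls
  // inside the window (repeatedly, if it comes due more than once).
  tick(ms) {
    const end = this.now + ms;
    let fired = true;
    while (fired) {
      fired = false;
      for (const t of this.timers) {
        if (t.next <= end) {
          t.fn();
          t.next += t.ms;
          fired = true;
        }
      }
    }
    this.now = end;
  }
}

// A deterministic version of the dequeue-timing check: no 10ms race,
// because "time" only moves when we say so.
const clock = new FakeClock();
const dequeued = [];
const items = [1, 2];
clock.setInterval(() => dequeued.push(items.shift()), 250);

clock.tick(260); // the 250ms interval has fired exactly once: dequeued is [1]
clock.tick(250); // now it has fired twice: dequeued is [1, 2]

With a controllable clock, the 10-millisecond margin simply can’t matter. But of course, that fix requires being allowed to touch the test files.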

Here’s another jewel from the same coding challenge:

it('Should continue to listen for new data even on pausing the dequeue process', (done) => {
  var onDequeueSpy = sinon.spy();
  var onEnqueueSpy = sinon.spy();
  queue.on('dequeued', onDequeueSpy);
  queue.start();
  queue.enqueue(2);
  queue.enqueue(3);
  queue.enqueue(4);
  queue.pause();
  setTimeout(() => {
    onDequeueSpy.callCount.should.eql(0);
    queue.start();
    queue.emit('interval', 50);
    queue.on('enqueued', onEnqueueSpy);
    queue.enqueue(95);
    queue.enqueue(110);
  }, 260);
  setTimeout(() => {
    queue.enqueue(221);
    onEnqueueSpy.callCount.should.eql(3);
    onDequeueSpy.callCount.should.eql(1);
    queue.print().should.eql([3, 4, 95, 110, 221]);
    done();
  }, 320);
});

This test FAILS on the assertion that the dequeue handler has fired once:

onDequeueSpy.callCount.should.eql(1);

Here’s why it fails. The queue is started with a default interval of 250 milliseconds, then immediately paused. In the first setTimeout(), the test re-starts the queue and only afterward sets the interval to 50 milliseconds. But by then, the restart has already scheduled the next dequeue with the old 250-millisecond delay.

This means that the first 250-millisecond delay will need to run in full before the next queue call can run with a delay of 50 milliseconds.
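That behavior is easy to reproduce. Here’s one plausible implementation sketch — assumed, since the challenge’s actual source isn’t shown — in which start() reads the interval at call time, so a later 'interval' event can’t shorten a delay that’s already been scheduled:

const { EventEmitter } = require('node:events');

// Hypothetical queue sketch -- illustrative, not the challenge's code.
class AsyncQueue extends EventEmitter {
  constructor() {
    super();
    this.items = [];
    this.interval = 250; // default delay
    this.on('interval', (ms) => { this.interval = ms; });
  }
  enqueue(item) {
    this.items.push(item);
    this.emit('enqueued', item);
  }
  start() {
    // The delay is read *now*; emitting 'interval' afterwards only
    // affects timers scheduled after this one fires.
    this.timer = setTimeout(() => {
      this.emit('dequeued', this.items.shift());
      this.start(); // reschedule with the (possibly updated) interval
    }, this.interval);
  }
  pause() {
    clearTimeout(this.timer);
  }
}

Under any implementation shaped like this, re-starting and then emitting 'interval' means the 50-millisecond interval can’t take effect until after the pending 250-millisecond timer fires — which is well past the test’s 320-millisecond checkpoint.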

Of course, it doesn’t matter that the unit test itself was janky AF. All that matters to the evaluator is that the (illogical) test didn’t pass.

In the end, I suppose it’s a good thing that they use these illogical tests, because guess what? I sure-as-hell don’t want to work on their crappy dev team anyway. But I’m still annoyed as hell, because I wasted hours of my time yesterday on their challenge, after I’d already had a great live interview with them, and after I’d already coded a working solution, all because they can’t be bothered to write logical unit tests.