Cleaning up a feature test

Writing fully integrated tests can be a point of contention, especially when it comes to writing Capybara tests that need to wait for JavaScript to load items.

Having a Rails app whose front end is powered by Angular code can be challenging when items don't load exactly when you need them to.

This happens occasionally in our test suite, causing some tests to fail inconsistently. My first instinct with these failures was to add a sleep 5 before the problem expectations in order to force the test to wait for clickable items to appear on the page.
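As an illustration, the band-aid looked something like this (the scenario, paths, and button names here are hypothetical, not our actual spec):

```ruby
# Hypothetical Capybara feature spec -- illustrative only.
scenario "student freezes their enrollment" do
  visit enrollment_path(enrollment)

  sleep 5 # band-aid: hope the Angular-rendered button has appeared by now

  click_on "Freeze"
  expect(page).to have_content("Enrollment frozen")
end
```

The fixed sleep always costs five seconds, even when the button appears in half of one.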

Rather than continue down this path, I looked into implementing a more elegant solution and stumbled upon RSpec::Wait, which provides a nice :wait_for method to replace those arbitrary sleeps. Swapping sleeps for :wait_for is still a band-aid, but at least it makes the intention of the test clearer (my canary in the coal mine).
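Conceptually, a :wait_for-style helper is a polling loop: it re-evaluates a condition until it passes or a timeout elapses, instead of sleeping a fixed amount. Here is a minimal sketch of that idea in plain Ruby (this is the concept, not RSpec::Wait's actual implementation):

```ruby
require "timeout"

# Minimal sketch of the polling idea behind a wait_for-style helper.
# Re-checks the block every `interval` seconds until it returns a
# truthy value, or raises Timeout::Error once `timeout` has elapsed.
def wait_until(timeout: 5, interval: 0.1)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise Timeout::Error, "condition not met within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# Example: succeeds as soon as the condition becomes true,
# instead of always paying a fixed sleep 5.
counter = 0
wait_until(timeout: 2) { (counter += 1) >= 3 }
```

This is also why a too-aggressive timeout surfaces as a Timeout::Error rather than a plain expectation failure.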

The real issue:

Having multiple expectations within one test block always looks weird to me, and they are extremely hard to debug. I also find them mostly inexcusable in a unit test, but in a feature test they are a necessity to simulate a user flow. For example:
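A sketch of the shape of such a spec (the names and flow here are hypothetical, not the actual test):

```ruby
# Hypothetical feature spec -- illustrative of the pattern, not the real test.
scenario "student freezes their enrollment and keeps their mentor" do
  visit enrollment_path(enrollment)

  # Expectations added only so the test waits for buttons before clicking:
  expect(page).to have_button("Freeze")
  click_on "Freeze"

  expect(page).to have_button("Confirm")
  click_on "Confirm"

  # The expectation the test actually exists for:
  expect(page).to have_content(mentor.name)
end
```

Only the last expectation asserts the behavior under test; the rest exist just so the clicks have something to click on.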

*Note: There are multiple expectations that exist only to check for certain buttons prior to the Capybara :click. The challenge is figuring out what is needed and what we can live without.

The goal of these expectations is to act as a smoke test for broken functionality on the page, but their placement did not seem right. This looked like an ideal place to bring in RSpec::Wait, so I went ahead and replaced these expectations with :wait_for's, but then I noticed something.

Running the RSpec test repeatedly tipped me off that there was a bit of expectation debt happening, which was the root cause of the buttons not loading. The backlog of work the test was doing is what slowed the button loading down.

While TDDing a feature we tend to add extra expectations to guide us into writing code to match, but after the feature is implemented, some of these expectations are no longer needed.

I deleted a few expectations and noticed an instant speed boost (a 3 second runtime improvement, even with RSpec::Wait adding time for :wait_for items), and the test clearly stopped randomly failing due to buttons not loading in time, thanks to the combination of test deletion and :wait_for. I did still notice a few infrequent Timeout::Errors, but despite that specific frustration, I chose to deploy my small test improvements anyway.

So this all got me to ask the question:

Do we even need these Capybara tests in Rails if we already have unit test coverage, and could we replace more of the Capybara tests with JavaScript integration tests?

I am in the process of recreating these tests in our JavaScript test suite, since JavaScript is the tool causing the hold-up and doing all the work in these tests. I look forward to reporting my results in a later post.

As for this test, I got some great feedback after deploying the changes: the overall goal of the test wasn't clear, especially since the test waits for different types of content to appear.

The original purpose of this test was to make sure the student retained their mentor when they froze their enrollment -- that's it -- but the test also waits for messages and buttons to appear, slowing the test down and causing timeouts.

I deleted the last section of the test, where it repeats the same steps for unfreezing within the same scenario (really an entirely separate scenario). With this change the test runtime was cut in half, and the infrequent Timeout::Errors disappeared. The test now has one clear goal and a cleaner path to eventually checking that the mentor is present.
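After the trimming, the spec is down to something like this single-purpose shape (again with hypothetical names; wait_for here is RSpec::Wait's block form):

```ruby
# Hypothetical trimmed spec: one scenario, one goal.
scenario "student retains their mentor after freezing enrollment" do
  visit enrollment_path(enrollment)

  wait_for { page }.to have_button("Freeze")
  click_on "Freeze"

  expect(page).to have_content(mentor.name)
end
```

Unfreezing would get its own scenario if it ever comes back, rather than riding along in this one.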

I am considering adding my deletions back as a separate scenario, but in this case, the best cleanup tool ended up being the trash can.