Friday, October 8, 2010

JUnit & Jacoco Testing of Failbot.java

Link to Source
So this week's assignment was to create six test cases for our Robocode robots using JUnit.  I'm sad to say that Failbot was the first to be culled from the competition, as it was paired up against a non-wall-hugging robot.  On the bright side, it did win a Free-For-All match against everyone in the class, and managed to take down one of the toughest competitors in a last-minute duel, which ranked it 8th out of 25.  But enough of my robotic exploits...

My six cases consisted of:
  • Often beating SpinBot (>50%) (acceptance test)
  • Consistently beating SittingDuck (acceptance test)
  • Unit test of the toTurn() method
  • Unit test of the getPow() method
  • Behavioral test to ensure the bot hugs the walls
  • Behavioral test to ensure the bot goes for the top left corner at the beginning of the turn.
The first two tests were dead giveaways, as they were simple to implement.  The first behavioral test was also a giveaway, as it was provided as an example.  The other three tests (the two unit tests and the "top-left" test) were a bit harder to come up with and required me to redesign Failbot's code to make it easier to test.  It was somewhat difficult to figure out which parts of the code should be tested, and which algorithms could be made into stand-alone methods for testing.  Honestly, Failbot is quite simple, and coming up with six different tests was a real stretch.
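To give an idea of what those unit tests look like, here's a rough sketch of a JUnit 4 test for the turning helper.  The toTurn() signature and expected values are my own assumptions about one plausible shape for the method (a bearing-normalizing helper), not Failbot's actual source:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

/**
 * Sketch of a unit test for a stand-alone turning helper.
 * Assumes toTurn(bearing) normalizes a relative bearing into (-180, 180]
 * so the bot always takes the short way around.
 */
public class TestFailbotToTurn {

  @Test
  public void testToTurnTakesTheShortWayAround() {
    Failbot bot = new Failbot();
    // Small bearings should pass through unchanged.
    assertEquals(45.0, bot.toTurn(45.0), 0.001);
    // Anything past 180 degrees should flip to a shorter left turn.
    assertEquals(-170.0, bot.toTurn(190.0), 0.001);
    // Boundary case: exactly 180 stays 180.
    assertEquals(180.0, bot.toTurn(180.0), 0.001);
  }
}
```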

We were given a Robocode control class wrapper, which I found somewhat useful, but not powerful enough to get the data I needed.  The wrapper allowed "snapshots" to be taken at specific (read: asynchronous) times in the battle, where certain parts of the bot's code could be accessed.  Robocode really doesn't provide a nice, easy way to get at the bots while they're in the arena.  I wanted to override some of Failbot's methods and inject accessors in order to get at the robot's variables while it was in action.  Unfortunately, that turned out to be impossible, so I created some fake events and called the bot's methods with them.  It seemed completely inefficient to have to break out such small parts of the bot's algorithms into stand-alone methods simply to test them, but I suppose those are the areas where bugs could go unnoticed by larger-scale debugging attempts.  Breaking out these little snippets of code is like disassembling the lunar lander... it's painful.
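For the curious, "fake events" means something like the sketch below.  The getPow(ScannedRobotEvent) helper is my assumption about how the bot's logic could be broken apart, and the six-argument ScannedRobotEvent constructor (name, energy, bearing, distance, heading, velocity) should be double-checked against your Robocode version:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import robocode.ScannedRobotEvent;

/**
 * Sketch of driving an extracted bit of Failbot's logic with a hand-built
 * event, since the live robot can't be poked at while it's in the arena.
 * getPow(ScannedRobotEvent) is a hypothetical helper, not Failbot's
 * actual code.
 */
public class TestFailbotFirepower {

  @Test
  public void testFiresHarderAtCloseRange() {
    Failbot bot = new Failbot();
    // ScannedRobotEvent(name, energy, bearing, distance, heading, velocity)
    ScannedRobotEvent close = new ScannedRobotEvent("Enemy", 100.0, 0.0, 50.0, 0.0, 0.0);
    ScannedRobotEvent far = new ScannedRobotEvent("Enemy", 100.0, 0.0, 600.0, 0.0, 0.0);
    assertTrue("A nearby enemy deserves a bigger shot than a distant one",
        bot.getPow(close) > bot.getPow(far));
  }
}
```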

While creating the tests was a ton of work, in the end I feel like Failbot got a thorough diagnostic check-up on its algorithms (I even found a bug in Robocode!).  Jacoco says that the test suite gave 100% coverage of Failbot, but in reality there was a fair bit of code (~15-20%) that wasn't specifically addressed.  I do feel, however, that all of Failbot's core functionality, as well as its more complicated features, was well addressed and inspected.

After having to run such thorough tests on my bot, I now have a fairly good idea of how a program should be written in order to make testing much simpler.  Some general guidelines for testing:
  1. If something can be broken into two simpler methods, do it.  Each method should do one simple task (see the sketch after this list).
  2. If a block of code seems murky or suspicious, break it up and run tests on each component.
  3. Try to avoid creating two similar but slightly different methods; otherwise you'll end up running essentially the same test twice.
  4. Choose good test cases.  Use equivalence classes and test boundary conditions.
  5. JUnit is your friend. 
  6. Jacoco is like a fortune cookie: it doesn't do anything very meaningful, but every so often it'll tell you something insightful.
Basically, it's like data normalization: break everything up into small chunks. 
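As a (purely hypothetical) illustration of guideline 1, this is the kind of shape I mean: the event handler just glues things together, and the actual math lives in two small helpers that a test can call directly.  The method names echo the ones mentioned above, but this is not Failbot's real source:

```java
import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;

/**
 * Illustrative refactoring only: the event handler delegates to two small,
 * independently testable helpers instead of doing everything inline.
 */
public class TidyBot extends AdvancedRobot {

  @Override
  public void onScannedRobot(ScannedRobotEvent e) {
    setTurnRight(toTurn(e.getBearing())); // turning logic lives in toTurn()
    setFire(getPow(e));                   // firepower logic lives in getPow()
    execute();
  }

  /** Normalize a relative bearing so the bot takes the short way around. */
  double toTurn(double bearing) {
    double turn = bearing % 360;
    if (turn > 180) {
      turn -= 360;
    } else if (turn <= -180) {
      turn += 360;
    }
    return turn;
  }

  /** Fire harder at close range, capped at Robocode's 3.0 maximum power. */
  double getPow(ScannedRobotEvent e) {
    return Math.min(3.0, 400.0 / e.getDistance());
  }
}
```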
