So this week's assignment was to create six test cases for our Robocode robots using JUnit. I'm sad to say that Failbot was the first to be culled from the competition, as it was paired up against a non-wall-hugging robot. On the bright side, it did win a Free-For-All match against everyone in the class, and managed to take down one of the toughest competitors in the class in a last-minute duel, which ranked it 8th out of 25. But enough of my robotic exploits...
My six cases consisted of:
- Beating SpinBot in more than 50% of matches (acceptance test)
- Consistently beating SittingDuck (acceptance test)
- Unit test of the toTurn() method
- Unit test of the getPow() method
- Behavioral test to ensure the bot hugs the walls
- Behavioral test to ensure the bot goes for the top-left corner at the beginning of the round.
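The post doesn't show toTurn()'s actual signature or logic, so here is a hypothetical stand-in for what a relative-turn helper like it might compute, along with the kind of seam-crossing cases a unit test should pin down. Everything in this sketch (the class name, the signature, the normalization rule) is assumed, not taken from Failbot's source.

```java
// Hypothetical stand-in for Failbot's toTurn(): assume it converts a
// desired absolute heading into the smallest signed turn in (-180, 180]
// degrees. The real method's signature and behavior are not shown in
// the post.
public class TurnMath {
    static double toTurn(double currentHeading, double targetHeading) {
        double delta = (targetHeading - currentHeading) % 360.0;
        if (delta > 180.0) delta -= 360.0;
        if (delta <= -180.0) delta += 360.0;
        return delta;
    }

    public static void main(String[] args) {
        // Crossing the 0/360 seam is exactly the case worth unit-testing.
        System.out.println(toTurn(350.0, 10.0)); // prints 20.0
        System.out.println(toTurn(10.0, 350.0)); // prints -20.0
    }
}
```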
We were given a Robocode control class wrapper, which I found somewhat useful, but not powerful enough to get the data I needed. The wrapper allowed "snapshots" to be taken at specific (read: asynchronous) times in the battle, when certain parts of the bot's code could be accessed. Robocode really doesn't provide a nice, easy way to get at the bots while they're in the arena. I wanted to override some of Failbot's methods and inject accessors in order to get at the robot's variables while it was in action. Unfortunately, that turned out to be impossible, so I created some fake events and called the bot's methods with them directly. It seemed completely inefficient to have to break out such small parts of the bot's algorithms into stand-alone methods simply to test them, but I suppose those were exactly the areas where bugs could go unnoticed by larger-scale debugging attempts. Breaking out these little snippets of code is a bit like disassembling the lunar lander... it's painful.
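The fake-event approach described above can be sketched like this: instead of reading the bot's state mid-battle, construct a synthetic event and call the handler directly, then inspect the decision it made. The event class, handler, and field names here are all hypothetical stand-ins (the real Robocode types would be ScannedRobotEvent and onScannedRobot(), which aren't constructed here so the snippet stays self-contained).

```java
// Sketch of the fake-event testing pattern: feed a hand-built event to
// the bot's handler and check the resulting decision. FakeScannedEvent,
// BotUnderTest, and onScanned() are hypothetical stand-ins for the real
// Robocode event/handler pair.
public class FakeEventDemo {
    static class FakeScannedEvent {
        final double bearing, distance;
        FakeScannedEvent(double bearing, double distance) {
            this.bearing = bearing;
            this.distance = distance;
        }
    }

    static class BotUnderTest {
        double lastGunTurn; // exposed so a test can inspect the decision

        void onScanned(FakeScannedEvent e) {
            // Toy targeting rule: point the gun straight at the contact.
            lastGunTurn = e.bearing;
        }
    }

    public static void main(String[] args) {
        BotUnderTest bot = new BotUnderTest();
        bot.onScanned(new FakeScannedEvent(45.0, 200.0));
        System.out.println(bot.lastGunTurn); // prints 45.0
    }
}
```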
While creating the tests was a ton of work, in the end I feel like Failbot got a thorough diagnostic check-up on its algorithms (I even found a bug in Robocode!). JaCoCo says the test suite gave 100% coverage of Failbot, but in reality a fair bit of code (~15-20%) wasn't specifically addressed. I do feel, however, that all of Failbot's core functionality, as well as its more complicated features, was well addressed and inspected.
After having to run such thorough tests on my bot, I now have a fairly good idea of how a program should be written in order to make testing much simpler. Some general guidelines for testing:
- If something can be broken into two simpler methods, do it. Each method should do one simple task.
- If a block of code seems murky or suspicious, break it up and run tests on each component.
- Try to avoid creating two similar but slightly different methods; otherwise you'll have to run the same test twice.
- Choose good test cases. Use equivalence classes and test boundary conditions.
- JUnit is your friend.
- JaCoCo is like a fortune cookie: it doesn't do anything very meaningful, but every so often it'll tell you something insightful.