
Test Automation Curriculum

Two things happened to me recently. First, I was trying to find a career tester in the San Diego area who knows at least a little bit about automated testing. It isn’t going well. I’ve reviewed a lot of resumes; all the submitters are career manual testers.

Surely somebody, sometime, must have wondered whether they need to learn more about automation. Elisabeth Hendrickson once asked, Do Testers Have to Write Code? She ran a survey to figure out what skills companies were looking for in testers. In our case, we aren’t looking for somebody to write the test code, just to write and review the Cucumber scenarios. Even with that modest requirement, I was disappointed.

Second, a younger person asked me last week what he should learn in test automation. I had already been contemplating writing this curriculum, so I resolved to do it. Srini, here it is.

People who don’t work with LAMP stacks (.NET environments and the like) will probably not appreciate this list. Make your own list on your own blog and put the link in a comment here. I don’t begrudge anybody doing something else; I just don’t want to go there.

I created this curriculum for testers learning test automation. Some of it addresses how and why, but most of the list is about tools that can help create a full solution. Anyway, here is my list in priority order:

  • An open source tool such as Watir-WebDriver or Selenium/Java – do not mess around with QTP or TestComplete. The cargo cults that buy those tools will expect that “anybody can automate”. With open source tools, you can download your own learning playground and integrate it with the other products on this list.
    • Learn how to create page objects (see the sketch after this list). Even if you take advantage of a library like WatirMark or Page-Object, you will have to do some tailoring yourself. I have been working with Selenium/Java, so that is the combination I am developing my skills on right now. Either way, you need to know how to work with page objects efficiently. In fact, you can address a lot of the entries here just by using Cheezy’s book Cucumbers and Cheese (well worth the $15). I swear I do not get a dime from the book or from Cheezy’s work; it is simply such a big benefit to anybody learning that I cannot miss the chance to say how good it is.
    • An open source framework such as Cucumber, Cucumber-jvm, or RSpec.
  • Git and GitHub – there are other good source control tools out there, including Subversion. Git is easy to use locally for managing your own practice code, and it’s easy to get copies of other people’s public projects onto your own system (how did they do that?). CodeSchool has a free course on Git. There is also a nice paper comparing Git and Mercurial with Subversion so you can understand the trade-offs.
  • Ant and Maven, if you use Java. Most of what I learned was through osmosis, but knowing how to shoehorn Cucumber into your build is worth the effort.
  • Jenkins or Hudson, CruiseControl, or some other open source continuous integration tool. If you ever work at a place that is introducing automated testing for the first time, knowing how to set this up is a great asset.
  • Performance testing with JMeter – you can find alternatives (Blitz.io or The Grinder), and this tool doesn’t really need to be in Ruby. What matters is learning the different kinds of testing that fall under this umbrella (incorrectly) called performance testing. The other important skill is setting up the right monitors so you can discover where things bottleneck.
  • OWASP’s ZAProxy – learn how to capture the HTTP calls between your browser or simulator and the server under test; you will learn a lot there (see the proxy sketch after this list). While you are there, download the WebGoat project, where you can learn about security vulnerabilities through practice.
  • Monitoring tools (Splunk or Graylog2) – one way to find the errors occurring on the system under test is through its logs, but those logs are wiped nearly every time the server is redeployed. A monitoring server lets you watch the logs, and server performance, far more effectively.
  • Issue tracking – a true startup is probably not going to hire a newb unless they are cost-control-centered, but if you get there and there is no issue tracking, it is good to know how to set it up and integrate it with your version control and continuous integration server. I’ve tried Redmine and it was fine.
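Since page objects come up so often, here is a minimal sketch using Cheezy’s page-object gem, driven from a Cucumber step definition. The page class, URL, and locators are all hypothetical, and the step assumes env.rb has created a Watir browser in @browser:

require 'page-object'

# A hypothetical login page, modeled with the page-object gem.
class LoginPage
  include PageObject

  page_url 'http://localhost:3000/login'   # assumed URL

  text_field(:username, id: 'username')    # assumed locators
  text_field(:password, id: 'password')
  button(:login, id: 'login')

  def login_as(user, password)
    self.username = user
    self.password = password
    login
  end
end

# A Cucumber step definition that drives the page object.
Given(/^I am logged in as "([^"]*)"$/) do |user|
  page = LoginPage.new(@browser)
  page.goto
  page.login_as(user, 'hypothetical-password')
end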
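And since the ZAP entry is easier to grasp with an example, here is a sketch of pointing a Watir-WebDriver browser through ZAP’s local proxy so every HTTP call gets captured. ZAP listens on localhost:8080 by default; the app URL is hypothetical:

require 'watir-webdriver'

# Route all browser traffic through ZAP so it records every request/response.
proxy = Selenium::WebDriver::Proxy.new(http: 'localhost:8080', ssl: 'localhost:8080')

profile = Selenium::WebDriver::Firefox::Profile.new
profile.proxy = proxy

browser = Watir::Browser.new(:firefox, profile: profile)
browser.goto 'http://localhost:3000'   # hypothetical app under test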

If you see something that you think should be on the list but isn’t there, please add a comment.

Continuous Integration (Jenkins)

I fancy myself an automated tester. What’s so automatic about it if I have to run a command to make the tests run? I guess it’s not. So how do I change that? In the past, I created schedulers and such to kick off tests. Then I made listeners that would figure out whether a new build existed.

Those were Mickey Mouse (please, Disney, don’t sue me) solutions compared to the ones I have worked with over the past four years: integrated continuous build systems, including CruiseControl, Hudson, Jenkins, and Bamboo (in chronological order of my experience). None of those setups were my doing. Even when my framework was integrated, somebody smarter than me created the solution.

So, to avoid being left in the dark, I felt I should be able to create the same kind of integration myself. Step 1 is to install a server. I chose Jenkins, for no particular reason other than that I thought I would spend more time using it than figuring it out. I think I was right.

Basic Set Up

At first I started installing the Ubuntu-packaged Jenkins, then decided it would be easier to download the jenkins.war file. The first feature was to get it running. OK, that’s not a feature, but it was important. I hate apps and app servers that squat on port 8080, so I changed it:

java -jar /devroot/tools/jenkins/jenkins.war --httpPort=5050

I did not have to install any database for it; I will research that more. I added the Jenkins Git plugin to support integration with Git. That was easy. Then I was able to link my learning project (a download of Jeff Morgan’s Puppies app) via a file URL:

file:///home/dave/dev/puppies/

I set it to poll every minute – a Poll SCM schedule of * * * * * in Jenkins’ cron syntax (hourly doesn’t seem like continuous integration to me).

Advanced Set Up

Finally, I created a command script (what happens when something in the Git repository changes) with something to do:

cd /home/dave/dev/puppies
rails s &                # start the Rails app in the background
cucumber --tags @full    # run the tagged Cucumber scenarios against it

To verify, I checked in a change. Here was the resulting output (I didn’t mind the failures because the cukes are in disarray):

JenkinsTest

8 scenarios (8 failed)
45 steps (9 failed, 6 skipped, 1 undefined, 29 passed)
2m41.479s

You can implement step definitions for undefined steps with these snippets:

Then /^I should see the following error messages: "(.*?)"$/ do |arg1|
  pending # express the regexp above with the code you wish you had
end

Build step 'Execute shell' marked build as failure
Finished: FAILURE

After I proved to myself that the command scripts ran on check-ins, I added a little more. I set up basic authentication for users, allowing everybody to do anything (it’s just me). I also started working on an SSH key, but I haven’t finished that yet.

Summary

Jenkins was easy to install, easy to set up for my repository (even easier for svn and cvs, though I am not sure why), and easy to drive with mock scripts. I found it easy to integrate my test framework, and Jenkins even knew when it failed.

I recommend that any tester working on automation get comfortable installing and integrating with Jenkins.

The Gold Standard


Our software could update the settings on a device via a wireless module and/or by writing to an SD card, which was then inserted into the flow generator. We had a good record of automating test scenarios with cucumber-jvm, but it looked like we might not get to include this one.

“But we can’t automate taking the SD card out of the card reader and putting it into the flow generator! If we can’t automate it end-to-end, we can’t put it in CI, and there’s no point investing in it.” How to solve this? We decided to use a gold standard.

We created scenario outlines with combinations of settings in the examples. In each scenario, an authorized person logs in, navigates to the device settings page, changes the settings per the example record, then saves them to an SD card. The first time through, we took the files from the SD card and loaded them into the device to verify that they “worked right” in updating the device with those settings. We then stored those files in source control. On subsequent runs, when our automated scenarios write new files, they are compared against the gold standard files – the ones we already validated. When this runs in CI (or locally), the scenario fails if there is no match!

And I approve the written settings against the expected file set “<ApprovedFileset>”

Steve Gargan, who wrote the step definitions, even did one better. If we were running the scenario locally (not in CI) and designated an approving=true parameter, then, should the files not match, we would be asked on-screen whether the originals or the new set were right. If we selected the new ones, they replaced the old ones.
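The real step definitions are Steve’s, written in cucumber-jvm, so what follows is only a minimal Ruby sketch of the idea. The directory names are hypothetical, an APPROVING environment variable stands in for the approving=true parameter, and instead of asking on-screen it simply accepts the new files:

require 'fileutils'

# Matches the step regardless of its Gherkin keyword (And/Then).
Then(/^I approve the written settings against the expected file set "([^"]*)"$/) do |fileset|
  approved_dir = File.join('gold_standards', fileset)   # validated files, kept in source control
  received_dir = File.join('tmp', 'settings', fileset)  # files this scenario just wrote

  Dir.glob(File.join(received_dir, '*')).each do |received|
    approved = File.join(approved_dir, File.basename(received))
    next if File.exist?(approved) && FileUtils.identical?(approved, received)

    if ENV['APPROVING'] == 'true'
      FileUtils.mkdir_p(approved_dir)
      FileUtils.cp(received, approved)   # accept the new file as the new gold standard
    else
      raise "#{File.basename(received)} does not match its gold standard"
    end
  end
end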

Then we created a similar feature for PDF reports. We are mostly testing charts, so we convert the PDF into images to make gold standards. In addition, we capture the “criteria” for each chart (a description of what it should look like based on the data going into it). If either the image or the criteria does not match, the scenario fails unless we are in approving=true mode.
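As a rough sketch of the PDF side, assuming the Poppler pdftoppm utility is installed to rasterize the report (the file paths here are hypothetical):

require 'digest'

# Rasterize the report; pdftoppm writes tmp/report-1.png, tmp/report-2.png, ...
system('pdftoppm', '-png', 'tmp/report.pdf', 'tmp/report') or raise 'pdftoppm failed'

# As with the settings files, a checksum mismatch fails the scenario
# unless we are running in approving mode.
gold   = Digest::SHA256.file('gold_standards/report-1.png').hexdigest
actual = Digest::SHA256.file('tmp/report-1.png').hexdigest

unless gold == actual || ENV['APPROVING'] == 'true'
  raise 'report image does not match its gold standard'
end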

I’d like to take credit for this, but it was Steve’s implementation, inspired by Llewellyn Falco and Dan Gilkerson’s Approval Tests:

http://approvaltests.sourceforge.net/

Llewellyn Falco – Twitter: @LlewellynFalco, Blog: http://llewellynfalco.blogspot.com
Dan Gilkerson – contact info at http://dangilkerson.com