
Test Automation Curriculum

Two things happened to me lately. First, I was trying to find a career tester in the San Diego area who knows at least a little bit about automated testing. It isn’t going well. I’ve reviewed a lot of resumes; all the submitters are career manual testers.

Surely somebody, sometime, must have wondered whether they need to learn more about automation. Elisabeth Hendrickson once asked Do Testers Have to Write Code? and ran a survey to figure out what skills companies were looking for in testers. In our case, we aren’t looking for somebody to write the test code, but to write and review the Cucumber scenarios. Even with that modest requirement, I was disappointed.

Second, a younger person asked me last week what he should learn in test automation. I had already been contemplating writing this curriculum, so I resolved to do it. Srini, here it is.

People who don’t work with a LAMP stack (.NET environments, for example) will probably not appreciate this list. Make your own list on your own blog and put the link in a comment here. I don’t begrudge anybody doing something else; I just don’t want to go there.

I created this curriculum for testers learning test automation. While some of it addresses how and why, most of the list is about tools that can help create a full solution. Anyway, here is my list in priority order:

  • An open source tool such as Watir-Webdriver or Selenium/Java – do not mess around with QTP or TestComplete. The cargo cults that buy those tools will expect that “anybody can automate”. With open source tools, you can download your own learning playground (see the short playground script after this list) and incorporate it with the other products.
    • Learn how to create page objects. Even if you take advantage of a library like WatirMark or Page-Objects, you will have to do some tailoring yourself. I have been working with Selenium/Java, so I am developing my skills on that combination now. Either way, you need to know how to work on it in an efficient way. In fact, you can address a lot of the entries here just by using Cheezy’s book Cucumber & Cheese (well worth the $15). I swear that I do not get a dime from it or Cheezy’s work; it is just such a big benefit for anybody learning that I cannot miss the chance to say how good it is.
    • An open source framework such as Cucumber, Cucumber-jvm, or RSpec.
  • GitHub and Git – there are other good source control tools out there, including Subversion. Git is easy to use locally for managing your own practice code, and it’s easy to get copies of other people’s public projects onto your own system (how did they do that?). CodeSchool has a free course on Git, and there is a nice paper on the differences between Git/Mercurial and Subversion.
  • Ant and Maven if you use Java. Most of what I learned was through osmosis, but being able to shoehorn Cucumber into your project is good to know.
  • Jenkins or Hudson, CruiseControl, or some other open source continuous integration tool. If you ever work at a place that is introducing automated testing for the first time, knowing how to set one up is a great skill.
  • Performance testing with JMeter – you can find a Ruby alternative (BlitzIO or Grinder), but this tool doesn’t really need to be in Ruby. The important thing is to learn the different kinds of testing that fall under this umbrella (incorrectly) called Performance Testing. The other important skill is creating the right monitors so you can discover where the bottlenecks are.
  • OWASP‘s ZAProxy – learn how to capture the HTTP calls between your browser or simulator and the server under test. You will learn a lot there. While you are there, download the WebGoat project, where you can learn about security vulnerabilities through practice.
  • Monitoring tools (Splunk or Graylog2) – one way to find the errors occurring on the system under test is through its logs, but log files are often deleted every time the server is redeployed. You can monitor those logs and server performance much better through a monitoring server.
  • A true startup is probably not going to hire a newb unless they are cost-control-centered. But if you get there and find there is no issue tracking, it is good to know how to set up issue tracking and integrate it with your version control and continuous integration server. I’ve tried Redmine and it was fine.
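
To give a taste of the “learning playground” idea from the first bullet, here is a minimal Watir-Webdriver script. The gem is real, but the Google element names are assumptions that may change over time:

# gem install watir-webdriver
require 'watir-webdriver'

browser = Watir::Browser.new :firefox               # opens a real browser window
browser.goto 'http://google.com'
browser.text_field(:name => 'q').set 'watir-webdriver'  # :name => 'q' is an assumption about the search box
browser.button(:name => 'btnG').click                   # button name is also an assumption
puts browser.title
browser.close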

If you see something that you think should be on the list but is not there, please add a comment.


Article: Secrets to Success in Web Test Automation

Automating tests is an investment that pays off as long as the investment is not too great, and we want to reach the payback quickly. The true investment includes more than the cost of creating and maintaining the automated tests. Learn more in my article for Software Test Magazine, Secrets to Successful Test Automation with Watir.

Popularity on Twitter (for my blog tags)

I went to a good presentation on Twitter tags in January by Kym Raines.

She showed us how to identify hashtags that are popular using #HashTagBattle. The idea is that if you use hashtags that more people use, your tweets are more likely to show up in people’s Twitter searches (yes, some people search by hashtag). So I looked into it, because a lurker like me should make the most of the few tweets I make. I found that HashTagBattle uses a search engine called Topsy. Just for fun, I created a script to capture the popularity of the tags I use on this blog.

Script:

# in IRB
load 'hashcount.rb'
list = 'all the tags scraped out of my blog'  # placeholder: one space-separated string of tag names
tags = list.split(' ')
tags.each do |tag|
  h = Hashcount.new(tag)
  puts "#{tag} : #{h.get(MONTH)}"
end

# hashcount.rb
require 'hpricot'
require 'mechanize'

# CSS class that marks the "past month" tab on a Topsy result page
MONTH = 'off tab-m'

class Hashcount
  URL = 'http://topsy.com/s?q=%23'   # %23 is the URL-encoded '#'

  def initialize(hash)
    mech = Mechanize.new
    @doc = Hpricot(mech.get("#{URL}#{hash}").body)
  end

  # t names the time window (e.g. MONTH); returns the count, or nil if the tag was not found
  def get(t)
    xpath = %Q(//li[@class='xxxx']/a/span[@class='count'])
    @doc.search(xpath.sub('xxxx', t))[0].inner_html
  rescue StandardError
    nil
  end
end

And here is the output:

adjuster : 44
author : 19K
automatedtesting : 11
Automation : 3,006
books : 84K
brand : 23K
Bugzilla :
caine : 126
Change : 31K
character : 6,140
children : 30K
choose : 1,661
commitment : 5,058
communityservice : 1,716
competence : 167
Compuware : 104
Confidence : 8,212
Continuousintegration : 44
Cucumber : 2,914
Cucumber-JVM : 0
Culture : 40K
database : 3,837
decision : 1,791
delegates : 83
dictators : 218
ElasticSearch : 357
Exceptional : 316
experience : 6,955
exploratorytesting : 2
Functionaltesting : 4
future : 38K
Git : 5,368
Graphicaluserinterface : 2
grasshopper :
graylog2 : 6
HardWork : 27K
help : 129K
helpingothers : 333
interactiveautomation :
interest : 1,067
investigate : 197
IRB : 274
java : 29K
Jenkins : 449
JeremyLin : 373
job : 1M
Kids : 75K
Leader : 13K
leaders : 7,144
Leadership* : 59K
Learn : 11K
Libraries : 3,339
Library : 16K
Linux : 68K
listener : 228
manager : 19K
masterpo : 1
MongoDB : 2,790
NBA : 246K
NewYorkKnicks : 1,478
Opensource : 10K
Opportunity : 5,545
organization : 2,771
Organizations : 276
payitforward : 4,025
Perserverance : 169
planner : 2,509
practice : 14K
ProblemSolving : 408
problems : 12K
program : 5,688
pursuer : 1
recognition : 1,660
recognize : 1,541
Redmine : 112
reluctantleader : 0
Resistance : 2,759
Revisioncontrol : 0
rewarding : 372
Ruby* : 12K
RubyOnRails : 844
scripts : 1,210
servant : 493
Service : 17K
software : 28K
softwaredevelopment : 1,483
softwaretest : 19
Softwaretesting : 976
speaker : 2,599
Splunk : 273
success : 63K
team : 60K
testautomation : 78
testmanager : 17
TestStack : 0
testingautomationgeneratecases :
tests : 2,539
toolsupported :
Trac : 245
training : 41K
trust : 30K
Unittesting : 70
vendor : 429
vendortesttools :
vendortools :
victimofchanges : 1
VM(operatingsystem) : 1,364
WatirPodcast : 0

Note: when I scraped the tags, I removed the spaces between words, as is typical of Twitter hashtags.

Issue & Story Tracking

Everywhere I have ever worked had a defect tracking system and a requirements system, sometimes merged into one. Some were off the shelf, some were homemade, and others were services. I never felt the need to install one for work, but I did want to see what was involved in doing so, including how to integrate it with other products in the test stack.

I didn’t look around much for an open source solution. I know Bugzilla is commonly used, and I also looked at Mantis and Trac. I decided to go with Redmine because it’s a Ruby on Rails app.

Installation

The biggest issue was identifying which ports to use. I already had a Puppies application that I was testing as part of working through the Cucumber & Cheese book. I added a Graylog2 application, which is also Ruby on Rails for the web interface (the server component is Java).

I originally chose to use sqlite3 because it was already installed on my system. I knew that I could use IRB to crack into the database if I needed to, and it turned out that I did: I had to create an admin user (the default one did not work). I eventually switched to MySQL.
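
In case it helps, here is roughly what that kind of poking around looks like from IRB, assuming the sqlite3 gem. The database path is a guess for your install; the users columns come from Redmine’s schema:

# gem install sqlite3
require 'sqlite3'

# Open the Redmine database file (the path is an assumption for your install)
db = SQLite3::Database.new('db/redmine.sqlite3')

# List the accounts and their admin flags
db.execute('SELECT id, login, admin FROM users') do |row|
  p row
end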

Integrations

I integrated it with Git. I had trouble getting it working until I discovered that I needed to point it at the .git directory of my project directory. By including issue IDs in my check-in comments (for example, “Fix login validation, refs #123” with Redmine’s default reference keywords), Redmine automatically associated the issue with the check-in. The comment showed up on the repository page and the issue page. It supports diffing, too.
[Screenshot: Redmine’s repository page showing a Git check-in linked to an issue]

I was able to install a Hudson plugin into Redmine (technically it is for Hudson, but it works with Jenkins) to integrate with my Jenkins. This let me see in which builds the associated check-ins were built. I also installed a Redmine plugin into Jenkins so that it could show the Redmine issues associated with each build in aggregate.

Redmine also supports multiple projects and sub-projects. The wiki and news features seemed useful for keeping track of project information.

There is role-based access control, which probably isn’t necessary for an agile team, so I ignored that capability. Also non-agile (in my opinion) are the time tracking and Gantt charts.

Conclusion

I found all of the features to be useful. If I were going to build my own test stack, I would enjoy using this component.

Good Practices For Automating Functional Tests

Why

I spend a lot of time talking about the benefits of automating tests – automated checks, as explained by Michael Bolton. I call them automated tests out of habit (and for clarity to the uninitiated). Part of the responsibility of teaching this subject is teaching people to follow good practices. Almost all of these practices I lifted from somebody else.

Why do I like automated tests? Because I want to reduce the time between when a problem is introduced and when it is reported to the person who introduced it. Somewhere back in history, someone measured the cost of fixing defects and found that it grows the longer a defect goes unnoticed. That is a generalization, but I take it seriously because my job is to help people make good products. My value is in the reduction of costs and the increase in benefit I create for the pay I receive; when I do that more effectively, my value increases. Since I work in the computer software business, I ought to be using technology to reach those ends.

Balance in what you test (Unit, Service, GUI)

I have heard about the automated testing pyramid from several people and read about it on many blogs, such as Mike Cohn’s – I don’t know who originated the idea, but I first heard about it from Janet Gregory when she trained my team at HP. If all of the tests for a product run through the graphical user interface, then a lot is not getting tested; the tests tend to cost more to maintain, because interfaces tend to change more than classes and services do; and problems are found later in the product cycle, because GUI tests tend to require that all of the product layers be built (regardless of the order in which they are created).

Not Everything Should be Automated

James Bach recently wrote a blog post on the skill required to create scripted tests. The lesson I took away was that the cost of scripting tests to the point that they become checks is high even when people are running them. The investment goes even higher when the interpreter is a computer program, because the instructions need to be so much more precise. The return is lower because the interpreter only sees what it is told to see.

Then where is the value? In automating the activities that cost less to automate than to perform manually. Consider partial automation as a great alternative to the automate/manual question; a sketch follows the list below.

  • Injecting data to set up the test scenario – possibly SQL queries or web service calls
  • Verifying the unseen changes – SQL queries (new/changed records), parsing logs, or web service calls
  • Navigating to the location of the test – this could be opening web pages, logging in, and going to a certain page
  • Capturing screen shots for human eyes to review
  • Notifying testers of environment changes
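
Here is a minimal sketch of what partial automation can look like, assuming watir-webdriver and the mysql2 gem. The table, pages, and element IDs are hypothetical:

require 'watir-webdriver'
require 'mysql2'

# Inject data to set up the scenario (faster and more reliable than the GUI)
db = Mysql2::Client.new(:host => 'localhost', :username => 'test', :database => 'app_test')
db.query("INSERT INTO accounts (login, status) VALUES ('tester1', 'active')")

# Navigate to the location of the test, then hand off to a human
browser = Watir::Browser.new
browser.goto 'http://localhost:3000/login'
browser.text_field(:id => 'login').set 'tester1'
browser.text_field(:id => 'password').set 'secret'
browser.button(:id => 'submit').click

# Capture a screen shot for human eyes to review
browser.screenshot.save 'after_setup.png'
puts 'Setup complete -- explore from here.'
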
I recently wrote a blog post on using Interactive Ruby to support manual testing with automation, which may help you see how the automated and manual thought processes marry.
One more thing to consider: do not automate tests that will not be run repeatedly. Do not even script them. Keep notes on what was done just in case you need to do some forensic analysis. Just don’t automate them!

Pass/Fail Criteria

The first mistake I ever made in automated testing was to think I had automated a test by creating the automated navigation. Without some criteria to know whether the test passed, the “result” is useless.

I like to know which tests failed separately from which did not complete. If the problem is not what you are testing for, it is an exception. When a test fails, somebody has to figure out whether there is a failure in the product, whether the product changed without a matching change in the tests, or whether there is a failure in the tests themselves (presumably, a good practice was not followed). If there is a failure in the product, it becomes a defect that will be fixed or not fixed – either way, it is a known problem. In the case of the incomplete test, the problem is unknown because we never actually saw what happens at the end – you probably do not know whether the test would have passed or not.
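
Test frameworks generally make this distinction for you. In Ruby’s minitest, for example, an unmet assertion reports as a failure (F) while an unexpected exception reports as an error (E):

require 'minitest/autorun'

class ResultKindsTest < Minitest::Test
  def test_a_failed_check
    assert_equal 5, 2 + 2   # reports as a Failure: the check ran and did not pass
  end

  def test_an_incomplete_test
    raise 'could not reach the page under test'   # reports as an Error: the check never ran
  end
end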

Layered Framework

The first thing I got right when I started automating tests was to create layers. In fact, I was so sure that it was the only possibility for being successful that I was shocked to hear Dorothy Graham talk about the idea in Lightning Strikes the Keynote at StarWest 2010 (as if it were a new idea). Maybe some automation tools make this separation so difficult that some people don’t do it.

The best way to describe this is to separate the what from the how. The what should be your test cases (like general instructions for a manual tester); the how should be your test framework (classes and methods). The what is the business logic that needs to be tested (or the workflow, or whatever); the how is the specific instructions for dealing with the interface (click this, fill in that). The what is your customer’s actions; the how is the implementation of the interface that supports what the customer wants to do.

The purpose is to simplify maintenance when the product creators (the dev team, in my case) change the how. We often do not see that separation in step-by-step manual test scripts (also called checks by the experts). If we change the submit-form action from clicking a button to a gesture (such as a Bewitched nose wiggle), there is one place to change the code so that all tests incorporating that form submission will work.

I have seen the separation done in many different ways and at many levels. In some cases, the framework supported domain-specific actions (log in, set field, submit form). In other cases, the framework supported transactions (create account, update profile, purchase subscription).
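
As a minimal sketch of the separation, assuming watir-webdriver and hypothetical page and element names:

require 'watir-webdriver'

# The how: one class knows the mechanics of the login form
class LoginPage
  def initialize(browser)
    @browser = browser
  end

  def login(username, password)
    @browser.text_field(:id => 'username').set username
    @browser.text_field(:id => 'password').set password
    @browser.button(:id => 'submit').click   # if submit becomes a gesture, change only this line
  end
end

# The what: the test reads like the business action
browser = Watir::Browser.new
browser.goto 'http://localhost:3000/login'
LoginPage.new(browser).login('cheezy', 'secret')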

Run 1 Check per Test

I like the idea of separating each test from the others – not because I care how many test cases exist, but because I want failing and passing results kept separate. I do not find value in doing and checking 10 things if they all pass or all fail together because one check failed. This also means going straight to the functionality under test. Did you want to test the navigation? Do that in another test.

What about an end-to-end test? For example, suppose you want to test creating an order, fulfilling the order, charging a credit card, and notifying on completion. If it is a straight-through “complete” use case, then you are still testing one thing.

Timing Dependencies

There are two timing-related considerations here. First, sometimes tests must wait: they wait for a web page to load, or they wait for JavaScript to complete. You should not wait by guessing how long is necessary. Many tools come with wait-until style functions, such as watir-webdriver’s browser.div(:id => 'javascript-complete').wait_until_present, which delay the script just the right amount of time. If you don’t get that in your tool, create loops that check for existence with a timeout, as in the sketch below.
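
A hand-rolled wait loop with a timeout looks something like this; the element identifier is hypothetical, and browser is assumed to be a Watir::Browser from earlier in the script:

require 'timeout'

# Poll the block until it returns true, or give up after the timeout
def wait_until(timeout = 30, interval = 0.5)
  deadline = Time.now + timeout
  until yield
    raise Timeout::Error, "condition not met within #{timeout} seconds" if Time.now > deadline
    sleep interval
  end
end

# Usage: block until the element exists, checking every half second
wait_until(10) { browser.div(:id => 'javascript-complete').exists? }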

The second timing consideration is dependence on something happening when we don’t know when it will happen. Suppose I create a situation that will trigger a notification when the notification scheduler runs, but I don’t know exactly when it will run. I can either help it along by triggering the notification scheduler manually or… consider not doing that test. Nobody wants the automated tests hung up for 35 minutes.

Do Not Assume the Data Exists

In a previous job, the product under test came with a sample database, and I found that it was often used to support manual tests. The problem with assuming the sample data will be there to support the test is that other tests could change or remove the data. Manual testers will adjust by creating the data on the spot. An automated test will… stop.

When we converted the manual tests into automated tests, one of the first features we added to the framework was the ability to create the data needed to support each test. In that case, we used web service calls.

The Most Reliable Way to Use Data

A long time ago, I worked on a system that had almost no data to support my tests, and I would spend an hour creating data through the web interface. I hope that nobody does what I did, not even with a web automation tool. I solved that problem by learning how to import XML files with the data needed to support my tests. Since then, I have used APIs (including web service APIs) and SQL scripts to inject data. Use the most reliable way possible; a sketch follows below.
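
For example, a minimal data-injection call over HTTP might look like this; the endpoint and payload are hypothetical:

require 'net/http'
require 'json'
require 'uri'

uri = URI('http://localhost:3000/api/accounts')   # hypothetical data-setup endpoint

request = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
request.body = { :login => 'tester1', :plan => 'trial' }.to_json

response = Net::HTTP.new(uri.host, uri.port).request(request)
# Fail loudly if the setup did not work -- a test without its data is an incomplete test
raise "data setup failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)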

Clean Up After Your Tests

“A job isn’t finished until you have cleaned up after yourself,” said my father. I say the same thing about testing. For the sake of the other tests that run after yours, consider cleaning up; a sketch follows below.
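
One simple way to make the cleanup stick is Ruby’s ensure, which runs whether the checks pass or fail. The helper methods here are hypothetical:

def test_update_profile
  account = create_account('tester1')   # hypothetical helper that injects the test data
  # ... exercise the profile update and check the result ...
ensure
  delete_account(account) if account    # hypothetical helper; runs even when the test fails
end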

Summary

I spent years learning these practices. Sometimes I learned the hard way; other times I was fortunate enough to learn from a seasoned professional. I did not want to call these best practices, because that would assume I know your context, and there are so many situations that I could not know the best practice for each of them. Consider each of these with the help of your team before making a decision. If the others who depend on the test results understand these practices, they can often make implementing them much easier.

Additional Acknowledgments

In addition to the people already mentioned in the post, I would like to recognize my co-workers Jean-Philippe Boucharlat and David C. Cooper, with whom I worked at Hewlett-Packard, for sharing their insights into automation best practices.

Update Notes – February 10 2012

I have updated the original post based on feedback from kind readers.