Category Archives: Technical Debt

Work created for the future. Manual tests, broken automated tests, unresolved defects, poor requirements, poor functions, etc.

My Ubuntu Java Environment Setup Script

I know this has been done before. And I know mine isn't so great (I left out ALL error checking), but this is my quick setup script for when I install Ubuntu on a laptop or VM. I keep it on a flash drive just in case, because "just in case" has happened to me three times in the past month or so. Note: a couple of lines wrapped, so I used \ to mark them.

# export BACKUP=/media/MYFLASHDRIVE/backup
export BACKUP=/media/LIFESTUDIO/MediaBackup/Downloads
export ME=dave
export GRP=dave

delete() {
	if [ -f "$1" ]; then
		sudo rm -f "$1"
	else
		echo "$1 does not exist"
	fi
}

install() {
	if [ ! -x "/usr/bin/$1" ]; then
		sudo apt-get install -y "$1"
	fi
}

install curl
install git
install zsh
if [ ! -d ~/.oh-my-zsh ]; then
	curl -L \ | sh
fi
install autojump
install ack-grep
cd ~/.oh-my-zsh
tar -xvzpf $DOWN/custom.tar.gz	# $DOWN is assumed to be set to the download directory
sudo chown $ME:$GRP ~/.oh-my-zsh/custom
cp $DOWN/config.zshrc ~/.zshrc

if [ ! -d /usr/share/jdk1.7.0_40 ]; then
	tar -xvzpf $BACKUP/jdk-7u40-linux-x64.tar.gz
	sudo mv jdk1.7.0_40 /usr/share
	cd /usr/bin
	delete java
	sudo ln -s /usr/share/jdk1.7.0_40/bin/java
fi

if [ ! -x /usr/bin/mvn ]; then
	sudo apt-get install -y maven
fi

# IntelliJ IDEA
if [ ! -d /usr/share/idea-IC-129.713 ]; then
	tar -xvzpf $BACKUP/ideaIC-12.1.4.tar.gz
	sudo mv idea-IC-129.713 /usr/share
	cd /usr/bin
	delete idea
	sudo ln -s /usr/share/idea-IC-129.713/bin/idea.sh idea
fi

install jmeter

if [ ! -x /home/dave/.rvm/rubies/ruby-2.0.0-p247/bin/ruby ]; then
	install curl
	curl -L | bash -s stable --ruby
	/bin/bash --login
	rvm install ruby-2.0.0-p247
	sudo chown -R dave:dave .gem
	gem install map_by_method
	gem install what_methods
	gem install bundler
fi

# Sublime Text 2
if [ ! -x /usr/bin/sublime ]; then
	cd ~
	tar xf $BACKUP/Sublime_Text_2.0.2_x64.tar.bz2
	sudo mv 'Sublime Text 2' /usr/share/Sublime_Text_2
	cd /usr/bin
	sudo ln -s /usr/share/Sublime_Text_2/sublime_text
	sudo ln -s /usr/share/Sublime_Text_2/sublime_text sublime
	sudo cp $BACKUP/sublime.desktop /usr/share/applications
	sed 's/gedit.desktop/sublime.desktop/g' /usr/share/applications/defaults.list > ~/defaults.list
	sudo cp ~/defaults.list /usr/share/applications/
fi

# Skype and call recorder
if [ ! -x /usr/bin/skype ]; then
	sudo dpkg -i $BACKUP/skype-ubuntu-precise_4.2.0.11-1_i386.deb
fi
if [ ! -x /usr/bin/skype-call-recorder ]; then
	sudo dpkg -i $BACKUP/skype-call-recorder-ubuntu_0.10_amd64.deb
	sudo apt-get -f install
fi

# Favorite browser
if [ ! -d /opt/google/chrome ]; then
	install libxss1
	sudo dpkg -i $BACKUP/google-chrome-stable_current_amd64.deb
fi

# Other personalizations
if [ ! -x /usr/bin/dconf-editor ]; then
	sudo apt-get install -y dconf-tools
fi
install nautilus-open-terminal
install ushare
install gimp


Improving the Value of Testing – Security!

Do what you say and say what you do.

I think I got that from an ISO 9001 audit preparation meeting in the mid-90s, during an effort to sell fax machines that we were manufacturing at HP into the EU. I like it, so I try to use it.

Do What You Say

I said that I was going to try improving the value of testing. What would be better than security testing? A bunch of things, you might say. But the reality is that security is the highest risk your products face. The bad guys understand more than you do, and probably more than the people who make the security tools you already use. As for me, I do not understand much about what the tools do, or even the difference between SQL injection and cross-site scripting.

Say What You Do

So I am going to venture into this a little by trying some security testing with tools that I get from wherever. I will even make some home-grown tools if possible, because I like to build and I like control. That should raise my understanding to a higher level, in my opinion.

My first attempt was to crack open an old book, How to Break Web Software by Mike Andrews and James A. Whittaker. Things change a lot in six computer years. All the web services are in SOAP – yuck. That's like getting your mouth washed out. And almost all the tools are for Windows, but I primarily use a Mac. Still, I think I can get some concepts out of this. I tried the Paros proxy, but it is not working for me yet.

So I moved on to SoapUI. That's an old friend, but I have never used it for security or REST. I spent some time trying to simply send a POST request to my system under test, but SoapUI crashed. And crashed. And crashed. I tried five times before I went to their forum and found a recent unanswered post called Clean Install: Mac OSX beach ball of death. Oh dear.

I spent a lot of time on those without getting anywhere. Edison would have said that I learned some ways that it doesn’t work. I will add more as I have time and additional information!

Improving the Value of Automated Testing

I have an idea. I will not get too high on it, except that it is intriguing to me. Maybe the idea is not new to other people. It came to me while I was thinking about the test automation pyramid (or ice cream cone, the typical shape as automation expert Alister Scott observed). I am fond of the three-layer concept – enough so that I made myself learn how to write unit tests and service-layer tests, in addition to the GUI-based tests that are commonly practiced by software testers.

I am thinking about the shape of the triangle – which part should be how big, and so on. Suddenly the problem hits me. The problem isn't the investment in automated tests. It isn't the maintenance (which I had thought was a big issue). Alright, I am lying. Investment is half of the equation – the return on investment calculation. The problem is the (lack of) focus on the return for the investment.

So much is invested in tests that will find what? Low severity defects that do not halt releases? I do not care about low severity defects until I have sussed out the high severity defects. I care even less about them when I am faced with making a substantial investment to find them. Where does that leave me? Focus on the big bugs. Automation for big bugs. Which bugs? The ones we do not want to ship with. The ones that require a patch if you miss them. The ones that make you slip release dates if you catch them late.

“I don’t care about low severity defects until I have sussed out the high severity defects.”

I expect to get criticized for this idea. Why? I have no experience with it. I have never tried it. It is an untried idea. What could be less valuable than a hypothesis? I can try it, but this kind of thing would take a while to prove itself out. I cannot worry about that while I am brainstorming, though. The idea will be flawed in many people's minds. Automated tests are really just automated checks. That is not new. People are not so much suspicious that the results are false positives; they do not like that the tests are little positives – that is, they check so little compared with what a person can do. I believe this concept too.

What I really believe in is the ends, not the means. The means is how I will get it done. The ends is what I want to accomplish. I care about the automation only when it effectively helps me find the good defects. What are those? In my world, those are the defects that reduce my organization's ability to meet agreed or assumed service level agreements. I call those Enterprise Readiness defects. Examples include poor performance, poor performance over time (for example, caused by a memory leak), poor resilience during or after high loads, problems with failover, and problems with data retention.

How can I accomplish that? By remembering that automation is just a tool in the toolbox.

More on this later…

Good Practices For Automating Functional Tests


I spend a lot of time talking about the benefits of automating tests – automated checks, as explained by Michael Bolton. I call them automated tests out of habit (and for clarity to the uninitiated). Part of the responsibility of teaching this subject is teaching people to follow good practices. Almost all of these practices I lifted from somebody else.

Why do I like automated tests? Because I want to reduce the time between when a problem is introduced and when it is reported to the person who introduced it. Somewhere back in history, someone measured the cost of fixing defects and found it was more expensive the longer a defect went unnoticed. This is a generalization, but I take it seriously because my job is to help people make good products. My value is in the reduction of costs and the increase in benefit I create for the pay I receive. When I do that more effectively, my value increases. Since I work in the computer software business, I ought to be using technology to reach those ends.

Balance in what you test (Unit, Service, GUI)

I have heard about the automated testing pyramid from several people and read about it on many blogs, such as Mike Cohn's – I don't know who originated the idea, but I first heard about it from Janet Gregory when she trained my team at HP. If all of the tests for a product run through the graphical user interface, then a lot is not getting tested, the tests will tend to cost more to maintain because interfaces tend to change more than classes and services do, and defects will be found later in the product cycle because GUI tests tend to require that all of the product layers are built (regardless of the order in which they are created).

Not Everything Should be Automated

James Bach recently wrote a blog post on the skill required to create scripted tests. The lesson I took away was that the cost of scripting tests to the point that they become checks is high, even when people are the ones running them. The investment goes even higher when the interpreter is a computer program, because the instructions need to be so much more precise. The return is lower because the interpreter only sees what it is told to see.

Then where is the value? Automating the activities that cost less to automate than to perform manually. Consider partial automation as a great alternative to the all-or-nothing automate/manual question.

  • Inject data to set up the test scenario – possibly SQL queries or web service calls
  • Verify the unseen changes – SQL queries (new/changed records), parsing logs, or web service calls
  • Navigate to the location of the test – this could be opening web pages, logging in, and going to a certain web page
  • Capture screen shots for human eyes to review
  • Notify of environment changes
I recently wrote a blog post on using Interactive Ruby to support manual testing with automation, which may help you see how the automated and manual thought processes marry.

One more thing to consider: do not automate tests that will not be run repeatedly. Do not even script them. Keep notes on what was done in case you need to do some forensic analysis later. Just don't automate them!
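As one example of partial automation – "verifying the unseen changes" by parsing logs – here is a minimal Ruby sketch. The log format and the OrderCreated marker are invented for illustration:

```ruby
# A sketch of checking a log for an expected, otherwise unseen, event.
def order_created_in_log?(log_lines, order_id)
  log_lines.any? { |line| line.include?("OrderCreated id=#{order_id}") }
end

sample_log = [
  "2013-09-01 10:00:01 INFO  OrderCreated id=1001",
  "2013-09-01 10:00:02 DEBUG CacheRefresh"
]
puts order_created_in_log?(sample_log, 1001)  # true
puts order_created_in_log?(sample_log, 2002)  # false
```

A person still decides what the result means; the script just saves the eyeballing.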

Pass/Fail Criteria

The first mistake I ever made in automated testing was to think I had automated a test by creating the automated navigation. Without some criteria to know whether the test passed, the "result" is useless.

I like to know which tests failed separately from which did not complete. If the problem is not what you are testing for, it's an exception. When a test fails, somebody has to figure out whether there is a failure in the product, whether the product changed without a matching change in the tests, or whether there is a failure in the tests themselves (presumably, a good practice was not followed). Assuming there is a failure in the product, you get a defect that will be fixed or not fixed – either way, it is a known problem. In the case of the incomplete test, the problem is unknown because we have not actually seen what happens at the end – you probably do not know whether the test would pass or not.
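Here is a sketch of that three-outcome idea in Ruby. The AssertionFailed class and the sample checks are made up for illustration; the point is that "fail" and "incomplete" are reported differently:

```ruby
# Distinguish a failed check (the criteria were evaluated and were false)
# from an incomplete test (something blew up before the check ran).
class AssertionFailed < StandardError; end

def run_check(name)
  yield
  [name, :pass]
rescue AssertionFailed
  [name, :fail]        # a known problem: the pass criteria were not met
rescue StandardError
  [name, :incomplete]  # an unknown: we never reached the check
end

results = []
results << run_check("title is correct") { raise AssertionFailed unless "Home" == "Home" }
results << run_check("price is correct") { raise AssertionFailed unless 9.99 == 8.99 }
results << run_check("page loads")       { raise "connection refused" }
results.each { |name, outcome| puts "#{name}: #{outcome}" }
```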

Layered Framework

The first thing I got right when I started automating tests was to create layers. In fact, I was so sure it was the only way to be successful that I was shocked to hear Dorothy Graham talk about the idea in Lightning Strikes the Keynote at StarWest 2010 (as if it were a new idea). Maybe some automation tools make this separation so difficult that some people don't do it.

The best way to describe this is to separate the what from the how. The what should be your test cases (like general instructions for a manual tester). The how should be your test framework (classes and methods). The what should be the business logic that needs to be tested (or the workflow, or whatever). The how should be the specific instructions for dealing with the interface (click this, fill in that). The what is your customer's actions. The how is the implementation of the interface that supports what the customer wants to do.

The purpose is to simplify maintenance when the product creators (the dev team, in my case) change the how. We often do not see that separation in step-by-step manual test scripts (also called checks by the experts). If we change the submit form action from clicking a button to a gesture (such as a Bewitched nose wiggle), there is one place to change the code so that all tests incorporating that form submission will still work.

I have seen the separation in many different ways and levels. In some cases, the framework supported domain specific actions (log in, set field, submit form). In other cases, the framework supported transactions (create account, update profile, purchase subscription).
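A Ruby sketch of the what/how separation. The LoginPage class and its printed "steps" are hypothetical; the point is that the interface details live in exactly one place:

```ruby
# The "how": interface-specific steps are hidden inside the framework class,
# so a change from a button click to some other gesture is fixed here once.
class LoginPage
  def initialize(io = $stdout)
    @io = io
  end

  def log_in(user, password)
    @io.puts "set field user=#{user}"
    @io.puts "set field password=#{'*' * password.length}"
    @io.puts "click submit"
  end
end

# The "what": a business-level test step that survives interface changes.
LoginPage.new.log_in("dave", "secret")
```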

Run 1 Check per Test

I like the idea of separating each test from the others. Not because I care how many test cases exist, but because I want separation of failing and passing results. I do not find value in doing and checking 10 things if they all pass, or if they all fail because one check failed. This also means going straight to the tested functionality. Did you want to test the navigation? Do that in another test.

What about an end-to-end test? For example, suppose you want to test creating an order, fulfilling the order, charging a credit card, and notification of completion. If it's a straight-through "complete" use case, then you are still testing one thing.

Timing Dependencies

There are two timing-related considerations here. First, sometimes tests must wait. They wait for a web page to load; they wait for JavaScript to complete. You should not wait by guessing how long is necessary. Many tools come with wait_until-type functions, such as browser.div(:name => 'javascript complete').wait_until_exists, which will delay the script the right amount of time. If your tool does not provide that, create loops that check for existence with a timeout.
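If you have to roll your own, a polling loop with a timeout might look like this sketch. The condition block here is just a stand-in for a real check such as "does this div exist yet?":

```ruby
# Poll a condition until it is true or the timeout expires.
def wait_until(timeout: 10, interval: 0.1)
  deadline = Time.now + timeout
  until yield
    raise "timed out after #{timeout}s" if Time.now > deadline
    sleep interval
  end
  true
end

ready_at = Time.now + 0.3
puts wait_until(timeout: 2) { Time.now >= ready_at }  # true
```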

The second timing consideration is dependence on something happening when we don't know when it will happen. Suppose I create a situation that will trigger a notification when the notification scheduler runs, but I don't know exactly when it will run. I can either help it along by triggering the notification scheduler manually or … consider not doing that test. Nobody wants the automated tests hung up for 35 minutes.

Do Not Assume the Data Exists

In a previous job, the product under test came with a sample database. I found that it was often used to support manual tests. The problem with assuming the sample data will be there to support a test is that other tests could change or remove that data. Manual testers will adjust by creating the data at that point. An automated test will… stop.

When we converted the manual tests into automated tests, one of the first features we added to the framework was the ability to create the data needed to support each test. In that case, we used web service calls.

The Most Reliable Way to Use Data

A long time ago I worked on a system that had almost no data to support my tests. I would spend an hour creating data through the web interface. I hope that nobody does what I did, not even with a web automation tool. I solved that problem by learning how to import XML files with the data needed to support my tests. Since then, I have used APIs (including web service APIs) and SQL scripts to inject data. Use the most reliable way possible.
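As a sketch, injecting a record with SQL can be as simple as building the statement from a hash. The customers table and its columns are hypothetical, and a real script would send the statement through a database driver (and properly escape the values) rather than printing it:

```ruby
# Build a simple INSERT statement from a table name and a hash of values.
# Illustration only: no quoting/escaping safeguards.
def insert_sql(table, row)
  cols = row.keys.join(", ")
  vals = row.values.map { |v| v.is_a?(String) ? "'#{v}'" : v.to_s }.join(", ")
  "INSERT INTO #{table} (#{cols}) VALUES (#{vals});"
end

puts insert_sql("customers", name: "Test User", credit_limit: 500)
```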

Clean Up After Your Tests

“A job isn’t finished until you have cleaned up after yourself,” said my father. I say the same thing about testing. For the sake of the tests that will run after yours, consider cleaning up.


I spent years learning these practices. Sometimes I learned the hard way; other times I was fortunate enough to learn from a seasoned professional. I did not want to call these best practices because that would assume I know your context. There are so many situations that I could not know the best practice for each of them. Consider each of these with the help of your team before making a decision. If the others who depend on the test results understand these practices, they can often make implementing them much easier.

Additional Acknowledgments

In addition to those already mentioned in the post, I would like to recognize my co-workers Jean-Philippe Boucharlat and David C. Cooper, with whom I worked at Hewlett-Packard, for sharing their insights into automation best practices.

Update Notes – February 10 2012

I have updated the original post based on feedback from kind readers.

Exploratory Testing with Interactive Ruby

I want to preface this post with two things. First, I didn't think of this myself. I heard Jim Knowlton interviewed on the Watir Podcast, and I also attended a webcast with Michael Bolton on using automation to support manual testing. Both inspired me to try this for myself. As some of my software friends are aware, I dig testing with Ruby and Watir (Web Application Testing In Ruby). I didn't know anything about Ruby or Interactive Ruby (IRB) before I started working with them, in about 2007-2008. But since then, I have found I am more productive by "trying" my code out in the IRB interface before adding it to my functions. Very handy!

My eyes were opened even more when I heard Jim talk about using IRB to facilitate his testing (and his co-workers') by creating libraries that assisted him. I started to envision things I could use it for: pumping data into a system to create conditions for testing business logic; grabbing near-real-time information from log files; quickly getting to the 'state' I want to explore; and grabbing unseen information about the interface I am looking at, such as the state of objects, and making it 'seen'. I remember being so excited that I asked other people to listen to the podcast.

The webcast with Michael was revealing too. I could see how, with a simple command line IO, inputting numbers and getting numbers back helped me quickly learn about a system I knew nothing about. He talked about strategies for exploration, as well as tactics that supported the strategies.

Those lessons came to fruition at my new job. I started out needing to test a web service. I have tested SOA web services before, but I had not tested them with Ruby. I saw the team I was joining using RestClient. They were modifying the 'get' and 'delete' calls based on the data set returned from the 'create' call. I asked myself how many times I wanted to copy that data; the answer was three times. So I pulled out the RESTful Web Services book that I had opened once or twice in the past, and found a snippet of net/http for calling web services. Once I figured out how to add the pieces I needed, in less than a day I had a class that could perform all three calls and retain the data, allowing me to test the web service API until I was satisfied that it was working according to the design. Running these from IRB allowed me to run manual tests. I kept changing the tests. Even though they were written in QC with instructions on parameters, I kept asking myself "what if I did it this way?", which allowed me to learn and satisfy myself that the API was strong.
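A sketch of the kind of class I am describing, using net/http. This is not my actual class; the host, the /subscribers path, and the remember step (in real use, the id would be parsed out of the create response) are illustration only:

```ruby
require "net/http"
require "uri"

# Wrap create/get/delete calls for one resource, retaining the id from
# the create call so the later calls can reuse it from IRB.
class SubscriberApi
  attr_reader :last_id

  def initialize(base = "http://localhost:8080/subscribers")
    @base = URI(base)
  end

  def create_request(body)
    req = Net::HTTP::Post.new(@base.path)
    req.body = body
    req
  end

  def get_request(id = @last_id)
    Net::HTTP::Get.new("#{@base.path}/#{id}")
  end

  def delete_request(id = @last_id)
    Net::HTTP::Delete.new("#{@base.path}/#{id}")
  end

  # stand-in for parsing the id out of the create response
  def remember(id)
    @last_id = id
  end
end

api = SubscriberApi.new
api.remember(42)
puts api.get_request.path     # /subscribers/42
puts api.delete_request.path  # /subscribers/42
```

From IRB, each request would be sent with Net::HTTP.start against the real host; here only the requests are built.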

I also saw the other team members copying and pasting data from the create API call into lynx (a text-based command line browser) for testing business logic. I found all the typing and copying unnecessary and mistake-prone, so I 'printed' the lynx command out, allowing me to copy and paste the entire thing. That let me get to the business of testing more quickly. I used the time gained to add a direct command from Ruby, reducing a step to run quicker still, and finally to incorporate the Mechanize library.
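The 'printing' trick is nothing fancy – just formatting a ready-to-paste command from the data the create call returned. The URL and session token here are made up:

```ruby
# Format a lynx command from data returned by the create call,
# ready to paste into a terminal.
def lynx_command(base_url, session_token)
  "lynx -dump '#{base_url}?session=#{session_token}'"
end

puts lynx_command("http://localhost:8080/report", "abc123")
# lynx -dump 'http://localhost:8080/report?session=abc123'
```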

The fruit of that labor was to let me test more quickly from the web service call through to the browser call. That in itself did not find any defects, but it enabled me to see the need for faster (virtual) browser calls, which did find three major defects and a minor defect. In spite of these gains, my manager was concerned about the cost of maintaining early automation – a lesson from James Bach's Test Automation Snake Oil: "Manual testing, on the other hand, is a process that adapts easily to change and can cope with complexity. Humans are able to detect hundreds of problem patterns, in a glance, and instantly distinguish them from harmless anomalies." I gave her a demonstration to show that my mind was in control of these tests. She felt at ease and asked me to continue.

The culmination came today, when I discovered a problem in a caching algorithm that would not have been detected with smaller, slower data flows. I had no intention of testing for load; I wanted to test functionality in a system environment with more realistic volumes. At this point I did create a Ruby "script," but even then it took command line arguments to let me modify the test cases as I saw fit. There were no magical red or green colors filling a test management system with results. Just tests that I ran as I felt they were needed, to learn whether this system was ready to release or not.

– Dave McNulla