Tag Archives: Ruby

Watir Podcast Episode 59

The Watir Podcast is publishing podcasts again. In this week’s episode, Neil Manvar from SauceLabs tells us about the advantages of using SauceLabs, how to get started, and what mistakes can lead to trouble.

Please listen and give feedback on this podcast on SoundCloud (https://soundcloud.com/the-watir-podcast/episode-59-saucelabs), and let us know what you would like to hear about on future Watir Podcasts.

You can subscribe to The Watir Podcast feed (http://feeds.soundcloud.com/users/soundcloud:users:248873479/sounds.rss).

After a four year hiatus, we have resumed recording the Watir Podcast. It’s not at the same site. You can now find it on SoundCloud. Please listen and provide feedback.

This week, episode 56 features an interview with bigtime Watir developer Titus Fortner. He explains how Watir is releasing a beta of Watir 6, which supports geckodriver, Selenium 3, and Firefox 48.

My Customizations in ZSH

I have added the following as oh-my-zsh custom plugins. I did this because I like to sync my customizations between computers, but I don’t want them all to be active on every computer. I can tailor that with my .zshrc:
plugins=(git ruby gem mvn rvm sublime safepaste autojump ushare pipe directory hardware ackgrep  command-not-found)
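For reference, here is a sketch of how one of these custom plugin files gets wired up, assuming the default oh-my-zsh layout (where $ZSH_CUSTOM defaults to ~/.oh-my-zsh/custom); the two aliases inside are just a sample from my directory plugin below:

```shell
# Each custom plugin lives at $ZSH_CUSTOM/plugins/<name>/<name>.plugin.zsh;
# oh-my-zsh sources it when <name> appears in the plugins=(...) array above.
ZSH_CUSTOM="${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}"
mkdir -p "$ZSH_CUSTOM/plugins/directory"
cat > "$ZSH_CUSTOM/plugins/directory/directory.plugin.zsh" <<'EOF'
alias pu='pushd'
alias po='popd'
EOF
```

Because each plugin is its own file, syncing the whole custom directory between machines and then picking plugins per machine in .zshrc is all it takes.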

directory.plugin.zsh – this is supposed to be mostly directory stuff, but I added in my autojump statistics, disk space, and tar/zip commands. Seemed relevant to me.

alias as='autojump -s'
alias po='popd'
alias pu='pushd'
alias home='cd ~/'

alias md='mkdir -p'
alias mkdir='mkdir -p'
alias RF='rm -rf '
alias path='echo -e ${PATH//:/\\n}'

alias ..='cd ..'
alias ...='cd ../..'
alias ....='cd ../../..'
alias .....='cd ../../../..'
alias lsa='ls -a'
alias ll='ls -l'
alias lla='ls -la'
alias la='ls -a'
alias lf='ls *(.)' #just files
alias ld='ls *(/)' #just directories
alias df='df -kTh'
alias du='du -kh'
alias tj='tar -xvjpf '
alias t='tar xvfz '
alias tz='tar -xvzpf '
alias tb='tar -xvfj '

ackgrep.plugin.zsh – I called it ackgrep (my favorite search tool), but I use it for searching just about anything that would otherwise be annoying.

alias a='ack-grep '
alias apage='ack-grep --pager=less -r'
alias ainvert='ack-grep -v '
alias ag='which '
agg() { ack-grep -H "$@" $ZSH_CUSTOM }

alias f='find . |grep -v svn |grep '
alias findjar='find . -type f -name  *.jar |xargs -n1 -i -t  jar tvf {} |grep '

alias h='history'
alias hg='history|grep -v grep|grep ' 
alias hgg='history|grep '

alias pg='ps -eaf|grep -v grep|grep ' 
alias pgg='ps -eaf|grep '

alias lg='lla|grep '

pipe.plugin.zsh – I found these in a customization, either on the internet or on a VM that somebody created for me (probably from Steve G). I can’t even remember that I have them, but when I do, it’s easy living.

alias -g DN=/dev/null
alias -g EG='|& egrep'
alias -g EH='|& head'
alias -g EL='|& less'
alias -g ELS='|& less -S'
alias -g ETL='|& tail -20'
alias -g ET='|& tail'
alias -g F=' | fmt -'
alias -g G='| egrep'
alias -g H='| head'
alias -g HL='|& head -20'
alias -g LL="2>&1 | less"
alias -g L="| less"
alias -g LS='| less -S'
alias -g MM='| most'
alias -g M='| more'
alias -g NE="2> /dev/null"
alias -g NS='| sort -n'
alias -g NUL="> /dev/null 2>&1"
alias -g PIPE='|'
alias -g R=' > /c/aaa/tee.txt '
alias -g RNS='| sort -nr'
alias -g Sk="*~(*.bz2|*.gz|*.tgz|*.zip|*.z)"
alias -g S='| sort'
alias -g TL='| tail -20'
alias -g T='| tail'
alias -g TF='| tail -f '
alias -g TFN='| tail -f -n '
alias -g US='| sort -u'
alias -g VM=/var/log/messages
alias -g X0G='| xargs -0 egrep'
alias -g X0='| xargs -0'
alias -g XG='| xargs egrep'
alias -g X='| xargs'

ruby.plugin.zsh – some of these are in the standard ruby plugin. I wanted more because rvm doesn’t cooperate as much as it should on my zsh.

alias sgem='sudo gem'

# Find ruby file
alias rfind='find . -name "*.rb" | xargs grep -n'

# RVM & Ruby
alias rvm_login='/bin/zsh --login'
alias rl='rvm_login && rvm use 2.0.0'
alias ru='rvm gemset use '
alias rv='rvm use '
alias gg='gem list|grep '
alias bu='bundle update'

# Add RVM to PATH for scripting
export PATH=$PATH:$HOME/.rvm/bin

alias yd='yardoc features/**/*.feature features/**/*.rb lib/**/*.rb'

hardware.plugin.zsh – this is just to help me with my laptop annoyances. Like that stupid touchpad that my wrist rests on while I type.

getID() {
	xinput list|grep "Synaptics TouchPad"|awk '{print $6 }'|cut -d'=' -f2
}

alias laptopprimary='xrandr --output LVDS --primary'
alias tpf='xinput set-prop `getID` "Device Enabled" 0'
alias tpn='xinput set-prop `getID` "Device Enabled" 1'


Issue & Story Tracking

Everywhere I’ve ever worked had a defect tracking system and a requirements system, sometimes merged into one. Some were off the shelf, some were homemade, and others were services. I never felt the need to install one for work, but I did want to see what was involved in doing this, including how to integrate it with other products in the test stack.

I didn’t look around much for an open source solution. I know Bugzilla is commonly used. I also looked at Mantis and Trac. I decided to go with Redmine because it’s a Ruby on Rails app.


The biggest issue was identifying which ports to use. I already had a Puppies application that I was testing as part of my Cucumber and Cheese book. I added a Graylog2 application, which is also Ruby on Rails for the web interface (the server component is Java).

I originally chose sqlite3 because it was already installed on my system. I knew that I could use IRB to crack into the database if I needed to, and it turned out that I did: I had to create an admin user (the default one did not work). I eventually switched to MySQL.
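Cracking into a sqlite3 database doesn’t strictly need IRB; the sqlite3 command line works too. Here is a toy sketch of the idea (the table and values are stand-ins, not Redmine’s real schema):

```shell
# Create a throwaway database, then flip an "admin" flag the way you might
# repair a broken account. The users table here is hypothetical.
sqlite3 /tmp/demo.sqlite3 <<'EOF'
CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, login TEXT, admin INTEGER);
DELETE FROM users;
INSERT INTO users (login, admin) VALUES ('admin', 0);
UPDATE users SET admin = 1 WHERE login = 'admin';
SELECT login, admin FROM users;
EOF
```

The same queries work from IRB through the sqlite3 gem; the CLI just saves a step when you only need to poke at one row.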


I integrated it with Git. I had trouble getting it working until I discovered that I needed to point it at the .git directory of my project directory. By including issue IDs in my check-in comments, Redmine automatically associated the issue with the check-in. The comment showed up on the repository page and the issue page. It supports diffing, too.
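Out of the box, Redmine recognizes referencing keywords such as "refs" in commit messages; the issue number below is hypothetical:

```shell
# A commit whose message references Redmine issue #42. Once the repository
# is attached to a Redmine project, Redmine links this commit to the issue.
mkdir -p /tmp/redmine-demo && cd /tmp/redmine-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "sample" > notes.txt
git add notes.txt
git commit -q -m "Tidy the notes page, refs #42"
git log -1 --pretty=%s
```

The referencing (and fixing) keywords are configurable in Redmine’s repository settings, so "refs" can be swapped for whatever your team prefers.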

I was able to install a Jenkins plugin (technically it is for Hudson, but it works on Jenkins) to integrate with my Jenkins. This allowed me to identify which builds included the associated check-ins. I also installed a Redmine plugin so that Redmine could track builds on Jenkins, showing the issues associated with each build in aggregate.

It also supported multiple projects, and sub-projects. The wiki and news seemed relevant for keeping track of project information.

There is role-based access control. That probably isn’t necessary for an agile team, so I ignored this capability. Time tracking and Gantt charts are also (in my opinion) non-agile.


I found all the features to be useful. If I were going to build my own stack then I would like using this component.

What Everybody Should Know About Buying Test Automation Tools

Vendor Tools Fail for Developing Software

A question about test jobs and the required skills came up recently on my Twitter feed. Somebody pointed out that many ads list both Selenium and QTP. Why both? The person asking told me he had at one time used and known both. I think he asked the question because he knew the products were quite different. Or maybe because the hiring manager did not seem to know about that difference.

I have used many products, vendor, open source, and home-grown for automating tests through the years. A vendor test tool just means that somebody is selling it for money.  In this case, lots of money.

They are all  for the purpose of running tests (typically functional tests). However, they are so different that people tend to join a camp and stick with it.  What’s your preference?

My favorite is open source tools (in general), but I am good with whatever tool gets the job done. Productivity is the most important thing to me. Maybe that is why open source is my favorite. Let me share some of my analysis of the vendor products. Keep in mind that the context of my primary job is testing software that is under development. My analysis is greatly influenced by what I need to do.

Source Control

Vendor tools supply source control systems with their tools. I have seen this with QTP and Test Partner. So you have to install a database server or a test management system to connect to (and the test management system stores its data in a database like SQL Server or Oracle).

Is that good or bad? When the tools were formed, it was probably good – good because testers weren’t used to fooling with source control systems like svn, git, or Perforce; good because crazy people like me could ruin hours, days, or weeks of work with a screwed-up search/replace-all action; good because they captured so much data when they recorded scripts. Are they still good to have? At this point, I say no. They are not needed because people are using the dev source control system anyway – two systems are not better than one. They aren’t wanted because editing and using the tests depends on an active connection – people want a copy on their system and only sync when necessary.
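That last point, keeping a full copy on your own system and syncing only when necessary, is what a distributed system like git gives you for free. A minimal sketch (the paths are throwaway):

```shell
# A bare "server" repository plus a local working clone. All edits and
# commits happen locally; the push is the only step that needs the server.
rm -rf /tmp/dvcs-demo && mkdir -p /tmp/dvcs-demo
git init -q --bare /tmp/dvcs-demo/origin.git
git clone -q /tmp/dvcs-demo/origin.git /tmp/dvcs-demo/work 2>/dev/null
cd /tmp/dvcs-demo/work
git config user.email "demo@example.com"
git config user.name "Demo"
echo "login test" > smoke_test.txt
git add smoke_test.txt
git commit -q -m "Edit tests offline"   # no connection needed for this step
git push -q origin HEAD                 # sync when necessary
```

Contrast that with a vendor object repository, where editing a test at all means a live session against the central database.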


Plug-ins

Vendor tools’ number-one strength is the plug-ins they build to tackle automated testing on wicked technologies like Flex. And it’s not new: WinRunner and Astra QuickTest were some of the first products to support testing web technologies. This versatility helps sell the vendor tools.

Unfortunately for them, many of the technologies they sell plug-ins for will come and go. I believe that HTML5 will push those EOLs a little faster. In the meantime, open source alternatives are available for widely used languages like Java, Ruby, and Perl.

Integrations with other tools

Automated test tools integrate with other products like test management systems, defect tracking systems, and requirements tracking systems. There was a time that vendor-sold tools were the most integrated products. Why? Because they often owned all of the tools. They were able to integrate the products through internal knowledge sharing by teams. The integrations, unfortunately, were often based on technologies like dynamic data exchange (DDE) and object linking & embedding (OLE). That was good once upon a time, as were video cassette recorders.

Today, products can integrate through the native language when they are libraries. When they are not, RESTful web services are convenient and reliable. I have seen substantial efforts to update vendor products to use modern integration technologies, but they are often handcuffed by their customer base’s reliance on current product models.

Visually Oriented/Helpful

The first automated tests I ever wrote were tests that I didn’t write. I used a process that has been around since Microsoft Test (later Visual Test, later Rational Visual Test) called Record & Play. Was that helpful? As a learning experience, it was helpful. The tests created by the recordings were not helpful. The recordings are not helpful for long because they do not follow Good Practices for Automating Functional Tests. Even the tool vendors will concede that the recordings must be edited to minimize maintenance in the future.

Try creating the scripts without the recorder. I haven’t used some of the major vendor tools, but I know with QTP that I wasn’t able to build the script without the recorded objects. The objects are your friend early on. They help you get the script created without knowing anything. Eventually, though, you will need to fix all those tests because the software under test has changed. How many tests depend on that object? Are there multiple objects that look similar? Can you effectively minimize the number of objects during your development iterations as the product under test changes? All these concerns make me want to skip the object repositories. I want to know what is going into my tests, what is coming out, and be able to see what is happening behind the scenes.


Customer Support

Vendor tools sell themselves well with customer support. Actually, they sell the customer support: the revenue from maintenance is usually higher than the revenue from sales. They can do this because the customer needs the fixes and improvements beyond the version they purchased. There is absolutely no DIY for those tools. The good part is that you do not have to fix it; the bad side is that you can’t.

There are forums for discussing issues in open source tools and libraries. The product authors and other volunteers answer questions about the products to help both experienced and new users, including providing direct access to defect tracking systems. When the cause of an issue is found, the plans for release are posted, and the user can resolve the issue themselves in their own library code base.

Foot Print

For software, the foot print is the total impact on a computer – space required to install on the hard disk drive, RAM used to run it (both edit and execution modes), licensing connections, integration connections, database connections, and cost to license.

The foot print affects every person that will use the automated test scripts to learn about the product under test. Today’s development methodologies are designed to reduce the gap in time between when a defect is introduced and when it is fixed. Build it regularly and test it regularly using continuous integration systems (e.g. Hudson). Let the failing tests be run on the system of the developer who checked the defective code into the repository. Foot print matters.

The vendor tools are bulky and expensive, to say the least. QTP 11 requires a minimum of 1 GB of storage and 1 GB of RAM. A user can get by without any connections (to a database or to QC), but not a team. I haven’t been able to find the exact cost, but I am seeing ~$6,000 in my Google searches.

Test Complete 8 requires a minimum of 700 MB of storage and 256 MB of RAM. It’s $1,000 for an individual node-locked license and $4,500 for an enterprise floating license (prices in between for mixes of those two).

Telerik Test Studio requires 500 MB of storage and 500 MB of RAM. Each of those three leading vendor tools requires Windows XP, Windows Vista, or Windows 7. Their most popular package (some tools but missing most) is $1,300, while the complete package for one user is $7,334.

Compare those minimum requirements to the Selenium test automation tools. Selenium is 31 MB (server and Java client) and runs on Windows, Mac OS X, Linux, and Solaris. Ruby, the core language for the Watir and Watir-Webdriver libraries, supports Windows, Mac OS X, Linux, Solaris, and OpenIndiana (which I had never heard of). Ruby is 275 MB (measured for v1.9.3), with Watir-Webdriver requiring a small additional install. They are both free, the support is free, and the maintenance is $0. That’s a huge difference.


For automating tests on software development projects, I believe the vendor tools are inferior to the open source tools. It seems counter-intuitive – why wouldn’t the vendor tools adopt the capabilities of the open source tools?

First, they can’t, because their install base requires them to remain the same. These products have been selling for a decade or more. People, departments, and companies rely on the tools they have invested so much time in. Regardless of whether their tests are brittle, they cannot choose (or allow the vendor to choose) a radical change to how their systems are built. So instead the vendors build new features and modules to sell while the core remains the same.

Second, they do not want to, because their target is a vast market of uninformed manual testers and managers. I have walked through vendor showcases at StarWest. They have interfaces that are way sexier than Vim. The trials seem so easy. “Come on in here, automate a test – everybody is doing it!” So you do it, and it works beautifully. Of course, you don’t have to run that test on your system, against your software, or maintain it. Their core target usually knows none of the good automation practices they should follow. But their companies take their advice and pony up the money.

As I mentioned before, the context of my job plays a significant role in my analysis above. If I were choosing a product for testing in an IT department (which is service-oriented), my priorities would change, and the tools I favor might change. I do not dislike vendors. I do not think they mean harm; in fact, they provide utility.