I have an idea. I will not get too excited about it, except to say that it intrigues me. Maybe the idea is not new to other people. It came to me while I was thinking about the test automation pyramid (or ice cream cone, as automation expert Alister Scott named the shape it typically takes). I am fond of the three-layer concept – enough so that I made myself learn how to write unit tests and service-layer tests, in addition to the GUI-based tests commonly practiced by software testers.
I am thinking about the shape of the triangle – which part should be how big, and so on. Suddenly the problem hits me. The problem is not the investment in automated tests. It is not the maintenance (which I had thought was the big issue). Alright, I am lying. Investment is half of the equation – the return-on-investment calculation. The problem is the (lack of) focus on the return for the investment.
So much is invested in tests that will find what? Low severity defects that do not halt releases? I do not care about low severity defects until I have sussed out the high severity defects. I care even less about them when I am faced with making a substantial investment to find them. Where does that leave me? Focus on the big bugs. Automation for big bugs. Which bugs? The ones we do not want to ship with. The ones that require a patch if you miss them. The ones that make you slip release dates if you catch them late.
I expect to get criticized for this idea. Why? I have no experience with it. I have never tried it. It is an untried idea. What could be less valuable than a hypothesis? I can try it, but this kind of thing would take a while to prove itself out. I cannot worry about that while I am brainstorming, though. The idea will be flawed in many people’s minds. Automated tests are really just automated checks. That is not new. People are not so much suspicious that the results are false positives; they dislike that the tests are little positives – that is, they check so little compared with what a person can do. I believe that criticism, too.
What I really believe in is the ends, not the means. The means is how I will get it done. The ends is what I want to accomplish. I care about the automation only when it effectively helps me find the good defects. What are those? In my world, those are the defects that reduce my organization’s ability to meet agreed or assumed service level agreements. I call those Enterprise Readiness defects. Examples include poor performance, performance that degrades over time (for example, from a memory leak), poor resilience during or after high loads, problems with failover, and problems with data retention.
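To make the idea concrete, here is a minimal sketch (in Ruby, which I reach for when building tools) of a check aimed at one Enterprise Readiness defect: performance that degrades as a run ages, the symptom a memory leak often produces. The `workload` method is a hypothetical stand-in – a real check would exercise the system under test instead.

```ruby
# Sketch: detect performance degradation over time.
# Assumption: `workload` is a placeholder for a call into the system
# under test; the batch counts and threshold are illustrative only.

def workload
  # Hypothetical operation standing in for the real system call.
  (1..1_000).reduce(:+)
end

# Time how long it takes to run the workload a fixed number of times.
def batch_seconds(iterations)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  iterations.times { workload }
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

# Run several batches; if the last batch is markedly slower than the
# first, performance is degrading as the run ages -- a big-bug signal.
def degrading?(batches: 5, iterations: 10_000, factor: 3.0)
  times = Array.new(batches) { batch_seconds(iterations) }
  times.last > times.first * factor
end

puts degrading? ? "WARN: performance degrades over time" : "OK: performance stable"
```

The point is not the thresholds, which would need tuning per system; it is that the check targets a release-stopping class of defect rather than a low severity one.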
How can I accomplish that? By remembering that automation is just a tool in the toolbox.