Photo “Seagulls on the American River” courtesy of Vince Mig.
Testing a REST API
When I think of rest, it’s usually the kind that requires a bed or a comfortable couch. But these days I spend more time verifying intermediate components of larger systems that take advantage of the synchronous behavior known as a RESTful web service than I do working on web GUI interfaces.
The differences seem apparent, but not always. The web GUI is intended for a person to use, while the REST API is meant for another system to use.
| Web GUI | REST API |
| --- | --- |
| A person uses it | Another system uses it |
| More likely to have errors in usage | Able to produce errors more quickly |
| Details are handled by the browser | Many details must be provided explicitly through operations and headers |
Yet there are also similarities. I explain more about them in What to Test.
- Features are driven by business logic
- The inputs are controlled by technical logic
- Feedback should be appropriately helpful
- Service may need to perform at various load levels
- Service should be appropriately secure
For these reasons, I look at the context of the service when deciding what to test and how to test it. I won’t plan too far into the future; I manage that through the design of my tools. I will get into that more in How to Test.
What to Test
The business logic is the purpose. Whatever qualities exist or don’t, purpose is the reason. I make that my primary concern. I think about what the service is supposed to do and when. I concentrate not just on the inputs and outputs, but on which actors are involved in which ways.
I also want to verify the technical components of the interface, such as variable types and ranges. That includes the body, which may be XML, JSON, or something else. I test variations of the headers: some are used for authentication, some for content description. As with any interface, feedback is important. Did everything go well, and if not, what went wrong?
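As a sketch of what those technical components look like in practice, here is a request built with Ruby’s standard `net/http` library. The host, token, and payload are hypothetical; the point is that method, headers, and body must all be supplied explicitly.

```ruby
require "net/http"
require "uri"
require "json"

# Build a request the way a REST client must: explicit method, headers, body.
# The host, token, and payload below are hypothetical.
uri = URI("https://api.example.com/accounts/42")
request = Net::HTTP::Put.new(uri)
request["Authorization"] = "Bearer example-token" # authentication header
request["Content-Type"]  = "application/json"     # content-description header
request.body = JSON.generate({ "name" => "Test Account" })

puts request.method            # => "PUT"
puts request["Content-Type"]   # => "application/json"
puts JSON.parse(request.body)["name"]
```

Each of these parts (type, range, header, body shape) is something the browser would normally hide from you, which is exactly why they all need deliberate tests here.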
There is professional performance testing and shoestring performance testing. Most of my experience is in the latter. I focus on the work flows and data flows. This creates the right balance of activities. I also build in appropriate timing. The key thing to remember is that the client for a REST service isn’t a person, it’s another server; there is no thinking time on the part of the agent. I create logs that indicate what errors have occurred and how long transactions are taking.
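A shoestring version of that logging might look like the following sketch. `call_service` is a stand-in for a real REST call; the wrapper records how long each transaction took and any error it raised.

```ruby
require "logger"

LOG = Logger.new($stdout)

# Stand-in for a real REST call; :bad_call simulates a server error.
def call_service(name)
  sleep(rand / 100.0) # simulated server latency
  raise "500 Internal Server Error" if name == :bad_call
  "200 OK"
end

# Wrap each transaction so both duration and failures end up in the log.
def timed(name)
  started = Time.now
  result = call_service(name)
  LOG.info("#{name} #{result} in #{((Time.now - started) * 1000).round(1)} ms")
  result
rescue => e
  LOG.error("#{name} failed after #{((Time.now - started) * 1000).round(1)} ms: #{e.message}")
  nil
end

timed(:get_account)
timed(:bad_call)
```

With no thinking time between calls, timestamps like these are often the only way to see where a transaction stalled.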
How to Test
I find myself eager to invest in exploratory testing to verify REST services. Often requirements and acceptance criteria are limited, with very little what-if discussion. Creating agents in Ruby has helped me accomplish this – see Exploratory Testing with Interactive Ruby. I hope to one day see if I can apply the same principle to other interactive shells, allowing me to test in-process with, say, a Java-based web service.
I create a REST client using Ruby (or, more recently, JRuby).
- I make contact through any valid service call (preferably a GET). I used to use ‘net/http’ or something like it (I even used Mechanize at one point), but eventually I created my own convenience gem called rest_baby.
- I expand that to one successful service call. Sometimes that would mean creating initial data, such as an account to connect with. Typically, I have to resolve the authentication, depending on how that is designed.
- I build on that by adding ways to track the data that gets sent to and returned from the service. Instance variables are convenient for that, tailored to match the need. I may create an array to track a group. I like to declare them as attr_accessor because it allows me to access and change them from within IRB.
- I add more service calls. I want to make sure each type of operation is validated. Typically there will be more services as well.
- I incorporate techniques learned from Cucumbers and Cheese (Chapter 5, Default data section) to create default data.
- I figure out ways to modify that data to serve the test needs. I validate all of these using interactive tests with IRB. For example, I created optional parameters to update and remove JSON elements in order to test our JSON validation.
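The steps above might come together in a skeleton like this. All the names are hypothetical; the pattern is the part that matters: instance variables exposed with attr_accessor so they can be inspected and changed from IRB, default data as a starting point, and optional parameters that update or remove JSON elements to produce deliberately invalid bodies.

```ruby
require "json"

# Hypothetical client skeleton following the steps described above.
class RestClient
  # attr_accessor makes these inspectable and changeable from within IRB.
  attr_accessor :last_request, :last_response, :responses

  def initialize
    @responses = [] # an array to track a group of results
  end

  # Build the default payload, then apply test-driven mutations.
  def build_body(update: {}, remove: [])
    body = { "name" => "default", "type" => "standard" } # default data
    body.merge!(update)                 # add or overwrite keys
    remove.each { |key| body.delete(key) } # drop keys to simulate bad input
    @last_request = JSON.generate(body)
  end
end

client = RestClient.new
client.build_body                               # valid default body
client.build_body(update: { "extra" => true })  # extra, unexpected key
client.build_body(remove: ["type"])             # missing required key
puts client.last_request
```

From IRB, `client.last_request` and `client.responses` stay available between calls, which is what makes this style of exploratory poking convenient.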
As a tester, I want to have my business logic tested through Cucumber so that I can depend on that behavior through time and change. Having a client gives me the opportunity to exercise the behavior. When I have questions remaining, and I always have questions, I can choose between additional scenarios, exploratory tests, or both.
Scenario outlines make it easier to see more variations. I will also continue to use IRB to learn more about those behaviors. Just because we use scrum stories to create software doesn’t mean we know everything ahead of time.
When I checked message bodies for JSON validity (missing keys, extra keys, and malformed JSON), I was looking to make sure the validation techniques worked correctly. I was not checking business behavior. I concentrate on the following technical aspects of the API:
- Variations of URLs. The lingo is URI, which breaks the entire URL into parts (e.g. protocol://host:port/path?param1&param2).
- Parameters, whether in the URI, headers, or body. I check the range of values, which parameters and components are required, and the behavior of required versus optional parameters.
- Feedback – the nuggets are in the standard HTTP response codes and the body text. The design of these can vary a great deal, so I incorporate that into my tests.
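Ruby’s standard URI module does that breakdown for you, which makes it easy to vary one part at a time. A small sketch, with a hypothetical address:

```ruby
require "uri"

# Break a URL into its URI parts: protocol://host:port/path?param1&param2.
# The address is hypothetical.
uri = URI("https://api.example.com:8443/v1/accounts?limit=10&offset=20")

puts uri.scheme  # => "https"
puts uri.host    # => "api.example.com"
puts uri.port    # => 8443
puts uri.path    # => "/v1/accounts"
puts URI.decode_www_form(uri.query).to_h # query parameters as a hash
```

Once the parts are separated like this, generating variations (wrong port, missing path segment, dropped parameter) is a matter of reassembling the pieces.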
I do not consider myself a performance tester, nor do I consider myself a coder. Even with my limitations in both, I can design some rudimentary multi-threading well enough to validate that the system under test (SUT) handles multiple concurrent clients. To do this, I create work flow model-based scripts from the variety of REST calls that could be made by a client. I look for the following kinds of problems:
- System failure – it falls down from a load and needs to be rescued by the admin.
- Timing issues – the behavior is unexpectedly different because the SUT threading fails to pass data correctly. I find this more often when there are load-balanced servers running.
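The rudimentary multi-threading can be as simple as the sketch below: several threads each run the same workflow of (stubbed) REST calls, and any unexpected response lands in a shared queue for review afterward. The workflow and client here are stand-ins.

```ruby
# Collect failures from all threads in one thread-safe place.
ERRORS = Queue.new

# Stand-in workflow: create -> read -> delete, as a real client might.
def workflow(client_id)
  %w[POST GET DELETE].each do |op|
    # A real script would issue the REST call here and capture its status.
    response_code = 200
    unless response_code == 200
      ERRORS << "client #{client_id}: #{op} returned #{response_code}"
    end
  end
end

# Five concurrent "clients" exercising the SUT at once.
threads = 5.times.map { |i| Thread.new { workflow(i) } }
threads.each(&:join)

puts ERRORS.empty? ? "no concurrency failures" : "#{ERRORS.size} failures"
```

Failures that only show up when threads overlap (the timing issues above) are exactly what a single sequential script will never catch.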
Synchronous vs. Asynchronous
Synchronous communications expect an immediate response; asynchronous communications may mean that data will be updated soon, but not right away. A REST service should follow a synchronous model. However, the data flow may include a queue along the way, so the design of the work flow script should account for this.
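Accounting for a queue usually means polling with a timeout instead of asserting immediately after the write. A sketch, with the queue and worker standing in for whatever sits behind the service:

```ruby
# The write is accepted synchronously, but the data lands asynchronously:
# a worker drains the queue into the data store with some delay.
queue = Queue.new
store = {}

worker = Thread.new do
  until (item = queue.pop) == :stop
    sleep 0.05                # simulated processing delay
    store[item[:id]] = item
  end
end

queue << { id: 1, name: "test" } # the "accepted, not yet visible" write

# Poll until the data appears or we give up, rather than asserting at once.
found = 20.times.any? { sleep(0.05); store.key?(1) }

queue << :stop
worker.join

puts found ? "data arrived" : "timed out waiting"
```

The poll-with-timeout shape keeps the script honest: it neither fails on the normal queue delay nor hangs forever when the data truly never arrives.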
Even when a web service is written in another technology, such as Java, I still go back to creating test capability using Ruby. It’s so quick and convenient that I cannot resist. The return on investment is high because I am finding issues quickly.