# Collection Runner
The Collection Runner is a powerful feature in Boomerang that enables you to execute a series of API requests in a specific sequence. It's particularly useful for complex testing scenarios, API workflows, and data-driven testing.
## Understanding the Runner
The Runner executes a collection of requests in order, automating your API testing workflows. For example, you might want to test a complete user management flow by running a sequence of requests that create a user, retrieve their details, update their information, and finally delete the user.
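For instance, that flow might map to a request sequence like the one below. The endpoints are hypothetical placeholders for illustration, not part of Boomerang:

```
POST   /users       create a user
GET    /users/1001  retrieve their details
PUT    /users/1001  update their information
DELETE /users/1001  delete the user
```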
The Runner supports data-driven testing through CSV files, allowing you to run the same requests with different data sets. This capability is invaluable when you need to test your APIs with multiple combinations of input data.
## Starting a Collection Run
Access the Collection Runner by clicking the play button icon in the left sidebar. The Runner interface displays all requests from your current project in a flat list. You can select specific requests you want to include in your run by checking the boxes next to them. This flexibility allows you to run any combination of requests, regardless of their organization in your project.
Configure your run settings based on your testing requirements:
- Set a delay between requests (useful for rate-limited APIs)
- Configure the iteration count for repeated runs
- Import test data from CSV files
## Working with Data Files
The Collection Runner accepts CSV files for data-driven testing. Your CSV file should include headers that match your variable names:
```csv
userId,username,email
1001,johndoe,john@example.com
1002,janesmith,jane@example.com
```
When you import this file, each row becomes a separate iteration. Variables from your CSV are accessible using the standard syntax: `{{userId}}`, `{{username}}`, and so on.
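For instance, a request in your run can reference those columns in its URL or body. This is a minimal sketch with a hypothetical endpoint; only the `{{...}}` references are resolved from the CSV:

```
POST https://api.example.com/users

{
  "id": "{{userId}}",
  "username": "{{username}}",
  "email": "{{email}}"
}
```

On the first iteration `{{userId}}` resolves to `1001`, and on the second to `1002`.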
## Run Result
When you configure and start a run from the Runner screen, Boomerang automatically opens the Run Result tab to show the execution progress. You'll see each request being executed in real-time, with key metrics displayed at the top of the screen:
- Number of iterations completed
- Total duration of the run
- Average response time across all requests
For each iteration, the Run Result screen displays:
- Request method (GET, POST, PUT, etc.) and name
- Response status code
- Response time in milliseconds
The interface provides four tabs for analyzing each request:
- Request: Shows the request body that was sent to the API
- Response: Displays the response received from the API in a formatted view
- Headers: Lists all request and response headers exchanged during the API call
- Captures: Shows any data captured using `bg.capture()` or `bg.captureGroup()` in your scripts (see the sketch below)
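As an illustration, a post-response script might capture values for the Captures tab. The argument shapes below are assumptions, not a confirmed signature; check Boomerang's scripting reference for the exact forms of `bg.capture()` and `bg.captureGroup()`:

```javascript
// Post-response script. Assumes bg.capture(label, value) and
// bg.captureGroup(label, object); verify against the scripting reference.
const body = bg.response.json();
bg.capture("newUserId", body.userId);
bg.captureGroup("userRecord", { id: body.userId, email: body.email });
```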
Once the run completes, you can use the "Run Again" button to re-execute the same set of requests with the same configuration.
## Advanced Usage
### Request Dependencies
When testing workflows where requests depend on previous responses, use scripts to manage these dependencies:
```javascript
// In your login request's post-response script
const response = bg.response.json();
bg.globals.set("authToken", response.token);
bg.globals.set("userId", response.userId);

// Later requests can use {{authToken}} and {{userId}}
```
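A later request in the run can then reference those globals in its URL or headers. The endpoint below is a hypothetical placeholder:

```
GET https://api.example.com/users/{{userId}}
Authorization: Bearer {{authToken}}
```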
### Environment Selection
The Collection Runner uses your currently selected environment variables during execution. Ensure you've selected the correct environment before starting your run to test against the intended API endpoints.
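For example, if each environment defines a (hypothetical) `baseUrl` variable, the same request targets whichever host the active environment specifies:

```
GET {{baseUrl}}/users
```

Switching from a staging environment to a production one changes `baseUrl`, and therefore the target host, without editing the request.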
### Result Analysis
After completing a run, the Collection Runner provides comprehensive statistics including:
- Total requests executed
- Success/failure rates
- Average response times
- Test result summaries
- Captured data from response scripts
## Best Practices
Write descriptive test names in your scripts to make failure analysis more straightforward:
```javascript
// Good example
bg.test("User creation returns valid ID and timestamp", () => {
  const response = bg.response.json();
  bg.expect(response.userId).to.be.a("string");
  bg.expect(response.createdAt).to.match(/^\d{4}-\d{2}-\d{2}/);
});
```
Keep your CSV test data focused and maintainable: include only the columns you need and keep data formats consistent across rows.
## Troubleshooting
If you encounter issues during collection runs:
Verify your environment variables are correctly defined and the right environment is selected. Many issues stem from missing or incorrect environment variables.
Check that your CSV headers match your variable references exactly; a header that differs from the name inside `{{...}}` will leave that variable unresolved.
Review request dependencies to ensure data flow between requests is properly handled in your scripts.
For rate-limited APIs, increase the delay between requests if you're experiencing timeouts or receiving 429 Too Many Requests errors.
Use your browser's developer tools to monitor network requests and responses during the run. The Network tab provides detailed information about each request, including headers, payload, and response data, which is invaluable for debugging.
The Collection Runner is a vital tool for automated API testing in Boomerang. By understanding its features and following these best practices, you can create reliable and maintainable API test suites.