Fusillade Documentation
High-performance load testing engine built in Rust
Fusillade is a modern load testing platform that combines the raw performance of Rust with the developer-friendly scripting of JavaScript. It uses OS threads with blocking I/O for predictable latency and supports HTTP/2, WebSocket, gRPC, MQTT, AMQP, SSE, and browser automation.
| Aspect | Details |
|---|---|
| Runtime | Rust + QuickJS (rquickjs) |
| Concurrency | OS Threads (std::thread) |
| HTTP Client | hyper + hyper-rustls (HTTP/1.1 & HTTP/2) |
| Metrics | Network time only (excludes JS overhead) |
| Histograms | HDR histograms (hdrhistogram) |
Installation
# From source
$ cargo install --path .

# Verify installation
$ fusillade --version
$ fusi --version        # Short alias
Quick Start
# test.js
export const options = {
workers: 10,
duration: '30s',
thresholds: {
'http_req_duration': ['p95 < 500'],
'http_req_failed': ['rate < 0.01'],
}
};
export default function() {
const res = http.get('https://api.example.com/users');
check(res, {
'status is 200': (r) => r.status === 200,
'protocol is HTTP/2': (r) => r.proto === 'h2',
});
sleep(1);
}

# Run
$ fusillade run test.js
$ fusi run test.js      # Short alias
Configuration Options
Configure via export const options in your script.
| option | type | description | example |
|---|---|---|---|
| workers | number | Concurrent virtual users (VUs) | 10 |
| duration | string | Test duration | "30s", "1m" |
| stages | array | Ramping schedule (target VUs over time) | [{duration: "10s", target: 50}] |
| thresholds | object | Pass/fail criteria for CI/CD | {"http_req_duration": ["p95 < 500"]} |
| iterations | number | Fixed iterations per worker (then exit) | 10 |
| warmup | string | URL for connection pool warmup | "https://api.example.com" |
| stop | string | Grace period for active iterations to finish at shutdown (default: 30s) | "30s" |
| stack_size | number | Worker thread stack (bytes, default 256KB) | 524288 |
| min_iteration_duration | string | Minimum iteration time (rate limiting) | "1s" |
| jitter | string | Add artificial latency (chaos) | "500ms" |
| drop | number | Drop probability 0.0-1.0 (chaos) | 0.05 |
| scenarios | object | Multiple named scenarios | {browse: {...}} |
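As a sketch, several of these options can be combined in a single script; the warmup URL below is a placeholder and the stage targets are illustrative.
export const options = {
  // Ramping schedule: target VUs over time
  stages: [
    { duration: '30s', target: 50 },
    { duration: '2m', target: 50 },
    { duration: '30s', target: 0 },
  ],
  warmup: 'https://api.example.com',  // pre-warm the connection pool
  min_iteration_duration: '1s',       // enforce at least 1s per iteration (rate limiting)
  stop: '30s',                        // grace period for active iterations at shutdown
  thresholds: {
    'http_req_duration': ['p95 < 500'],
  },
};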
Threshold Syntax
p95<500 - 95th percentile under 500ms
p99<1000 - 99th percentile under 1000ms
avg<200 - Average under 200ms
rate<0.01 - Rate under 1% (for error rates)
count>100 - Count greater than 100
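These forms can be mixed in one thresholds block; a sketch (items_sold is a custom counter like the one shown under Custom Metrics below):
export const options = {
  thresholds: {
    'http_req_duration': ['p95<500', 'avg<200'],  // latency percentile and average
    'http_req_failed': ['rate<0.01'],             // under 1% failed requests
    'items_sold': ['count>100'],                  // custom counter must exceed 100
  },
};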
Configuration Files
Use external YAML or JSON configuration files to separate test logic from test parameters.
# Usage
$ fusillade run script.js --config config/stress-test.yaml
$ fusillade run script.js --config config/stress-test.yaml --workers 100  # CLI overrides

# config/ramping.yaml
schedule:
- duration: 30s
target: 10 # Ramp to 10 VUs
- duration: 1m
target: 10 # Hold at 10 VUs
- duration: 30s
target: 50 # Ramp to 50 VUs
- duration: 2m
target: 50 # Hold at 50 VUs
- duration: 30s
target: 0 # Ramp down
criteria:
http_req_duration:
- p95<500
- avg<200
http_req_failed:
- rate<0.01
Available Templates
local.yaml - Quick smoke tests
ramping.yaml - Gradual load increase
stress-test.yaml - Push beyond capacity
soak-test.yaml - Extended duration (memory leaks)
chaos.yaml - Fault injection
arrival-rate.yaml - Fixed RPS testing
multi-scenario.yaml - Multiple behaviors
Lifecycle Hooks
Optional functions that run once before and after the test.
export function setup() - Runs once before any VUs start. Return value passed to every default() iteration.
export function teardown(data) - Runs once after all VUs finish. Receives the setup() return value.
setup()  →  [VU iterations run in parallel]  →  teardown(data)
   ↓                        ↓                          ↓
1x only        N workers × M iterations            1x only
export function setup() {
// Runs once before test starts
const res = http.post('https://api.example.com/login', JSON.stringify({
username: 'testuser', password: 'secret'
}));
const token = JSON.parse(res.body).token;
return {
authToken: token,
testData: JSON.parse(open('./users.json'))
};
}
export default function(data) {
// Each worker receives setup data
http.get('https://api.example.com/profile', {
headers: { Authorization: 'Bearer ' + data.authToken }
});
}
export function teardown(data) {
// Cleanup after all workers finish
http.post('https://api.example.com/logout', null, {
headers: { Authorization: 'Bearer ' + data.authToken }
});
print('Test completed, cleaned up resources');
}
Multiple Scenarios
Run multiple user types concurrently with independent configurations.
export const options = {
scenarios: {
browse: {
workers: 10,
duration: '30s',
exec: 'browseProducts',
},
checkout: {
workers: 5,
duration: '1m',
exec: 'checkoutFlow',
startTime: '30s', // starts after 30s delay
},
},
};
export function browseProducts() {
http.get('https://api.example.com/products');
sleep(1);
}
export function checkoutFlow() {
http.post('https://api.example.com/checkout', JSON.stringify({item: 1}));
sleep(2);
}
Scenario Options
workers - Workers for this scenario
duration - Duration of this scenario
iterations - Fixed iterations per worker
exec - Function name to call (default: "default")
startTime - Delay before starting (e.g., "30s")
thresholds - Per-scenario pass/fail criteria
stack_size - Worker stack size in bytes
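A sketch of a scenario using the remaining options; the scenario and function names are illustrative, and nesting a thresholds block inside a scenario follows the per-scenario criteria listed above.
export const options = {
  scenarios: {
    smoke: {
      workers: 2,
      iterations: 10,          // each worker runs 10 iterations, then exits
      exec: 'smokeTest',
      startTime: '10s',        // begin 10 seconds into the test
      thresholds: {
        'http_req_duration': ['p95 < 300'],
      },
    },
  },
};
export function smokeTest() {
  http.get('https://api.example.com/health');
  sleep(1);
}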
HTTP API
http.get(url, [options]) - Performs a GET request
http.post(url, body, [options]) - Performs a POST request. Supports multipart if the body contains file markers.
http.put(url, body, [options]) - Performs a PUT request
http.patch(url, body, [options]) - Performs a PATCH request
http.del(url, [options]) - Performs a DELETE request
http.head(url, [options]) - Performs a HEAD request (returns headers only)
http.options(url, [options]) - Performs an OPTIONS request (for CORS preflight)
http.file(path, [filename], [contentType]) - Returns a file marker for multipart uploads
http.request({method, url, body, headers, name, timeout}) - Generic request builder
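A sketch of the less common helpers: a multipart upload via http.file and a generic call via http.request. The endpoints and field names are placeholders, and passing multipart fields as an object body is an assumption based on the file-marker note above.
// Multipart upload: the file marker tells http.post to build a multipart body (assumed object form)
const upload = http.post('https://api.example.com/upload', {
  avatar: http.file('./avatar.png', 'avatar.png', 'image/png'),
  description: 'profile picture',
});

// Generic request builder
const res = http.request({
  method: 'DELETE',
  url: 'https://api.example.com/sessions/123',
  headers: { Authorization: 'Bearer token' },
  name: 'DELETE /sessions/:id',   // aggregate all session deletions under one metric
  timeout: '10s',
});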
Request Options
headers: Object - Custom HTTP headers
name: String - Custom metric tag for aggregating dynamic URLs
tags: Object - Custom tags for filtering/aggregating metrics
timeout: String - Request timeout ("10s", "500ms"). Default: "60s"
Response Object
status: Number - HTTP status code (0 for network/timeout errors)
body: String - Response body as text
headers: Object - Response headers
proto: String - Protocol ("h1" for HTTP/1.x, "h2" for HTTP/2, "h3" for HTTP/3)
// Timings (in milliseconds)
timings.duration: Total request time
timings.blocked: Time waiting for connection slot
timings.connecting: TCP connection time
timings.tls_handshaking: TLS handshake time
timings.sending: Time sending request
timings.waiting: Time to first byte (TTFB)
timings.receiving: Time reading response
Use the name option to group dynamic URLs (e.g., /products/1 → /products/:id) for cleaner metrics.
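For example, a sketch that groups per-ID product URLs under one metric name and adds a tag and timeout (utils.randomInt is described under Standard Library; the URL is a placeholder):
const id = utils.randomInt(1, 1000);
const res = http.get(`https://api.example.com/products/${id}`, {
  name: 'GET /products/:id',       // all product requests aggregate under one metric series
  tags: { endpoint: 'products' },  // custom tag for filtering metrics
  timeout: '5s',                   // override the 60s default
  headers: { Accept: 'application/json' },
});
if (res.proto !== 'h2') {
  print('Negotiated protocol: ' + res.proto);
}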
Checks & Utility Functions
check(value, assertions) - Verifies boolean conditions. Failed checks are recorded as metrics. Returns true if all pass.
sleep(seconds) - Pauses the virtual user (fractional seconds supported).
print(message) - Logs a message to stdout and the worker logs file.
open(path) - Reads a file from the filesystem. Only available during initialization, not inside default().
check(res, {
'status is 200': (r) => r.status === 200,
'body contains user': (r) => r.body.includes('user'),
'latency < 500ms': (r) => r.timings.duration < 500,
});
sleep(1); // Wait 1 second
sleep(0.5); // Wait 500ms
print('Current iteration complete');
// Load data at init time (outside default function)
const data = JSON.parse(open('./test-data.json'));
Request Grouping (segment)
Groups requests under a named category for cleaner metrics. Nested segments create hierarchical names.
segment(name, fn) - Groups requests under a named category for metrics reporting.
segment('Login Flow', () => {
http.post('/login', credentials);
segment('Dashboard', () => {
http.get('/dashboard'); // Metric: "Login Flow::Dashboard::/dashboard"
});
});
Environment & Context
__ENV - Object containing all environment variables.
__WORKER_ID - Current worker's numeric ID (0-indexed). For partitioning data.
__SCENARIO - Name of the currently executing scenario (in multi-scenario tests).
// Use worker ID to partition test data
const userId = users[__WORKER_ID % users.length];
// Access environment variables
const apiKey = __ENV.API_KEY || 'default-key';
// Check current scenario
if (__SCENARIO === 'checkout') {
// checkout-specific logic
}
Standard Metrics
Automatically collected metrics available for thresholds.
HTTP Timing
http_req_duration - Total request time
http_req_blocked - Time waiting for connection slot
http_req_connecting - TCP connection time
http_req_tls_handshaking - TLS handshake time
http_req_sending - Time sending request body
http_req_waiting - Time to first byte (TTFB)
http_req_receiving - Time reading response body
Throughput & Data
http_reqs - Total number of HTTP requests
http_req_failed - Failed requests (non-2xx/3xx or network error)
data_sent - Total bytes sent
data_received - Total bytes received
Execution
vus - Number of active virtual users
iterations - Number of completed script iterations
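Since these metrics are available for thresholds, the timing sub-metrics can be targeted directly; a sketch with illustrative limits:
export const options = {
  thresholds: {
    'http_req_waiting': ['p95 < 300'],          // time to first byte
    'http_req_tls_handshaking': ['avg < 100'],  // TLS setup cost
    'http_req_failed': ['rate < 0.01'],         // non-2xx/3xx or network errors
  },
};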
Custom Metrics
Define business-level metrics beyond standard HTTP latency.
metrics.histogramAdd(name, value) - Tracks a distribution (min, max, avg, p95, p99). For timings.
metrics.counterAdd(name, value) - Cumulative sum. For counting events.
metrics.gaugeSet(name, value) - Stores the most recent value. For current state.
metrics.rateAdd(name, success) - Tracks success rate as a percentage.
stats.get(name) - Queries current stats. Returns {p95, p99, avg, count, min, max}.
export const options = {
thresholds: {
'checkout_duration': ['p95 < 500'],
'items_sold': ['count > 100'],
'payment_success': ['rate > 0.99'],
}
};
export default function() {
let start = Date.now();
// ... business logic ...
metrics.histogramAdd('checkout_duration', Date.now() - start);
metrics.counterAdd('items_sold', 3);
metrics.rateAdd('payment_success', true);
metrics.gaugeSet('queue_depth', 42);
// Query stats programmatically
const s = stats.get('checkout_duration');
if (s.p95 > 500) {
print('Warning: P95 latency exceeds 500ms');
}
}
Standard Library
Built-in globals for common operations.
crypto - Hashing and HMAC
crypto.md5(data) - Returns MD5 hash as a hex string
crypto.sha1(data) - Returns SHA1 hash as a hex string
crypto.sha256(data) - Returns SHA256 hash as a hex string
crypto.hmac(algorithm, key, data) - Returns HMAC using md5/sha1/sha256
encoding - Base64
encoding.b64encode(data) - Encode a string to base64
encoding.b64decode(data) - Decode base64 to a string
utils - Random Data
utils.uuid() - Generate a UUID v4 string
utils.randomInt(min, max) - Random integer in range [min, max]
utils.randomString(length) - Random alphanumeric string
utils.randomItem(array) - Pick a random element from an array
const hash = crypto.sha256('password');
const token = encoding.b64encode('user:pass');
const id = utils.uuid();
const num = utils.randomInt(1, 100);
const user = utils.randomItem(users);
Unit Testing
Built-in testing framework for verifying logic before running load tests.
describe(name, fn) - Groups related tests
test(name, fn) - Defines a test case
expect(value).toBe(expected) - Strict equality check
expect(value).toEqual(expected) - Deep equality check (via JSON)
expect(value).toBeTruthy() - Checks if the value is truthy
describe("Cart Logic", () => {
test("calculates total correctly", () => {
const total = calculateTotal(100, 2);
expect(total).toBe(200);
});
test("returns object with items", () => {
const cart = getCart();
expect(cart).toEqual({ items: [], total: 0 });
});
});
WebSocket
ws.connect(url) - Opens a WebSocket connection, returns a socket object
socket.send(text) - Sends a text message
socket.recv() - Blocking receive (returns text, binary as string, or null on close)
socket.close() - Closes the connection
const socket = ws.connect('wss://echo.websocket.org');
socket.send('Hello');
const response = socket.recv();
print(response);
socket.close();
gRPC
new GrpcClient() - Creates a new gRPC client
.load(files, includes) - Loads Protobuf definitions from .proto files
.connect(url) - Connects to the gRPC server
.invoke(method, payload) - Performs a unary RPC call. Method format: package.Service/Method
const client = new GrpcClient();
client.load(['./protos/hello.proto'], ['./protos']);
client.connect('http://localhost:50051');
const response = client.invoke('helloworld.Greeter/SayHello', {
name: 'World'
});
print(response.message);
Server-Sent Events (SSE)
sse.connect(url) - Opens an SSE connection
client.recv() - Blocking receive (returns {event, data, id} or null)
client.close() - Closes the stream
client.url - Property containing the connection URL
const client = sse.connect('https://api.example.com/events');
for (let i = 0; i < 5; i++) {
const event = client.recv();
if (event) {
print(event.data);
// event = { event: 'update', data: '...', id: '1' }
}
}
client.close();
MQTT & AMQP
MQTT
new JsMqttClient() - Creates a new MQTT client
.connect(host, port, clientId) - Connect to the MQTT broker
.publish(topic, payload) - Publish a message to a topic
.close() - Close the connection
const mqtt = new JsMqttClient();
mqtt.connect('localhost', 1883, 'test-client');
mqtt.publish('sensors/temp', '22.5');
mqtt.close();
AMQP (RabbitMQ)
new JsAmqpClient() - Creates a new AMQP client
.connect(url) - Connect to the AMQP broker (e.g., amqp://localhost)
.publish(exchange, routingKey, payload) - Publish a message
.close() - Close the connection
const amqp = new JsAmqpClient();
amqp.connect('amqp://localhost');
amqp.publish('', 'my-queue', JSON.stringify({ event: 'order.created' }));
amqp.close();
Browser Automation
Native headless Chromium support for end-to-end testing.
chromium.launch() - Launches a new headless browser instance
browser.newPage() - Opens a new tab/page
browser.close() - Closes the browser
page.goto(url) - Navigates to a URL and waits for load
page.content() - Returns the HTML content of the page
page.click(selector) - Clicks an element matching the CSS selector
page.type(selector, text) - Types text into an input element
page.evaluate(script) - Executes JavaScript in the page context
page.metrics() - Returns performance timing metrics
page.screenshot() - Captures a PNG screenshot (auto-captured on check() failures)
export default function() {
const browser = chromium.launch();
const page = browser.newPage();
page.goto('https://example.com/login');
page.type('#username', 'testuser');
page.type('#password', 'secret');
page.click('#submit');
const html = page.content();
const title = page.evaluate('document.title');
const perf = page.metrics();
print(`DOM interactive: ${perf.domInteractive - perf.navigationStart}ms`);
const png = page.screenshot();
browser.close();
}
CLI Reference
fusillade run <script>
Execute a load test script
-w, --workers <N> Override concurrent workers
-d, --duration <D> Override test duration
--config <FILE> Use YAML/JSON config file
--json Output NDJSON to stdout
--export-json <FILE> Save summary to JSON file
--export-html <FILE> Save summary to HTML file
--out <CONFIG> Output (otlp=URL, csv=FILE, junit=FILE)
--headless No TUI (for CI/CD)
--jitter <DURATION> Chaos: Add artificial latency
--drop <RATE> Chaos: Drop probability (0.0-1.0)
--estimate-cost [LIMIT] Estimate bandwidth costs
-i, --interactive Enable interactive control
fusillade types -o index.d.ts
Generate TypeScript definitions for IDE support
fusillade schema -o config.json
Generate JSON Schema for config validation
fusillade record -o <file.js> -p <port>
Start HTTP proxy to record traffic (default port: 8085)
fusillade convert --input <file.har> --output <file.js>
Convert HAR file to Fusillade scenario
fusillade worker --listen 0.0.0.0:8080
Start distributed worker node
fusillade worker --connect controller:9001
Connect worker to a controller
fusillade controller --listen 0.0.0.0:9000
Start controller with web dashboard
fusillade replay <errors.json> [--parallel]
Re-execute failed requests from error log
fusillade export <errors.json> --format curl
Export failed requests as cURL commands
fusillade exec <script>
Execute a JavaScript snippet directly (debugging)
Interactive Flight Control
Control your load test in real-time with the --interactive flag.
# Usage
$ fusillade run test.js --interactive -w 10 -d 5m

| Command | Aliases | Description |
|---|---|---|
| ramp <N> | scale | Dynamically scale workers to N |
| pause | | Pause all worker execution |
| resume | unpause | Resume execution after pause |
| tag <k>=<v> | | Inject custom tag into metrics |
| status | stats | Print current test status |
| stop | quit, exit | Graceful shutdown |
Distributed Mode
Scale horizontally across multiple machines or Kubernetes pods.
# Start workers
# On each worker node
$ fusillade worker --listen 0.0.0.0:8080
# Or connect to controller
$ fusillade worker --connect controller:9001

# Start controller
$ fusillade controller --listen 0.0.0.0:9000
# Dashboard at http://localhost:9000/dashboard

# Dispatch test via API
curl -X POST http://controller:9000/api/dispatch \
-H "Content-Type: application/json" \
-d '{
"script_content": "export default function() { http.get(...) }",
"config": { "vus": 10, "duration_secs": 60 }
}'
Controller API Endpoints
GET / - Controller info page
GET /dashboard - Real-time metrics dashboard
GET /api/stats - Current test statistics (JSON)
GET /api/workers - List connected workers
POST /api/dispatch - Dispatch test to workers
GET /api/history - Past test runs (from SQLite)
GET /api/screenshots - List saved screenshots
Assets (local imports, data files) are automatically bundled and distributed to workers.
Kubernetes Deployment
Deploy on Kubernetes for scalable, distributed load testing.
# Quick Start
# Create namespace
kubectl apply -f k8s/namespace.yaml
# Deploy controller and workers
kubectl apply -f k8s/controller.yaml -n fusillade
kubectl apply -f k8s/worker.yaml -n fusillade
# Access dashboard
kubectl port-forward svc/fusillade-controller 9000:9000 -n fusillade
Architecture
┌─────────────────────────────────────┐
│         Controller (:9000)          │
│   Dashboard • Orchestration • DB    │
└──────────────────┬──────────────────┘
                   │ gRPC (9001)
        ┌──────────┼──────────┐
        ▼          ▼          ▼
    ┌────────┐ ┌────────┐ ┌────────┐
    │ Worker │ │ Worker │ │ Worker │
    └────────┘ └────────┘ └────────┘
           HPA (3-50 pods)
Auto-Scaling
HorizontalPodAutoscaler scales workers 3 → 50 pods based on:
CPU utilization (target: 70%)
Memory utilization (target: 80%)
Scale-up: 5 pods per 30s (aggressive)
Scale-down: 2 pods per 60s (conservative)
Chaos Engineering
Test system resilience with built-in fault injection.
# CLI flags
# Add 500ms jitter and drop 5% of requests
$ fusillade run test.js --jitter 500ms --drop 0.05

# In script options
export const options = {
workers: 10,
duration: '30s',
jitter: '500ms', // Add artificial latency
drop: 0.05, // Drop 5% of requests
};
Behavior
jitter - Adds artificial delay before each request. Simulates network latency.
drop - Randomly fails requests with status 0 and error "Simulated network drop".
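When fault injection is enabled, dropped requests surface with status 0, so checks can account for them explicitly. A sketch (the URL is a placeholder and simulated_drops is a hypothetical custom counter):
export default function() {
  const res = http.get('https://api.example.com/users');
  const dropped = res.status === 0;
  check(res, {
    'not dropped': (r) => r.status !== 0,
    'status is 200 when delivered': (r) => dropped || r.status === 200,
  });
  if (dropped) {
    metrics.counterAdd('simulated_drops', 1);  // count injected failures separately
  }
  sleep(1);
}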
Cost Estimation
Estimate data transfer costs before running large-scale tests.
# Usage
# Basic estimation (default $10 warning threshold)
$ fusillade run heavy_test.js --estimate-cost
# Custom threshold ($50)
$ fusillade run heavy_test.js --estimate-cost 50
How It Works
1. Runs brief dry run (5 iterations, ~3 seconds)
2. Calculates average request rate and response size
3. Extrapolates to configured duration and workers
4. Shows estimated data transfer (GB) and AWS egress cost
# Example output
📊 Cost Estimation
------------------------------------
Est. Requests: ~12500
Est. Data Transfer: 2.45 GB
Est. AWS Cost: ~$0.22 (at $0.09/GB)
------------------------------------
Proceed? [y/N]
Error Replay
Failed requests are logged to fusillade-errors.json (JSONL format) for debugging.
# Export to cURL
$ fusillade export fusillade-errors.json --format curl
# Output:
# Request 1 - POST https://api.example.com/checkout (Status: 500)
curl -X POST \
-H "Content-Type: application/json" \
-d '{"item":"1234"}' \
'https://api.example.com/checkout'

# Replay failed requests
# Sequential (default: 100ms delay between requests)
$ fusillade replay fusillade-errors.json
# Parallel execution
$ fusillade replay fusillade-errors.json --parallel