API Reference
Everything you need to integrate Deadpipe into your pipelines and applications.
Overview
Deadpipe offers two powerful monitoring solutions. Use one or both depending on your needs.
Frontend Monitor
Drop-in JavaScript snippet that automatically tracks all API calls, errors, and performance metrics from your web app.
Best for: web applications, SPAs, dashboards

Pipeline Heartbeat
Simple API to send heartbeats from your ETL jobs, cron tasks, or any scheduled process.
Best for: ETL pipelines, cron jobs, data sync

Frontend Monitor
The Frontend Monitor is a lightweight JavaScript snippet that automatically tracks all API calls, errors, and performance metrics from your web application. Drop it in and get instant visibility.
What Gets Tracked Automatically
API & Network
- All fetch() requests
- All XMLHttpRequest calls
- WebSocket connections & messages
- Response times & status codes
- Slow request detection
- Network status changes (online/offline)
Errors
- Uncaught JavaScript errors
- Unhandled Promise rejections
- Console errors & warnings
- Resource loading failures (images, scripts)
- CSP violations
- Stack traces (truncated)
Web Vitals
- LCP - Largest Contentful Paint
- FID - First Input Delay
- CLS - Cumulative Layout Shift
- INP - Interaction to Next Paint
- TTFB - Time to First Byte
- FCP - First Contentful Paint
Performance
- Page load timing breakdown
- Long tasks blocking the main thread
- P50/P95/P99 latency percentiles
- Per-endpoint statistics
- Connection quality (effective type, RTT)
- Session tracking
1. Enable via Script Tag
The easiest way to enable monitoring. Just add this script tag to your HTML <head> or before </body>:
```html
<script
  src="https://deadpipe.com/monitor.js"
  data-key="dp_your_api_key"
  data-app="my-app-name">
</script>
```

2. React / Next.js Integration
For React or Next.js apps, add the script using the Script component:
```tsx
// app/layout.tsx or pages/_app.tsx
import Script from 'next/script'

export default function RootLayout({ children }) {
  return (
    <html>
      <head>
        <Script
          src="https://deadpipe.com/monitor.js"
          data-key={process.env.NEXT_PUBLIC_DEADPIPE_KEY}
          data-app="my-nextjs-app"
          strategy="afterInteractive"
        />
      </head>
      <body>{children}</body>
    </html>
  )
}
```

Configuration Options
| Attribute | Default | Description |
|---|---|---|
| data-key | required | Your Deadpipe API key (starts with dp_) |
| data-app | hostname | Application name for grouping events |
| data-track-fetch | true | Enable/disable fetch/XHR tracking |
| data-track-errors | true | Enable/disable JS error tracking |
| data-track-vitals | true | Enable/disable Core Web Vitals tracking |
| data-track-websocket | true | Enable/disable WebSocket tracking |
| data-track-resources | true | Enable/disable resource error tracking |
| data-track-long-tasks | true | Enable/disable long task detection |
| data-track-console | true | Enable/disable console.error tracking |
| data-track-network | true | Enable/disable network status tracking |
| data-sample-rate | 1 | Sample rate 0-1 (0.5 = 50% of requests) |
| data-batch-interval | 5000 | How often to send batched events (ms) |
| data-slow-threshold | 3000 | Mark requests slower than this as 'slow' (ms) |
| data-long-task-threshold | 50 | Report tasks longer than this (ms) |
| data-ignore | (none) | Comma-separated URL patterns to ignore |
Example: Production Configuration
A typical production setup with sampling and ignored endpoints:
```html
<script
  src="https://deadpipe.com/monitor.js"
  data-key="dp_prod_xxxxxxxxxxxxx"
  data-app="production-frontend"
  data-sample-rate="0.1"
  data-slow-threshold="2000"
  data-ignore="/health,/metrics,analytics.google.com">
</script>
```

This configuration samples 10% of requests, marks anything over 2 seconds as slow, and ignores health checks and Google Analytics.
Verify It's Working
Make some API calls in your app and check your Dashboard to see them appear in real-time.
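If you'd rather not wait for organic traffic, here is a quick sketch for generating test data from the browser console (the /api/users URL is just a placeholder; track and flush are documented under Monitor SDK below):

```js
// Any network request is captured automatically once the snippet is loaded
fetch('/api/users');

// Or send an explicit test event and push the queue immediately
Deadpipe.track('deadpipe_test', { source: 'setup_verification' });
Deadpipe.flush();
```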
Monitor SDK
The Frontend Monitor exposes a global Deadpipe object for manual tracking beyond automatic API interception.
Track Custom Events
Track user interactions, feature usage, or any custom event.
```js
Deadpipe.track('button_click', {
  button: 'checkout',
  value: 99.99
});
```

Track Metrics
Record numerical metrics with optional tags.
```js
Deadpipe.metric('page_load_time', 1250, {
  page: '/checkout'
});
```

Browser Heartbeat
Send a heartbeat directly from the browser for client-side jobs.
```js
// Simple heartbeat
await Deadpipe.heartbeat('browser-sync', 'success');

// With additional data
await Deadpipe.heartbeat('data-export', 'success', {
  records_processed: 500
});
```

Pipeline Wrapper
Wrap an async function to automatically track its success/failure and duration.
```js
// Automatically sends heartbeat on success or failure
const result = await Deadpipe.pipeline('data-sync', async () => {
  const data = await fetchData();
  await processData(data);
  return { records_processed: data.length };
});
```

Get Current Stats
Retrieve aggregated statistics for API calls, WebSockets, vitals, and network.
```js
const stats = Deadpipe.getStats();
// {
//   api: { "/api/users": { calls: 50, errors: 2, avgDuration: 150, p95: 400 } },
//   websocket: { "wss://api.example.com": { connections: 1, messages: 100 } },
//   vitals: { LCP: 1200, FID: 50, CLS: 0.05 },
//   network: { online: true, effectiveType: "4g" }
// }
```

Get Web Vitals
Get the current Core Web Vitals measurements.
```js
const vitals = Deadpipe.getVitals();
// { LCP: { value: 1200, rating: "good" }, FID: { value: 50, rating: "good" }, ... }
```

Network Status
Check the current network status and connection quality.
```js
// Check if online
const isOnline = Deadpipe.isOnline(); // true/false

// Get connection details
const connection = Deadpipe.getConnection();
// { effectiveType: "4g", downlink: 10, rtt: 50, saveData: false }
```

Force Flush
Immediately send all queued events (useful before page navigation).
```js
Deadpipe.flush();
```
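A common pattern is to flush before the page goes away; this is a suggestion rather than a documented requirement (pagehide is a standard browser event that fires more reliably than unload, especially on mobile):

```js
// Send anything still queued before the user navigates away
window.addEventListener('pagehide', () => {
  Deadpipe.flush();
});
```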
Pipeline Heartbeat
Send heartbeats from your backend jobs to track their health. Perfect for ETL pipelines, cron jobs, data sync tasks, or any scheduled process.
How It Works
1. Add a heartbeat call at the end of your job
2. Deadpipe tracks when each pipeline last checked in
3. If a heartbeat is missed, you get alerted
cURL Example
Add this to the end of your shell script or cron job:
```bash
curl -X POST https://deadpipe.com/api/v1/heartbeat \
  -H "Content-Type: application/json" \
  -H "X-API-Key: dp_your_api_key" \
  -d '{"pipeline_id": "my-etl-job", "status": "success"}'
```

Python Example
```python
import requests

def send_heartbeat(pipeline_id, status, records=None, duration_ms=None, app_name=None):
    payload = {
        "pipeline_id": pipeline_id,
        "status": status,  # "success" or "failed"
        "records_processed": records,
        "duration_ms": duration_ms,
    }
    if app_name:
        payload["app_name"] = app_name
    # Drop optional fields that weren't provided instead of sending nulls
    payload = {k: v for k, v in payload.items() if v is not None}
    requests.post(
        "https://deadpipe.com/api/v1/heartbeat",
        json=payload,
        headers={"X-API-Key": "dp_your_api_key"},
    )

# Basic usage:
send_heartbeat("daily-sales-etl", "success", records=15420, duration_ms=45000)

# With app-specific scoping:
send_heartbeat("daily-sync", "success", app_name="billing-service")
```

Node.js Example
```js
async function sendHeartbeat(pipelineId, status, data = {}) {
  await fetch('https://deadpipe.com/api/v1/heartbeat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.DEADPIPE_API_KEY
    },
    body: JSON.stringify({
      pipeline_id: pipelineId,
      status,
      ...data
    })
  });
}

// Basic usage:
await sendHeartbeat('data-sync', 'success', { records_processed: 1000 });

// With app-specific scoping:
await sendHeartbeat('daily-sync', 'success', {
  app_name: 'orders-service',
  records_processed: 5000
});
```

Request Parameters
| Field | Type | Description |
|---|---|---|
| pipeline_id* | string | Unique identifier for your pipeline (name) |
| status* | string | 'success' or 'failed' |
| app_name | string | Optional app name for grouping pipelines by service |
| records_processed | number | Optional count of records processed |
| duration_ms | number | Optional execution time in milliseconds |
App-Specific Pipelines
Use the app_name parameter to organize pipelines by service or application. This is useful when you have multiple services with similarly-named pipelines.
```js
// billing-service can have its own "daily-sync" pipeline
{ "pipeline_id": "daily-sync", "app_name": "billing-service", "status": "success" }

// orders-service can also have a "daily-sync" pipeline - no collision!
{ "pipeline_id": "daily-sync", "app_name": "orders-service", "status": "success" }
```

Pipelines are automatically scoped to your account. You can also create pipelines in the Dashboard first, then send heartbeats using the pipeline name.
Auto-Creation
Pipelines are automatically created on the first heartbeat if they don't exist. They'll be created with default settings (24h expected interval) which you can customize in the Dashboard.
Data Isolation
All your data is completely isolated to your account. Your pipelines, API keys, and alerts are only visible and accessible to you.
How Isolation Works
API Keys
Each API key is tied to your account. When you send a heartbeat, we identify you via your API key and scope all data accordingly.
Pipelines
Pipeline names are unique within your account. Two different users can have pipelines named "daily-sync" without any collision.
App Scoping
Within your account, you can further scope pipelines to specific apps. This allows "orders-service" and "billing-service" to each have their own "daily-sync" pipeline.
Pipeline Naming
Pipelines can be identified in three ways:
| Scope | Example | Use Case |
|---|---|---|
| Account-wide | { "pipeline_id": "main-etl" } | Simple setup with few pipelines |
| App-specific | { "pipeline_id": "sync", "app_name": "billing" } | Multiple services with similar pipeline names |
| UUID (Dashboard) | { "pipeline_id": "abc-123-def" } | Using the pipeline ID from the Dashboard |
Dashboard + Heartbeat Integration
You can create pipelines either in the Dashboard first or let them auto-create on first heartbeat. Either way, when you send a heartbeat with a pipeline_id, we'll match it to an existing pipeline by name (within your account and optional app scope) or create a new one.
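To make the matching concrete, here is a short sketch reusing the sendHeartbeat helper from the Node.js example above (abc-123-def is the placeholder UUID from the table, not a real ID):

```js
// By name: matched within your account (and app scope), auto-created if missing
await sendHeartbeat('daily-sync', 'success', { app_name: 'billing-service' });

// By Dashboard UUID: matched directly to the existing pipeline
await sendHeartbeat('abc-123-def', 'success');
```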
Rate Limits & Security
All API endpoints are protected with rate limiting, payload validation, and abuse detection to ensure fair usage and protect against attacks.
Rate Limits by Endpoint
| Endpoint | Limit | Window | Block Duration |
|---|---|---|---|
| /api/v1/monitor/events | 100 requests | 1 minute | 5 minutes |
| /api/v1/heartbeat | 60 requests | 1 minute | 5 minutes |
| /api/v1/monitors | 30 requests | 1 minute | 1 minute |
Rate Limit Headers
All API responses include rate limit headers so you can track your usage:
```http
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1703980800
Retry-After: 60   (only when rate limited)
```

Rate Limit Response
When you exceed the rate limit, you'll receive a 429 Too Many Requests response:
```json
{
  "error": "Rate limit exceeded",
  "retryAfter": 60
}
```
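If you're calling the API directly, a minimal backoff sketch built on these responses might look like the following (the endpoint headers and status codes are from this page; the retry policy itself is just an illustration):

```js
async function postWithBackoff(url, payload, apiKey, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
      body: JSON.stringify(payload)
    });
    if (res.status !== 429) return res;
    // Honor Retry-After (seconds); fall back to 60s if the header is missing
    const retryAfter = Number(res.headers.get('Retry-After')) || 60;
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
  }
  throw new Error('Rate limited: retries exhausted');
}
```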
Automatic Blocking
API keys that repeatedly exceed rate limits (10+ violations in 60 minutes) are temporarily blocked and will receive a 403 Forbidden response. This protects against abuse and ensures fair usage for all users.
Payload Size Limits
To prevent memory exhaustion, all endpoints have payload size limits:
| Endpoint | Max Payload Size |
|---|---|
| /api/v1/monitor/events | 100 KB |
| /api/v1/heartbeat | 10 KB |
| /api/v1/monitors | 10 KB |
Array Limits
The /api/v1/monitor/events endpoint has additional limits on array sizes:
| Field | Max Items |
|---|---|
| events | 100 items per batch |
| stats | 50 endpoints |
| wsStats | 20 WebSocket connections |
The frontend monitor automatically batches events and respects these limits. If you're making direct API calls, ensure your payloads stay within these limits.
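If you do call /api/v1/monitor/events directly, a chunking sketch like this keeps each request within the 100-event batch limit (we assume the endpoint accepts the same X-API-Key header as the heartbeat endpoint, and the payload here is simplified to just the events array):

```js
async function sendEventsInBatches(events, apiKey, batchSize = 100) {
  // Split the event list into batches of at most 100 items each
  for (let i = 0; i < events.length; i += batchSize) {
    const batch = events.slice(i, i + batchSize);
    await fetch('https://deadpipe.com/api/v1/monitor/events', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
      body: JSON.stringify({ events: batch })
    });
  }
}
```

Keep the endpoint's 100-requests-per-minute rate limit in mind if you have many batches to send.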
Alerts
Configure alerts in your Dashboard to get notified when things go wrong.
Pipeline Stale
Get notified if a pipeline misses its expected heartbeat
High Error Rate
Alert when API error rate exceeds your threshold
Slow Responses
Know when P95 latency goes above acceptable limits
Pipeline Failed
Immediate notification when a job reports failure
Notification Channels
Alerts can be delivered via email or browser notifications. Configure your preferred channels in the Dashboard settings.
Ready to get started?
Create your free account and start monitoring in under 5 minutes.