API Reference

Everything you need to integrate Deadpipe into your pipelines and applications.

Overview

Frontend Monitor

The Frontend Monitor is a lightweight JavaScript snippet that automatically tracks all API calls, errors, and performance metrics from your web application. Drop it in and get instant visibility.

What Gets Tracked Automatically

API & Network

  • All fetch() requests
  • All XMLHttpRequest calls
  • WebSocket connections & messages
  • Response times & status codes
  • Slow request detection
  • Network status changes (online/offline)

Errors

  • Uncaught JavaScript errors
  • Unhandled Promise rejections
  • Console errors & warnings
  • Resource loading failures (images, scripts)
  • CSP violations
  • Stack traces (truncated)

Web Vitals

  • LCP - Largest Contentful Paint
  • FID - First Input Delay
  • CLS - Cumulative Layout Shift
  • INP - Interaction to Next Paint
  • TTFB - Time to First Byte
  • FCP - First Contentful Paint

Performance

  • Page load timing breakdown
  • Long tasks blocking the main thread
  • P50/P95/P99 latency percentiles
  • Per-endpoint statistics
  • Connection quality (effective type, RTT)
  • Session tracking

1. Enable via Script Tag

The easiest way to enable monitoring is to add this script tag to your HTML <head> or just before </body>:

<script 
  src="https://deadpipe.com/monitor.js"
  data-key="dp_your_api_key"
  data-app="my-app-name">
</script>

2. React / Next.js Integration

For React or Next.js apps, add the script using the Script component:

// app/layout.tsx or pages/_app.tsx
import Script from 'next/script'

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html>
      <body>
        {children}
        <Script
          src="https://deadpipe.com/monitor.js"
          data-key={process.env.NEXT_PUBLIC_DEADPIPE_KEY}
          data-app="my-nextjs-app"
          strategy="afterInteractive"
        />
      </body>
    </html>
  )
}

Configuration Options

Attribute                 Default    Description
data-key                  required   Your Deadpipe API key (starts with dp_)
data-app                  hostname   Application name for grouping events
data-track-fetch          true       Enable/disable fetch/XHR tracking
data-track-errors         true       Enable/disable JS error tracking
data-track-vitals         true       Enable/disable Core Web Vitals tracking
data-track-websocket      true       Enable/disable WebSocket tracking
data-track-resources      true       Enable/disable resource error tracking
data-track-long-tasks     true       Enable/disable long task detection
data-track-console        true       Enable/disable console.error tracking
data-track-network        true       Enable/disable network status tracking
data-sample-rate          1          Sample rate 0-1 (0.5 = 50% of requests)
data-batch-interval       5000       How often to send batched events (ms)
data-slow-threshold       3000       Mark requests slower than this as 'slow' (ms)
data-long-task-threshold  50         Report tasks longer than this (ms)
data-ignore               (none)     Comma-separated URL patterns to ignore
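The sampling decision presumably amounts to a per-request random draw. A minimal sketch of how data-sample-rate might be applied (the monitor's actual implementation may differ):

```javascript
// Hypothetical sketch of applying a sample rate per request.
// rate = 1 tracks everything, rate = 0 tracks nothing.
// `random` is injectable so the logic can be tested deterministically.
function shouldSample(rate, random = Math.random) {
  const r = Number(rate);
  if (Number.isNaN(r)) return true; // fall back to tracking everything
  return random() < Math.min(Math.max(r, 0), 1);
}
```

With data-sample-rate="0.5", roughly half of requests would pass this check and be reported.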

Example: Production Configuration

A typical production setup with sampling and ignored endpoints:

<script 
  src="https://deadpipe.com/monitor.js"
  data-key="dp_prod_xxxxxxxxxxxxx"
  data-app="production-frontend"
  data-sample-rate="0.1"
  data-slow-threshold="2000"
  data-ignore="/health,/metrics,analytics.google.com">
</script>

This configuration samples 10% of requests, marks anything over 2s as slow, and ignores health checks and Google Analytics.
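The matching semantics of data-ignore aren't fully specified here; a plausible reading is simple substring matching against the request URL. A minimal sketch under that assumption:

```javascript
// Hypothetical sketch: treat each comma-separated pattern as a substring
// to match against the request URL. The real monitor's matching rules
// may differ (e.g. glob or regex support).
function shouldIgnore(url, ignoreList) {
  const patterns = ignoreList
    .split(",")
    .map((p) => p.trim())
    .filter(Boolean);
  return patterns.some((pattern) => url.includes(pattern));
}
```

Under this reading, with data-ignore="/health,/metrics" a request to /api/health would be skipped because the URL contains "/health".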

Verify It's Working

Make some API calls in your app and check your Dashboard to see them appear in real-time.

Monitor SDK

The Frontend Monitor exposes a global Deadpipe object for manual tracking beyond automatic API interception.

Track Custom Events

Track user interactions, feature usage, or any custom event.

Deadpipe.track('button_click', { 
  button: 'checkout', 
  value: 99.99 
});

Track Metrics

Record numerical metrics with optional tags.

Deadpipe.metric('page_load_time', 1250, { 
  page: '/checkout' 
});

Browser Heartbeat

Send a heartbeat directly from the browser for client-side jobs.

// Simple heartbeat
await Deadpipe.heartbeat('browser-sync', 'success');

// With additional data
await Deadpipe.heartbeat('data-export', 'success', { 
  records_processed: 500 
});

Pipeline Wrapper

Wrap an async function to automatically track its success/failure and duration.

// Automatically sends heartbeat on success or failure
const result = await Deadpipe.pipeline('data-sync', async () => {
  const data = await fetchData();
  await processData(data);
  return { records_processed: data.length };
});
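Under the hood, Deadpipe.pipeline presumably times the function and reports success or failure via a heartbeat. A rough sketch of that behavior; the `send` parameter stands in for the real heartbeat call and is an assumption, not the SDK's actual signature:

```javascript
// Sketch of the wrapper's likely behavior: time the job, report
// "success" with its result data, or "failed" and rethrow on error.
// `send` is a stand-in for the real heartbeat sender.
async function pipeline(pipelineId, fn, send) {
  const start = Date.now();
  try {
    const result = await fn();
    await send(pipelineId, "success", { duration_ms: Date.now() - start, ...result });
    return result;
  } catch (err) {
    await send(pipelineId, "failed", { duration_ms: Date.now() - start });
    throw err;
  }
}
```

Rethrowing the error means your own error handling still runs even though the failure has already been reported.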

Get Current Stats

Retrieve aggregated statistics for API calls, WebSockets, vitals, and network.

const stats = Deadpipe.getStats();
// {
//   api: { "/api/users": { calls: 50, errors: 2, avgDuration: 150, p95: 400 } },
//   websocket: { "wss://api.example.com": { connections: 1, messages: 100 } },
//   vitals: { LCP: 1200, FID: 50, CLS: 0.05 },
//   network: { online: true, effectiveType: "4g" }
// }

Get Web Vitals

Get the current Core Web Vitals measurements.

const vitals = Deadpipe.getVitals();
// { LCP: { value: 1200, rating: "good" }, FID: { value: 50, rating: "good" }, ... }

Network Status

Check the current network status and connection quality.

// Check if online
const isOnline = Deadpipe.isOnline(); // true/false

// Get connection details
const connection = Deadpipe.getConnection();
// { effectiveType: "4g", downlink: 10, rtt: 50, saveData: false }

Force Flush

Immediately send all queued events (useful before page navigation).

Deadpipe.flush();

Pipeline Heartbeat

Send heartbeats from your backend jobs to track their health. Perfect for ETL pipelines, cron jobs, data sync tasks, or any scheduled process.

How It Works

  1. Add a heartbeat call at the end of your job
  2. Deadpipe tracks when each pipeline last checked in
  3. If a heartbeat is missed, you get alerted

cURL Example

Add this to the end of your shell script or cron job:

curl -X POST https://deadpipe.com/api/v1/heartbeat \
  -H "Content-Type: application/json" \
  -H "X-API-Key: dp_your_api_key" \
  -d '{"pipeline_id": "my-etl-job", "status": "success"}'

Python Example

import requests

def send_heartbeat(pipeline_id, status, records=None, duration_ms=None, app_name=None):
    payload = {
        "pipeline_id": pipeline_id,
        "status": status,  # "success" or "failed"
    }
    # Only include optional fields that were actually provided
    if records is not None:
        payload["records_processed"] = records
    if duration_ms is not None:
        payload["duration_ms"] = duration_ms
    if app_name:
        payload["app_name"] = app_name

    requests.post(
        "https://deadpipe.com/api/v1/heartbeat",
        json=payload,
        headers={"X-API-Key": "dp_your_api_key"},
        timeout=10,
    )

# Basic usage:
send_heartbeat("daily-sales-etl", "success", records=15420, duration_ms=45000)

# With app-specific scoping:
send_heartbeat("daily-sync", "success", app_name="billing-service")

Node.js Example

async function sendHeartbeat(pipelineId, status, data = {}) {
  await fetch('https://deadpipe.com/api/v1/heartbeat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.DEADPIPE_API_KEY
    },
    body: JSON.stringify({
      pipeline_id: pipelineId,
      status,
      ...data
    })
  });
}

// Basic usage:
await sendHeartbeat('data-sync', 'success', { records_processed: 1000 });

// With app-specific scoping:
await sendHeartbeat('daily-sync', 'success', { 
  app_name: 'orders-service',
  records_processed: 5000 
});

Request Parameters

Field              Type     Description
pipeline_id*       string   Unique identifier for your pipeline (name)
status*            string   'success' or 'failed'
app_name           string   Optional app name for grouping pipelines by service
records_processed  number   Optional count of records processed
duration_ms        number   Optional execution time in milliseconds

* required
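The field list above can be captured in a small payload builder that validates status and omits unset optional fields rather than sending nulls. A sketch, not an official client:

```javascript
// Build a heartbeat payload per the parameter table above.
// Optional fields are omitted rather than sent as null.
function buildHeartbeat({ pipelineId, status, appName, recordsProcessed, durationMs }) {
  if (status !== "success" && status !== "failed") {
    throw new Error(`status must be "success" or "failed", got "${status}"`);
  }
  const payload = { pipeline_id: pipelineId, status };
  if (appName !== undefined) payload.app_name = appName;
  if (recordsProcessed !== undefined) payload.records_processed = recordsProcessed;
  if (durationMs !== undefined) payload.duration_ms = durationMs;
  return payload;
}
```

The resulting object can be passed directly as the JSON body of the POST examples above.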

App-Specific Pipelines

Use the app_name parameter to organize pipelines by service or application. This is useful when you have multiple services with similarly-named pipelines.

// billing-service can have its own "daily-sync" pipeline
{ "pipeline_id": "daily-sync", "app_name": "billing-service", "status": "success" }

// orders-service can also have a "daily-sync" pipeline - no collision!
{ "pipeline_id": "daily-sync", "app_name": "orders-service", "status": "success" }

Pipelines are automatically scoped to your account. You can also create pipelines in the Dashboard first, then send heartbeats using the pipeline name.

Auto-Creation

Pipelines are automatically created on the first heartbeat if they don't exist. They'll be created with default settings (24h expected interval), which you can customize in the Dashboard.

Data Isolation

All your data is completely isolated to your account. Your pipelines, API keys, and alerts are only visible and accessible to you.

How Isolation Works

API Keys

Each API key is tied to your account. When you send a heartbeat, we identify you via your API key and scope all data accordingly.

Pipelines

Pipeline names are unique within your account. Two different users can have pipelines named "daily-sync" without any collision.

App Scoping

Within your account, you can further scope pipelines to specific apps. This allows "orders-service" and "billing-service" to each have their own "daily-sync" pipeline.

Pipeline Naming

Pipelines can be identified in three ways:

Scope             Example                                           Use Case
Account-wide      { "pipeline_id": "main-etl" }                     Simple setup with few pipelines
App-specific      { "pipeline_id": "sync", "app_name": "billing" }  Multiple services with similar pipeline names
UUID (Dashboard)  { "pipeline_id": "abc-123-def" }                  Using the pipeline ID from the Dashboard

Dashboard + Heartbeat Integration

You can create pipelines either in the Dashboard first or let them auto-create on first heartbeat. Either way, when you send a heartbeat with a pipeline_id, we'll match it to an existing pipeline by name (within your account and optional app scope) or create a new one.

Rate Limits & Security

All API endpoints are protected with rate limiting, payload validation, and abuse detection to ensure fair usage and protect against attacks.

Rate Limits by Endpoint

Endpoint                Limit         Window    Block Duration
/api/v1/monitor/events  100 requests  1 minute  5 minutes
/api/v1/heartbeat       60 requests   1 minute  5 minutes
/api/v1/monitors        30 requests   1 minute  1 minute

Rate Limit Headers

All API responses include rate limit headers so you can track your usage:

X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1703980800
Retry-After: 60  (only when rate limited)

Rate Limit Response

When you exceed the rate limit, you'll receive a 429 Too Many Requests response:

{
  "error": "Rate limit exceeded",
  "retryAfter": 60
}
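A client can honor this response by waiting for retryAfter (or the Retry-After header) before trying again. A minimal sketch of the delay calculation; the retry policy itself is up to you:

```javascript
// Decide how long to wait before retrying, preferring the server's
// Retry-After value (in seconds) and falling back to exponential backoff.
function retryDelayMs(retryAfterSeconds, attempt) {
  if (typeof retryAfterSeconds === "number" && retryAfterSeconds > 0) {
    return retryAfterSeconds * 1000;
  }
  // Exponential backoff: 1s, 2s, 4s, ... capped at 60s
  return Math.min(1000 * 2 ** attempt, 60000);
}
```

Capping the backoff keeps a long-running job from sleeping indefinitely while still easing pressure on the API.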

Automatic Blocking

API keys that repeatedly exceed rate limits (10+ violations in 60 minutes) are temporarily blocked and will receive a 403 Forbidden response. This protects against abuse and ensures fair usage for all users.

Payload Size Limits

To prevent memory exhaustion, all endpoints have payload size limits:

Endpoint                Max Payload Size
/api/v1/monitor/events  100 KB
/api/v1/heartbeat       10 KB
/api/v1/monitors        10 KB

Array Limits

The /api/v1/monitor/events endpoint has additional limits on array sizes:

Field    Max Items
events   100 items per batch
stats    50 endpoints
wsStats  20 WebSocket connections

The frontend monitor automatically batches events and respects these limits. If you're making direct API calls, ensure your payloads stay within these limits.
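If you batch events yourself, splitting oversized batches keeps each request under the 100-item limit. A simple chunking sketch:

```javascript
// Split an event array into batches no larger than the API's limit,
// so each batch can be sent as a separate /api/v1/monitor/events request.
function chunkEvents(events, maxPerBatch = 100) {
  const batches = [];
  for (let i = 0; i < events.length; i += maxPerBatch) {
    batches.push(events.slice(i, i + maxPerBatch));
  }
  return batches;
}
```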

Alerts

Configure alerts in your Dashboard to get notified when things go wrong.

Pipeline Stale

Get notified if a pipeline misses its expected heartbeat

High Error Rate

Alert when API error rate exceeds your threshold

Slow Responses

Know when P95 latency goes above acceptable limits

Pipeline Failed

Immediate notification when a job reports failure

Notification Channels

Alerts can be sent via Email or Browser alerts. Configure your preferred channels in the Dashboard settings.

Ready to get started?

Create your free account and start monitoring in under 5 minutes.