Rate Limiting

Fanfare implements rate limiting to ensure fair usage and maintain service stability for all customers.

Rate Limit Types

Per-Organization Limits

Rate limits are applied at the organization level, shared across all API keys for that organization.

Per-Endpoint Limits

Some endpoints have specific rate limits based on their resource intensity:
Endpoint Category              Limit           Window
Authentication (OTP request)   10 requests     1 minute
External Authorization         100 requests    1 minute
Queue Entry                    1000 requests   1 minute
Auction Bids                   500 requests    1 minute
Draw Entry                     1000 requests   1 minute
Admin API (general)            1000 requests   1 minute
Analytics Query                100 requests    1 minute

Rate Limit Headers

Every API response includes rate limit information in the headers:
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1701424860
Header                  Description
X-RateLimit-Limit       Maximum requests allowed in the window
X-RateLimit-Remaining   Remaining requests in the current window
X-RateLimit-Reset       Unix timestamp (in seconds) when the window resets
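
As a convenience, the three headers can be read into a plain object in one place (the helper name and return shape below are illustrative, not part of the Fanfare SDK):

```javascript
// Parse Fanfare rate limit headers from a fetch Response's Headers object.
function parseRateLimitHeaders(headers) {
  return {
    limit: parseInt(headers.get("X-RateLimit-Limit") || "0", 10),
    remaining: parseInt(headers.get("X-RateLimit-Remaining") || "0", 10),
    // The reset header is a Unix timestamp in seconds; convert to milliseconds
    // so it can be compared directly with Date.now().
    resetAt: parseInt(headers.get("X-RateLimit-Reset") || "0", 10) * 1000,
  };
}
```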

Rate Limit Exceeded Response

When you exceed the rate limit, the API returns a 429 Too Many Requests response:
HTTP/1.1 429 Too Many Requests
Retry-After: 30
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1701424860
Content-Type: application/json

{
  "error": "Rate limit exceeded",
  "retryAfter": 30
}

Retry-After Header

The Retry-After header indicates when you can retry:
  • Numeric value: Seconds to wait before retrying
  • HTTP date: Timestamp when you can retry
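
Since the header can take either form, a parser should handle both. A minimal sketch (the function name is our own, not part of any SDK):

```javascript
// Convert a Retry-After header value into milliseconds to wait.
// Accepts either a number of seconds ("30") or an HTTP date.
function retryAfterMs(value, now = Date.now()) {
  if (value == null) return 0;
  const seconds = Number(value);
  if (!Number.isNaN(seconds)) return Math.max(0, seconds * 1000);
  const date = Date.parse(value); // HTTP-date form
  return Number.isNaN(date) ? 0 : Math.max(0, date - now);
}
```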

Handling Rate Limits

Basic Retry Logic

async function apiCallWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const retryAfter = response.headers.get("Retry-After");
      const waitTime = retryAfter ? parseInt(retryAfter, 10) * 1000 : 1000 * Math.pow(2, attempt);

      console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
      await new Promise((resolve) => setTimeout(resolve, waitTime));
      continue;
    }

    return response;
  }

  throw new Error("Max retries exceeded");
}

Proactive Rate Limit Tracking

class RateLimitTracker {
  constructor() {
    this.remaining = Infinity;
    this.resetTime = 0;
  }

  update(headers) {
    this.remaining = parseInt(headers.get("X-RateLimit-Remaining") || "0", 10);
    this.resetTime = parseInt(headers.get("X-RateLimit-Reset") || "0", 10) * 1000;
  }

  async waitIfNeeded() {
    if (this.remaining <= 0 && Date.now() < this.resetTime) {
      const waitTime = this.resetTime - Date.now();
      console.log(`Approaching rate limit. Waiting ${waitTime}ms...`);
      await new Promise((resolve) => setTimeout(resolve, waitTime));
    }
  }
}

Request Queue with Rate Limiting

class RateLimitedQueue {
  constructor(requestsPerMinute = 1000) {
    this.queue = [];
    this.processing = false;
    this.interval = 60000 / requestsPerMinute;
  }

  async add(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const { request, resolve, reject } = this.queue.shift();
      try {
        const result = await request();
        resolve(result);
      } catch (error) {
        reject(error);
      }
      await new Promise((r) => setTimeout(r, this.interval));
    }

    this.processing = false;
  }
}

Burst Limits

In addition to per-minute limits, Fanfare implements burst limits to prevent sudden spikes:
Endpoint Category   Burst Limit    Burst Window
Queue Entry         100 requests   1 second
Auction Bids        50 requests    1 second
Draw Entry          100 requests   1 second
Exceeding burst limits returns the same 429 response.
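
A token bucket is one common way to respect both limits at once: the bucket's capacity matches the burst limit and its refill rate matches the sustained per-minute limit. This sketch is a client-side illustration, not part of the Fanfare SDK:

```javascript
// Token bucket: capacity = burst limit, refill rate = sustained limit.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  // Returns true and deducts tokens if the request may proceed now.
  tryConsume(count = 1) {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= count) {
      this.tokens -= count;
      return true;
    }
    return false;
  }
}
```

For the Queue Entry limits above, `new TokenBucket(100, 1000 / 60)` allows bursts of up to 100 requests while averaging no more than 1000 per minute.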

Best Practices

1. Implement Exponential Backoff

function getBackoffDelay(attempt, baseDelay = 1000) {
  return Math.min(baseDelay * Math.pow(2, attempt), 30000);
}
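
A common refinement (optional, not required by the API) is to add random jitter so that many clients rate-limited at the same moment do not all retry in lockstep:

```javascript
// "Full jitter": pick a uniformly random delay between 0 and the
// exponential cap, rather than the cap itself.
function getJitteredDelay(attempt, baseDelay = 1000, maxDelay = 30000) {
  const cap = Math.min(baseDelay * Math.pow(2, attempt), maxDelay);
  return Math.floor(Math.random() * cap);
}
```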

2. Use Batch Endpoints

Instead of making individual requests, use batch endpoints where available:
POST /api/v1/consumers/batch
Content-Type: application/json

{
  "consumers": [
    { "email": "[email protected]" },
    { "email": "[email protected]" },
    { "email": "[email protected]" }
  ]
}

3. Cache Responses

Cache read responses to reduce API calls:
const cache = new Map();

async function getCachedData(key, fetchFn, ttlMs = 60000) {
  const cached = cache.get(key);
  if (cached && Date.now() < cached.expiresAt) {
    return cached.data;
  }

  const data = await fetchFn();
  cache.set(key, { data, expiresAt: Date.now() + ttlMs });
  return data;
}

4. Monitor Rate Limit Usage

Track your rate limit consumption to identify patterns:
function logRateLimitUsage(response) {
  // Headers arrive as strings; convert before doing arithmetic.
  const limit = Number(response.headers.get("X-RateLimit-Limit"));
  const remaining = Number(response.headers.get("X-RateLimit-Remaining"));
  if (!limit) return; // Headers absent; nothing to report.
  const usage = ((limit - remaining) / limit) * 100;

  if (usage > 80) {
    console.warn(`High rate limit usage: ${usage.toFixed(1)}%`);
  }
}

5. Distribute Load Over Time

For bulk operations, spread requests over time:
async function processBulkWithDelay(items, processItem, delayMs = 100) {
  for (const item of items) {
    await processItem(item);
    await new Promise((r) => setTimeout(r, delayMs));
  }
}

Plan-Based Limits

Rate limits may vary by subscription plan:
Plan         Admin API   Consumer API   Analytics
Starter      500/min     2000/min       50/min
Growth       1000/min    5000/min       100/min
Enterprise   Custom      Custom         Custom
Contact sales for Enterprise rate limit customization.

Rate Limit Exemptions

Certain endpoints are exempt from standard rate limits:
  • Health check endpoints (/health)
  • OpenAPI documentation endpoints (/openapi.json)
  • Webhook delivery (outbound)

Monitoring and Alerts

Use the Fanfare dashboard to:
  • View real-time rate limit usage
  • Set up alerts for approaching limits
  • Analyze historical usage patterns

Need Higher Limits?

If your use case requires higher rate limits:
  1. Review if batch endpoints can reduce request volume
  2. Implement caching for read operations
  3. Contact support for Enterprise plan options