Rate Limiting
The 0.link API implements rate limiting to ensure fair usage and maintain service reliability. This guide explains rate limits, how to handle them, and best practices for high-volume applications.
Rate Limit Overview
Rate limits restrict the number of API requests you can make within a specific time window. Limits vary by account plan and endpoint type.
Rate Limit Tiers
Free Tier
- Standard Endpoints: 60 requests per minute
- Upload Endpoints: 10 requests per minute
- Analytics Endpoints: 30 requests per minute
Pro Plan
- Standard Endpoints: 600 requests per minute
- Upload Endpoints: 100 requests per minute
- Analytics Endpoints: 300 requests per minute
Enterprise
- Custom Limits: Negotiated based on needs
- Burst Allowance: Higher temporary limits
- Dedicated Resources: Isolated rate limiting
Rate Limit Headers
Every API response includes rate limit information in headers:
HTTP/1.1 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1642248000
X-RateLimit-Window: 60
X-RateLimit-Type: standard
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in window |
| X-RateLimit-Remaining | Requests left in current window |
| X-RateLimit-Reset | Unix timestamp when window resets |
| X-RateLimit-Window | Window duration in seconds |
| X-RateLimit-Type | Endpoint category (standard, upload, analytics) |
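For example, a minimal sketch of reading these headers from a fetch response (the base URL, endpoint path, and apiKey variable are illustrative assumptions, not values from this guide):
// Illustrative request; header names match the table above
const response = await fetch('https://api.0link.com/v1/projects', {
  headers: { 'Authorization': `Bearer ${apiKey}` }
});
const limit = Number(response.headers.get('X-RateLimit-Limit'));
const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
const resetAt = new Date(Number(response.headers.get('X-RateLimit-Reset')) * 1000);
console.log(`${remaining}/${limit} requests left in this window; resets at ${resetAt.toISOString()}`);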
Rate Limit Exceeded Response
When you exceed the rate limit, you'll receive a 429 status code:
{
"status": "error",
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "Too many requests",
"retry_after": 60,
"limit": 60,
"window": "1 minute",
"endpoint_type": "standard"
},
"meta": {
"timestamp": "2024-01-15T10:30:00Z",
"request_id": "req_abc123"
}
}
Handling Rate Limits
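The helpers below assume API failures surface as error objects carrying status and retry_after. As a minimal sketch of producing that shape from a raw fetch call (the callApi name, base URL, and apiKey variable are illustrative, not part of the official SDK):
async function callApi(path, options = {}) {
  // Base URL is an assumption; adjust to your environment
  const response = await fetch(`https://api.0link.com${path}`, {
    ...options,
    headers: { 'Authorization': `Bearer ${apiKey}`, ...options.headers }
  });
  if (response.status === 429) {
    const body = await response.json();
    // Shape the error the way the retry helpers below expect
    throw { status: 429, retry_after: body.error?.retry_after };
  }
  if (!response.ok) {
    throw { status: response.status };
  }
  return response.json();
}
Any wrapper that throws errors in this shape can be passed directly to the helpers that follow.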
Basic Retry Logic
async function makeRequestWithRetry(apiCall, maxRetries = 3) {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await apiCall();
} catch (error) {
if (error.status === 429) {
const retryAfter = error.retry_after || Math.pow(2, attempt);
console.log(`Rate limited. Retrying in ${retryAfter} seconds...`);
await sleep(retryAfter * 1000);
continue;
}
throw error;
}
}
throw new Error('Max retries exceeded');
}
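For example, using the callApi sketch from above (both names are illustrative):
const projects = await makeRequestWithRetry(() => callApi('/v1/projects'));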
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
Exponential Backoff
async function exponentialBackoff(apiCall, maxRetries = 5) {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await apiCall();
} catch (error) {
if (error.status === 429 && attempt < maxRetries - 1) {
// Cap the backoff at 30 seconds and add up to 10% random jitter to avoid synchronized retries
const delay = Math.min(1000 * Math.pow(2, attempt), 30000);
const jitter = Math.random() * 0.1 * delay;
await sleep(delay + jitter);
continue;
}
throw error;
}
}
}
Proactive Rate Limiting
Monitor headers to avoid hitting limits:
class RateLimitedClient {
constructor(apiKey) {
this.apiKey = apiKey;
this.remaining = Infinity;
this.resetTime = 0;
}
async makeRequest(endpoint, options = {}) {
// Check if we're close to limit
if (this.remaining < 5 && Date.now() < this.resetTime * 1000) {
const waitTime = (this.resetTime * 1000) - Date.now();
console.log(`Proactively waiting ${waitTime}ms to avoid rate limit`);
await sleep(waitTime);
}
const response = await fetch(endpoint, {
...options,
headers: {
'Authorization': `Bearer ${this.apiKey}`,
...options.headers
}
});
// Update rate limit tracking from response headers (they may be absent on some responses)
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
const resetTime = parseInt(response.headers.get('X-RateLimit-Reset'), 10);
if (!Number.isNaN(remaining)) this.remaining = remaining;
if (!Number.isNaN(resetTime)) this.resetTime = resetTime;
return response;
}
}
Best Practices
1. Respect Rate Limit Headers
Always check rate limit headers and adjust request timing:
function checkRateLimit(headers) {
const remaining = parseInt(headers['x-ratelimit-remaining']);
const resetTime = parseInt(headers['x-ratelimit-reset']);
if (remaining < 10) {
const waitTime = (resetTime * 1000) - Date.now();
console.warn(`Low rate limit remaining: ${remaining}. Reset in ${waitTime}ms`);
}
}
2. Implement Graceful Degradation
Design your application to handle rate limits gracefully:
async function fetchDataWithFallback(primaryEndpoint, fallbackData) {
try {
return await apiClient.get(primaryEndpoint);
} catch (error) {
if (error.status === 429) {
console.log('Rate limited, using cached data');
return fallbackData;
}
throw error;
}
}
3. Batch Operations
Group multiple operations to reduce API calls:
// ❌ Individual requests (uses 3 API calls)
const user1 = await client.users.get('user_1');
const user2 = await client.users.get('user_2');
const user3 = await client.users.get('user_3');
// ✅ Batch request (uses 1 API call)
const users = await client.users.getBatch(['user_1', 'user_2', 'user_3']);
4. Cache Responses
Implement caching to reduce API calls:
class CachedApiClient {
constructor(apiCall) {
// apiCall is the underlying request function, e.g. endpoint => client.get(endpoint)
this.apiCall = apiCall;
this.cache = new Map();
this.cacheTime = 5 * 60 * 1000; // cache entries for 5 minutes
}
async get(endpoint) {
const cached = this.cache.get(endpoint);
if (cached && Date.now() - cached.timestamp < this.cacheTime) {
return cached.data;
}
const data = await this.apiCall(endpoint);
this.cache.set(endpoint, {
data,
timestamp: Date.now()
});
return data;
}
}
5. Spread Requests Over Time
Avoid bursts by spreading requests:
async function processItemsWithRateLimit(items, processor, requestsPerMinute = 30) {
const delay = (60 * 1000) / requestsPerMinute;
for (const item of items) {
await processor(item);
await sleep(delay);
}
}
Monitoring Rate Limits
Track Usage Patterns
Monitor your rate limit usage:
class RateLimitMonitor {
constructor() {
this.requests = [];
}
logRequest(endpoint, remaining, limit) {
this.requests.push({
endpoint,
remaining,
limit,
timestamp: Date.now(),
utilization: ((limit - remaining) / limit) * 100
});
// Cleanup old data (keep last hour)
const oneHourAgo = Date.now() - (60 * 60 * 1000);
this.requests = this.requests.filter(req => req.timestamp > oneHourAgo);
}
getUtilizationStats() {
if (this.requests.length === 0) return null;
const utilizations = this.requests.map(req => req.utilization);
return {
average: utilizations.reduce((a, b) => a + b, 0) / utilizations.length,
max: Math.max(...utilizations),
recent: utilizations.slice(-10).reduce((a, b) => a + b, 0) / Math.min(10, utilizations.length)
};
}
}
Set Up Alerts
Configure monitoring for rate limit issues:
function checkRateLimitHealth(remaining, limit) {
const utilization = ((limit - remaining) / limit) * 100;
if (utilization > 90) {
console.error('CRITICAL: Rate limit utilization above 90%');
// Send alert to monitoring system
} else if (utilization > 75) {
console.warn('WARNING: Rate limit utilization above 75%');
}
}
SDK Rate Limiting
Our official SDKs include built-in rate limiting:
JavaScript SDK
import { ZeroLink } from '@0link/sdk';
const client = new ZeroLink({
apiKey: 'your_api_key',
rateLimiting: {
enabled: true,
strategy: 'exponential_backoff',
maxRetries: 3
}
});
// SDK automatically handles rate limits
const projects = await client.projects.list();
Python SDK
from zerolink import Client
client = Client(
api_key='your_api_key',
rate_limiting={
'enabled': True,
'strategy': 'exponential_backoff',
'max_retries': 3
}
)
# SDK automatically handles rate limits
projects = client.projects.list()
Troubleshooting
Common Issues
Consistent 429 Errors
- Check if your request volume exceeds your plan limits
- Implement proper retry logic with exponential backoff
- Consider upgrading your plan
Unexpected Rate Limiting
- Verify you're not making concurrent requests (see the sketch after this list)
- Check for automated processes consuming your quota
- Monitor usage patterns in your dashboard
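If concurrency is the cause, serializing requests usually resolves it. A minimal sketch of a request queue that runs API calls one at a time (the RequestQueue name is illustrative, not an official SDK feature):
class RequestQueue {
  constructor() {
    // Each call is chained onto the previous one, so only one request is in flight
    this.tail = Promise.resolve();
  }
  run(task) {
    const result = this.tail.then(() => task());
    // Keep the chain alive even if a task rejects
    this.tail = result.catch(() => {});
    return result;
  }
}
Route every API call through a single queue instance, e.g. queue.run(() => client.projects.list()), to guarantee sequential execution.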
Slow Performance
- Implement caching to reduce API calls
- Use batch endpoints where available
- Optimize request timing and frequency
Getting More Capacity
If you need higher rate limits:
- Upgrade Plan: Higher tiers include increased limits
- Contact Sales: Enterprise plans offer custom limits
- Optimize Usage: Review and optimize your API usage patterns
- Multiple Keys: Use multiple API keys for different services
Plan Comparison
| Feature | Free | Pro | Enterprise |
|---|---|---|---|
| Standard API | 60/min | 600/min | Custom |
| Upload API | 10/min | 100/min | Custom |
| Analytics API | 30/min | 300/min | Custom |
| Burst Allowance | No | 2x for 1 min | Custom |
| Multiple Keys | 3 | 10 | Unlimited |
Getting Help
For rate limiting questions:
- Dashboard: View usage at dashboard.0link.com
- Support: Email support@0link.com
- Sales: Contact sales@0link.com for higher limits
- Status: Check status.0link.com for service issues