## Logs Not Recording

### Symptom

The SDK is integrated, but logs are not appearing in the dashboard.
### Checklist

- **Verify the API key is correct.** Confirm you are using an API key that starts with `at-`.
- **Check that `enabled=True`.** Verify that `enabled=False` is not set during initialization.
- **Confirm the client is wrapped.** Verify that the OpenAI (or other) client is wrapped with `tracer.wrap_openai()`.
- **Check the project name.** Verify you have selected the correct project in the dashboard.
```python
# Correct configuration example
from aitracer import AITracer
from openai import OpenAI

tracer = AITracer(
    api_key="at-xxxx",    # <- Starts with at-
    project="my-project",
    enabled=True,         # <- Must be True
)

# Wrap the client
client = tracer.wrap_openai(OpenAI())  # <- Don't forget to wrap

# Now logs will be recorded
response = client.chat.completions.create(...)
```
### Async Mode

In the default async mode, logs are buffered before being sent. If you want to see logs immediately, call `flush()`.

```python
# Send logs immediately
tracer.flush()
```
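To make sure buffered logs survive an early process exit (not just at one explicit call site), one option is to register `flush()` at interpreter shutdown with `atexit`. A minimal sketch, using a stand-in tracer class since only the `flush()` call matters here; with the real SDK you would register your actual tracer:

```python
import atexit

class _DemoTracer:
    """Stand-in for an AITracer instance; only flush() matters for this sketch."""
    def __init__(self):
        self.flushed = False

    def flush(self):
        # The real SDK would drain its log buffer to the API here.
        self.flushed = True

demo_tracer = _DemoTracer()

# Guarantee buffered logs are sent even if the script exits before
# the buffer drains on its own.
atexit.register(demo_tracer.flush)
```

This covers normal interpreter shutdown; for serverless platforms that freeze or kill the process, see the Serverless Environments section below.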
## API Key Errors

### Error Message

`Invalid API key` or `Unauthorized`
### Solution

- **Check the API key format.** Use a correctly formatted key that starts with `at-`.
- **Check whether the key is disabled.** Check the key status under "Settings" -> "API Keys" in the dashboard.
- **Verify the environment variable.** Confirm that the environment variable `AITRACER_API_KEY` is set correctly.
- **Regenerate the key.** If the problem persists, generate a new API key.
```python
# Check environment variable (Python)
import os
print(os.getenv("AITRACER_API_KEY"))  # Should display at-xxxx
```

```bash
# Check environment variable (Bash)
echo $AITRACER_API_KEY
```
## Rate Limits

### Error Message

`Rate limit exceeded` or `429 Too Many Requests`
### Solution

- **Check plan limits.** Verify you haven't reached the log limit for your current plan.
- **Upgrade your plan.** If you've reached the limit, upgrade to a higher-tier plan.
- **Temporary workaround.** The SDK retries automatically, but if you're sending large volumes of logs, add delays between sends.
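If you manage sends yourself, the "add delays" advice amounts to retrying with exponential backoff. A minimal sketch, not part of the SDK; the `send` callable and the `RuntimeError` standing in for a 429 response are illustrative:

```python
import time

def send_with_backoff(send, max_retries=3, base_delay=1.0):
    """Call send(); on a rate-limit error, wait and retry with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return send()
        except RuntimeError:  # stand-in for a 429 / rate-limit error
            if attempt == max_retries:
                raise  # out of retries; surface the error
            # Back off: base_delay, 2x, 4x, ...
            time.sleep(base_delay * (2 ** attempt))
```

Keeping the delay exponential rather than fixed spreads retries out quickly, which is usually what rate-limited APIs expect.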
### Log Limits by Plan

| Plan | Monthly Log Limit |
|---|---|
| Free | 1,000 |
| Starter | 10,000 |
| Pro | 100,000 |
| Enterprise | Unlimited |
## Network Errors

### Error Message

`Connection error`, `Timeout`, `Name resolution failed`
### Solution

- **Check the internet connection.** Verify you can connect to `api.aitracer.co`.
- **Check firewall/proxy settings.** On corporate networks, verify that connections to `api.aitracer.co:443` are allowed.
- **Check DNS settings.** Verify that DNS resolves correctly.
```bash
# Connection test (Bash)
curl -I https://api.aitracer.co/health

# DNS resolution test
nslookup api.aitracer.co
```
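If `curl` or `nslookup` aren't available, the DNS check can also be run from Python with the standard library; the helper below is just a sketch:

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Return True if DNS can resolve the host for the given port."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# Example: check the AITracer API endpoint
# can_resolve("api.aitracer.co")
```

A `False` result here corresponds to the `Name resolution failed` error above and points at DNS rather than firewall rules.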
Even if network errors occur, the SDK does not interfere with your application. Only log sending fails; LLM API calls work normally.
## Performance Issues

### Symptom

The application feels slower after integrating AITracer.

### Checklist

- **Verify async mode is being used.** The default is async mode (`sync=False`). Check that sync mode is not enabled.
- **Confirm the impact is under 3 ms.** Impact in async mode is typically under 3 ms. If you're experiencing more latency, investigate other causes.
```python
# Check if sync mode is enabled
tracer = AITracer(
    api_key="at-xxxx",
    sync=False,  # <- Should be False (default)
)

# Sync mode is not recommended except for serverless environments
```
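To confirm whether the wrapper is really the source of the extra latency, you can time calls with and without it. A sketch with a generic timing helper; in practice you would pass your raw and wrapped client calls rather than the dummy functions used in this example:

```python
import time

def measure_overhead(baseline, wrapped, runs=50):
    """Return the average extra seconds per call that `wrapped` adds over `baseline`."""
    def avg(fn):
        start = time.perf_counter()
        for _ in range(runs):
            fn()
        return (time.perf_counter() - start) / runs

    # Positive result: wrapped is slower; compare against the ~3 ms figure.
    return avg(wrapped) - avg(baseline)
```

If the measured difference stays in the low-millisecond range, the slowdown you're seeing most likely comes from somewhere else.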
## Data Issues

### Token Count Is 0

#### Symptom

Logs are recorded, but the token count shows as 0.

#### Cause

Some LLM providers don't include token counts in their responses. Token counts may also be unavailable in streaming mode.
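When consuming responses yourself, it helps to read token counts defensively rather than assume a usage object is present. A sketch assuming OpenAI-style attribute names (`prompt_tokens` / `completion_tokens`); other providers may use different names:

```python
def get_token_counts(response):
    """Return (input_tokens, output_tokens); fall back to (0, 0) when usage is missing."""
    usage = getattr(response, "usage", None)
    if usage is None:
        # Provider omitted usage (e.g. streaming mode): report zeros.
        return 0, 0
    return (
        getattr(usage, "prompt_tokens", 0),
        getattr(usage, "completion_tokens", 0),
    )
```

A missing `usage` attribute is exactly the situation that produces a 0 token count in the dashboard.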
### Cost Not Calculated

#### Symptom

Cost shows as 0 or "-".

#### Cause

- Token count not available
- Model name not recognized (custom model names)
- New model with unregistered pricing

For manual logs, explicitly specify `input_tokens` and `output_tokens`.
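The "model name not recognized" case comes down to a price-table lookup. A sketch of how such a calculation behaves; the price table and model name here are illustrative, not AITracer's actual pricing:

```python
# Hypothetical per-1M-token prices in USD: (input, output)
PRICES = {
    "gpt-4o": (2.50, 10.00),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Return estimated USD cost, or None when the model has no registered price."""
    if model not in PRICES:
        return None  # unrecognized model -> cost displays as "-"
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000
```

A custom or brand-new model name simply misses the lookup, which is why the dashboard shows "-" until pricing is registered.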
## Serverless Environments

### Symptom

Logs are not recorded, or only partially recorded, on AWS Lambda / Cloud Functions / Vercel.

### Solution

In serverless environments, the process may terminate before the buffer is sent. Apply one of the following solutions.
#### Option 1: Use Sync Mode

```python
tracer = AITracer(
    api_key="at-xxxx",
    sync=True,  # <- True is recommended for serverless
)
```
#### Option 2: Call flush() Before Exit

```python
def handler(event, context):
    try:
        response = client.chat.completions.create(...)
        return {"statusCode": 200, "body": response}
    finally:
        tracer.flush()  # <- Always call this
```
Using sync mode adds slight latency to each request. If performance is critical, try the `flush()` approach instead.
## Debugging Methods

### Enable Debug Logging

```python
import logging

# Set the AITracer log level to DEBUG
logging.getLogger("aitracer").setLevel(logging.DEBUG)

# Add a handler so debug output reaches the console
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
logging.getLogger("aitracer").addHandler(handler)
```
### Check Data Before Sending

```python
# Inspect buffer contents (for debugging only; _buffer is a private attribute)
print(tracer._buffer)

# Check statistics
print(tracer.stats())  # Displays sent/failed log counts
```
## If Issues Persist

Contact support with the following information:
- SDK version you're using
- Python / PHP / Node.js version
- Error message (screenshot or text)
- Code snippet where the issue occurs
- Execution environment (local / AWS Lambda / GCP, etc.)
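Most of the environment details above can be gathered in one go. This snippet uses only the standard library; add your SDK version (e.g. from `pip show aitracer`) separately:

```python
import platform
import sys

def environment_report():
    """Collect environment details that are useful in a support ticket."""
    return {
        "python": sys.version.split()[0],   # e.g. "3.12.1"
        "platform": platform.platform(),    # OS and kernel details
        "executable": sys.executable,       # which interpreter is running
    }

print(environment_report())
```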
