Dashboard Overview
The AITracer dashboard lets you monitor your LLM application's operational status in real time.
Main Menu
Home Screen
The home screen shows key metrics at a glance.
Key Metric Cards
| Metric | Description |
|---|---|
| Total Requests | Total number of API calls within the selected period |
| Total Cost | API usage charges for the selected period (USD) |
| Average Latency | Average response time for API calls |
| Error Rate | Percentage of failed requests |
| Total Tokens | Sum of input and output tokens |
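Each metric card is a simple aggregate over the raw request logs. The sketch below shows how the five cards can be derived; the log record shape here is an assumption for illustration, not the AITracer export format.

```python
# Illustrative log records (shape is assumed, not the AITracer export format).
logs = [
    {"cost": 0.002, "latency_ms": 420, "input_tokens": 120, "output_tokens": 80,  "error": False},
    {"cost": 0.010, "latency_ms": 950, "input_tokens": 600, "output_tokens": 310, "error": False},
    {"cost": 0.000, "latency_ms": 130, "input_tokens": 50,  "output_tokens": 0,   "error": True},
]

total_requests = len(logs)                                              # Total Requests
total_cost = sum(l["cost"] for l in logs)                               # Total Cost (USD)
avg_latency = sum(l["latency_ms"] for l in logs) / total_requests       # Average Latency
error_rate = sum(l["error"] for l in logs) / total_requests * 100       # Error Rate (%)
total_tokens = sum(l["input_tokens"] + l["output_tokens"] for l in logs)  # Total Tokens
```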
Period Selection
Select the display period from the dropdown in the top right corner.
- Last 1 hour
- Last 24 hours
- Last 7 days
- Last 30 days
- Custom period
Graph Display
- Request Trends: Request count changes over time
- Cost Trends: Daily cost changes
- Latency Distribution: P50/P95/P99 trends
- Usage by Model: Pie chart breakdown
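The P50/P95/P99 values in the latency distribution are percentiles over the observed response times. A minimal sketch using the nearest-rank method (the dashboard's exact interpolation method is not documented here):

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    s = sorted(values)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# Illustrative latency samples in milliseconds.
latencies = [100, 120, 130, 150, 200, 240, 300, 450, 800, 1500]
p50 = percentile(latencies, 50)
p95 = percentile(latencies, 95)
p99 = percentile(latencies, 99)
```

Note that with small sample counts the tail percentiles (P95, P99) can coincide with the maximum observed latency.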
Log Search & Viewing
In the "Logs" menu, you can search and view all API call logs.
Filter Features
| Filter | Description | Example |
|---|---|---|
| Period | Filter by date range | Last 24 hours |
| Project | Filter by project | my-chatbot |
| Model | Filter by model used | gpt-4, claude-3 |
| Provider | Filter by provider | OpenAI, Anthropic |
| Status | Filter by success/error | error |
| Trace ID | Search for specific trace | trace-abc123 |
| Metadata | Search by custom metadata | user_id:user-456 |
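Conceptually, combining filters narrows the log set with an AND across all criteria, with metadata matched by key:value pairs. A hypothetical in-memory version, assuming a simple log-dict shape:

```python
# Hypothetical log records mirroring the filterable fields above.
logs = [
    {"model": "gpt-4", "provider": "OpenAI", "status": "error",
     "trace_id": "trace-abc123", "metadata": {"user_id": "user-456"}},
    {"model": "claude-3", "provider": "Anthropic", "status": "success",
     "trace_id": "trace-def789", "metadata": {"user_id": "user-001"}},
]

def filter_logs(logs, **criteria):
    """Return logs matching every criterion; 'metadata' matches by key:value pairs."""
    def matches(log):
        for key, value in criteria.items():
            if key == "metadata":
                if any(log["metadata"].get(k) != v for k, v in value.items()):
                    return False
            elif log.get(key) != value:
                return False
        return True
    return [l for l in logs if matches(l)]

errors = filter_logs(logs, status="error", metadata={"user_id": "user-456"})
```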
Log List Display Items
- Timestamp: Request timestamp
- Model: Model name used
- Status: Success/Error
- Tokens: Input/Output token count
- Cost: Estimated cost (USD)
- Latency: Response time (ms)
Log Detail View
Click on a log to view detailed information.
- Input Data: Request content (prompts, messages)
- Output Data: Response content
- Metadata: Custom metadata
- Trace Information: Links to related requests
- Error Details: Error message and type (for failed requests)
When PII detection is enabled, detected personal information is masked in the displayed logs.
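To illustrate what masking looks like, here is a minimal regex-based pass over log text. The email and phone patterns are assumptions for the example; AITracer's actual PII detection rules are not documented here.

```python
import re

# Assumed patterns for illustration only.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{3,4}-\d{4}\b"), "[PHONE]"),
]

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a type label."""
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

masked = mask_pii("Contact alice@example.com or 090-1234-5678")
```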
Analytics Features
In the "Analytics" menu, you can view detailed analytics reports.
Cost Analytics
- Daily Cost Trends: Graph showing daily cost changes
- Cost by Model: Which models are incurring costs
- Cost by Project: Cost breakdown by project
- Cost Efficiency: Cost per token comparison
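The cost-efficiency comparison boils down to normalizing each model's spend by its token volume. A sketch with illustrative numbers (the figures below are made up for the example):

```python
# Illustrative per-model totals for a period.
usage = {
    "gpt-4":    {"cost": 12.40, "tokens": 310_000},
    "claude-3": {"cost": 0.90,  "tokens": 600_000},
}

# Cost per 1K tokens, the usual unit for comparing model efficiency.
efficiency = {
    model: round(u["cost"] / u["tokens"] * 1000, 4)
    for model, u in usage.items()
}
```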
Performance Analytics
- Latency Distribution: P50, P95, P99 percentiles
- Latency by Model: Response time comparison by model
- Performance by Time: Performance changes by time of day
- Bottleneck Analysis: Trends for slow requests
Usage Analytics
- Request Trends: Request count over time
- Token Usage: Input/output token trends
- Error Rate Trends: Error rate changes
- Usage by User: Analysis by user_id metadata
Session Analytics
- Session Count: Active session count trends
- Average Turns: Average number of conversation turns per session
- Session Duration: Average session duration
- Feedback: User feedback aggregation
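Average turns and session duration are per-session averages. A minimal sketch, assuming each session record carries a turn count and start/end timestamps (shape assumed for illustration):

```python
# Illustrative session records; timestamps are seconds since session start of day.
sessions = {
    "session-1": {"turns": 4, "start": 0, "end": 300},
    "session-2": {"turns": 6, "start": 0, "end": 900},
}

avg_turns = sum(s["turns"] for s in sessions.values()) / len(sessions)
avg_duration = sum(s["end"] - s["start"] for s in sessions.values()) / len(sessions)
```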
App User Analytics
The App Users feature allows you to track and analyze usage by each end user of your AI application. By specifying a user_id, you can understand per-user costs, usage, and behavior patterns.
The App Users feature is available on Starter plan and above.
App Users List
You can view the list of all users from the "App Users" menu in the sidebar.
| Item | Description |
|---|---|
| User ID | The user_id specified in the SDK |
| Request Count | Total API requests issued by the user |
| Token Count | Total tokens used (input + output) |
| Cost | API costs consumed by the user (USD) |
| Error Count | Number of requests with errors |
| First Access | Timestamp of first recorded log |
| Last Access | Timestamp of last recorded log |
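The list columns above are per-user aggregates grouped by `user_id`. The following sketch reproduces them over hypothetical log records (the record shape and timestamps are assumptions for the example):

```python
from collections import defaultdict

# Hypothetical log records tagged with user_id.
logs = [
    {"user_id": "user-456", "tokens": 300, "cost": 0.004, "error": False, "ts": 100},
    {"user_id": "user-456", "tokens": 500, "cost": 0.008, "error": True,  "ts": 250},
    {"user_id": "user-001", "tokens": 200, "cost": 0.002, "error": False, "ts": 180},
]

users = defaultdict(lambda: {"requests": 0, "tokens": 0, "cost": 0.0,
                             "errors": 0, "first": None, "last": None})
for log in logs:
    u = users[log["user_id"]]
    u["requests"] += 1                       # Request Count
    u["tokens"] += log["tokens"]             # Token Count
    u["cost"] += log["cost"]                 # Cost (USD)
    u["errors"] += log["error"]              # Error Count
    u["first"] = log["ts"] if u["first"] is None else min(u["first"], log["ts"])
    u["last"] = log["ts"] if u["last"] is None else max(u["last"], log["ts"])
```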
User Detail View
Click on a user ID to view detailed information for that user.
- Summary Cards: Request count, cost, token count, average latency, error rate
- Usage by Model: Breakdown of models used by the user
- Recent Logs: Recent API call history
Use Cases
- Identify Heavy Users: Understand high-cost users and analyze usage patterns
- Investigate Error-Prone Users: Identify causes of frequent errors for specific users
- Billing & Limits Reference: Design billing based on per-user API usage
- Improve User Experience: Consider measures for users with high latency
SDK Configuration
To track users, specify user_id when starting a session or in metadata.
```python
# Specify user_id in session
with tracer.session(
    session_id="session-123",
    user_id="user-456"
) as session:
    response = client.chat.completions.create(...)

# Or specify in metadata
tracer.set_metadata({"user_id": "user-456"})
```
See SDK Reference - Session Management for details.
Project Management
You can logically group logs using projects. See the Project Management Guide for SDK project specification methods and best practices.
Creating a Project
- Open the "Projects" menu
- Click "+ New Project"
- Enter project name and description
- Click "Create"
Project Usage Examples
| Project Name | Purpose |
|---|---|
| production | Production environment logs |
| staging | Staging environment logs |
| development | Development environment logs |
| chatbot-v2 | Specific feature logs |
Project Settings
- Default Alerts: Project-specific alert rules
- Data Retention: Log retention period (Enterprise only)
- Access Permissions: Team member permission settings
Settings
API Key Management
Manage API keys in "Settings" -> "API Keys".
- Create New: Issue a new API key
- Revoke: Disable existing keys
- Permissions: Set access permissions per key
- Usage Stats: View request count per key
Team Management
Manage team members in "Settings" -> "Team".
- Invite Members: Invite by email address
- Role Settings: Admin/Member/Viewer
- Access Control: Project-level permissions
Integration Settings
Configure external service integrations in "Settings" -> "Integrations".
- Slack: Alert notification destination
- Webhook: Custom notification endpoint
Billing & Plans
Manage billing information and plan changes in "Settings" -> "Plan".
- Current Plan: Current plan and usage
- Change Plan: Upgrade/Downgrade
- Billing History: View past invoices
- Payment Method: Update credit card information
