MCP Server
Enable AI assistants to analyze performance traces and generate flamegraph visualizations
The Flamechart MCP (Model Context Protocol) server enables AI assistants to analyze performance traces and generate flamegraph visualizations. It works with both local trace files and remote traces stored on FlameDeck, across all major trace formats.
Best Model Performance - For optimal flamegraph analysis, we recommend using AI models with strong image understanding capabilities such as OpenAI’s o3. These models can better interpret the visual flamegraph outputs and provide more detailed insights about performance bottlenecks.
Quick Start
1. Install the MCP server
2. Start Analyzing
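For most MCP clients (Cursor, Claude Desktop, and similar), installation is a single server entry in the client's MCP config. The exact config file location varies by client, and the `flamechart` server name below is just an illustrative label; the `npx` command is the same one used in the troubleshooting section:

```json
{
  "mcpServers": {
    "flamechart": {
      "command": "npx",
      "args": ["-y", "@flamedeck/flamechart-mcp"]
    }
  }
}
```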
Try this prompt in your AI assistant (use the absolute local file path to your trace file):
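For example (the path below is a placeholder; substitute the absolute path to your own trace file):

```text
Analyze the trace at /absolute/path/to/trace.json and summarize the biggest performance bottlenecks.
```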
Working with Remote Traces
Setting Up FlameDeck Integration
Create an API Key
- Go to FlameDeck Settings
- Create a new key with `trace:download` permissions
- Copy the generated key
Using Remote Traces
Once configured, you can analyze traces directly from FlameDeck URLs:
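Assuming your MCP client passes the API key to the server through an environment variable (the `FLAMEDECK_API_KEY` name below is illustrative; check the server's README for the exact variable name), the config might look like:

```json
{
  "mcpServers": {
    "flamechart": {
      "command": "npx",
      "args": ["-y", "@flamedeck/flamechart-mcp"],
      "env": {
        "FLAMEDECK_API_KEY": "your-api-key"
      }
    }
  }
}
```

You can then reference traces by URL in prompts, e.g. `https://www.flamedeck.com/traces/{id}`.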
Example Workflows
Debugging Slow React Rendering
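A prompt along these lines (the file path is a placeholder) could drive this workflow:

```text
My React app feels sluggish during renders. Load /absolute/path/to/react-profile.json,
list the top functions by self time, and generate a flamegraph screenshot so I can
see where the time is going.
```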
API Performance Investigation
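For a server-side trace stored on FlameDeck, an illustrative prompt might be:

```text
Load https://www.flamedeck.com/traces/{id}, find the slowest request-handling
functions by total time, and generate a sandwich view for the main handler function.
```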
Supported Trace Formats
The MCP server supports a wide range of profiling formats, including pprof, Chrome DevTools traces, and pyinstrument, among others.
Automatic Format Detection - The server automatically detects trace formats and handles gzipped files transparently.
Available Tools
get_top_functions
Identifies the slowest functions in your trace, helping you find performance bottlenecks.
- File path or FlameDeck URL
- Sort by `self` or `total` time
- Starting position for pagination
- Number of functions to return
Example Usage:
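For instance, an assistant might be asked (the path is a placeholder):

```text
Using /absolute/path/to/trace.json, show the top 10 functions sorted by self time.
```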
generate_flamegraph_screenshot
Creates a visual flamegraph showing the timeline of function execution.
- File path or FlameDeck URL
- Image width in pixels
- Image height in pixels
- Theme: `light` or `dark`
- Start time for zoomed view
- End time for zoomed view
- Stack depth to start visualization
Example Usage:
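An illustrative prompt (the path and numeric values are placeholders) exercising the size, theme, and zoom parameters:

```text
Generate a 1200x800, dark-themed flamegraph screenshot of /absolute/path/to/trace.json,
zoomed to the window between 100 ms and 500 ms.
```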
generate_sandwich_flamegraph_screenshot
Creates a “sandwich view” focusing on a specific function, showing both its callers and callees.
- File path or FlameDeck URL
- Exact function name to focus on
Example Usage:
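For example (the function name `processRequest` and the path are hypothetical):

```text
Create a sandwich flamegraph for the function "processRequest" in
/absolute/path/to/trace.json so I can see its callers and callees.
```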
Performance Features
Smart Caching
The MCP server includes intelligent caching to improve performance:
- Remote URLs: Cached for 5 minutes to avoid repeated API calls
- Local Files: Cached until file modification detected
- Memory Management: Automatic cleanup with 50 profile limit
- Cache Invalidation: Smart invalidation based on file changes
Performance Boost - Subsequent tool calls on the same trace are nearly instant thanks to smart caching.
Troubleshooting
Common Issues
MCP Server Not Starting
- Ensure Node.js is installed (version 16+ recommended)
- Try running with `npx -y @flamedeck/flamechart-mcp` to force the latest version
API Key Authentication Errors
- Verify your API key has `trace:download` permissions
- Check that the key is correctly set in environment variables
- Ensure the FlameDeck URL format is correct: `https://www.flamedeck.com/traces/{id}`
Trace File Not Found
- Use absolute file paths for local traces
- Verify file exists and is readable
- Check that the trace format is supported
Performance Issues
- Large traces (>100MB) may take longer to process
- Consider using time-based zooming for very long traces
- Clear cache if memory usage becomes excessive
Getting Help
For technical support or to report issues, please open a ticket on our GitHub issue tracker.
Advanced Usage
Custom Flamegraph Views
Create focused visualizations by specifying time ranges:
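For example, to focus on a single window of the timeline (the path and time values are placeholders):

```text
Generate a flamegraph screenshot of /absolute/path/to/trace.json showing only
the window between 2000 ms and 3500 ms.
```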
Integration with Code Analysis
Combine trace analysis with code review:
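An illustrative prompt (the trace path and source file are hypothetical) that pairs the trace tools with a code review:

```text
Find the top functions by self time in /absolute/path/to/trace.json, then review
src/server/handlers.ts and suggest optimizations for the hotspots that appear in both.
```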