
Complete API reference for all modules and functions.


Table of Contents

  1. Core Tracing API
  2. Async Tracing API
  3. Comparison API
  4. Memory Leak Detection API
  5. Profiling API
  6. Export API
  7. Flamegraph API
  8. Jupyter API
  9. Data Structures

Core Tracing API

trace_scope(output_file=None)

Context manager for tracing function calls.

Parameters:

output_file (str, optional): Path to an HTML file. If provided, the graph is exported automatically when the scope exits.

Returns:

CallGraph: The graph of calls recorded inside the scope.

Example:

from callflow_tracer import trace_scope

with trace_scope() as graph:
    my_function()

# Or auto-export
with trace_scope("output.html"):
    my_function()
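
Under the hood, a tracing context manager of this kind can be built on sys.setprofile. The following is an illustrative sketch only (not the library's actual implementation; trace_scope_sketch is a hypothetical name), counting Python-level calls while the scope is active:

```python
import sys
from contextlib import contextmanager

@contextmanager
def trace_scope_sketch():
    """Illustrative only: count Python-level function calls inside the scope."""
    calls = {}

    def profiler(frame, event, arg):
        if event == "call":  # fires on every Python function call
            name = frame.f_code.co_name
            calls[name] = calls.get(name, 0) + 1

    sys.setprofile(profiler)
    try:
        yield calls
    finally:
        sys.setprofile(None)  # always uninstall the hook, even on error

def my_function():
    return 42

with trace_scope_sketch() as calls:
    my_function()
    my_function()

print(calls["my_function"])  # 2
```

The real trace_scope also records timings and caller/callee edges, but the install-yield-uninstall shape is the same.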

@trace

Decorator to trace a specific function.

Example:

from callflow_tracer import trace

@trace
def my_function():
    return 42

get_current_graph()

Get the current active call graph.

Returns:

CallGraph or None: The active graph, or None if tracing is not in progress.

Example:

from callflow_tracer import get_current_graph

graph = get_current_graph()
if graph:
    print(f"Nodes: {len(graph.nodes)}")

clear_trace()

Clear the current trace data.

Note: This function is not thread-safe.

Example:

from callflow_tracer import clear_trace

clear_trace()  # Discard all recorded trace data before starting a new run

Async Tracing API

AsyncCallGraph

Extended CallGraph with async metadata (timeline, await time, concurrency).
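
The await-time bookkeeping such a graph needs can be illustrated with plain asyncio. This is a sketch under stated assumptions, not the library's implementation; timed_await is a hypothetical helper:

```python
import asyncio
import time

async def timed_await(coro, record):
    """Await a coroutine and record how long the await took (wall-clock seconds)."""
    start = time.perf_counter()
    result = await coro
    record.append(time.perf_counter() - start)
    return result

async def workload(record):
    # Two sequential awaits: total await time is roughly their sum.
    await timed_await(asyncio.sleep(0.05), record)
    await timed_await(asyncio.sleep(0.02), record)

await_times = []
asyncio.run(workload(await_times))
print(f"awaits: {len(await_times)}, total await time: {sum(await_times):.3f}s")
```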

Comparison API

compare_graphs(before, after)

Compare two call graphs and return a structured diff.

Returns:

dict: Contains a summary section and per-function differences.
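
The shape of such a diff can be sketched as a plain dict comparison. The input shape ({function_name: total_time}) and return keys here are illustrative assumptions, not the library's exact format:

```python
def compare_timings(before, after):
    """Build a per-function diff between two {function_name: total_time} mappings."""
    diff = {}
    for name in sorted(set(before) | set(after)):
        b = before.get(name, 0.0)
        a = after.get(name, 0.0)
        diff[name] = {"before": b, "after": a, "delta": a - b}
    improved = sum(1 for d in diff.values() if d["delta"] < 0)
    regressed = sum(1 for d in diff.values() if d["delta"] > 0)
    return {
        "summary": {"functions": len(diff), "improved": improved, "regressed": regressed},
        "functions": diff,
    }

result = compare_timings({"slow_fn": 1.20, "stable_fn": 0.30},
                         {"slow_fn": 0.45, "stable_fn": 0.30})
print(result["summary"])  # {'functions': 2, 'improved': 1, 'regressed': 0}
```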

export_comparison_html(before, after, output_file, label1=None, label2=None, title=None)

Generate a split-screen HTML report highlighting improvements and regressions.

Example:

from callflow_tracer.comparison import export_comparison_html

export_comparison_html(before, after, "comparison.html", label1="Before", label2="After")

Memory Leak Detection API

MemoryLeakDetector

Main detector that orchestrates snapshots, growth analysis, and reporting.


detect_leaks(output_file=None)

Context manager to run code while capturing memory snapshots and generating an optional HTML report.

Example:

from callflow_tracer.memory_leak_detector import detect_leaks
with detect_leaks("leak_report.html") as detector:
    do_work()
    detector.take_snapshot("after_work")

@track_allocations

Decorator to track allocations made within a function, printing a brief report or attaching it to the detector state.
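
One way such a decorator can work, sketched with the stdlib tracemalloc module. This is illustrative, not the package's actual code; track_allocations_sketch and last_report are hypothetical names:

```python
import functools
import tracemalloc

def track_allocations_sketch(func):
    """Record net and peak traced allocations made while func runs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        try:
            result = func(*args, **kwargs)
        finally:
            current, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            wrapper.last_report = {"current_bytes": current, "peak_bytes": peak}
        return result
    return wrapper

@track_allocations_sketch
def build_table():
    return [list(range(100)) for _ in range(100)]

table = build_table()
print(build_table.last_report["peak_bytes"] > 0)  # True
```

Note the sketch starts and stops tracemalloc per call, so it does not nest; the real detector coordinates this through the shared MemoryLeakDetector state.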


MemorySnapshot(label)

Capture a point-in-time snapshot of memory and objects. Use compare_to(other) to compute diffs.
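
The snapshot-and-compare pattern mirrors tracemalloc's own API, which can serve as a standalone sketch of what MemorySnapshot.compare_to does conceptually:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

retained = [bytearray(1024) for _ in range(100)]  # allocate ~100 KB and keep it alive

after = tracemalloc.take_snapshot()
tracemalloc.stop()

# Per-line allocation diffs, largest changes first
diffs = after.compare_to(before, "lineno")
for stat in diffs[:3]:
    print(stat)
```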


find_reference_cycles()

Return a list of detected reference cycles using the gc module.
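
The core idea behind cycle detection with gc can be sketched as follows; DEBUG_SAVEALL keeps collected-but-unreachable objects visible in gc.garbage so a cycle's members can be inspected (illustrative only):

```python
import gc

class Node:
    def __init__(self, name):
        self.name = name
        self.partner = None

gc.disable()
gc.set_debug(gc.DEBUG_SAVEALL)  # keep unreachable objects in gc.garbage for inspection

a, b = Node("a"), Node("b")
a.partner, b.partner = b, a  # two-object reference cycle
del a, b                     # cycle is now unreachable, but refcounts never reach zero

gc.collect()
cycle_members = [o for o in gc.garbage if isinstance(o, Node)]
print(sorted(n.name for n in cycle_members))  # ['a', 'b']

gc.set_debug(0)
gc.garbage.clear()
gc.enable()
```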


get_memory_growth(interval=1.0, iterations=5)

Measure memory growth over time by sampling at a fixed interval.
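
A sampling loop of this kind can be sketched with tracemalloc and time. The interval, the simulated workload, and the delta-list return shape are illustrative assumptions:

```python
import time
import tracemalloc

def memory_growth_sketch(interval=0.01, iterations=5):
    """Sample traced memory at a fixed interval; return per-interval deltas in bytes."""
    tracemalloc.start()
    samples = []
    sink = []
    for _ in range(iterations):
        sink.append(bytearray(10_000))  # simulate steady growth between samples
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
        time.sleep(interval)
    tracemalloc.stop()
    return [later - earlier for earlier, later in zip(samples, samples[1:])]

deltas = memory_growth_sketch()
print(len(deltas), all(d > 0 for d in deltas))  # 4 True
```

Consistently positive deltas over many samples are the signal a leak detector looks for.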


get_top_memory_consumers(limit=10)

Return the top allocation sites/files by memory consumed (via tracemalloc).
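
tracemalloc's snapshot statistics are the natural building block here; a standalone sketch (top_memory_consumers_sketch is a hypothetical name):

```python
import tracemalloc

def top_memory_consumers_sketch(limit=10):
    """Return the top allocation sites by traced size, largest first."""
    snapshot = tracemalloc.take_snapshot()
    return snapshot.statistics("lineno")[:limit]

tracemalloc.start()
workload = [list(range(1_000)) for _ in range(100)]  # dominant allocation site
top = top_memory_consumers_sketch(limit=3)
tracemalloc.stop()

for stat in top:
    print(stat)  # e.g. "<file>:<line>: size=..., count=..., average=..."
```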


Profiling API

@profile_function

Decorator to profile a function’s performance.

Tracks:

CPU time, memory usage (current and peak), and I/O wait time.

Example:

from callflow_tracer import profile_function

@profile_function
def expensive_function():
    # Your code here
    pass

# Access stats
stats = expensive_function.performance_stats
print(stats.to_dict())
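
A decorator with this shape can be built on the stdlib cProfile module. This is a minimal sketch under stated assumptions (the real decorator also records memory and I/O wait; profile_function_sketch and profile_report are hypothetical names):

```python
import cProfile
import functools
import io
import pstats

def profile_function_sketch(func):
    """Run func under cProfile and keep a formatted stats dump on the wrapper."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        profiler = cProfile.Profile()
        result = profiler.runcall(func, *args, **kwargs)
        buffer = io.StringIO()
        pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
        wrapper.profile_report = buffer.getvalue()
        return result
    return wrapper

@profile_function_sketch
def expensive_function():
    return sum(i * i for i in range(100_000))

value = expensive_function()
print("function calls" in expensive_function.profile_report)  # True
```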

profile_section(name=None)

Context manager for profiling a code section.

Parameters:

name (str, optional): Label for the profiled section.

Returns:

PerformanceStats: Statistics collected while the section ran.

Example:

from callflow_tracer import profile_section

with profile_section("Data Processing") as stats:
    # Your code here
    process_data()

# Access stats
stats_dict = stats.to_dict()
print(f"CPU time: {stats_dict['cpu']}")
print(f"Memory: {stats_dict['memory']}")
print(f"I/O wait: {stats_dict['io_wait']}")

get_memory_usage()

Get current memory usage in MB.

Returns:

float: Current memory usage in MB.

Example:

from callflow_tracer import get_memory_usage

mem = get_memory_usage()
print(f"Memory: {mem:.2f}MB")

PerformanceStats.to_dict()

Convert performance statistics to dictionary.

Returns:

dict: With cpu, memory, and io_wait keys.

Example:

stats_dict = stats.to_dict()

# Memory stats
if stats_dict['memory']:
    print(f"Current: {stats_dict['memory']['current_mb']:.2f}MB")
    print(f"Peak: {stats_dict['memory']['peak_mb']:.2f}MB")

# CPU stats
if stats_dict['cpu']:
    print(stats_dict['cpu']['profile_data'])

# I/O wait
print(f"I/O wait: {stats_dict['io_wait']:.4f}s")

Export API

export_html(graph, output_file, title=None, layout='hierarchical', profiling_stats=None)

Export call graph to interactive HTML.

Parameters:

graph: CallGraph to export.
output_file (str): Destination HTML file path.
title (str, optional): Page title for the report.
layout (str): Graph layout, 'hierarchical' (default) or 'force'.
profiling_stats (dict, optional): Output of PerformanceStats.to_dict() to embed.

Example:

from callflow_tracer import trace_scope, export_html

with trace_scope() as graph:
    my_function()

export_html(
    graph,
    "output.html",
    title="My Application",
    layout="hierarchical",
    profiling_stats=stats.to_dict()
)

export_json(graph, output_file)

Export call graph to JSON format.

Parameters:

graph: CallGraph to export.
output_file (str): Destination JSON file path.

Example:

from callflow_tracer import trace_scope, export_json

with trace_scope() as graph:
    my_function()

export_json(graph, "output.json")

Flamegraph API

generate_flamegraph(call_graph, output_file=None, width=1200, height=800, title="CallFlow Flame Graph", color_scheme="default", show_stats=True, min_width=0.1, search_enabled=True)

Generate an interactive flamegraph visualization.

Parameters:

call_graph: CallGraph (or its dict form) to visualize.
output_file (str, optional): Destination HTML file.
width, height (int): Canvas size in pixels.
title (str): Chart title.
color_scheme (str): Coloring mode, e.g. 'default' or 'performance'.
show_stats (bool): Include the statistics panel.
min_width (float): Minimum frame width to render; narrower frames are hidden.
search_enabled (bool): Include the search box.

Returns:

str or None (see the type-hinted signature under Type Hints).

Example:

from callflow_tracer import trace_scope
from callflow_tracer.flamegraph import generate_flamegraph

with trace_scope() as graph:
    my_function()

# Basic
generate_flamegraph(graph, "flamegraph.html")

# Enhanced with all features
generate_flamegraph(
    graph,
    "enhanced.html",
    title="Performance Analysis",
    color_scheme="performance",
    show_stats=True,
    search_enabled=True,
    min_width=0.1,
    width=1600,
    height=1000
)

Jupyter API

init_jupyter()

Initialize Jupyter notebook integration.

Example:

from callflow_tracer.jupyter import init_jupyter

init_jupyter()
# Loads magic commands

display_callgraph(graph_data, width="100%", height="600px", layout="hierarchical")

Display an interactive call graph in a Jupyter notebook.

Parameters:

graph_data (dict): Output of graph.to_dict().
width, height (str): CSS sizes for the embedded view.
layout (str): 'hierarchical' or 'force'.

Example:

from callflow_tracer import trace_scope
from callflow_tracer.jupyter import display_callgraph

with trace_scope() as graph:
    my_function()

# Display inline in notebook
display_callgraph(
    graph.to_dict(),
    width="100%",
    height="800px",
    layout="force"
)

%callflow_trace

Line magic to trace a single line of code.

Example:

%callflow_trace my_function()

%%callflow_cell_trace

Cell magic to trace an entire cell.

Example:

%%callflow_cell_trace

def my_function():
    return 42

result = my_function()
print(result)

Data Structures

CallGraph

Main graph object containing traced calls.

Attributes:

nodes: Mapping of function identifiers to CallNode objects.
edges: Recorded caller-to-callee relationships.

Methods:

to_dict()

Convert graph to dictionary format.

Returns:

dict: With nodes, edges, and metadata keys.

Example:

graph_dict = graph.to_dict()

print(f"Nodes: {len(graph_dict['nodes'])}")
print(f"Edges: {len(graph_dict['edges'])}")
print(f"Duration: {graph_dict['metadata']['duration']:.4f}s")

CallNode

Represents a function in the call graph.

Attributes:

full_name (str): Qualified function name.
call_count (int): Number of recorded calls.
total_time (float): Cumulative execution time in seconds.
avg_time (float): Average time per call in seconds.

Example:

for node in graph.nodes.values():
    print(f"{node.full_name}:")
    print(f"  Calls: {node.call_count}")
    print(f"  Total time: {node.total_time:.4f}s")
    print(f"  Avg time: {node.avg_time:.4f}s")

PerformanceStats

Container for performance statistics.

Attributes:

Methods:

to_dict()

Convert stats to dictionary.

Returns:

dict: With cpu, memory, and io_wait keys.

_get_memory_stats()

Get memory statistics.

Returns:

dict: With current_mb and peak_mb, or None if unavailable.

_get_cpu_stats()

Get CPU profiling statistics.

Returns:

dict: Containing profile_data, or None if unavailable.


Complete Example

from callflow_tracer import (
    trace_scope,
    profile_section,
    export_html,
    export_json
)
from callflow_tracer.flamegraph import generate_flamegraph
import time

def slow_function():
    """Intentionally slow function."""
    time.sleep(0.1)
    return sum(range(10000))

def fast_function():
    """Fast function."""
    return sum(range(100))

def main_workflow():
    """Main application workflow."""
    slow = slow_function()
    fast = fast_function()
    return slow + fast

# Trace and profile
with profile_section("Main Workflow") as perf_stats:
    with trace_scope() as graph:
        result = main_workflow()
        print(f"Result: {result}")

# Export call graph with profiling
export_html(
    graph,
    "callgraph.html",
    title="Application Call Graph",
    profiling_stats=perf_stats.to_dict()
)

# Export flamegraph
generate_flamegraph(
    graph,
    "flamegraph.html",
    title="Performance Flamegraph",
    color_scheme="performance",
    show_stats=True,
    search_enabled=True
)

# Export JSON for programmatic analysis
export_json(graph, "trace.json")

# Analyze programmatically
for node in graph.nodes.values():
    if node.avg_time > 0.05:
        print(f"Bottleneck: {node.full_name} ({node.avg_time:.3f}s)")

Error Handling

All functions handle errors gracefully:

try:
    with trace_scope() as graph:
        risky_function()
except Exception as e:
    print(f"Error: {e}")
    # Graph still contains data up to the error
    if graph:
        export_html(graph, "partial_trace.html")

Type Hints

All functions include type hints:

from typing import Optional, Dict, Any, Union
from pathlib import Path

def generate_flamegraph(
    call_graph: Union[CallGraph, dict],
    output_file: Optional[Union[str, Path]] = None,
    width: int = 1200,
    height: int = 800,
    title: str = "CallFlow Flame Graph",
    color_scheme: str = "default",
    show_stats: bool = True,
    min_width: float = 0.1,
    search_enabled: bool = True
) -> Optional[str]:
    ...

Best Practices

1. Use Context Managers

# Good
with trace_scope() as graph:
    my_function()

# Avoid
trace_scope()  # Without context manager

2. Combine Tracing and Profiling

with profile_section("Analysis") as stats:
    with trace_scope() as graph:
        my_function()

export_html(graph, "output.html", profiling_stats=stats.to_dict())

3. Use Performance Color Scheme

generate_flamegraph(
    graph,
    "output.html",
    color_scheme="performance"  # Best for finding bottlenecks
)

4. Enable All Features

generate_flamegraph(
    graph,
    "output.html",
    show_stats=True,
    search_enabled=True,
    color_scheme="performance"
)

Performance Considerations

Overhead

Memory

Thread Safety


Migration Guide

From Basic to Enhanced

# Old way
from callflow_tracer import trace_scope
from callflow_tracer.flamegraph import generate_flamegraph

with trace_scope() as graph:
    my_function()

generate_flamegraph(graph, "output.html")

# New way (backward compatible!)
generate_flamegraph(
    graph,
    "output.html",
    color_scheme="performance",  # New!
    show_stats=True,             # New!
    search_enabled=True          # New!
)

All old code still works! New parameters are optional.


Troubleshooting

Common Issues

Issue: CPU profile shows 0.000s

Solution: Fixed in the latest version; update the package.

Issue: Flamegraph shows “No Data”

Solution: Ensure code runs inside trace_scope:

with trace_scope() as graph:
    your_function()  # Must be inside

Issue: Module filter not working

Solution: Fixed in the latest version; update the package.


Version History

Latest Version

Previous Versions


API Documentation - Last Updated: 2025-10-05