POST /transcription-results
Get Transcription Results
curl --request POST \
  --url https://client.camb.ai/apis/transcription-results \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <api-key>' \
  --data '{
  "run_ids": [
    12345,
    6789
  ]
}'

Access multiple transcription results simultaneously through this powerful bulk endpoint. When your transcription processes complete, this API provides comprehensive text representations of your audio or video content with precise timing information and speaker identification for each run. This bulk retrieval capability streamlines workflows for content creators managing multiple transcription projects, enabling efficient processing of large-scale media libraries.

Efficient Bulk Processing

Managing multiple transcription projects becomes seamless with bulk retrieval functionality. Our system delivers transcriptions for multiple runs that include:

  • Parallel Processing: Retrieve results from multiple transcription runs in a single request
  • Temporal Precision: Exact start and end timestamps for each spoken segment across all runs
  • Speaker Differentiation: Clear identification of different speakers throughout each piece of content
  • Verbatim Text: Accurate textual representation of all spoken content for each run

This structured approach to bulk transcription retrieval enables content creators, researchers, and educators to efficiently work with multiple audio-visual materials simultaneously, making entire content libraries searchable, analyzable, and more accessible.

Retrieving Multiple Transcription Results

To access your completed transcriptions in bulk, you’ll need the unique run_id values that were assigned when you initially submitted your transcription requests. These identifiers allow our system to locate your specific transcription results within our processing infrastructure and return them as a consolidated response.

Understanding the Request Structure

The bulk transcription endpoint requires a JSON payload containing an array of run IDs. This allows you to specify exactly which transcription results you want to retrieve:

{
  "run_ids": [12345, 12346, 12347, 12348]
}

Response Structure

The API returns an object keyed by run ID; each entry corresponds to one of the requested runs and contains an array of dialogue items with detailed timing and speaker information:

Field     Description
start     The precise starting point of the speech segment (in seconds)
end       The exact ending point of the speech segment (in seconds)
text      The verbatim transcription of the spoken content in this segment
speaker   Identifier for the person speaking during this segment
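
For example, a response for the two runs in the curl example at the top of this page might look like the following (timings, speaker labels, and text are illustrative):

{
  "12345": {
    "transcript": [
      {"start": 0.0, "end": 2.5, "speaker": "Speaker_1", "text": "Hello world"},
      {"start": 2.5, "end": 5.0, "speaker": "Speaker_2", "text": "How are you?"}
    ]
  },
  "6789": {
    "transcript": [
      {"start": 0.0, "end": 3.2, "speaker": "Speaker_1", "text": "Welcome back."}
    ]
  }
}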

Let’s explore how to retrieve and work with bulk transcription data using Python:

"""
Bulk Transcription Analysis Tool

This script helps you:
1. Fetch multiple transcription results from the CAMB.AI API
2. Analyze the transcriptions (duration, speakers, content)
3. Export transcriptions to CSV files
4. Find common speakers and words across all transcriptions

Before running:
- Replace "your-api-key" with your actual API key
- Install required packages: pip install requests
"""

import requests
import csv
from collections import Counter
import re

# ============================================================================
# CONFIGURATION - Update these values before running
# ============================================================================

# Your API authentication details
headers = {
    "x-api-key": "your-api-key",  # πŸ”‘ REPLACE WITH YOUR ACTUAL API KEY
    "Content-Type": "application/json",
}

# The API endpoint for fetching transcription results
API_URL = "https://client.camb.ai/apis/transcription-results"

# ============================================================================
# MAIN FUNCTIONS
# ============================================================================

def get_bulk_transcriptions(run_ids):
    """
    Fetches multiple transcription results from the API in a single request.

    This function sends a list of run IDs to the API and gets back all the
    transcription data for those runs at once, which is much faster than
    making individual requests for each run.

    Parameters:
        run_ids (list): A list of numbers representing the run IDs you want to fetch
                       Example: [12345, 12346, 12347]

    Returns:
        dict: A dictionary where each key is a run ID and each value contains
              the transcription data for that run. Returns None if there's an error.

    Example response structure:
        {
            "12345": {
                "transcript": [
                    {"start": 0.0, "end": 2.5, "speaker": "Speaker_1", "text": "Hello world"},
                    {"start": 2.5, "end": 5.0, "speaker": "Speaker_2", "text": "How are you?"}
                ]
            }
        }
    """
    try:
        # Prepare the data to send to the API
        payload = {"run_ids": run_ids}

        print(f"πŸ“‘ Requesting transcriptions for {len(run_ids)} runs...")

        # Make the API request
        response = requests.post(
            API_URL,
            headers=headers,
            json=payload,
            timeout=30,  # fail fast instead of hanging on network stalls
        )

        # Check if the request was successful (status code 200-299)
        response.raise_for_status()

        # Convert the response from JSON to a Python dictionary
        results = response.json()

        # Show the raw response for debugging (you can comment this out later)
        print("πŸ“‹ Raw API response:")
        print(results)
        print()

        print(f"βœ… Successfully retrieved {len(results)} transcription results")
        return results

    except requests.exceptions.RequestException as e:
        # This handles network errors, API errors, etc.
        print(f"❌ Error retrieving bulk transcriptions: {e}")
        if hasattr(e, "response") and e.response is not None:
            print(f"πŸ” Response details: {e.response.text}")
        return None

# ============================================================================
# HELPER FUNCTIONS
# ============================================================================

def format_time(seconds):
    """
    Converts seconds to a readable HH:MM:SS format.

    Parameters:
        seconds (float): Time in seconds (e.g., 3661.5)

    Returns:
        str: Formatted time string (e.g., "01:01:01")
    """
    hours = int(seconds // 3600)      # Get whole hours
    minutes = int((seconds % 3600) // 60)  # Get remaining minutes
    secs = int(seconds % 60)          # Get remaining seconds

    # Format with leading zeros (e.g., "01:05:30" instead of "1:5:30")
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"

def safe_get_transcript(transcription_data):
    """
    Safely extracts the transcript from API response data.

    The API returns nested data, so we need to safely navigate to the transcript.

    Parameters:
        transcription_data (dict): The data for one transcription run

    Returns:
        list: List of transcript segments, or empty list if not found
    """
    if not transcription_data:
        return []

    # Navigate safely: transcription_data -> "transcript" -> list of segments
    return transcription_data.get("transcript", [])

# ============================================================================
# ANALYSIS FUNCTIONS
# ============================================================================

def analyze_single_transcription(transcription, run_id):
    """
    Analyzes a single transcription and prints detailed statistics.

    This function examines one transcription result and shows:
    - How long the audio was
    - How many dialogue segments there are
    - Who the speakers are
    - Sample dialogue from the beginning

    Parameters:
        transcription (list): List of dialogue segments from the API
        run_id (str): The ID of this transcription run (for display purposes)
    """
    # Check if we have any data to analyze
    if not transcription:
        print(f"πŸ” Run {run_id}: No transcription data available")
        return

    print(f"πŸ“Š Analyzing Run {run_id}:")
    print("=" * 40)

    # Calculate basic statistics
    total_duration = max(segment["end"] for segment in transcription) if transcription else 0
    total_segments = len(transcription)

    # Find all unique speakers in this transcription
    speakers = set()
    for segment in transcription:
        if "speaker" in segment:
            speakers.add(segment["speaker"])
    unique_speakers = len(speakers)

    # Count how many times each speaker appears
    speaker_counts = Counter()
    for segment in transcription:
        if "speaker" in segment:
            speaker_counts[segment["speaker"]] += 1

    # Find who spoke the most
    most_frequent_speaker = speaker_counts.most_common(1)[0][0] if speaker_counts else "Unknown"

    # Print all the statistics
    print(f"  πŸ• Total duration: {format_time(total_duration)}")
    print(f"  πŸ’¬ Total segments: {total_segments}")
    print(f"  πŸ‘₯ Unique speakers: {unique_speakers}")
    print(f"  🎀 Most frequent speaker: {most_frequent_speaker}")

    # Show the first few lines of dialogue as examples
    print("  πŸ“ Sample dialogue:")
    for i, segment in enumerate(transcription[:2]):  # Show first 2 segments only
        start_time = format_time(segment["start"])
        end_time = format_time(segment["end"])
        speaker = segment.get("speaker", "Unknown")

        # Truncate long text to keep output readable
        text = segment.get("text", "")
        if len(text) > 50:
            text = text[:50] + "..."

        print(f"    [{start_time} β†’ {end_time}] {speaker}: {text}")
    print()  # Add blank line for readability

def analyze_bulk_transcriptions(transcription_results):
    """
    Analyzes all transcription results and provides insights for each one.

    This is the main analysis function that processes all the transcriptions
    you fetched from the API.

    Parameters:
        transcription_results (dict): Dictionary of all transcription results from the API
    """
    if not transcription_results:
        print("❌ No transcription results to analyze")
        return

    print(f"\nπŸ” Analyzing {len(transcription_results)} transcription results:")
    print("=" * 60)

    # Process each transcription one by one
    for run_id in transcription_results:
        # Safely extract the transcript data
        transcript = safe_get_transcript(transcription_results.get(run_id, {}))

        # Analyze this individual transcription
        analyze_single_transcription(transcript, run_id)

# ============================================================================
# EXPORT FUNCTIONS
# ============================================================================

def export_transcription_to_csv(transcription, filename):
    """
    Export a single transcription to a CSV file for further analysis.

    Creates a CSV file with columns: start, end, speaker, text
    This makes it easy to open in Excel or other spreadsheet programs.

    Parameters:
        transcription (list): List of dialogue segments
        filename (str): Name of the output file (e.g., "transcription_12345.csv")
    """
    if not transcription:
        print(f"⚠️  No data to export for {filename}")
        return

    try:
        # Open the file for writing
        with open(filename, "w", newline="", encoding="utf-8") as csvfile:
            # Define the column headers
            fieldnames = ["start", "end", "speaker", "text"]
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames)

            # Write the header row
            writer.writeheader()

            # Write each dialogue segment as a row
            for segment in transcription:
                writer.writerow({
                    "start": segment.get("start", ""),
                    "end": segment.get("end", ""),
                    "speaker": segment.get("speaker", ""),
                    "text": segment.get("text", ""),
                })

        print(f"πŸ’Ύ Exported transcription to {filename}")

    except Exception as e:
        print(f"❌ Error exporting to {filename}: {e}")

# ============================================================================
# CROSS-TRANSCRIPTION ANALYSIS FUNCTIONS
# ============================================================================

def extract_speakers_from_bulk(transcription_results):
    """
    Finds all unique speakers across multiple transcription runs.

    This helps you understand who appears in your audio content overall.

    Parameters:
        transcription_results (dict): Dictionary of all transcription results

    Returns:
        list: List of all unique speaker names found across all transcriptions
    """
    all_speakers = set()  # Use a set to automatically handle duplicates

    # Go through each transcription
    for run_id in transcription_results:
        transcript = safe_get_transcript(transcription_results.get(run_id, {}))

        if transcript:
            # Extract speakers from this transcription
            for segment in transcript:
                if "speaker" in segment:
                    all_speakers.add(segment["speaker"])

    return list(all_speakers)  # Convert back to list

def find_common_words(transcription_results, min_length=5):
    """
    Identifies frequently used words across multiple transcriptions.

    This helps you understand the main topics and themes in your audio content.

    Parameters:
        transcription_results (dict): Dictionary of all transcription results
        min_length (int): Minimum word length to consider (default: 5)
                         This filters out common short words like "the", "and", etc.

    Returns:
        Counter: A Counter object with word frequencies
    """
    all_words = []

    # Go through each transcription
    for run_id in transcription_results:
        transcript = safe_get_transcript(transcription_results.get(run_id, {}))

        if transcript:
            # Extract words from each dialogue segment
            for segment in transcript:
                if "text" in segment:
                    # Convert to lowercase for consistent counting
                    text = segment["text"].lower()

                    # Extract words (letters only, no punctuation)
                    words = re.findall(r"\b\w+\b", text)

                    # Only keep words that are long enough
                    filtered_words = [word for word in words if len(word) >= min_length]
                    all_words.extend(filtered_words)

    # Count how many times each word appears
    word_counts = Counter(all_words)

    # Display the results
    print("\nπŸ“ˆ Most common words across all transcriptions:")
    print("-" * 45)
    for word, count in word_counts.most_common(10):  # Show top 10
        print(f"  {word}: {count} occurrences")

    return word_counts

# ============================================================================
# MAIN EXECUTION
# ============================================================================

if __name__ == "__main__":
    """
    Main execution section - this runs when you execute the script directly.

    Modify the run_ids list below with your actual run IDs.
    """

    print("πŸš€ Starting Bulk Transcription Analysis Tool")
    print("=" * 50)

    # ⚠️ MODIFY THIS LIST WITH YOUR ACTUAL RUN IDs
    run_ids = [12345, 12346, 12347, 12348]

    print(f"πŸ“‹ Processing run IDs: {run_ids}")
    print()

    # Step 1: Fetch all transcription results from the API
    print("πŸ”„ Step 1: Fetching transcription results...")
    bulk_results = get_bulk_transcriptions(run_ids)

    if bulk_results:
        print("βœ… Successfully fetched results. Starting analysis...")

        # Step 2: Analyze all transcription results
        print("\nπŸ”„ Step 2: Analyzing transcriptions...")
        analyze_bulk_transcriptions(bulk_results)

        # Step 3: Export each transcription to a separate CSV file
        print("\nπŸ”„ Step 3: Exporting transcriptions to CSV files...")
        for run_id in bulk_results:
            transcript = safe_get_transcript(bulk_results.get(run_id, {}))
            if transcript:
                filename = f"transcription_run_{run_id}.csv"
                export_transcription_to_csv(transcript, filename)

        # Step 4: Find all speakers across all transcriptions
        print("\nπŸ”„ Step 4: Finding all speakers...")
        all_speakers = extract_speakers_from_bulk(bulk_results)
        print(f"πŸ‘₯ All speakers across {len(bulk_results)} runs: {all_speakers}")

        # Step 5: Analyze common vocabulary
        print("\nπŸ”„ Step 5: Analyzing common vocabulary...")
        find_common_words(bulk_results)

        print("\nπŸŽ‰ Analysis complete! Check the CSV files for detailed transcription data.")

    else:
        print("❌ Failed to fetch transcription results. Please check your API key and run IDs.")

Bulk Processing Benefits

The bulk transcription endpoint provides several advantages for managing multiple transcription projects:

Workflow Efficiency

  • Single Request: Retrieve multiple transcription results without making separate API calls
  • Reduced Latency: Minimize network overhead by consolidating requests
  • Batch Processing: Perfect for processing entire content libraries or project collections

Content Management

  • Comparative Analysis: Easily compare transcriptions across multiple runs (see the summary sketch after this list)
  • Consolidated Reporting: Generate unified reports from multiple transcription sources
  • Quality Assurance: Streamline review processes for large-scale transcription projects
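
As one approach to comparative analysis and consolidated reporting, the sketch below prints a summary row per run from a bulk response. It assumes a bulk_results dictionary as returned by get_bulk_transcriptions in the script above:

def summarize_runs(transcription_results):
    """Print one comparable summary row per run in the bulk response."""
    print(f"{'run_id':>10} {'duration_s':>11} {'segments':>9} {'speakers':>9}")
    for run_id, data in transcription_results.items():
        transcript = (data or {}).get("transcript", [])
        duration = max((seg.get("end", 0) for seg in transcript), default=0)
        speakers = {seg["speaker"] for seg in transcript if "speaker" in seg}
        print(f"{run_id:>10} {duration:>11.1f} {len(transcript):>9} {len(speakers):>9}")

summarize_runs(bulk_results)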

Integration Capabilities

  • Database Population: Efficiently populate content databases with transcription data
  • Search Indexing: Build comprehensive search indexes across multiple content pieces (an inverted-index sketch follows this list)
  • Analytics Processing: Perform cross-content analysis and pattern recognition
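
The sketch below illustrates the search-indexing idea with a simple inverted index that maps each word to the (run_id, segment_index) pairs where it occurs. The index layout is one possible design, and bulk_results is again the dictionary returned by get_bulk_transcriptions:

import re
from collections import defaultdict

def build_search_index(transcription_results):
    """Map each lowercased word to the (run_id, segment_index) pairs containing it."""
    index = defaultdict(list)
    for run_id, data in transcription_results.items():
        for i, segment in enumerate((data or {}).get("transcript", [])):
            for word in re.findall(r"\b\w+\b", segment.get("text", "").lower()):
                index[word].append((run_id, i))
    return index

# Usage: locate every segment, across all runs, that mentions a given word
index = build_search_index(bulk_results)
print(index.get("transcription", []))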

Best Practices for Bulk Processing

To maximize the effectiveness of bulk transcription retrieval:

Request Optimization

  • Batch Size: Keep each request within the supported range of one to five run IDs
  • Run Grouping: Group related runs together for more efficient processing
  • Retry Logic: Implement retry mechanisms for handling temporary failures (a backoff sketch follows this list)
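
For retry logic, one common pattern is exponential backoff. The sketch below wraps the bulk request using the API_URL and headers values defined in the script above; the attempt count and delays are arbitrary starting points, not documented requirements:

import time
import requests

def get_bulk_transcriptions_with_retry(run_ids, max_attempts=3):
    """Retry the bulk request with exponential backoff on transient failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.post(
                API_URL, headers=headers, json={"run_ids": run_ids}, timeout=30
            )
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as exc:
            if attempt == max_attempts:
                raise  # out of attempts; surface the error to the caller
            wait = 2 ** attempt  # arbitrary backoff schedule: 2s, 4s, 8s, ...
            print(f"Attempt {attempt} failed ({exc}); retrying in {wait}s...")
            time.sleep(wait)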

Data Management

  • Result Caching: Cache frequently accessed transcription results (see the caching sketch after this list)
  • Incremental Processing: Process new runs while maintaining existing data
  • Storage Strategy: Implement efficient storage solutions for large transcription datasets
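
A minimal caching sketch, assuming completed transcription results are immutable and reusing get_bulk_transcriptions from the script above. The on-disk layout (one JSON file per run in a hypothetical cache directory) is just one possible storage strategy:

import json
import os

CACHE_DIR = "transcription_cache"  # hypothetical local cache directory

def get_transcriptions_cached(run_ids):
    """Serve previously fetched runs from disk; request only the missing ones."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    results, missing = {}, []
    for run_id in run_ids:
        path = os.path.join(CACHE_DIR, f"{run_id}.json")
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                results[str(run_id)] = json.load(f)
        else:
            missing.append(run_id)
    if missing:
        for run_id, data in (get_bulk_transcriptions(missing) or {}).items():
            with open(os.path.join(CACHE_DIR, f"{run_id}.json"), "w", encoding="utf-8") as f:
                json.dump(data, f)
            results[run_id] = data
    return results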

Performance Considerations

  • Parallel Processing: Use the bulk endpoint instead of multiple individual requests
  • Memory Management: Process large result sets in chunks to manage memory usage (as in the batching sketch below)
  • Rate Limiting: Respect API rate limits when making frequent bulk requests
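
Because the endpoint returns results for one to five runs per request, larger collections need to be chunked. The sketch below batches run IDs accordingly and pauses between requests; the pause length is an assumption to respect rate limits, not a documented value:

import time

BATCH_SIZE = 5       # the endpoint covers one to five runs per request
PAUSE_SECONDS = 1.0  # illustrative pause; tune to your actual rate limits

def fetch_in_batches(all_run_ids):
    """Fetch a large collection of run IDs in endpoint-sized chunks."""
    combined = {}
    for i in range(0, len(all_run_ids), BATCH_SIZE):
        batch = all_run_ids[i:i + BATCH_SIZE]
        results = get_bulk_transcriptions(batch)
        if results:
            combined.update(results)
        time.sleep(PAUSE_SECONDS)  # keep a polite gap between bulk requests
    return combined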

Authorizations

x-api-key
string (header, required)

The x-api-key is a custom header required for authenticating requests to our API. Include this header in your request with the appropriate API key value to securely access our endpoints. You can find your API key(s) in the 'API' section of our studio website.

Body

application/json

run_ids
integer[] (required)

The IDs of the transcription runs whose results you want to retrieve, one to five per request.

Response

200 application/json

Successful Response

An object containing the results of one to five transcription runs. Each key in the object is a unique identifier for a run, and the corresponding value is the transcription output.