

Overview

Separate a mixed recording into two stems: foreground (vocals, speech, or lead elements) and background (accompaniment, ambience, or supporting sounds). This is useful for remixing, isolation, accessibility, and cleaning up recordings. The pipeline is asynchronous: you upload an audio file, poll until processing finishes, then retrieve download URLs for both stems.

Prerequisites

1. Create an account

   Sign up at CAMB.AI Studio if you haven't already.

2. Get your API key

   Go to Settings → API Keys in Studio and copy your key. See Authentication for details.

3. Install the SDK

   pip install camb-sdk

   Skip this step if you're using the direct API.

4. Set your API key so your code can read it

   export CAMB_API_KEY="your_api_key_here"
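The code below reads the key from the environment, so it helps to fail fast when the key is missing. A minimal sketch (the helper name and error message are illustrative, not part of the SDK):

```python
import os

def require_api_key():
    """Return the CAMB API key from the environment, or fail with a clear message."""
    api_key = os.getenv("CAMB_API_KEY")
    if not api_key:
        raise RuntimeError("CAMB_API_KEY is not set; run `export CAMB_API_KEY=...` first.")
    return api_key
```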

Code

import os
import time
from camb.client import CambAI

client = CambAI(api_key=os.getenv("CAMB_API_KEY"))

def separate_audio():
    # Replace with your mixed audio file (WAV, MP3, FLAC, or AAC)
    audio_path = "mix.wav"

    # Step 1: Submit the separation job (multipart upload);
    # the context manager closes the file handle after upload
    with open(audio_path, "rb") as media_file:
        response = client.audio_separation.create_audio_separation(
            media_file=media_file,
            project_name="my-separation-job",
        )

    task_id = response.task_id
    print(f"Audio separation task created: {task_id}")

    # Step 2: Poll until the task reaches a terminal state
    max_attempts = 60
    for attempt in range(max_attempts):
        status = client.audio_separation.get_audio_separation_status(task_id=task_id)
        print(f"Status: {status.status}")

        if status.status == "SUCCESS":
            # Step 3: Retrieve download URLs for both stems
            result = client.audio_separation.get_audio_separation_run_info(status.run_id)
            print(f"Foreground URL: {result.foreground_audio_url}")
            print(f"Background URL: {result.background_audio_url}")
            break
        elif status.status == "ERROR":
            print(f"Separation failed: {status.exception_reason}")
            break

        time.sleep(5)
    else:
        print(f"Timed out after {max_attempts} attempts; the task may still be running.")

separate_audio()
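Once the run info is available, you may want to save both stems locally. A sketch using only the standard library (the output filenames are arbitrary placeholders):

```python
import shutil
import urllib.request

def download_stem(url, dest_path):
    """Stream a stem's download URL into a local file and return the path."""
    with urllib.request.urlopen(url) as resp, open(dest_path, "wb") as out:
        shutil.copyfileobj(resp, out)
    return dest_path

# Usage, after a SUCCESS status:
# download_stem(result.foreground_audio_url, "foreground.wav")
# download_stem(result.background_audio_url, "background.wav")
```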

How it works

Separation runs as an async pipeline:
create_audio_separation / createAudioSeparation  (multipart file)
        │
        └── task_id returned immediately
        │
get_audio_separation_status / getAudioSeparationStatus  ← poll every ~5s
        │
        ├── PENDING → keep polling
        ├── ERROR   → check exception_reason
        └── SUCCESS → run_id available
        │
get_audio_separation_run_info / getAudioSeparationRunInfo
        │
        └── foreground_audio_url, background_audio_url
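The branch points above can be captured in a small polling helper that treats exhausting the attempt budget as its own failure mode, separate from an API ERROR. A sketch (the helper and exception names are illustrative, not part of the SDK):

```python
import time

class SeparationTimeout(Exception):
    """Polling budget exhausted without a terminal status (distinct from ERROR)."""

def poll_until_done(get_status, max_attempts=60, interval=5.0, sleep=time.sleep):
    """Call get_status() until it returns 'SUCCESS' or 'ERROR'; raise on timeout.

    get_status is any zero-argument callable returning a status string, e.g.
    lambda: client.audio_separation.get_audio_separation_status(task_id=task_id).status
    """
    for _ in range(max_attempts):
        status = get_status()
        if status in ("SUCCESS", "ERROR"):
            return status
        sleep(interval)
    raise SeparationTimeout(f"no terminal status after {max_attempts} attempts")
```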

Parameters

Required

Parameter    Type         Description
media_file   file stream  Mixed audio to separate (uploaded as multipart form data)

Optional

Parameter            Type     Description
project_name         string   Label for the job in your dashboard
project_description  string   Notes for the job
folder_id            integer  Folder to organize the run under
run_id               integer  Optional run identifier for tracing
traceparent          string   Distributed tracing header value
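The optional fields can be passed as keyword arguments alongside media_file. A sketch with placeholder values (the folder id and description here are invented for illustration):

```python
# Placeholder values for the optional fields; only media_file is required.
optional_fields = {
    "project_name": "my-separation-job",
    "project_description": "Isolate vocals from a podcast mix",
    "folder_id": 42,  # assumed folder id from your dashboard
}

# Usage with the SDK call from the example above:
# with open("mix.wav", "rb") as f:
#     response = client.audio_separation.create_audio_separation(
#         media_file=f, **optional_fields
#     )
```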

Tips

  • Formats: FLAC, MP3, WAV, and AAC are supported; lossless or high-quality sources generally separate better than heavily compressed files.
  • Length: Roughly 10 seconds to 10 minutes often works well; very long files may take more processing time.
  • Polling: Cap your loop (for example 60 attempts × 5 seconds) and treat timeout as a separate failure path from ERROR.
  • Mix balance: Foreground and background should both be audible in the original mix for cleaner stems.
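For WAV sources, the length guideline above can be checked locally before uploading, using only the standard library. A sketch (the 10 s–10 min bounds mirror the tip, not an enforced API limit):

```python
import wave

MIN_SECONDS, MAX_SECONDS = 10, 600  # rough guideline, not an API limit

def wav_duration_seconds(path):
    """Duration of a WAV file in seconds."""
    with wave.open(path, "rb") as wf:
        return wf.getnframes() / wf.getframerate()

def within_recommended_length(path):
    return MIN_SECONDS <= wav_duration_seconds(path) <= MAX_SECONDS
```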

Next Steps

Create Audio Separation

Multipart upload and task creation API reference.

Get Separation Status

Poll task status with task_id.

Get Separation Result

Retrieve foreground_audio_url and background_audio_url.

Dubbing

Translate video audio with the SDK.