
Documentation Index

Fetch the complete documentation index at: https://docs.camb.ai/llms.txt

Use this file to discover all available pages before exploring further.

The official Rust SDK for Camb.ai provides access to text-to-speech, dubbing, translation, transcription, audio separation, voice cloning, audio generation, stories, translated TTS, translated stories, dictionaries, folders, project setup, and streaming pipelines. The client is async-only and built on Tokio.

Installation

Add the crate and Tokio to your Cargo.toml:
[dependencies]
camb_api = { git = "https://github.com/Camb-ai/cambai-rust-sdk" }
tokio = { version = "1.0", features = ["full"] }
futures = "0.3"
The camb_api package name matches the crate defined in the SDK repository.
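
A git dependency with no revision tracks the repository's default branch, so builds can change underneath you. Pinning to a specific commit keeps them reproducible; a sketch using Cargo's standard rev key, where the hash is a placeholder for the commit you have tested against:

```toml
[dependencies]
# Pin the SDK to a known-good commit (replace the placeholder hash).
camb_api = { git = "https://github.com/Camb-ai/cambai-rust-sdk", rev = "<commit-sha>" }
tokio = { version = "1.0", features = ["full"] }
futures = "0.3"
```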

Authentication

Get your API key from Camb.ai Studio and read it from the environment so it never appears in source control. Build the client with APIClient::new and ClientConfig.
use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;

    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    Ok(())
}
For local development, load CAMB_API_KEY from a .env file with a crate such as dotenvy at process startup.
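
However the variable reaches the environment (a shell export or a .env loader), failing fast with an actionable message beats surfacing a bare VarError. A std-only sketch, where load_api_key is a helper name invented here:

```rust
use std::env;

/// Read CAMB_API_KEY, mapping the error to a message that says what to do.
fn load_api_key() -> Result<String, String> {
    env::var("CAMB_API_KEY")
        .map_err(|_| "CAMB_API_KEY is not set; export it or add it to your .env file".to_string())
}

fn main() {
    match load_api_key() {
        Ok(key) => println!("key loaded ({} chars)", key.len()),
        Err(msg) => eprintln!("{msg}"),
    }
}
```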

Quick Start

Streaming TTS returns a ByteStream. The SDK does not include a file-download helper, so drain the stream yourself: call try_next until it yields None and write each chunk with std::io::Write.
use camb_api::prelude::*;
use futures::TryStreamExt;
use std::io::Write;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let mut stream = client
        .text_to_speech
        .tts(
            &CreateStreamTtsRequestPayload {
                text: "Hello from Camb.ai.".to_string(),
                voice_id: 147320,
                language: CreateStreamTtsRequestPayloadLanguage::EnUs,
                speech_model: Some(CreateStreamTtsRequestPayloadSpeechModel::Mars8Flash),
                user_instructions: None,
                enhance_named_entities_pronunciation: None,
                output_configuration: Some(StreamTtsOutputConfiguration {
                    format: Some(OutputFormat::Wav),
                    duration: None,
                    apply_enhancement: None,
                }),
                voice_settings: None,
                inference_options: None,
            },
            None,
        )
        .await?;

    let mut file = std::fs::File::create("output.wav")?;
    while let Some(chunk) = stream.try_next().await? {
        file.write_all(&chunk)?;
    }

    Ok(())
}
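
Each chunk triggers its own write syscall, and a streaming endpoint tends to emit small chunks. Wrapping the file in std::io::BufWriter batches those writes; a std-only sketch of the write loop, with an in-memory iterator standing in for the network stream and write_chunks a name invented here:

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

/// Write a sequence of byte chunks to a file through a buffer.
fn write_chunks<I>(path: &str, chunks: I) -> std::io::Result<()>
where
    I: IntoIterator<Item = Vec<u8>>,
{
    // BufWriter coalesces many small writes into fewer syscalls.
    let mut out = BufWriter::new(File::create(path)?);
    for chunk in chunks {
        out.write_all(&chunk)?;
    }
    // flush() surfaces any error still sitting in the buffer before drop.
    out.flush()
}

fn main() -> std::io::Result<()> {
    // Stand-in for chunks pulled off the stream with try_next.
    let fake_chunks = vec![b"RIFF".to_vec(), b"....".to_vec()];
    write_chunks("output.wav", fake_chunks)
}
```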

Models

Streaming TTS accepts speech_model as CreateStreamTtsRequestPayloadSpeechModel. The Rust enum serializes to the wire values in the table. If you omit speech_model, the API applies its default.
Variant       | Wire value      | Notes
Auto          | auto            | Server-selected model
Mars6         | mars-6          | Earlier MARS generation
Mars7         | mars-7          | MARS 7 generation
Mars8         | mars-8          | MARS 8 generation
Mars8Flash    | mars-8-flash    | Lower-latency MARS 8
Mars8Instruct | mars-8-instruct | Instruction-style control via user_instructions and bracketed text
Per-model language coverage is listed at MARS Models.
speech_model: Some(CreateStreamTtsRequestPayloadSpeechModel::Mars8Flash)

Text-to-Speech

Streaming

text_to_speech.tts posts to tts-stream and returns audio chunks as ByteStream, as shown in Quick Start.

Submit, poll, and fetch

Non-streaming TTS uses create_tts to obtain a task_id, polls get_tts_result until TaskStatus::Success, then reads the run with get_tts_run_info. In production, also break out of the loop on terminal failure statuses and cap the number of attempts.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let submitted = client
        .text_to_speech
        .create_tts(
            &CreateTtsRequestPayload {
                project_name: None,
                project_description: None,
                folder_id: None,
                text: "Hello from batch TTS.".to_string(),
                voice_id: 147320,
                language: Languages::EN_US,
                gender: None,
                age: None,
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id;
    let mut run_id: Option<i64> = None;
    loop {
        let status = client
            .text_to_speech
            .get_tts_result(
                &task_id,
                &GetTtsResultQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            run_id = status.run_id.flatten();
            break;
        }
        tokio::time::sleep(Duration::from_secs(2)).await;
    }

    if let Some(rid) = run_id {
        let info = client
            .text_to_speech
            .get_tts_run_info(
                &Some(rid),
                &GetTtsRunInfoQueryRequest::default(),
                None,
            )
            .await?;
        println!("{:?}", info);
    }

    Ok(())
}
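
Every non-streaming workflow in this document repeats the same submit-poll-fetch loop. Factoring the loop into a helper with an attempt cap avoids spinning forever on a task that never reaches success; a std-only sketch of the pattern (the SDK itself provides no such helper, and poll_until is a name invented here):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Call `check` up to `max_attempts` times, sleeping `delay` between tries,
/// and return the first Some(value) it produces.
fn poll_until<T>(
    mut check: impl FnMut() -> Option<T>,
    max_attempts: u32,
    delay: Duration,
) -> Option<T> {
    for attempt in 0..max_attempts {
        if let Some(value) = check() {
            return Some(value);
        }
        if attempt + 1 < max_attempts {
            sleep(delay);
        }
    }
    None
}

fn main() {
    // Simulate a task whose status check reports success on the third call.
    let mut calls = 0;
    let run_id = poll_until(
        || {
            calls += 1;
            if calls >= 3 { Some(42_i64) } else { None }
        },
        10,
        Duration::from_millis(1),
    );
    println!("{run_id:?}");
}
```

In the real loops above, the closure would call the relevant get_*_status method and return the run_id once the status is TaskStatus::Success.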

Voices

List voices

voice_cloning.list_voices returns a vector of ListVoicesListVoicesGetResponseItem values. Iterate the entries to read fields such as the voice identifier to pass into TTS.
use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let voices = client
        .voice_cloning
        .list_voices(&ListVoicesQueryRequest::default(), None)
        .await?;

    for voice in voices {
        println!("{:?}", voice);
    }

    Ok(())
}

Create a custom voice

create_custom_voice sends multipart form data. Load the reference audio into a Vec<u8>. Gender is a numeric newtype: pass the code listed in the endpoint documentation (for example Gender(1)).
use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let file_bytes = std::fs::read("reference.wav")?;

    let result = client
        .voice_cloning
        .create_custom_voice(
            &CreateCustomVoiceRequest {
                voice_name: "My Custom Voice".to_string(),
                gender: Gender(1),
                description: Some(Some("Warm and conversational.".to_string())),
                publish_voice_to_market_place: None,
                age: None,
                language: None,
                enhance_audio: Some(Some(true)),
                file: file_bytes,
                run_id: None,
            },
            None,
        )
        .await?;

    println!("{:?}", result);

    Ok(())
}

Language support

Dubbing, translation, transcription, and related jobs use Languages, a newtype around i64 with constants such as Languages::EN_US and Languages::HI_IN. Streaming TTS uses the separate enum CreateStreamTtsRequestPayloadLanguage (for example CreateStreamTtsRequestPayloadLanguage::EnUs), which serializes to locale strings.
use camb_api::prelude::*;

fn main() {
    println!("{:?}", Languages::EN_US);
    println!("{:?}", Languages::HI_IN);
}
To fetch supported languages at runtime, call languages.get_source_languages and languages.get_target_languages.
use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let source = client
        .languages
        .get_source_languages(&GetSourceLanguagesQueryRequest::default(), None)
        .await?;
    let target = client
        .languages
        .get_target_languages(&GetTargetLanguagesQueryRequest::default(), None)
        .await?;

    println!("{:?}", source);
    println!("{:?}", target);

    Ok(())
}

Dubbing

dub.end_to_end_dubbing starts a job. Poll dub.get_end_to_end_dubbing_status until status is TaskStatus::Success, then call get_dubbed_run_info with the run_id from the status payload. Optional get_dubbed_run_transcript returns transcript data for a target language.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let submitted = client
        .dub
        .end_to_end_dubbing(
            &EndToEndDubbingRequestPayload {
                video_url: "https://example.com/video.mp4".to_string(),
                source_language: Languages::EN_US,
                target_language: Some(Some(Languages::HI_IN)),
                target_languages: None,
                project_name: None,
                project_description: None,
                folder_id: None,
                selected_audio_tracks: None,
                add_output_as_an_audio_track: None,
                chosen_dictionaries: None,
                ai_optimization: None,
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id.ok_or("missing task_id")?;
    let mut run_id: Option<i64> = None;
    loop {
        let status = client
            .dub
            .get_end_to_end_dubbing_status(
                &task_id,
                &GetEndToEndDubbingStatusQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            run_id = status.run_id.flatten();
            break;
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }

    if let Some(rid) = run_id {
        let info = client.dub.get_dubbed_run_info(&Some(rid), None).await?;
        println!("{:?}", info);

        let transcript = client
            .dub
            .get_dubbed_run_transcript(
                &Some(rid),
                &Languages::HI_IN,
                &GetDubbedRunTranscriptQueryRequest::default(),
                None,
            )
            .await?;
        println!("{:?}", transcript);
    }

    Ok(())
}
For multiple targets in one job, set target_languages to Some(Some(vec![Languages::FR_FR, Languages::ES_ES])) and set target_language to None.
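
The Some(Some(...)) shape appears throughout these payloads. In generated clients, a doubly-optional field commonly separates three wire states: omitted from the request, sent as an explicit null, and sent with a value; whether this SDK distinguishes null from omitted for a given field is worth confirming against the API reference. A plain-Rust sketch of the distinction (describe is a name invented here, not an SDK function):

```rust
/// How a generated client typically interprets a doubly-optional field.
fn describe(field: &Option<Option<i64>>) -> &'static str {
    match field {
        None => "omitted from the request body",
        Some(None) => "sent as an explicit null",
        Some(Some(_)) => "sent with a value",
    }
}

fn main() {
    println!("{}", describe(&None));
    println!("{}", describe(&Some(None)));
    println!("{}", describe(&Some(Some(147320))));
}
```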

Translation

translation.create_translation returns serde_json::Value. Read task_id from the JSON, poll get_translation_task_status, then call get_translation_result with the resolved run_id.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let submitted = client
        .translation
        .create_translation(
            &CreateTranslationRequestPayload {
                project_name: None,
                project_description: None,
                folder_id: None,
                texts: vec![
                    "Hello, how are you?".to_string(),
                    "Welcome to Camb.ai.".to_string(),
                ],
                age: None,
                formality: None,
                gender: None,
                source_language: Languages::EN_US,
                target_language: Languages::FR_FR,
                chosen_dictionaries: None,
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted
        .get("task_id")
        .and_then(|v| v.as_str())
        .ok_or("missing task_id")?
        .to_string();

    let mut run_id: Option<i64> = None;
    loop {
        let status = client
            .translation
            .get_translation_task_status(
                &task_id,
                &GetTranslationTaskStatusQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            run_id = status.run_id.flatten();
            break;
        }
        tokio::time::sleep(Duration::from_secs(2)).await;
    }

    if let Some(rid) = run_id {
        let result = client
            .translation
            .get_translation_result(&Some(rid), None)
            .await?;
        for text in result.texts {
            println!("{}", text);
        }
    }

    Ok(())
}
For streaming translation over a single string, use translation.translation_stream with CreateTranslationStreamRequestPayload and inspect the returned serde_json::Value shape for your integration.

Transcription

transcription.create_transcription accepts multipart fields including media_url or media_file. Poll get_transcription_task_status, then get_transcription_result with optional word-level timestamps.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let submitted = client
        .transcription
        .create_transcription(
            &CreateTranscriptionRequest {
                language: Languages::EN_US,
                media_file: None,
                media_url: Some(Some("https://example.com/audio.mp3".to_string())),
                file: None,
                audio_url: None,
                project_name: None,
                project_description: None,
                folder_id: None,
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id.ok_or("missing task_id")?;
    let mut run_id: Option<i64> = None;
    loop {
        let status = client
            .transcription
            .get_transcription_task_status(
                &task_id,
                &GetTranscriptionTaskStatusQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            run_id = status.run_id.flatten();
            break;
        }
        tokio::time::sleep(Duration::from_secs(3)).await;
    }

    if let Some(rid) = run_id {
        let result = client
            .transcription
            .get_transcription_result(
                &Some(rid),
                &GetTranscriptionResultQueryRequest {
                    word_level_timestamps: Some(Some(true)),
                },
                None,
            )
            .await?;
        println!("{:?}", result);
    }

    Ok(())
}
To transcribe a local file, set media_file to Some(bytes) and clear media_url.

Audio separation

Upload media_file via multipart, poll get_audio_separation_status, then read get_audio_separation_run_info.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let bytes = std::fs::read("track.mp3")?;

    let submitted = client
        .audio_separation
        .create_audio_separation(
            &CreateAudioSeparationRequest {
                media_file: Some(bytes),
                project_name: None,
                project_description: None,
                folder_id: None,
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id.ok_or("missing task_id")?;
    let mut run_id: Option<i64> = None;
    loop {
        let status = client
            .audio_separation
            .get_audio_separation_status(
                &task_id,
                &GetAudioSeparationStatusQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            run_id = status.run_id.flatten();
            break;
        }
        tokio::time::sleep(Duration::from_secs(3)).await;
    }

    if let Some(rid) = run_id {
        let info = client
            .audio_separation
            .get_audio_separation_run_info(&Some(rid), None)
            .await?;
        println!("{:?}", info);
    }

    Ok(())
}

Text-to-voice

Create a job with create_text_to_voice, poll get_text_to_voice_status, then load previews and identifiers from get_text_to_voice_result.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let submitted = client
        .text_to_voice
        .create_text_to_voice(
            &CreateTextToVoiceRequestPayload {
                text: "A confident narrator introducing a documentary.".to_string(),
                voice_description: "Deep, measured baritone. Calm and authoritative.".to_string(),
                project_name: None,
                project_description: None,
                folder_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id.ok_or("missing task_id")?;
    let mut run_id: Option<i64> = None;
    loop {
        let status = client
            .text_to_voice
            .get_text_to_voice_status(
                &task_id,
                &GetTextToVoiceStatusQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            run_id = status.run_id.flatten();
            break;
        }
        tokio::time::sleep(Duration::from_secs(3)).await;
    }

    if let Some(rid) = run_id {
        let result = client
            .text_to_voice
            .get_text_to_voice_result(&Some(rid), None)
            .await?;
        println!("{:?}", result);
    }

    Ok(())
}
Inspect get_text_to_voice_result for preview URLs and the voice_id to pass into text_to_speech.tts for production speech.

Text-to-audio

create_text_to_audio starts an asynchronous sound or music generation job. After success, get_text_to_audio_result returns a ByteStream for the audio bytes.
use camb_api::prelude::*;
use futures::TryStreamExt;
use std::io::Write;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let submitted = client
        .text_to_audio
        .create_text_to_audio(
            &CreateTextToAudioRequestPayload {
                project_name: None,
                project_description: None,
                folder_id: None,
                prompt: "Heavy rain on a tin roof at night with distant thunder.".to_string(),
                duration: Some(15.0),
                audio_type: Some(TextToAudioType::Sound),
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id.ok_or("missing task_id")?;
    let mut run_id: Option<i64> = None;
    loop {
        let status = client
            .text_to_audio
            .get_text_to_audio_status(
                &task_id,
                &GetTextToAudioStatusQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            run_id = status.run_id.flatten();
            break;
        }
        tokio::time::sleep(Duration::from_secs(3)).await;
    }

    if let Some(rid) = run_id {
        let mut stream = client
            .text_to_audio
            .get_text_to_audio_result(
                &Some(rid),
                &GetTextToAudioResultQueryRequest::default(),
                None,
            )
            .await?;

        let mut file = std::fs::File::create("soundscape.wav")?;
        while let Some(chunk) = stream.try_next().await? {
            file.write_all(&chunk)?;
        }
    }

    Ok(())
}

Stories

story.create_story uploads a document as multipart data. The response is CreateStoryStoryPostResponse; the common async path exposes OrchestratorPipelineCallResult with a task_id. Poll get_story_status, then call get_story_run_info.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let file_bytes = std::fs::read("story.pdf")?;

    let submitted = client
        .story
        .create_story(
            &CreateStoryRequest {
                file: file_bytes,
                source_language: Languages::EN_US,
                title: Some(Some("My Story".to_string())),
                description: None,
                narrator_voice_id: None,
                folder_id: None,
                chosen_dictionaries: None,
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted
        .into_orchestratorpipelinecallresult()
        .and_then(|r| r.task_id)
        .ok_or("missing task_id")?;

    let mut run_id: Option<i64> = None;
    loop {
        let status = client
            .story
            .get_story_status(&task_id, &GetStoryStatusQueryRequest::default(), None)
            .await?;
        if status.status == TaskStatus::Success {
            run_id = status.run_id.flatten();
            break;
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }

    if let Some(rid) = run_id {
        let info = client
            .story
            .get_story_run_info(
                &Some(rid),
                &GetStoryRunInfoQueryRequest::default(),
                None,
            )
            .await?;
        println!("{:?}", info);
    }

    Ok(())
}

Translated TTS

translated_tts.create_translated_tts returns CreateTranslatedTtsOut with a task_id. Poll get_translated_tts_task_status until the status is TaskStatus::Success.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let submitted = client
        .translated_tts
        .create_translated_tts(
            &CreateTranslatedTtsRequestPayload {
                project_name: None,
                project_description: None,
                folder_id: None,
                text: "Good morning, welcome to our service.".to_string(),
                voice_id: 147320,
                age: None,
                formality: None,
                gender: None,
                source_language: Languages::EN_US,
                target_language: Languages::HI_IN,
                chosen_dictionaries: None,
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id;
    loop {
        let status = client
            .translated_tts
            .get_translated_tts_task_status(
                &task_id,
                &GetTranslatedTtsTaskStatusQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            println!("{:?}", status);
            break;
        }
        tokio::time::sleep(Duration::from_secs(3)).await;
    }

    Ok(())
}

Translated story

Add a target language to an existing story run with create_translation_for_existing_story. Poll get_translated_story_status, then read get_translated_story_run_info using the original story run_id and the target Languages value.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let story_run_id: i64 = 12345;

    let submitted = client
        .translated_story
        .create_translation_for_existing_story(
            &Some(story_run_id),
            &CreateTranslationForExistingStoryRequestPayload {
                target_language: Languages::FR_FR,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id.flatten().ok_or("missing task_id")?;

    loop {
        let status = client
            .translated_story
            .get_translated_story_status(
                &task_id,
                &GetTranslatedStoryStatusQueryRequest::default(),
                None,
            )
            .await?;
        if status.status == TaskStatus::Success {
            break;
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }

    let info = client
        .translated_story
        .get_translated_story_run_info(
            &Some(story_run_id),
            &Languages::FR_FR,
            &GetTranslatedStoryRunInfoQueryRequest::default(),
            None,
        )
        .await?;

    println!("{:?}", info);

    Ok(())
}

Dictionaries

List dictionaries

use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let dictionaries = client
        .dictionaries
        .get_dictionaries(&GetDictionariesQueryRequest::default(), None)
        .await?;

    for d in dictionaries {
        println!("{:?}", d);
    }

    Ok(())
}

Create from file

use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let csv = std::fs::read("terms.csv")?;

    client
        .dictionaries
        .create_dictionary_from_file(
            &CreateDictionaryFromFileRequest {
                dictionary_file: csv,
                dictionary_name: "Product Terms".to_string(),
                dictionary_description: Some(Some(
                    "Brand-specific terminology for our product line.".to_string(),
                )),
                run_id: None,
            },
            None,
        )
        .await?;

    Ok(())
}

Manage terms

TermTranslationInput pairs a translation string with a Languages value. Dictionary and term identifiers are numeric.
use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let dictionary_id: i64 = 123;
    let term_id: i64 = 456;

    client
        .dictionaries
        .add_term_to_dictionary(
            dictionary_id,
            &AddDictionaryTermPayload {
                translations: vec![TermTranslationInput {
                    translation: "कैम्ब.एआई".to_string(),
                    language: Languages::HI_IN,
                }],
                run_id: None,
            },
            None,
        )
        .await?;

    client
        .dictionaries
        .update_term_translation_in_dictionary_using_term_id(
            dictionary_id,
            term_id,
            &UpdateTermTranslationsPayload {
                translations: vec![TermTranslationInput {
                    translation: "कैम्ब.एआई".to_string(),
                    language: Languages::HI_IN,
                }],
                run_id: None,
            },
            None,
        )
        .await?;

    client
        .dictionaries
        .delete_dictionary_term(
            dictionary_id,
            term_id,
            &DeleteDictionaryTermQueryRequest::default(),
            None,
        )
        .await?;

    client
        .dictionaries
        .delete_dictionary(
            dictionary_id,
            &DeleteDictionaryQueryRequest::default(),
            None,
        )
        .await?;

    Ok(())
}

Folders

use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let folders = client
        .folders
        .list_folders(&ListFoldersQueryRequest::default(), None)
        .await?;
    println!("{:?}", folders);

    client
        .folders
        .create_folder(
            &CreateFolderPayload {
                folder_name: "My runs".to_string(),
                run_id: None,
            },
            None,
        )
        .await?;

    Ok(())
}

Project setup

project_setup.create_project accepts a public media_url, a source_language, and a target_languages list. The call returns CreateProjectSetupOut with a task_id. Poll create_project_setup_task_status until the returned vector includes a row, read run_id from that row, then call get_project_setup_result.
use camb_api::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let submitted = client
        .project_setup
        .create_project(
            &CreateProjectSetupRequestPayload {
                project_name: None,
                project_description: None,
                folder_id: None,
                media_url: "https://example.com/video.mp4".to_string(),
                source_language: Languages::EN_US,
                target_languages: vec![Languages::HI_IN],
                selected_audio_tracks: None,
                add_output_as_an_audio_track: None,
                chosen_dictionaries: None,
                run_id: None,
            },
            None,
        )
        .await?;

    let task_id = submitted.task_id;
    let mut run_id: Option<i64> = None;
    loop {
        let rows = client
            .project_setup
            .create_project_setup_task_status(
                &task_id,
                &CreateProjectSetupTaskStatusQueryRequest::default(),
                None,
            )
            .await?;
        if let Some(first) = rows.first() {
            run_id = Some(first.run_id);
            break;
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }

    if let Some(rid) = run_id {
        if let Some(result) = client
            .project_setup
            .get_project_setup_result(&Some(rid), None)
            .await?
        {
            println!("{:?}", result);
        }
    }

    Ok(())
}

Streaming pipelines

The streaming client manages long-running stream resources: create_stream, get_stream_result, patch_stream_data, destroy_stream, and get_probe_stream. CreateStreamRequestPayload ties together ConfigStream, SourceStream, and one or more TargetStream values. StreamType is a numeric newtype (StreamType(i64)); use the stream type codes from your deployment or API reference.
use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let created = client
        .streaming
        .create_stream(
            &CreateStreamRequestPayload {
                name: None,
                description: None,
                initial_delay: None,
                timeout_in_mins: None,
                voices: vec![147320],
                dictionaries: vec![],
                config: ConfigStream {
                    pipeline: None,
                    mixing: None,
                },
                source_stream: SourceStream {
                    language: Languages::EN_US,
                    url: "rtmp://example.com/in".to_string(),
                    category: None,
                    passphrase: None,
                    streamid: None,
                    number_of_streams: None,
                    audio_stream: None,
                    background_audio_stream: None,
                    latency: None,
                    relay_input: None,
                },
                target_streams: vec![TargetStream {
                    languages: vec![Languages::HI_IN],
                    url: "rtmp://example.com/out".to_string(),
                    r#type: StreamType(1),
                    passphrase: None,
                    streamid: None,
                    pids: None,
                    transcode_video: None,
                    embed_subtitles: None,
                    audio_codec: None,
                    audio_bitrate: None,
                    audio_channel_layout: None,
                    latency: None,
                    constant_bitrate: None,
                    relay_output: None,
                }],
                start_time: None,
                end_time: None,
                timezone: None,
            },
            None,
        )
        .await?;

    let stream_id = created.stream_id;

    let snapshot = client
        .streaming
        .get_stream_result(
            stream_id,
            &StreamingGetStreamResultQueryRequest::default(),
            None,
        )
        .await?;
    println!("{:?}", snapshot);

    client
        .streaming
        .patch_stream_data(
            stream_id,
            &UpdateStreamDataRequestPayload::default(),
            None,
        )
        .await?;

    client
        .streaming
        .destroy_stream(stream_id, &DestroyStreamQueryRequest::default(), None)
        .await?;

    let probe = client
        .streaming
        .get_probe_stream(
            &GetProbeStreamRequest {
                run_id: None,
                body: GetProbeStreamIn {
                    url: "rtmp://example.com/in".to_string(),
                    passphrase: None,
                    stream_id: None,
                },
            },
            None,
        )
        .await?;
    println!("{:?}", probe);

    Ok(())
}

Error Handling

Failures surface as ApiError. Match on ApiError::UnprocessableEntityError for validation-style 422 responses, ApiError::Http for other status codes, and ApiError::Network for transport errors.
use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let payload = CreateStreamTtsRequestPayload {
        text: "Hello world.".to_string(),
        voice_id: 147320,
        language: CreateStreamTtsRequestPayloadLanguage::EnUs,
        speech_model: None,
        user_instructions: None,
        enhance_named_entities_pronunciation: None,
        output_configuration: Some(StreamTtsOutputConfiguration {
            format: Some(OutputFormat::Wav),
            duration: None,
            apply_enhancement: None,
        }),
        voice_settings: None,
        inference_options: None,
    };

    match client.text_to_speech.tts(&payload, None).await {
        Ok(mut stream) => {
            while let Some(chunk) = stream.try_next().await? {
                let _ = chunk; // consume audio bytes here (e.g. write to a file)
            }
        }
        Err(ApiError::UnprocessableEntityError { message, .. }) => {
            eprintln!("validation error: {}", message);
        }
        Err(e) => eprintln!("error: {}", e),
    }

    Ok(())
}
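When you match `ApiError::Http` yourself, a common follow-up is deciding which status codes merit a retry: 429 and 5xx responses are typically transient, while other 4xx responses (including 422 validation failures) will not succeed on resend. A minimal classifier sketch (a hypothetical helper, not part of the SDK):

```rust
/// Hypothetical helper: classify an HTTP status as retryable.
/// 429 (rate limit) and 5xx (server errors) are transient;
/// other 4xx codes, such as 422, indicate a request that must change.
fn is_retryable(status: u16) -> bool {
    status == 429 || (500..=599).contains(&status)
}

fn main() {
    for status in [404u16, 422, 429, 503] {
        println!("{status}: retryable = {}", is_retryable(status));
    }
}
```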
Pass RequestOptions as the last argument to tune timeouts and retries for a single call.
use camb_api::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("CAMB_API_KEY")?;
    let client = APIClient::new(ClientConfig {
        api_key: Some(api_key),
        ..ClientConfig::default()
    })?;

    let opts = RequestOptions::new()
        .timeout_seconds(300)
        .max_retries(3);

    let _stream = client
        .text_to_speech
        .tts(
            &CreateStreamTtsRequestPayload {
                text: "Hello world.".to_string(),
                voice_id: 147320,
                language: CreateStreamTtsRequestPayloadLanguage::EnUs,
                speech_model: None,
                user_instructions: None,
                enhance_named_entities_pronunciation: None,
                output_configuration: Some(StreamTtsOutputConfiguration {
                    format: Some(OutputFormat::Wav),
                    duration: None,
                    apply_enhancement: None,
                }),
                voice_settings: None,
                inference_options: None,
            },
            Some(opts),
        )
        .await?;

    Ok(())
}

Custom Provider

The TtsProvider trait lets you route TTS calls to a self-hosted MARS deployment on Baseten. BasetenProvider::new takes the Baseten API key and an optional deployment URL. See Custom Cloud Providers for deployment details.
use camb_api::prelude::*;
use camb_api::provider::{BasetenProvider, TtsProvider};
use std::io::Write;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("BASETEN_API_KEY")?;
    let url = std::env::var("BASETEN_URL").ok();

    let provider = BasetenProvider::new(api_key, url);

    let mut stream = provider
        .tts(
            &CreateStreamTtsRequestPayload {
                text: "Hello from a custom provider.".to_string(),
                voice_id: 0,
                language: CreateStreamTtsRequestPayloadLanguage::EnUs,
                speech_model: None,
                user_instructions: None,
                enhance_named_entities_pronunciation: None,
                output_configuration: None,
                voice_settings: None,
                inference_options: None,
            },
            None,
        )
        .await?;

    let mut file = std::fs::File::create("baseten_output.mp3")?;
    while let Some(chunk) = stream.try_next().await? {
        file.write_all(&chunk)?;
    }

    Ok(())
}
The bundled BasetenProvider builds a minimal JSON body for Baseten. Production deployments require valid reference audio, correct language fields, and URLs that match your Baseten model card. Treat the provider as a pattern you extend rather than a drop-in for every deployment.
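To illustrate the "minimal JSON body" pattern a custom provider follows, here is a sketch that serializes only the fields a deployment might expect, using just the standard library. The field names are illustrative, not the actual Baseten contract; a real extension would match its model card and would normally use a crate like serde_json:

```rust
/// Hypothetical sketch: build a minimal JSON request body by hand.
/// Field names are illustrative, not the real Baseten contract.
fn build_request_body(text: &str, voice_id: i64, language: &str) -> String {
    // Escape quotes and backslashes so the body stays valid JSON.
    let escaped: String = text
        .chars()
        .flat_map(|c| match c {
            '"' => vec!['\\', '"'],
            '\\' => vec!['\\', '\\'],
            other => vec![other],
        })
        .collect();
    format!(
        "{{\"text\":\"{}\",\"voice_id\":{},\"language\":\"{}\"}}",
        escaped, voice_id, language
    )
}

fn main() {
    println!("{}", build_request_body("Hello from a custom provider.", 0, "en-us"));
}
```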

Next Steps


Voice Agents

Build real-time voice agents with Pipecat

LiveKit Integration

Create voice agents with LiveKit

API Reference

Explore the full TTS API

Voice Library

Browse available voices

Resources