# Deepgram
Deepgram offers a range of English-speaking voices for its text-to-speech API, each designed to produce natural-sounding speech in a variety of accents and speaking styles.
Deepgram promises human-like tone, rhythm, and emotion with latency under 250 ms, and its voices are optimized for high-throughput applications.
Consult Deepgram's TTS models guide for more information and samples of supported voices.
## Voice IDs
Copy the voice ID from the Values column of Deepgram's Voice Selection reference, then prepend `deepgram.` to form the string used on the SignalWire platform.
For example: `deepgram.aura-athena-en`
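As a minimal sketch of this convention, the prefixing step can be expressed in JavaScript. The helper name `toSignalWireVoice` is illustrative only and is not part of any SignalWire or Deepgram SDK:

```javascript
// Illustrative helper (not an SDK function): turn a Deepgram voice ID
// from the Voice Selection reference into a SignalWire voice string.
function toSignalWireVoice(deepgramVoiceId) {
  return `deepgram.${deepgramVoiceId}`;
}

console.log(toSignalWireVoice("aura-athena-en")); // → deepgram.aura-athena-en
```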
## Examples
Learn how to use Deepgram voices on the SignalWire platform.
- SWML
- RELAY Realtime SDK
- Call Flow Builder
- cXML
### SWML

Use the `languages` SWML method to set one or more voices for an AI agent.
```yaml
version: 1.0.0
sections:
  main:
    - ai:
        prompt:
          text: Have an open-ended conversation about flowers.
        languages:
          - name: English
            code: en-US
            voice: deepgram.aura-asteria-en
```
Alternatively, use the `say_voice` parameter of the `play` SWML method to select a voice for basic TTS.
```yaml
version: 1.0.0
sections:
  main:
    - set:
        say_voice: "deepgram.aura-asteria-en"
    - play: "say:Greetings. This is the Asteria voice from Deepgram's Aura text-to-speech model."
```
### RELAY Realtime SDK

```javascript
// This example uses the Node.js SDK for SignalWire's RELAY Realtime API.
const playback = await call.playTTS({
  text: "Greetings. This is the Asteria voice from Deepgram's Aura text-to-speech model.",
  voice: "deepgram.aura-asteria-en",
});
await playback.ended();
```
### Call Flow Builder

Deepgram voices are not yet supported in Call Flow Builder.
### cXML

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Say voice="deepgram.aura-asteria-en">
    Greetings. This is the Asteria voice from Deepgram's Aura text-to-speech model.
  </Say>
</Response>
```
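In practice, SignalWire fetches cXML like this from your server when a call needs instructions. As a minimal sketch, the document above can be assembled as a string in Node.js; `buildSayResponse` is a hypothetical helper for illustration, not a SignalWire SDK function:

```javascript
// Illustrative helper (not an SDK function): build a cXML <Say> response
// string that a web server could return to SignalWire.
function buildSayResponse(voice, text) {
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    "<Response>",
    `  <Say voice="${voice}">${text}</Say>`,
    "</Response>",
  ].join("\n");
}

const xml = buildSayResponse(
  "deepgram.aura-asteria-en",
  "Greetings. This is the Asteria voice from Deepgram's Aura text-to-speech model."
);
console.log(xml);
```

Your HTTP handler would return this string with a `Content-Type: text/xml` header.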