Interaction API (Beta)

Enable real-time conversational AI with Convai’s Interaction API. Send text messages to your AI character and receive natural, streaming responses using Server-Sent Events (SSE).

Overview

The Interaction API provides real-time conversational capabilities via a lightweight REST + SSE interface. It allows your application to send user messages and receive streaming character responses with minimal latency.

This API supports continuous conversation context through session IDs and streams all outputs as Server-Sent Events (SSE) for smooth, live feedback.


Authentication

All API requests require authentication using an API key in the header:

X-API-Key: your_api_key_here

Endpoint

POST https://live.convai.com/connect/stream

Send a text query to an AI character and receive a streaming response.

Request Format: multipart/form-data

Parameter              Type    Description
character_id*          UUID    Unique identifier for the AI character
text_input*            string  Your text query/message to the character
character_session_id   string  Session ID to continue an existing conversation

* Required parameter.

Example Request

curl -X POST https://live.convai.com/connect/stream \
  -H "X-API-Key: YOUR_API_KEY" \
  -F "character_id=7bd3274c-1745-11ee-a3af-42010a400002" \
  -F "text_input=Hello, how are you?" \
  --no-buffer
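The same request can be sketched in Python using only the standard library. This is a minimal illustration, not an official SDK; the helpers `encode_multipart` and `stream_interaction` are names invented for this example.

```python
import json
import uuid
import urllib.request

API_URL = "https://live.convai.com/connect/stream"

def encode_multipart(fields: dict) -> tuple[bytes, str]:
    """Encode plain text fields as a multipart/form-data body."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f"{value}\r\n"
        )
    parts.append(f"--{boundary}--\r\n")
    body = "".join(parts).encode("utf-8")
    return body, f"multipart/form-data; boundary={boundary}"

def stream_interaction(api_key, character_id, text_input, character_session_id=None):
    """POST a text query and yield each SSE 'data:' payload as a dict."""
    fields = {"character_id": character_id, "text_input": text_input}
    if character_session_id:
        fields["character_session_id"] = character_session_id
    body, content_type = encode_multipart(fields)
    req = urllib.request.Request(
        API_URL, data=body,
        headers={"X-API-Key": api_key, "Content-Type": content_type},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # the SSE response arrives line by line
            line = raw.decode("utf-8").strip()
            if line.startswith("data: "):
                yield json.loads(line[len("data: "):])
```

Each yielded dict corresponds to one `data:` event in the stream shown below.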

Response: Server-Sent Events (SSE) stream

data: {"type": "connection-started", "message": {"session_id": "abc123", "transport": "sse", "character_session_id": "def456"}}

data: {"label": "rtvi-ai", "type": "bot-llm-started"}

data: {"label": "rtvi-ai", "type": "bot-llm-text", "data": {"text": "Hello"}}

data: {"label": "rtvi-ai", "type": "bot-llm-text", "data": {"text": "! I'm doing great, thank you for asking."}}

data: {"label": "rtvi-ai", "type": "bot-transcription", "data": {"text": "Hello! I'm doing great, thank you for asking."}}

data: {"label": "rtvi-ai", "type": "bot-llm-stopped"}

data: {"type": "connection-stoppped"}
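A client typically concatenates the `bot-llm-text` chunks as they arrive; the final `bot-transcription` event carries the same text in one piece. A minimal sketch of that accumulation (`collect_response` is an illustrative helper, not part of the API):

```python
import json

def collect_response(sse_lines):
    """Join streamed text chunks and capture the character session ID."""
    chunks, session_id = [], None
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank separators and comments
        event = json.loads(line[len("data: "):])
        if event.get("type") == "connection-started":
            session_id = event.get("message", {}).get("character_session_id")
        elif event.get("type") == "bot-llm-text":
            chunks.append(event["data"]["text"])
    return "".join(chunks), session_id
```

Run against the sample stream above, this returns the full sentence "Hello! I'm doing great, thank you for asking." together with the session ID "def456".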

Resume Conversation

To maintain conversation context, save the character_session_id from the first response and include it in subsequent requests.

Example Requests

First Request:

curl -X POST https://live.convai.com/connect/stream \
  -H "X-API-Key: YOUR_API_KEY" \
  -F "character_id=7bd3274c-1745-11ee-a3af-42010a400002" \
  -F "text_input=My name is Alice" \
  --no-buffer

Response includes:

{"type": "connection-started", "message": {"character_session_id": "9a98ab9b-5a6c-406e-9721-cc5bb0d527bb", ...}}

Second Request (with session ID):

curl -X POST https://live.convai.com/connect/stream \
  -H "X-API-Key: YOUR_API_KEY" \
  -F "character_id=7bd3274c-1745-11ee-a3af-42010a400002" \
  -F "text_input=What is my name?" \
  -F "character_session_id=9a98ab9b-5a6c-406e-9721-cc5bb0d527bb" \
  --no-buffer

The bot remembers the earlier context and replies: "Your name is Alice."
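The resume flow above can be wrapped in a small helper that remembers the session ID between turns. This sketch assumes a `send` callable that performs the actual HTTP/SSE request and yields parsed event dicts (like the streaming example earlier); the class itself only manages session state:

```python
class Conversation:
    """Keeps the character_session_id across turns so context persists."""

    def __init__(self, send, api_key, character_id):
        self._send = send            # callable performing the HTTP/SSE request
        self._api_key = api_key
        self._character_id = character_id
        self.session_id = None       # filled in after the first turn

    def say(self, text):
        """Send one user message and return the bot's full reply."""
        chunks = []
        for event in self._send(self._api_key, self._character_id,
                                text, self.session_id):
            if event.get("type") == "connection-started":
                # Save the ID so the next turn continues this conversation.
                self.session_id = event["message"]["character_session_id"]
            elif event.get("type") == "bot-llm-text":
                chunks.append(event["data"]["text"])
        return "".join(chunks)
```

The first call passes `character_session_id=None`; every later call reuses the ID captured from the `connection-started` event, mirroring the two curl requests above.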

Response Message Types

connection-started

Sent when the connection is established.

{
  "type": "connection-started",
  "message": {
    "session_id": "550e8400-e29b-41d4-a716-446655440000",
    "transport": "sse",
    "character_session_id": "660e8400-e29b-41d4-a716-446655440001"
  }
}

Fields:

  • session_id: Unique identifier for this connection

  • transport: Transport type (always "sse")

  • character_session_id: Save this to continue the conversation in future requests

bot-llm-started

Sent when the LLM starts generating a response.

{
  "label": "rtvi-ai",
  "type": "bot-llm-started"
}

bot-llm-text

Bot response text, streamed in chunks as they are generated.

{
  "label": "rtvi-ai",
  "type": "bot-llm-text",
  "data": {
    "text": "Hello there!"
  }
}

Fields:

  • text: A chunk of the bot's response text

bot-transcription

Complete transcription of the bot's response (sent after all text chunks).

{
  "label": "rtvi-ai",
  "type": "bot-transcription",
  "data": {
    "text": "Hello there! Complete response text."
  }
}

Fields:

  • text: The complete bot response text

bot-llm-stopped

Sent when the LLM finishes generating.

{
  "label": "rtvi-ai",
  "type": "bot-llm-stopped"
}

connection-stoppped

Sent when the response is complete and connection is closing.

{
  "type": "connection-stoppped"
}

Error Responses

The endpoint returns standard HTTP status codes:

Status Code  Description
400          Bad Request - Invalid parameters
401          Unauthorized - Invalid or missing API key
404          Not Found - Character not found
422          Unprocessable Entity - Validation error
429          Too Many Requests - Rate limit exceeded
500          Internal Server Error

Error Response Format:

{
  "detail": "Error message describing what went wrong"
}

Example:

{
  "detail": "Invalid API key"
}
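In client code, a non-2xx response can be turned into a readable message by combining the status code with the `detail` field. A sketch, assuming the error body is JSON as shown above (`describe_error` and `STATUS_HINTS` are illustrative names, not part of the API):

```python
import json

# Human-readable names for the status codes documented above.
STATUS_HINTS = {
    400: "Bad Request - invalid parameters",
    401: "Unauthorized - invalid or missing API key",
    404: "Not Found - character not found",
    422: "Unprocessable Entity - validation error",
    429: "Too Many Requests - rate limit exceeded",
    500: "Internal Server Error",
}

def describe_error(status_code: int, body: str) -> str:
    """Combine the HTTP status with the API's 'detail' message."""
    try:
        detail = json.loads(body).get("detail", "unknown error")
    except (ValueError, AttributeError):
        detail = body or "unknown error"  # fall back to the raw body
    hint = STATUS_HINTS.get(status_code, "Unexpected status")
    return f"{status_code} ({hint}): {detail}"
```

For example, a 401 with the body shown above would be rendered as "401 (Unauthorized - invalid or missing API key): Invalid API key".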

Conclusion

The Interaction API (Beta) enables dynamic, real-time communication with your Convai characters over text. By combining streaming responses, context persistence, and SSE-based delivery, it provides a responsive and low-latency conversational experience suitable for chat, games, and interactive AI applications.
