A guided overview of the first steps in Convai Playground, including navigation, character creation, testing, and essential global controls.
Introduction
Welcome to Convai Playground!
Your workspace for creating, customizing, and testing AI-powered characters. This page gives you a high-level overview of the core tools and workflows, helping you get productive quickly. Each section below links to a dedicated page where you can dive deeper.
Prerequisites
A Convai account.
Core Concepts
Character – An AI persona you create and customize with unique personality traits, language, knowledge, and behavior settings.
Avatar Studio – A no-code editor where you can design your character’s visual appearance and configure its Avatar Studio Experience, including environment, animations, interaction settings, and more.
Quick Start Flow
Dashboard Overview
Learn the layout, view your recent characters and experiences, and see where to create a new character or Convai Simulation Experience.
Continue to: Dashboard Overview
Creating a New Character
Start building your AI character by naming it, defining its description, choosing a language and voice, and setting its personality.
Continue to: Creating a New Character
Testing a Character
Test your character in real time using the Chatbox for text and voice interactions, or via Video Call for a more immersive experience.
Continue to: Testing a Character
Character Description
Learn how to use the Character Description page in Convai Playground to define your character’s identity, speaking style, and unique traits.
Introduction
The Character Description page is where you define the personality, backstory, and communication style of your AI character. Each character in Convai Playground has its own dedicated Character Description page, ensuring a unique identity that can be refined over time.
Accessing the Character Description Page
You can reach the Character Description page by clicking any Character Card on your Dashboard. This opens the character’s profile, where you can edit and manage its core attributes.
Main Features and Sections
1. Character Name and ID
Character’s Name – Editable field for your character’s display name.
Character’s ID – A unique identifier for the character, essential for using it in Convai SDKs and API integrations.
You can copy the ID to use in your applications.
Support Tip: If you need help from the support team, provide this ID when reporting character-related issues.
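When you move beyond the Playground, the Character ID is what ties your application to this specific character. The sketch below shows one way the pieces might fit together; the endpoint URL, header, and field names are assumptions based on Convai's public REST API and should be verified against the official API Reference before use.

```python
# Hypothetical sketch: the endpoint and field names below are assumptions;
# always confirm them against the official Convai API Reference.

def build_character_request(api_key, char_id, user_text, session_id="-1"):
    """Assemble the parts of a chat request for a Convai character.

    The Character ID copied from the Character Description page goes in as
    `char_id`; a session_id of "-1" asks the server to start a new session.
    """
    url = "https://api.convai.com/character/getResponse"  # assumed endpoint
    headers = {"CONVAI-API-KEY": api_key}                 # assumed header name
    data = {
        "userText": user_text,
        "charID": char_id,        # the ID copied from the character page
        "sessionID": session_id,
        "voiceResponse": "False", # text-only reply for this sketch
    }
    return url, headers, data

url, headers, data = build_character_request(
    api_key="YOUR-API-KEY",
    char_id="YOUR-CHARACTER-ID",
    user_text="Hello!",
)
```

From here you would send `data` as a POST request (for example with the `requests` library) and reuse the returned session ID on subsequent turns.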
2. Core Description / Speaking Style / Embodiment
Core Description
Add details about your character’s story, personality traits, distinctive features, and any behavioral guidelines.
Word limit: 1000 words.
Speaking Style
Describe How the Character Speaks – Outline the character’s tone, pace, formality, and speech patterns. Include unique expressions or phrases they commonly use.
Sample Dialogues – Provide example sentences showcasing the character’s typical speech style, including signature phrases that reinforce their personality.
Embodiment
Currently in development and will be available soon.
Examples
For inspiration, explore Sample Characters in the Dashboard. These examples show how different characters’ Core Descriptions and Speaking Styles are structured.
Conclusion
The Character Description page is the foundation of your AI character’s identity. By clearly defining its personality, voice, and unique traits, you ensure consistent and engaging interactions.
Character Customization
Learn how to refine your AI character’s personality, appearance, knowledge, and behavior to create consistent and engaging interactions.
Introduction
The Character Customization section in Convai Playground is where you transform a basic AI character into a fully realized persona. Here, you’ll define how your character looks, speaks, thinks, remembers, and responds, ensuring a unique and immersive experience for your users. Each page in this section focuses on a specific customization area, allowing you to work step-by-step or revisit any aspect at any time.
Core Customization Areas
Character Description
Define your character’s backstory, personality, and speaking style — the foundation of its identity.
Continue to: Character Description
Avatar Section
Open Avatar Studio to design your character’s visual appearance and configure its Avatar Studio Experience, including environment, animations, outfits, lighting, camera angles, and more.
Continue to: Avatar Studio
Language And Speech
Configure the language, voice, and tone your character uses for communication.
Continue to: Language And Speech
Best Practices
Start with core identity settings (Character Description, Language and Speech) before moving to advanced customization.
Use Update frequently to save progress and avoid losing changes.
Test your character regularly to ensure each customization change has the desired effect.
Next Step
Begin with Character Description to establish the personality and tone of your AI character before moving on to appearance, speech, and advanced behaviors.
Avatar Studio
Learn how to access and customize your character’s avatar in Convai Playground using Avatar Studio.
Introduction
The Avatar section lets you design and customize both the visual appearance of your AI character and its dedicated Avatar Studio Experience. All customization is handled through Avatar Studio—a powerful no-code tool where you can adjust the character’s look, clothing, and animations, as well as personalize the interactive environment it appears in.
Testing a Character
Learn the different ways to test your AI character in Convai Playground, including text chat, voice input, and video call with an avatar.
Introduction
Once you have created a character, it’s important to test how it interacts. Convai provides multiple ways to test your characters — from quick text and voice interactions to fully immersive video calls with custom avatars. This ensures you can refine personality, responsiveness, and interaction style before final deployment.
Dashboard Overview
Learn how to navigate and use the Convai Playground Dashboard to manage characters, experiences, and simulations efficiently.
Introduction
The Convai Playground Dashboard is your central hub for managing AI-powered characters and immersive simulation experiences. From here, you can easily create new characters, set up experiences, and access sample characters created by the Convai team.
Interact in Voice Mode (Beta)
Enable real-time, low-latency voice conversations with your Convai character using Voice Mode for natural, hands-free interactions.
Introduction
Voice Mode allows you to have seamless, natural, and low-friction voice conversations with your character. This guide explains how to set up Voice Mode, select the right interaction method, and maintain stable, real-time sessions for a smooth conversational experience.
Welcome
Start here to create, customize, test, and share interactive AI characters with Convai, covering Playground, no-code experiences, plugins and integrations, and API reference.
Welcome to the Official Convai Documentation
Your platform for building, customizing, and deploying intelligent, interactive AI characters across various environments and platforms.
Whether you’re a developer, designer, or creator, this documentation will guide you through every step from your first login to building fully immersive experiences with Convai’s powerful tools and integrations.
Knowledge Bank
Provide your character with information and reference materials to answer questions and maintain context.
Continue to: Knowledge Bank
Personality Traits
Adjust behavioral sliders to shape your character’s mannerisms, confidence, empathy, and other interaction styles.
Continue to: Personality Traits
Core AI Settings
Fine-tune advanced AI parameters to influence decision-making, creativity, and responsiveness.
Continue to: Core AI Settings
State of Mind
Define temporary or situational mindsets that influence how your character reacts in specific contexts.
Continue to: State of Mind
Memory
Review your character’s past conversations or enable Long Term Memory to allow recall across sessions.
Continue to: Memory
Narrative Design
Create structured narratives or guided interaction flows for your character to follow.
Continue to: Narrative Design
External API
Connect your character to external systems or APIs to retrieve live data or perform actions.
Continue to: External API
Publish
Share your character with others or embed it into your website.
Continue to: Publish
This will launch Avatar Studio, where you can customize your character’s appearance and configure its Avatar Studio Experience, including the environment, animations, outfits, lighting, camera angles, and more.
Next Step: Customize in Avatar Studio
For a complete guide to using Avatar Studio and its features, see our dedicated documentation:
The Avatar section serves as a quick link to Avatar Studio, where you can customize both your character’s appearance and its Avatar Studio Experience. Whether you’re creating realistic personas or stylized avatars, Avatar Studio provides tools to fine-tune the environment, animations, outfits, lighting, camera angles, and more—ensuring your character’s visual presence matches its personality and role.
Global Character Controls
Learn about tools available across all character pages, such as Versioning, Update, and Character Settings.
Continue to: Global Character Controls
Character Versioning
Save and switch between different versions of your character for safe experimentation and iteration.
Continue to: Character Versioning
1. Chatbox
If you want to test your character quickly without an avatar visual, you can use the Chatbox for text and voice interactions.
From your Dashboard, click on the character you want to test.
In the bottom-right Chatbox:
Type a message in the text input and press enter.
Or click the microphone button, speak, and click the microphone button again when finished to send your voice input.
Continue the conversation to evaluate your character’s responses.
Additional Chatbox Features:
Conversation Starters: At the bottom of the Chatbox, you’ll see dynamically generated conversation starters and quick replies, tailored to the ongoing dialogue.
Reset Chat: Located at the top-left of the Chatbox, this button restarts the session.
Copy Chat: Found directly below the Reset Chat button, it allows you to copy the conversation text.
Temporary Username: The bottom-most button lets you set a temporary username.
Feedback Buttons: On the right side of each character response, you’ll see thumbs-up and thumbs-down icons to provide positive or negative feedback about the reply.
2. Video Call
You can test your character in a video call for both visual and voice interactions.
You have two ways to start a video call:
From the Dashboard:
Locate the character’s card.
Click the green camera icon to start a video call instantly.
From the Character Page:
Click the character’s card to open its details.
In the top-right section, under the character’s thumbnail, click the video call button.
This method allows you to experience both the character’s visual appearance and voice in real-time, offering a more immersive test environment.
Conclusion
Convai Playground’s testing options make it easy to evaluate and refine your characters. Whether you prefer a fast, text-based interaction or a full video call experience with your custom avatar, you can ensure your AI behaves exactly as intended before sharing it with others.
Navigating the Dashboard
Main Dashboard View
When you log in, the Dashboard displays:
Recent Characters – Quickly access and edit your most recently used characters.
Recent Simulations – View and manage your latest Convai Sim Experiences.
Start a New Simulation – Choose from available scene templates such as Airport, Healthcare, Fire Station, Hotel, Police Station, Restaurant, Fitness, Science Lab, and more.
Sample Characters – Browse pre-made characters created by the Convai team for quick testing and inspiration.
Creating New Content
In the top-right corner, you’ll find:
Create a new experience – Start building a new Convai Sim Experience from scratch.
Create a new character – Design and customize AI-powered characters for your projects.
Sidebar Navigation
On the left sidebar, you can access:
Dashboard – Return to the Dashboard.
My Characters – Access your characters.
Create Character – Launch the character creation tool directly.
My Experiences – Access and edit your simulation experiences.
Profile and Settings
In the top-right profile section:
Click your profile name to open a dropdown menu with:
My Profile – Manage personal account details.
Billing – View usage and update payment information.
To the left of your profile name, you will find the API Key button, which gives you quick access to your API Key.
Top Navigation Bar
From the top navigation menu, you can directly reach:
Playground
Documentation
Videos
Plugins
Pricing
Contact
Conclusion
The Convai Playground Dashboard is designed for efficiency, providing quick access to all tools and resources you need to create and manage AI-driven characters and immersive simulations. Whether you are customizing existing assets or building new experiences from scratch, the intuitive layout ensures a smooth workflow.
Step-by-Step Guide
1. Open Your Character
From your Dashboard, open the character you want to use with Voice Mode.
Navigate to Core AI Settings and ensure you’ve selected a Live Model.
2. Configure Voice Settings
Go to Language and Speech and select a voice for your character.
Choose a voice other than GCP. GCP voices are not compatible with live models.
Once configured, click Update to save your changes.
3. Using Voice Mode
After updating your character, a microphone button will appear next to the chat input field.
Click this button to enter Voice Mode, allowing you to talk to your character hands-free in real time.
When you exit Voice Mode, the conversation transcript will appear in the chat area for review.
If you remain idle for more than 5 minutes, the voice session automatically disconnects. Simply reconnect to resume your conversation.
What You’ll Find Here
Our documentation is divided into several sections so you can easily find what you need:
1. Convai Playground
Learn to create, customize, and test your AI characters directly in Convai Playground.
Get Started – Basics of navigating the dashboard, creating your first character, and testing interactions.
Character Customization – Deep dive into the tools for defining your character’s appearance, voice, knowledge, traits, and more.
2. No Code Experiences
Create interactive AI experiences without writing a single line of code.
Avatar Studio Experiences – Create and customize your character’s visual identity, including appearance, clothing, accessories, environment, animations, lighting, camera angles, and more, all within an easy-to-use no-code editor.
Convai Sim Experiences – Build interactive, large-scale simulation environments where your AI characters engage in realistic scenarios, navigate spaces, and interact with objects.
Convai XR Animation Capture App – Capture high-fidelity motion data using your XR device and apply realistic animations to your avatars for more immersive, lifelike performances.
3. Plugins & Integrations
Extend your characters into your applications and games.
4. API Reference
In-depth API documentation for advanced customization and integration.
Before You Begin
To start building with Convai, you only need:
A Convai account – Sign up here if you don’t have one.
Getting Help
If you can’t find what you’re looking for, use the search bar at the top of the documentation to quickly locate relevant topics.
For inspiration, check out the Sample Characters available in your Dashboard.
If you need further assistance, visit the Convai Developer Forum to connect with the community and get support from the Convai team.
Creating a New Character
Learn how to create and customize a new AI character in Convai Playground, including description, avatar, voice, and language settings.
Introduction
The Convai Playground allows you to design AI-powered characters with unique personalities, voices, and visual appearances. This guide will walk you through creating a new character, from initial setup to customization of avatar, voice, and languages.
Step-by-Step Guide
1. Access the Creation Tool
From your Dashboard, click Create a new character in the top-right corner.
A new character creation interface will open.
In the left menu, only Character Description, Avatar, and Voice And Languages are active initially. Other sections will unlock after the character is created.
2. Character Description
Character’s Name – Enter the name for your character (you can edit this later).
Core Description – Write a short background covering the character’s story, personality traits, and distinctive features.
Alternatively, click Generate Core Description to create one automatically.
3. Avatar Customization
Click the Avatar tab in the left menu.
Select Configure Avatar to customize your character’s visual appearance.
Follow the steps in the Avatar Studio editor to adjust facial features, clothing, and other design elements.
4. Voice and Language Settings
Click the Voice And Languages tab.
In Language, select one or more languages your character can speak.
In Voice, choose from the filtered voice options for your selected languages.
5. Finalizing Character Creation
Once all desired settings are configured, click Create Character at the bottom right.
If you skip customization, a random avatar and voice will be assigned automatically.
If you don't choose any language, English will be selected by default.
After Creation
When the character is created, additional sections in the left menu become available for deeper customization:
Character Description
Avatar
Language and Speech
Knowledge Bank
Each of these features allows you to enhance and refine your character for more natural, intelligent, and engaging interactions. These are covered in separate documentation.
Conclusion
The character creation process in Convai Playground is designed for flexibility: you can launch a character in minutes or spend time refining every detail. Whether you start with default settings or fully customize the avatar, voice, and description, you can continue refining your character after creation.
Character Versioning
Manage multiple versions of a character and switch between them as required.
Introduction
In this section, we look at Character Versioning, i.e., maintaining different states of the character. This lets you preserve a previous stable state before trying out more changes. You can experiment without fear of losing an older state of the character, and if you want to discard the current changes and return to a previous version, you can restore that version and continue working from there. We call these saved states Snapshots of the character.
We will go over the features and how to use them in this section.
Language And Speech
Learn how to configure languages, voices, custom pronunciations, and word recognition for your AI character in Convai Playground.
Introduction
The Language and Speech section allows you to define the spoken languages, select a voice, and improve pronunciation and recognition for your AI character. With support for multiple languages and voice providers, you can ensure that your character communicates naturally and effectively with your audience.
Uploading Avatars
Currently, only MetaHuman and Reallusion avatars are supported for upload.
Lighting Adjustments
Set the Right Mood with Lighting
Lighting plays a key role in how your avatar looks and feels within the environment. It affects not only visibility, but also the overall tone and atmosphere of the scene.
Here’s how you can adjust lighting for your avatar:
Introduction
Introduction to Convai's plugins and integrations. Learn how to enhance your projects with AI.
Convai provides a variety of plugins and integrations to help integrate conversational AI and avatars into your projects.
Game Engines
Installation
Choose an installation method and add the Convai Unity SDK to your project.
Introduction
This page helps you choose the right installation method for your workflow. If you’re starting fresh, we recommend UPM for the smoothest update path.
Unity Plugin Utilities - Enhance development with Convai's tools and resources.
The terms Snapshot and Version are used interchangeably in this text; both refer to the same idea: the state and contents that define the character at a specific point in time.
Overview
The character versioning option is available at the top right-hand side of the character editor section, beside the Update button.
Once you click on it, you see the list of all your previously saved revisions, ordered by date.
Character Versioning section. There is no snapshot here yet.
We will go over the steps of creating and maintaining snapshots from scratch in the next section.
Create a Version
Let us start with a character that we already have saved. The data you see when you open a character's details is the Current Snapshot of the character. When you interact with the character, you are essentially referring to all the data in this Current Snapshot.
To create a new version, first open the Character Versioning section and click the + Add Snapshot button at the top.
Let's create our very first snapshot.
A pop-up appears asking you to give your snapshot a name and a description. Please note that a Snapshot Name is a required field to create a new version. Once you have filled in the details, click the Submit button.
We provide a name and a small description.
You can now see the new version in the list of snapshots. What does this version actually represent?
This snapshot stores all the data related to the character at that point in time: everything from the character description and embodiment to knowledge bank files, narrative design structure, and other details.
Restoring a Version
Suppose you have worked on the character further but are unhappy with the results and want to return to a previous version. This is where you can restore an old snapshot to the current state and work with it again. Here are the steps to follow:
To restore a version, open the Character Versioning section and select the snapshot you want to restore. The Restore Version button below will become active.
We will be restoring the data from the very first snapshot.
Once you click the Restore Version button, a pop-up appears asking whether you want to save the current changes as a new snapshot or discard them. You have the option to store your current changes as a test version and refer back to them later.
Let's directly restore the data in the snapshot to the Current Snapshot
For now, we are happy to discard the changes, so we click the Restore button. This brings the data from the selected version into the Current Snapshot of the character.
To keep your current changes, you can always Cancel, create a new snapshot with your progress, and then restore the older version.
Delete a Snapshot
You can also delete a snapshot that you no longer require. To do that, click the 3-dots menu by the corresponding snapshot in the Character Versioning list and select Delete Version.
Click on the 3-dots beside the snapshot to see all the options.
Some important points to remember
At any given point, you interact with the Current Snapshot of the character. If you have a publicly available app that uses the character, your users will only be able to interact with this current version.
We are currently working on a feature that lets developers deploy a version separate from the Current Snapshot.
Main Features
1. Set Language
Choose the languages your character can speak and understand.
Supports multilingual characters: select between 1 and 4 languages.
Default language: English.
Over 65 languages are available.
Selecting a language will filter the available voices in the Voice section.
2. Voice Selection
The Voice field provides access to over 1,200 voices in total. When you select a language, the available voices are filtered accordingly, so the number of voices varies by language.
Supported voice providers:
Google Cloud Platform (GCP)
Microsoft Azure
OpenAI
ElevenLabs
Custom Voices can be added through ElevenLabs.
See the Custom Voices documentation for setup details.
3. Add Custom Pronunciation
Custom pronunciations help your character pronounce specific words correctly, especially unusual or brand-specific terms.
To add:
Spelled As – The word as it appears in text.
Pronounced As – How it should sound, written phonetically in plain English.
Example:
Spelled As: convai
Pronounced As: convey
Case-sensitive: Uppercase and lowercase entries can have different pronunciations.
Currently only supports English.
4. New Word Recognition
New Word Recognition improves your character’s ability to understand unique or challenging words in speech input.
To add:
Spelled As – The correct spelling of the word.
Pronounced As – The phonetic pronunciation using simple syllables.
Example:
Spelled As: Ankur
Pronounced As: Ahnkur
Currently only supports English.
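The case-sensitivity rule above can be pictured with a toy substitution table. This is only an illustration of the matching behavior described in this section, not Convai's actual pronunciation engine:

```python
# Toy illustration of case-sensitive pronunciation entries -- not Convai's
# implementation, just the matching behavior described above.

pronunciations = {
    "convai": "convey",   # lowercase entry
    "Convai": "Convey",   # uppercase entry can have a different pronunciation
}

def apply_pronunciations(text):
    """Replace each whole word that has a case-sensitive entry."""
    return " ".join(pronunciations.get(word, word) for word in text.split())

print(apply_pronunciations("Convai says convai"))  # Convey says convey
```

Because lookups are exact-match on case, "Convai" and "convai" resolve to separate entries, which is why the docs note that uppercase and lowercase entries can carry different pronunciations.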
Conclusion
The Language and Speech settings provide complete control over how your character communicates, from language selection and voice choice to fine-tuning pronunciation and recognition. These tools help ensure your AI character delivers clear, accurate, and engaging interactions for users.
Choose a Lighting Preset
Use the dropdown menu to select from several preset lighting setups.
Adjust Lighting Power
Use the power level slider to fine-tune the light.
Subtle lighting changes can make a big difference in realism — experiment to see what best fits your character and scene.
Global Character Controls
A single reference for the shared toolbar controls available across all character pages in Convai Playground, including Versioning, Update, and Character Settings.
Introduction
This page explains the shared controls that appear at the top right of every character page in Convai Playground. You will see the same toolbar on Character Description, Avatar, Language and Speech, Knowledge Bank, Personality Traits, Core AI Settings, State of Mind, Embodied Actions, Narrative Design, External API, Publish, and Memory. Understanding these controls helps you work faster and avoid losing changes.
Where to find these controls
Look at the top right of any character screen. You will see:
Versioning icon
Update button
Character Settings menu (three dots)
These controls behave the same way on every page.
Controls overview
1. Character Versioning
Use Versioning to save and switch between alternative definitions of your character.
What it does
Saves a named snapshot of your character definition so you can test new ideas without losing a preferred setup.
Lets you switch to any saved version and continue editing from there.
For more information, refer to the Character Versioning documentation.
2. Update button
Apply your unsaved changes to the character.
States
Green: there are unsaved edits. Click Update to save.
Gray: there are no unsaved edits; all changes are saved.
3. Character Settings Menu
Open the three dots menu to access actions that affect the entire character.
Clone Character
Create a duplicate so you can branch work safely.
What is copied: all character configuration tabs (e.g., Description, Personality, Languages) are copied, except the Memory tab.
What changes: the clone receives a new Character ID.
When to use: large experiments, staging vs production split, A or B variants.
Share Character
Let others test your character.
What it does: Generates a share link so recipients can interact with the character (e.g., Chatbox or video call) without being able to modify it.
How it works: Open the dialog to copy a share link, respecting your current visibility setting (Public, Unlisted, or Private).
Delete Character
Remove the character from your account.
Deletion is permanent and cannot be undone.
Checklist before deletion
Confirm the character is not used in any live experience.
Export or copy any content you may need.
Knowledge Bank
Learn how to upload, manage, and connect knowledge files to your AI character using Knowledge Bank.
Introduction
The Knowledge Bank is where you store and manage information that your AI character can access during conversations. By uploading documents or adding text directly, you can give your character specific domain knowledge, enabling more accurate, relevant, and context-aware responses.
All files uploaded to your Knowledge Bank are linked to your Convai account and can be connected to any of your characters. This makes it an essential tool for training characters to respond with company-specific, product-specific, or topic-specific information.
Knowledge Bank Sections
1. My Documents
Displays all files uploaded to your account.
Information shown:
Name – File name.
2. Upload Knowledge
Upload .txt files from your computer.
Currently, only .txt file format is supported.
Once uploaded, files are stored in your account’s Knowledge Bank for use with any character.
3. Add Knowledge
Create a new file by entering plain text directly into the editor.
Name the file and save it in .txt format.
Refresh the page periodically during the learning phase to check whether the file status is “Available.”
Using the Knowledge Bank with Your Character
Example
We uploaded a file named Employee Onboarding Guide.txt describing the steps a new hire should follow during their first week.
Testing Without Connecting the File
Open the Chatbox.
Ask: “I’m a new hire. What should I do during my first week here?”
Result: The character responds using its general personality and AI model knowledge, not the uploaded file.
Testing by Connecting the File
Go to Knowledge Bank → My Documents.
Click Connect on the file.
In the Chatbox, click Reset Chat (top left) to start a new session.
Result: This time, the character’s response is based on the exact steps provided in the Employee Onboarding Guide file.
Always reset the chat session after connecting a new knowledge file so the latest data is used.
The total storage size for uploaded files depends on your Convai subscription plan. See the Pricing page for limits.
Conclusion
The Knowledge Bank is a powerful way to give your characters precise and reliable information. By connecting domain-specific documents, you ensure that your AI not only has personality but also the expertise to answer questions with accuracy and authority.
State Of Mind
Learn how the State of Mind feature visualizes your AI character’s emotional state in real time during conversations.
Introduction
The State of Mind section provides a visual representation of your AI character’s current emotional state. This dynamic emotional map helps you understand how your character is responding internally during a conversation, allowing for fine-tuning of its personality and interaction style.
How It Works
The State of Mind interface displays a color-coded emotion wheel.
Each segment represents a specific emotion such as Joy, Anger, Trust, Fear, Surprise, Sadness, Disgust, and Anticipation, along with nuanced variations like Serenity, Rage, Admiration, and Amazement.
Active emotions — those the character is currently experiencing — are highlighted more brightly on the graph.
Practical Use Cases
Character Testing: Observe real-time emotional responses to verify that the character reacts as intended.
Personality Tuning: Adjust personality traits in the Personality Traits section and see how they influence emotional patterns.
Storytelling & Roleplay: Ensure emotional consistency in interactive narratives.
Conclusion
The State of Mind feature offers valuable insights into your AI character’s emotional behavior. By monitoring these live emotional changes, you can ensure your character responds in a way that aligns with its designed personality and intended use case.
Memory
Learn how to use the Memory feature to review past sessions, manage conversation history, and enable long-term memory for your character.
Introduction
The Memory section lets you review conversation history for a character and decide whether it should remember information between sessions. Use it to audit interactions, troubleshoot issues, and enable persistent preferences.
Recent Memory
This tab lists all previous sessions with your character.
For each session, you can view:
Date – The date when the session occurred.
Time – The session’s start time, shown in UTC.
Session ID – A unique identifier for that session.
If you experience any issues in a session, the support team may request the Session ID so they can investigate in detail.
Available Actions:
View conversation: Click the downward arrow to expand and see the conversation log for that session.
Copy or download: Use the three-dot menu on the right to copy or download the session data.
Memory Settings
This tab allows you to enable or disable Long Term Memory.
When Long Term Memory is enabled, your character can remember preferences, choices, and facts from previous sessions. For example:
If you tell your character “My favorite color is blue” in one session, and later in a different session ask “What’s my favorite color?”, the character will respond with “blue.”
When disabled, the character will not retain information between sessions.
Conclusion
The Memory feature provides powerful control over how your character interacts with you over time. Use Recent Memory to inspect and share specific sessions, and adjust Memory Settings to decide whether your character should retain knowledge across conversations.
Publish
Learn how to publish and share your Convai Experience with the public, selected users, or embed it on your own website.
Introduction
The Publish page allows you to share your fully created and customized Convai Experience with the world or with a selected group of people. From here, you can configure the title, description, thumbnail, and visibility settings for your experience, as well as generate links for sharing.
Publishing Options and Visibility Settings
The Details tab contains all the essential settings for publishing your experience:
Experience Link – A direct link to your experience for easy sharing.
Experience Name – The display name for your published experience.
Experience Description – A short summary describing your experience.
Embed Experience
The "Embed Experience" tab allows you to embed Convai Experiences directly into your own website. This feature makes it easy to integrate interactive experiences into custom platforms or applications.
Convai Pixel Streaming Embed is currently accessible only with the Professional Plan and above.
Conclusion
The Publish page provides all the tools you need to control how your experience is shared, whether you want it available to the public, only to select individuals, or embedded directly into your website. By choosing the right visibility settings, you can ensure your experience reaches the right audience in the right way.
Customizing Your Avatar
Learn how to visually and behaviorally personalize your Convai avatar using the Avatar Studio configurator.
Overview
Once your character is created, you can start customizing how they look, move, and interact using the Avatar Studio.
What You Can Customize
Here’s what you can do inside the Avatar Studio:
Choose a Sample Avatar
Pick from a library of ready-to-use, high-fidelity avatars.
Customize Appearance
Modify facial features, clothing, hairstyles, and other visual elements to reflect your character’s identity.
Upload Your Own Avatars
Prefer a unique design? Upload your own 3D avatar models for full control over their look.
After you finish customizing, simply save and publish your avatar to bring it into your character’s conversations.
Configure Avatar
Learn how to choose, customize, or upload avatars in Avatar Studio.
Once you enter the Avatar Studio, you can define exactly how your avatar looks and behaves. Here’s how to get started:
Choosing a Sample Avatar
You can start by selecting a high-quality Sample Avatar from Convai’s library. These are designed to cover a wide range of character types and use cases.
Browse the available avatars by scrolling through the list.
Click on the one you want to use.
That avatar will be instantly applied to your character.
Creating a Custom Avatar
If you want something more unique, you can create a custom avatar using the built-in editor.
Go to the “Craft your own” tab.
Click the plus icon (+) to start creating a new custom avatar.
Give your avatar a name.
This allows for highly personalized avatars that align with your branding and narrative needs.
Uploading Your Own Avatar
If you'd like to upload a custom avatar model, please follow our detailed upload guide.
Face Filter
Use the Face Filter feature to make your avatar resemble a specific person based on a photo.
The Face Filter feature is available only with the Scale plan and above.
What is Face Filter?
Face Filter allows you to personalize your avatar’s appearance to look like a specific person using a reference photo.
How to Use It?
1. Enable Face Filter
Toggle on the Face Filter option inside the avatar customization panel.
2. Upload an Image
Click “Upload your own image” to add a photo reference.
3. Apply the Image
Select the uploaded image by clicking on it. Your avatar’s face will automatically morph to resemble the person in the photo.
4. Manage Images
To delete an image, click on it and select “Delete image”.
You can upload multiple images to try different looks.
With Face Filter, you can achieve even more lifelike, personalized characters — perfect for storytelling, training simulations, or representing real individuals in virtual settings.
Environment
Choose from immersive 3D and Solid environments to place your avatar in the right setting.
Bring Your Character to Life with the Right Setting
Selecting an environment helps anchor your avatar in a scene that matches your use case — whether it’s professional, playful, or futuristic.
You can choose from a variety of immersive 3D and Solid environments, including:
A sleek, modern office
A sci-fi room with futuristic vibes
A warm and inviting cozy lounge
A minimal and practical kiosk-style backdrop
These environments serve as the visual context for your avatar’s interactions, making conversations feel more realistic and engaging for your audience.
Match the environment with the personality or purpose of your character.
Convai Sim Experiences
Create AI-powered avatars and deploy them in interactive 3D environments—directly from your browser.
Introduction
Convai Sim is a browser-based platform that allows you to instantly create and deploy AI-powered avatars in interactive 3D environments — no downloads, and no complex setup required.
Designed for creators, educators, and developers, it enables rapid prototyping and deployment of lifelike characters inside rich, responsive scenes.
What You Can Do
With Convai Sim, you can:
Add one or more AI-powered avatars into a 3D scene
Set up real-time interactions using voice or text
Deploy avatars with smart navigation and context-aware behaviors
Who It’s For
Convai Sim is perfect for:
Educators and trainers building interactive simulations or learning environments
Storytellers and creators wanting to bring characters to life in immersive scenes
Game developers prototyping scenarios and NPC interactions
Whether you're designing a futuristic training program or a playful game level, Convai Sim makes it easy to bring intelligence and interactivity to 3D worlds.
Key Features
Browser-Based Platform
No installations — launch and edit in-browser.
Multi-Avatar Support
Add and manage multiple intelligent characters in a single scene.
High-Quality Visuals
Use expressive avatars for rich storytelling and realistic simulation.
Avatar Customization
Fine-tune your deployed avatar’s appearance, size, and position within your 3D simulation scene.
Refine Your Avatar for a Perfect Fit
Once you've placed your avatar into the scene, it's time to customize its model, pose, and placement to match your simulation's tone and context.
Customizing Your Avatar
Follow these steps to adjust your avatar visually using built-in tools:
1. Select and Open the Character Tools
Click directly on your avatar in the scene.
This will open the transform tools.
2. Customize Position, Rotation, and Scale
You can adjust your avatar’s placement and appearance using either visual tools or precise numeric fields:
Option A – Use Transform Gizmos
Move Tool: Drag the avatar along the XYZ axes.
Rotate Tool: Use the blue ring to turn the avatar’s facing direction.
Scale Tool: Resize the avatar by dragging the top cube handle.
Option B – Use the Edit & Publish Panel
When the avatar is selected, the Edit & Publish panel appears.
Manually enter values for Position, Rotation, and Scale.
This option is ideal when you need precise alignment, consistency across avatars, or exact placement within complex scenes.
With both intuitive drag-and-drop controls and precision inputs, customizing your avatar's presence in the scene is flexible and efficient.
Next up: Let’s bring your avatar to life with tour-guide behaviors and intelligent interactions!
Convai XR Animation Capture App Setup
Learn how to install and connect the Convai XR Animation Capture App on your Meta Quest headset to start recording avatar animations in VR.
Requirements
Before you begin, make sure you have the following:
Meta Quest 2 / 3 / Pro
A registered Convai account
A stable internet connection
Installation Steps
Step 1: Install the App on your Quest device
Put on your Meta Quest headset.
Open the Meta Quest Store.
Search for "Convai Animation Capture" and install the app.
Step 2: Log In to Your Convai Account
Launch the Convai Animation Capture app on your headset.
When prompted, log in to your Convai account.
You're Ready to Animate!
After completing the steps above, your setup is complete. You can now begin recording animations directly in VR, which your AI avatars can intelligently perform in simulations, scenes, or guided experiences within Convai Sim.
Getting Started
Get the Convai Unity SDK installed, configured, and verified with a quick conversation test.
Introduction
The Convai Unity SDK lets you bring real-time conversational AI into your Unity projects—ideal for NPC dialogue, voice interactions, and interactive character experiences.
This Getting Started section is designed to guide you from a fresh Unity project to a successful “first conversation” using either a sample scene or your own custom scene.
Overview
Use these pages depending on where you are in the process:
Installation
UPM (Recommended) — easiest updates and dependency management
Unity Asset Store — Asset Store based distribution workflow (Coming soon)
What’s next
If this is your first time installing Convai in Unity:
Start with Install via UPM
Continue to Configure API Key
Validate with Sample Scenes or Custom Scene Setup
Conclusion
You now have a clear path to install and validate the Convai Unity SDK. Start with Installation, then move to Setup, and finish with a quick test conversation.
Need help? For questions, please visit the Convai Developer Forum.
Install via UPM (Recommended)
Install the Convai Unity SDK via the Unity Package Manager using the package name.
Introduction
UPM installation is the recommended approach because it’s easy to maintain, update, and keep consistent across a team.
Prerequisites
A Unity project opened in the Unity Editor
Step-by-step
1. Open Package Manager
In Unity, go to Window → Package Manager.
2. Add the Package by Name
Click the + button, choose "Add package by name…", and enter the Convai package name.
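If you manage dependencies by hand, the same step can be expressed as an entry in Packages/manifest.json. The package name and version below are placeholders, not the real values — substitute the actual name and version from Convai's installation instructions:

```json
{
  "dependencies": {
    "com.convai.example-package": "1.0.0"
  }
}
```

Keep your project's existing dependencies in place and add only the Convai entry; Unity re-resolves packages automatically the next time the Editor gains focus.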
Troubleshooting
Console errors after install
Confirm you are using a supported Unity version.
Conclusion
You’ve installed the Convai Unity SDK via UPM and confirmed the editor compiled successfully. Next, go to Setup → Configure API Key.
Need help? For questions, please visit the Convai Developer Forum.
Setup
Configure credentials and choose how you want to test Convai: samples or your own scene.
Introduction
After installation, you’ll configure your Convai API key and then validate the integration using either:
Convai’s Sample Scenes, or your own Custom Scene Setup
Overview
Configure API Key (required)
Import & Run Sample Scenes (fastest validation)
Custom Scene Setup (integrate into your scene)
Add Chat UI (optional, text input + transcript)
Add Lip Sync to Your Character (optional, real-time facial animation)
Recommended path
Configure API Key
Import and run a sample scene
(Optional) Set up a custom scene
(Optional) Add Chat UI
Conclusion
You’re ready to configure your project and run your first conversation test. Start with Configure API Key.
Need help? For questions, please visit the Convai Developer Forum.
Disable Assembly Validation
If you ever get an error that looks like this, disable Assembly Version Validation in Project Settings > Player > Other Settings.
Assembly 'Assets/Convai/Plugins/Grpc.Core.Api/lib/net45/Grpc.Core.Api.dll' will not be loaded due to errors:
Grpc.Core.Api references strong named System.Memory Assembly references: 4.0.1.1 Found in project: 4.0.1.2.
Restarting the Unity project after unchecking the box should fix the issue.
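If you prefer to apply this setting from code (for example, so everyone on a team gets it from a shared editor script), recent Unity versions expose it as PlayerSettings.assemblyVersionValidation. A minimal sketch, assuming Unity 2020.1 or newer; the menu path is an arbitrary choice for illustration:

```csharp
// Editor-only utility: place this file under an Editor/ folder.
// Disables "Assembly Version Validation" (Project Settings > Player >
// Other Settings) so strong-named dependency mismatches like the
// Grpc.Core.Api error above no longer block assembly loading.
using UnityEditor;
using UnityEngine;

public static class DisableAssemblyValidation
{
    [MenuItem("Tools/Convai/Disable Assembly Version Validation")] // illustrative menu path
    public static void Disable()
    {
        PlayerSettings.assemblyVersionValidation = false;
        Debug.Log("Assembly Version Validation disabled. Please restart the Unity Editor.");
    }
}
```

As with the checkbox, a restart of the Unity Editor is still required after changing the setting.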
Animations have Facial Blendshapes
Resolve facial blendshape issues in Unity animations with Convai. Improve character realism.
If the lip-sync on characters is either not visible or very faint, it could be a result of the character's animations overriding the blendshape changes made by the script. We recommend deleting the relevant blendshape tracks in the animation dopesheet.
The blendshapes live in the CC_Base_Body's Skinned Mesh Renderer; these are the tracks to delete.
Default Animations Incompatibility
Fix default animation incompatibilities in Unity with Convai. Ensure smooth AI character animations.
If the default animations that ship with the animator look bugged such that the hand seems to intersect with the body, it could indicate an issue with the wrong animation avatar being selected.
You can easily fix that by heading to the character's Animator component and assigning the correct animation avatar to the Avatar field.
For male avatars
For female avatars
The correct animation will look something like this. The hands should not intersect the body.
Adding Scene Reference and Point-At Crosshairs
You can point at Interactable Objects and Characters and ask your characters about them.
To enable this, simply drag and drop the Convai Crosshair Canvas prefab into the scene.
Pre-Requisites
Review the prerequisites for integrating Convai with Unity. Ensure seamless setup and functionality.
Unity Version
The Convai Unity SDK requires Unity 2022.3.x or later.
You should have Git installed locally on your system.
Skills and Knowledge
Before integrating the Convai SDK, you should be comfortable with the following:
Importing Packages: Know how to import external packages into a Unity project.
Unity Editor: Be proficient in navigating the Unity Editor interface.
Animations: Understand how to add and handle animations for assets.
Having these skills will ensure a smooth integration and optimal use of the Convai Unity SDK in your projects.
Personality Traits
Learn how to customize your AI character’s personality using presets or manual trait adjustments.
Introduction
The Personality Traits section defines how your AI character behaves, interacts, and responds during conversations. By adjusting personality parameters, you can align the character’s behavior with its intended role, making interactions more engaging and consistent.
Avatar Studio Experiences
Create intelligent 3D AI avatars directly in your browser — no downloads, no code, fully customizable.
Introduction
Convai’s Avatar Studio is a user-friendly platform that allows anyone to create intelligent, high-quality 3D conversational avatars — right from your web browser.
Animation & Expression Settings
Customize your avatar’s expressiveness with facial and body animations, emotions, and intelligent actions.
Make Your Avatar Come Alive
Convai’s Avatar Studio lets you fine-tune how expressive your avatar is — from subtle facial expressions to full-body gestures and smart actions.
Publishing an Experience
Learn how to publish and share your customized avatar experience for use across web, kiosks, apps, and more.
Ready to Share Your Experience with the World?
Once your character and avatar setup is complete, you can publish your experience directly from the Convai Character Creator dashboard.
Creating Your AI Simulation with Convai Sim
Bring your Convai characters to life by placing them into 3D interactive environments using Convai Sim
Introduction
Now that you’ve created a Convai character, it’s time to place them into a 3D simulation. With Convai Sim, you can bring characters to life inside immersive environments—fully interactive and embodied in high-quality avatars.
Publishing an Experience
Learn how to finalize and publish your AI simulation or tour guide experience created with Convai Sim
Make Your Experience Live
Once you’ve finished building your AI simulation or virtual tour, Convai Sim makes it easy to publish and share your experience across platforms.
Convai XR Animation Capture App
Capture animations in VR using your Meta Quest and animate AI avatars—no mocap suit required.
Introduction
The Convai XR Animation Capture app allows you to record high-quality animations directly in virtual reality using a Meta Quest headset. These animations can be uploaded to your Convai account and used seamlessly across platforms like Unity, Unreal Engine, or within no-code tools like Avatar Studio and Convai Sim.
Adding Your Recorded Animations to AI Avatars Inside Unity
Learn how to import animations recorded in VR and apply them to your AI avatars in Unity.
Overview
Bring your Convai avatars to life inside Unity by integrating animations recorded via the Convai XR Animation Capture App. This guide walks you through importing those animations and attaching them to AI-powered characters in your Unity project.
Downloads
Download Convai tools for Unity. Access the latest plugins and updates for AI integration.
Version
Features
Download Link
Creating a Convai Powered Scene from Template
This guide will help you create a scene in Unity with the Convai Essentials already present, helping you get started with our plugin quickly.
Step 1) Open the New Scene window
You can open the New Scene window in two ways: press Ctrl + N (Windows) or Cmd + N (Mac), or navigate to File → New Scene.
Player Data Container
All the information that Convai SDK needs from the player to work properly
This is a ScriptableObject that is created automatically when you enter Play mode in the Editor with the Convai SDK installed, in a scene where the Convai Base Scene Essentials prefab is present.
Default Player Name
You can provide a default name for your players.
Player Name
The current name of your player. If you use our settings panel, it is kept updated automatically out of the box; if you use custom logic, it is your responsibility to keep it updated, as the transcript UI uses this name for display.
Speaker ID
Unity Plugin (Beta) Overview
Discover the all-new Convai Unity Plugin Beta — redesigned from the ground up for faster, more immersive, and hands-free AI character experiences in Unity.
Introduction
The Convai Unity Plugin (Beta) marks a major leap forward in how developers can bring conversational AI to life inside Unity.
Built entirely from the ground up with a new backend and plugin infrastructure, this release delivers a faster, lighter, and more powerful experience for real-time character interactions.
Every aspect of the plugin has been re-engineered based on extensive developer feedback from our previous version — focusing on performance, ease of use, and seamless integration with modern Unity workflows.
Language Support
Convai offers comprehensive transcript and voice support for a wide range of languages. To facilitate seamless integration, our Unity plugin comes with a custom TextMeshPro (TMP) package, which includes essential fonts and required settings for major languages.
This requires TMP Essentials to be installed, which can be done through the TextMeshPro option in the Window menu or through the prompt shown when the project starts.
Microphone Permission Issues
Resolve microphone permission issues in Unity with Convai. Ensure smooth voice interactions.
If you see the microphone indicator turning on in the top-left corner but no user transcript appears in the chat UI, and the character's responses don't seem coherent with what you said, it is likely that the game or Unity is not accessing the correct microphone or lacks sufficient microphone privileges. To fix this, please follow along.
Narrative Design Keys
This guide shows how to dynamically pass variables to the Narrative Design section and triggers.
We will create a simple scenario where the character welcomes the player and asks them about their evening or morning based on the player's time of day.
Step 1
Activate the Narrative Design for your character in the Playground. Then, create a new Section.
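The scenario above hinges on computing a per-session variable on the client and handing it to the Narrative Design section as a template key. The component and method names in this sketch are hypothetical placeholders, not the SDK's actual API — consult the Narrative Design reference for the real calls:

```csharp
// Illustrative sketch only: the commented-out "UpdateTemplateKeys" call
// is a hypothetical name standing in for the SDK's real key-passing API.
using System;
using System.Collections.Generic;
using UnityEngine;

public class TimeOfDayKey : MonoBehaviour
{
    void Start()
    {
        // Compute the player's time of day once per session.
        string timeOfDay = DateTime.Now.Hour < 12 ? "morning" : "evening";

        // Pass it as a template key so the welcome Section can reference it,
        // e.g. "Good {timeOfDay}! How is your {timeOfDay} going so far?"
        var keys = new Dictionary<string, string> { { "timeOfDay", timeOfDay } };
        // narrativeDesignManager.UpdateTemplateKeys(keys); // placeholder call
    }
}
```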
Jaw Bone in Avatar is not Free
Fix jaw bone issues in Unity avatars with Convai. Ensure smooth lip sync and animations.
If the Lip Sync does not seem to cause any facial animations, even after removing all blendshapes from animations, then the following steps should help resolve the issue.
This is a known issue in Reallusion CC4 characters.
Select the Character and head to the Animator component.
Dynamic Information Context
The Dynamic Information feature enables you to pass variables to NPCs in real time, allowing them to react dynamically to changes in the game environment. This can include the player’s current health, inventory items, or contextual world information, greatly enhancing interactivity and immersion.
Step-by-Step Guide to Setting Up Dynamic Config
First, Add the Dynamic Info Controller Component to your Convai NPC.
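As a sketch of the idea (the commented-out setter below is a placeholder, not the component's verbatim API), live game state can be flattened into a short natural-language context string and handed to the controller whenever it changes:

```csharp
// Illustrative sketch: "SetDynamicInfo" is a hypothetical name for
// whatever setter the Dynamic Info Controller component exposes.
using UnityEngine;

public class PlayerContextFeeder : MonoBehaviour
{
    public int playerHealth = 100;
    public string currentZone = "Armory";

    string lastContext;

    void Update()
    {
        // Flatten live game state into a short context string the NPC
        // can draw on when responding.
        string context = $"Player health: {playerHealth}/100. Location: {currentZone}.";
        if (context != lastContext) // only push when something changed
        {
            lastContext = context;
            // dynamicInfoController.SetDynamicInfo(context); // placeholder call
        }
    }
}
```

Updating only on change, as above, avoids flooding the NPC with an identical context string every frame.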
Unity Plugin
Integrate advanced conversational AI to create intelligent, interactive NPCs for your games.
Overview
Convai's Unity SDK provides you with all the tools you need to integrate conversational AI into your Unity projects. Convai offers specialized NLP-based services to build intelligent NPCs for your games and virtual worlds. Our platform is designed to seamlessly integrate with your game development workflow, enhancing the interactivity and depth of your virtual environments.
Configure API Key
Add your Convai API key in Unity to enable the SDK.
Introduction
The SDK needs your Convai API key to authenticate requests and enable character conversations.
Limitations of WebGL Plugin
Understand the limitations of the WebGL plugin for Unity with Convai. Optimize your development.
Size Constraints
iOS browsers impose strict limitations on the size of WebGL builds. These constraints are primarily due to:
Memory Limits: iOS devices have limited available memory for web applications, which can affect the performance and feasibility of running large WebGL builds.
Building For Supported Platforms
With Convai's Unity SDK, you can build your favorite application for several platforms, including Windows, macOS, and Android. Currently, we also support these platforms:
Easily position characters using drag-and-drop scene editing
Instantly publish your experience for testing or deployment
Run everything directly in your browser.
Enterprises creating training, onboarding, or customer-facing virtual flows
Tourism and museum teams looking for guided, avatar-led experiences
Instant Deployment
Launch scenes immediately and preview interactions with one click.
Intelligent Navigation
Characters move contextually, ideal for tour guide or training scenarios.
Interactive Scene Editing
Easily arrange avatars and elements using drag-and-drop tools.
Versatile Use Cases
Perfect for education, training, tourism, gaming, and more.
Programming in C#: Have a basic experience programming Unity scripts in C#.
Script Integration: Be capable of adding scripts to a game object.
Building and Deployment: Know how to build and deploy an application to your chosen platform.
What’s New
This Beta introduces a wide range of improvements designed to make AI character integration smoother, faster, and more natural than ever before:
Low Response Time — Experience significantly reduced latency for more fluid and realistic exchanges.
Voice Activity Detection (VAD) — Automatically detect when a user is speaking, creating smoother conversational flow.
New Convai Plugin Architecture — Optimized for scalability, extensibility, and future updates.
Lightweight Package Size — The plugin dynamically fetches cloud resources as needed, keeping your project lean.
Together, these updates make building intelligent, interactive worlds with Convai characters easier and more efficient than ever before.
Beta Release
This is the Beta release of the new Convai Unity Plugin.
We’ll be rolling out frequent updates to improve stability, performance, and feature coverage as we move toward the full release.
Your feedback plays a critical role in shaping this development.
We encourage you to share your thoughts, experiences, and suggestions directly on the Convai Developer Forum.
Conclusion
The Convai Unity Plugin (Beta) represents the next evolution of AI-driven interactivity in Unity — blending natural voice, low-latency responses, and seamless integration into one unified framework.
Start exploring, experiment with new features, and help us shape the future of interactive AI experiences.
Browser Storage Quotas: Safari and other iOS browsers restrict the amount of data that can be stored locally. This includes caching and IndexedDB, which are often used to store assets for WebGL builds.
Key Limitations
Maximum Downloadable Asset Size: iOS browsers may restrict the size of individual downloadable assets. Large assets might fail to load, causing the application to break.
Total Build Size: The total size of all assets combined should ideally be kept under 50-100 MB for smooth performance. Exceeding this limit can lead to crashes or extremely slow loading times.
Memory Usage: iOS devices typically have less RAM available compared to desktop environments. High memory usage by WebGL builds can result in frequent browser crashes.
Browser Compatibility
Safari: The default browser on iOS, Safari, is generally the best option for WebGL builds, but it still has significant limitations compared to other desktop browsers.
At the top of the page, you’ll find a dropdown menu containing predefined personality presets:
Adventurous Thinker
Friendly Optimist
Harmonious Empath
Analytical Perfectionist
Curious Mediator
Energetic Dreamer
Social Adventurer
Compassionate Idealist
Selecting a preset automatically adjusts the character’s personality traits to match the chosen style.
Customizing Personality Traits
If you prefer full control, you can manually adjust the vertical sliders for each personality dimension:
Openness
High value: Likes exploring and trying new things.
Low value: Prefers stability and routine.
Meticulousness
High value: Pays great attention to detail.
Low value: More relaxed and spontaneous.
Extraversion
High value: Outgoing and sociable.
Low value: Reserved and introverted.
Agreeableness
High value: Cooperative and empathetic.
Low value: More competitive and independent.
Sensitivity
High value: Highly emotional and expressive.
Low value: Less emotional and more reserved.
Each slider ranges from 0 to 4, allowing precise adjustments to match your character’s personality profile.
Visual Personality Map
Below the sliders, a radar chart displays a visual representation of the character’s personality. This helps you see how each trait contributes to the overall personality balance.
Conclusion
The Personality Traits section gives you the flexibility to either choose from predefined styles or fine-tune individual traits to create a personality that matches your vision. By combining these settings with your character’s description and voice, you can create truly distinctive AI personas.
What You Can Do
With a simple interface, you can design and deploy fully interactive avatars that:
Speak and respond via voice and text
Perform intelligent animations
Adapt to different virtual environments
Are fully customizable
Optionally use vision-based input to "see" the user and react with natural, personalized responses, enhancing realism
Who It’s For
Avatar Studio is perfect for:
Creators and developers building digital characters
Educators creating engaging learning experiences
Brands looking to enhance digital events
Game designers needing lifelike NPCs
Anyone interested in AI-powered interactive storytelling
Whether you're creating an NPC for a game or a digital host for a virtual event, Convai’s Avatar Studio helps you bring your characters to life—quickly and easily.
Key Features
Conversational AI
Avatars engage in natural, human-like voice or text conversations.
High-Quality Metahuman NPCs
Realistic 3D Metahuman avatars with high-fidelity lip-sync, natural eye-blinking, and intelligent animations.
Runs Entirely on the Browser
No downloads, installations, or GPU power needed — just open and start creating.
Intelligent Actions and Animations
Avatars react with gestures such as waving, thinking, and expressing emotions based on the conversation context.
Proactive & Agentic AI Characters
Characters can initiate conversations and act autonomously in response to their environment.
Vision-Based Interaction
Avatars can perceive users via camera input and respond with contextually appropriate and human-like reactions.
High-Quality Backgrounds
Choose from immersive environments to place and enhance your characters.
Total Customization
Fully personalize the avatar’s appearance, voice, actions, environments, branding and more.
Facial & Body Animation
Use sliders to define the intensity of animations:
Facial Animation
Range: -1 to 1
Lower values result in minimal expressiveness, while higher values make your avatar more emotionally responsive.
Body Animation
Range: Low to High
A low setting keeps the avatar more static, while high adds dynamic hand and body movements for livelier interaction.
Camera Focus Toggle
Enable or disable eye contact with the user by toggling camera focus.
Initial Facial Expression
You can define how your avatar appears at the start of an interaction:
Enable or Lock Expressions
Use toggles to either:
Allow expressions to change during conversation
Lock the avatar into a specific expression
Select an expression from the dropdown:
Joy
Trust
Fear
Surprise
Custom Actions
Give your avatar intelligent behaviors during interactions — like waving hello or thinking.
How to Add a Custom Action:
Click “Add a new action”.
Toggle Eye Focus on or off.
Click “Select animation”.
Choose from available animations (e.g., Wave Animation for greeting).
Name your action (e.g., Waves Cheerfully).
Click “Preview Animation” to test how it looks.
Before You Begin:
Make sure your avatar is saved and the character is created before navigating to the Publish tab.
Publishing Steps
1. Go to the Publish tab inside your character’s dashboard.
2. Finalizing Your Experience
Fill in the necessary details to define and present your simulation:
Experience Name
e.g., Virtual Tour of the Fire Station
Experience Description
e.g., Get a deeper look and understanding of the inner workings of a fire station with your virtual tour guide Lina!
Thumbnail (Optional)
Upload an image to visually represent your experience.
3. Choose Visibility Settings
Select how and with whom the experience should be shared:
Not listed publicly, but can be accessed via a direct link
Embed on Your Site (Enterprise-only)
Publish your experience directly to your own website.
Convai Pixel Streaming Embed is currently accessible only with the Enterprise plan.
To learn how to embed an avatar into your own platform, check out the Embedding Documentation.
After Publishing
Once published, your experience is ready to be deployed on:
Follow this step-by-step guide to launch your first AI-powered simulation using Convai Sim.
1. Access the Playground
Go to convai.com and log into your account. Navigate to the Playground section from the dashboard.
2. Create a New Experience
Click on “Create a new experience” to begin setting up your simulation.
3. Choose an Environment
Select an environment that fits your use case (e.g., office, museum, sci-fi room). Then click “Start Experience” to enter the Convai Sim.
4. Explore the Scene
Once the scene loads:
Use WASD keys to move around.
Use your mouse to look around the environment — just like in a first-person game.
5. Add an Avatar
Click the top-left icon to open the avatar menu. Then:
Click “Add Avatar”.
A hologram will appear — place it at the desired location in the scene.
6. Select Your Character and Avatar
You’ll be prompted to:
Choose your previously created Convai character.
Select a Metahuman avatar to visually represent that character.
7. Deploy the Character
Click “Deploy Character” to spawn the avatar into the environment. The avatar will now be active and ready to interact.
8. Add More Avatars (Optional)
Repeat the process to add multiple characters into the same scene and create more dynamic simulations.
Summary
You’ve now:
Created a Convai character
Selected a 3D environment
Embodied your character in a lifelike avatar
Brought them into an interactive simulation
With multi-avatar support, you can quickly build rich, AI-driven experiences—from training simulations and virtual tours to interactive stories and games.
Next, we’ll explore how to customize your avatars and scenes using the available tools.
Publishing Steps
1. Finalizing Your Experience
Fill in the necessary details to define and present your simulation:
Experience Name
e.g., Virtual Tour of the Fire Station
Experience Description
e.g., Get a deeper look and understanding of the inner workings of a fire station with your virtual tour guide Lina!
Thumbnail (Optional)
Upload an image to visually represent your experience.
2. Choose Visibility Settings
Select how and with whom the experience should be shared:
Not listed publicly, but can be accessed via a direct link
Embed on Your Site (Enterprise-only)
Publish your experience directly to your own website
Convai Pixel Streaming Embed is currently accessible only with the Enterprise plan.
To learn how to embed an avatar into your own platform, check out the Embedding Documentation.
What Happens After Publishing?
Once published, your experience becomes:
Accessible to your intended audience
Ready for interaction via web, kiosk, or internal use
Shareable as a training tool, educational demo, or digital showcase
Whether you're running a public-facing simulation or a private module for internal teams, Convai Sim gives you complete control over how your AI-driven experience is distributed.
What You Can Do
With the Convai XR Animation Capture App, you can:
Record natural animations in VR using your Meta Quest
Upload animations directly to your Convai account
Assign these animations to AI avatars, which perform them intelligently during conversation
Use animations in:
Unity
Unreal Engine
Convai Sim
Build custom gesture libraries and animation sets
→ All without the need for mocap suits or external trackers
Who It’s For
This app is ideal for:
Developers & creators building immersive and interactive characters
Game designers enhancing NPC realism in Unity or Unreal
Brands & marketers creating engaging virtual hosts with Avatar Studio
Storytellers & world-builders designing no-code simulations with Convai Sim
Whether you’re building a virtual assistant, NPC, tour guide, or performer — XR Animation Capture helps you bring your AI characters to life with natural, human motion.
Key Features
VR-Based Animation
Record gestures, motions, and actions naturally with your Meta Quest headset.
Direct Upload to Convai
Animations are automatically synced to your Convai account—no manual transfer needed.
Cross-Platform Support
Use animations in Unity, Unreal Engine, Convai Sim, and Avatar Studio — no extra setup required.
AI-Driven Animation Triggers
Let your avatars perform animations intelligently based on dialogue and context.
No Mocap Suit Needed
Capture high-quality animation using just your VR headset — no external trackers or suits required.
Works with No-Code Tools
Deploy intelligent, animated avatars directly in browser-based platforms like Avatar Studio and Convai Sim.
How to Add Recorded Animations to AI Avatars in Unity
Step 1: Set Up Unity & Convai
Before importing animations, ensure your Unity project is correctly set up with Convai.
Step 2: Import Your Recorded Animations
Go to the Convai Dashboard and navigate to the Server Animations tab.
Locate the animations you recorded in VR.
Click Import and select a location within your Unity project's Assets/ directory.
The files must be placed inside the Unity project folder for them to be detected and used properly.
Step 3: Apply the Animation to a Character
In Unity, drag your AI character model into the scene hierarchy.
Adjust the character’s position if needed.
Open or create an Animator Controller.
Drag the imported animation clip into the Animator Controller.
If the animation should repeat, enable Loop Time in the Animation settings.
Step 4: Test the Scene
Run your Unity scene.
Start a conversation with the avatar or trigger the assigned action.
Watch your AI avatar perform the recorded animation in real-time!
Done!
You’ve now successfully connected a custom VR-recorded animation to an AI-powered avatar in Unity.
Repeat the process to add more animations and create rich, expressive characters in your simulations or games.
The WebGL version of the plugin has some limitations. To learn about them, see Limitations of WebGL Plugin.
Unity Verified Solution
This is the Long-Term Support (LTS) release of our core plugin. It contains all the necessary tools for adding conversational AI to your characters.
This plugin version should be used if you need to build for WebGL. Please ensure that Git is installed on your computer prior to proceeding.
Step 2) Select Convai Scene Template
You may see several scene templates depending on your project, but in this guide we are interested in the Convai Scene Template, so select it and click the Create button.
Screenshot showing how to create a scene from convai scene template
Step 3) Save Created Scene
You can now save the newly created scene at your desired location in the project by pressing Ctrl + S on Windows or Cmd + S on Mac, or by navigating to File → Save Scene.
Screenshot showing how to save the scene
This opens the Save Scene window. Choose your desired location; for this demo we will save it inside the Demo folder, but you can save it anywhere in the Assets directory.
Give your scene a name, then click the Save Scene button.
Screenshot showing save location of the new Convai powered scene
Now you can import your Convai Characters or your own Custom Characters by following our complete guide.
Screenshot showing process of opening up New Scene Window in Unity
Speaker ID for the player. Please note that the Speaker ID is directly linked to your API key, so each API key has a unique Speaker ID associated with it. If the Boolean below is set to true, we handle creation of the Speaker ID whenever it is not found in the Player Prefs.
Create Speaker ID If Not Found
This Boolean tells the SDK whether it should create a unique Speaker ID for that Player Name when one is not found in the Player Prefs.
Buttons
Reset Data
Clears the Player Name and Speaker ID fields.
Copy Data
Copies the data to the system clipboard so you can paste it anywhere for debugging purposes.
Player Pref Settings Button
Load: Loads the Player Name and associated Speaker ID from the Player Prefs
Save: Saves the Player Name and associated Speaker ID to the Player Prefs
Delete: Deletes the Player Name and associated Speaker ID from the Player Prefs
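The Load/Save/Delete buttons above operate on Unity's Player Prefs. As a rough illustration of what that storage looks like, here is a minimal sketch using Unity's standard PlayerPrefs API. The key names are assumptions for illustration only; the actual keys used by the Convai SDK may differ.

```csharp
using UnityEngine;

// Hypothetical helper mirroring the Load/Save/Delete buttons above.
// The key names below are assumed, not the SDK's actual keys.
public static class PlayerDataPrefs
{
    const string NameKey = "Convai_PlayerName";   // assumed key
    const string SpeakerKey = "Convai_SpeakerID"; // assumed key

    public static void Save(string playerName, string speakerId)
    {
        PlayerPrefs.SetString(NameKey, playerName);
        PlayerPrefs.SetString(SpeakerKey, speakerId);
        PlayerPrefs.Save(); // flush to disk
    }

    public static (string name, string speakerId) Load() =>
        (PlayerPrefs.GetString(NameKey, ""), PlayerPrefs.GetString(SpeakerKey, ""));

    public static void Delete()
    {
        PlayerPrefs.DeleteKey(NameKey);
        PlayerPrefs.DeleteKey(SpeakerKey);
    }
}
```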
How to maintain the Player Data
Convai provides a pre-made component which you can add to any GameObject to make the PlayerDataContainer work out of the box.
Choose an existing GameObject or create a new one in the scene, then add the ConvaiPlayerDataHandler component to it. It should start working out of the box.
Optional Step
You can also create the required Scriptable Object manually: go to Assets > Convai > Resources, right-click in the Project panel, and navigate to Create > Convai > Player Data. Name it ConvaiPlayerDataSO.
Make sure you name the created Scriptable Object exactly ConvaiPlayerDataSO, as our system looks for this exact name.
Setup
To implement these language-specific features in your project:
Navigate to the Convai Setup Window within Unity.
Locate the Package Management section.
Click on the "Convai Custom TMP Package" button.
Once installed, simply import the character that requires language support and talk with it; the correct font will automatically render in the transcript.
For now, we provide fonts for these languages:
Arabic
Japanese
Korean
Chinese
RTL Support
We also provide support for Right-to-Left languages, such as Urdu, Persian, and Arabic, through our Chat UIs. For example, if you talk with an Arabic character, or if the character's name is in Arabic, the text will automatically enable the RTL feature provided by Unity to render proper transcripts.
TMP Importer (will appear automatically if TMP Essentials are not imported)
TMP Essentials Manual Import Process
Step 2
In the Objective section of the new Section, add the following text:
The time of day currently is {TimeOfDay}. Welcome the player and ask him how his {TimeOfDay} is going.
Notice that any string placed between curly brackets becomes a variable. In this case, we are adding the time of day as a variable. From Unity, we can pass either the word "Morning" or "Evening," and the character will respond accordingly.
Step 3
Now, let’s go back to Unity and make the necessary adjustments. Click on your NPC.
Click the Add Component button and add the Narrative Design Key Controller Component.
Step 4
In the Name field, enter TimeOfDay. In the Value field, specify the corresponding value for that variable, which could be Morning, Evening, or anything else you choose.
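If you prefer to set or update the value from code at runtime, a minimal sketch might look like the following. Note that the component type and method name used here (`NarrativeDesignKeyController`, `SetTemplateKey`) are assumptions based on the component name mentioned above; check the SDK source for the exact API before using this.

```csharp
using UnityEngine;

// Sketch only: updates the {TimeOfDay} template variable at runtime.
// "NarrativeDesignKeyController" matches the component name used in this
// guide, but the method for setting a key's value is an assumption.
public class TimeOfDayUpdater : MonoBehaviour
{
    // Reference to the component added to the NPC (assumed class name).
    public NarrativeDesignKeyController keyController;

    void Start()
    {
        // Pass "Morning" or "Evening"; the character's objective text
        // substitutes the value wherever {TimeOfDay} appears.
        keyController.SetTemplateKey("TimeOfDay", "Morning"); // assumed method
    }
}
```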
That’s it! Now let’s test it out. 🎉😎
Click the Avatar Field once to select the character's avatar in the Project window.
Select the Avatar and click Configure Avatar.
Select the Head option in the Mapping tab.
Select the Jaw Mapping and set it to None.
Finally scroll down and click Apply.
This will free the avatar's jaw mapping and allow the script to manipulate the Jaw bones.
Create a new script or use an existing script to define a variable that will store a reference to the Dynamic Info Controller Component you added to your NPC.
Example: Passing Player Health to the NPC
Initialize the Dynamic Info: In the script’s Start method, call the SetDynamicInfo method on the Dynamic Info Controller reference. This will set the dynamic information that the NPC will use. In this example, we’ll initialize the Player’s health as a dynamic variable.
Updating the Dynamic Info: Whenever you need to update the NPC with new information (such as a change in Player Health), call the SetDynamicInfo method on the Dynamic Info Controller.
Sample Scenario
At the start of the game, we set the Player’s health to 100 and send this information to the NPC as the initial value.
Then, when the player takes damage (simulated here by pressing the "K" key), we reduce the Player’s health and update the Dynamic Info in real time so that the NPC remains aware of the Player's current health status.
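The scenario above can be sketched in a small MonoBehaviour. The `SetDynamicInfo` call follows this guide's description; the `DynamicInfoController` type name is inferred from the component name used here, so verify the exact class name in your SDK version.

```csharp
using UnityEngine;

// Sketch of the sample scenario: initialize Player health at 100 and
// update the NPC whenever damage is taken (simulated with the K key).
public class PlayerHealthReporter : MonoBehaviour
{
    public DynamicInfoController dynamicInfo; // assign in the Inspector (assumed type name)
    int health = 100;

    void Start()
    {
        // Initial value sent to the NPC at the start of the game.
        dynamicInfo.SetDynamicInfo($"Player health: {health}/100");
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.K))
        {
            health = Mathf.Max(0, health - 10);
            // Keep the NPC aware of the current health in real time.
            dynamicInfo.SetDynamicInfo($"Player health: {health}/100");
        }
    }
}
```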
Example Conversation
Below, we provide a sample conversation showcasing how the NPC can react based on the dynamic health information of the Player. By dynamically updating the Player's health, NPCs can deliver responses that feel personalized and relevant to the current gameplay.
In summary
Add the Dynamic Info Controller to your NPC. Use SetDynamicInfo to initialize the dynamic variable at the start, and call SetDynamicInfo again whenever updates are needed.
This feature provides a powerful tool for creating NPC interactions that respond in real-time to the state of the game world, creating a more immersive experience for the player.
Key Features
Conversational AI: Leverage advanced NLP capabilities to create NPCs that can understand and respond to player input in natural, engaging ways.
Intelligent NPCs: Build characters with dynamic dialogue and behaviors that adapt to player actions and the game world.
Easy Integration: Our SDK is designed for quick and simple integration with your Unity projects, allowing you to focus on creating compelling gameplay experiences.
Cross-Engine Support: In addition to Unity, Convai supports other popular game engines, ensuring broad compatibility and flexibility for your development needs.
This is the Core version of the plugin. It has a sample scene for anyone to get started. This version of the plugin only contains the basic Convai scripts and Character Downloader.
Visit convai.com for more information and support.
Tailor the visual and functional interface of your avatar experience to match your device, context, and brand needs.
Customize Your Avatar Experience Interface
Convai’s Avatar Studio provides a variety of settings to adapt the interface layout, interaction mode, and branding for different platforms and use cases.
Screen Resolution Presets
Choose the layout that best fits your deployment:
Desktop
Tablet
Mobile
This ensures optimal visual presentation across different screen types.
Chatbox Settings
Enable or customize the chat interface as needed:
Chatbox Type
Select your preferred chatbox style from available templates.
Disable Chat Interface
Use the toggle to hide the chatbox completely if not needed.
Push-to-Talk Mode
Enable push-to-talk using the toggle for voice-activated interactions.
Character Vision Through Webcam
Let your avatar “see” the user and respond accordingly using webcam input.
Enable or disable vision-based input with a toggle.
Position the webcam within your interface layout.
Adjust the webcam display size using a slider for optimal placement.
Camera Settings
Control how the avatar scene is viewed by the user.
Field of View (FOV):
Adjust using the FOV slider (left = narrower, right = wider view)
Pan Camera:
Up/Down with “Pan Up/Down” slider
Branding Options
Integrate your brand identity directly into the avatar experience:
Display Logo
Toggle “Display Logo” to enable branding elements.
Upload Your Logo
Click “Upload your brand logo” to add it to your scene.
Manage Logo Display
These configuration tools ensure that your avatar interface not only works smoothly across platforms but also aligns with your project’s style, interaction needs, and brand.
Experience Settings
Control idle session handling, welcome interactions, microphone behavior, and input processing timing to fine-tune your avatar experience.
Final Touches Before Deployment
These settings define how your avatar behaves during live interaction and how the experience is sustained or terminated based on user activity.
AFK Timeout
Set an AFK (Away-From-Keyboard) timeout to manage idle sessions and conserve your pixel-streaming minutes.
Push-to-Talk Mode
Activate microphone input only when the assigned key is pressed.
You can assign a custom key for push-to-talk functionality.
Processing Frequency
Control how often the avatar processes and reacts to input, allowing it to act more proactively.
Open the dropdown menu for processing frequency.
Choose a time interval for the avatar to periodically evaluate multimodal inputs (voice, text, vision).
This enables agentic behavior — where the avatar can initiate interaction based on user presence or signals.
After making changes:
Click “Save Changes”
If you are creating an Avatar for this character for the first time, press the
Importing Custom Characters
Follow these instructions to set up your imported custom-model character with Convai.
To import your custom characters into your Convai-powered Unity project, you will first need to bring your model into the project. The model needs at least two animations: one for talking and one for idle.
Prerequisites
When you want to set up your custom character with Convai, you will need your character model and two animations: Idle and Talking.
Create an Animator Controller with the two animations that looks like this. You should also add a 'Talk' Boolean parameter so that you can trigger the talking animation. This is the bare-minimum Animator setup you need.
Step 1: Add Animator to your custom character
Select your character from the Hierarchy and Add Animator Component
The Convai Plugin ships with two pre-made Animator Controllers. You can choose one of these or assign your own custom controller, whichever fits your needs. For this demo we are going with the Feminine NPC Animator.
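The 'Talk' Boolean mentioned above can be driven from a small script using Unity's standard Animator API. This is a minimal sketch; in practice the Convai plugin toggles the parameter for you when the character speaks.

```csharp
using UnityEngine;

// Minimal sketch: drive the 'Talk' Boolean added to the Animator Controller.
// Call SetTalking(true) when the character starts speaking, false when done.
public class TalkAnimationDriver : MonoBehaviour
{
    Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    public void SetTalking(bool isTalking)
    {
        // Switches between the Idle and Talking states set up above.
        animator.SetBool("Talk", isTalking);
    }
}
```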
Step 2: Adding a Trigger Volume
With your custom character selected, add a collision shape of your choice; for this demo we are going with a Capsule Collider.
We will make this collider a trigger by enabling the Is Trigger option in the Inspector panel.
We will then adjust the Center, Radius, and Height of the collider so that it fits our character.
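To see the trigger volume in action, you can attach a small script that reacts when the player enters it. This sketch uses Unity's standard trigger callbacks and assumes the player object is tagged "Player".

```csharp
using UnityEngine;

// Sketch: detect the player entering the capsule trigger configured above.
public class NpcProximityTrigger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) // assumes the player is tagged "Player"
            Debug.Log("Player is near the NPC — conversation can begin.");
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            Debug.Log("Player left the NPC's range.");
    }
}
```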
Step 3: Add ConvaiNPC Component
With your custom character selected, add the ConvaiNPC component. After doing so, your GameObject should look like this:
We assume that you added only the components instructed above; your GameObject's component list may differ.
Copy your character's ID and name and paste them here.
Now your Custom Character is all set to work with Convai Plugin.
Compatibility
Check Convai plugin compatibility with Unity. Ensure smooth integration with your development tools.
Unity Version
The minimum supported Unity version is 2022.3.x. Earlier versions may not be compatible.
Supported Platforms
Tested Platform
Scripting Backend
API Level
Unity Version
API Level
Import & Run Sample Scenes
Import Convai sample content and run a scene to test a conversation immediately.
Introduction
Sample scenes are the fastest way to verify your installation and API setup end-to-end.
Prerequisites
Convai SDK installed
API key configured successfully
Step-by-step
1
Open Package Manager
In Unity, go to Window → Package Manager.
2
Troubleshooting
Sample doesn’t appear after import
Confirm you imported the sample and check the Assets/Samples folder.
No voice input detected
Conclusion
You’ve successfully imported a sample scene and verified a working conversation. Next, you can integrate Convai into your own scene via Custom Scene Setup.
Need help? For questions, please visit the .
Pre-built UI Prefabs
Convai UI Prefabs - Utilize ready-to-use UI elements for Convai integration.
We provide several out-of-the-box UI options for displaying the character's and the user's transcripts with the Convai Plugin. You can use and customize these prefabs.
The ConvaiNPC and ConvaiGRPCAPI scripts look for GameObjects with Convai Chat UI Handler as a component, and send any transcripts to the script so that it can be displayed on screen.
Types of UI
ChatBox
Prefab Name: Convai Transcript Canvas - Chat
Both the user's and the character's transcripts are displayed one after the other in a scrollable chat box.
Subtitle
Prefab Name: Convai Transcript Canvas - Subtitle
The user and character transcripts are displayed at the bottom, like subtitles.
Question-Answer
Prefab Name: Convai Transcript Canvas - QA
The user's transcript is displayed at the top, whereas the character's transcript is displayed at the bottom.
Mobile Optimised UI Styles
Prefab Name: Convai Transcript Canvas - Mobile Subtitle
Identical to the Subtitle UI, but includes a button that can be pressed and held for the user to speak. Ideal for portrait screen orientation.
Prefab Name: Convai Transcript Canvas - Mobile QA
Prefab Name: Convai Transcript Canvas - Mobile Chat
Functions to Know
Compatibility & Requirements
Supported Unity versions, render pipelines, and target platforms for the Convai Unity SDK.
This page summarizes the supported Unity versions and platforms for the Convai Unity SDK. If you’re on Unity 2023.1+ (ideally Unity 6) and targeting one of the supported platforms above, you’re ready to proceed with Getting Started → Installation and Setup.
Creating Animations for AI Avatars
Capture lifelike animations using your Meta Quest headset to bring your AI avatars to life—no mocap suit required.
Overview
Using the Convai XR Animation Capture app on your Meta Quest headset, you can create custom animations for your AI characters by simply acting them out in VR. These animations help your avatars express themselves naturally during conversations—whether in Unity, Unreal, Avatar Studio, or Convai Sim.
Haven’t set up the app yet?
Head over to the before continuing.
Recording Animations in VR
Step 1: Review Existing Animations (Optional)
When you launch the app, you’ll see your animation dashboard. From here, you can:
View previously recorded animations
Replay them to see how they look
Delete any that you no longer need
Step 2: Start Recording
In the app, click “Start Recording”.
A five-second countdown will start.
Begin performing your animation.
Step 3: Stop & Review
Once you're done, click “Stop”.
You can review the recorded animation by pressing the "Replay Animation" button.
This helps you decide whether to save, redo, or discard.
Step 4: Name & Save
Enter a clear and descriptive name (e.g., Wave Greeting, Points Left).
Click “Save & Upload”.
The animation is now uploaded to your Convai dashboard, ready to be:
Assigned to AI avatars
Used across Unity, Unreal, Avatar Studio, or Convai Sim
Keep Building Your Animation Library
Record and save multiple animations to populate your library. These can be reused across projects, allowing your avatars to intelligently perform gestures during conversations—making your virtual experiences more engaging and realistic.
Add Chat UI (Transcript UI)
Add a ready-made chat UI prefab to enable text input and conversation transcripts.
Introduction
Chat UI is optional, but it’s extremely useful for debugging, testing without voice, and demonstrating text-based conversations.
Prerequisites
Convai SDK installed
A scene with Convai setup
Step-by-step
1
Locate the Transcript UI prefab
In the Project window search bar, search:
Troubleshooting
UI doesn’t respond to clicks/typing
Confirm there is exactly one EventSystem in the scene.
Prefab not found
Conclusion
You’ve added the Transcript UI to your scene, enabling text input and readable conversation logs. You can now test conversations via keyboard or microphone.
Missing Newtonsoft Json
Fix missing Newtonsoft JSON issues in Unity with Convai. Resolve integration problems efficiently.
Our plugin has various scripts and dependencies that use Newtonsoft Json. If Newtonsoft Json is missing from the plugin, it could lead to a large number of errors as shown below:
Ensure that NewtonSoft.Json is present in your packages. Go to your project folder.
Then navigate to the Packages folder and click on manifest.json. A JSON file containing the project dependencies should open.
Add the Newtonsoft Json package at the top of the dependencies list:
"com.unity.nuget.newtonsoft-json": "3.2.1",
The final manifest.json should look like this.
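For reference, a minimal manifest.json with the Newtonsoft package added might look like the snippet below. The other entries are placeholders for illustration; keep whatever dependencies your project already lists, and your version numbers may differ.

```json
{
  "dependencies": {
    "com.unity.nuget.newtonsoft-json": "3.2.1",
    "com.unity.collab-proxy": "2.0.5",
    "com.unity.textmeshpro": "3.0.6"
  }
}
```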
Character Emotion
In this guide, we learn about character emotion coming from server
Convai characters emit emotions when they interact with the player. These emotions help make the character more human-like, and we are starting to implement a system that you, as a developer, can use to make your game more interactive using them.
Whenever the character responds to the user, we send back a list of emotions to the SDK, which looks something like this
For v0 of this system, we only send the emotions. In the future, we will apply the facial expressions corresponding to each emotion, which will make the character more interactive.
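As a rough illustration of how you might react to these emotions in your own game logic, here is a hedged sketch. The payload shape is assumed to be a simple list of emotion names (matching names like Joy, Trust, Fear, and Surprise used elsewhere in these docs); the glue code that delivers the list to this handler is up to you.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative only: the exact shape of the emotion payload is not shown
// in this guide, so this sketch assumes a plain list of emotion names.
public class EmotionReactor : MonoBehaviour
{
    // Called by your own glue code when the SDK delivers emotions.
    public void OnEmotionsReceived(List<string> emotions)
    {
        foreach (var emotion in emotions)
        {
            switch (emotion)
            {
                case "Joy":
                    Debug.Log("Character is happy — e.g., play a smile gesture.");
                    break;
                case "Fear":
                    Debug.Log("Character is afraid — e.g., play a step-back animation.");
                    break;
                default:
                    Debug.Log($"Unhandled emotion: {emotion}");
                    break;
            }
        }
    }
}
```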
Mindview
Learn how to use the Mindview feature to review the actual prompts to the LLM for your current or previous sessions and interactions.
This feature is available only on the Professional Plan and above.
Introduction
Narrative Design
Build goal‑oriented conversation flows using sections, decisions, and triggers that move the story forward without rigid dialogue trees.
Introduction
Narrative Design lets you guide a character with high‑level objectives while keeping conversations flexible. Instead of hard coding a tree of lines, you define goals and decision points, then allow the character to respond dynamically. This approach works for many domains such as games, learning and training simulations, tourism, retail assistants, and customer support kiosks. You can read more about the considerations behind Narrative Design
Tour Guide
Turn your AI avatar into an interactive tour guide using Convai Sim’s built-in tour planning tools
Introduction
In this guide, you'll learn how to bring your AI avatars to life by turning them into dynamic tour guides within immersive 3D environments.
Ensure that Active Input Handling under "Project Settings > Player" is set to Both or Input System Package (New).
Settings Panel
Settings Panel - Customize settings using Convai's Unity plugin utilities.
Settings Panel consists of two main sections.
Audio Settings
Interface Settings
Adding NPC to NPC Conversation
This guide will walk you through setting up the NPC to NPC conversation feature in the Convai SDK.
Step 1: Setting up Convai NPC
Go to your Convai NPCs:
Setting Up Unity Plugin
Follow these instructions to set up the Unity Plugin in your project.
The file structure belongs to the Core version of the plugin downloaded from the documentation.
Find the Convai package
Select In Project (left panel).
Click Convai SDK (or the installed Convai package).
3
Import Samples
Open the Samples section in the package details.
Click Import next to a sample (example: <SAMPLE_NAME>).
Expected result: A Samples folder appears under Assets, containing the imported sample content.
4
Open the sample scene
Navigate to:
Assets/Samples/Convai SDK for Unity/x.x.x/<SAMPLE_NAME>/Scenes
Open the scene:
<SCENE_NAME>
5
Run the conversation test
Click Play.
Speak using your microphone or type into the Chat UI input field.
Expected result: The character responds. Microphone conversation is hands-free (no push-to-talk required).
Check OS microphone permissions for Unity.
Confirm the correct microphone device is selected.
The Mindview section provides visibility into the prompt that was sent to the model to generate your character’s response.
It’s a powerful tool for:
Understanding how your character processes context.
Improving your Character Description, Knowledge Bank, and Language Settings.
Troubleshooting unexpected or inconsistent responses.
Accessing Mindview
You can open the Mindview tab directly from the left navigation menu of the Convai Playground.
When first opening it, you’ll be asked to select a conversation or interaction from the Memory tab.
Alternatively, you can start a new conversation — Mindview will automatically display the data for the latest message.
To access Mindview for a previous interaction:
Navigate to the Memory tab.
Expand the desired session.
Click the Mindview icon next to any message to open its corresponding prompt view.
Understanding the Mindview Interface
Once opened, you’ll see a structured view of how the model interpreted and responded to an input.
Header Information
At the top of the screen, the following details are displayed:
Session ID – Identifies which session the interaction belongs to.
Model Name – Shows the LLM used to generate the response.
User Query – Displays the exact message or query that initiated this prompt.
Main Prompt Section
This is the core of Mindview. It shows the entire chain of messages (System, Assistant, and User) that formed the complete prompt sent to the model.
Each section provides insight into how the model understands the character’s context and instructions before producing a response.
What Influences the Main Prompt
The main prompt displayed in Mindview is dynamically constructed using multiple aspects of your character and session:
Source
Description
Character Description
Defines the character’s backstory and core context. Appears within <back-story> ... </back-story> tags.
Language and Speech
Includes the allowed languages and relevant speech configuration.
Personality Traits
Controls the conversational tone, emotion, and formality level of the character.
Narrative Design
Use Cases
Debug and refine how your character’s prompt is constructed.
Identify missing or conflicting information within the character setup.
Validate that the right Knowledge Bank, Personality Traits, and Narrative Design data are being included in responses.
Conclusion
The Mindview tab gives creators deep transparency into the inner workings of Convai’s character response generation.
By analyzing prompts and understanding how context is layered, you can fine-tune your characters for more consistent, accurate, and personality-aligned interactions.
Videos
Watch this series of videos to learn how to create a Narrative Design Graph in the Convai Playground.
The demo features a Tour Guide scenario, showing step-by-step how to design, connect, and implement your own Narrative Design flow.
Accessing Narrative Design
Open your character in the Convai Playground and select 'Narrative Design' from the left sidebar. You will see a graph editor where you can connect the flow using nodes.
Narrative Graph
A narrative graph is made of four building blocks:
Sections
A Section contains:
Objectives – The goal the character aims to achieve in this part of the narrative.
Example: A virtual tour guide’s objective could be to welcome the user and ask if they want to begin the tour.
Decisions – Choices based on user interaction that direct the character to different sections.
Example: If the user says “yes” to a tour, the next section might start the tour route; if “no,” the character might offer alternative information.
Ensure decisions are clear and unambiguous; otherwise, the intended section may not be triggered.
Each Section has a unique ID.
Triggers
A trigger is a simple signal from your application indicating that a certain condition has been met. When fired, triggers advance the graph to the next connected section.
Each Trigger has a unique ID.
Examples
Location Based (Spatial): your app detects the user entered a zone and fires the trigger associated with that Section.
Time Based: a timer in your app expires and fires the trigger.
Event Based: an in‑app event occurs such as “safety demo completed” and you fire the trigger.
Example Scenarios
To better understand how Narrative Design works in practice, here are two example characters you can explore directly in Convai Playground.
Open each link, navigate to the Narrative Design tab, and review how the graph is structured with Sections and Triggers.
A training simulation scenario set in a manufacturing facility.
This character uses location-based triggers (e.g., entering the conveyor belt area or assembly line) to guide users through the workspace, explain safety protocols, and progress the tour.
Ideal for industrial training and onboarding simulations.
A real estate simulation where the character guides potential buyers through different rooms in a property.
Similar to the factory example, it uses location-based triggers — for instance, when the user enters a specific room (e.g., kitchen, bathroom, bedroom), the corresponding Section in the Narrative Graph is triggered.
This allows the character to dynamically adapt its dialogue to the user’s movement through the property.
Useful for virtual property tours, sales presentations, and customer onboarding.
Syntax Instructions
These special characters can be added to nodes in your Narrative Design graph to control specific outcomes and behavior.
Special Character: <speak>
Example: <speak> I'll say this exact line! </speak>
Use: Forces the character to respond exactly with the phrase inside the tags, without paraphrasing or adding extra context.

Special Character: *
Use: Forces an immediate transition to the next node, bypassing further decision checks.
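For instance, a hypothetical section objective could combine both markers so the character delivers a fixed line and then moves straight on (the exact phrasing here is illustrative, not from a real graph):

```
<speak> Welcome! Let's begin the safety tour. </speak>
*
```

The line inside <speak> is spoken verbatim, and * advances immediately to the next connected node.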
1. Setting Up Tour Prompts
Select your avatar to open the Edit & Publish menu.
Locate the Tour Planner Settings section.
Set up your Tour Prompts – these define what the avatar says at the start and end of the tour.
Example Prompts:
Welcome Prompt:
“Hi there! Ready to explore the fire station?”
→ Greet the user, introduce yourself, and invite them to start the tour.
End Prompt:
“That’s the end of the tour. Hope you had fun!”
→ Ask if the user has any questions, answer them, then say goodbye.
2. Defining Behavior
Choose how your avatar initiates the tour:
Wait for Player:
Avatar stays still and waits until the user approaches.
Engage on Sight:
Avatar detects the user visually and initiates conversation.
Max (Timed Engagement):
Avatar starts interacting after a set period of user inactivity.
3. Adding Tour Points
Click “Add Tour Point” — a green gizmo will appear in the scene.
Use the XYZ axes or click the flag icon to position the tour marker.
Enter a Tour Point Name (e.g., “Fire Truck”).
Add an Objective describing what the avatar will explain or do at this point (e.g., “Describe the fire truck and its role in emergencies.”).
Repeat this process to build a full tour path.
4. Managing the Tour
To remove a tour point, click the gizmo and hit the X icon.
Under User Elements, click “Set User Starting Point” to define where the player begins.
Click “Save Narrative Graph” to save your tour configuration.
Use Preview to test the experience.
Click Publish when you're ready to share your tour.
Summary
Convai Sim’s Tour Guide Mode transforms your AI avatar into an interactive, narrative-driven host—ideal for:
Education & virtual field trips
Employee onboarding
Training simulations
Museum or product walkthroughs
Once your tour is complete, you’re just one click away from publishing it across web, kiosks, and other platforms.
Our recommendation is Both. This way, you can use both the new and old input systems. Using the old input system can be faster when creating inputs for testing purposes.
How to Change the Talk Button or Any Input?
Double click on the "Controls" asset in your project tab.
You can set up multiple control schemes for different devices here; currently, schemes are provided for PC (Keyboard & Mouse) and Gamepad. For mobile, a joystick and buttons are provided, mapped to the Gamepad controls for functionality, but you can also add a Touchscreen scheme directly and use its features to trigger an Input Action. If you want support for a different device, click "Add Control Scheme" to add your own.
Find the Input Action you want to change in the window above. If you want to add a new Input Action, refer to the next section. In this case, we selected "Talk Key Action" to change the talk button. Click on "T [Keyboard]", then, in the Binding Properties window, click the "T [Keyboard]" button in the Path field.
Press the "Listen" button in the top left of the window that opens. If you prefer, you can instead choose your desired input from the categories below.
Press the key you want to assign, and it will be reflected in the Controls asset.
How to Add a New Input Action?
First, go to the Controls asset mentioned above and use the add button to create a new Input Action. For this example, we will call it Interact and bind it to the [E] key.
Then, click on the <No Binding> item to set up the binding for this action. As before, you can use the Listen button (it has a UI bug on Windows but works) or select the key from the dropdown. After selecting the binding (the [E] key in this example), don't forget to press Save Asset in the top menu.
You will now get an error saying that ConvaiInputManager does not implement OnInteract. We need to implement this. Open the "ConvaiInputManager.cs" script to do so ("Convai/Scripts/Runtime/Core/ConvaiInputManager.cs").
Your IDE may offer to implement the missing members. If it doesn't, you can write the OnInteract function manually, as in the last figure shown. The handler receives a callback context that tells you in which frame the input started, was performed, or was canceled, which you can use for different purposes. With that, the error should be gone and you are good to go!
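As a rough sketch of what such a handler can look like (the class and logging below are illustrative; in the actual project the method goes inside ConvaiInputManager.cs as described above):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class InteractHandler : MonoBehaviour
{
    // Called by Unity's Input System when the "Interact" action changes phase.
    // The callback context tells you whether the input started, performed, or was canceled.
    public void OnInteract(InputAction.CallbackContext context)
    {
        if (context.performed)
        {
            // Replace with your actual interaction logic
            Debug.Log("Interact pressed");
        }
    }
}
```

Checking `context.performed` ensures the logic runs once when the binding is triggered, rather than on every phase change.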
Audio Settings
Microphone Settings
The Microphone Settings section is primarily for troubleshooting and testing the microphone when using the Convai plugin.
In the Input section, you can view the microphones connected to your computer and select the desired one.
In the Test Input field, you can record your voice using the selected microphone in the Input section. After clicking Stop, you can listen to the recorded voice and observe the sound levels.
Interface Settings
Appearance
The first setting in this section is Appearance.
In the Appearance section, you can switch between Transcript UI designs.
There are three Transcript UI options:
ChatBox
QuestionAnswer
Subtitle
Upon selecting a UI from the dropdown menu, you can preview it briefly.
Display Name
The second section in Interface Settings is the Display Name section. This section allows you to change how the user's name appears in the Transcript UI.
Notifications Checkmark
The last section in Interface Settings is the Notifications Checkmark.
Convai sometimes displays on-screen notifications to inform the user. If you want to disable these notifications, you can clear the checkbox here. (If the box is green, notifications are active; if empty, they are inactive.)
For more information about notifications, you can refer to this link.
On the PC platform, you can open the Settings Panel by pressing F10. For mobile platforms, you need to press the Settings button in the UI designs.
Select the NPCs you want to include in the conversation.
Enable Group NPC Controller:
Click on the Group NPC Controller checkbox in the inspector panel.
Click Apply Changes to add the group NPC controller script.
Create or Find the Speech Bubble Prefab:
Create a new speech bubble prefab or use the one provided in the Prefabs folder.
Attach Required Components:
Add the speech bubble prefab and the player transform (optional, defaults to the main camera if not provided).
Set the conversation distance threshold variable (set it to zero to disable this feature, meaning NPC to NPC conversations will always happen regardless of the player’s distance).
Add Relevant Components:
Add components like lip sync, eye and head tracking, character blinking, etc., to the Convai NPC.
Step 2: Setting up NPC Manager
Create an NPC To NPC Manager GameObject:
Add an empty GameObject and rename it to NPC to NPC Manager (optional).
Add the NPC2NPC Conversation Manager Script:
Attach the NPC2NPCConversationManager script to the GameObject.
Configure the NPC Group List:
In the NPC Group List, click on the + icon to add a new list element.
Add the NPCs you want to include in the group conversation.
Post configuration of NPCs
Bring the NPCs close together
Play the scene to make sure everything is working as intended.
By following these steps you can set up and manage NPC to NPC conversations in your Convai-powered application. For further customization and integration, refer to the complete implementation code and adjust it as needed for your specific use case.
In the Menu Bar, go to Convai > API Key Setup.
Go to convai.com, and sign in to your Convai account. Signing in will redirect you to the Dashboard. From the dashboard, grab your API key.
Enter the API Key and click begin.
This will create an APIKey asset in the resources folder. This contains your API Key.
Open the demo scene by going to Convai > Demo > Scenes > Full Features
Click the Convai NPC Amelia and add the Character ID (or keep the default character ID). You can get the character ID for your custom character from this page. Now you can converse with the character. The script is set up so that you have to go near the character for them to hear you.
Now you can test out the Convai Demo Scene and talk to the character present there. Her name is Amelia and she loves hiking!
You can open the Convai NPC Script to replicate or build on the script to create new NPCs.
Try to extend the ConvaiNPC.cs script instead of directly modifying it, to maintain compatibility with other scripts.
Core AI Settings
Learn how to configure moderation, foundation model selection, and temperature for your AI character
Introduction
The Core AI Settings section defines the foundational behavior of your AI character by controlling safety filters, the underlying language model, and the creativity level of its responses. These settings have a significant impact on how your character interacts with users, balancing safety, accuracy, and creativity.
Main Features
1. Enable Moderation Filter
This setting allows you to filter out potentially harmful content, including hate speech, profanity, or inappropriate language. You can turn the moderation filter on or off using the toggle located at the top of the page. By default, this setting is enabled.
Disabling the Moderation Filter makes some foundation models unavailable.
Features like Narrative Design and Multilingual support will not work when moderation is disabled.
2. Select Foundation Model
Choose from a variety of Large Language Models (LLMs) from leading providers:
OpenAI
Anthropic
Google
Llama
Model availability depends on whether the Moderation Filter is enabled.
Supported LLMs
Below is a list of Large Language Models (LLMs) available in the Convai Playground under Core AI Settings.
Models marked as ✅ Flagship are the providers’ top-tier, most capable models — but usage of these is subject to the Flagship Interaction Cap based on your plan.
Flagship LLMs
This is the limit on the number of interactions you can perform using Flagship LLMs.
Example:
In the Indie Dev plan, you have a total monthly quota of 3000 Interactions. However, the Flagship LLM Interaction Cap is 1500.
If you use GPT-4.1, your Flagship LLM quota will be exhausted after 1500 interactions.
You will then need to switch to a non-Flagship LLM for the remaining 1500 interactions.
OpenAI: GPT-4.1, GPT-4o, GPT-4.1-mini, GPT-4.1-nano, GPT-4o-mini
Anthropic: Claude-Opus-4.1, Claude-Opus-4, Claude-4-Sonnet, Claude-3.7-Sonnet
Google: Gemini-2.5-Flash, Gemini-2.5-Flash-Lite, Gemini-2.0-Flash, Gemma-3n-e2b, Gemma-3n-e4b
Llama: Llama-4-Maverick, Llama-4-Scout, Llama-3.3-70B
Each model's Flagship marking is shown in the Playground.
3. Temperature Control
Function: Adjusts the randomness and creativity in the AI’s responses.
Slider Range: 0.0 (most deterministic) to 1.0 (most creative).

| Temperature Range | Behavior | Use Case |
| --- | --- | --- |
| Low (0.0–0.3) | Deterministic, consistent | Factual Q&A, compliance-critical interactions |
| Medium (0.4–0.7) | Balanced accuracy and creativity | Conversational agents, customer support |
| High (0.8–1.0) | Diverse, creative, sometimes unpredictable | Storytelling, brainstorming, roleplay |

Lower temperature sharpens the probability distribution for more predictable word choices; higher temperature flattens the distribution, allowing less likely words to appear more frequently.
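Conceptually, this sharpening and flattening is temperature-scaled softmax sampling. The following standalone C# sketch is an illustration only (Convai's internals are not specified here); note that a temperature of exactly 0.0 would divide by zero, so in practice it means "pick the top choice":

```csharp
using System;
using System.Linq;

class TemperatureDemo
{
    // Softmax with temperature: p_i = exp(z_i / T) / sum_j exp(z_j / T)
    static double[] Softmax(double[] logits, double temperature)
    {
        double[] scaled = logits.Select(z => Math.Exp(z / temperature)).ToArray();
        double sum = scaled.Sum();
        return scaled.Select(e => e / sum).ToArray();
    }

    static void Main()
    {
        double[] logits = { 2.0, 1.0, 0.5 };

        // Low temperature: mass concentrates on the top token (~0.99)
        Console.WriteLine(string.Join(", ", Softmax(logits, 0.2).Select(p => p.ToString("F3"))));

        // High temperature: distribution flattens, rarer tokens gain probability
        Console.WriteLine(string.Join(", ", Softmax(logits, 1.0).Select(p => p.ToString("F3"))));
    }
}
```

With the same logits, lowering T from 1.0 to 0.2 pushes the top token's probability from roughly 0.63 toward 0.99, which is why low temperatures feel deterministic.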
Conclusion
The Core AI Settings give you precise control over your character’s foundation model, safety filters, and response style. By adjusting these parameters, you can create an AI that balances safety, reliability, and creativity to suit your specific application.
Metahuman Avatars
Upload custom Metahuman characters from Unreal Engine to Avatar Studio using the Convai Asset Uploader.
Introduction
This guide walks you through uploading custom Metahuman avatars to Avatar Studio using the Convai Asset Uploader. You'll generate a new project tailored for Metahumans, import your Metahuman asset, configure it, and finally upload it using Convai’s built-in tools.
Prerequisites
Before you begin:
Create your project using the , and answer Y when asked if you’re using a Metahuman.
Ensure you have a downloadable Metahuman available via Quixel Bridge.
Step-by-Step Guide
1. Open the Project
Navigate to the folder where your project was created. Double-click the YourProjectName.uproject file to open it in Unreal Engine.
2. Add a Metahuman via Quixel Bridge
Go to Window > Quixel Bridge.
In Bridge, select Metahumans from the left-hand menu.
Pick a Metahuman and click:
3. Locate and Open the Character Blueprint
After importing:
Go to Content/Metahumans/<CharacterName>/.
Open the Blueprint: BP_<CharacterName>
⏳ This may take some time to load.
4. Fix Compile Errors
If you see compile errors:
In the bottom-right, click Enable Missing under any Missing Plugins or Missing Project Settings notices.
Click Restart Now when prompted.
Reopen the Blueprint and ensure it compiles successfully.
5. Prepare the Asset for Upload
Locate the folder:
Plugins/<random code> Content/
(e.g., Plugins/AHK3LNKVC7FZA3I5JG3V Content/)
Move the entire Content/Metahumans/ folder into this directory.
This folder determines what gets packaged and uploaded. Make sure everything is placed correctly.
6. Open the Asset Uploader Tool
Navigate to Content/Editor/AssetUploader.
Right-click on AssetUploader and select Run Editor Utility Widget.
7. Select the Character Asset
Navigate to the Plugins/<random code> Content/Metahumans/<CharacterName>/ directory.
Select the BP_<CharacterName> Blueprint.
Then, in the Asset Uploader window, click Pick Asset.
8. Capture a Thumbnail
In the Asset Uploader window, click Capture Thumbnail to generate a preview image for your avatar.
9. Verify Functionality Before Upload
Drag BP_<CharacterName> into the Level.
Select the character and locate BP_ConvaiChatbotComponent in the Details panel.
Input a test Character ID.
10. Upload the Avatar
In the Asset Uploader, click Create Asset.
This will:
Package the avatar for Win64
Monitor the Output Log:
Look for Package completed
Then wait for Uploaded Asset
If there’s an error during packaging, check the logs and share them on the for support.
To delete a previously uploaded asset, open AssetUploader and click Delete.
Accessing the Avatar
Go to
Open the Upload Your Custom Avatar section
Your Metahuman will appear, ready for use.
Summary
Using the Convai Asset Uploader, uploading custom Metahuman avatars is quick and reliable. With proper setup and a few clicks, your characters are live in Avatar Studio and ready for real-time AI interaction.
Reallusion Avatars
Upload custom Reallusion characters from Unreal Engine to Avatar Studio using the Convai Asset Uploader.
Introduction
This guide explains how to prepare and upload Reallusion-based avatars using the Convai Asset Uploader. You’ll import your Reallusion character and animations, apply Convai’s animation and lipsync systems, and then upload your avatar to Avatar Studio using the built-in AssetUploader tool.
Prerequisites
Make sure you have the following ready:
A project created with the , where you answered N to “Are you using a Metahuman?”
A custom Reallusion character exported and ready for import
Step-by-Step Guide
1. Open the Project
Navigate to your project directory and open the .uproject file to launch it in Unreal Engine.
2. Import Reallusion Character & Animations
Follow this to import your Reallusion assets:
[00:00 – 07:25]: Import your character and animations
[07:50 – 08:20]: Create a new Blueprint Class for your character
3. Connect Convai Animations
Now we’ll bind the correct animation logic to your character.
We’ve already added the necessary Animation Blueprint for you:
Go to Content/ConvaiReallusion/
Locate and assign the ConvaiReallusion Animation Blueprint to your character’s Skeletal Mesh
This blueprint ensures that your Reallusion character plays proper idle/talking animations in sync with Convai interactions.
Refer to the for this step: [10:12 – 12:48]
4. Add FaceSync for Lipsync
To enable lipsync:
Add the FaceSync component to your character’s Blueprint
See how in the same : [12:48 – 12:56]
5. Set Correct Rotation
Reallusion characters typically face the wrong direction by default. Fix this by:
Opening the character Blueprint
Selecting the SkeletalMesh component
Setting the Z Rotation to -90 in the Details panel
6. Prepare Files for Upload
Go to:
Plugins/<random code> Content/
(e.g., Plugins/AHK3LNKVC7FZA3I5JG3V Content/)
Drag and move both of the following folders into this directory:
Your character’s folder (containing the Blueprint and animations)
The Content/ConvaiReallusion/ folder (containing the Convai animation Blueprint)
This folder determines what gets packaged and uploaded. Make sure everything is placed correctly.
7. Open the AssetUploader Tool
Navigate to Content/Editor/AssetUploader
Right-click and select Run Editor Utility Widget
8. Select the Character Asset
Navigate to Plugins/<random code> Content/YourCharacterFolder/
Select your character’s Blueprint Class
Then, in the Asset Uploader window, click Pick Asset
9. Capture a Thumbnail
Click Capture Thumbnail to create a preview image that will appear in Avatar Studio.
10. Verify Before Upload
Before uploading, do a quick functional test:
Drag the character into your Level
Select it and locate the BP_ConvaiChatbotComponent in the Details panel
Paste in a test Character ID
11. Upload the Avatar
In the Asset Uploader, click Create Asset
This triggers:
Packaging the asset for Win64
Monitor the Output Log:
Wait for Package completed
Then look for Uploaded Asset
If there’s an error during packaging, check the logs and share them on the for support.
To delete a previously uploaded asset, open AssetUploader and click Delete.
Accessing the Avatar
Visit
Go to Upload Your Custom Avatar
Your Reallusion character will now be available for selection and use
Summary
Using the Convai Asset Uploader, uploading Reallusion avatars is quick and reliable. With proper setup and a few clicks, your characters are live in Avatar Studio and ready for real-time AI interaction.
Managing sessionID Locally
Session ID Management - Manage unique session IDs for Convai Unity integration.
In a typical application integrating with the Convai API, maintaining a consistent session ID across different sessions is crucial for providing a seamless user experience. This documentation outlines the best practices for storing and retrieving session IDs using Unity's PlayerPrefs, including detailed steps and example scripts.
Importance of Session IDs
A session ID uniquely identifies a session between the client and the Convai server. Storing the session ID locally ensures that the same session ID is used across different sessions, which helps in maintaining context and continuity in interactions.
Storing Session IDs
When initializing a session, if a session ID is not available locally, it should be fetched from the server and then stored locally for future use. Here's how you can achieve this:
Fetch and Store Session ID: When initializing a session, check if a session ID is stored locally. If not, fetch a new session ID from the server and store it using PlayerPrefs.
Retrieving Session IDs
When initializing your application, retrieve the stored session ID to ensure continuity in user interactions.
Example Class for Session Management
The following example class demonstrates how to manage session IDs using PlayerPrefs in a Unity project:
Detailed Steps for Session Management
Initialize Session: Call InitializeSessionIDAsync to check if a session ID is stored. If not, fetch and store it.
Store Session ID: Use PlayerPrefs.SetString(characterID, sessionID) to store the session ID locally.
Retrieve Session ID: Use PlayerPrefs.GetString(characterID, string.Empty) to retrieve the stored session ID.
Use Session ID: Pass the session ID to your Convai API calls to maintain session continuity.
Best Practices
Error Handling: Ensure proper error handling when fetching and storing session IDs.
Security: Consider encrypting sensitive information stored in PlayerPrefs.
Performance: Use asynchronous methods to avoid blocking the main thread when fetching session IDs.
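The storage-side best practices above can be sketched with a small wrapper (illustrative only; this class is not part of the Convai SDK, and PlayerPrefs itself rarely throws, so the try/catch is defensive):

```csharp
using System;
using UnityEngine;

public static class SessionStore
{
    // Persist the session ID with basic error handling.
    public static void Save(string characterID, string sessionID)
    {
        try
        {
            PlayerPrefs.SetString(characterID, sessionID);
            PlayerPrefs.Save(); // flush to disk; writes can otherwise be deferred
        }
        catch (Exception e)
        {
            Debug.LogError($"Failed to persist session ID: {e.Message}");
        }
    }

    // Returns string.Empty when no session ID has been stored for this character.
    public static string Load(string characterID) =>
        PlayerPrefs.GetString(characterID, string.Empty);
}
```

Keying by characterID means each character keeps its own session, matching the example class below.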
Adding Narrative Design to your Character
Follow this guide to incorporate Narrative Design into your Convai-powered characters. Follow this step-by-step tutorial, open your project, and let's begin!
Convai Playground
Step 1: Select your Character in which you want to enable Narrative Design
For this demo, we are using Seraphine Whisperwind; you can select any character for which you want to enable Narrative Design.
Step 2: Open Narrative Design in Convai Playground
Select the Narrative Design option from the side panel and create your narrative design
For more information on how to create a narrative design in the Playground, please refer to the following YouTube video series.
For this sample we have created the following Narrative design
You are all set to bring your character from Convai Playground to Unity. Let's hop over to Unity to continue the guide.
Unity Setup
Step 1: Add the Narrative Design Manager Component
Using Add Components Button in Convai NPC (Recommended Way)
1: Select your Convai Character in the scene and look for ConvaiNPC component in the inspector panel. Click on Add Components button
2: Select Narrative Design Manager checkbox and then click on Apply Changes button
Using Unity Inspector
1: Select your Convai Character and find Add Component button in the inspector panel
2: Search for Narrative Design Manager in the search box and select it
Step 2: Setup the Narrative Design Component
After adding the Narrative Design Component, you will be able to see the following component.
This component assumes that the API key is set up correctly, so ensure that it is; otherwise an error will be thrown.
After being added, the component retrieves the sections for the character ID taken from the ConvaiNPC; this may take some time depending on your network speed.
The following section events are for the character used in this demo; you will see the section events corresponding to your own character with Narrative Design enabled.
Getting to know the Narrative Design Component
Expanding a section event, you will see two Unity events you can subscribe to: one is triggered when the section starts, and the other when the section ends.
Getting to know about Section Triggers
Section triggers are a way to directly invoke a section in narrative design and can be used to jump to a different section in your narrative design
Step 1: Select the game object you want to make a trigger. In this example we selected a simple cube, but it's up to your imagination.
Make sure the game object you have chosen as a trigger has a collider attached to it.
Step 2: Add Narrative design Trigger from Add Component menu by searching for it
Step 3: Make the collider a trigger.
Step 4: Assign your Convai NPC to Convai NPC field
Now you can select from the "Trigger" dropdown which trigger should be invoked when player enters this trigger box.
We have also added a way for you to invoke this trigger manually: you can use the InvokeSelectedTrigger function to invoke the trigger from anywhere.
Invoke Trigger from any script
You can use this code block as a reference to invoke the trigger from anywhere
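A minimal sketch of such a call (assuming the trigger component class is named NarrativeDesignTrigger, matching the component added above; the wrapper class itself is hypothetical):

```csharp
using UnityEngine;

public class TriggerInvoker : MonoBehaviour
{
    // Reference to the Narrative Design Trigger component set up earlier
    [SerializeField] private NarrativeDesignTrigger narrativeDesignTrigger;

    // Call this from anywhere, e.g., a UI button or a game event
    public void FireSelectedTrigger()
    {
        // Invokes whichever trigger is selected in the component's "Trigger" dropdown
        narrativeDesignTrigger.InvokeSelectedTrigger();
    }
}
```

Assign the trigger component in the Inspector, then call FireSelectedTrigger from any script or UnityEvent.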
Long Term Memory
Learn how to enable characters to retain conversation history across multiple sessions.
Long-Term Memory (LTM) enables the persistent storage of conversational history with NPCs, allowing players to seamlessly continue interactions from where they previously left off, even across multiple sessions. This feature significantly enhances the realism of NPCs, aligning with our goal of creating more immersive and lifelike characters within your game.
Prerequisite: Have a project with Convai SDK version 3.1.0 or higher. If you don't have it, check this documentation
Add the Long-Term Memory Component onto your character
Make sure that Long Term Memory is enabled for that character
Long Term Memory should now be working for your character.
Components of the LTM System
Convai Long Term Memory Component
This component enables or disables LTM right from the Unity Editor.
Toggling Long Term Memory
1) Click the button provided in the component
2) It will take some time to update, and after that the new status of the LTM should be visible in the inspector.
Since enabling or disabling Long-Term Memory (LTM) for a character is a global action that impacts all players interacting with that character, we strongly recommend against toggling the LTM status at runtime. This functionality should be managed exclusively by developers or designers through the editor to ensure consistent gameplay experiences.
Troubleshooting
Grpc.Core.RpcException: Status(StatusCode=InvalidArgument, Detail="Cannot find speaker with id: 99fbef96-5ecb-11ef-93ce-42010a7be011.")
If you encounter this error, ensure that the SpeakerID was created using the same API key currently in use. If you're uncertain about the API key used, you can reset the SpeakerID and PlayerName by accessing the ConvaiPlayerDataSO file located in Assets > Convai > Resources, allowing you to start the process anew.
Management of Speaker ID(s)
It is essential for developers to efficiently manage the Speaker ID(s) generated using their API key, as the number of IDs that can be created is limited and dependent on the subscription tier. Proper management ensures optimal usage of resources and prevents potential disruptions in the application's functionality.
Speaker ID limits per API key are as follows:
Tier
Limit
You can view all the Speaker ID(s) associated with a specific API key by accessing the Convai Window within your Unity project. This feature provides a comprehensive list of IDs, allowing for easier management and monitoring.
Ensure that the API key is correctly entered; otherwise, the feature will not function as expected. Accurate API key input is critical for accessing and managing Speaker ID(s) through the Convai Window in Unity.
Head over to Long Term Memory Section
If the message "No Speaker ID(s) Found" appears, there is no need to proceed with this guide. However, if a Speaker ID list is displayed, it's advisable to delete any ID(s) that are no longer in use or needed to optimize your available resources.
Adding Lip-Sync to your Character
Learn to add lip sync to your Unity characters using Convai. Improve realism and interactivity.
Lip Sync System
Convai sends Visemes or Blend Shape Frames from the back end, depending on the face model the developer chooses. The Convai SDK extracts and parses this data out of the box and provides it to the Convai LipSync Component, which then relies on the SkinnedMeshRenderer's Blendshape Effectors and Bone Effectors to give Convai-powered NPCs realistic lipsync.
Components of LipSync System
Viseme Effector List
This is where the developer tells the Convai SDK which index of the Blendshape array will be affected, and by how much. To better explain how it works, let's look at a diagram.
Here, the value coming from the server affects the Blendshape at index 116 with a 0.2 multiplier and the Blendshape at index 114 with a 0.5 multiplier. The engine representation of this would look something like this.
So, you can make your own Effector list or use one of the many that we ship in the SDK.
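As an illustrative sketch (not the SDK's actual implementation), applying the example multipliers above to a SkinnedMeshRenderer could look like this:

```csharp
using UnityEngine;

public class VisemeEffectorSketch : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer faceRenderer;

    // serverValue is the viseme weight received from the server, assumed in 0..1
    public void ApplyViseme(float serverValue)
    {
        // Unity blendshape weights are typically 0..100, hence the extra scaling.
        // Index 116 is affected by a 0.2 multiplier, index 114 by 0.5 (the doc's example values).
        faceRenderer.SetBlendShapeWeight(116, serverValue * 0.2f * 100f);
        faceRenderer.SetBlendShapeWeight(114, serverValue * 0.5f * 100f);
    }
}
```

The effector list is exactly this mapping of (blendshape index, multiplier) pairs, stored as data instead of code.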
How to Create your own Viseme Effector List
Right-click inside the Project panel and head over to Create > Convai > Expression > Viseme Skin Effector, which will create a Viseme Effector List Scriptable Object in which you can define your own values.
Viseme Bone Effector List
This is where the developer tells the Convai SDK how much each value coming from the server will affect the rotation of the bone. To better explain how it works, let's look at a diagram.
Here, the bone's rotation is affected by the values coming from the server multiplied by the values in the effector list. For example, for TH the value affects the bone's rotation with a 0.2 multiplier, and so on. The engine representation of this would look something like this.
So, you can make your own Bone Effector list or use one of the many that we ship in the SDK.
We use this formula to calculate the rotation
How to Create Your Own Viseme Bone Effector List
Right-click inside the Project panel and head over to Create > Convai > Expression > Viseme Bone Effector, which will create a Viseme Bone Effector List Scriptable Object in which you can define your own values.
Convai Lipsync Component
When you attach this component to your Convai Character, you will see something like this.
Let's learn what these fields are.
Facial Expression Data
Head | Teeth | Tongue
Renderer: Skin Mesh Renderer which corresponds to that specified part of the body
Steps to add Lipsync to your Convai Character
Select your Convai-powered character in the Hierarchy.
In the Inspector panel, find the ConvaiNPC component; there you will see the Add Components button.
Click on it, select Convai Lipsync Component, and click Apply.
Now you can configure the component with your own custom configuration or use one of the many presets Convai ships with the SDK.
Your lipsync component is now ready to use in your application.
Migration Guide
Convai Plugin 3.3.4 to 4.0.0
This guide explains how to migrate a Unity project from the old Convai SDK to the latest Convai SDK.
Important: Back Up Your Project
Before you begin, create a full backup of your Unity project to avoid accidental data loss.
1
Remove the old Convai SDK
Open your Unity project.
In the Project window, go to Assets
2
Install the latest Convai SDK
Install the newest SDK using one of the following:
3
Set up API key
4
Update scene setup
Update these key objects in your scene:
5
Lip Sync setup (optional)
If your character is humanoid and uses facial lip movement:
Custom Scene Setup
Add the Convai Manager, set up a player, and connect characters to Convai.
Introduction
This guide shows how to integrate Convai into your own Unity scene by adding the Convai Manager, creating a Convai Player, and configuring Convai Characters.
Prerequisites
Convai SDK installed
API key configured successfully
Your scene opened in Unity
Step-by-step
1
Add the Convai Manager
In the Unity top menu, go to GameObject → Convai → Setup Required Components, or right-click in the Hierarchy → Convai → Setup Required Components.
Troubleshooting
Validation fails
Confirm that a Convai Manager object exists in the scene.
Ensure you added Convai Player Component to a player object.
Conclusion
You’ve integrated Convai into your custom scene, validated the setup, and confirmed characters can respond. Next, optionally add Chat UI to support text input and transcripts.
Need help? For questions, please visit the .
Importing Ready Player Me (RPM) Characters
This guide walks you through the process of importing Ready Player Me (RPM) characters into a Convai-powered Unity project, configuring them, and integrating Convai NPC components.
Introduction
Ready Player Me (RPM) allows users to create and customize 3D avatars easily. By integrating RPM characters into Convai's Unity SDK, you can bring dynamic NPCs to life with advanced AI-driven interactions. This guide covers the step-by-step process to set up RPM characters in your Unity project with Convai.
Building for iOS/iPadOS
This guide will walk you through the process of installing Convai-powered Unity applications on iOS and iPadOS devices.
Prerequisites
Before you begin, make sure you have the following:
Unity 2022.3 or later
macOS Permission Issues
macOS security permission issue with custom DLLs in Unity and Mac Configuration in build settings
Allowing the grpc_csharp_ext.bundle dll file in macOS
Using external DLLs in Unity on MacOS can lead to security permission issues due to Apple's strict security measures. Here's a step-by-step guide to resolving this common problem.
Incorporates objectives or context from active Narrative Design sections into the user’s input.
Knowledge Bank
Adds relevant external knowledge to improve factual accuracy or domain-specific responses.
Long-Term Memory
Injects persistent information learned across sessions, when applicable.
Quick Guide On Adding AI Characters to Your Unity Project
Retrieve Session ID: Use PlayerPrefs.GetString(characterID, string.Empty) to retrieve the stored session ID.
Use Session ID: Pass the session ID to your Convai API calls to maintain session continuity.
public static async Task<string> InitializeSessionIDAsync(string characterName, ConvaiService.ConvaiServiceClient client, string characterID)
{
// Retrieve stored session ID if it exists
string sessionID = PlayerPrefs.GetString(characterID, string.Empty);
// If no session ID is stored, initialize a new one
if (string.IsNullOrEmpty(sessionID))
{
sessionID = await ConvaiGRPCAPI.InitializeSessionIDAsync(characterName, client, characterID, sessionID);
// Store the new session ID locally
if (!string.IsNullOrEmpty(sessionID))
{
PlayerPrefs.SetString(characterID, sessionID);
PlayerPrefs.Save();
}
}
return sessionID;
}
private async void Start()
{
// Initialize session ID on start
string characterID = "YourCharacterID"; // Replace with your actual character ID
string sessionID = await InitializeSessionIDAsync("CharacterName", grpcClient, characterID);
if (!string.IsNullOrEmpty(sessionID))
{
Debug.Log("Session ID initialized and stored: " + sessionID);
}
else
{
Debug.LogError("Failed to initialize session ID.");
}
}
using System;
using System.Threading.Tasks;
using Convai.Scripts.Utils;
using Google.Protobuf;
using Grpc.Core;
using Service;
using UnityEngine;
using static Service.GetResponseRequest.Types;
public class SessionManager : MonoBehaviour
{
public ConvaiService.ConvaiServiceClient grpcClient;
private void Start()
{
// Initialize session ID on start
InitializeSession("CharacterName", grpcClient, "YourCharacterID");
}
private async void InitializeSession(string characterName, ConvaiService.ConvaiServiceClient client, string characterID)
{
string sessionID = await InitializeSessionIDAsync(characterName, client, characterID);
if (!string.IsNullOrEmpty(sessionID))
{
Debug.Log("Session ID initialized and stored: " + sessionID);
}
else
{
Debug.LogError("Failed to initialize session ID.");
}
}
public static async Task<string> InitializeSessionIDAsync(string characterName, ConvaiService.ConvaiServiceClient client, string characterID)
{
string sessionID = PlayerPrefs.GetString(characterID, string.Empty);
if (string.IsNullOrEmpty(sessionID))
{
sessionID = await ConvaiGRPCAPI.InitializeSessionIDAsync(characterName, client, characterID, sessionID);
if (!string.IsNullOrEmpty(sessionID))
{
PlayerPrefs.SetString(characterID, sessionID);
PlayerPrefs.Save();
}
}
return sessionID;
}
}
Set the group discussion topic.
Personal: 1
Gamer / Indie / Professional: 5
Partner / Enterprise: 100 (can be customized)
Viseme Effectors List: How the SkinnedMeshRenderer's blendshapes will be affected by the values coming from the server.
Jaw | Tongue Bone Effector: How much the bone's rotation will be affected by the values coming from the server.
Jaw | Tongue Bone: Reference to the bone that controls the jaw or tongue, respectively.
Weight Blending Power: Percentage used to interpolate between two frames in LateUpdate.
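The "Weight Blending Power" interpolation described above amounts to a linear blend between the previous frame's weight and the new value from the server. As a simplified sketch (not the SDK's actual code):

```csharp
static class WeightBlending
{
    // Linearly interpolates a blendshape weight between the value applied on
    // the previous frame and the new target value from the server, by
    // blendingPower in the range 0..1 (equivalent to Unity's Mathf.Lerp).
    public static float Blend(float previousWeight, float targetWeight, float blendingPower)
    {
        return previousWeight + (targetWeight - previousWeight) * blendingPower;
    }
}
```

A blending power of 1 snaps to the server value every frame, while lower values smooth out abrupt changes at the cost of slight lag.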
In the Project Panel, navigate to: Assets > Ready Player Me > Resources > Settings
Right-click inside the folder and go to Create > Ready Player Me > Avatar Configuration.
This will generate an Avatar Config asset.
Select the created asset and, under the Inspector Panel, locate the Morph Targets section.
Click Add, select the required morph targets (Oculus Visemes and ARKit), and save the asset.
Locate Assets > Ready Player Me > Resources > Settings > AvatarLoaderSettings and assign the Avatar Config asset to the Avatar Config field.
Save the asset.
Step 3: Import the RPM Character
Navigate to Tools > Ready Player Me > Avatar Loader.
Paste or enter your RPM Model Link in the provided input field.
Click Load Avatar into Current Scene to import your character.
Step 4: Integrate Convai Components
Select your imported RPM GameObject in the Hierarchy Panel.
Add the Convai NPC component to the GameObject.
Fill in the name and ID of the Convai NPC you wish to integrate.
Click Add Components inside the Convai NPC component.
Choose the components you want and click Apply Changes.
Attach a Capsule Collider to the GameObject and configure its size and center to align with the character's body proportions. Ensure that the collider accurately encapsulates the character for optimal physics interactions and collision detection.
Assign an Animation Controller to the Animator component of the GameObject. The Convai SDK offers two predefined animation controllers (Feminine and Masculine) that you can use. Alternatively, you can integrate a custom controller tailored to your requirements.
Enhance your character with additional features:
Add LipSync: Follow this guide to integrate LipSync into your character.
Implement Narrative Design: Check out to add Narrative Design.
Set up Actions: Explore action-based interactions using .
Conclusion
You have successfully integrated a Ready Player Me character into your Convai-powered Unity project. You can now leverage Convai’s capabilities to bring intelligent, interactive NPCs to life. 🎉😎
For more details about Ready Player Me, visit Ready Player Me.
Xcode (latest version recommended)
Apple Developer account
Project with Convai's Unity SDK integrated and running properly
MacBook for building and deploying to iOS/iPadOS
Step 1: Prepare Your Unity Project
Open your Convai-powered Unity project.
Ensure you have the latest version of the Convai Unity SDK imported and set up in your project.
Unity project with Convai SDK imported
Step 2: Configure Build Settings
In Unity, go to File → Build Settings.
Select iOS as the target platform.
Click Switch Platform if it's not already selected.
Check the Development Build option for testing purposes.
Unity Build Settings window with iOS selected and Development Build checked
If you wish to add the required files manually, follow Step 3. If you want it done automatically, jump to Step 4.
Step 3: Manually add Required Files
Add link.xml
Create a new file named link.xml in your project's Assets folder.
Add the following content to the file:
Unity project view showing the link.xml file in the Assets folder
This file prevents potential FileNotFoundException errors related to the libgrpc_csharp_ext.x64.dylib file.
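The file contents referenced above are not reproduced in this extract. As a rough reference (an assumption, not necessarily the exact file shipped by Convai), a minimal link.xml that shields the gRPC assemblies from Unity's managed code stripping looks like this:

```xml
<linker>
  <!-- Preserve the gRPC managed assemblies so their native interop entry
       points are not removed by Unity's managed code stripping -->
  <assembly fullname="Grpc.Core" preserve="all" />
  <assembly fullname="Grpc.Core.Api" preserve="all" />
</linker>
```

Verify the exact assembly names against the content provided in the official Convai documentation or SDK package.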
Add iOSBuild.cs Script
Create a new C# script in Assets/Convai/Scripts named iOSBuild.cs.
Add the following content to the script:
Step 4: Install required gRPC dlls for iOS:
Go to Convai -> Custom Package Installer
Click on Install iOS Build Package
Attach the script iOSBuild.cs to any GameObject in your scene.
Step 5: Build the Xcode Project
In Unity, go to File → Build Settings.
Click Build and choose a location to save your Xcode project.
Wait for Unity to generate the Xcode project.
Step 6: Configure and Build in Xcode
Open the generated Xcode project.
In Xcode, select your project in the navigator.
Select your target under the "TARGETS" section.
Go to the "Signing & Capabilities" tab.
Ensure that "Automatically manage signing" is checked.
Select your Team from the dropdown (you need an Apple Developer account for this).
If needed, change the Bundle Identifier to a unique string.
Xcode window showing the Signing & Capabilities tab with Team and Bundle Identifier fields highlighted
Step 7: Build and Run
Connect your iOS device to your Mac.
In Xcode, select your connected device as the build target.
Click the "Play" button or press Cmd + R to build and run the app on your device.
Xcode toolbar showing the connected device selected and the "Play" button highlighted
Troubleshooting
If you encounter any build errors, ensure all the steps above have been followed correctly.
Check that your Apple Developer account has the necessary provisioning profiles and certificates.
If you face any gRPC-related issues, verify that the libgrpc_csharp_ext.a and libgrpc.a files are correctly placed in the Assets/Convai/Plugins/gRPC/Grpc.Core/runtime/ios folder.
Verify the Problem:
Manually Allow Blocked DLLs:
Open System Preferences on your Mac.
Navigate to "Security & Privacy".
Under the "Security" tab, you might see a message at the bottom about the DLL being blocked. Click "Allow Anyway" or "Open Anyway" and enter password if asked.
Modify Gatekeeper settings: macOS's Gatekeeper can prevent software from unidentified developers from running. To allow the DLL:
Open the Terminal (found in Applications > Utilities).
Type sudo spctl --master-disable and press Enter.
This command allows apps from any source to run.
Now, try running the Unity project again.
After you're done, re-enable Gatekeeper with sudo spctl --master-enable to keep your system protected from malware.
Check File Permissions: Ensure the DLL has the correct file permissions.
In Finder, right-click (or control-click) on the DLL file and choose "Get Info".
Under “Sharing & Permissions”, ensure that your user account has "Read & Write" permissions.
Review Unity's Plugin Settings:
In the Unity editor, select the DLL in the Project view.
In the Inspector window, make sure the appropriate platform (in this case, Mac OS X) and architecture (Apple Silicon, Intel-64) is selected for the DLL.
Ensure that "Load on Startup" and other pertinent options are checked (they should be enabled by default).
Mac Configuration in Player Settings during build
Update Mac Configuration:
In Unity, navigate to Edit > Project Settings > Player.
Scroll down and click on Other Settings
Scroll down again to find Mac Configuration section
Update the Mac Configuration section (follow the below Screenshot)
Screenshot showing location of Add Components button in the Convai NPC inspector panel
Screenshot showing selection of Narrative design option in the Add Component Window
Screenshot showing location of Add Component button in the inspector panel
Screenshot showing which component to select from the search results
Screenshot showing a sample Narrative Design component
Screenshot showing various unity events user can subscribe to
Screenshot showing a game object with a collider selected
Screenshot showing selection of Narrative Design Trigger
Screenshot showing Box Collider becoming a trigger box
Screenshot showing assigning of Convai NPC to trigger component
Screenshot showing ability to select your desired trigger
Notification System
Notification System - Implement notifications with Convai Unity plugin utilities.
The Convai plugin ships with four default notifications:
Notifications
Not Close Enough to the Character
Appears when you press the talk button but there is no active NPC nearby.
Talk Button Released Early
Appears if you release the talk button in less than 0.5 seconds.
Microphone Issue Detected
Appears when the recorded audio input level is below the threshold.
Connection Problem
Appears when there is no internet connection upon launching the application.
How to Add Your Own Notification?
Adding your custom notification is straightforward.
Let's go through the steps to add a "CharacterStartedListening" notification as an example.
Open the script "Convai/Scripts/Notification System/Notification Type.cs." This script stores Notification Types as enums. Give a name to your desired Notification type and add it here.
Right-click on "Convai / Scripts / Notification System / Scriptable Objects" and select "Create > Convai > Notification System > Notification" then create a "Notification Scriptable Object".
Name the created Notification Scriptable Object. Click on it, and fill in the fields in the Inspector as desired.
Add the created Notification Scriptable Object to "Convai/Scripts/Notification System/Scriptable Objects" under "Convai Default Notification Group" (details of Notification Groups below).
Your notification is now ready. The last step is to call it from where you need it. For example, if you created the "CharacterStartedListening" notification, find the place where your character starts listening and add the call there.
Replace the parameter with the NotificationType you created. (For our example, NotificationType.CharacterStartedListening)
Ensure that the Convai Notification System is present in your scene (accessible from "Convai/Prefabs/Notification System").
All steps are complete, and you're ready to test!🙂✅
Notification Scriptable Object
This Scriptable Object stores information about a Notification
Notification Type
Notification Icon
Notification Title
To create a new Notification Scriptable Object, right-click anywhere in the Project Window and select "Create > Convai > Notification System > Notification"
Notification Group Scriptable Object
This Scriptable Object stores Notification Scriptable Objects as groups. When a Notification is requested, it searches for the Notification using the specified Notification Group in the Convai Notification System prefab's Notification System Handler script.
You can create different Notification groups based on your needs. Note: If your referenced Notification Group does not have the Notification you want, that Notification won't be called.
The Convai Default Notification Group has four Notifications, but you can add more or create a new group with additional notifications.
#if UNITY_EDITOR && UNITY_IOS
using System.IO;
using UnityEditor;
using UnityEditor.Callbacks;
using UnityEditor.iOS.Xcode;
using UnityEngine;
public class iOSBuild : MonoBehaviour
{
[PostProcessBuild]
public static void OnPostProcessBuild(BuildTarget target, string path)
{
string projectPath = PBXProject.GetPBXProjectPath(path);
PBXProject project = new PBXProject();
project.ReadFromString(File.ReadAllText(projectPath));
#if UNITY_2019_3_OR_NEWER
string targetGuid = project.GetUnityFrameworkTargetGuid();
#else
string targetGuid = project.TargetGuidByName(PBXProject.GetUnityTargetName());
#endif
project.AddFrameworkToProject(targetGuid, "libz.tbd", false);
project.SetBuildProperty(targetGuid, "ENABLE_BITCODE", "NO");
File.WriteAllText(projectPath, project.WriteToString());
}
}
#endif
if(convaiNPC.TryGetComponent(out NarrativeDesignTrigger narrativeDesignTrigger))
{
//Optional message parameter if you want to send some message while invoking
//the trigger
string message = "Player has collected enough resources";
narrativeDesignTrigger.InvokeSelectedTrigger(message);
}
Transcript UI System - Integrate transcript UI with Convai's Unity plugin.
Overview
The Dynamic UI system is a feature of the Convai Unity SDK that gives developers a robust framework for in-game communication. It displays messages from characters and players and supports various UI components for chat, Q&A sessions, subtitles, and custom UI types. This document guides you through integrating and using the Dynamic UI feature, and through creating your own custom UI types, in your Unity project.
Usage
Accessing the Chat UI Handler
To interact with the chat system, you need to reference the ConvaiChatUIHandler in your scripts. You can find the Transcript UI prefab in the Prefabs folder.
Here's an example of how to find and assign the handler:
Sending Messages
Once you have a reference to the ConvaiChatUIHandler, you can send messages using the following methods:
Sending Player Text
To send text as the player:
input: The string containing the player's message.
Sending Character Text
To send text as a character:
characterName: The name of the character sending the message.
currentResponseAudio.AudioTranscript: The transcript of the audio response from the character, trimmed of any leading or trailing whitespace.
Adding Custom UI Types to the Dynamic Chatbox
While the built-in Dynamic UI types cover common needs, you may want to create a custom UI that better fits the style of your game. The system is designed to be extensible, allowing developers to add their own UI types by inheriting from the ChatUIBase class and implementing the required methods. The ConvaiChatUIHandler manages the different UI types and provides a system to switch between them.
Creating a Custom UI Class
To create a custom UI type, follow these steps:
Step 1: Define Your Custom Class
Create a new C# script in your Unity project and define your class to inherit from ChatUIBase. For example:
Step 2: Implement Required Methods
Implement the abstract methods from ChatUIBase. You must provide implementations for Initialize, SendCharacterText, and SendPlayerText:
Step 3: Add Custom Functionality
Add any additional functionality or customization options that your custom UI may require.
Step 4: Assign and Use Your Custom UI
To use your custom UI class within the ConvaiChatUIHandler, you need to add it to the GetUIAppearances dictionary. This involves creating a prefab for your custom UI and assigning it in the ConvaiChatUIHandler.
Here's an example of how to do this:
Create a prefab for your custom UI and add your CustomChatUI component to it.
Assign the prefab to a public variable in the ConvaiChatUIHandler script.
Modify the InitializeUIStrategies method to initialize your custom UI.
Ensure that your custom UI type is added to the UIType enum:
Now you can set your custom UI type as the active UI from the Settings Panel.
By following these steps, you can integrate your custom UI type into the Dynamic Chatbox system and switch between different UI types at runtime.
Creating a Profile
Create and register a custom Lip Sync profile in Unity, understand profile fields, and configure supported transport formats for your project.
Introduction
A Lip Sync Profile defines the channel schema a character setup uses within the Lip Sync system. In most cases, the built-in profiles are enough. However, you may want to create a custom profile asset to better organize your project, use a project-specific identifier, or override how a supported transport format is represented in your Editor workflow.
This page explains the Profile Inspector, the Profile Registry, and how to create a custom profile correctly.
Before You Start
Currently, Convai supports only these transport formats:
Transport Format
Supported Schema
This is important because creating a new profile asset does not create a new transport format.
A custom profile can help you:
Rename or reorganize a supported schema
Use a custom profile ID for your project
Point that custom profile to one of the supported formats
A custom profile cannot be used to introduce an entirely new transport value outside arkit, mha, or cc4_extended.
Understanding the Profile Inspector
When you select a ConvaiLipSyncProfileAsset, the Inspector is divided into three main areas.
Runtime Identity
This section controls how the profile is identified internally.
Profile ID
A unique normalized string used at runtime to identify the profile.
This ID is used for:
profile catalog lookup
map targeting
registry merging
component configuration
Choose this carefully. Once other assets reference this ID, changing it can break those references.
Editor Label
Display Name
This is the human-readable label shown in dropdowns and editor tools.
It has no direct effect on runtime behavior, but it is important for usability. Use a clear name that your team will recognize immediately.
Transport Format
This section determines which supported transport format the profile resolves to.
Override default token
When disabled, the profile uses its own Profile ID as the transport token.
When enabled, the profile can use a different supported transport token. This is useful when you want a custom internal profile ID, but still need the profile to resolve to one of the built-in supported formats.
Transport Token
The transport token must be one of the currently supported values:
arkit
mha
cc4_extended
For example, a profile with ID my_metahuman_variant can still use the mha transport token.
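The resolution rule above can be illustrated with a short sketch. The type and member names here are hypothetical simplifications, not the SDK's actual API:

```csharp
using System;
using System.Linq;

// Simplified model of a Lip Sync profile (hypothetical names, for illustration).
class LipSyncProfile
{
    static readonly string[] SupportedTokens = { "arkit", "mha", "cc4_extended" };

    public string ProfileId;
    public bool OverrideDefaultToken;
    public string TransportToken;

    // When the override is disabled, the Profile ID itself is used as the
    // transport token; either way, the result must be a supported format.
    public string ResolveTransportToken()
    {
        string token = OverrideDefaultToken ? TransportToken : ProfileId;
        if (!SupportedTokens.Contains(token))
            throw new InvalidOperationException(
                $"'{token}' is not a supported transport format.");
        return token;
    }
}
```

Under this model, a profile with ID my_metahuman_variant and an override token of mha resolves to mha, while a profile whose ID is not a supported token and has no override fails fast.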
Create a Custom Profile
1
Create the profile asset
In the Unity Project window, create a new profile asset:
Give it a descriptive name, such as:
2
Understanding Profile Registries
Profiles are discovered through Profile Registry assets.
A registry is a ConvaiLipSyncProfileRegistryAsset that contains one or more profile references and a priority value used during runtime merging.
Registry fields
Field
Description
The built-in registry uses priority 0. Your own custom registry should use a higher value, such as 1, so it is merged after the built-in set.
Register the Profile
1
Create a Profile Registry
Create a registry asset in the Project window:
Give it a name such as:
2
How Runtime Discovery Works
When the Lip Sync profile catalog initializes, it:
Loads the built-in registry
Scans for additional registries under Resources/LipSync/ProfileRegistries/
Sorts them by priority
If two registries define the same Profile ID, the higher-priority definition replaces the lower-priority one and a warning is logged.
Important Limitations
Keep these points in mind when creating custom profiles:
Profile IDs should be treated as permanent
Once a profile is referenced by maps, registries, or components, changing the ID can silently break those references.
Transport formats are fixed
Only these transport formats are supported:
arkit
mha
cc4_extended
Entering a completely custom value does not add support for a new format.
Registry priority affects replacement behavior
If two registries define the same profile ID, the higher-priority definition replaces the earlier one. There is no merge between duplicate IDs.
Next Step
After creating and registering a profile, the next step is to create or assign a map that targets it.
Continue with to define how that profile drives your character's actual blendshapes.
Conclusion
A custom profile is primarily a way to organize and identify a supported Lip Sync schema inside your project. It gives you flexibility in naming and project structure, while still staying within the currently supported transport formats.
If your character needs custom routing to mesh blendshapes, create a map next.
Need help? For questions, please visit the .
Lip Sync Profiles and Mappings
Learn how Convai Lip Sync uses profiles and maps to drive real-time facial blendshape animation, how built-in defaults work, and when to create custom assets.
Introduction
Convai Lip Sync drives facial blendshape animation in real time by matching incoming speech animation channels to the blendshapes on your character. To make that work reliably, the system needs two things:
A Profile, which defines the channel schema the character uses
A Map, which tells the SDK how those channels connect to actual blendshape names on the mesh
Together, these two assets make the Lip Sync pipeline predictable, editable, and easy to adapt in the Unity Editor.
Overview
At a high level, the Lip Sync system answers two questions:
Which facial rig schema is active?
This is defined by the Profile.
How should each incoming channel affect this specific character mesh?
This is defined by the Map.
Both are stored as Unity ScriptableObject assets, so they can be inspected, assigned, and customized directly in the Editor.
What Is a Profile?
A Lip Sync Profile defines the expected channel layout for a facial rig. It acts as the schema for incoming facial animation data.
For example, if a profile expects a channel called jawOpen, the system interprets that channel according to the rules of that profile. This allows the SDK to know what data is being sent and how to categorize it before any mesh-specific mapping happens.
A profile is not tied to a single character. It defines a reusable facial rig format that multiple characters can share.
What Is a Map?
A Lip Sync Map connects profile channels to the actual blendshape names on a character's SkinnedMeshRenderer.
This is what makes Lip Sync work on real character assets. Even if the incoming channel schema is valid, the animation cannot be applied correctly unless the system knows which mesh blendshape each channel should drive.
A map can do more than simple one-to-one routing. It can also:
Route one source channel to multiple target blendshapes
Scale or offset individual channels
Clamp overly strong values
How Profiles and Maps Work Together
The flow is simple:
A Lip Sync profile determines which channel schema is active
A Lip Sync map reads channels from that schema
The map writes the processed values to the target blendshapes on the character mesh
This separation is important because it allows one profile to be reused across many different characters, while each character can still have its own map.
For example, two characters may both use the arkit profile, but one may use the default map while another uses a custom map because its blendshape names differ.
Supported Profile Formats
Currently, Convai supports the following Lip Sync profile formats:
Profile
ID
These are the only supported transport formats at this time.
This means:
You can create custom profile assets inside your project
You can rename or organize profiles for your workflow
You can override which supported transport format a profile uses
In other words, creating a custom profile does not add support for a new backend format. The transport format must still resolve to one of the supported values: arkit, mha, or cc4_extended.
Built-in Profiles
The SDK includes built-in profiles for the supported formats. These represent the standard schemas used by the Lip Sync system and are intended to be the authoritative built-in definitions.
Each profile asset includes:
Field
Purpose
Built-in profile assets are located under:
Profile Registries
Profiles are grouped into a Profile Registry rather than loaded one by one.
The built-in registry is located at:
At runtime, the SDK loads the built-in registry, scans for additional registries under the same Resources path, and merges them into a single catalog.
Registries are merged by priority:
Lower priority values are processed first
Higher priority values can override existing profile IDs
Duplicate profile IDs produce a warning and the higher-priority definition wins
This lets you extend or override profile definitions without editing built-in SDK assets directly.
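The priority-based merge described above can be sketched as follows. This is a simplified illustration with hypothetical types; the real registry assets hold profile references rather than plain strings:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for a ConvaiLipSyncProfileRegistryAsset.
class ProfileRegistry
{
    public int Priority;
    public List<string> ProfileIds = new List<string>();
}

static class ProfileCatalog
{
    // Lower priorities are applied first, so higher-priority registries
    // overwrite duplicate profile IDs (the real SDK also logs a warning).
    public static Dictionary<string, int> Merge(IEnumerable<ProfileRegistry> registries)
    {
        var catalog = new Dictionary<string, int>(); // profile ID -> winning priority
        foreach (var registry in registries.OrderBy(r => r.Priority))
            foreach (var id in registry.ProfileIds)
                catalog[id] = registry.Priority;
        return catalog;
    }
}
```

For example, if the built-in registry at priority 0 and a custom registry at priority 1 both define arkit, the priority-1 definition wins.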
Built-in Default Maps
The SDK also includes built-in default maps for supported profile types.
These are located under:
The built-in set includes:
Asset
Target Profile
Purpose
These default maps are designed to cover common use cases out of the box.
Why some built-in channels are clamped or disabled
Some built-in mappings intentionally reduce or suppress certain channels to keep results stable and natural on common character setups.
Examples include:
Jaw open clamping to reduce exaggerated mouth motion
Eye rotation channel disabling for rigs that do not use blendshape-driven eye motion
Cosmetic channel disabling on rigs where those channels are not appropriate for speech animation
Default Map Registry
The Default Map Registry defines which default map is used automatically for each profile.
It is located at:
This registry maps each supported profile ID to its default ConvaiLipSyncMapAsset.
How Map Resolution Works
When a Lip Sync component initializes, the SDK determines which map to use through a fallback chain:
Explicit map on the component
If a map asset is assigned directly and its target profile matches the active profile, that map is used.
Default map registry lookup
If no valid explicit map is assigned, the system checks the Default Map Registry for the active profile.
Safe disabled fallback
If no valid map is found, the SDK creates a safe fallback that outputs zero values instead of animating the character.
This behavior ensures that missing or mismatched setups fail safely without crashing the scene.
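The fallback chain can be sketched in a few lines. The names here are hypothetical simplifications; the real implementation resolves map assets, not strings:

```csharp
using System.Collections.Generic;

static class MapResolver
{
    // Resolves which map drives the character: an explicit map that targets
    // the active profile wins, then the default-map registry, then a safe
    // "disabled" fallback that outputs zero values.
    public static string Resolve(
        string activeProfileId,
        string explicitMap,             // map assigned on the component, may be null
        string explicitMapProfileId,    // profile that explicit map targets
        IReadOnlyDictionary<string, string> defaultMapRegistry)
    {
        if (explicitMap != null && explicitMapProfileId == activeProfileId)
            return explicitMap;
        if (defaultMapRegistry.TryGetValue(activeProfileId, out var defaultMap))
            return defaultMap;
        return "disabled-fallback"; // animates nothing instead of failing
    }
}
```

Note how a mismatched explicit map (for example, an mha map on an arkit character) silently falls through to the registry lookup rather than being applied incorrectly.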
When to Create Custom Assets
You typically do not need custom assets if your character already matches one of the built-in supported formats and its blendshape names follow the expected naming convention.
You should create a custom map when:
Your character uses different blendshape names
You need custom clamping or scaling
You want one source channel to drive multiple targets
You may create a custom profile when:
You want a project-specific profile identity or label
You want to organize supported formats differently inside your project
You need a custom profile asset that still resolves to one of the currently supported transport formats
For step-by-step instructions, continue with:
Conclusion
Profiles define the channel schema. Maps define how that schema drives a specific mesh.
Once you understand that separation, the Lip Sync workflow becomes straightforward: choose the supported profile format that matches your character setup, then use either a built-in map or a custom one to connect those channels to your character's blendshapes.
Need help? For questions, please visit the .
Additional Feature Migration
Additional Feature Migration
LTM (Session Resume)
No API migration is required. Continue enabling/disabling session resume as needed in your setup.
Dynamic info APIs are now routed through ConvaiRoomManager.
Narrative Design Migration (Legacy -> Current SDK)
Narrative Design is still supported, but references now align with the new SDK architecture (ConvaiCharacter + modular narrative components).
Legacy setup reference: .
Narrative quick mapping
ConvaiNPC (old character component) -> ConvaiCharacter
For teams migrating from the old SDK docs, this information was previously listed under .
Migration Complete
After completing the steps above:
Project uses the latest Convai SDK.
NPC interaction runs through ConvaiCharacter.
Scene defaults run through ConvaiDefaults.
If you face issues after migration, check:
Missing script references.
API usage updates in your custom scripts.
Audio source setup on character objects.
External API
Learn how to integrate and configure the External API feature to enable your characters to access real-time information, create tasks, and interact with third-party platforms.
Introduction
The External API feature empowers your characters to interact intelligently with real-time data sources and third-party services. Whether it’s retrieving live weather updates, tracking sports scores, or creating tickets in platforms like Jira and Trello, this feature allows seamless API-based integration. With just a few configuration steps, your characters can fetch data, trigger workflows, and execute automated actions, making them significantly more capable.
Adding Actions to your Character
Follow these instructions to enable actions for your Convai-powered characters.
Setting Up Action Configurations
Select the Convai NPC character from the hierarchy.
In the Runtime Identity section, enter a unique ID.
Example:
Use lowercase letters, numbers, and underscores. Avoid spaces.
3
Set the Display Name
In the Editor Label section, enter the display name that should appear in the Inspector.
Example:
4
Set the transport format
Choose which supported Lip Sync schema this profile should use.
Examples:
Use arkit for ARKit-compatible blendshape layouts
Use mha for MetaHuman rigs
Use cc4_extended for CC4 Extended rigs
If your custom profile ID does not match one of those supported tokens, enable Override default token and enter the correct supported transport token manually.
Set the registry priority
Set Priority to a value higher than the built-in registry.
Recommended starting value:
3
Add the profile to the registry
Add your new ConvaiLipSyncProfileAsset to the Profiles list.
4
Place the registry in the correct Resources path
For the SDK to discover it automatically, the registry must be placed under:
Once the asset is saved there, Unity will include it on the next domain reload or Play Mode refresh.
Merges all discovered profiles into one runtime catalog
private ConvaiChatUIHandler _convaiChatUIHandler;
private void OnEnable()
{
// Find and assign the ConvaiChatUIHandler component in the scene
_convaiChatUIHandler = ConvaiChatUIHandler.Instance;
if (_convaiChatUIHandler != null) _convaiChatUIHandler.UpdateCharacterList();
}
using Convai.Scripts.Utils;
using UnityEngine;
public class CustomChatUI : ChatUIBase
{
// Implement the required methods from ChatUIBase here.
}
public override void Initialize(GameObject uiPrefab)
{
// Instantiate and set up your custom UI prefab here.
}
public override void SendCharacterText(string charName, string text, Color characterTextColor)
{
// Handle sending character text to your custom UI here.
}
public override void SendPlayerText(string playerName, string text, Color playerTextColor)
{
// Handle sending player text to your custom UI here.
}
[Tooltip("Prefab for the custom chat UI.")]
public GameObject customChatUIPrefab;
private void InitializeUIStrategies()
{
// Existing UI types
InitializeUI(chatBoxPrefab, UIType.ChatBox);
InitializeUI(questionAnswerPrefab, UIType.QuestionAnswer);
InitializeUI(subtitlePrefab, UIType.Subtitle);
// Custom UI type
InitializeUI(customChatUIPrefab, UIType.Custom); // Make sure to define UIType.Custom in the UIType enum
}
private void InitializeUI(GameObject uiPrefab, UIType uiType)
{
// existing code...
// Add your custom UI initialization here
if (uiType == UIType.Custom)
{
CustomChatUI customUIComponent = uiPrefab.GetComponent<CustomChatUI>();
if (customUIComponent == null)
{
Debug.LogError("CustomChatUI component not found on prefab.");
return;
}
customUIComponent.Initialize(uiPrefab);
GetUIAppearances[uiType] = customUIComponent;
}
}
public enum UIType
{
ChatBox,
QuestionAnswer,
Subtitle,
Custom // Your custom UI type
}
// Old
public class PlayerHealth : MonoBehaviour
{
[SerializeField] private DynamicInfoController _dynamicInfoController;
private int _health = 100;
private void Start()
{
_dynamicInfoController.SetDynamicInfo("Player Health is " + _health);
Debug.Log("Player Health is " + _health);
}
}
// New
public class PlayerHealth : MonoBehaviour
{
[SerializeField] private ConvaiRoomManager _convaiRoomManager;
private int _health = 100;
private void Start()
{
_convaiRoomManager.SendDynamicInfo("Player Health is " + _health);
Debug.Log("Player Health is " + _health);
}
}
// Old
if (convaiNPC.TryGetComponent(out NarrativeDesignTrigger narrativeDesignTrigger))
{
string message = "Player has collected enough resources";
narrativeDesignTrigger.InvokeSelectedTrigger(message);
}
// New
if (convaiCharacter.TryGetComponent(out ConvaiNarrativeDesignTrigger narrativeDesignTrigger))
{
string message = "Player has collected enough resources";
narrativeDesignTrigger.SetTriggerMessage(message);
narrativeDesignTrigger.InvokeTrigger();
}
Configuration and Usage
1. Accessing the External API Page
Navigate to the External API section in your dashboard. Here you can view existing API methods, activate or deactivate them, and create new methods. To add a new API method, click Add API Method.
2. Creating an API Method
Method Fields Overview
Method Name – Select an existing template or enter a unique name for your method.
Method Description – Provide a concise explanation of the method’s functionality.
Input Description (JSON Format) – Define required input parameters and their descriptions.
Implementation Code – Write the Python implementation for your API logic.
Inputs – Enter test parameters for validating your method.
Output – Displays the result when you click Test API.
Example 1 – Get Weather Data
Method Name: Get Weather
Method Description: Fetches current weather data for a given city
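The Implementation Code for a method like this might look like the sketch below. It uses the public wttr.in JSON endpoint as the data source (an assumption for illustration; any weather API works), and accepts an injectable `fetch` callable so the parsing logic can be exercised without network access:

```python
def get_weather(city: str, fetch=None) -> str:
    """Return a one-line weather summary for `city`.

    `fetch` is injectable for offline testing; by default it queries
    wttr.in's JSON endpoint (format=j1).
    """
    if fetch is None:
        import requests  # available in the External API sandbox

        def fetch(c):
            # wttr.in returns structured JSON when format=j1 is requested
            return requests.get(f"https://wttr.in/{c}?format=j1", timeout=4).json()

    data = fetch(city)
    current = data["current_condition"][0]
    return f"{city}: {current['temp_C']} C, {current['weatherDesc'][0]['value']}"
```

The string returned by the method is what the character receives and can weave into its reply.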
If the test passes, click Save Changes, return to the main API list, and enable the method by toggling Connect to green.
Test with a character
Once activated, test the method in a conversation with your character.
As seen in the screenshot below, the character correctly returned the current weather for Roma and Wrangell.
Example 2 – Create Jira Support Ticket
Method Name: Creating Support Tickets
Method Description: Creates a support ticket on Jira
Input Description
Implementation Code
Where to Find Required Values
JIRA_DOMAIN – Found in your Jira account URL. Example:
https://mycompany.atlassian.net → JIRA_DOMAIN = "mycompany.atlassian.net"
JIRA_PROJECT_KEY – Found in your project URL or next to the project name.
ISSUE_TYPE – Must be valid in your Jira project (Story, Task, Bug).
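An Implementation Code for a method like this typically builds a Jira Cloud REST API v3 issue-create request. The payload construction below is a sketch; `JIRA_DOMAIN`, the account email, and the API token are placeholders you would substitute with the values described above:

```python
def build_jira_payload(project_key: str, summary: str, description: str,
                       issue_type: str = "Task") -> dict:
    """Build the JSON body for POST /rest/api/3/issue (Jira Cloud).

    Jira Cloud v3 expects the description field in Atlassian Document Format.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": issue_type},
            "description": {
                "type": "doc",
                "version": 1,
                "content": [
                    {"type": "paragraph",
                     "content": [{"type": "text", "text": description}]}
                ],
            },
        }
    }
```

The method body would then send this with `requests.post(f"https://{JIRA_DOMAIN}/rest/api/3/issue", json=payload, auth=(EMAIL, API_TOKEN))` and return the created issue key from the response.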
Test Input
Click Test API.
A successful Output Example:
Activate the method
If the test passes, click Save Changes, return to the main API list, and enable the method by toggling Connect to green.
Test with a character
Once activated, test the method in a conversation with your character.
As seen in the screenshot below, the character successfully created a Jira ticket and returned the ticket key.
Limitations and Supported Environment
Supported LLM Models: GPT-4o, GPT-4o-mini, Claude-3.5, Claude 4.0
Max Execution Time: 5 seconds
Python Version: 3.12
Libraries Available: Standard library + requests
Conclusion
By configuring the External API feature, you can transform your characters into powerful, data-driven assistants. From retrieving real-time weather information to creating Jira tickets directly from a conversation, the possibilities are vast. This integration capability enables highly interactive, automated, and intelligent workflows.
Scroll down to the ConvaiNPC script attached to your character.
Click the "Add Component" button.
Use the checkbox to add the action script to the NPC Actions.
Click "Apply Changes" to confirm.
Pre-defined Actions
Convai offers predefined actions for a quick start.
Click the "+" button to add a new action.
From the dropdown menu, select "Move To."
Enter the action name as "Move To" (the name doesn't have to match the action choice name).
Leave the Animation Name field empty for now.
Repeat these steps to add more actions like "Pickup" and "Drop" etc.
Adding an Object in the Scene
Add any object into the scene—a sphere, a cube, a rock, etc.—that can be interacted with.
Resize and place the object in your scene.
Adding the Convai Interactables Data Script
Create an empty GameObject and name it "Convai Interactables."
Attach the Convai Interactables Data script to this GameObject.
Add characters and objects to the script by clicking the "+" button and attaching the corresponding GameObjects.
Convai Interactables Setup
Add the "There" object to the Objects list so that we can use the Dynamic Move Target indicator.
Bake a NavMesh for your scene if you haven't already:
Go to Window > AI > Navigation.
In the Navigation window, under the Bake tab, adjust the settings as needed.
Click "Bake" to generate the NavMesh.
Ensure that the NPC character has a NavMeshAgent component:
If not already attached, click "Add Component" and search for NavMeshAgent.
Adjust the Agent Radius, Speed, and other parameters according to your NPC's requirements.
Adding a Dynamic Move Target Indicator
To visually indicate where your NPC will move:
Create a new empty GameObject in the scene and name it accordingly or use the pre-made prefab named Dynamic Move Target Indicator.
Link this Move Target Indicator to your NPC's action script so it updates dynamically when you point the cursor to the ground and ask the NPC to move to "There".
Test the Setup
Click "Play" to start the scene.
Ask the NPC, "Bring me the Box."
If set up properly, the NPC should walk up to the box and bring it to you.
This feature is currently experimental and can misbehave. Feel free to try it out and leave us any feedback.
Adding Custom Actions to Your Unity NPC in Convai
Introduction
Make your NPC perform custom actions like dancing.
Action that Only Requires an Animation
Locate the dance animation file within our plugin.
Incorporate this animation into your NPC's actions.
Setting Up the Animator Controller
Open the Animator Controller from the Inspector window.
Drag and drop the dance animation onto the controller, creating a new node named "Dancing."
Adding custom Animation Action
Go to the Action Handler Script attached to your Convai NPC.
Add a new action named "Dancing."
In the Animation Name field, enter "Dancing" (it must exactly match the Animator Controller node name).
Leave the enum as "None."
Testing the Custom Action
Click "Play" to start the scene.
Instruct the NPC, "Show me a dance move," and the NPC should start dancing.
Creating Complex Custom Actions in Unity with Convai: Throwing a Rock
Introduction
Adding advanced custom actions, such as a throw action, to your NPC.
In the "Do Action" function, add a switch case for the throw action.
Define the "Throw()" function.
Adding the Throw Action
Add a new action named "Throw" and select the "Throw" enum.
Leave the animation name field empty.
Adding the Object (Rock) to the Convai Interactables Data script
Add any rock prefab into the scene.
Add the rock to the Convai Interactables Data script.
Adding a location to Convai Interactables Data script
Add a stage or new location on the ground of the scene.
Add that new location GameObject to the Convai Interactables Data script.
Testing the Complex Action
Click "Play" to start the scene.
Instruct the NPC, "Pick up the rock and throw it from the stage."
If everything is set up properly, the NPC should pick up the rock and throw it from the stage.
{
"parameters": {
"city": {
"type": "string",
"description": "Name of the city to get weather information for (e.g., 'London', 'New York', 'Tokyo')"
}
},
"required": [
"city"
]
}
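Before clicking Test API, it can help to sanity-check your test Inputs against the Input Description. The helper below is invented for illustration (it is not part of Convai); it flags missing required parameters and obvious type mismatches in a schema like the one above:

```python
def validate_inputs(schema: dict, inputs: dict):
    """Return (missing_required, wrong_type) for a test-input dict."""
    type_map = {"string": str, "number": (int, float), "boolean": bool}
    # Required parameters that were not supplied at all
    missing = [k for k in schema.get("required", []) if k not in inputs]
    # Supplied parameters whose Python type does not match the declared type
    wrong_type = [
        name for name, spec in schema.get("parameters", {}).items()
        if name in inputs
        and not isinstance(inputs[name], type_map.get(spec.get("type"), object))
    ]
    return missing, wrong_type
```

For the weather schema, `validate_inputs(schema, {"city": "Tokyo"})` returns two empty lists, while omitting `city` reports it as missing.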
Create a custom Lip Sync map, understand the Map Inspector, and connect supported profile channels to your character's blendshapes in Unity.
Introduction
A Lip Sync Map defines how incoming Lip Sync channels are routed to the blendshapes on a specific character mesh.
You need a custom map when your character does not follow the built-in blendshape naming conventions, or when you want more control over how specific channels behave.
This page walks through the Map Inspector and shows how to build a custom map from scratch.
Before You Start
Before creating a map, make sure you already know which supported profile format your character uses.
Currently supported profile formats are:
arkit
mha
cc4_extended
Your map must target the correct profile. A map only works correctly when its target profile matches the active Lip Sync profile used by the character setup.
Understanding the Map Inspector
When you select a ConvaiLipSyncMapAsset, the Inspector is divided into several sections.
Header
At the top of the Inspector, you will see a summary of the current mapping state.
Counter
Meaning
The profile badge indicates which profile this map targets.
Configuration Section
This section defines the map identity and global behavior.
Target Profile
Select the profile that this map is built for.
This must match the profile used by the Lip Sync component.
Description
An optional editor-only note for your own project organization.
Global Modifiers
These settings affect the output of the map as a whole.
Setting
Description
A global multiplier around 0.8 is often a good starting point for natural-looking results on many rigs.
Allow Unmapped
When enabled, channels without explicit entries can be forwarded directly using the source channel name as the target blendshape name.
This can be useful during setup or testing, especially when your character already follows most of the expected naming convention.
Tools Section
This section helps populate or import mappings more quickly.
From Mesh: Auto Detect
This is the fastest way to generate mappings for a real character.
Add a preview mesh using a SkinnedMeshRenderer
Choose a matching mode
Run Auto-Detect From Mesh
The SDK compares the mesh blendshape names to the profile source channels and tries to match them automatically.
Matching modes
Mode
Behavior
Recommended workflow:
Start with Exact
If coverage is low, try Contains
If needed, try Fuzzy
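The three modes can be pictured roughly like this. This is a simplified sketch; the prefix list and exact comparison rules are assumptions for illustration, not the SDK's actual implementation:

```python
def matches(channel: str, blendshape: str, mode: str = "exact",
            prefixes=("CC_Base_", "head__", "Face_")) -> bool:
    """Approximate the Exact / Contains / Fuzzy matching modes."""
    a, b = channel.lower(), blendshape.lower()
    if mode == "exact":
        return a == b                      # names identical, ignoring case
    if mode == "contains":
        return a in b or b in a            # one name contains the other
    if mode == "fuzzy":
        def strip(name):
            # Drop a common rig prefix before comparing
            for p in prefixes:
                if name.startswith(p.lower()):
                    return name[len(p):]
            return name
        a, b = strip(a), strip(b)
        return a == b or a in b or b in a
    raise ValueError(f"unknown mode: {mode}")
```

For example, a `jawOpen` channel fails Exact against a `CC_Base_JawOpen` blendshape but matches under Contains and Fuzzy.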
From Mapping Text
You can also import mapping data from JSON.
Available options include:
importing a mapping file
pasting mapping text directly
copying the current mapping as JSON
This is useful for team workflows, backup, and migration.
Mapping Actions
Action
Result
Bulk Operations
Bulk tools help you manage large maps quickly.
Action
Result
These operations are especially useful when debugging or isolating part of a face rig.
Mappings Section
This is the main routing table of the map.
Each row is a mapping entry that connects one source channel to one or more target blendshapes.
Column
Description
You can search the list, filter by enabled entries, and isolate unmapped items to finish setup faster.
Mapping Entry Behavior
Each mapping entry can include additional controls beyond its visible table fields.
Field
Description
A single source channel can also drive multiple target blendshapes. This is useful when one expression needs to affect several shapes on the mesh.
Output Processing Order
The final output value is calculated in this order:
If Ignore Global Modifiers is enabled, the last two steps are skipped for that entry.
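In pseudocode terms, the pipeline for one channel value looks like this (a Python sketch with assumed field names, not the SDK's API):

```python
def process_channel(raw_value: float, entry: dict,
                    global_multiplier: float = 1.0,
                    global_offset: float = 0.0) -> float:
    """Apply the documented processing order to one channel value."""
    v = raw_value * entry.get("multiplier", 1.0)   # 1. per-entry multiplier
    v += entry.get("offset", 0.0)                  # 2. per-entry offset
    lo, hi = entry.get("clamp", (0.0, 1.0))        # 3. clamp min / max
    v = min(max(v, lo), hi)
    if not entry.get("ignore_global_modifiers"):   # skipped when the flag is set
        v = v * global_multiplier + global_offset  # 4-5. global multiplier/offset
    return v
```

With a global multiplier of 0.8, a raw value of 0.5 on an entry with a 2.0 multiplier clamps to 1.0 and then scales down to 0.8.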
Create a Custom Map
1
Create the map asset
In the Project window, create a new map asset:
Use a descriptive name such as:
2
Practical Tips
Use coverage as a setup indicator
Coverage is one of the fastest ways to judge how complete your mapping is.
Start simple
Begin with identity mapping or auto-detect, then refine only the entries that actually need adjustment.
Disable what your rig does not support
If your character has no relevant target for a channel, disabling that entry is often cleaner than leaving it partially configured.
Tune jaw motion carefully
Jaw-related channels often benefit from clamping so that speech stays expressive without becoming exaggerated.
Conclusion
A custom map gives you precise control over how supported Lip Sync channels drive a specific character mesh. Once the target profile is set correctly, the map becomes the layer that turns incoming facial data into stable, character-specific animation inside Unity.
Need help? For questions, please visit the .
Troubleshooting Guide
Troubleshoot common issues with Convai's Unity plugin. Get solutions for seamless AI integration.
Common Issues (FAQ)
Q. I cannot see the Convai menu.
A. Please check if there are any errors in the console. Unity needs to be able to compile all the scripts to be able to display any custom editor menu options. Resolving all the console errors will fix this issue.
Q. There are a lot of errors on my console.
A. Primarily, two issues cause errors in the console that can stem from the Convai Unity Plugin. You can use the links below to fix them quickly.
Q. I am talking to the character, but I cannot see the user transcript and the character does not seem to be coherently responding to what I am saying.
A. This may indicate issues with the microphone. Please ensure that the microphone is connected correctly. You also need to ensure that the application has permission to access the microphone.
Q. The animations for my characters are looking very weird.
A. The animation avatar that we are using might be incompatible with the character mesh. Fixing that can solve the issue.
Q. There are two Settings Panel Buttons in Mobile Transcript UI.
A. If you are using Unity 2021, unexpected prefab variant issues may arise because the Mobile Transcript UIs are variants of the main transcript UI prefab. Changes to the Prefab system in Unity 2022 mean it works correctly there, but in Unity 2021 you may encounter prefab issues. You can remove the redundant Settings Panel Button to address this problem.
Q: The lipsync is very faint or not visible.
A: The animations that we are using may be modifying facial animations. Editing the animations to remove facial animations should fix any issues related to lipsync.
A: The script also needs the avatar's jaw bone mapping to be free (set to None) so that it can manipulate the jaw bone itself.
Q: I'm facing security permission issues using the grpc_csharp_ext.bundle DLL inside the Unity Editor on MacOS
A: macOS's strict security measures can block certain external unsigned DLLs. To address this, you can manually allow the DLL in "Security & Privacy" settings, modify Gatekeeper's settings through Terminal, ensure correct file permissions for the DLL, check its settings in Unity, and update the Mac Configuration in Unity's Player Settings.
Q: I'm not able to talk to my character after building my Unity project for macOS (Intel64+Apple Silicon builds), especially on Intel Macs
A: The issue is rooted in the grpc_csharp_ext.bundle used in Unity for networking. This DLL has separate versions optimized for Intel and Apple Silicon architectures. When trying to create a Universal build that serves both, compatibility problems arise, especially on Intel Macs. Presently, the best solution is to use Standalone build settings specific to each architecture.
Error Index
Follow this Table to navigate to our most common errors.
Name
Sample Error
Reason for Error
For any other issues, feel free to contact us on the .
Enable Eyes Only
Enables only eye-related channels
Enable Mouth Only
Enables only mouth-related channels
Enable Brows Only
Enables only brow-related channels
Clamp Min / Max
Limits the final output range
Select the target profile
In the Configuration section, set the Target Profile to the profile your character uses.
3
Populate the entries
You can choose one of two common workflows.
Option A: Auto-detect from mesh
Add your SkinnedMeshRenderer as the preview mesh
Choose Exact mode first
Run Auto-Detect From Mesh
Review the header coverage result
If needed, retry with Contains or Fuzzy
Option B: Initialize defaults and edit manually
Click Initialize Defaults
Review the generated identity-style entries
Replace target names wherever your mesh uses different blendshape names
4
Tune the motion
Adjust the map until the character behaves naturally.
Common adjustments include:
lowering the global multiplier if expressions feel too strong
adding per-entry clamping for channels like jaw open
disabling channels your rig should not use
using fan-out when one source should drive multiple targets
5
Assign the map
Once the map is ready, assign it to the Lip Sync Map field on your character's Lip Sync component.
When a valid custom map is assigned and its target profile matches, it takes precedence over the built-in default map.
Total
Total number of mapping entries
Enabled
Number of active entries
Mapped
Number of entries with at least one assigned target
Coverage
Multiplier
Scales all output values
Offset
Adds a constant value to all output values
Exact
Names must match exactly, ignoring case
Contains
One name can contain the other
Fuzzy
Common rig prefixes are stripped before comparison
Initialize Defaults
Creates identity-style entries for the selected profile
rawValue -> per-entry multiplier -> per-entry offset -> clamp -> global multiplier -> global offset
Create > Convai > LipSync > Map Asset
LipSyncMap_MyCharacter
Our plugin needs Newtonsoft Json as a dependency. It is often present as part of Unity but occasionally, it can be missing.
Missing Animation Rigging
We use the Animation Rigging package for Eye and Neck tracking. If Unity does not automatically add it, we need to add it manually from the package manager.
Microphone Permission Issues
The microphone icon lights up, but there is no user transcript in the chat UI, and the character seemingly does not reply to what the user is saying.
The plugin requires microphone access which is sometimes not enabled by default.
Default Animations Incompatibility
The default animations that ship with the plugin seem broken. The hands seem to intersect with the body.
The animation avatar is incompatible with the character mesh.
Animations have Facial Blendshapes
The lip-sync on characters is either not visible or very faint.
Some types of animations control facial blendshapes. These animations prevent the lip-sync scripts from properly editing the facial blendshapes.
Jaw Bone in Avatar is not Free
The lip-sync on characters is either not visible or very faint.
The animation avatar for the character may be using the jaw bone. If we set the jaw bone mapping to None, the script will be able to manipulate the jaw bone freely.
Mac Security Permission Issue
Security Permission Issues with grpc_csharp_ext.bundle DLL in Unity on MacOS.
MacOS's security protocols can prevent certain unsigned external DLLs, like grpc_csharp_ext.bundle, from functioning correctly in Unity.
Microphone Permission Issue with Universal Builds on Intel Macs in Unity
No Microphone access request pops up
Incompatibility between Intel and Apple Silicon versions of grpc_csharp_ext.bundle when attempting a Universal build.
Enabled Assembly Validation
Unity, by default, checks for exact version numbers for the included assemblies. For our plugin, this is not necessary, since we use the latest libraries.
Assembly 'Assets/Convai/Plugins/Grpc.Core.Api/lib/net45/Grpc.Core.Api.dll' will not be loaded due to errors:
Grpc.Core.Api references strong named System.Memory Assembly references: 4.0.1.1 Found in project: 4.0.1.2.
Assets\Convai\Plugins\GLTFUtility\Scripts\Spec\GLTFPrimitive.cs(8,4): error CS0246: The type or namespace name 'JsonPropertyAttribute' could not be found (are you missing a using directive or an assembly reference?)
Assets\Convai\Scripts\Utils\HeadMovement.cs (2,30): error CS0234: The type or namespace name 'Rigging' does not exist in the namespace 'UnityEngine.Animations' (are you missing an assembly reference?)
Add Lip Sync to Your Character
Learn how to add and configure the Convai Lip Sync component on your character, assign profiles and maps, configure playback and latency settings, and verify real-time facial animation in Unity.
Introduction
The Convai Lip Sync component connects real-time speech animation to your character's face. While your character is speaking, it receives incoming Lip Sync data, processes it through the active Lip Sync map, and drives the blendshapes on your character's meshes automatically.
This page explains how to add the component, what each Inspector section does, and how to configure it correctly for a working Lip Sync setup in Unity.
Before You Start
Before adding the component, make sure your setup includes:
A character in the scene with at least one SkinnedMeshRenderer that contains facial blendshapes
A Convai Character component on the same GameObject
Add the Component
Select your character's root GameObject in the Hierarchy, then in the Inspector choose:
Once added, the component appears with four main sections in the Inspector:
Core Setup
Playback & Behavior
Streaming & Latency
Live Status
Core Setup
This is the main setup section. It defines which Lip Sync profile the character uses, which map is applied, and which meshes will be animated.
Profile
The Profile dropdown selects the Lip Sync profile used by the character.
This tells the system which channel schema to expect for the current setup.
Available options are:
Option
Use when your character is...
This setting must match the format your character is designed to work with. If the wrong profile is selected, incoming channels will not line up correctly and the face will animate incorrectly.
For more detail on profile behavior and supported formats, see .
Mapping
The Mapping field assigns the ConvaiLipSyncMapAsset used by the component.
A map connects incoming Lip Sync channels to the actual blendshape names on your character meshes.
Buttons next to the field:
Button
What it does
If this field is left empty, the component uses the built-in default map for the selected profile. For many standard ARKit, MetaHuman, or CC4 Extended setups, this is enough to get started.
If your character uses custom blendshape names, create and assign a custom map instead. For that workflow, see .
Target Meshes
The Target Meshes list defines which SkinnedMeshRenderer components will receive blendshape animation.
You can populate this list in three ways:
Click + to add a slot manually
Drag a SkinnedMeshRenderer into an existing slot
Click Auto-Find to search the current GameObject and all children automatically
After the list is populated, the component shows a summary such as:
This indicates how many meshes were found and how many total blendshape slots are available across them.
If this count is 0, there is nothing for the Lip Sync system to animate.
For most characters, Auto-Find is the fastest way to build this list. After that, remove any meshes that should not be driven, such as clothing or accessories with no facial blendshapes.
Playback & Behavior
This section controls how the facial animation feels during playback.
Lip Smoothing
Lip Smoothing controls how strongly incoming values are smoothed from frame to frame.
Range: 0 to 0.9
Default: 0.5
Behavior:
0: no smoothing, more direct but potentially jittery
0.9: very smooth, but slower to react
0.5: balanced default for most characters
A higher value makes facial motion feel softer and more stable. A lower value makes the face react more quickly to incoming changes.
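Conceptually, this behaves like a per-frame exponential smoothing filter. The function below is a sketch of the assumed behavior, not the exact SDK code:

```python
def smooth_value(previous: float, incoming: float, smoothing: float = 0.5) -> float:
    """Blend the previous frame's value toward the incoming one.

    smoothing = 0.0 follows incoming values exactly (responsive but jittery);
    smoothing = 0.9 reacts slowly but very smoothly.
    """
    return previous * smoothing + incoming * (1.0 - smoothing)
```

With the 0.5 default, a blendshape sitting at 0 moves halfway toward a new incoming value each frame, which is why the motion feels softened but still responsive.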
Fade Transition
Fade Transition controls how long it takes for the face to return to neutral after speech ends.
Range: 0.05 to 2 seconds
Default: 0.2
Behavior:
0.05: nearly instant return to neutral
0.2: natural default for most humanoid characters
2.0: very slow fade
This helps avoid abrupt snapping when speech finishes.
A/V Sync Offset
A/V Sync Offset shifts Lip Sync playback earlier or later relative to the audio.
Range: -0.5 to +0.5 seconds
Default: 0
Behavior:
Negative values: lips move slightly before audio
Positive values: lips move slightly after audio
0: no timing offset
In most setups, this should remain at 0 unless you consistently notice visual desync during playback.
Streaming & Latency
This section controls how incoming Lip Sync data is buffered and played back.
For most users, the default setting is the right choice.
Latency Mode
Latency Mode applies a preset buffering strategy.
Available modes:
Mode
Best for
Trade-off
Internal values used by each preset:
Mode
Max Buffer
Min Headroom
Max Buffered Seconds
This defines how much animation data can accumulate before playback begins.
A larger value improves stability on inconsistent connections, but increases visible delay.
This field is editable only in Custom mode.
Min Resume Headroom
If playback runs out of buffered frames and pauses, this determines how much data must build up before playback resumes.
A higher value makes resume behavior more conservative and stable.
This field is editable only in Custom mode.
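The interaction between buffering, starving, and resuming can be pictured as a tiny state machine. This is assumed logic using the Balanced preset's 0.12 s resume headroom; the real component may differ in detail:

```python
def next_state(state: str, buffered_seconds: float,
               min_resume_headroom: float = 0.12) -> str:
    """Advance the playback state for one update tick."""
    if state == "playing" and buffered_seconds <= 0.0:
        return "starving"    # buffer ran dry: pause playback
    if state in ("buffering", "starving") and buffered_seconds >= min_resume_headroom:
        return "playing"     # enough headroom rebuilt: resume
    return state
```

A higher resume headroom keeps the state in "starving" longer, which trades a slightly longer pause for fewer repeated stalls.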
For most projects, leave Latency Mode on Balanced.
Live Status
The Live Status section is read-only and updates during Play mode.
It gives you a live view of what the Lip Sync component is doing internally, which makes it very useful for debugging.
Status Indicator
A colored status label in the Inspector shows the current playback state.
State
Color
Meaning
The profile badge also confirms which profile is active at runtime.
Timing Counters
The component also shows runtime counters such as:
Counter
Meaning
If Headroom frequently drops near zero during testing, consider switching to Network Safe or reviewing network quality.
Step-by-Step Setup
Follow this flow to set up Lip Sync on a character from scratch.
1
Add the Convai Lip Sync component
Select your character's root GameObject, then add:
2
Common Issues
Symptom
Likely cause
Fix
Related Pages
For more detailed setup and customization, continue with:
Conclusion
The Convai Lip Sync component is the runtime layer that brings profiles, maps, and character meshes together into a working facial animation setup.
Once the correct profile is selected, the target meshes are assigned, and the map is valid, Lip Sync playback becomes mostly automatic. From there, playback smoothing, fade timing, and latency settings help you refine how the final result feels in your project.
Need help? For questions, please visit the .
Live Status
Mobile or unstable network conditions
Higher delay, more stable playback
Custom
Advanced manual tuning
Requires direct control of buffer settings
6.0 s
0.25 s
Custom
unchanged
unchanged
Green
Lip Sync is actively being applied to the meshes
Starving
Red
Playback has run out of buffered data and is waiting for more
Fading Out
Orange
Speech ended and the face is returning to neutral
Buffer Size
Total current buffer size in seconds
Is Talking
Whether the character is currently speaking
Select the correct profile
In Core Setup > Profile, choose the profile that matches your character:
ARKit
MetaHuman
CC4 Extended
If you are unsure why this matters, review .
3
Assign target meshes
Under Target Meshes, click Auto-Find.
Make sure the component reports a non-zero number of meshes and blendshapes. If some meshes should not receive Lip Sync animation, remove them manually.
4
Check or assign a map
If your character already uses standard blendshape names for the selected profile, you can leave Mapping empty to use the built-in default map, or choose one of the provided maps.
If your character uses different blendshape names, create a custom map and assign it here.
For that process, see .
5
Run the Validator
Click Validator next to the Mapping field.
This checks how well the active map matches the assigned meshes and helps identify unmapped or mismatched channels.
A high coverage result, especially on mouth-related channels, is a strong indicator that the setup is correct.
6
Choose a latency mode
Under Streaming & Latency, keep Latency Mode on Balanced unless you already know you need a lower-latency or more network-safe configuration.
7
Enter Play Mode and test
Start Play Mode and trigger a speech event from your Convai character.
Watch the Live Status section. In a working setup, the status typically moves through:
At the same time, your character's face should animate in sync with the voice.
If the status never leaves Idle, check that the Convai Character component is on the same GameObject and fully configured.
Incorrect profile selected
Select the profile that matches the character rig
Some blendshapes do not animate
Incomplete map coverage
Run Validator, fix unmapped entries, or use a custom map
Animation feels too strong
Map multiplier is too high
Lower the map multiplier or reduce specific entry values
Animation feels too weak
Map multiplier is too low
Increase the map multiplier
Lips move before the audio
A/V Sync Offset is too negative
Nudge A/V Sync Offset toward positive values in small steps and retest
Lips move after the audio
A/V Sync Offset is too positive
Nudge A/V Sync Offset toward negative values in small steps and retest
Component disables itself during Play
Validation or setup failure
Check the Console for errors related to profile, character setup, or required references
ARKit
A standard Unity character or any rig with ARKit-compatible blendshape names
MetaHuman
An Unreal Engine MetaHuman brought into Unity
CC4 Extended
A character built with Reallusion Character Creator 4
Create New
Creates a new empty ConvaiLipSyncMapAsset and assigns it immediately
Edit
Opens the assigned map asset in the Inspector
Validator
Checks the active map against the assigned meshes and reports mapping coverage issues
Ultra Low Latency
Very stable low-latency environments
Lower delay, higher risk of stutter
Balanced
Most production use cases
Best balance of stability and responsiveness
Ultra Low Latency
1.0 s
0.05 s
Balanced
3.0 s
0.12 s
Idle
Green
No speech data is being received
Buffering
Yellow
Data is arriving and buffering before playback
Elapsed Time
Time since the current speech event started
Remaining
Seconds of buffered animation left
Received Data
Total Lip Sync data received for the current event
Headroom
Status stays Idle
Convai Character component missing or not connected
Make sure the Convai Character component exists on the same GameObject