This document explains how to create MetaHuman Characters with Convai Plugin.
Get your arsenal equipped with everything needed to dive into exciting AI-based conversations.
Note: If you plan to use our API extensively in production, please reach out to us at support@convai.com so that we can help you improve efficiency, tuned to your use case. We will provide full support.
Your API requests are authenticated using API keys. Any request that doesn't include an API key will return an error.
As soon as you sign up to Convai, an API key is generated for you. You can always visit your profile section on convai.com and reveal your API key, which is located at the top right of the page. You can generate a new key at any time if the current one is compromised.
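As a minimal sketch of authenticated usage (hedged: the `CONVAI-API-KEY` header name and form-encoded POST shape here are illustrative — check the API reference for the exact values), every request simply carries the key as a header:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class ConvaiAuthExample
{
    // Sends a form-encoded POST with the API key attached.
    // The URL and field names are supplied by the caller; see the API reference.
    public static async Task<string> PostAsync(string apiKey, string url,
                                               Dictionary<string, string> fields)
    {
        using var client = new HttpClient();
        // Any request that does not include this header will return an error.
        client.DefaultRequestHeaders.Add("CONVAI-API-KEY", apiKey);

        HttpResponseMessage response =
            await client.PostAsync(url, new FormUrlEncodedContent(fields));
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

The same pattern applies whichever endpoint you call: the key travels with each request rather than being exchanged for a token.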
Get started with Convai Playground. Follow our walkthrough to create and manage your AI characters.
Step into the Convai Playground, where you let your creativity shine. Here's a concise guide to bringing your character, let's call her Sabrina, to life.
Begin with Basics
Start by naming your character and selecting a voice from an array of options. This choice sets the stage for Sabrina's digital persona.
Craft a Unique Backstory
Next, dive into Sabrina's backstory. Is she energized by city life, or does she aspire to make people laugh as a stand-up comedian? This backstory isn't just filler; it's the soul of your character, making her relatable and dynamic.
If you are out of ideas, just click on Generate Backstory and relax.
Custom Actions, Personality and more
The Convai Playground offers extensive customisation options. From Sabrina's actions and language abilities to her knowledge and emotional state, every detail you choose adds depth to her character. Imagine adjusting her mood to react dynamically within a game, adding a layer of realism to the interaction.
Create a 3D Avatar
You can create your 3D character from scratch, picking hairstyles, body type, clothes, and accessories to give your character a unique appearance that matches their personality.
Interact with Your Creation
Finally, engage with Sabrina in a 3D web experience. This interaction is the result of your creativity: chat with the character on various topics, ask her about previous experiences stored in memory, or have her perform actions.
Your Creative Playground Awaits
The Convai Playground is more than just a tool; it's a gateway to endless possibilities. Whether you're a developer, storyteller, or simply curious, it offers a unique opportunity to explore the potential of AI in character creation. Dive in and let your imagination lead the way.
Explore Convai's character creator tool. Design and develop AI characters with advanced features.
The Character Creator tool enables you to create and update characters with human-like capabilities for use in your applications. Use this tool to configure, test, and even share your characters with others online. Your characters will be available for use in any Convai plugin or SDK, and any updates will be reflected immediately in your applications.
Let's take a closer look into the Character Creator tool features.
The dashboard view shows all your characters and the Convai sample characters. When you first log in to Convai and visit the Playground, we recommend that you first explore the sample characters.
Click on one of the sample characters to open the Character Editor section. Try interacting with the character using either text or speech input. Be sure to allow the use of your microphone when prompted by the browser. The sample characters all have different backstories that guide their responses. Feel free to try them all and explore how their backstories affect how they respond. We'll go over all the features and how to set up a new character in the next section.
In the next section, we'll show you how to create your first character.
Find answers to common questions about Convai Playground. Get help with AI character creation.
We'll be updating this page with the most frequently asked questions from our users! Our Convai Developer Forum and YouTube Channels are also great resources if you have further questions or want to learn more!
What is "Character Concurrency"?
Character Concurrency is the number of users who can interact with the NPC at the same time.
For example, suppose there are 5 people using separate instances of your application. For everyone to be able to interact with the NPC at the same time, the character concurrency limit for your account needs to be at least 5.
Character Concurrency Example: If there are two NPCs in the game, will the system allow three users to interact with NPC A and another three users to interact with NPC B at the same time?
Only if you have a character concurrency limit of at least 6.
Does the "Character Concurrency" limit apply individually to each NPC?
No, the limit applies to the API key being deployed. You can think of it as a limit on actively connected sessions. With a character concurrency limit of 1, you can have 10 AI NPCs or more, but only one user will be able to speak to any of the NPCs at a time.
How do I get my character to stick to their knowledge bank and backstory?
Utilize the Moderation & Guardrails toggle and minimize external information. We continue to work on improving character cohesion and preventing hallucinations in conversation responses.
What LLMs are powering Convai?
Convai utilizes a variety of models across speech-to-text input, language generation, and text-to-speech output, including fine-tuned open-source models as well as commercial foundation models like ChatGPT. The best model for the current context (audio, text, video, etc.) is used.
Is there an API to change my character's information?
Yes! You can see the list of API calls and their functions for characters here.
Do the SDK's support Custom Characters?
Yes, you can use your own custom characters! Note that your custom characters will require facial blendshapes and a rigged skeleton for the default animations and lipsync to function similarly to the natively supported avatar systems. Here is the Unreal custom character tutorial, and the Unity video will be shared soon.
Here we demonstrate how you can connect your own ReadyPlayerMe avatar with a character.
When you create a new character, it loads with a default 3D ReadyPlayerMe avatar. You can easily configure and upload your own customized 3D avatar from ReadyPlayerMe (RPM) with a few simple steps.
We are currently working towards allowing users to upload their custom models directly. Stay tuned for more updates.
Follow these steps to create your custom RPM avatar for your character:
Open convai.com and visit the Dashboard section of the playground. You have to log in to access this page.
Click on the character that you wish to edit. This will open the character creator tool.
You will notice that the page already has a 3D model that Convai randomly assigns when the character is first created. To add your own model, click on the Configure Avatar option from the left menu. This will open the RPM Avatar Creator section.
In the RPM Avatar Creator section, you can either sign in to RPM and access one of your existing characters, or create a completely new one. Here we will create one from scratch. As the character I have chosen is female, I will select the Feminine option.
You can now upload a photo to create an avatar from or you can continue without one. I will select the Continue without a photo option.
Now select a facial structure to start with. We'll select the first one for our example. Click on Next.
Now you can configure your avatar just the way you want. RPM provides a list of configurable options, ranging from minute facial structure and details to hairstyle, outfit, and more. All of these are available on the right-hand side of the RPM Avatar Creator section. Once you are satisfied with the avatar, click on NEXT at the top right corner.
You should now have your own custom RPM avatar. Once the processing completes, you will be able to see the avatar on the right-hand side of the screen.
Now, every time you open your character details page, your newly created avatar will appear.
We currently do not support editing an existing avatar. To make any change, you have to start from scratch. Or, you can log in with an RPM account to easily access and edit existing characters.
Convai API and Plugin Documentation
Convai is a platform for developers and creators. It provides the most intuitive interface for designing characters with multimodal perception abilities in both virtual and real world environments. Creators, game designers, and developers can modify the NPC's backstory, personality, and knowledge at any given moment via the playground or programmatically through the API.
Dive a little deeper and start exploring our API reference to get an idea of everything that's possible with Convai:
Customize language and speech for your AI characters with Convai. Create multilingual, interactive AIs.
This section enables you to choose languages and voice settings for your character.
Multilingual Support: Addressing global language diversity, this multilingual feature enables widespread adoption of human-like AI systems for games and digital human applications. Currently supporting 21 languages, Convai is committed to ensuring more inclusive and accessible AI interactions across different cultures and regions.
You can use the language field to set up to four different languages that your character can speak. These choices define the languages the character can speak and do not affect the input language from the user. Currently, Convai supports the following languages: Arabic, Chinese (Cantonese), Chinese (Mandarin), Dutch, Dutch (Belgium), English, Finnish, French, German, Hindi, Italian, Japanese, Korean, Polish, Portuguese (Brazil), Portuguese (Portugal), Russian, Spanish, Spanish (Mexico), Spanish (US), Swedish, Turkish, and Vietnamese. New languages are added on an ongoing basis.
The Custom Pronunciations feature enhances the pronunciation capabilities of the character, allowing you to specify how certain words should be pronounced. This can be useful for words the character may struggle with or pronounce incorrectly.
To use this feature, add the word you want to specify in the "Word" column. Then, in the "Pronunciation" column, spell out how the word should sound using plain English letters and syllables.
For example, let's try the name "convai":
Spelled As: convai
Pronounced As: convey
Please note that entries in the Custom Pronunciations table are case-sensitive. This gives you control over pronunciations, allowing you to specify different pronunciations for capitalized and non-capitalized versions of the same word if needed.
The New Word Recognition feature enhances the character's speech recognition capabilities, allowing it to better understand unique or challenging words.
To use this feature, simply enter the word you want the system to learn in the "Spelled As" column. Then, in the corresponding "Pronounced As" column, spell out the word's pronunciation using easy-to-read syllables and sounds.
For example, let's try the name "Ankur":
Spelled As: Ankur
Pronounced As: Ahnkur
Configure the Personality and Style of Your Characters
This section enables you to adjust the personality and style of your character. Personality and style settings affect how your character responds and the type of language it uses when speaking. Under the speaking style tab, you can choose from a number of preset styles including things like a cowboy, a pirate, or a jazz musician. You can add to these styles or create your own by adding examples and catchphrases in the field provided.
The personality trait tab in this section allows you to adjust five main personality traits to help you further shape how your character responds. Each trait, including openness, meticulousness, extroversion, agreeableness, and sensitivity, can be adjusted from zero to four, with zero being the lowest setting and four the highest. You can try changing the settings to see how they affect the output from your character. There is also a drop-down list of some predefined personalities for you to explore.
Define state of mind for AI characters in Convai. Create nuanced, realistic interactions.
This section displays a State of Mind graph, which relates to the emotional state of your character. The graph depicts a wheel of emotions and dynamically highlights the particular emotions that your character is experiencing during the conversation. Stay tuned for more about emotions in future updates.
The graph below shows the current emotional state based on the last chat message.
Enhance AI characters with Convai's memory tool. Create more responsive and engaging interactions.
The Memory section allows you to view your previous conversations with the character, organized by date and time. Each chat session is assigned a unique ID for easy reference. You can download, copy, or delete individual chat logs from the character's memory as needed.
Enabling Long-Term Memory
For an even more immersive experience, you can enable the Long-Term Memory feature in the Memory Settings. This allows the character to remember details about you, the player, fostering deeper, personalized relationships over time. With Long-Term Memory enabled, characters can recall your preferences and choices, adapting their responses to your individual playstyle for a truly unique experience.
Convai Unreal Engine Plugin supports the following platforms:

Windows: 4.26, 4.27, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5
MacOS: 5.0, 5.1, 5.2, 5.3, 5.4, 5.5
Android: 4.27, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5
Linux: 5.0, 5.1, 5.2, 5.3, 5.4, 5.5
iOS: Coming Soon
Pixel Streaming: 5.0, 5.1, 5.2, 5.3, 5.4
A2F Lip-sync (Let's Talk): 5.0, 5.1, 5.2, 5.3, 5.4, 5.5
Review the prerequisites for integrating Convai with Unity. Ensure seamless setup and functionality.
The Convai Unity SDK requires Unity 2022.3.x or later.
You should have Git installed locally on your system.
Before integrating the Convai SDK, you should be comfortable with the following:
Importing Packages: Know how to import external packages into a Unity project.
Unity Editor: Be proficient in navigating the Unity Editor interface.
Animations: Understand how to add and handle animations for assets.
Programming in C#: Have basic experience writing Unity scripts in C#.
Script Integration: Be capable of adding scripts to a game object.
Building and Deployment: Know how to build and deploy an application to your chosen platform.
Having these skills will ensure a smooth integration and optimal use of the Convai Unity SDK in your projects.
Understand the limitations of the WebGL plugin for Unity with Convai. Optimize your development.
iOS browsers impose strict limitations on the size of WebGL builds. These constraints are primarily due to:
Memory Limits: iOS devices have limited available memory for web applications, which can affect the performance and feasibility of running large WebGL builds.
Browser Storage Quotas: Safari and other iOS browsers restrict the amount of data that can be stored locally. This includes caching and IndexedDB, which are often used to store assets for WebGL builds.
Maximum Downloadable Asset Size: iOS browsers may restrict the size of individual downloadable assets. Large assets might fail to load, causing the application to break.
Total Build Size: The total size of all assets combined should ideally be kept under 50-100 MB for smooth performance. Exceeding this limit can lead to crashes or extremely slow loading times.
Memory Usage: iOS devices typically have less RAM available compared to desktop environments. High memory usage by WebGL builds can result in frequent browser crashes.
Safari: The default browser on iOS, Safari, is generally the best option for WebGL builds, but it still has significant limitations compared to other desktop browsers.
This document explains how to create Reallusion Characters with Convai Plugin.
Unity Plugin Utilities - Enhance development with Convai's tools and resources.
Session ID Management - Manage unique session IDs for Convai Unity integration.
In a typical application integrating with the Convai API, maintaining a consistent session ID across different sessions is crucial for providing a seamless user experience. This documentation outlines the best practices for storing and retrieving session IDs using Unity's PlayerPrefs, including detailed steps and example scripts.
A session ID uniquely identifies a session between the client and the Convai server. Storing the session ID locally ensures that the same session ID is used across different sessions, which helps in maintaining context and continuity in interactions.
When initializing a session, if a session ID is not available locally, it should be fetched from the server and then stored locally for future use. Here's how you can achieve this:
Fetch and Store Session ID: When initializing a session, check if a session ID is stored locally. If not, fetch a new session ID from the server and store it using PlayerPrefs.
When initializing your application, retrieve the stored session ID to ensure continuity in user interactions.
The following example class demonstrates how to manage session IDs using PlayerPrefs in a Unity project:
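A minimal sketch of such a class, assuming the steps below (the class name and the server-fetch placeholder are illustrative; only the `PlayerPrefs` calls are real Unity APIs — replace `FetchSessionIDFromServerAsync` with your actual Convai request):

```csharp
using System.Threading.Tasks;
using UnityEngine;

// Illustrative helper: stores one session ID per character ID in PlayerPrefs.
public class SessionIDManager
{
    public string SessionID { get; private set; }

    // Check local storage first; fetch from the server only when nothing is stored yet.
    public async Task InitializeSessionIDAsync(string characterID)
    {
        SessionID = PlayerPrefs.GetString(characterID, string.Empty);
        if (string.IsNullOrEmpty(SessionID))
        {
            SessionID = await FetchSessionIDFromServerAsync(characterID);
            PlayerPrefs.SetString(characterID, SessionID);
            PlayerPrefs.Save(); // flush to disk so the ID survives an unexpected quit
        }
    }

    // Placeholder for the Convai API call that returns a new session ID.
    private Task<string> FetchSessionIDFromServerAsync(string characterID)
    {
        return Task.FromResult(string.Empty);
    }
}
```

Keying the stored value by character ID lets one application maintain independent sessions for several characters at once.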
Initialize Session: Call InitializeSessionIDAsync to check if a session ID is stored. If not, fetch and store it.
Store Session ID: Use PlayerPrefs.SetString(characterID, sessionID) to store the session ID locally.
Retrieve Session ID: Use PlayerPrefs.GetString(characterID, string.Empty) to retrieve the stored session ID.
Use Session ID: Pass the session ID to your Convai API calls to maintain session continuity.
Error Handling: Ensure proper error handling when fetching and storing session IDs.
Security: Consider encrypting sensitive information stored in PlayerPrefs.
Performance: Use asynchronous methods to avoid blocking the main thread when fetching session IDs.
Troubleshoot common issues with Convai's Unity plugin. Get solutions for seamless AI integration.
A. Please check if there are any errors in the console. Unity needs to be able to compile all the scripts to be able to display any custom editor menu options. Resolving all the console errors will fix this issue.
A. Primarily, two issues cause errors in the console that can stem from the Convai Unity Plugin. You can use the links below to fix them quickly.
A. This may indicate issues with the microphone. Please ensure that the microphone is connected correctly. You also need to ensure that the application has permission to access the microphone.
A. The animation avatar that we are using might be incompatible with the character mesh. Fixing that can solve the issue.
Default Animations Incompatibility
A. If you are using Unity 2021, unexpected prefab variant issues may arise because the Mobile Transcript UIs are variants of the main transcript UI prefab. Changes to the Prefab system mean this works correctly in Unity 2022, but in Unity 2021 you may still encounter issues with these prefabs. You can remove the redundant Settings Panel Button to address this problem.
A: The animations that we are using may be modifying facial animations. Editing the animations to remove facial animations should fix any issues related to lipsync.
Animations have Facial Blendshapes
A: The script needs the avatar's jaw bone mapping to be left unassigned so that it can manipulate the jaw bone itself.
Jaw Bone in Avatar is not Free
A: macOS's strict security measures can block certain external unsigned DLLs, such as the grpc_csharp_ext.bundle DLL, inside the Unity Editor. To address this, you can manually allow the DLL in "Security & Privacy" settings, modify Gatekeeper's settings through Terminal, ensure correct file permissions for the DLL, check its settings in Unity, and update the Mac Configuration in Unity's Player Settings.
A: The issue is rooted in the grpc_csharp_ext.bundle used in Unity for networking. This DLL has separate versions optimized for Intel and Apple Silicon architectures. When trying to create a Universal build that serves both, compatibility problems arise, especially on Intel Macs. Presently, the best solution is to use Standalone build settings specific to each architecture.
Building for macOS Universal apps
Use this table to navigate to our most common errors.
Enabled Assembly Validation
Unity, by default, checks for exact version numbers for the included assemblies. For our plugin, this is not necessary, since we use the latest libraries.
Missing NewtonSoft Json
Our plugin needs Newtonsoft Json as a dependency. It is often present as part of Unity, but occasionally it can be missing.
Missing Animation Rigging
We use the Animation Rigging package for Eye and Neck tracking. If Unity does not automatically add it, we need to add it manually from the package manager.
Microphone Permission Issues
The microphone icon lights up, but there is no user transcript in the chat UI, and the character seemingly does not reply to what the user is saying.
The plugin requires microphone access which is sometimes not enabled by default.
Default Animations Incompatibility
The default animations that ship with the plugin seem broken. The hands intersect with the body.
The animation avatar is incompatible with the character mesh.
Animations have Facial Blendshapes
The lip-sync from characters is either not visible or very faint.
Some types of animations control facial blendshapes. These animations prevent the lip-sync scripts from properly editing the facial blendshapes.
Jaw Bone in Avatar is not Free
The lip-sync from characters is either not visible or very faint.
The animation avatar for the character may be using the jaw bone. If we set the mapping to None, the script will be able to manipulate the jaw bone freely.
Mac Security Permission Issue
Security permission issues with the grpc_csharp_ext.bundle DLL in Unity on MacOS.
MacOS's security protocols can prevent certain unsigned external DLLs, like grpc_csharp_ext.bundle, from functioning correctly in Unity.
Microphone Permission Issue with Universal Builds on Intel Macs in Unity
No Microphone access request pops up
Incompatibility between the Intel and Apple Silicon versions of grpc_csharp_ext.bundle when attempting a Universal build.
For any other issues, feel free to contact us on the Convai Developer Forum.
This page illustrates the method for integrating actions into your characters.
Welcome to the dynamic realm of character actions! Just imagine: your meticulously crafted characters are no longer standing still, but rather dancing, waving, or even performing complex maneuvers at your command. Isn't it exhilarating to think about the endless possibilities? Whether you want to add a simple gesture or craft an intricate sequence of movements, this page will serve as your gateway. Dive in to learn how to set your characters in motion.
Please note that the action section discussed here is specifically for using actions in the Convai Playground and is not related to game engine mechanics.
Let's examine the Actions interface, and then explore how to assign actions to your characters.
Let us prompt the character, and observe their response before enabling any specific actions.
Add your preferred action
Click the 'Update' button to refresh your character, then prompt it to execute the chosen action.
Add Further Actions to Your Character and click 'Update'
Prompt your complex action
This page demonstrates how to use the knowledge bank to add more information to your character than you can using the backstory alone.
Language models are helpful for a variety of tasks, but their capabilities are limited by their input length. To work around this for your character chatbot, Convai provides a knowledge bank where you can store large amounts of text-based knowledge for your character.
Let us look at the knowledge bank interface and then drill down into each element.
We have two main ways to add a knowledge bank:
Use the Text Box
Upload files (Limited Right now to 1MB)
Let us go through these and see how to use each of these ways.
Two things must be mentioned here:
Currently, we support a total file size of 1 MB. That means the total size of all the files an account uploads is limited to 1 MB.
Please separate words in your file names with underscores when uploading them or when creating a file using the text box.
Right now, we support only uploading text files as a knowledge bank. You can add information for your character as text files and upload them from your computer simply by clicking the “Upload” button.
Once you have uploaded the file, it will require some time before it is available for use (we currently have an upper limit of around 10 minutes for this). Once done, your file will appear under the “Available files on your account”, and the “Connect” button will become green like in the image above. You can then choose to associate the file with the current character.
Once connected, you can ask questions whose answers are only present in the knowledge bank and get the relevant information from your character.
Using the knowledge bank is pretty simple; once you have uploaded the knowledge bank for your character, you can use the chat UI or the /getResponse API to ask about anything stored in the knowledge bank. You should receive accurate results.
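As a sketch of the API route (hedged: the endpoint URL, header name, and field names such as `charID` and `userText` are illustrative — consult the API reference for the exact request shape), a knowledge-bank question goes through the same call as any other chat turn:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class KnowledgeBankQueryExample
{
    // Asks the character a question; answers draw on the backstory plus
    // any knowledge-bank files connected to the character.
    public static async Task<string> AskAsync(string apiKey, string charID, string question)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("CONVAI-API-KEY", apiKey);

        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["charID"] = charID,
            ["userText"] = question, // e.g. a question only answerable from the knowledge bank
            ["sessionID"] = "-1"     // illustrative: request a fresh session
        });

        HttpResponseMessage response = await client.PostAsync(
            "https://api.convai.com/character/getResponse", form);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

No extra parameter selects the knowledge bank: once a file is connected to the character, retrieval happens server-side for every response.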
Let us look at an example. We have a file called "mb4.txt", which is a made-up story about a Moon Base and its commander, Samantha. We can be sure that it was not part of the training data for our models because this story was made up just for this tutorial. Let us see how the character chatbot responds when asked about our dashing commander Samantha without the correct document connected to this character.
That definitely does not look correct; it is just a generic response without any particulars, and there is no mention of the moon base at all. Let us use the UI to connect the relevant file.
Once we have connected the file, you will get a pop-up when the connection is complete. Let us ask our character once again about Samantha and see if we get the correct answer.
As soon as you connect the file, make sure to refresh the page if a chat is already underway. Otherwise, the model might get conflicting information from the chat history and the knowledge bank, producing inconsistencies.
The correct information is present in the reply this time. It means our operation was successful.
We hope this page gives you enough information to start utilising the knowledge bank for your own purposes. Feel free to contact us at support@convai.com if you have any questions.
This guide is designed to help effectively utilize the Knowledge Bank to create engaging and informative conversational AI experiences.
The Convai Knowledge Bank is a powerful tool that allows you to provide your AI character with a wealth of information on various subjects. It uses a technique called RAG (Retrieval-Augmented Generation) to efficiently store and retrieve relevant information during conversations.
RAG works by automatically chunking the uploaded text, PDF, or other files into smaller segments based on the spaces between paragraphs. This enables the AI to quickly locate and access the most relevant information when responding to user queries.
To ensure that your AI character can effectively understand and utilize the information in the Knowledge Bank, follow these best practices when preparing your files:
Single File Format: Upload your information as a single file, such as a text document or PDF.
Paragraph Structure: Each paragraph in the file should focus on a single subject and be approximately 5 lines long. This allows the AI to clearly understand the topic and context of the information.
Q&A Format: Alternatively, you can structure your information in a question-and-answer format within the same paragraph. This helps the AI understand the topic and how to respond appropriately.
To create a more engaging and believable AI character, it's crucial to ensure that the information in the Knowledge Bank aligns with the character's way of thinking and speaking. Consider the following tips:
Use language and terminology that fits the character's background, personality, and domain expertise.
Incorporate the character's unique perspective, opinions, and experiences when crafting the knowledge bank content.
Maintain consistency in tone, style, and information throughout the knowledge bank to reinforce the character's identity.
By following these best practices and tips, you'll be able to create a rich and immersive conversational AI experience that brings your character to life.
Happy creating with the ConvAI Knowledge Bank!
Master narrative design with Convai's character creator tool. Create engaging AI-driven stories.
Narrative Design enables game developers to outline high-level objectives for NPCs, thereby guiding the narrative flow without constraining it to traditionally rigid dialogue trees. This approach allows for behaviour similar to a state machine, but with support for both scripted and dynamic responses as the NPC progresses through the decision-making process, driven by interactions with the player or by triggers from the game state. You can read more about the considerations behind Narrative Design here.
Here is a helpful series of videos outlining how to create a Narrative Design Graph in the Convai Playground, as well as the demo use case we created with a Tour Guide that shows the steps involved in creating your own Narrative Design solution and implementation.
You can find this tool under the "Narrative Design" tab on the Convai Playground. There are three fundamental elements to the graph, Sections, Triggers and Decisions.
Sections consist of two components, Objectives and Decisions, and each Section also has a unique ID for ease of reference and tracking.
This defines the overarching goal that the character aims to fulfill. For example, the initial objective for a museum tour guide NPC is to extend a warm welcome and inquire whether the player is interested in taking a guided tour.
As the conversation unfolds, it becomes essential to adapt to the player's preferences and responses, adjusting the NPC's objectives accordingly. Decisions are critical in this context. Taking the tour guide example further, when the NPC poses the question about taking a tour, the player's affirmative or negative response will lead the NPC to pursue a different objective, tailored to the player's choice. Decisions lead to new sections and allow for the storyline or experience to progress.
You can have a variety of decisions that result from the same Section, each with their own corresponding connection and new objective.
These special characters can be used in the nodes to trigger specific outcomes.
<speak>
<speak> I'll say this exact line! </speak>
When the node is activated, the character will say exactly the phrase enclosed between the <speak> and closing </speak> tags.
*
Forces transition to the next node
There are three types of triggers currently: Spatial, Time-Based, and Event-Based. These are essential mechanisms that enable NPCs to discern when certain conditions have been met or events have occurred before proceeding to the next Section. Each trigger has a unique ID for referencing and tracking.
Spatial triggers are activated when characters or players are in the correct location in the experience. For example, standing in front of an information booth could be the Spatial trigger for the NPC to ask the player if they require assistance.
Time-based triggers fire after a set amount of time has elapsed. For example, if the player has not said anything or responded after a certain number of seconds, the character could repeat the question or ask about the delay, adding more dimension and natural engagement to the experience.
Event-based triggers correspond to events that occur in the experience or game engine that you would want the characters to respond to. For example, if there was an explosion event in the game, you could have that trigger responses and new Sections from the characters within the range of the explosion event.
Manage multiple versions of character and switch between them as required.
In this section, we look into Character Versioning, i.e., maintaining different states of the character. This enables you to preserve a previous stable state before trying out more changes. You can now experiment without the fear of losing an older state of the character, and if you want to discard the current changes and return to a previous version, you can restore that version and continue working from there. We conveniently call these saved states Snapshots of the character.
We will go over the features and how to use them in this section.
The terms Snapshot and Version are used interchangeably in this text; both refer to the same idea: the state and contents that define the character at a specific point in time.
The character versioning option is available at the top right-hand side of the character editor section, beside the Update button.
Once you click on it, you get to see the list of all your previous saved revisions ordered by date.
We will go over the steps of creating and maintaining snapshots from scratch in the next section
Let us start with a character that we already have saved. The data that we see when we open the details related to a character denotes the Current Snapshot of the character. When you interact with the character, you are essentially referring to all the data in this Current Snapshot of the character.
To create a new version, first open the Character Versioning section and click on the + Add Snapshot sign at the top.
A pop-up appears asking you to give your snapshot a name and some description. Please note that a Snapshot Name is a required field to create a new version. Once you have filled the details, click on the Submit button.
Now, you can see the new version in the list of snapshots. What does this version actually represent? This snapshot stores all the data related to the character at that point in time: everything from the character description and embodiment to the knowledge bank files, narrative design structure, and other details.
Suppose you have gone ahead and worked on the character further, but you are unhappy with the results and want to go back to the previous version. This is where you can restore an old snapshot to the current state and work with it again. Here are the steps to follow:
To restore a version, open the Character Versioning section and select the snapshot you want to restore. You will see the Restore Version button below come to life.
Once you click on the Restore Version button, a pop-up appears asking whether you want to save the current changes as a new snapshot or discard them. You have the option to store your current changes as a snapshot and refer back to them later.
For now, we are happy to discard the changes, so we will click on the Restore button. This brings the data from the selected version into the Current Snapshot of the character.
If you want to keep the changes instead, you can always click Cancel, save your progress as a new snapshot, and then perform the restore.
You can also delete a snapshot that you no longer require. To do that, click on the 3-dots beside the corresponding snapshot in the Character Versioning list and select Delete Version.
At any given point you can interact with the Current Snapshot of the character. If you have any publicly available app that utilises the character, your users will only be able to interact with this current version.
We are currently working on a feature to let developers have a deployed version separate from the Current Snapshot.
Integrate advanced conversational AI to create intelligent, interactive NPCs for your games.
Convai's Unity SDK provides you with all the tools you need to integrate conversational AI into your Unity projects. Convai offers specialized NLP-based services to build intelligent NPCs for your games and virtual worlds. Our platform is designed to seamlessly integrate with your game development workflow, enhancing the interactivity and depth of your virtual environments.
Conversational AI: Leverage advanced NLP capabilities to create NPCs that can understand and respond to player input in natural, engaging ways.
Intelligent NPCs: Build characters with dynamic dialogue and behaviors that adapt to player actions and the game world.
Easy Integration: Our SDK is designed for quick and simple integration with your Unity projects, allowing you to focus on creating compelling gameplay experiences.
Cross-Engine Support: In addition to Unity, Convai supports other popular game engines, ensuring broad compatibility and flexibility for your development needs.
This is the Core version of the plugin. It has a sample scene for anyone to get started. This version of the plugin only contains the basic Convai scripts and Character Downloader.
Visit convai.com for more information and support.
Install Convai plugin for Unreal Engine. Follow step-by-step instructions for Visual Studio and XCode setup.
Download Visual Studio from here.
Ensure you have the required C++ toolchains mentioned here. If you already have Visual Studio installed, you may open the installer and select 'Modify' to add the above-mentioned toolchains.
Download XCode from the App Store.
For UE 5.3 and UE 5.0, follow this guide to enable microphone permissions.
There are two methods to install the plugin, depending on your requirements:
Directly from the Marketplace Link: Recommended for easy installation and tracking updates to the plugin. (UE 5.1 - 5.3)
Building from the GitHub Source: For source UE builds and for UE versions unsupported on the marketplace; however, stability is not guaranteed as with the marketplace approach. (UE 4.26 - 5.0)
From the top toolbar, go to Edit > Plugins.
Find the Convai plugin.
Click the checkbox to enable the plugin.
Restart Unreal Engine.
Go to Edit > Project Settings.
Choose Convai under the Plugins section on the left bar.
Paste the API key into the API Key field.
Transmit real-time environmental data to characters without requiring a response. Supported from Plugin Version 3.5.1.
Dynamic Environment Info is a powerful feature that allows users to pass additional environmental data to characters without direct interaction. This enables more immersive and creative gameplay scenarios by enhancing how characters perceive their surroundings.
For example:
The character can understand the time of day (e.g., "time of day is night").
The character can access inventory details (e.g., "You currently have a gun and a healing potion").
The feature supports structured data formats for richer information exchange.
Follow the steps below to integrate Dynamic Environment Info into your project.
Open the Character Blueprint in your project.
In the Begin Play event, locate the ConvaiChatbot
component.
Set the Dynamic Environment Info
variable with a string value of your choice.
Example 1 (Simple):
Example 2 (Structured Format):
Save and Play
Save your blueprint changes.
Hit Play to test the interaction.
Observe how the character dynamically responds based on the information passed.
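As a sketch of what the Dynamic Environment Info string might contain, here are two illustrative values. The exact wording and the JSON-like shape are assumptions for illustration, not a required schema:

```
Example 1 (simple):
Time of day is night. You currently have a gun and a healing potion.

Example 2 (structured):
{"time_of_day": "night", "inventory": ["gun", "healing potion"]}
```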
By following these simple steps, you can unlock more engaging gameplay mechanics and enrich the interactions.
Set up the Convai Unreal Engine plugin and add conversational AI to your apps.
Get started with sample projects for some of our popular tutorials:
Creating an Intelligent Character with Convai
Here, we outline the steps to create your own AI character on Convai.com in a simple way. We will cover advanced configuration options in separate sections later.
Sign in to Convai and click on the Create Character button from your Dashboard.
The landing page provides intuitive fields for creating a character. Don't worry about configuring an avatar yet - we'll cover that later.
Enter a name for your character, select a voice from the provided options, and write a backstory.
Click Create Character when you are satisfied with your character details.
You will receive a unique ID for your new character. This will allow you to access the character outside of Convai.com. A default avatar will be loaded that you can customize later.
That's it! You now have a basic character to work with. You can start talking to your character in the chat window on the right side of the page.
In the next few pages we'll go through more advanced options for enhancing your character.
Check Convai plugin compatibility with Unity. Ensure smooth integration with your development tools.
The minimum supported Unity version is 2022.3.x. Earlier versions may not be compatible.
Platform | Scripting Backend | API Compatibility Level
Windows | MONO | .NET Standard 2.1 or .NET 4.x+
MacOS | MONO | .NET Standard 2.1 or .NET 4.x+
Android | IL2CPP | .NET 4.x+
iOS | IL2CPP | .NET 4.x+
Scripting Backend | API Compatibility Level
MONO | .NET Standard 2.1 or .NET 4.x+
IL2CPP | .NET 4.x+
Unity Version | API Compatibility Level
2022.3 or higher | .NET Standard 2.1 or .NET Framework
Download Convai tools for Unity. Access the latest plugins and updates for AI integration.
Unity Verified Solution
This is the Long-Term Support release of our Core plugin. It contains all the necessary tools for adding conversational AI to your characters.
WebGL
This plugin version should be used if you need to build for WebGL. Please ensure that Git is installed on your computer prior to proceeding.
There are some limitations to the WebGL version of the plugin. To learn about them, please go to Limitations of WebGL Plugin.
Follow these instructions to set up the Unity Plugin in your project.
The file structure belongs to the Core version of the plugin downloaded from the documentation.
In the Menu Bar, go to Convai > API Key Setup.
Go to convai.com, and sign in to your Convai account. Signing in will redirect you to the Dashboard. From the dashboard, grab your API key.
Enter the API Key and click begin.
This will create an APIKey asset in the resources folder. This contains your API Key.
Open the demo scene by going to Convai > Demo > Scenes > Full Features
Click the Convai NPC Amelia and add the Character ID (or you can keep the default character ID). You can get the character ID for your custom character from this page Create Character. Now you can converse with the character. The script is set up so that you have to go near the character for them to hear you.
Now you can test out the Convai Demo Scene and talk to the character present there. Her name is Amelia and she loves hiking!
You can open the Convai NPC Script to replicate or build on the script to create new NPCs.
Try to extend the ConvaiNPC.cs script instead of directly modifying it, to maintain compatibility with other scripts.
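As a sketch of what "extend instead of modify" can look like in practice: put your additions in a sibling component that references ConvaiNPC, rather than editing the plugin file. The component and member names below are illustrative assumptions, not part of the Convai API.

```csharp
using UnityEngine;

// Illustrative sketch: keep custom behaviour in your own component on the
// same GameObject instead of editing ConvaiNPC.cs, so plugin updates
// don't overwrite your changes. (ConvaiNPC's members vary by version.)
[RequireComponent(typeof(ConvaiNPC))]
public class MyNPCExtensions : MonoBehaviour
{
    private ConvaiNPC npc;

    private void Awake()
    {
        // Grab the plugin component once; interact with it from here.
        npc = GetComponent<ConvaiNPC>();
    }

    private void Update()
    {
        // React to NPC state here, e.g. play footsteps, update custom UI.
    }
}
```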
If you ever get an error that looks like this, disable Assembly Version Validation in Project Settings > Player > Other Settings.
Restarting the Unity project after unchecking the box should fix the issue.
Adjust Interaction Radius - Unreal Engine Guide for Convai integration.
Go to your player blueprint.
Click on the gear icon and select Show Inherited Variables
under My Blueprint
tab.
Search for the variable MaxChatDistance
.
Set it to a number of your choice. Note: Set it to 0 for infinite distance.
Change AI Character Movement Speed - Unreal Engine Guide with Convai.
Open your AI character blueprint.
Select the Floating Pawn Movement
component from the components list.
Set the Max Speed
field under the details panel to your required speed.
Resolve microphone permission issues in Unity with Convai. Ensure smooth voice interactions.
If you see the microphone indicator turn on in the top-left corner but no user transcript appears in the chat UI, and the character's response doesn't seem coherent with what you said, then it is likely that the game or Unity is not accessing the correct microphone or does not have sufficient microphone privileges. To fix this, please follow along.
Resolve facial blendshape issues in Unity animations with Convai. Improve character realism.
If the lip-sync on characters is either not visible or very faint, it could be a result of the character's animations overriding the blendshape changes made by the script. We recommend deleting the relevant components in the animation dopesheet.
Fix default animation incompatibilities in Unity with Convai. Ensure smooth AI character animations.
If the default animations that ship with the animator look bugged such that the hand seems to intersect with the body, it could indicate an issue with the wrong animation avatar being selected.
You can easily fix that by heading to the character's animator component and assigning the correct animator to the Avatar field.
The correct animation will look something like this. The hands should not intersect the body.
Fix missing Newtonsoft JSON issues in Unity with Convai. Resolve integration problems efficiently.
Our plugin has various scripts and dependencies that use Newtonsoft Json. If Newtonsoft Json is missing from the plugin, it could lead to a large number of errors as shown below:
Ensure that Newtonsoft.Json is present in your packages. Go to your project folder, then navigate to the Packages folder and click on manifest.json. A JSON file containing the project dependencies should open up.
Add the Newtonsoft Json Package on top.
The final manifest.json should look like this.
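For reference, a minimal manifest.json with the Newtonsoft package added at the top of the dependencies might look like the snippet below. The version number is only an example; use the latest available, and keep your project's existing dependencies in the list.

```json
{
  "dependencies": {
    "com.unity.nuget.newtonsoft-json": "3.2.1"
  }
}
```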
macOS security permission issue with custom DLLs in Unity and Mac Configuration in build settings
Using external DLLs in Unity on macOS can lead to security permission issues due to Apple's strict security measures. Here's a step-by-step guide to resolving this common problem.
Verify the Problem:
Manually Allow Blocked DLLs:
Open System Preferences on your Mac.
Navigate to "Security & Privacy".
Under the "Security" tab, you might see a message at the bottom about the DLL being blocked. Click "Allow Anyway" or "Open Anyway" and enter password if asked.
Modify Gatekeeper settings: macOS's Gatekeeper can prevent software from unidentified developers from running. To allow the DLL:
Open the Terminal (found in Applications > Utilities).
Type sudo spctl --master-disable
and press Enter.
This command will allow apps to be downloaded from anywhere.
Now, try running the Unity project again.
After you're done, you should re-enable Gatekeeper with sudo spctl --master-enable
to avoid any malware.
Check File Permissions: Ensure the DLL has the correct file permissions.
In Finder, right-click (or control-click) on the DLL file and choose "Get Info".
Under “Sharing & Permissions”, ensure that your user account has "Read & Write" permissions.
Review Unity's Plugin Settings:
In the Unity editor, select the DLL in the Project view.
In the Inspector window, make sure the appropriate platform (in this case, Mac OS X) and architecture (Apple Silicon, Intel-64) are selected for the DLL.
Ensure that the "Load on Startup" and other pertinent options are checked (should be enabled by default)
Update Mac Configuration:
In Unity, navigate to Edit > Project Settings > Player
.
Scroll down and click on Other Settings
Scroll down again to find Mac Configuration section
Update the Mac Configuration section (follow the below Screenshot)
This document explains how to create ReadyPlayerMe Characters with Convai Plugin.
This document explains how to make characters respond to events happening in their surroundings with a small example.
Our goal in this example is to have the character welcome the player whenever the player enters a certain area. This can be done using the Invoke Speech node, which invokes the AI character to talk, together with a simple collision box.
Open your AI character blueprint and select the Viewport
tab.
Note: the character blueprint can be a MetaHuman, ReadyPlayerMe, Reallusion or even a custom one you have created, just ensure that it has the Convai Chatbot
component.
From the Components
list add a Box Collision
.
Switch back to the Event Graph
tab.
Select the Box Collision
you just added and scroll down in the Details panel
. Under Events
add the On Component Begin Overlap
event to your event graph.
Setup the following blueprint schematic which uses the Invoke Speech
node from the Convai Chatbot
component.
Enter a Trigger Message
that expresses what happened (i.e. "Player Approached") and you can add a simple instruction (i.e. Greet the player).
Setting the In Generate Actions
and In Voice Response
boolean to true will let the Convai Characters perform actions and generate audio responses respectively.
Hit Compile
and Save
then run the program.
Approaching within a certain vicinity will trigger the event, and the Convai character will greet the player as specified in the Trigger Message.
The above example is just a simple use case. However, the use of Invoke Speech opens new doors to limitless use cases.
Fix jaw bone issues in Unity avatars with Convai. Ensure smooth lip sync and animations.
If the Lip Sync does not seem to cause any facial animations, even after removing all blendshapes from animations, then the following steps should help resolve the issue.
This is a known issue in Reallusion CC4 characters.
Select the Character and head to the Animator component.
Click the Avatar Field once to select the character's avatar in the Project window.
Select the Avatar and click Configure Avatar.
Select the Head option in the Mapping tab.
Select the Jaw Mapping and set it to None.
Finally scroll down and click Apply.
This will free the avatar's jaw mapping and allow the script to manipulate the Jaw bones.
This document explains how to detect words and perform certain operations based on it.
Open your Convai Character blueprint and click on Class Settings
and then on ConvaiChatbot
component.
Under the Details
section scroll down to the Events
section and add On Transcription Received
event.
Once we have the transcription of player input, we can perform a substring search on it.
The Print String at the end is just an example. You can add your logic after the substring match.
Follow these instructions to bring a character from the Convai Playground into your Unity Project.
This is how you can import characters from the Convai Playground into your Unity Project.
In the Menu Bar, go to Convai > Character Importer.
Enter the Character ID and click Import.
If you are unsure how to get the character ID, click "How do I create a character?".
You can get the character ID from the Character Description.
The download will take a while. On successful download, you will see the character in the scene as a GameObject named after the character ID.
This character will automatically be set up with the basic Convai Setup including the ConvaiNPC Script and Out-Of-Box Animations.
If you are facing issues with the animations in your imported character, make sure to change the animation type of Ellen@IdleNew
and Ellen@TalkingNew
Animations in the Assets/Convai/Animations
folder to Humanoid
.
Now you are ready to set up the character with transcriptions.
This guide will help you make a scene in Unity with the Convai essentials already present in it, so you can get started with our plugin very quickly.
You can open the New Scene window in two ways: by pressing Ctrl + N on Windows or CMD + N on Mac, or by navigating to File -> New Scene.
There will be many scene templates depending upon your project, but in this guide we are interested in the Convai Scene Template, so select it and click on the Create button.
You can now save the newly created scene at your desired location by pressing Ctrl + S on Windows or CMD + S on Mac, or by navigating to File -> Save Scene.
This opens the Save Scene window. Choose your desired location; for this demo we will save the scene inside the Demo folder, but you can save it anywhere in the Assets directory.
Give your scene a name and then click on the Save Scene button.
Now you can import your Convai Character or your Custom Characters by following our complete guide on it
Learn to add lip sync to your Unity characters using Convai. Improve realism and interactivity.
Convai sends either Visemes or Blend Shape Frames from the back-end, depending upon the face model the developer chooses to use. Out of the box, the Convai SDK extracts and parses this data and provides it to the Convai LipSync Component, which then relies on the SkinnedMeshRenderer's Blendshape Effectors and Bone Effectors to give Convai-powered NPCs realistic lip-sync.
This is where the developer tells the Convai SDK which indices of the blendshape array will be affected, and by how much, for each incoming value. To better explain how it works, let's look at a diagram.
Here, whatever value comes from the server will affect the blendshape at the 116th index by a 0.2 multiplier and the blendshape at the 114th index by a 0.5 multiplier. The engine representation would look something like this.
So, you can make your own Effector list or use one of the many that we ship in the SDK.
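Conceptually, the effector mapping boils down to scaling the incoming viseme weight by each multiplier and writing the result to the mapped blendshape index. A minimal sketch, assuming the illustrative indices and multipliers from the diagram above (the actual SDK classes and ranges may differ):

```csharp
using UnityEngine;

// Conceptual sketch of a viseme skin effector: one incoming viseme weight
// drives several blendshapes, each scaled by its own multiplier.
public class VisemeEffectorSketch : MonoBehaviour
{
    public SkinnedMeshRenderer head;

    // Mirrors the diagram: the server value affects blendshape index 116
    // by a 0.2 multiplier and index 114 by a 0.5 multiplier (illustrative).
    private readonly (int index, float multiplier)[] effectors =
        { (116, 0.2f), (114, 0.5f) };

    public void ApplyViseme(float serverValue) // serverValue in [0, 1]
    {
        // Unity blendshape weights are expressed in the 0-100 range.
        foreach (var (index, multiplier) in effectors)
            head.SetBlendShapeWeight(index, serverValue * multiplier * 100f);
    }
}
```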
How to Create your own Viseme Effector List
Right click inside project panel and head over to Create > Convai > Expression > Viseme Skin Effector
which will create a Viseme Effector List Scriptable Object and now you can define your own values.
This is where the developer tells the Convai SDK how much each value coming from the server will affect the rotation of the bone. To better explain how it works, let's look at a diagram.
Here, the bone's rotation will be affected by the values coming from the server multiplied by the values in the effector list. For example, for TH the value will affect the bone's rotation by a 0.2 multiplier, and so on. The engine representation would look something like this.
So, you can make your own Bone Effector list or use one of the many that we ship in the SDK.
We use this formula to calculate the rotation
How to Create Your Own Viseme Bone Effector List
Right click inside the project panel and head over to Create > Convai > Expression > Viseme Bone Effector
which will create a Viseme Bone Effector List Scriptable Object and now you can define your own values.
When you attach this component to your Convai Character, you will see something like this.
Let's learn what these fields are.
Facial Expression Data
Head | Teeth | Tongue
Renderer: Skinned Mesh Renderer that corresponds to the specified part of the body.
Viseme Effectors List: How the SkinnedMeshRenderer's blendshapes will be affected by values coming from the server.
Jaw | Tongue Bone Effector
How much the bone's rotation will be affected by values coming from the server.
Jaw | Tongue Bone
Reference to the bones that control the jaw and tongue, respectively.
Weight Blending Power
Percentage to interpolate between two frames in late update.
Character Emotions
Learn More about Character Emotions here Character Emotion
Now you can configure the component according to your custom configuration or use one of the many presets Convai ships with the SDK.
Your lipsync component is now ready to use in your application.
This guide shows how to dynamically pass variables to the Narrative Design section and triggers.
We will create a simple scenario where the character welcomes the player and asks them about their evening or morning based on the player's time of day.
Activate the Narrative Design for your character in the Playground. Then, create a new Section.
In the Objective section of the new Section, add the following text:
The time of day currently is {TimeOfDay}. Welcome the player and ask him how his {TimeOfDay} is going.
Notice that any string placed between curly brackets becomes a variable. In this case, we are adding the time of day as a variable. From Unity, we can pass either the word "Morning" or "Evening," and the character will respond accordingly.
Now, let's go back to Unity and make the necessary adjustments. Click on your NPC.
Click the Add Component button and add the Narrative Design Key Controller Component.
In the Name field, enter TimeOfDay. In the Value field, specify the corresponding value for that variable, which could be Morning, Evening, or anything else you choose.
Follow these instructions to enable actions for your Convai-powered characters.
Select the Convai NPC character from the hierarchy.
Scroll down to the ConvaiNPC script attached to your character.
Click the "Add Component" button.
Use the checkbox to add the action script to the NPC Actions.
Click "Apply Changes" to confirm.
Convai offers predefined actions for a quick start.
Click the "+" button to add a new action.
From the dropdown menu, select "Move To."
Enter the action name as "Move To" (the name doesn't have to match the action choice name).
Leave the Animation Name field empty for now.
Repeat these steps to add more actions like "Pickup" and "Drop" etc.
Add any object into the scene (a sphere, a cube, a rock, etc.) that can be interacted with.
Resize and place the object in your scene.
Create an empty GameObject and name it "Convai Interactables."
Attach the Convai Interactables Data script to this GameObject.
Add characters and objects to the script by clicking the "+" button and attaching the corresponding GameObjects.
Add the "There" object in Objects list, so that we can use the Dynamic Move Target indicator.
Add the Dynamic Move Target Indicator and set up a NavMesh agent on your NPC.
To ensure your NPCs can navigate the scene:
Bake a NavMesh for your scene if you haven't already:
Go to Window > AI > Navigation.
In the Navigation window, under the Bake tab, adjust the settings as needed.
Click "Bake" to generate the NavMesh.
Ensure that the NPC character has a NavMeshAgent component:
If not already attached, click "Add Component" and search for NavMeshAgent.
Adjust the Agent Radius, Speed, and other parameters according to your NPC's requirements.
To visually indicate where your NPC will move:
Create a new empty GameObject in the scene and name it accordingly or use the pre-made prefab named Dynamic Move Target Indicator.
Link this Move Target Indicator to your NPC's action script so it updates dynamically when you point the cursor to the ground and ask the NPC to move to "There".
Click "Play" to start the scene.
Ask the NPC, "Bring me the Box."
If set up properly, the NPC should walk up to the box and bring it to you.
This feature is currently experimental and can misbehave. Feel free to try it out and leave us any feedback.
Make your NPC perform custom actions like dancing.
Locate the dance animation file within our plugin.
Incorporate this animation into your NPC's actions.
Open the Animator Controller from the Inspector window.
Drag and drop the dance animation onto the controller, creating a new node named "Dancing."
Go to the Action Handler Script attached to your Convai NPC.
Add a new action named "Dancing."
In the Animation Name field, enter "Dancing" (it must exactly match the Animator Controller node name).
Leave the enum as "None."
Click "Play" to start the scene.
Instruct the NPC, "Show me a dance move," and the NPC should start dancing.
Adding advanced custom actions, such as a throw action, to your NPC.
Grab a throw animation from Mixamo or anywhere you like.
Import it into Unity.
Drag and drop the throw animation onto the controller, creating a new node named "Throwing." (Follow steps in #action-that-only-requires-an-animation)
Add the "Throw" enum to the script.
In the "Do Action" function, add a switch case for the throw action.
Define the "Throw()" function.
Add a new action named "Throw" and select the "Throw" enum.
Leave the animation name field empty.
Add any rock prefab into the scene.
Add the rock to the Convai Interactable Data script.
Add a stage/new location in the ground of the scene.
Add that new location game object in the Convai Interactable Data.
Click "Play" to start the scene.
Instruct the NPC, "Pick up the rock and throw it from the stage."
If everything is set up properly, the NPC should pick up the rock and throw it from the stage.
Follow these instructions to set up your imported character with Custom Model with Convai.
To import your custom characters into your Convai-powered Unity project, you will first need to bring your model into your project. The model needs at least two animations: one for talking and one for Idle.
When you want to set up your custom character with Convai, you will need your character model and two animations: Idle and Talking.
Create an animator controller with the two animations that looks like this. You should also add a 'Talk' Boolean to ensure that you can trigger the animation. Here is a YouTube tutorial on how to set up an animator controller. This is the bare minimum animator setup that you need to do.
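The 'Talk' Boolean described above can then be driven from script. A minimal sketch, assuming the parameter is named exactly "Talk" as suggested (the component that calls SetTalking is your own, not part of the Convai SDK):

```csharp
using UnityEngine;

// Minimal sketch: toggle the 'Talk' parameter on the Animator so the
// controller transitions between the Idle and Talking states.
public class TalkAnimatorDriver : MonoBehaviour
{
    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    // Call with true when the character starts speaking, false when done.
    public void SetTalking(bool isTalking)
    {
        animator.SetBool("Talk", isTalking);
    }
}
```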
Select your character from the Hierarchy and Add Animator Component
The Convai plugin ships with two pre-made animator controllers. You can choose one of these controllers or assign your own custom controller, whatever fits your needs. For this demo, we are going with the Feminine NPC Animator.
With your custom character selected, add a collision shape of your choice; for this demo, we are going with a Capsule Collider.
We will make this collider a trigger by enabling the Is Trigger option in the Inspector panel.
We will adjust the Center, Radius, and Height of the collider so that it fits our character.
With your custom character selected, add the ConvaiNPC component. By doing so, your GameObject should look like this:
We assume that you have added only the components instructed above; your GameObject's component list may differ.
Copy your character's ID and name from Convai Playground and paste them here.
Now your Custom Character is all set to work with Convai Plugin.
You can point at Interactable Objects and Characters and ask your characters about them.
To enable this, simply drag and drop the Convai Crosshair Canvas
prefab into the scene.
This guide will walk you through setting up the NPC to NPC conversation feature in the Convai SDK.
Go to your Convai NPCs:
Select the NPCs you want to include in the conversation.
Enable Group NPC Controller:
Click on the Group NPC Controller
checkbox in the inspector panel.
Click Apply Changes
to add the group NPC controller script.
Create or Find the Speech Bubble Prefab:
Create a new speech bubble prefab or use the one provided in the Prefabs
folder.
Attach Required Components:
Add the speech bubble prefab and the player transform (optional, defaults to the main camera if not provided).
Set the conversation distance threshold variable (set it to zero to disable this feature, meaning NPC to NPC conversations will always happen regardless of the player’s distance).
Add Relevant Components:
Add components like lip sync, eye and head tracking, character blinking, etc., to the Convai NPC.
Create an NPC To NPC Manager GameObject:
Add an empty GameObject and rename it to NPC to NPC Manager (optional).
Add the NPC2NPC Conversation Manager Script:
Attach the NPC2NPCConversationManager script to the GameObject.
Configure the NPC Group List:
In the NPC Group List, click on the + icon to add a new list element.
Add the NPCs you want to include in the group conversation.
Set the group discussion topic.
After configuring the NPCs:
Bring the NPCs close together
Play the scene to make sure everything is working as intended.
By following these steps you can set up and manage NPC to NPC conversations in your Convai-powered application. For further customization and integration, refer to the complete implementation code and adjust it as needed for your specific use case.
All the information that Convai SDK needs from the player to work properly
This is a Scriptable Object that is created automatically after you hit Play in the editor, provided the Convai SDK is installed and the scene contains the Convai Base Scene Essentials prefab.
Default Player Name
You can provide a default name for your players.
Player Name
The current name of your player. Out of the box, if you use our settings panel, we keep it updated automatically. If you are using custom logic, it is your responsibility to keep it updated, as our transcript UI uses this name when displaying messages.
Speaker ID
Speaker ID for the player. Please note that Speaker ID is directly linked with your API key, so for each API key there should be a unique speaker ID associated with it. We handle the creation of the Speaker ID when it's not found in the Player Prefs if the Boolean is set to true.
Create Speaker ID If Not Found
This Boolean tells the SDK whether it should create a unique Speaker ID for that Player Name if one is not found in the Player Prefs.
Empties the Player Name and Speaker ID fields.
Copies the data to the system clipboard so you can paste it anywhere for debugging purposes.
Load: Loads the Player Name and associated Speaker ID from the player Prefs
Save: Saves the Player Name and associated Speaker ID to the Player Prefs
Delete: Deletes the Player Name and associated Speaker ID from the player Prefs
Convai provides a pre-made component that you can add to any GameObject to make the PlayerDataContainer work out of the box.
Choose an existing GameObject or create a new one in the scene, add the ConvaiPlayerDataHandler component to it, and it should start working.
You can also create the required Scriptable Object manually: go to Assets > Convai > Resources, right-click in the Project panel, navigate to Create > Convai > Player Data, and name the asset ConvaiPlayerDataSO.
Make sure you name the created Scriptable Object exactly ConvaiPlayerDataSO, as our system looks for this exact name.
Learn how to enable characters to retain conversation history across multiple sessions
Long-Term Memory (LTM) enables the persistent storage of conversational history with NPCs, allowing players to seamlessly continue interactions from where they previously left off, even across multiple sessions. This feature significantly enhances the realism of NPCs, aligning with our goal of creating more immersive and lifelike characters within your game.
Prerequisite: Have a project with Convai SDK version 3.1.0 or higher. If you don't have it, check this documentation
Select your Convai Character
Add the Long-Term Memory Component onto your character
Make sure that Long Term Memory is enabled for that character
Long Term Memory should now be working for your character.
This component enables or disables LTM right from the Unity editor
Toggling Long Term Memory
1) Click the button provided in the component
2) It will take some time to update; afterwards, the new status of the LTM will be visible in the Inspector.
Since enabling or disabling Long-Term Memory (LTM) for a character is a global action that impacts all players interacting with that character, we strongly recommend against toggling the LTM status at runtime. This functionality should be managed exclusively by developers or designers through the editor to ensure consistent gameplay experiences.
Grpc.Core.RpcException: Status(StatusCode=InvalidArgument, Detail="Cannot find speaker with id: 99fbef96-5ecb-11ef-93ce-42010a7be011.")
If you encounter this error, ensure that the SpeakerID was created using the same API key currently in use. If you're uncertain about the API key used, you can reset the SpeakerID and PlayerName by accessing the ConvaiPlayerDataSO file located in Assets > Convai > Resources, allowing you to start the process anew.
It is essential for developers to efficiently manage the Speaker ID(s) generated using their API key, as the number of IDs that can be created is limited and dependent on the subscription tier. Proper management ensures optimal usage of resources and prevents potential disruptions in the application's functionality.
Speaker ID limits by subscription tier:
Personal: 1
Gamer / Indie / Professional: 5
Partner / Enterprise: 100 (can be customized)
You can view all the Speaker ID(s) associated with a specific API key by accessing the Convai Window within your Unity project. This feature provides a comprehensive list of IDs, allowing for easier management and monitoring.
Ensure that the API key is correctly entered; otherwise, the feature will not function as expected. Accurate API key input is critical for accessing and managing Speaker ID(s) through the Convai Window in Unity.
Head over to the Long Term Memory section
If the message "No Speaker ID(s) Found" appears, there is no need to proceed with this guide. However, if a Speaker ID list is displayed, it's advisable to delete any ID(s) that are no longer in use or needed to optimize your available resources.
Convai offers comprehensive transcript and voice support for a wide range of languages. To facilitate seamless integration, our Unity plugin comes with a custom TextMeshPro (TMP) package, which includes essential fonts and required settings for major languages.
This requires TMP Essentials pre-installed, which can be done through the TextMeshPro option in the Window tab or through a prompt on starting the project.
To implement these language-specific features in your project:
Navigate to the Convai Setup Window within Unity.
Locate the Package Management section.
Click on the "Convai Custom TMP Package" button.
Once installed, just import the character for which you require language support, talk with it, and the font will automatically render in the transcript.
For now, we provide fonts for these languages:
Arabic
Japanese
Korean
Chinese
We also provide support for Right-to-Left languages, such as Urdu, Persian, and Arabic, through our Chat UIs. For example, if you talk with an Arabic character, or if the character's name is in Arabic, the text will automatically enable the RTL feature provided by Unity so that transcripts display correctly.
Follow this step-by-step guide to incorporate Narrative Design into your Convai-powered characters. Open your project, and let's begin!
For this demo, we are using Seraphine Whisperwind; you can select whichever character you want to enable Narrative Design for.
Select the Narrative Design option from the side panel and create your narrative design
For more information on how to create a narrative design in the Convai Playground, please refer to the following YouTube video series.
For this sample we have created the following Narrative design
You are all set to bring your character from Convai Playground to Unity; let's hop over to Unity to continue the guide.
Type Narrative Design Manager in the search box and select it. After adding the Narrative Design component, you will see the following component:
This component assumes that the API key is set up correctly, so ensure it is; otherwise an error will be thrown.
After it is added, the component will retrieve the sections for the character ID taken from the ConvaiNPC; please allow some time for this, depending on your network speed.
The following section events are for the character used in this demo; you will see section events corresponding to your own character that has Narrative Design enabled.
Expanding a section event, you will see two Unity events you can subscribe to: one is triggered when the section starts, and the other when the section ends.
Section triggers are a way to directly invoke a section in narrative design and can be used to jump to a different section in your narrative design
Make sure that the game object you have chosen as a trigger has a collider attached to it.
Now you can select from the "Trigger" dropdown which trigger should be invoked when player enters this trigger box.
We have also added a way for you to manually invoke this trigger: you can use the InvokeSelectedTrigger function to invoke the trigger from anywhere.
You can use this code block as a reference to invoke the trigger from anywhere
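A minimal sketch of such a call, assuming the trigger component is the SDK's NarrativeDesignTrigger (adjust the type name and namespace to your SDK version; the key binding is arbitrary):

```csharp
using UnityEngine;

// Hypothetical helper: manually invokes the trigger selected on a
// NarrativeDesignTrigger component when the player presses T.
public class ManualTriggerInvoker : MonoBehaviour
{
    [SerializeField] private NarrativeDesignTrigger narrativeDesignTrigger;

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.T))
            narrativeDesignTrigger.InvokeSelectedTrigger();
    }
}
```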
Input Management - Efficiently handle input for Convai's Unity plugin integration.
Make sure that Active Input Handling in "Project Settings > Player" is set to Both or Input System Package (New).
Our recommendation is Both. This way, you can use both the new and old input systems. Using the old input system can be faster when creating inputs for testing purposes.
Double click on the "Controls" asset in your project tab.
You can setup multiple control schemes for different devices here, currently we have it for PC (Keyboard & Mouse) and Gamepad. For mobile, we have provided joystick and buttons, which are mapped to Gamepad controls for functionality, but you can directly add touchscreen and use its different features to trigger an Input Action. You can also add your own control scheme if you want support for a different device by clicking on "Add Control Scheme".
Find the Input Action you want to change in the above window. If you want to add a new Input Action, refer to the other section in the documentation. In this case, we selected "Talk Key Action" to change the talk button. Click on "T [Keyboard]". In the Binding Properties window, click on the "T [Keyboard]" button in the Path field.
Press the " Listen " button in the top left of the opened window. If you prefer, you can choose your desired input from the categories below.
Press the key you want to assign and it will be reflected in the control asset.
First, go to the controls asset mentioned above and use the add button to create an input action. For this example, we will call it Interact and bind it to the [E] key.
Then, click on the <No Binding> item to set up the binding for this action. As before, you can use the Listen button (it has a UI bug on Windows but works) or select the key from the dropdown. After selecting the binding (we will select the [E] key for this), don't forget to press the Save Asset option in the top menu.
You will now get an error saying ConvaiInputManager does not implement OnInteract. We need to implement this. Open the " ConvaiInputManager.cs " script to do so. ( " Convai / Scripts / Runtime / Core / ConvaiInputManager.cs " )
Your IDE might suggest implementing the missing members. If it doesn't, we can manually write the OnInteract function as shown in the last figure. You receive a CallbackContext that tells you in which frame the input started, was performed, or was canceled, which you can use for different purposes. With that, the error should be gone and you are good to go!
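As a sketch, the missing member could look like this; only the signature is dictated by the generated interface, and the log messages are illustrative:

```csharp
// Goes inside the existing ConvaiInputManager class
// (Convai/Scripts/Runtime/Core/ConvaiInputManager.cs).
public void OnInteract(UnityEngine.InputSystem.InputAction.CallbackContext context)
{
    if (context.started) Debug.Log("Interact pressed");     // frame the input began
    if (context.performed) Debug.Log("Interact performed"); // action fully triggered
    if (context.canceled) Debug.Log("Interact released");   // input ended
}
```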
Transcript UI System - Integrate transcript UI with Convai's Unity plugin.
The Dynamic UI system is a feature within the Convai Unity SDK that provides developers a robust system for in-game communication. This feature allows for displaying messages from characters and players and supports various UI components for chat, Q&A sessions, subtitles, and custom UI types. This document will guide you through the integration, usage, and creation of custom UI types of the Dynamic UI feature in your Unity project.
To interact with the chat system, you need to reference the ConvaiChatUIHandler in your scripts. You can find the Transcript UI prefab in the Prefabs folder.
Here's an example of how to find and assign the handler:
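A minimal sketch, assuming the handler lives on the Transcript UI prefab already placed in the scene:

```csharp
using UnityEngine;

public class ChatHandlerExample : MonoBehaviour
{
    private ConvaiChatUIHandler _chatUIHandler;

    private void Awake()
    {
        // One-off scene lookup; cache the result rather than searching every frame.
        _chatUIHandler = FindObjectOfType<ConvaiChatUIHandler>();
        if (_chatUIHandler == null)
            Debug.LogError("No ConvaiChatUIHandler found. Did you add the Transcript UI prefab?");
    }
}
```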
Once you have a reference to the ConvaiChatUIHandler, you can send messages using the following methods:
Sending Player Text
To send text as the player:
input: The string containing the player's message.
To send text as a character:
characterName: The name of the character sending the message.
currentResponseAudio.AudioTranscript: The transcript of the audio response from the character, trimmed of any leading or trailing whitespace.
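Putting both calls together in one place (the wrapper class and serialized reference are illustrative; only the two Send methods come from the SDK):

```csharp
using UnityEngine;

public class TranscriptSender : MonoBehaviour
{
    [SerializeField] private ConvaiChatUIHandler chatUIHandler;

    // Display a message typed or spoken by the player.
    public void ShowPlayerMessage(string input)
    {
        chatUIHandler.SendPlayerText(input);
    }

    // Display a character response, trimmed of surrounding whitespace.
    public void ShowCharacterMessage(string characterName, string audioTranscript)
    {
        chatUIHandler.SendCharacterText(characterName, audioTranscript.Trim());
    }
}
```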
While the Dynamic UI system within the SDK provides several pre-built UI types, you may want to create a custom UI that better fits the style and needs of your game. The system is designed to be extensible: developers can add their own custom UI types by inheriting from the ChatUIBase class and implementing the required methods. The ConvaiChatUIHandler manages the different UI types and provides a system to switch between them.
To create a custom UI type, follow these steps:
Create a new C# script in your Unity project and define your class to inherit from ChatUIBase. For example:
Implement the abstract methods from ChatUIBase. You must provide implementations for Initialize, SendCharacterText, and SendPlayerText:
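A skeleton of such a class (the method signatures here are illustrative; match them to the abstract definitions in your version of ChatUIBase):

```csharp
using UnityEngine;

public class CustomChatUI : ChatUIBase
{
    public override void Initialize(GameObject uiPrefab)
    {
        // Instantiate and wire up your UI elements here.
    }

    public override void SendCharacterText(string characterName, string text)
    {
        Debug.Log($"{characterName}: {text}"); // replace with your rendering logic
    }

    public override void SendPlayerText(string playerName, string text)
    {
        Debug.Log($"{playerName}: {text}"); // replace with your rendering logic
    }
}
```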
Add any additional functionality or customization options that your custom UI may require.
To use your custom UI class with the ConvaiChatUIHandler, you need to add it to the GetUIAppearances dictionary. This involves creating a prefab for your custom UI and assigning it in the ConvaiChatUIHandler.
Here's an example of how to do this:
Create a prefab for your custom UI and add your CustomChatUI component to it.
Assign the prefab to a public variable in the ConvaiChatUIHandler script.
Modify the InitializeUIStrategies method in the ConvaiChatUIHandler script to include your custom UI type.
Ensure that your custom UI type is added to the UIType enum:
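For example, assuming the enum already contains the three built-in transcript types, your addition would look like this:

```csharp
public enum UIType
{
    ChatBox,
    QuestionAnswer,
    Subtitle,
    CustomUI // your new entry
}
```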
Now you can set your custom UI type as the active UI from the Settings Panel.
By following these steps, you can integrate your custom UI type into the Dynamic Chatbox system and switch between different UI types at runtime.
Notification System - Implement notifications with Convai Unity plugin utilities.
The Convai plugin comes with four default notifications:
Appears when you press the talk button but there is no active NPC nearby.
Appears if you release the talk button in less than 0.5 seconds.
Appears when the recorded audio input level is below the threshold.
Appears when there is no internet connection upon launching the application.
Adding your custom notification is straightforward.
Let's go through the steps to add a "CharacterStartedListening" notification as an example.
Open the script "Convai/Scripts/Notification System/Notification Type.cs." This script stores Notification Types as enums. Give a name to your desired Notification type and add it here.
Right-click on "Convai / Scripts / Notification System / Scriptable Objects" and select "Create > Convai > Notification System > Notification" then create a "Notification Scriptable Object".
Name the created Notification Scriptable Object. Click on it, and fill in the fields in the Inspector as desired.
Add the created Notification Scriptable Object to "Convai/Scripts/Notification System/Scriptable Objects" under the "Convai Default Notification Group" (details of Notification Groups here).
Your Notification is now ready. The last step is to call this Notification from where you need it. For example, since we created the "CharacterStartedListening" Notification, find the location where your character starts listening and add the call there.
Replace the parameter with the NotificationType you created. (For our example, NotificationType.CharacterStartedListening)
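The call might look like the following; NotificationSystemHandler and NotificationRequest are assumptions based on the prefab's handler script, so check the Notification System Handler in your SDK version for the exact API:

```csharp
// Hypothetical call: request the custom notification when the character starts listening.
NotificationSystemHandler.Instance.NotificationRequest(NotificationType.CharacterStartedListening);
```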
Ensure that the Convai Notification System is present in your scene. (accessible from "Convai/Prefabs/ Notification System")
All steps are complete, and you're ready to test!🙂✅
This Scriptable Object stores information about a Notification
Notification Type
Notification Icon
Notification Title
Notification Message
To create a new Notification Scriptable Object, right-click anywhere in the Project Window and select "Create > Convai > Notification System > Notification"
This Scriptable Object stores Notification Scriptable Objects as groups. When a Notification is requested, it searches for the Notification using the specified Notification Group in the Convai Notification System prefab's Notification System Handler script.
You can create different Notification groups based on your needs. Note: If your referenced Notification Group does not have the Notification you want, that Notification won't be called.
The Convai Default Notification Group has four Notifications, but you can add more or create a new group with additional notifications.
Settings Panel - Customize settings using Convai's Unity plugin utilities.
Settings Panel consists of two main sections.
Audio Settings
Interface Settings
The Microphone Settings section is primarily for troubleshooting and testing the microphone when using the Convai plugin.
In the Input section, you can view the microphones connected to your computer and select the desired one.
In the Test Input field, you can record your voice using the selected microphone in the Input section. After clicking Stop, you can listen to the recorded voice and observe the sound levels.
The first setting that greets us here is the Appearance setting.
In the Appearance section, you can switch between Transcript UI designs.
There are three Transcript UI options:
ChatBox
QuestionAnswer
Subtitle
The second section in Interface Settings is the Display Name section. This section allows you to change how the user's name appears in the Transcript UI.
The last section in Interface Settings is the Notifications Checkmark.
Convai sometimes displays notifications on the screen to inform the user. If you want to disable these notifications, you can click the checkbox here. ( If the box is green, it's active. If empty, it's inactive )
In this guide, we learn about character emotions coming from the server.
Convai characters emit emotions when they interact with the player, and these emotions help make the character more human-like. We are starting to implement a system that you, as a developer, can use to make your game more interactive using character emotions.
Whenever the character responds to the user, we send back a list of emotions to the SDK, which looks something like this:
For v0 of this system, we will only be sending the emotions; in the future, we will apply the facial expressions corresponding to each emotion, which will make the character more interactive.
With Convai's Unity SDK, you can build your application for several platforms, including Windows, macOS, and Android. We also support these platforms:
(Android/iOS)
The Dynamic Config feature enables you to pass variables to NPCs in real time, allowing them to react dynamically to changes in the game environment. This can include the player’s current health, inventory items, or contextual world information, greatly enhancing interactivity and immersion.
First, add the Dynamic Info Controller component to your Convai NPC.
Create a new script or use an existing script to define a variable that will store a reference to the Dynamic Info Controller Component you added to your NPC.
Initialize the Dynamic Info: In the script's Start method, call the SetDynamicInfo method on the Dynamic Info Controller reference. This will set the dynamic information that the NPC will use. In this example, we'll initialize the Player's health as a dynamic variable.
Updating the Dynamic Info: Whenever you need to update the NPC with new information (such as a change in Player Health), call the SetDynamicInfo method on the Dynamic Info Controller.
At the start of the game, we set the Player’s health to 100 and send this information to the NPC as the initial value.
Then, when the player takes damage (simulated here by pressing the "K" key), we reduce the Player’s health and update the Dynamic Info in real time so that the NPC remains aware of the Player's current health status.
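A sketch of that flow (the controller type name and the string passed to SetDynamicInfo are assumptions; adapt them to the component you added in step 1):

```csharp
using UnityEngine;

public class PlayerHealthReporter : MonoBehaviour
{
    [SerializeField] private ConvaiDynamicInfoController dynamicInfoController; // type name assumed
    private int _playerHealth = 100;

    private void Start()
    {
        // Initial value the NPC is made aware of.
        dynamicInfoController.SetDynamicInfo($"Player health: {_playerHealth}");
    }

    private void Update()
    {
        // Simulate taking damage with the K key, then push the new value to the NPC.
        if (Input.GetKeyDown(KeyCode.K))
        {
            _playerHealth = Mathf.Max(0, _playerHealth - 10);
            dynamicInfoController.SetDynamicInfo($"Player health: {_playerHealth}");
        }
    }
}
```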
Below, we provide a sample conversation showcasing how the NPC can react based on the dynamic health information of the Player. By dynamically updating the Player's health, NPCs can deliver responses that feel personalized and relevant to the current gameplay.
Add the Dynamic Info Controller to your NPC. Use SetDynamicInfo to initialize the dynamic variable at the start, and call SetDynamicInfo again whenever updates are needed.
This guide will walk you through the process of installing Convai-powered Unity applications on iOS and iPadOS devices.
Before you begin, make sure you have the following:
Unity 2022.3 or later
Xcode (latest version recommended)
Apple Developer account
Project with Convai's Unity SDK integrated and running properly
MacBook for building and deploying to iOS/iPadOS
Open your Convai-powered Unity project.
Ensure you have the latest version of the Convai Unity SDK in your project.
In Unity, go to File → Build Settings.
Select iOS as the target platform and click Switch Platform if it's not already selected.
Check the Development Build option for testing purposes.
If you wish to add a few required files manually, follow step 3. If you want it to be done automatically, jump to step 4
Create a new file named link.xml in your project's Assets folder and add the following content to it:
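A typical link.xml for preserving the managed gRPC assembly looks like this; confirm the exact content against Convai's current instructions:

```xml
<linker>
  <!-- Prevent code stripping from removing Grpc.Core, whose native
       bindings (libgrpc_csharp_ext) are resolved at runtime. -->
  <assembly fullname="Grpc.Core" preserve="all" />
</linker>
```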
This file prevents potential FileNotFoundException errors related to the libgrpc_csharp_ext.x64.dylib file.
Create a new C# script in Assets/Convai/Scripts named iOSBuild.cs.
Add the following content to the script:
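As a reference, a post-process build script for iOS/gRPC setups typically looks like the sketch below. The exact contents should match what the Install iOS Build Package step (step 4) produces, so treat this only as an illustration:

```csharp
#if UNITY_IOS
using UnityEditor;
using UnityEditor.Callbacks;
using UnityEditor.iOS.Xcode;

public static class iOSBuild
{
    // Runs after Unity generates the Xcode project.
    [PostProcessBuild]
    public static void OnPostProcessBuild(BuildTarget target, string pathToBuiltProject)
    {
        if (target != BuildTarget.iOS) return;

        string projectPath = PBXProject.GetPBXProjectPath(pathToBuiltProject);
        var project = new PBXProject();
        project.ReadFromFile(projectPath);

        // libz is required by the static gRPC libraries.
        string targetGuid = project.GetUnityFrameworkTargetGuid();
        project.AddBuildProperty(targetGuid, "OTHER_LDFLAGS", "-lz");

        project.WriteToFile(projectPath);
    }
}
#endif
```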
Go to Convai → Custom Package Installer and click on Install iOS Build Package.
Attach the iOSBuild.cs script to any GameObject in your scene.
In Unity, go to File → Build Settings.
Click Build and choose a location to save your Xcode project.
Wait for Unity to generate the Xcode project.
Open the generated Xcode project.
In Xcode, select your project in the navigator.
Select your target under the "TARGETS" section.
Go to the "Signing & Capabilities" tab.
Ensure that "Automatically manage signing" is checked.
Select your Team from the dropdown (you need an Apple Developer account for this).
If needed, change the Bundle Identifier to a unique string.
Connect your iOS device to your Mac.
In Xcode, select your connected device as the build target.
Click the "Play" button or press Cmd + R to build and run the app on your device.
If you encounter any build errors, ensure all the steps above have been followed correctly.
Check that your Apple Developer account has the necessary provisioning profiles and certificates.
If you face any gRPC-related issues, verify that the libgrpc_csharp_ext.a and libgrpc.a files are correctly placed in the Assets/Convai/Plugins/gRPC/Grpc.Core/runtime/ios folder.
Convai UI Prefabs - Utilize ready-to-use UI elements for Convai integration.
We provide several out-of-the-box UI options for displaying character and user transcripts that you can use with the Convai Plugin. You can use and customize these prefabs.
The ConvaiNPC and ConvaiGRPCAPI scripts look for GameObjects with the Convai Chat UI Handler component and send any transcripts to that script so they can be displayed on screen.
Prefab Name: Convai Transcript Canvas - Chat
Both the user's and the character's transcripts are displayed one after the other in a scrollable chat box.
Prefab Name: Convai Transcript Canvas - Subtitle
The user and character transcripts are displayed at the bottom, like subtitles.
Prefab Name: Convai Transcript Canvas - QA
The user's transcript is displayed at the top, whereas the character's transcript is displayed at the bottom.
Prefab Name: Convai Transcript Canvas - Mobile Subtitle
Prefab Name: Convai Transcript Canvas - Mobile QA
Prefab Name: Convai Transcript Canvas - Mobile Chat
Activate the 'Enable Action Generation' feature
That’s it! Now let’s test it out.
For more information about notifications, you can refer to this.
This feature provides a powerful tool for creating NPC interactions that respond in real-time to the state of the game world, creating a more immersive experience for the player.
Identical to UI. Includes a button that can be pressed and held for the user to speak. Ideal for portrait orientation of screen.
Setting Up
Setup your project with Convai plugin.
Creating MetaHuman Characters
MetaHuman characters with lip sync
Creating Reallusion Characters
Reallusion characters with lip sync
Changelog
Track changes to the Unreal SDK.
Github Repository
Access the source code for the SDK.
Tutorial Playlist
Implement basic conversation either written or with voice.
Blueprint Reference
Detailed documentation for every blueprint function or component.
SendCharacterText
A public function that sends a string of text to be displayed as the character transcript, along with the name of the character who said it.
SendUserText
A public function that sends a string of text to be displayed as user transcript.
Convai Unity Plugin Changelogs - Stay updated with the latest changes.
Released: October 31, 2024
Implemented Dynamic Config Feature:
This feature allows you to dynamically pass variables to NPCs. For example, you can update NPCs with the player’s current health, inventory items, or information about the world, enhancing interactivity and immersion.
Implemented Narrative Design Keys:
This feature enables dynamic variable passing within the Narrative Design section and triggers. For instance, you can use placeholders like {TimeOfDay} to create personalized dialogues, such as "Welcome, player! How is your {TimeOfDay} going?"
Added MR Demo Scene
Added MR Automatic Installation and Manual Installation
Added Convai XR Package (compatibility with Meta SDK and other XR SDKs provided)
Added Long Term Memory API(s) to View and Delete Speaker ID(s)
Improved VR Manual Installation
Improved Custom Package Installation
Minor Bug Fixes
Released: September 16, 2024
Minor Bug Fixes
Released: September 12, 2024
Fixed NPC2NPC response delay
Released: Aug 28, 2024
New Convai Setup Editor Window
Long term memory (beta) integration in Unity SDK
ChatBox UI revamp with RTL support
Input system revamp and Mobile UI improvements
UI Improvements
Revamped ChatBox UI
Improved mobile UI
Implemented chat disabling feature
Added usage limit exceeded notification
Dialog box added for no API Key scenario
API and Backend
Refactored ProcessUserQuery for better transcript handling
Implemented fuzzy matching for Action System
Ready Player Me and CC_Tools automatic import process
Character and Animation
Updated Character Importer Pipeline
Added OVR effectors for RPM characters
Fixed animators for all characters
Provided Weight Multiplier for LipSync user preference
RPM Characters will have Lipsync added when imported
Bug Fixes
Fixed character resuming dialogue after toggle
Fixed section deletion issue in NarrativeDesignManager
Fixed layer issues
Optimised Convai LipSync
Developer Tools and Workflow
Improved Convai Logger System
Updated namespace and formatting for all editor scripts
Removed CC Tools Folder and other temporary/junk files
Added "Update Triggers" button to NarrativeDesignTrigger inspector
Implemented approximate string matching for actions system
Miscellaneous
Added PlayerDataHandler and PlayerDataSO
Updated NPC positions and topics in demo scenes
Fixed Convai logo in Convai Setup window
Released: Jun 21, 2024
Fixed macOS TMP UGUI render issue in demo scene
Fixed missing animator in prefab
Updated ActiveNPC layer check logic
Released: Jun 13, 2024
Implemented NPC2NPC conversation flow system.
Added handling for conversation interruptions and restarts.
Enhanced conversation history tracking and flow management.
Added Narrative Design-related files and trigger narrative section function.
Refactored Narrative Design API, created new behavior trees for movement and added section change events.
Folder Restructuring
Complete folder and scripts folder restructure
Gender-Based Animator: Added gender-based Animator controller
Feedback System
Implemented a feedback system with thumb icons and animations
Updated Transcript UI Prefabs with feedback buttons
Convai Custom Packages: Updated Convai Custom Package Installer and added iOS DLL Downloader
Scene Perception: Added feature to allow players to point at game objects and talk about them
Texture and Material Compression
Compressed Amelia and other images (POT and Crunch)
Updated image names and removed unused image assets
UI Updates
Updated UI prefabs, including Transcript UI, Mobile UI, and Mobile QA UI
Transcript UIs and text updated
Updated logos and logo paths
System Improvements
Refactored Lipsync system with added teeth support and implemented facial expression proto files
Updated ConvaiURPConverterPackage, burst and TMPro packages; Convai Custom Package Installer/Exporter
Updated NavMesh and NPC2NPC character rotation
Added new demo scene and RPM characters
Updated various demo scenes for consistency
Microphone Manager: Updated Microphone Manager to a singleton class
API Key Access: Simplified API Key access
Convai Scene Template: Created new scene template and dynamic input system assigner
Demo Scenes
Added NPC2NPC demo scene
Added Narrative Design demo scene
Added new demo scene with all features encompassed
"Convai Essentials" prefab for desktop and mobile
Lipsync Overhaul
Overhauled lipsync system, added AR-Kit and Reallusion character support
Updated version and added various improvements to frame processing
Input System
Added new input system pragma checks and Convai Character Layer
Simplified Input Manager and ensured future-proofing
Transcript UI Bug Fixes: Fixed bugs and improved system for Transcript UI character list
Microphone Permission: Fixed Android and iOS microphone permission issues
VR Support: Implement Virtual Reality features to create a fully immersive experience with the press of a button.
AR Support: Integrate Augmented Reality capabilities, allowing characters and environments to interact with you in the real world with the press of a button.
Settings Panel: Introduce a comprehensive settings panel that allows users to customize their experience.
Microphone Test System: Incorporate a microphone testing feature to ensure optimal audio input quality.
Notification System: Implement a robust notification system to inform users of in-game events - specifically microphone-based issues.
Input Manager: Develop a custom input management system that supports various input devices such as keyboards, gamepads, and touchscreens using Unity's new Input System.
Fixed: Head Tracking Doesn’t Work Without Action Component issue fixed.
Improvement: Added support for a customizable and dynamic Chatbox.
Improvement: Improved Lip-Sync Smoothing and audio-visual synchronization.
Improvement: Implement Action Events and Event Callbacks.
Improvement: Improved Logging System.
Improvement: Added ability to interrupt Character Response with Voice Interruption.
Improvement: Improved mobile platform transcription UI.
Lip-sync: Integrate off-the-shelf Lip-sync for Reallusion and Oculus-based Characters.
Text-in Voice-out: Chat with the character using text.
Character Importer: Import Ready Player Me characters created on the Convai Playground.
Feature Control System: Enable Convai features as needed through the Convai NPC component.
Logging System: Have better control over what Convai information you see on the debug console.
Enhanced player controller: Automatically activates characters when you focus on them and deactivates them when your focus shifts away.
URP Upgrader: Upgrade the Render Pipeline to Universal Render Pipeline with the URP Upgrader package (present in the Convai Folder).
UI Improvements: Improved user experience with automatically fading UI canvas.
Fixed: Unlocking the cursor will still cause the first-person camera to move around.
Fixed: Exiting play mode before the character is done speaking will cause Unity to crash or not complete compilation.
Fixed: Extra space between multiple chunks of text in the UI Text Fields.
Fixed: Actions crashing the Android scene.
Fixed: Empty responses from the server no longer crash the game; an error is thrown instead.
Improvement: Smoothed Blinking.
Improvement: Smoothed Gaze-Following neck movement.
Improvement: Plugin structure reorganization.
Automatic Installation Steps
The Automatic Installation method is ideal for users starting new projects who want a straightforward setup. It provides a fully integrated Convai MR project from the beginning, ensuring a smoother development experience.
In the top menu, click on Convai. Then, select Custom Package Installer.
In the Convai Panel that appears, click on Package Management and then select Install MR Package.
A new window will appear prompting you to select your installation type. For this documentation, we will proceed with Automatic Installation.
In the next window, carefully read the setup instructions, warnings, and details of the changes that will be applied.
When ready to proceed, click Yes, proceed.
The installation process will begin, taking approximately 5 minutes. The duration may vary depending on your computer's performance.
If a prompt titled OVRPlugin Detected from MetaSDK appears, click Restart Editor to continue.
Once the installation is complete, it’s time to open the demo scene.
Navigate to Assets > Convai > ConvaiXR > ConvaiMR > Scenes and open Convai Demo - MR.
If you don’t have the TextMeshPro package installed, you’ll need it to display text properly in the demo scene. To install it:
Go to Window > TextMeshPro > Import TMP Essential Resources.
In the Unity Package Import window that appears, click Import.
You can build your project by going to File > Build Settings.
In the demo scene, select [BuildingBlock] Find Spawn Positions in the hierarchy.
In the Spawn Object field, drag and drop the character you imported into the project.
If you’re unsure how to import your character, please refer to the relevant documentation here.
This demo scene uses MetaSDK by default. However, you are free to use other methods or SDKs to spawn your character, as there are no restrictions.
The Automatic Installation method is ideal for users starting new projects who want a straightforward setup. It provides a fully integrated Convai VR project from the beginning, ensuring a smoother development experience.
In the top menu, click on Convai. Then, select Custom Package Installer.
In the Convai Panel that appears, click on Package Management and then select Install VR Package.
A new window will appear prompting you to select your installation type. For this documentation, we will proceed with Automatic Installation.
In the next window, carefully read the setup instructions, warnings, and details of the changes that will be applied.
When ready to proceed, click Yes, proceed.
The installation process will begin, taking approximately 5 minutes. The duration may vary depending on your computer's performance.
Once the installation is complete, it’s time to open the demo scene.
Navigate to Assets > Convai > ConvaiXR > ConvaiVR > Scenes and open Convai Demo - VR.
If you don’t have the TextMeshPro package installed, you’ll need it to display text properly in the demo scene. To install it:
Go to Window > TextMeshPro > Import TMP Essential Resources.
In the Unity Package Import window that appears, click Import.
You can build your project by going to File > Build Settings.
This guide will help you integrate Convai's WebGL capabilities into your Unity projects, enabling you to bring to life AI characters with human-like conversational abilities.
Convai's Unity WebGL SDK is designed to complement the standalone application capabilities of our Unity Asset Store version. With this specialized SDK, you can build and deploy interactive WebGL applications that leverage Convai's advanced conversation pipeline. Please see the instructions below or check out our latest tutorial video on YouTube.
Please ensure that Git is installed on your computer prior to proceeding. Download Git from here.
Follow the Import and Setup Instructions from Import and Setup (nightly) and Setting Up Unity Plugin.
When attempting to play the scene in the Unity Editor, you may encounter the following error:
This error occurs because the WebGL SDK cannot be tested directly within the Unity Editor. To test your WebGL application, you must create a development build.
Now that your Unity setup is done, let's set up WebGL.
Head on over to File → Build Settings, then:
Click on WebGL.
Check the Development Build box.
Select Switch Platform.
Patience, remember? This shift takes a bit.
After the platform is switched to WebGL, click on Player Settings. This is where the fun begins:
Navigate to the WebGL settings page.
Under the Resolution and Presentation tab, select the Convai PWA Template.
After the reload completes, check that the settings have been applied. Then close all open menus and follow these steps:
Double-click the Convai folder and go to Scenes.
Open the Convai Demo - WebGL scene.
Head to Convai's website, grab your API key, and input it back in Unity via Convai → Convai Setup.
Now head to Convai's website again, grab your favorite character's ID, and paste it into Convai → Character Importer.
Remember, Unity's editor won't let us test WebGL directly. But fear not, there's a Build and Run option:
Go back to File → Build Settings.
Click Add Open Scenes and then Build and Run.
Choose a folder for the build output, make a new one if needed, and name it "WebGL."
The first build may take some time. For subsequent builds and runs, use the Unity shortcut Ctrl + B.
The first build is the longest, so feel free to stretch a bit, but don't venture too far. Soon you'll greet our demo character, Amelia, or any other character you brought into your digital oasis. Just grant your microphone permissions and here you go!
Now the magic happens. Press and hold 'T' to chat with your carefully cultivated character. Or click on the text box to type out a question. And for the attention to detail – press F10 to access the settings panel where you can change your name and the UI style to your liking.
Feeling accomplished? You should! You now have a successfully working WebGL build in your browser. Curious developers can take a step further by downloading the project files from GitHub, available for all who desire to peek behind the curtain.
When you are ready with your production build, just uncheck the Development Build box in the Build Settings before publishing.
MR development with Convai
Setting up Convai for Mixed Reality (MR) is easier than you might think. With just a few clicks, you can get Convai integrated directly into your MR projects.
To give users flexibility and customization options, we offer two installation methods:
Automatic Installation
Manual Installation
The Automatic Installation method is ideal if you’re starting a new project and want to avoid dealing with MR packages or additional setup requirements. This method provides a fully Convai-integrated MR project right from the start, including the Convai MR Demo Scene for quick testing and exploration. This approach is particularly suitable for beginners.
Note: This installation method will modify Unity Project settings. These changes are necessary to ensure compatibility with Convai.
The Manual Installation method is designed for existing MR projects. With this method, you have full control over the setup, as no project settings are modified, no demo scenes are added, and no customizations are applied. Only Convai’s XR Unity Package will be integrated, giving you the freedom to use any MR SDKs without restrictions.
Select one of the two installation methods and click on the corresponding setup section below to follow the instructions.
This installation method is ideal for users working with existing projects who wish to customize their setup. It allows for compatibility with various other SDKs, enabling you to integrate Convai seamlessly into your current workflows without limitations on using third-party SDKs.
In the top menu, click on Convai. Then, select Custom Package Installer.
In the Convai Panel that appears, click on Package Management and then select Install VR Package.
A new window will appear prompting you to select your installation type. For this documentation, we will proceed with Manual Installation.
The installation will start. This process will be completed quickly as only the ConvaiXR Package will be installed.
Once installation is complete, the Convai Essentials - XR Prefab will be added to your scene.
This is the only GameObject required for Convai to run in your scene.
The imported files can be found under Assets > Convai > ConvaiXR.
The final step is to import your character into the scene.
If you need guidance on this, please refer to the relevant documentation here.
To activate Convai in your scene, simply add the Convai Essentials - XR Prefab.
There are no limitations on Third-Party SDKs, so you are free to use Convai with any XR SDK of your choice.
Please follow this documentation to interact with Convai's Settings Panel.
VR development with Convai
Setting up Convai for Virtual Reality (VR) is easier than you might think. With just a few clicks, you can get Convai integrated directly into your VR projects.
To give users flexibility and customization options, we offer two installation methods:
Automatic Installation
Manual Installation
The Automatic Installation method is ideal if you’re starting a new project and want to avoid dealing with VR packages or additional setup requirements. This method provides a fully Convai-integrated VR project right from the start, including the Convai VR Demo Scene for quick testing and exploration. This approach is particularly suitable for beginners.
Note: This installation method will modify Unity Project settings. These changes are necessary to ensure compatibility with Convai.
The Manual Installation method is designed for existing VR projects. With this method, you have full control over the setup, as no project settings are modified, no demo scenes are added, and no customizations are applied. Only Convai’s XR Unity Package will be integrated, giving you the freedom to use any VR SDKs without restrictions.
Select one of the two installation methods and click on the corresponding setup section below to follow the instructions.
Building for AR - Unity Plugin Guide for AR development with Convai.
If you want to make your Convai Plugin compatible with AR, you can do so in two ways. Please see the instructions below or check out our tutorial video on YouTube.
Recommended for new projects.
The following packages will be installed if they are not already present:
Universal Render Pipeline (URP)
ARCore Plugin
Convai Custom AR Package
Convai URP Converter
If the target build platform is not Android, it will be switched to Android.
Make sure to download the Android platform support from Unity Hub for your project's version.
Click on "Convai / Convai Custom Package Installer / Install AR Package".
Confirm the changes and processes to be made. If you agree, click "Yes, Proceed" and the process will begin. You'll see logs in the console.
If you encounter an error like "Failed to Resolve Packages", don't worry. The process will continue, and the error will be resolved automatically after the package installations are complete.
Open the "Convai / Scenes / Convai Demo - AR" demo scene. If the TMP Importer window appears (it will if TMP Essentials is not installed in your project), click "Import TMP Essentials" to install TextMeshPro Essentials for UI text objects.
Alternatively, you can use "Window / TextMeshPro / Import TMP Essential Resources" to install it.
After importing TMP Essentials, you can remove the empty GameObject in your scene that triggers the prompt window.
Build your project by going to "File / Build Settings / Build". Ensure that the "Convai Demo - AR" scene is included in the Scenes in Build section.
Ensure you've set up your API key (Convai / Convai Setup).
Now everything is ready for testing. 🙂✅
Ensure you have the following packages installed in your project:
ARCore
URP (Universal Render Pipeline) - Recommended for optimization, though not mandatory
Double-click "Convai / Convai Custom Unity Packages / ConvaiVRUpgrader.unitypackage".
You'll see a warning that the settings will overwrite your project settings. You can either allow this by clicking "Import" or create a temporary project by clicking "Switch Project".
In the Import Unity Package window, review the assets to be imported and click "Next".
Select all settings to be changed in the Project Settings and complete the installation by clicking "Import".
Open the "Convai / Scenes / Convai Demo - AR" demo scene. If the TMP Importer window appears (it will if TMP Essentials is not installed in your project), click "Import TMP Essentials" to install TextMeshPro Essentials for UI text objects.
Alternatively, you can use "Window / TextMeshPro / Import TMP Essential Resources" to install it.
After importing TMP Essentials, you can remove the empty GameObject in your scene that triggers the prompt window.
If you see 3D objects in pink, it's a shader issue. If you're using URP, convert the materials to URP by double-clicking "Convai / Convai Custom Unity Packages / ConvaiURPConverter" and importing all assets in the window that appears.
Ensure you've set up your API key (Convai / Convai Setup).
Build your project by going to "File / Build Settings / Build". Ensure that the "Convai Demo - AR" scene is included in the Scenes in Build section.
Now everything is ready for testing. 🙂✅
If you've created a Ready Player Me character on convai.com playground and want to add it to your AR project, follow these steps:
Right-click the "Convai / ConvaiAR / Prefabs / Convai NPC AR Base Empty Character" prefab.
Click "Create / Prefab Variant".
A prefab variant of "Convai NPC AR Base Empty Character" will be created.
Double-click this prefab variant.
In the Hierarchy, add your imported character as a child of this prefab variant.
After adding your character, click on it.
In the Inspector, adjust the Scale settings as needed. To prevent your character from moving with its animation while talking, disable the "Apply Root Motion" option in the Animator.
After these steps, save your prefab variant by pressing Ctrl + S.
Open the "Convai / Scenes / Convai Demo - AR" scene.
Click the "Convai AR Player" object under "ConvaiAR Base Scene".
In the Inspector, under the "Convai Character Spawner" component, add your prefab variant to the Character Prefab field.
Now everything is ready to test your character in the AR environment! 🙂✅
Creating this prefab variant prevents your prefab from being automatically scaled to (1, 1, 1) when it is instantiated in the AR environment.
To avoid issues with scale adjustments, we added our character as a child of an empty parent object; for convenience, we created an empty prefab variant.
This installation method is ideal for users working with existing projects who wish to customize their setup. It allows for compatibility with various other SDKs, enabling you to integrate Convai seamlessly into your current workflows without limitations on using third-party SDKs.
In the top menu, click on Convai. Then, select Custom Package Installer.
In the Convai Panel that appears, click on Package Management and then select Install MR Package.
A new window will appear prompting you to select your installation type. For this documentation, we will proceed with Manual Installation.
The installation will start. This process will be completed quickly as only the ConvaiXR Package will be installed.
Once installation is complete, the Convai Essentials - XR Prefab will be added to your scene.
This is the only GameObject required for Convai to run in your scene.
The imported files can be found under Assets > Convai > ConvaiXR.
The final step is to import your character into the scene.
To activate Convai in your scene, simply add the Convai Essentials - XR Prefab.
There are no limitations on Third-Party SDKs, so you are free to use Convai with any XR SDK of your choice.
When building Unity projects for macOS, developers may encounter issues with microphone permissions, particularly when targeting both Intel and Apple Silicon Macs. This document outlines the problem, symptoms, causes, and solutions to help ensure successful access to the microphone across different Mac architectures.
Some users have reported that when building macOS universal apps, Apple Silicon Macs handle microphone permissions without issue, while Intel Macs may fail to access the microphone due to architecture differences. This can result in no microphone response, DllNotFoundException errors, potential application crashes, or no audio input being detected.
The issue stems from the grpc_csharp_ext.bundle, which is crucial for networking in Unity projects. There are separate versions of this library for the Intel and Apple Silicon architectures, and they cannot be easily merged or applied universally. The gRPC library currently lacks dedicated support for resolving these DLL issues in Unity.
For Intel Macs: Use Standalone builds targeted specifically for the Intel architecture to ensure compatibility.
For Apple Silicon Macs: Prefer Standalone builds for the ARM64 framework for optimal performance, although Universal builds are also an option.
After completing a Universal build on an Intel Mac, you must manually update the grpc_csharp_ext.bundle to ensure proper functionality. Follow these steps:
Locate the .app file generated by the build process.
Right-click the .app file and select "Show Package Contents."
Navigate to the Contents/Plugins folder within the package.
Important: The grpc_csharp_ext.bundle may not be included correctly in the final build when built from an Intel Mac. Always verify that the Plugins folder in the build contains the correct DLLs. If there is any confusion or the DLLs are missing, replace or add the contents of the Plugins folder with the one provided by us.
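The verification step above can be scripted. The sketch below is a plain-Python illustration, not part of the Convai SDK; the `MyGame.app` path is a placeholder you would swap for your actual build output:

```python
import os

def grpc_bundle_present(app_path: str) -> bool:
    """Check that grpc_csharp_ext.bundle made it into the built .app."""
    bundle = os.path.join(app_path, "Contents", "Plugins", "grpc_csharp_ext.bundle")
    return os.path.exists(bundle)

# Demo against a throwaway structure so the snippet runs anywhere;
# point it at your real build instead, e.g. grpc_bundle_present("Builds/MyGame.app").
demo_app = "/tmp/convai_demo/MyGame.app"
os.makedirs(os.path.join(demo_app, "Contents", "Plugins", "grpc_csharp_ext.bundle"),
            exist_ok=True)
print(grpc_bundle_present(demo_app))  # True
```

Running a check like this after every Universal build on an Intel Mac catches a missing bundle before you ship.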
Building for macOS requires careful consideration of the distinct Intel and Apple Silicon architectures. The current best practice is to use Standalone build settings tailored to the specific architecture of the target Mac. As we progress, we will have a more integrated solution for managing DLLs that will simplify the development process for universal macOS applications.
This Unity tutorial demonstrates how to implement NPC-to-NPC conversations in 3D virtual environments using Convai's NPC2NPC feature. This functionality enables two NPCs to engage in dynamic, low-latency dialogues on predetermined topics while allowing players or other game entities to interrupt and interact with them. This feature can thus create more engaging and interactive game worlds, with use cases like storytelling, tutorial sequences, or quest initiation.
Check out this demo and tutorial on how to build this in Unity. We are also releasing the sample project source code along with the tutorial to get you started.
Download the free Sample project files:
Unity Asset Store plugin link:
Character Details
Name: Christina Smith ID: 64247ac6-74f8-11ef-be3e-42010a7be011
Name: Kevin Shaw ID: 7ffaee42-74f8-11ef-8980-42010a7be011
NPC2NPC Walkthrough:
Unity Factory Scene HDRP:
Interacting with the UI may vary depending on the SDK you are using. For popular frameworks like XR Interaction Toolkit and Meta SDK, you can follow the documentation below. We recommend checking the relevant documentation for other SDKs.
To make the Settings Panel interactable in XR environments, where you can test your microphone at runtime or change the appearance of the Chat UI in Convai, follow these steps:
Right-click on the Convai Settings Panel.
Select Interaction SDK and then click on Add Ray Interaction to Canvas.
If a warning appears in the new window, click the Fix button.
In the Settings section, choose Everything. Then click Create.
To prevent the ISDK_RayInteraction from running while the Settings Panel is closed, drag it onto the Panel GameObject.
Click on the Convai Settings Panel.
In the Inspector, click the Add Component button.
Add the Tracked Device Graphic Raycaster component.
This Unity tutorial shows you how to add a Convai-powered Tour Guide character for your 3D virtual environments. Convai’s Narrative Design Feature lets creators add spatial anchors in the 3D environment which guides the 3D AI character to navigate the world while following step-by-step instructions. The character follows instruction prompts combined with the spatial anchors while conversing with the user in a contextual open-ended manner. This enables a whole set of use cases from onboarding, tour guides, companion characters, tutor characters, and many more. Check out this demo and tutorial on how to build this in Unity. We are also releasing the sample project source code along with the tutorial to get you started.
Download the free Sample project files:
Unity Asset Store plugin link:
Character Details
Name: Christina Smith ID: 84434252-3776-11ef-a746-42010a7be00e
Narrative Design Walkthrough:
Unity Factory Scene HDRP:
Everything is now set up!
Use the " " guide to add your character to your project.
If you need guidance on this, please refer to the relevant documentation.
Please follow documentation to interact with Convai's Settings Panel.
Replace the contents of this folder with the components from the provided plugin folder.
Sign up at to get started. For any questions or bug reports, please visit the where our community and support team will be happy to assist you.
This document explains how to add MetaHuman to your project.
This document explains how to add the Convai ReadyPlayerMe plugin to your project.
Go to this Drive Link.
Download the version corresponding to your Unreal Engine.
In your project directory, create a folder named Plugins if it does not already exist.
Extract the contents of the downloaded zip into the Plugins folder; the final folder hierarchy should look like this:
This page explains how to use the default actions that come implemented out of the box with Convai.
These actions include:
Moves To: Character can move to another character or object
Follows: Character can follow you or follow other objects/characters
Waits For: Character can wait for some time before doing another action, for example: Wait for 10 seconds then throw a grenade
This mini-guide provides instructions for setting up Convai with Pixel Streaming in Unreal Engine.
To set up the Pixel Streaming server, we recommend taking a look at this excellent guide.
Ensure you have the latest Convai 3.1.0 plugin or later.
Enable Unreal Engine's Pixel Streaming and Pixel Streaming Player plugins from the Plugins window.
In the player blueprint which has the Convai Player component, add the PixelStreamingAudio component to the list of components.
Click on PixelStreamingAudio component, and in the details panel find Base Submix and choose AudioInput sound submix.
On Begin Play in the event graph, add the following blueprint function to initialize Pixel Streaming with the Player Component.
Now pixel streaming mic input should be working; however, the system microphone will no longer work. To switch between the two, set Use Pixel Streaming Mic Input on the ConvaiPlayer component to true for the pixel streaming microphone or false for the system microphone.
This adds AI conversational features to your MetaHuman.
Open the Content Browser, then go to Content > MetaHumans > MetaHuman blueprint.
Go to Class Settings, then under Details panel > Class Options > Parent Class, set it to ConvaiBaseCharacter.
To add animation to the body, go to the 'body' component and under Details > Animation > Anim Class change it to 'Convai_MetaHuman_BodyAnim'.
Similarly, for the face, go to the 'face' component and under Details > Animation > Anim Class change it to 'Convai_MetaHuman_FaceAnim'.
Compile and you will be good to go.
Add lip sync to MetaHuman characters in Unreal Engine with Convai. Enhance realism and engagement.
Prior to advancing, ensure that you modify the parent class of your MetaHuman to ConvaiBaseCharacter, as indicated in the provided documentation.
Open your MetaHuman blueprint.
Navigate to the Components section and select the Add button.
Search for Convai Face Sync and select it.
Finally, lip sync is added to your MetaHuman. Compile and save it, then give it a try.
This gives the player the ability to converse with the chatbot.
This can be applied to first person player (FPP) or third person player (TPP).
Steps to change the parent class of a first-person player (FPP):
Create a new first-person project, or import the first-person content pack into your existing project.
Steps to import: Content Browser > Add > Add feature or content pack to the project > First Person > Add to Project.
Steps to make your game a default first-person game:
Edit > Project Settings > Maps and Modes > Default Mode > Default GameMode > BP_FirstPersonGameMode.
Then go to All > Content > FirstPerson > Blueprints > BP_FirstPersonCharacter.
Click Class Settings, then in the Details section under 'Class Options' change the parent class to 'ConvaiBasePlayer'.
Hit Save and Compile and you will be good to go.
For a third-person player, follow the same steps, just looking for Third Person instead.
Adds lip animation to MetaHuman
Download the plugin from this link.
Head to your project folder and find the 'Plugins' folder. If it does not exist, create a new folder named 'Plugins'.
Unzip the downloaded file, copy the folder named 'ConvaiOVRLipSync' into the 'Plugins' folder, and restart Unreal Engine.
Open the MetaHuman Blueprint and add a component named ‘ConvaiOVRLipSync’.
Make sure that the 'Face' component is using the 'Convai_MetaHuman_FaceAnim' animation class.
Link to the YouTube tutorial.
Create a new blueprint and select ConvaiRPM_Character as the parent class, which you can find under All Classes.
Drag the blueprint to the scene and the default ReadyPlayerMe character should appear.
Create a new character on Convai Playground and copy the Character ID. You can also edit the avatar by clicking the Edit Avatar icon at the top right of the avatar preview window.
Back in Unreal, click the character in the scene and, under the Details panel, find the Char ID field and paste the copied character ID into it.
Hit Play and wait a few seconds; the character should then load into the game.
The first load takes longer, but after that the character is cached and loads faster.
This gives the player the ability to converse with the chatbot.
This can be applied to first person player (FPP) or third person player (TPP).
Steps to change the parent class of a first-person player (FPP):
Create a new first-person project, or import the first-person content pack into your existing project.
Steps to import: Content Browser > Add > Add feature or content pack to the project > First Person > Add to Project.
Steps to make your game a default first-person game:
Edit > Project Settings > Maps and Modes > Default Mode > Default GameMode > BP_FirstPersonGameMode.
Then go to All > Content > FirstPerson > Blueprints > BP_FirstPersonCharacter.
Click Class Settings, then in the Details section under 'Class Options' change the parent class to 'ConvaiBasePlayer'.
Hit Save and Compile and you will be good to go.
For a third-person player, follow the same steps, just looking for Third Person instead.
This document explains how to bind objects to the Reallusion character and perform action with that object.
Go to Window > Place Actors, search for NavMeshBoundsVolume, and drag it into the scene.
Click the character in the scene (in this case, the Reallusion character) and head to the Details panel.
Details > Default > Objects > Click Add Element
Now we need to select a reference for the object from the scene. Click on Pick Actor from Scene, then select any object in the scene. We can also give the object a name and description; this will allow the player to interact with it.
Save, then hit Play to test that the character can perform actions related to the object you just added.
Narrative Design - Enhance your Unreal Engine projects with Convai.
Narrative Design can be extended using Unreal Engine Blueprints, allowing you to invoke triggers created in the Narrative Graph, as well as access dynamic variables such as the time of day or relevant information like inventory contents in the game.
This document explains how to create a Reallusion character with Character Creator 4.
Steps to create a Reallusion character using Character Creator 4:
Use the Character Creation tool by Reallusion.
Create a character, or use a default character present there, and add animations of your choice.
(Here we have used an existing character named 'CC4 Kevin' and added the idle and walking animations.)
Export it in FBX format: File > Export > FBX > Clothed Character.
Keep the following settings in Export FBX.
Target Tool Preset: Unreal
FBX Options: Mesh and Motion
Max Texture Size: 4096 (choose the maximum available)
Frame Rate: 30
Check the custom section. Click the Load Perform button.
Uncheck First Frame in bind pose.
Check Export Mesh and Motion individually.
Check Save One Motion per File.
Checking the Delete Hidden Faces option may avoid rendering issues.
Click Export.
This document explains how a Reallusion Character can be imported and used with the Convai Plugin.
Create or open a project in Unreal Engine and download the Reallusion auto setup for Unreal Engine from this link.
Download and install it, then open the folder matching your Unreal Engine version.
Copy the Content and Plugins folder and paste them in your Unreal Engine project folder.
Restart your project and create a new folder (say ‘Kevin’) in your content browser.
Copy the .fbx file (named after the character) from the export into the 'Kevin' folder in the Content Browser; the FBX Import Options menu will pop up.
Check the following options and click Import All:
Use T0 As Ref Pose.
Import Morph Targets.
Create a new folder named Animations within the Kevin folder and import all the animation files (ending in '_motion') exported from Reallusion.
The FBX Import Options will pop up again; uncheck Import Mesh and select your imported skeleton.
Under Animation > Advanced, check Use Default Sample Rate.
Click Import All.
Now install the Convai plugin from the Epic Games Marketplace and restart the engine.
Go to Edit > Plugins.
Search for 'Convai', enable it by clicking the checkbox, and restart.
Create a new Blueprint Class within the Kevin folder and, under ALL CLASSES, select 'ConvaiBaseCharacter'.
Open the blueprint and go to Components > Add > Skeletal Mesh. In the Details tab, go to Mesh > Skeletal Mesh and select the mesh named Kevin imported from Reallusion.
Finally, add the Character ID by selecting the character you created on the Convai website, and you can enjoy talking to your AI buddy.
This guide shows how to dynamically pass variables to Narrative Design sections and triggers.
We will create a simple scenario where the character welcomes the player and asks them about their evening or morning based on the player's time of day.
In the playground, enable Narrative Design on your character and change the starting section name to Welcome.
Add the following to the Objective field of the Welcome section:
The time of day currently is {TimeOfDay}. Welcome the player and ask them how their {TimeOfDay} is going.
Notice that any string between curly brackets becomes a variable. Here we added the time of day as a variable; from Unreal we can then pass either the word "Morning" or "Evening", and the character will respond accordingly.
Back in Unreal, open the character's blueprint.
Set the Narrative Template Keys variable to a map containing the same variable name, TimeOfDay. For demonstration purposes we will hard-code the value to "Morning".
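Conceptually, the template keys behave like a simple text substitution: each {VariableName} placeholder in the section objective is replaced by the value mapped to that name. The sketch below illustrates that idea in plain C++; the function name and approach are illustrative only, not the Convai plugin's actual implementation or API.

```cpp
#include <map>
#include <string>

// Illustrative sketch (not Convai plugin code): expand every {Name}
// placeholder in a narrative objective using a name -> value map,
// mirroring how Narrative Template Keys substitute into section text.
std::string ExpandTemplateKeys(std::string Text,
                               const std::map<std::string, std::string>& Keys)
{
    for (const auto& [Name, Value] : Keys)
    {
        const std::string Placeholder = "{" + Name + "}";
        std::size_t Pos = 0;
        // Replace every occurrence of {Name} with its mapped value.
        while ((Pos = Text.find(Placeholder, Pos)) != std::string::npos)
        {
            Text.replace(Pos, Placeholder.size(), Value);
            Pos += Value.size();
        }
    }
    return Text;
}
```

With the map {"TimeOfDay": "Morning"}, the objective "How is your {TimeOfDay} going?" becomes "How is your Morning going?" — which is why the variable name in the map must match the name between the curly brackets exactly.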
Start the play mode and try it out.
Feel free to try other scenarios and settings that better align with your use case.
You can use the narrative design keys feature in both sections and triggers.
Make sure the variable names are between curly brackets and contain no spaces.
You can dynamically set, change or clear the narrative keys in Unreal blueprints.
Use Convai's Narrative Design Triggers in Unreal Engine to enhance your game stories.
Before proceeding with this section, it is advisable to familiarize yourself with the Narrative Design system, as explained in this guide.
Develop the logical flow for your specific use case. In this instance, we have created a simple museum tour guide scenario.
Our goal is to invoke the trigger Start Tour in the Narrative Design graph using the Invoke Narrative Design Trigger function.
The Trigger Name in the function must match the Trigger name on the graph exactly.
The above example showcases only one trigger; more than one trigger can be used depending on your execution logic.
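A way to picture why the names must match exactly: the graph acts like a lookup table keyed by trigger name, and invoking a trigger is a lookup in that table. The sketch below is purely conceptual; the struct and method names are hypothetical and do not come from the Convai plugin.

```cpp
#include <functional>
#include <map>
#include <string>

// Conceptual sketch (not Convai plugin code): a narrative graph as a table
// of named triggers. A mismatched string simply finds no entry, which is
// why Trigger Name in the invoking function must match the graph exactly.
struct NarrativeGraph
{
    std::map<std::string, std::function<void()>> Triggers;

    // Returns true only if a trigger with that exact name existed and fired.
    bool InvokeTrigger(const std::string& TriggerName)
    {
        auto It = Triggers.find(TriggerName);
        if (It == Triggers.end())
            return false; // no trigger registered under this name
        It->second();
        return true;
    }
};
```

Note that even a casing difference ("start tour" vs. "Start Tour") fails the lookup, so copy the trigger name from the graph verbatim.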
This document describes how actions can be added to and used with your Convai character; we will cover actions in multiple stages. This written guide complements the following tutorial video:
This action enables a character to move to certain objects or characters present in its environment.
By default, Convai Characters possess knowledge about the presence of other Convai characters within their environment.
Let's assume there are two Convai characters present in the virtual environment. We can ask one of the characters to move to the other character as shown here.
Steps:
Click on your Convai character present in the scene.
In the Outliner, go to the Convai Info section, then click the + icon next to Objects.
Click the Pick Actor from Scene tool.
Select any object from the scene and give it a Name and Description of your choice.
Save the changes and ask your Convai character to move to the object you named.
The Convai character will then move to the object you added to its object list.
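The object list you fill in can be thought of as a registry mapping each Name you typed to its description and the picked actor: when you ask the character to "move to X", the name X is resolved against that registry. The sketch below illustrates the idea in standalone C++; all names and types here are hypothetical, not the Convai plugin's actual data structures.

```cpp
#include <map>
#include <string>

// Illustrative stand-in for one entry in the character's object list:
// the Description you typed plus a placeholder for the actor's location.
struct EnvironmentObject
{
    std::string Description;
    float X = 0.f;
    float Y = 0.f;
};

// Conceptual sketch (not Convai plugin code): resolving a spoken object
// name against the registry built from the Convai Info > Objects list.
struct ObjectRegistry
{
    std::map<std::string, EnvironmentObject> Objects;

    // Returns nullptr when the player asks about an unregistered name,
    // which is why each object needs a Name the character can match.
    const EnvironmentObject* Resolve(const std::string& Name) const
    {
        auto It = Objects.find(Name);
        return It == Objects.end() ? nullptr : &It->second;
    }
};
```

This is also why choosing a clear, unambiguous Name and Description matters: the name is what the character matches against, and the description helps it decide which object you meant.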
To change the parent class of the player, refer to this section.
Once the logic is decided, we can move to Unreal Engine. (In this guide we will use the same setup described in this section.)