Adding Characters to Scene - PlayCanvas Plugin Guide for Convai integration.
PlayCanvas template for Convai integration.
Convai is a powerful tool that enables developers to incorporate natural language processing (NLP) and conversational AI capabilities into their PlayCanvas projects. By following this guide, you'll learn how to seamlessly integrate Convai into your PlayCanvas project, allowing you to create engaging, interactive experiences for your users.
To help you get started, we've created a reference PlayCanvas project that demonstrates the integration of ConvAI. This project serves as a foundation for you to build upon and understand the necessary steps to incorporate ConvAI into your own projects.
Project Link: https://playcanvas.com/project/1216467/overview/convai-sdk
Add character animations in PlayCanvas with Convai. Enhance your web projects with interactive AI.
Once you have created or uploaded all desired animations in the PlayCanvas environment, construct an Animation State Graph to manage and control the animation states and transitions.
Reference : https://developer.playcanvas.com/tutorials/anim-blending/
Attach the state graph and animation files to the character. Create an Anim component and drag-and-drop the files into the placeholders.
Integrate Convai AI with your website. Follow our Web plugin documentation for seamless setup.
Integrate Convai conversational services in your own web application
The convai-web-sdk is a powerful npm package that empowers developers to seamlessly integrate lifelike characters into web applications. This SDK facilitates the capture of user audio streams and provides appropriate responses in the form of audio, actions, and facial expressions. Whether you're building an interactive website, a chatbot, or a game, this package adds a human touch to your user experience.
Sign in to Convai and copy your API key. This will help you to converse with the avatar at a later step.
Convai Web SDK is available as an npm package. Run the following command in the root of your React project to install the package.
npm install convai-web-sdk@latest
LTS Version: 0.0.6
Before you begin with the integration, make sure you have created an account with Convai and have your own API key.
First Person View - PlayCanvas Plugin Guide for Convai integration.
Scale up the plane and add a physics component to it. Add both collision and rigidbody.
Import Ammo.js (this enables physics). Scale up the Collision component according to the plane size.
Create a New Entity
In the PlayCanvas Editor, right-click in the Hierarchy panel and select "Create New Entity".
Add Physics Component
With the new entity selected, click the "Add Component" button in the top-right corner of the Editor.
Search for "Physics" and add the "Physics" component to the entity.
Add Collision Capsule Component
With the entity still selected, click "Add Component" again.
Search for "Collision" and add the "Collision Capsule" component to the entity.
Adjust Entity Y-Position
Adjust the "Y" value of the Translation to position the entity above the plane.
Add Rigidbody Component
With the entity still selected, click "Add Component" again.
Search for "Rigidbody" and add the "Rigidbody" component to the entity.
Set the "Type" of the rigidbody to "Dynamic".
Adjust Angular Factors
In the Rigidbody component, locate the "Angular Factor" section.
Set the "X", "Y", and "Z" values of the Angular Factor to 0.
This section provides comprehensive information on integrating and handling facial expressions and lipsync within your web applications using the convai-web-sdk.
To enable facial expression functionality, initialize the ConvaiClient with the necessary parameters. The enableFacialData flag must be set to true to enable facial expression data. faceModel: 3 is the standard and actively maintained face model.
Retrieve viseme data by incorporating the provided callback. The example below demonstrates how to handle and update facial data.
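A sketch of such a callback, assuming the facial data arrives alongside the audio response and is queued in a `facialData` ref for the render loop. The accessor names (getVisemesData and its shape) are assumptions and may differ by SDK version:

```javascript
convaiClient.setResponseCallback((response) => {
  if (response.hasAudioResponse()) {
    const audioResponse = response.getAudioResponse();
    // getVisemesData() is an assumed accessor; verify against your SDK version.
    if (audioResponse.getVisemesData()) {
      // Queue each frame of viseme weights for playback in the render loop.
      facialData.current.push(audioResponse.getVisemesData().array[0]);
    }
  }
});
```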
Utilize the useFrame hook from react-three-fiber to modulate morph targets based on the received facial data.
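A sketch of driving morph targets from useFrame (the package is published as @react-three/fiber in recent versions). The `facialData` queue and the Wolf3D_Head mesh name are assumptions based on the ReadyPlayer.me example later in this guide:

```javascript
import { useFrame } from '@react-three/fiber';

useFrame(() => {
  const frame = facialData.current.shift(); // next frame of morph weights
  if (!frame) return;
  frame.forEach((weight, index) => {
    // Write each weight to the corresponding morph target on the head mesh.
    nodes.Wolf3D_Head.morphTargetInfluences[index] = weight;
  });
});
```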
In addition to facial expressions, the convai-web-sdk allows developers to apply bone adjustments for specific facial features. Receive bone adjustments for "Open_Jaw," "Tongue," and "V_Tongue_Out" and apply them to your character as demonstrated below:
These code examples are specific to Reallusion characters.
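A hedged sketch of applying those bone adjustments. The bone references and rotation ranges are illustrative, not the plugin's exact values; adjust them to your rig:

```javascript
// `frame` holds normalized weights (0..1) from the facial-data callback;
// `bones` maps your rig's jaw/tongue bones (Reallusion naming assumed).
function applyBoneAdjustments(bones, frame) {
  const jaw = frame['Open_Jaw'] ?? 0;
  const tongue = frame['Tongue'] ?? 0;
  const tongueOut = frame['V_Tongue_Out'] ?? 0;

  bones.jaw.rotation.x = jaw * 0.5;           // open the jaw up to ~0.5 rad
  bones.tongue.rotation.x = tongue * 0.2;     // curl the tongue slightly
  bones.tongue.position.z = tongueOut * 0.02; // push the tongue forward
}
```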
Note: The throttle function is not 100% accurate.
Throttle inaccuracy can lead to edge cases. Implement a clock setup and handle both above- and below-100fps scenarios using the elapsed time.
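One way to make playback frame-rate independent is to advance viseme frames by elapsed time rather than by render ticks. This sketch assumes facial data sampled at 100 fps:

```javascript
import * as THREE from 'three';

const clock = new THREE.Clock();
const FRAME_DURATION = 1 / 100; // assumed 100 fps facial-data sampling
let accumulated = 0;

// Call from your render loop; consumes as many frames as the elapsed
// time allows, so playback stays correct above and below 100 fps.
function advanceVisemeFrames(consumeFrame) {
  accumulated += clock.getDelta();
  while (accumulated >= FRAME_DURATION) {
    accumulated -= FRAME_DURATION;
    consumeFrame();
  }
}
```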
Convai Integration - PlayCanvas Plugin Guide for seamless integration.
After adding the Convai web-sdk CDN link to the URL section, the ConvaiClient class becomes available directly in the browser.
Add all the scripts below to your Character entity.
Replace the empty "" with your API key and Character ID.
The ConvaiNpc script is responsible for handling the interaction between the user and a virtual character powered by the Convai AI.
The script initializes the Convai client by providing an API key and character ID. It sets up necessary callbacks to handle various events, such as errors, user queries, and audio responses from the Convai service.
The initializeConvaiClient function is the entry point for setting up the Convai client. It creates a new instance of the ConvaiClient and configures it with the provided API key, character ID, and other settings, such as enabling audio and facial data.
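A sketch of what that entry point might look like inside the PlayCanvas script. The constructor options mirror this guide; exact fields may vary by SDK version:

```javascript
function initializeConvaiClient(apiKey, characterId) {
    convaiClient = new ConvaiClient({
        apiKey: apiKey,
        characterId: characterId,
        enableAudio: true,
        enableFacialData: true,
        faceModel: 3,
    });

    // Surface SDK errors in the browser console.
    convaiClient.setErrorCallback(function (type, message) {
        console.error('Convai error:', type, message);
    });
}
```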
The script handles user input through two methods: text input via a form and voice input using the "T" key. For voice input, the handleKeyDown and handleKeyUp functions detect when the "T" key is pressed and released, respectively. While the "T" key is held, the script records audio and sends it to the Convai service for processing.
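A sketch of those key handlers using PlayCanvas keyboard events (pc.KEY_T); the isRecording flag is an assumption:

```javascript
ConvaiNpc.prototype.handleKeyDown = function (event) {
    if (event.key === pc.KEY_T && !this.isRecording) {
        this.isRecording = true;
        convaiClient.startAudioChunk(); // begin streaming microphone audio
    }
};

ConvaiNpc.prototype.handleKeyUp = function (event) {
    if (event.key === pc.KEY_T && this.isRecording) {
        this.isRecording = false;
        convaiClient.endAudioChunk(); // finalize and send the recording
    }
};
```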
The ConvaiNpc.prototype.initialize function is called once per entity and sets up the Convai client. It also registers callbacks for handling audio playback events, updating the isTalking and conversationActive flags accordingly.
The ConvaiNpc.prototype.handleAnimation function updates the character's animation based on the isTalking state, allowing for synchronized lip movements and facial expressions.
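A sketch of handleAnimation, assuming the anim state graph exposes an isTalking boolean transition parameter:

```javascript
ConvaiNpc.prototype.handleAnimation = function () {
    // Drive the state graph's transition parameter from the talking flag.
    this.entity.anim.setBoolean('isTalking', this.isTalking);
};
```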
The PlayerAnimationHandler script is responsible for controlling the animations of a player character based on certain conditions, such as velocity.
The script defines three attributes:
blendTime: Controls the blend time between animations, which determines how smoothly the transition between animations occurs. The default value is 0.2.
velMin: The minimum velocity required to trigger a specific animation. The default value is 10.
velMax: The maximum velocity required to trigger a specific animation. The default value is 50.
These attributes can be adjusted in the editor or through code to fine-tune the animation behavior for the player character.
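The attribute declarations would look like this in the script, using standard PlayCanvas attribute syntax with the defaults listed above:

```javascript
var PlayerAnimationHandler = pc.createScript('playerAnimationHandler');

PlayerAnimationHandler.attributes.add('blendTime', { type: 'number', default: 0.2 });
PlayerAnimationHandler.attributes.add('velMin', { type: 'number', default: 10 });
PlayerAnimationHandler.attributes.add('velMax', { type: 'number', default: 50 });
```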
The initialize function is called when the script is initialized. In this implementation, it plays the 'Idle' animation with the specified blend time (this.blendTime). This animation plays when the player character is not moving or when the velocity is outside the range defined by velMin and velMax.
The script is designed to be extended further to handle different animation states based on the player character's velocity or other conditions. For example, you could add additional functions or logic to check the player's velocity and play different animations (e.g., 'Walk', 'Run') based on the velocity range defined by velMin and velMax.
By utilizing this script, you can easily manage and transition between different animations for the player character, providing a more immersive and realistic experience in your game or application.
The Lipsync script is responsible for animating the character's mouth and facial expressions based on the received viseme data. Visemes are the key mouth shapes and facial positions used to represent speech sounds. The script applies morph target animations to the character's head and teeth components to achieve realistic lip-syncing effects.
The script works by accessing the visemeData array, which contains the viseme weights for each frame of the animation. It then applies these weights to the corresponding morph targets on the head and teeth components. The runVisemeData function handles this process by looping through the viseme weights and setting the morph target weights accordingly.
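A sketch of runVisemeData using PlayCanvas morph instances; the entity reference and the assumption that morph indices line up one-to-one with viseme weights are illustrative:

```javascript
Lipsync.prototype.runVisemeData = function (frame) {
    var headMorph = this.headEntity.render.meshInstances[0].morphInstance;
    for (var i = 0; i < frame.length; i++) {
        headMorph.setWeight(i, frame[i]); // one weight per viseme morph target
    }
};
```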
The script keeps track of the current viseme frame using the currentVisemeFrame variable and a timer variable. This ensures that the viseme animations are synchronized with the audio playback. When the viseme data has finished playing, the zeroMorphs function is called to reset all morph target weights to zero, effectively resetting the character's facial expression.
The HeadTracking script is responsible for controlling the rotation of a character's head and eyes based on the position of the camera (representing the user's viewpoint). The script calculates the angle between the forward vector of the head component and the forward vector of the camera. If this angle is within a specified threshold (45 degrees in this case), the head and eyes rotate to look toward the camera's position.
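A sketch of that check inside the script's update loop; the headEntity and cameraEntity references are assumptions:

```javascript
HeadTracking.prototype.update = function (dt) {
    var head = this.headEntity;
    var toCamera = this.cameraEntity.getPosition().clone()
        .sub(head.getPosition()).normalize();
    // Angle between where the head faces and where the camera is.
    var angle = Math.acos(head.forward.dot(toCamera)) * pc.math.RAD_TO_DEG;
    if (angle < 45) {
        head.lookAt(this.cameraEntity.getPosition());
    }
};
```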
Add all the above scripts to your PlayCanvas project and attach ConvaiNpc, Lipsync, and HeadTracking to the Convai character (your model) entity.
Create a FirstPersonView.js script and add the code below. You can find examples of implementing camera controls in the PlayCanvas tutorials as well. Attach this script to the Player capsule.
Implement throttling (for example, with lodash) to keep facial animation updates at a consistent 100 fps. The example below demonstrates how to maintain a consistent animation frame rate.
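A sketch using lodash's throttle to cap update frequency at roughly 10 ms (100 fps); applyFrame is a hypothetical function that writes the weights to your morph targets:

```javascript
import throttle from 'lodash/throttle';

// At most one morph update per 10 ms, smoothing out bursts of data.
const updateMorphs = throttle((frame) => {
  applyFrame(frame); // hypothetical: writes weights to morph targets
}, 10);
```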
Ask your NPC to perform actions using our JavaScript SDK
To set up Actions, follow these steps:
Sign in to Convai's website and navigate to your Character Details.
Navigate to Actions, enable the Action Generation and select the actions you want your NPC to perform.
Go back to your code and initialize an actionText state that will store the action you want the NPC to perform.
Do this inside the same useEffect where we check the audio response. Refer to the Getting Started page to quickly understand how and where we check the audio response.
Actions have been set up, and you can now use actionText to perform the required action, as shown in the sketch below.
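A sketch of wiring this up inside your component. The action accessors (hasActionResponse, getActionResponse, getAction) are assumptions; verify them against your SDK version:

```javascript
const [actionText, setActionText] = useState('');

useEffect(() => {
  convaiClient.current.setResponseCallback((response) => {
    // ...existing user-query and audio handling...
    if (response.hasActionResponse()) {
      setActionText(response.getActionResponse().getAction());
    }
  });
}, []);
```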
Begin building web applications with our quick start guide for the JavaScript Chat UI SDK
First, import the ChatBubble component and the useConvaiClient hook from convai-chatui-sdk.
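For example:

```javascript
import { ChatBubble, useConvaiClient } from 'convai-chatui-sdk';
```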
Invoke the useConvaiClient hook in your application, passing characterId and apiKey as parameters, then store the returned state in a client variable for future reference.
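A sketch (the argument order is an assumption; check the package's README):

```javascript
const client = useConvaiClient('<characterId>', '<apiKey>');
```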
In the return statement of your component, render the ChatBubble component that was imported via the NPM package.
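A sketch of the render, assuming the chatHistory and chatUiVariant props accept the values listed later in this guide:

```jsx
return (
  <ChatBubble
    chatHistory="Show"
    chatUiVariant="Unified Compact Chat"
    client={client}
  />
);
```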
Begin building applications with our quick start guide for the Web SDK
First, import the ConvaiClient from convai-web-sdk.
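For example:

```javascript
import { ConvaiClient } from 'convai-web-sdk';
```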
Declare the convaiClient variable using the useRef hook provided by the React library. The useRef hook returns a mutable ref object, which can be used to store a value that persists across component renders.
Initialize the Convai client inside a useEffect hook to ensure it runs only once when the component is mounted. By providing an empty dependency array as the second parameter to the useEffect hook, the initialization code is executed only on the initial render.
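Putting those two steps together (the placeholders are yours to fill in):

```javascript
import { useEffect, useRef } from 'react';
import { ConvaiClient } from 'convai-web-sdk';

const convaiClient = useRef(null);

useEffect(() => {
  // Runs once on mount thanks to the empty dependency array.
  convaiClient.current = new ConvaiClient({
    apiKey: '<your-api-key>',
    characterId: '<your-character-id>',
    enableAudio: true,
  });
}, []);
```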
Your Convai Client has been initialized. Now you can use Convai Client methods to set up a conversation with your NPC.
These are the main methods that allow you to interact and converse with your NPC using the ConvaiClient.
setResponseCallback
Description: Sets the response callback function for the ConvaiClient instance. This callback function is invoked when a response is received from the Convai API. This code should also live in the same useEffect as the initialization code.
Parameters: callback (function): A callback function that will be executed when a response is received. It takes one parameter representing the received response data.
Example:
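A sketch based on the description below; the response accessors (hasUserQuery, getIsFinal, getTextData) follow common Convai SDK usage but may vary by version:

```javascript
convaiClient.current.setResponseCallback((response) => {
  // Live transcription of the user's speech.
  if (response.hasUserQuery()) {
    const transcript = response.getUserQuery();
    if (transcript.getIsFinal()) {
      setUserText(transcript.getTextData());
    }
  }
  // The NPC's reply text, which may arrive in chunks.
  if (response.hasAudioResponse()) {
    setNpcText((prev) => prev + response.getAudioResponse().getTextData());
  }
});
```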
This code extracts the user query from the response; the finalized text is set as userText, which is then used to generate a response from the NPC, stored as npcText.
Also, remember to declare both the onAudioPlay and onAudioStop methods described below inside the useEffect, after the setResponseCallback method, to avoid errors.
startAudioChunk
Description: Initiates the client to start accepting audio chunks for voice input. This method signals the client to begin receiving and processing audio data.
Parameters: None
Example:
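A sketch: start listening while a chosen key (here "T") is held down:

```javascript
const handleKeyDown = (e) => {
  if (e.key === 't' && !e.repeat) {
    convaiClient.current.startAudioChunk();
  }
};
```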
You can use this method to make the client listen and take audio input only while the user holds down a particular key.
endAudioChunk
Description: Instructs the client to stop taking user audio input and finalize the transmission of audio chunks for voice input. This method indicates the end of the audio input and allows the client to process the received audio data.
Parameters: None
Example:
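A sketch: stop listening when the key is released:

```javascript
const handleKeyUp = (e) => {
  if (e.key === 't') {
    convaiClient.current.endAudioChunk();
  }
};
```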
You can use this method to make the client stop listening to the audio on release of a particular key.
onAudioPlay
Description: Notifies whenever the NPC starts speaking.
Parameters: None
Example:
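A sketch, assuming an isTalking state used to drive animations:

```javascript
convaiClient.current.onAudioPlay(() => {
  setIsTalking(true); // switch the avatar to a talking animation
});
```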
This method can be used to drive animations: once the audio starts, we can set the avatar to play a talking animation.
onAudioStop
Description: Notifies whenever the NPC stops speaking.
Parameter: None
Example:
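A sketch mirroring the one above:

```javascript
convaiClient.current.onAudioStop(() => {
  setIsTalking(false); // return the avatar to its idle animation
});
```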
This method can also be used to drive animations: once the audio stops, we can return the avatar to an idle position.
sendTextChunk
Description: Sends a text chunk to the client, which will be processed to generate the NPC's output.
Parameter: text (string): Takes a text chunk of type string as input.
Example: This can be used when you are using a text box to take the user's input. You can set up the text box as shown below.
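A sketch of such a text box (the userInput state and its setter are assumptions):

```jsx
<input
  type="text"
  value={userInput}
  onChange={(e) => setUserInput(e.target.value)}
  onKeyDown={(e) => {
    if (e.key === 'Enter') {
      convaiClient.current.sendTextChunk(userInput);
      setUserInput('');
    }
  }}
/>
```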
You can use this method to get the input from a text box and then send it to the client for processing.
resetSession
Description: Used for resetting the current session.
Parameter: None
Example:
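For example:

```javascript
convaiClient.current.resetSession(); // clear the current conversation
```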
toggleAudioVolume
Description: Can be used to toggle audio mode from on to off or vice versa.
Parameter: None
Example:
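For example:

```javascript
convaiClient.current.toggleAudioVolume(); // mute or unmute the NPC audio
```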
Integrate Convai's conversational services with a Chat UI into your Web Application.
Sign in to Convai and copy your API key. This will help you to converse with the avatar at a later step.
Convai Chat UI SDK is available as an npm package. Run the following command in the root of your React project to install the package.
npm install convai-chatui-sdk@latest
LTS Version: 0.0.4
Convai's JavaScript Chat UI SDK provides you with a set of tools to integrate Convai's conversational AI services with a Chat UI into your web-based applications. Our Chat UI package is designed to drastically expedite the process of establishing a chat environment for web-based applications. With streamlined integration, developers can now effortlessly set up an interactive, fully customizable chat interface in no time.
Before you begin with the integration, make sure you have created an account with Convai and have your own API key.
Chat Overlay - PlayCanvas Plugin Guide for Convai integration.
Let's add a chat window to enhance user interaction and immersion. Create a New entity called Convai Chat.
The ConvaiChat script is responsible for managing the chat interface and displaying the conversation between the user and an AI-powered virtual character. It handles rendering user messages and AI responses, maintaining a chat history, and ensuring smooth scrolling behavior within the chat container.
Add the following files as attachments to the ConvaiChat script after parsing the script.
Dive into the detailed guide to understand and utilize the properties that can be passed to your Chat Bubble component for enhanced customization and control.
In the ChatBubble component, there are three main properties:
A chatHistory property to determine whether to display the chat history.
A chatUiVariant property to specify the chat variant to display.
A set of properties returned by useConvaiClient, which are essential for fetching user and NPC texts.
The set of properties that are mandatory to be returned by the useConvaiClient (You can follow the JavaScript SDK tutorial to set up your own custom useConvaiClient hook) for the Chat Bubble to work as expected are:
npcText: Stores the text returned by the NPC.
userText: Stores the user's text.
convaiClient: Stores the state of the Convai client and is used by the Chat Bubble to set the session ID.
keyPressed: Changes whenever the user presses the key and starts speaking.
characterId: Used by the Chat Bubble to store and retrieve chat history.
setUserText: Used by the Chat Bubble component to reset the chat history.
setNpcText: Used by the Chat Bubble component to reset the chat history.
setEnter: Used by the text box to send the text box content to the client whenever the user presses Enter.
These are the properties that can be optionally passed by the custom useConvaiClient hook. If you are using the inbuilt useConvaiClient hook you can use these properties:
npcName: Name that will be shown on the chat UI for your character.
userName: Name that will represent user on the chat UI.
gender: Returns the gender of your avatar (can be used to set up animations).
avatar: Gives the model link of your avatar which can be used to load your 3D model to the scene.
actionText: Returns the action text that represents the action the NPC has been asked to perform.
There are 4 types of chat variants that you can choose from:
Toggle History Chat
Unified Compact Chat
Sequential Line Chat
Expanded Side Chat
Adding External Script - PlayCanvas Plugin Guide for Convai integration.
Here we will add the Convai Web SDK to our project, a JavaScript library that enables integration of conversational AI capabilities.
Create a blank project after logging in.
Open the Settings section in the SETTINGS panel.
Increase the array size to 1 to add one external script.
Add the Convai web-sdk CDN link in the URL section.
These triggers would be used to navigate through the story graph and activate specific narrative design sections.
In the example below, calling the trigger (start tour) would activate the "Greet User" section.
Once the logic is decided, we can move to our JavaScript code, where we have initialized the convaiClient. We can call convaiClient.invokeTrigger() whenever we require it in our use case. Additional information for context can also be passed as the second argument: message.
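For example, using the graph described above:

```javascript
// Activates the "Greet User" section; the second argument is optional context.
convaiClient.invokeTrigger('start tour', 'The user just entered the scene.');
```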
Narrative Design - Narrative based NPCs with convai on web.
Narrative design is a method of structuring interactive stories, particularly for Convai non-player characters (NPCs) in web-based environments. It utilizes a graph-based system where each node represents a step in the story progression.
Select your Character in which you want to enable Narrative Design.
Open the Narrative Design section in the Convai playground.
According to your storyline, create a narrative map/graph. Shown below is an example. The blue boxes represent triggers (used to initiate a line in the graph) and the black boxes represent sections (what you want the character to speak about).
Each section has an objective and decisions. Based on what is written in the Objective, the character will speak/respond. You can give the character exact dialogue using <speak>{your dialogue}</speak> tags.
For decisions, add messages as context describing how the character should decide on its own which objective to move to. The ones you want to control yourself can be driven through invokeTrigger, covered in the next section.
Learn to integrate GLB and FBX animations into Convai's web plugin for dynamic character actions.
Mixamo is a free online service that provides a vast library of character animations that can be used in various 3D projects. Mixamo accepts only the .fbx file format for uploading animations. If you have a character in the .glb format, you'll need to convert it to .fbx before uploading it to Mixamo.
Open Blender and navigate to File > Import > glTF 2.0 (.glb/.gltf).
Locate and select the .glb file you want to convert, then click Import glTF 2.0.
Once the .glb file has been imported, navigate to File > Export > FBX (.fbx).
Any animation that is compatible with the character works. It should be in .glb or .fbx format.
This guide shows how to dynamically pass variables to Narrative Design section and triggers
We will create a simple scenario where the character welcomes the player and asks them about their evening or morning based on the player's time of day.
In the playground, enable Narrative Design on your character and change the starting section name to Welcome.
Add the following to the Objective field of the Welcome section:
The time of day currently is {TimeOfDay}. Welcome the player and ask him how his {TimeOfDay} is going.
Notice that any string between curly brackets becomes a variable. What we did here is add the time of day as a variable; from our JavaScript code we can then pass either the word "Morning" or "Evening" and the character will respond accordingly.
Setting Narrative Template Keys in JavaScript
To initialize your narrative design framework with custom template keys, you can pass the narrativeTemplateKeys argument when instantiating the convaiClient. This allows you to define and structure your narrative flow programmatically.
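A sketch of passing template keys at construction time. The parameter name follows this guide; the value shape (a Map of key/value pairs) is an assumption to verify against your SDK version:

```javascript
const convaiClient = new ConvaiClient({
  apiKey: '<your-api-key>',
  characterId: '<your-character-id>',
  enableAudio: true,
  // Fills {TimeOfDay} in the Welcome section's objective.
  narrativeTemplateKeys: new Map([['TimeOfDay', 'Morning']]),
});
```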
Discover how to use GLB characters with Convai's web plugin for immersive virtual experiences.
Creating a Character on ReadyPlayer:
ReadyPlayer.me is a platform that allows you to create custom 3D avatars or characters. Here are the typical steps involved:
Go to the ReadyPlayer.me website.
Click on the "Create Avatar" or similar button to start the character creation process.
Customize your character's appearance by selecting different options for body type, clothing, hair, facial features, and other attributes.
Preview your character in 3D as you make changes.
Once you're satisfied with your character's look, you can save or download the character file (often in a format like .glb or .glTF).
Using Morphs with ReadyPlayer.me Characters:
Morphs, also known as blend shapes or morph targets, are a way to deform or animate a 3D model's mesh to create different expressions or appearances. ReadyPlayer.me characters can be animated using morphs.
Creating a Character on ActorCore:
Add the "Actore Core" character to cart and also get the relevant animations for the same. For example ("Ideal", "Talking").
Download animations with "Move in place" (the preferred option).
To inspect and preview the available morph targets for your character, you can use the handy glTF viewer tool provided by Don McCurdy. This online viewer supports glTF files, which is a common format for 3D assets like characters and scenes.
Follow these steps:
In the glTF viewer interface, click the "Open File" button or drag and drop your character's glTF or GLB file onto the viewer window.
Once the file is loaded, you should see your character rendered in the 3D viewport.
On the right-hand side of the viewer, locate the "Morph Targets" section. This section will list all the available morph targets present in your character file.
Click on a morph target from the list to preview how it deforms the character's mesh. The viewer will show the character with the selected morph target applied.
Use the slider next to each morph target to adjust the intensity or weight of the deformation.
You can also enable multiple morph targets simultaneously by checking their respective boxes and adjusting their sliders.
This glTF viewer is an excellent tool for quickly inspecting and visualizing the morph targets of your character without needing to import it into a 3D software or game engine. It allows you to see how each morph target affects the character's appearance, which can be helpful for understanding the available facial expressions, body poses, or other deformations.
Additionally, the glTF viewer provides other useful features like viewing the character's node hierarchy, inspecting materials and textures, and more.
If your character file is in a different format (e.g., FBX, OBJ), you may need to convert it to glTF or GLB first before being able to view it in this tool.
The glTF viewer can also be used to find the skinned meshes that control the visemes. In the above example, it's "Wolf3D_Head", which controls all morphs on the Head object.
Similarly, "Wolf3D_Teeth" in this case controls the teeth-related morphs.
Go to the Reallusion ActorCore website and select the actor you like.
Go to the glTF viewer website: https://gltf-viewer.donmccurdy.com/