This section provides comprehensive information on integrating and handling facial expressions and lipsync within your web applications using the convai-web-sdk.
To enable facial expression functionality, initialize the ConvaiClient
with the necessary parameters. The enableFacialData
flag must be set to true
to receive facial expression data.
Set faceModel to 3; it is the standard face model and is actively maintained.
Retrieve viseme data by incorporating the provided callback. The example code demonstrates how to handle and update facial data.
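The setup above can be sketched as follows. This is a minimal, hedged sketch: the `ConvaiClient` class below is a stand-in stub so the snippet runs outside a browser (in a real app you would `import { ConvaiClient } from 'convai-web-sdk'`), `apiKey`/`characterId` are placeholders, and the `getAudioResponse`/`getVisemesData` getter names are assumptions modeled on the SDK's generated response accessors.

```javascript
// Stand-in stub so this sketch runs without the SDK installed.
// In a real app: import { ConvaiClient } from 'convai-web-sdk';
class ConvaiClient {
  constructor(config) { this.config = config; }
  setResponseCallback(cb) { this.responseCallback = cb; }
}

// Facial data is only streamed when enableFacialData is true.
function createFacialClient(apiKey, characterId) {
  return new ConvaiClient({
    apiKey,                  // your Convai API key (placeholder)
    characterId,             // the character to converse with (placeholder)
    enableAudio: true,
    enableFacialData: true,  // required for facial expression data
    faceModel: 3,            // standard, actively maintained model
  });
}

// Cache viseme frames as they arrive so the render loop can consume them.
// Getter names here are assumptions, not confirmed SDK API.
const visemeFrames = [];
function handleFacialData(response) {
  const audio = response.getAudioResponse && response.getAudioResponse();
  if (audio && audio.getVisemesData && audio.getVisemesData()) {
    visemeFrames.push(audio.getVisemesData());
  }
}
```

Register `handleFacialData` through the same response callback used for text and audio, so facial frames stay in sync with the spoken response.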
Utilize the useFrame
hook from react-three-fiber
to modulate morph targets based on the received facial data.
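A sketch of the morph-target update, written as a pure function so the timing logic is clear; in react-three-fiber you would call it inside `useFrame((state, delta) => ...)` with your mesh's `morphTargetInfluences`. The smoothing speed of 12 is an illustrative assumption, not an SDK constant.

```javascript
// Blend a mesh's morphTargetInfluences toward the current viseme weights.
// `delta` is the seconds since the last frame (as useFrame provides).
function applyVisemeWeights(influences, targetWeights, delta, speed = 12) {
  const t = Math.min(1, delta * speed); // frame-rate-independent smoothing
  for (let i = 0; i < influences.length; i++) {
    const target = targetWeights[i] ?? 0; // missing targets relax to zero
    influences[i] += (target - influences[i]) * t;
  }
  return influences;
}
```

Inside the component this would look like `useFrame((_, delta) => applyVisemeWeights(headRef.current.morphTargetInfluences, currentFrame, delta));`, where `headRef` and `currentFrame` are your own mesh ref and the latest viseme frame.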
In addition to facial expressions, the convai-web-sdk
allows developers to apply bone adjustments for specific facial features. Receive bone adjustments for "Open_Jaw," "Tongue," and "V_Tongue_Out" and apply them to your character as demonstrated below:
These code examples are specific to Reallusion characters.
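A minimal sketch of mapping a bone-adjustment weight onto a jaw rotation. The 30-degree maximum opening is an illustrative assumption; with a three.js skeleton you would assign the result to something like `jawBone.rotation.x`, and the same pattern applies to the "Tongue" and "V_Tongue_Out" bones, each with its own axis and range.

```javascript
// Map an "Open_Jaw" weight (0..1) from the SDK onto a rotation in radians.
// MAX_JAW_OPEN_RAD (~30 degrees) is an assumed tuning value, not SDK-defined.
const MAX_JAW_OPEN_RAD = Math.PI / 6;

function jawRotationFromWeight(weight) {
  const w = Math.min(1, Math.max(0, weight)); // clamp to the valid range
  return w * MAX_JAW_OPEN_RAD;
}
```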
Implement throttling using lodash to keep animations running smoothly at 100 fps. The provided example demonstrates how to maintain a consistent animation frame rate.
Note: lodash's throttle function is not 100% accurate
This inaccuracy can lead to edge cases. Set up a clock and use the elapsed time to handle both above-100 fps and below-100 fps scenarios.
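One way to sketch the elapsed-time approach: derive the viseme frame index from the clock instead of trusting the throttle interval, so playback stays correct whether rendering runs faster or slower than 100 fps. The 10 ms frame length matches the 100 fps rate above; the function names are illustrative.

```javascript
// Convert elapsed clock time into a 10 ms (100 fps) viseme frame index.
const FRAME_MS = 10;
function frameIndexAt(elapsedMs) {
  return Math.floor(elapsedMs / FRAME_MS);
}

// Return the frame indices to apply since the last render: exactly one
// frame when running at or above 100 fps, several when a slow render
// frame must catch up.
function framesToApply(prevElapsedMs, nowElapsedMs) {
  const from = frameIndexAt(prevElapsedMs);
  const to = frameIndexAt(nowElapsedMs);
  const frames = [];
  for (let i = from + 1; i <= to; i++) frames.push(i);
  return frames;
}
```

At render rates above 100 fps, `framesToApply` often returns an empty array (no new frame yet); below 100 fps it returns several indices so no viseme frame is skipped.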
Ask your NPC to perform actions using our JavaScript SDK
To set up Actions, follow these steps:
Sign in to Convai's website and navigate to your Character Details.
Navigate to Actions, enable Action Generation, and select the actions you want your NPC to perform.
Go back to your code and initialize an actionText state that will store the action you want the NPC to perform.
Add the action-handling logic inside the same useEffect where the audio response is checked. Refer to the Getting Started page to see how and where the audio response is checked.
Actions are now set up, and you can use actionText to perform the required action.
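The steps above can be sketched as follows. The React state setter is replaced with a plain stand-in so the snippet runs on its own, and the `getActionResponse`/`getAction` getter names are assumptions modeled on the SDK's other response accessors, not confirmed API.

```javascript
// Stand-in for the React state setter; in the app this would come from
// const [actionText, setActionText] = useState('');
let actionText = '';
const setActionText = (value) => { actionText = value; };

// Pull the action out of a Convai response inside the same callback that
// checks the audio response. Getter names here are assumptions.
function handleActionResponse(response) {
  const action = response.getActionResponse && response.getActionResponse();
  if (action && action.getAction()) {
    setActionText(action.getAction().trim());
  }
}
```

Once `actionText` updates, your own game or scene logic can match it against the actions you enabled on the Character Details page and trigger the corresponding behavior.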
Begin building applications with our quick start guide for the Web SDK
First, import the ConvaiClient from convai-web-sdk.
Declare the convaiClient
variable using the useRef
hook provided by the React library. The useRef
hook returns a mutable ref object, which can be used to store a value that persists across component renders.
Initialize the Convai client inside a useEffect
hook to ensure it runs only once when the component is mounted. By providing an empty dependency array as the second parameter to the useEffect
hook, the initialization code will be executed only on the initial render.
Your Convai Client has been initialized. Now you can use Convai Client methods to set up a conversation with your NPC.
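The import, ref, and useEffect initialization described above can be sketched as follows. The `ConvaiClient` class here is a stand-in stub so the snippet runs outside a React app, and `apiKey`/`characterId` are placeholders you must replace with your own values.

```javascript
// Stand-in stub so this sketch runs without the SDK installed.
// In a real app: import { ConvaiClient } from 'convai-web-sdk';
class ConvaiClient {
  constructor(config) { this.config = config; }
}

// The initialization that belongs inside useEffect(() => { ... }, []),
// writing into a ref so the client persists across re-renders.
function initConvaiClient(clientRef) {
  clientRef.current = new ConvaiClient({
    apiKey: '<your-api-key>',           // placeholder
    characterId: '<your-character-id>', // placeholder
    enableAudio: true,                  // stream the NPC's spoken responses
  });
  return clientRef.current;
}
```

In the component this becomes: `const convaiClient = useRef(null); useEffect(() => { initConvaiClient(convaiClient); }, []);` so the client is created exactly once on mount.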
These are the main methods that allow you to interact and converse with your NPC using the ConvaiClient.
setResponseCallback
Description: Sets the response callback function for the ConvaiClient instance. This callback is invoked whenever a response is received from the Convai API. Register it inside the same useEffect as the initialization code.
Parameters: callback
(function): A callback function that will be executed when a response is received. It takes one parameter representing the received response data.
Example:
This part of the code extracts the user query from the response; the finalized text is stored as userText, which is then used to generate the NPC's response, stored as npcText.
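A sketch of that callback, with the React state setters replaced by plain stand-ins so it runs on its own. The `hasUserQuery`/`getUserQuery`/`getIsFinal`/`getTextData`/`hasAudioResponse`/`getAudioResponse` accessor names are assumptions modeled on the SDK's generated response objects.

```javascript
// Stand-ins for React state setters; in the app these come from useState.
let userText = '';
let npcText = '';
const setUserText = (t) => { userText = t; };
const setNpcText = (t) => { npcText = t; };

// The finalized user query becomes userText, and the NPC's text reply
// accumulates into npcText as audio-response chunks stream in.
function responseCallback(response) {
  if (response.hasUserQuery && response.hasUserQuery()) {
    const query = response.getUserQuery();
    if (query.getIsFinal()) setUserText(query.getTextData());
  }
  if (response.hasAudioResponse && response.hasAudioResponse()) {
    setNpcText(npcText + response.getAudioResponse().getTextData());
  }
}
```

Register it inside the same useEffect as the initialization: `convaiClient.current.setResponseCallback(responseCallback);`.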
Also, remember to declare both the onAudioPlay
and onAudioStop
methods described below inside the useEffect,
after the setResponseCallback
method, to avoid errors.
startAudioChunk
Description: Initiates the client to start accepting audio chunks for voice input. This method signals the client to begin receiving and processing audio data.
Parameters: None
Example:
You can use this method to make the client listen and take input of the audio only when the user presses some particular key.
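A push-to-talk sketch of that pattern. The stub client below stands in for your initialized `ConvaiClient` ref (e.g. `convaiClient.current`), and the "T" key (keyCode 84) is an arbitrary choice for illustration.

```javascript
// Stand-in client; in the app this is your initialized ConvaiClient ref.
const convaiClient = {
  listening: false,
  startAudioChunk() { this.listening = true; },
};

// Begin capturing audio while the "T" key is held. Ignoring event.repeat
// avoids restarting capture on keyboard auto-repeat.
function handleKeyDown(event) {
  if (event.keyCode === 84 && !event.repeat) {
    convaiClient.startAudioChunk();
    return true;
  }
  return false;
}
```

Attach it with `window.addEventListener('keydown', handleKeyDown)` (and remove it in the useEffect cleanup).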
endAudioChunk
Description: Instructs the client to stop taking user audio input and finalize the transmission of audio chunks for voice input. This method indicates the end of the audio input and allows the client to process the received audio data.
Parameters: None
Example:
You can use this method to make the client stop listening to the audio on release of some particular key.
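The matching release handler can be sketched as follows, again with a stub standing in for your initialized `ConvaiClient` ref and the "T" key as an illustrative choice.

```javascript
// Stand-in client; in the app this is your initialized ConvaiClient ref.
const convaiClient = {
  listening: true,
  endAudioChunk() { this.listening = false; },
};

// Stop capturing and let the client finalize the audio when the
// push-to-talk key ("T", keyCode 84) is released.
function handleKeyUp(event) {
  if (event.keyCode === 84) {
    convaiClient.endAudioChunk();
    return true;
  }
  return false;
}
```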
onAudioPlay
Description: Notifies whenever the NPC starts speaking.
Parameters: None
Example:
This method can be used with animations: once the audio starts, switch the avatar to a talking animation.
onAudioStop
Description: Notifies whenever the NPC stops speaking.
Parameter: None
Example:
This method can likewise be used with animations: once the audio stops, return the avatar to an idle pose.
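A sketch wiring both notifications to a single animation flag, assuming each method accepts a listener callback (as the animation use case implies). The stub client only records the registered listeners; in the app your initialized `ConvaiClient` invokes them as the NPC's audio starts and stops.

```javascript
// Stand-in client that stores the registered listeners.
const convaiClient = {
  onAudioPlay(cb) { this._onPlay = cb; },
  onAudioStop(cb) { this._onStop = cb; },
};

// In React this flag would be useState; animations key off of it.
let isTalking = false;

function registerTalkingAnimation(client) {
  client.onAudioPlay(() => { isTalking = true; });  // switch to talking clip
  client.onAudioStop(() => { isTalking = false; }); // return to idle pose
}
```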
sendTextChunk
Description: Sends a text chunk to the client, which processes it and generates the NPC's output.
Parameter: text (string): Takes a text chunk of type string as input.
Example: Useful when you take the user's input from a text box.
Read the value from the text box and send it to the client for processing.
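A sketch of a text-box submit handler; the stub client stands in for your initialized `ConvaiClient` ref, and the empty-input guard is an illustrative design choice.

```javascript
// Stand-in client; in the app this is your initialized ConvaiClient ref.
const convaiClient = {
  sent: [],
  sendTextChunk(text) { this.sent.push(text); },
};

// Forward non-empty text-box input to the client for processing.
function handleTextSubmit(inputValue) {
  const text = inputValue.trim();
  if (!text) return false; // ignore empty submissions
  convaiClient.sendTextChunk(text);
  return true;
}
```

In a form, call it from the submit or Enter-key handler with the input element's current value, then clear the box on success.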
resetSession
Description: Used for resetting the current session.
Parameter: None
Example:
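A sketch of a "new conversation" handler; the stub client stands in for your initialized `ConvaiClient` ref, and clearing a local transcript alongside the session is an illustrative design choice.

```javascript
// Stand-in client; in the app this is your initialized ConvaiClient ref.
const convaiClient = {
  resets: 0,
  resetSession() { this.resets += 1; },
};

// Reset the Convai session and clear any locally cached transcript state.
function startNewConversation(clearTranscript) {
  convaiClient.resetSession();
  clearTranscript();
}
```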
toggleAudioVolume
Description: Toggles the NPC's audio output on or off.
Parameter: None
Example:
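A sketch of a mute-button handler; the stub client tracks a local mute flag for illustration, while in the app the SDK manages the actual audio output and `convaiClient` is your initialized ref.

```javascript
// Stand-in client; in the app this is your initialized ConvaiClient ref.
const convaiClient = {
  muted: false,
  toggleAudioVolume() { this.muted = !this.muted; },
};

// Each click flips the NPC's audio on or off.
function handleMuteClick() {
  convaiClient.toggleAudioVolume();
  return convaiClient.muted;
}
```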
Integrate Convai conversational services in your own web application
The convai-web-sdk is a powerful npm package that empowers developers to seamlessly integrate lifelike characters into web applications. This SDK facilitates the capture of user audio streams and provides appropriate responses in the form of audio, actions, and facial expressions. Whether you're building an interactive website, a chatbot, or a game, this package adds a human touch to your user experience.
Sign in to Convai and copy your API key. You will need it to converse with the avatar in a later step.
The convai-web-sdk is available as an npm package. Run the following command in the root of your React project to install it.
npm install convai-web-sdk@latest
LTS Version: 0.0.6
Before you begin the integration, make sure you have created an account with Convai and have your own API key.