Facial Expressions
This section covers integrating and handling facial expressions and lipsync in your web applications using the convai-web-sdk.
Initialization
To enable facial expression functionality, initialize the ConvaiClient with the necessary parameters. The enableFacialData flag must be set to true for the client to stream facial expression data.
import { ConvaiClient } from 'convai-web-sdk';

convaiClient.current = new ConvaiClient({
  apiKey: '<apiKey>',
  characterId: '<characterId>',
  enableAudio: true,
  enableFacialData: true,
  faceModel: 3, // OVR lipsync
});

Receiving Viseme Data
Retrieve viseme data by incorporating the provided callback. The example code demonstrates how to handle and update facial data.
import { useState, useRef } from 'react';

const [facialData, setFacialData] = useState([]);
const facialRef = useRef([]);

convaiClient.current.setResponseCallback((response) => {
  if (response.hasAudioResponse()) {
    let audioResponse = response?.getAudioResponse();
    if (audioResponse?.getVisemesData()?.array[0]) {
      // Viseme data
      let faceData = audioResponse?.getVisemesData().array[0];
      // faceData[0] holds the "sil" (silence) value; it is -2 when a new chunk of audio is received.
      if (faceData[0] !== -2) {
        facialRef.current.push(faceData);
        setFacialData(facialRef.current);
      }
    }
  }
});

Modulating Morph Targets
Utilize the useFrame hook from react-three-fiber to modulate morph targets based on the received facial data.
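A minimal sketch of this step is shown below. It assumes your character mesh exposes standard three.js morphTargetDictionary and morphTargetInfluences properties, and that its morph targets follow the OVR viseme naming; the VISEME_NAMES list, meshRef, and frameIndexRef are illustrative, not part of the SDK:

import { useFrame } from '@react-three/fiber';

// OVR lipsync viseme order; confirm these names against your
// character's morphTargetDictionary.
const VISEME_NAMES = [
  'sil', 'PP', 'FF', 'TH', 'DD', 'kk', 'CH', 'SS', 'nn', 'RR',
  'aa', 'E', 'ih', 'oh', 'ou',
];

function useVisemeMorphs(meshRef, facialRef, frameIndexRef) {
  useFrame(() => {
    const mesh = meshRef.current;
    const frame = facialRef.current[frameIndexRef.current];
    if (!mesh || !frame) return;
    // Write each viseme weight into the matching morph target influence.
    VISEME_NAMES.forEach((name, i) => {
      const idx = mesh.morphTargetDictionary[name];
      if (idx !== undefined) {
        mesh.morphTargetInfluences[idx] = frame[i];
      }
    });
  });
}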
In addition to morph targets, the convai-web-sdk lets developers drive bone adjustments for specific facial features. You can receive bone adjustments for "Open_Jaw," "Tongue," and "V_Tongue_Out" and apply them to your character as demonstrated below:
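The following is a hedged sketch of applying that bone data. The bone lookups, the rotation axes, and the scale factors are assumptions that depend entirely on your character rig; the viseme indices reuse the VISEME_NAMES order from the previous example:

useFrame(() => {
  const frame = facialRef.current[frameIndexRef.current];
  if (!frame) return;

  // nodes comes from your loaded model (e.g. useGLTF); bone names
  // must match your rig exactly.
  const { Open_Jaw, Tongue, V_Tongue_Out } = nodes;

  // Illustrative mapping only: drive the jaw from the 'aa' viseme and
  // the tongue bones from 'TH' / 'nn'. Axis and scale are rig-dependent
  // and will likely need tuning for your character.
  if (Open_Jaw) Open_Jaw.rotation.x = frame[10] * 0.2;       // 'aa'
  if (Tongue) Tongue.rotation.x = frame[3] * 0.1;            // 'TH'
  if (V_Tongue_Out) V_Tongue_Out.rotation.x = frame[8] * 0.1; // 'nn'
});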
Handling 100fps Animation
Viseme frames arrive at 100fps, so implement throttling with lodash to advance the animation at a consistent 10 ms interval. The example below shows one way to maintain a consistent update rate.
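Here is a sketch of one possible throttling setup, assuming each viseme frame covers 10 ms of audio. applyFrame is a hypothetical callback that writes a single frame into your morph targets (for example, the loop body from the earlier useVisemeMorphs sketch):

import { useMemo, useRef } from 'react';
import { useFrame } from '@react-three/fiber';
import throttle from 'lodash/throttle';

function useThrottledBlend(facialRef, applyFrame) {
  const frameIndexRef = useRef(0);

  // Advance to the next viseme frame at most once every 10 ms (100fps),
  // regardless of how often the render loop fires.
  const advance = useMemo(
    () =>
      throttle(() => {
        const frame = facialRef.current[frameIndexRef.current];
        if (frame) {
          applyFrame(frame);
          frameIndexRef.current += 1;
        }
      }, 10),
    [facialRef, applyFrame]
  );

  useFrame(() => advance());
}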
Handling 100fps Edge Cases
Throttle timing is not perfectly accurate, which can lead to edge cases: the render loop may run faster or slower than 100fps. Set up a clock and use the elapsed time to handle both scenarios.
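A sketch using a three.js Clock follows. The 10 ms-per-frame assumption and the hypothetical applyFrame helper carry over from the previous example. Deriving the frame index from elapsed time means that below 100fps the loop skips ahead to the correct frame, while above 100fps it simply re-applies the current one:

import { useRef } from 'react';
import { useFrame } from '@react-three/fiber';
import * as THREE from 'three';

function useClockedBlend(facialRef, applyFrame) {
  // autoStart = false; the clock begins when viseme data first arrives.
  const clockRef = useRef(new THREE.Clock(false));

  useFrame(() => {
    if (facialRef.current.length === 0) return;

    // Start timing on the first frame of data.
    if (!clockRef.current.running) clockRef.current.start();

    // Each viseme frame covers 10 ms, so the target index is
    // elapsed seconds * 100.
    const elapsed = clockRef.current.getElapsedTime();
    const targetIndex = Math.floor(elapsed * 100);

    if (targetIndex < facialRef.current.length) {
      // Below 100fps this skips intermediate indices; above 100fps it
      // re-applies the same frame until the next 10 ms boundary passes.
      applyFrame(facialRef.current[targetIndex]);
    } else {
      // Ran out of data: stop and reset for the next audio response.
      clockRef.current.stop();
      facialRef.current = [];
    }
  });
}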