Lips moving in avatars

I noticed that standard avatars in Vircadia have a kind of lip sync that works when talking, and this also works with Ready Player Me avatars. They all seem to have a special skeleton and poses for the teeth that perform the lip sync.

The metaverse-tool HiFi skeleton does not appear to do this, and avatars built with it from Mixamo or MakeHuman don't seem to have moving lips.

Is there something we missed in the usage of metaverse-tool that would create such a lip/mouth movement effect when speaking, and that would be usable by non-technical people like the students in the Vircadia class I'm currently holding? BTW: Vircadia has been very well received by my class. The 40 participants enjoyed running private servers on their PCs and the unlimited new educational possibilities :slight_smile:


In order to make the mouth move you have to use what are called blendshapes or shape keys (two words for the same thing). Basically, someone has to edit the positions of the vertices in a tool like Blender and save that state of the vertices. The most basic shape key/blendshape for mouth movement that the HiFi engine/Vircadia supports is "JawOpen".
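The idea behind a blendshape can be sketched in a few lines of plain Python: each deformed vertex is the base position moved toward the shape key's saved target position by the key's weight (0.0 = neutral, 1.0 = fully applied). This is a minimal illustration of the math, not engine code; the function and vertex data are made up for the example.

```python
# Minimal sketch of how a blendshape/shape key deforms a mesh:
# each output vertex = base position + weight * (target - base),
# where weight runs from 0.0 (neutral) to 1.0 (key fully applied).

def apply_blendshape(base_verts, target_verts, weight):
    """Blend each base vertex toward the shape key's target position."""
    return [
        tuple(b + weight * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(base_verts, target_verts)
    ]

# Two vertices of a toy "jaw" mesh, as (x, y, z) tuples.
base = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
# Hypothetical "JawOpen" target: the second vertex moves down.
jaw_open = [(0.0, 0.0, 0.0), (0.0, 0.5, 0.0)]

print(apply_blendshape(base, jaw_open, 0.0))  # weight 0 -> base pose
print(apply_blendshape(base, jaw_open, 1.0))  # weight 1 -> full JawOpen
```

At runtime the engine just animates that weight (e.g. driving "JawOpen" from the microphone level) while the artist only has to author the target vertex positions once.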

The way to do this is to go into Blender and find the Shape Keys panel in the Object Data Properties. Add a shape key (the first one is created automatically as the default/basis), then add a second shape key and name it JawOpen. With JawOpen highlighted, go back to the 3D viewport and move vertices around in Edit Mode. A reminder: you shouldn't add blendshapes until absolutely everything else on your model is finished and ready to go, as you'll probably have to redo them otherwise.

Note: Last I'd checked, Mixamo models do not have built-in shape keys, and if you try to rig a model in Mixamo, I believe any existing shape keys on that model will be removed in the process. I forget whether MakeHuman has this built in. But for any model that does (you can check the Shape Keys panel when you import the model into Blender), you may have to rename the keys to be compatible.
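The renaming step is just a lookup from whatever names the source tool exported to the names the engine expects. A small sketch, assuming only "JawOpen" as a confirmed engine-side name (the source-side names in the map are illustrative guesses, not real exporter output):

```python
# Sketch of renaming imported shape keys to engine-compatible names.
# "JawOpen" is the one name confirmed above; the left-hand entries
# are hypothetical names a tool like MakeHuman might export.
RENAME_MAP = {
    "jaw_open": "JawOpen",   # hypothetical snake_case export name
    "mouthOpen": "JawOpen",  # hypothetical camelCase export name
}

def rename_shape_keys(names):
    """Map known aliases to engine names; leave unknown names alone."""
    return [RENAME_MAP.get(n, n) for n in names]

print(rename_shape_keys(["Basis", "mouthOpen", "Smile"]))
```

In practice you would do this either by hand in Blender's Shape Keys panel or with a short loop over the mesh's shape key list; the point is just that it is a simple one-to-one rename.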

More info on shapekeys/blendshapes can be found here


Thanks a lot for your answer. I understand it is not trivial, but at least there is a viable workflow that I can document in a video for my students. I will try to write your instructions up as detailed steps and possibly publish them here. I think many educators would find this a huge step forward for making simple historical videos in Vircadia without spending so much money on lip-syncing tools.
One of my students actually made some very nice videos along these lines in the past, like this one:
Piazza del Lago fra tradizione, Storia e Letteratura - YouTube (the voices belong to his secondary school students).
If I am not wrong he made these animations in Unity, but they could easily be ported to a friendlier Vircadia environment.


Dear Salahzar and Dear Aitolda,
thank you both so much for the advice. Special thanks to Salahzar: he is a great professional, always willing to help, and the materials he shares are a great stimulus.