I noticed that the standard avatars in Vircadia have a kind of lip-sync that works while talking, and this also works with ReadyPlayerMe avatars. They all seem to have a special skeleton and a set of poses for the teeth that make this lip-sync possible.
The metaverse-tool HiFi skeleton apparently does not do this: avatars built with it from Mixamo or MakeHuman don't appear to have moving lips.
Is there something we missed in the usage of metaverse-tool that would produce this lip/mouth movement when speaking, and that is usable by non-technical people like the students in the Vircadia class I'm currently holding? BTW: Vircadia has been very well received by my class. The 40 participants enjoyed running private servers on their own PCs and exploring the unlimited new educational possibilities.