iX Labs is a platform for generating skeletal and facial blendshape animation from text and audio.
By pairing a clean, high-quality mocap dataset with human annotations, it is now possible to train AI models to generate plausible skeletal animations from textual descriptions.
It is also possible to capture facial expressions and lip movement alongside audio, so you can generate emotive, expressive, lip-synced blendshapes directly from audio.
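For readers unfamiliar with the term, a facial blendshape animation is typically a sequence of timed frames, each mapping blendshape names to weights in [0, 1]. The sketch below is illustrative only: the names follow Apple's ARKit convention (`jawOpen`, `mouthSmileLeft`), and the exact schema iX Labs will use is an assumption, not a documented format.

```python
from dataclasses import dataclass

@dataclass
class BlendshapeFrame:
    """One frame of facial animation: a timestamp plus named weights in [0, 1]."""
    time: float                # seconds from clip start
    weights: dict[str, float]  # blendshape name -> weight

def sample(frames: list[BlendshapeFrame], t: float) -> dict[str, float]:
    """Linearly interpolate blendshape weights at time t between keyframes."""
    if t <= frames[0].time:
        return dict(frames[0].weights)
    if t >= frames[-1].time:
        return dict(frames[-1].weights)
    for a, b in zip(frames, frames[1:]):
        if a.time <= t <= b.time:
            alpha = (t - a.time) / (b.time - a.time)
            names = set(a.weights) | set(b.weights)
            return {
                n: (1 - alpha) * a.weights.get(n, 0.0) + alpha * b.weights.get(n, 0.0)
                for n in names
            }
    return dict(frames[-1].weights)

# Hypothetical output for one syllable: the jaw opens and closes over 0.2 s.
clip = [
    BlendshapeFrame(0.00, {"jawOpen": 0.0, "mouthSmileLeft": 0.1}),
    BlendshapeFrame(0.10, {"jawOpen": 0.8, "mouthSmileLeft": 0.1}),
    BlendshapeFrame(0.20, {"jawOpen": 0.0, "mouthSmileLeft": 0.1}),
]
```

A generated clip in this form can be resampled to any playback frame rate by calling `sample` at each render time, which is why a keyframe-plus-interpolation representation is a common choice for audio-driven animation.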
We do not currently plan to extend this to non-humanoid animations.
We are still in the process of collating a high-quality dataset, training models, and experimenting with how best to make this new platform available.
We intend to make this available as an add-on for Blender.