Select the folder containing the model3.json file and all its associated assets.
Map detected emotions to expressions. The highest-scoring emotion above its threshold triggers the mapped expression.
Click "Set", then press a key to assign the hotkey.
Make a neutral (-_-) face, then press Calibrate.
Enable lip sync, then hold a sustained vowel sound and click the button to calibrate.
Requires face tracking to be enabled first.
Draws the tracked hand skeletons on screen, overlaid on the model.