Create Ava: an intelligent, full-body, controllable human avatar that can see and hear, and is capable of natural spoken conversation with humans.
Issues Involved or Addressed
Current virtual personal assistants such as Alexa, Siri, Google Assistant, and Cortana most commonly follow a "voice-in-a-box" approach: unimodal voice I/O. In addition, interactions are limited to narrow-domain, single-turn dialogues: "What is the weather forecast for today?", "Play Easy on Me". The Ava digital human will be an intelligent, multimodal (voice + vision), full-body digital human avatar capable of multi-turn, open-domain interactions. The AVA Lab is creating the conversational AI technologies for Ava, including speech recognition and synthesis, natural language understanding and generation, and dialogue management. To complement the AVA Lab, this VIP will address the challenge of creating a full-body, fully controllable, high-fidelity human avatar to represent Ava. The work of this VIP will involve advancing the state of the art in deep learning methods to train Text-to-Face and Text-to-Body animation models.
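To make the Text-to-Face idea concrete, here is a minimal, hypothetical sketch of the kind of pipeline such a model replaces: text is mapped to coarse viseme (mouth-shape) labels, which are then timed as keyframes that could drive a face rig. All names, the grapheme-to-viseme table, and the fixed cadence are illustrative assumptions, not the AVA Lab's actual method; a real system would use a phonemizer and a learned animation model.

```python
# Hypothetical sketch: rule-based text -> viseme keyframes.
# A trained Text-to-Face model would replace both steps below.

# Toy grapheme-to-viseme table (assumption; real systems map phonemes,
# not characters, and cover the full phoneme inventory).
VISEME_TABLE = {
    "a": "AA", "e": "EH", "i": "IY", "o": "OW", "u": "UW",
    "m": "MBP", "b": "MBP", "p": "MBP",
    "f": "FV", "v": "FV",
}

def text_to_visemes(text, default="REST"):
    """Map each letter to a coarse viseme label (placeholder logic)."""
    return [VISEME_TABLE.get(ch.lower(), default) for ch in text if ch.isalpha()]

def visemes_to_keyframes(visemes, fps=30, visemes_per_second=8):
    """Assign each viseme a frame index on a fixed speaking cadence."""
    frames_per_viseme = fps / visemes_per_second
    return [(round(i * frames_per_viseme), v) for i, v in enumerate(visemes)]

keys = visemes_to_keyframes(text_to_visemes("mama"))
print(keys)  # [(0, 'MBP'), (4, 'AA'), (8, 'MBP'), (11, 'AA')]
```

The deep learning work in this VIP amounts to learning these mappings from data, with natural timing and full facial (and body) motion, rather than hand-coding tables like the one above.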
Methods and Technologies
- Machine Learning
Academic Majors of Interest
- Computing›Computer Science
- Computing›Human-Computer Interaction
- Design›Music Technology
- Engineering›Computer Engineering
- Engineering›Electrical Engineering
- Engineering›Machine Learning
Preferred Interests and Preparation
Software development skills (familiarity with Python); strong aptitude for learning advanced experimental methods; applied mathematics, statistical modeling, and machine learning.
Meeting Schedule & Location