AVA Digital Human

2022 ~ Present | TBD


Create Ava: an intelligent, full-body, controllable human avatar that can hear/see and is capable of natural, spoken conversations with humans.

Issues Involved or Addressed

Current virtual personal assistants such as Alexa, Siri, Google Assistant, and Cortana most commonly follow a “voice-in-a-box” approach: unimodal voice I/O. In addition, interactions are limited to narrow-domain, single-turn dialogues: “what is the weather forecast for today?”, “play Easy on Me”. The Ava digital human will be an intelligent, multimodal (voice + vision), full-body digital human avatar capable of multi-turn, open-domain interactions. The AVA Lab is creating the conversational AI technologies for Ava, including speech recognition/synthesis, natural language understanding/generation, and dialogue management. To complement the AVA Lab, this VIP will address the challenge of creating a full-body, fully controllable, high-fidelity human avatar to represent Ava. The work of this VIP will involve advancing the state of the art in deep learning methods to train Text-to-Face and Text-to-Body animation models.
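To make the division of labor concrete, the round-trip described above can be sketched as a pipeline: the AVA Lab's components handle the stages from audio in to response text out, and this VIP's contribution drives the avatar from that text. This is a minimal illustrative sketch only; every function name and data shape below is a hypothetical stand-in, not the actual AVA Lab interface.

```python
# Hypothetical sketch of one Ava conversational turn. All names and
# structures here are illustrative assumptions, not the real system.

def speech_to_text(audio):
    # Speech recognition: audio waveform -> transcript (stubbed).
    return "what is the weather forecast for today?"

def understand(transcript):
    # Natural language understanding: transcript -> intent + slots (stubbed).
    return {"intent": "get_weather", "slots": {"date": "today"}}

def manage_dialogue(state, frame):
    # Dialogue management: accumulate multi-turn state, choose an action.
    state.append(frame)
    return {"action": "inform_weather", "date": frame["slots"]["date"]}

def generate_response(action):
    # Natural language generation: system action -> response text.
    return f"Here is the forecast for {action['date']}."

def drive_avatar(text):
    # This VIP's focus: map response text to face/body animation parameters
    # (e.g., per-frame blendshape weights); stubbed as a frame count here.
    return {"text": text, "frames": len(text.split())}

def ava_turn(audio, state):
    # One full turn: hear -> understand -> decide -> speak -> animate.
    transcript = speech_to_text(audio)
    frame = understand(transcript)
    action = manage_dialogue(state, frame)
    text = generate_response(action)
    return drive_avatar(text)

state = []
out = ava_turn(b"...", state)
print(out["text"])  # -> "Here is the forecast for today."
```

Keeping dialogue state outside the turn function (here, the `state` list) is what allows multi-turn interaction rather than the single-turn exchanges that limit current assistants.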

Methods and Technologies

  • Machine Learning (deep learning, reinforcement learning)
  • Computer vision
  • Speech recognition (speech-to-text)
  • Speech synthesis (text-to-speech)
  • Augmented/Virtual Reality
  • Avatar creation
  • Text-to-Body animation
  • Text-to-Face animation

Academic Majors of Interest

  • Computing: Computer Science
  • Computing: Human-Computer Interaction
  • Design: Music Technology
  • Engineering: Computer Engineering
  • Engineering: Electrical Engineering
  • Engineering: Machine Learning
  • Sciences: Neuroscience

Preferred Interests and Preparation

Software development skills (familiarity with Python); a strong aptitude for learning advanced experimental methods; applied mathematics, statistical modeling, and machine learning.

Meeting Schedule & Location

9:30 AM – 10:20 AM
Meeting Day

Team Advisors

Dr. Larry Heck
  • Machine Learning

Partner(s) and Sponsor(s)