ABOUT

Dr. Sanchita Ghose
Assistant Professor, AI-LAMP Director
Key Contributions:
- Leads multimodal learning research and human-computer interaction strategy
- Guides deep learning for sound synthesis, video processing, and cross-modal retrieval
- Shapes data collection, labeling standards, and evaluation protocols
- Advises student researchers across modeling and deployment

Aaron Singh
Graduate Research Assistant
Key Contributions:
- Built core architecture for emotion recognition and multimodal pipelines
- Designed model training, inference, and real-time visualization flows
- Integrated webcam/audio ingestion and spline-driven UI motion
- Tuned performance, managed deployment, and maintained the live demo (see the sketch below)
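
A minimal sketch of that capture-to-inference loop, assuming OpenCV (cv2) for webcam access; predict_emotion is a hypothetical stand-in for the project's trained model, not the actual implementation:

    import cv2
    import numpy as np

    def predict_emotion(face: np.ndarray) -> str:
        # Hypothetical placeholder: the real system would run a trained
        # multimodal classifier on the normalized face crop.
        return "neutral"

    def run_demo() -> None:
        cap = cv2.VideoCapture(0)  # default webcam
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                face = cv2.resize(frame, (224, 224))  # input size is an assumption
                label = predict_emotion(face)
                # Overlay the prediction for real-time visualization.
                cv2.putText(frame, label, (16, 32),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
                cv2.imshow("Aurora demo", frame)
                if cv2.waitKey(1) & 0xFF == ord("q"):
                    break
        finally:
            cap.release()
            cv2.destroyAllWindows()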

Dr. Hamid Mahmoodi
Professor & Graduate Program Coordinator, NeCRL
Key Contributions:
- Brings expertise in low-power, high-performance VLSI and nanoelectronics
- Architects efficient compute paths for real-time inference
- Mentors on hardware-aware optimization and system integration
- Supports reliability, validation, and research direction

Project Information

Project: Aurora · Emotion AI Platform
Domain: Multimodal Emotion & Affective Computing
Focus Areas: Audio, visual, and text perception; real-time inference; UX visualization
Mission: Build emotionally aware AI experiences
Vision: Responsive, privacy-first, on-device empathetic agents
Deployment: Real-time web demos and edge-friendly prototypes

Project Methodology

Our approach to shipping responsive, multimodal emotion AI:
1. Foundation: Define the multimodal architecture and experience goals
2. Data & Ingestion: Capture webcam/microphone streams, then preprocess and augment them (see the augmentation sketch after this list)
3. Model Development: Train and optimize the emotion and engagement models
4. Integration: Build the real-time inference pipeline plus UI motion and visualization
5. Evaluation: Run live testing, human feedback loops, and latency profiling (see the profiling sketch below)
6. Refinement: Continual tuning, on-device optimization, and release hardening
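
A minimal sketch of the kind of frame augmentation step 2 refers to, assuming NumPy arrays for frames; the flip probability and brightness range are illustrative assumptions, not the project's actual settings:

    import numpy as np

    def augment_frame(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        # Horizontal flip half the time; facial-emotion labels are
        # unaffected by mirroring. (Assumed augmentation choice.)
        if rng.random() < 0.5:
            frame = frame[:, ::-1]
        # Mild brightness jitter to mimic varying webcam lighting.
        gain = rng.uniform(0.8, 1.2)
        return np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)

    rng = np.random.default_rng(0)
    sample = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
    augmented = augment_frame(sample, rng)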
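For step 5, a hedged sketch of per-frame latency profiling using only the standard library and NumPy; infer is a hypothetical callable standing in for the deployed model:

    import time
    import numpy as np

    def profile_latency(infer, frames, warmup: int = 10) -> None:
        # Warm-up pass so one-time costs (caches, lazy init) don't skew results.
        for f in frames[:warmup]:
            infer(f)
        latencies_ms = []
        for f in frames:
            t0 = time.perf_counter()
            infer(f)
            latencies_ms.append((time.perf_counter() - t0) * 1000.0)
        p50, p95 = np.percentile(latencies_ms, [50, 95])
        print(f"p50 {p50:.1f} ms | p95 {p95:.1f} ms over {len(frames)} frames")

Reporting p50 alongside p95 keeps occasional slow frames visible, which matters for the live-demo experience more than the average does.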