Ashutosh Chaubey

About Me

I am a CS PhD student at the Institute for Creative Technologies, University of Southern California, where I am advised by Prof. Mohammad Soleymani at the Intelligent Human Perception Lab. I graduated as a Bronze Medallist from the 2021 batch of the Indian Institute of Technology Roorkee.

Prior to this, I was a Founding Research Engineer at Anoki AI, where I worked on multimodal content understanding and retrieval. I have also worked at LG Ad Solutions on speaker recognition, audio-based automatic content recognition, and voice cloning. In the past, I interned at Adobe Research, where I worked with Dr. Sumit Shekhar on active learning for content labeling in documents, and at the Video Analytics Lab, IISc Bengaluru, where I worked with Prof. R. Venkatesh Babu on human pose estimation from a single RGB image. During my undergraduate studies, I worked with Prof. R. Balasubramanian on automatic evaluation of machine-synthesized speech.

I am always eager to collaborate on research with people in both academia and industry. Please reach out to achaubey@usc.edu to discuss potential collaborations.

Currently, I am looking for Research/Applied Scientist internship positions for Summer 2025. Please reach out if you have open positions.

Masters/Undergrad Students

If you are a student and would like to discuss my papers or how to apply to PhD programs in the US, please email me at achaubey at usc dot edu.

For students who wish to join our lab, please check our lab's open positions.

Areas of Interest

Multimodal understanding and generation of human affective behaviour (emotions, expressions, etc.)
Speech and audio processing

News

Research & Publications

ContextIQ: A Multimodal Expert-Based Video Retrieval System for Contextual Advertising

Ashutosh Chaubey, Anoubhav Agarwaal, Sartaki Sinha Roy, Aayush Agrawal, Susmita Ghose

IEEE/CVF WACV, 2025

Preprint

Proposed ContextIQ, a novel video retrieval framework for contextual advertising that uses a mixture of multimodal experts.

Meta-Learning Framework for End-to-End Imposter Identification in Unseen Speaker Recognition

Ashutosh Chaubey, Sparsh Sinha, Susmita Ghose

IEEE ASRU, 2023

Paper / Poster / Cite

Proposed two novel approaches for imposter identification in unseen speaker recognition, including speaker-specific thresholding and a meta-learning approach.

Improved Relation Networks for End-to-End Speaker Verification and Identification

Ashutosh Chaubey, Sparsh Sinha, Susmita Ghose

Interspeech, 2022

Paper / Poster / Cite

Enhanced speaker recognition using relation networks inspired by computer vision, with global supervision and faster training.

OPAD: An Optimized Policy-based Active Learning Framework for Document Content Analysis

Sumit Shekhar, Bhanu Prakash Reddy Guda, Ashutosh Chaubey, Ishan Jindal, Avneet Jain

CVPR Workshops, 2022

Paper / Patent / Cite

A reinforcement policy-based active learning approach for document content labeling tasks such as object detection and named entity recognition.

Universal Adversarial Perturbations: A Survey

Ashutosh Chaubey*, Nikhil Agrawal*, Kavya Barnwal, Keerat K. Guliani, Pramod Mehta

arXiv, 2020

Paper / Cite

A comprehensive survey on universal adversarial perturbations, covering both attacks and defenses.

A Generative Adversarial Network Based Ensemble Technique for Automatic Evaluation of Machine Synthesized Speech

Ashutosh Chaubey*, Jaynil Jaiswal*, Sasi Kiran Reddy Bhimvarapu, Shashank Kashyap, Puneet Kumar, Balasubramanian Raman, Partha Pratim Roy

ACPR, 2019

Paper / Cite

Proposed a technique leveraging the discriminator of a GAN-based TTS model for automatic evaluation of machine-synthesized speech.

Education

University of Southern California – PhD, Computer Science (2024 - Present)

Graduate Researcher – Intelligent Human Perception Lab, Institute for Creative Technologies


Indian Institute of Technology Roorkee – BS, Computer Science (2017 - 2021), GPA: 9.718/10

Chair - ACM IIT Roorkee Chapter | Co-President - Vision and Language Group