UCL CSML Talk

Abstract

The Embodied Question Answering (EQA) and Interactive Question Answering (IQA) tasks were recently introduced as a means of studying the capabilities of agents in rich, realistic 3D environments, where success requires both navigation and reasoning. Each of these skills typically demands a different approach, which must nevertheless integrate smoothly with the rest of the agent's system. However, initial approaches either achieve potentially weaker performance than language-only models or rely on additional hand-engineered steps. This talk provides an overview of existing work on this thread and describes in more detail our recent study (published at BMVC 2019; spotlight talk at ViGIL@NeurIPS 2019), VideoNavQA: Bridging the Gap between Visual and Embodied Question Answering. We investigate the feasibility of EQA-type tasks by building a novel benchmark containing pairs of questions and videos generated in the House3D environment. While removing the navigation and action-selection requirements of EQA, we increase the difficulty of the visual reasoning component via a much larger question space, tackling the kind of complex reasoning questions that make QA tasks challenging. By designing and evaluating several VQA-style models on the dataset, we establish a novel way of evaluating EQA feasibility given existing methods, while highlighting the difficulty of the problem even in this most ideal setting.

Date
Jan 10, 2020 1:00 PM — 2:00 PM
Location
Malet Place Engineering Building 1.03
2 Malet Pl, Bloomsbury, London, WC1E 7JE, United Kingdom
Dr Cătălina Cangea
Senior Research Scientist

Senior Research Scientist at Google DeepMind, with a PhD in ML from the University of Cambridge, and inhaler of music :) Focused on generative music models, finding signals in data, and human evaluation. Motivated by contributing ML-based knowledge and improvements to real-world systems!