NILLI
Organizers: Prasanna Parthasarathi, Koustuv Sinha, Khyathi Raghavi Chandu, Chinnadhurai Sankar, Adina Williams, Sarath Chandar, Marc-Alexandre Côté, Joelle Pineau
Collaborative dialogue with automated systems has become ubiquitous: tasks from setting an alarm to planning one's day are now routinely carried out through language interactions. Recent advances in dialogue research, embodied learning, and the use of language as a mode of instruction for learning agents open up domains in which an agent starts with only primitive task knowledge and follows a continual interact-and-learn procedure, systematically acquiring knowledge through verbal and non-verbal interactions. Building such interactive learning agents enables richer behaviours: taking instructions as a pragmatic listener, asking for more samples, generating rationales for predictions, interacting to interpret learning dynamics, or even identifying and modifying new tasks, all of which can contribute to effective learning-to-learn mechanisms. Through this verbal/non-verbal interactive medium, the interdisciplinary field unifies research paradigms from lifelong learning, natural language processing, embodied learning, reinforcement learning, robot learning, and multi-modal learning towards building interactive and interpretable AI.