Abstract

One important challenge for cognitive systems research is to develop an integrated architecture that can enable effective, natural human-robot interaction in open worlds, where new concepts, entities, and actions can be introduced through natural language during task performance. In this paper, we claim that to allow for such open-world tasking in natural language, all components in the robotic architecture that process and execute human instructions require mechanisms for learning new information and applying it immediately. We focus on two aspects of open worlds (new goals and new objects) and describe the architectural machinery required to handle them: from representations and processing schemes for human utterances to open-world quantified goals that involve novel objects introduced during task execution. We then present a proof-of-concept demonstration of these mechanisms implemented in the DIARC architecture on an autonomous robot, and show in simulated scenarios why mechanisms for open-world tasking are necessary.