June 16, 2022
Room "Jean-Claude Samin", Building Stevin, floor -1
Robots require a generic action representation in order to recognize, learn, and imitate observed tasks without any human intervention. Defining such a representation is a challenging problem due to the high inter-individual variability that emerges during the execution of actions. Conventional methods approach this problem either by considering continuous trajectory profiles or by employing predefined symbolic action knowledge. The main challenge, however, remains linking perceived continuous sensory signals to discrete symbolic object or action concepts.
In this talk, I will promote a new holistic view on manipulation semantics, which combines the perception and execution of manipulation actions in one unique framework, the so-called "Semantic Event Chain" (SEC). The SEC concept is an implicit spatio-temporal formulation that encodes actions by coupling the observed effect with the exhibited roles of manipulated objects. In the first part of the talk, I will explain how such semantic action encoding can allow robots to link continuous visual sensory signals (e.g., image sequences) to their symbolic descriptions (e.g., action primitives). To highlight the scalability of manipulation semantics, I will introduce various applications of SECs in learning object affordances, coupling language and vision, and memorizing episodic experiences. In the second part, I will discuss employing scene semantics to solve domain translation problems in the context of autonomous driving.
Eren Aksoy is an Associate Professor (Docent) at Halmstad University in Sweden. He obtained his Ph.D. degree in computer science from the University of Göttingen, Germany, in 2012. During his Ph.D. studies, he invented the concept of Semantic Event Chains to encode, learn, and execute human manipulation actions in the context of robot imitation learning. His framework has been used as a technical robot perception-action interface in many EU projects (e.g., IntellACT, Xperience, ACAT). Before moving to Sweden, he spent three years as a postdoctoral research fellow at the Karlsruhe Institute of Technology in the H2T group of Prof. Dr. Tamim Asfour. He has also been a visiting scholar at Volvo GTT and Zenseact AB in Sweden, working on AI-based perception algorithms for autonomous vehicles. He serves as an Associate Editor for several high-ranking robotics journals and conferences (RA-L, IROS, Humanoids, etc.). His research interests include action semantics, computer vision, AI, and cognitive robotics. He has been actively working on creating semantic representations of visual experiences to achieve better environment and action understanding for autonomous systems such as robots and unmanned vehicles. He is the main coordinator of the Horizon Europe project ROADVIEW, which focuses on robust automated driving in extreme weather.