IBM Research Challenge

Video: IBM Research scientist Martin Rufli explains cognitive robotics (in German only).

The physical environment itself is the ultimate user interface between man and machine.

—Martin Rufli, IBM scientist


We aim to enable IBM’s cognitive computer Watson to perceive and reason, at a semantic level, about the unstructured physical world in which it is embedded. This will facilitate services in which humans and cognitive systems interact and collaborate in real time.

We refer to this emerging area of robotics as “spatial cognition”. Applications of spatial cognition range from service robotics in domestic, commercial and industrial settings to mixed reality and cognitive IoT.

Read on to get a glimpse of the many core methods and use-case scenarios we are working on, for which spatial cognition represents a critical enabler.

Research focus areas

Our research focuses on a range of core methods that jointly enable novel spatial cognition applications.

The application space is represented by a range of modules, each implementing a specific functionality along the sense–think–act cycle of artificial intelligence.

Containerized, these functional modules become reliably deployable on diverse infrastructures with minimal dependencies. This makes them readily consumable as services.
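
As a rough sketch of this idea, the following Python snippet wraps a placeholder pose-estimation function behind a minimal HTTP endpoint using only the standard library. It is illustrative only and not part of our actual framework; the function name, port and JSON interface are hypothetical.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical functional module: returns a (dummy) pose estimate.
# In a containerized deployment, a process like this would be the
# container's single entry point, with its dependencies baked in.
def estimate_pose(measurement):
    # Placeholder logic; a real module would run tracking or mapping here.
    return {"x": measurement.get("x", 0.0),
            "y": measurement.get("y", 0.0),
            "status": "ok"}

class ModuleHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        measurement = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(estimate_pose(measurement)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Exposes the module as a service on port 8080 (port chosen arbitrarily).
    HTTPServer(("0.0.0.0", 8080), ModuleHandler).serve_forever()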

The image at right illustrates our spatial cognition framework, spanning the full stack between hardware and service applications.

Current focus areas comprise multi-sensor tracking, geometric 3D reconstruction and semantic knowledge representation.

Spatial cognition framework

Multi-sensor tracking

Tracking robot motion via visual and/or inertial cues.

Tracking describes the process of estimating the egomotion of an agent such as a vehicle, human or robot, for example by fusing information from visual and inertial sensors attached to it. Over time, this process is inherently affected by drift.

Our Vibe implementation is a state-of-the-art visual-inertial tracking and mapping system targeted at robotics, augmented reality and IoT applications.
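
To make the drift problem concrete, the following self-contained Python sketch (an assumed illustration, unrelated to Vibe's actual algorithms) integrates biased, noisy odometry and shows how occasional absolute "visual" fixes keep the error bounded, whereas pure dead reckoning drifts without bound.

import random

# Illustrative only: dead reckoning from biased, noisy odometry drifts
# without bound, while occasional absolute "visual" fixes keep the
# estimate bounded. True motion: constant 1 m/s along one axis.
random.seed(0)
dt, true_v = 0.1, 1.0
true_x = est_drift_only = est_fused = 0.0

for step in range(1, 601):                          # simulate 60 seconds
    true_x += true_v * dt
    measured_v = true_v + random.gauss(0.05, 0.1)   # biased, noisy velocity
    est_drift_only += measured_v * dt               # pure dead reckoning
    est_fused += measured_v * dt
    if step % 100 == 0:                             # every 10 s: a visual fix
        visual_x = true_x + random.gauss(0.0, 0.02)
        est_fused += 0.8 * (visual_x - est_fused)   # simple correction gain
        print(f"t={step * dt:4.1f} s  drift-only error={est_drift_only - true_x:+.2f} m"
              f"  fused error={est_fused - true_x:+.2f} m")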

Geometric 3D reconstruction

Geometric 3D reconstruction encompasses techniques for merging individual exteroceptive sensor scans (mainly camera- or laser-based) into a consistent 2D or 3D representation.
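
As an illustration of the underlying idea, the sketch below implements a toy 2-D ICP-style alignment with NumPy (a generic textbook method, not our production pipeline): it iteratively matches nearest neighbours and then solves the closed-form rigid fit to register one scan onto another.

import numpy as np

# Toy 2-D ICP-style registration (generic textbook method, not our pipeline):
# iteratively match nearest neighbours, then solve the closed-form rigid fit.
def icp_2d(scan, reference, iterations=20):
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        moved = scan @ R.T + t
        # Brute-force nearest-neighbour correspondences.
        d2 = ((moved[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
        matched = reference[d2.argmin(axis=1)]
        # Kabsch/Umeyama closed-form rigid fit between the matched sets.
        mu_s, mu_m = moved.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (matched - mu_m))
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:   # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Usage: align a rotated, shifted copy of a synthetic "scan" to its reference.
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
reference = np.random.default_rng(0).uniform(-1.0, 1.0, (200, 2))
scan = (reference - np.array([0.2, -0.1])) @ R_true
R_est, t_est = icp_2d(scan, reference)
print("recovered rotation (deg):", np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])))

Chaining such pairwise alignments accumulates error, which is why globally consistent maps additionally require mechanisms such as loop closure.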

Case studies

Industry 4.0

Within the context of Industry 4.0, manufacturing is moving away from monolithic assembly lines and towards flexible, collaborative assembly pods suitable for (mass-)customized fabrication.

A key aspect of this scenario is the seamless yet robust programming of individual work steps (for example, by demonstration) within an uncertainty-dominated environment of human design.

Our team is contributing its know-how to numerous industry-driven projects, ranging from dataset/knowledge ingestion and maintenance to the enablement of seamless, context-aware conversation between operator and machine.

EU project "UP-Drive"

Automated driving

We are part of several publicly funded research initiatives in the context of connected and automated driving, such as the Automated Urban Parking and Driving (UP-Drive) and Autopilot projects.

Our team contributes to the representations and mechanisms for efficient and cost-effective long-term data management across devices, as well as to scene understanding, from the detection of semantic features and the classification of objects all the way to behavior analysis and intent prediction.

Ask the experts

Martin Rufli
IBM Research scientist

Ralf Kaestner
IBM Research scientist

Alexander Velizhev
IBM Research scientist