Event cameras, such as the Dynamic Vision Sensor (DVS), are biologically inspired sensors that present a new paradigm for how dynamic visual information is acquired and processed. Each pixel of an event camera operates independently of the others, continuously monitoring its intensity level and transmitting information only when the brightness change exceeds a given threshold ("events"), with microsecond resolution. Hence, visual information is no longer acquired based on an external clock (e.g., a global shutter); instead, each pixel has its own sampling rate, driven by the visual input. This different representation of visual information offers significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and latency on the order of microseconds. However, because the output is a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied directly; new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. This talk will focus on the research carried out at the Robotics and Perception Group (University of Zurich) on the development of such algorithms for ego-motion estimation and scene reconstruction, so that a robot equipped with an event camera can build a map of the scene and infer its pose with respect to it.
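To make the sensing model concrete, the following is a minimal sketch (not the DVS hardware pipeline, and not any specific library) of how an event stream can be simulated from intensity frames: each pixel keeps a reference log-intensity and emits an event `(t, x, y, polarity)` whenever the log-intensity change at that pixel crosses a contrast threshold. The function name, threshold value, and event tuple layout are illustrative assumptions.

```python
import numpy as np

def generate_events(frames, timestamps, threshold=0.2):
    """Simulate an event camera from a sequence of intensity frames.

    Each pixel tracks its own reference log-intensity; whenever the
    change exceeds the contrast threshold, it emits an event
    (t, x, y, polarity) independently of any global frame clock.
    This is an illustrative model, not the actual DVS circuitry.
    """
    eps = 1e-6  # avoid log(0)
    ref = np.log(np.asarray(frames[0], dtype=float) + eps)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(np.asarray(frame, dtype=float) + eps)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), polarity))
            ref[y, x] = log_i[y, x]  # reset reference at fired pixels
    return events

# A brightness step at a single pixel yields one positive event there,
# while all unchanged pixels stay silent.
f0 = np.full((4, 4), 0.5)
f1 = f0.copy()
f1[2, 1] = 1.0
evts = generate_events([f0, f1], [0.0, 1e-6])
```

In this toy example only the pixel at (x=1, y=2) fires, with positive polarity, which illustrates why the output is a sparse, asynchronous stream rather than dense frames.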