EPSC Planetary Robotics and Vision Processing Workshop

Monday, October 3, 2011

The Planetary Robotics and Vision Processing Workshop took place at the European Planetary Science Congress, held October 3-7, 2011 in Nantes, France. The theme for this session was: “Planetary Robotics and Vision Processing for Future Planetary Exploration”. With the success of NASA’s Mars Exploration Rovers and Phoenix lander, and with several planetary exploration missions in development or planned, this is an exciting and challenging time for Europe as it embarks upon its own plans and aspirations for planetary exploration.

A planetary robot can be regarded as an integral part of the ‘planetary science apparatus’, both as an instrument in its own right (e.g. using wheel motion and soil mechanics for physical investigations) and as a deployment device for instruments and surface/sub-surface sample acquisition. A major key to science sample selection is vision processing. Planetary rover imaging instruments, such as panoramic and high-resolution cameras, and the successful on-board and ground-based processing of their data are essential if mission science targets are to be realised.

The session focused on the advances required to address challenges such as: autonomous localisation and navigation; real-time characterisation of terrain and obstacles; autonomous monitoring of, and response to, system health and safety; robustness and the ability to function in the presence of faults or anomalous, unexpected conditions; autonomous and ground-based camera image processing; and a shift from a human directing the minute-to-minute surface operations to the planetary robot performing this direction autonomously.

Oral presentations and posters were solicited presenting current and future research into planetary robotics and vision processing for planetary exploration. Papers were especially welcome in the following areas:
– planetary robotics and vision processing mission experiences;
– novel sensors and imaging devices;
– in-flight and on-surface robotic and imaging instrument calibration methods;
– vision data fusion;
– planetary environment and terrain modelling techniques;
– simulation and image data visualisation methods;
– localisation and navigation;
– autonomous control and vision processing architectures;
– autonomous sample acquisition;
– novel locomotion methods, including aerobots, submersible and sub-surface robots;
– analogue field trials addressing the above-mentioned aspects.
