Stephen James
Former PI of the Robot Learning Lab, London.

I was previously the principal investigator of the Robot Learning Lab in London, UK, where I led a large concentration of the world's top robot-learning talent. My interests lie in investigating how to get robots to learn practical skills and behaviours in a data-driven manner, without explicitly needing to hard-code steps for each new task. My work lies at the intersection of reinforcement learning, imitation learning, and unsupervised representation learning. Previously, I was a postdoctoral fellow at UC Berkeley, advised by Pieter Abbeel, and prior to that, I completed my PhD at Imperial College London under the supervision of Andrew Davison. I serve as an area chair for NeurIPS, CVPR, ICML, and ICLR. For a formal bio, please see here.
selected publications
- Green Screen Augmentation Enables Scene Generalisation in Robotic Manipulation. arXiv preprint arXiv:2407.07868, 2024.
- Continuous Control with Coarse-to-fine Reinforcement Learning. Conference on Robot Learning, 2024.
- Render and Diffuse: Aligning Image and Action Spaces for Diffusion-based Behaviour Cloning. Robotics: Science and Systems, 2024.
- Coarse-to-Fine Q-attention: Efficient Learning for Visual Robotic Manipulation via Discretisation. Conference on Computer Vision and Pattern Recognition, 2022.
- RLBench: The Robot Learning Benchmark & Learning Environment. IEEE Robotics and Automation Letters, 2020.
- Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks. Conference on Computer Vision and Pattern Recognition, 2019.
- Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task. Conference on Robot Learning, 2017.