

SARAMIS: Simulation Assets for Robotic Assisted and Minimally Invasive Surgery

Nina Montana-Brown · Shaheer U. Saeed · Ahmed Abdulaal · Thomas Dowrick · Yakup Kilic · Sophie Wilkinson · Jack Gao · Meghavi Mashar · Chloe He · Alkisti Stavropoulou · Emma Thomson · Zachary MC Baum · Simone Foti · Brian Davidson · Yipeng Hu · Matthew Clarkson

Great Hall & Hall B1+B2 (level 1) #433


Minimally-invasive surgery (MIS) and robot-assisted minimally-invasive surgery (RAMIS) offer well-documented benefits to patients, such as reduced post-operative pain and shorter hospital stays. However, the automation of MIS and RAMIS through the use of AI has been slow due to difficulties in data acquisition and curation, partially caused by the ethical considerations of training, testing and deploying AI models in medical environments. We introduce SARAMIS, the first large-scale dataset of anatomically derived 3D rendering assets of the human abdominal anatomy. Using previously existing, open-source CT datasets of the human anatomy, we derive novel 3D meshes, tetrahedral volumes, textures and diffuse maps for over 104 different anatomical targets in the human body, representing the largest open-source dataset of 3D rendering assets for synthetic simulation of vision tasks in MIS and RAMIS, and increasing the number of openly available 3D meshes in the literature by three orders of magnitude. We supplement our dataset with a series of GPU-enabled rendering environments, which can be used to generate datasets for realistic MIS/RAMIS tasks. Finally, we present an example of the use of SARAMIS assets for an autonomous navigation task in colonoscopy from CT abdomen-pelvis scans, for the first time in the literature. SARAMIS is made publicly available, with assets released under a CC-BY-NC-SA license.
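The dataset's core assets are 3D surface meshes, which are commonly distributed in simple text formats such as Wavefront OBJ. As a minimal sketch of how such a mesh asset could be consumed, the snippet below parses vertices and faces from OBJ text; the format and file layout shown are illustrative assumptions, not a description of SARAMIS's actual asset structure.

```python
# Minimal sketch: parsing vertex positions and faces from a Wavefront OBJ
# mesh, a common interchange format for 3D rendering assets.
# NOTE: this is an illustrative example; SARAMIS's actual file formats
# and naming conventions may differ.

def parse_obj(text):
    """Return (vertices, faces) from OBJ text.

    vertices: list of (x, y, z) float tuples
    faces: list of 0-based vertex index tuples
    """
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":  # vertex position: "v x y z"
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":  # face: OBJ indices are 1-based, may carry /vt/vn
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# A single triangle, inlined in place of a real asset file.
sample = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
verts, faces = parse_obj(sample)
print(len(verts), faces)  # 3 [(0, 1, 2)]
```

In practice a dedicated geometry library would handle normals, texture coordinates and tetrahedral volumes; the point here is only that the assets are plain, inspectable geometry data.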
