Image-driven virtual simulation of arthroscopy
Main Authors:
Other Authors:
Format: Article
Language: English
Published: 2013
Subjects:
Online Access: https://hdl.handle.net/10356/84487 http://hdl.handle.net/10220/11697
Institution: Nanyang Technological University
Summary: In recent years, minimally invasive arthroscopic surgery has replaced a number of conventional open orthopedic procedures on joints. While this brings several advantages for the patient, surgeons must learn very different skills, since the surgery is performed with special miniature pencil-like tools and cameras inserted through small incisions while the surgical field is observed on a video monitor. Virtual reality simulation has therefore become an alternative to traditional surgical training based on the centuries-old apprentice–master model, which involves either real patients or increasingly difficult-to-procure cadavers. Normally, 3D simulation of the virtual surgical field requires significant effort from software developers and yet is not always photorealistic. In contrast, we propose to use real arthroscopic images augmented with 3D object models for photorealistic visualization of, and haptic interaction with, the surgical field. The proposed technique allows the joint cavity displayed on the video monitor to be felt as real 3D objects rather than mere images, while various surgical procedures, such as meniscectomy, are simulated in real time. In the preprocessing stage of the proposed approach, the arthroscopic images are stitched into panoramas and augmented with implicitly defined object models representing deformable menisci. In the simulation loop, depth information from the mixed scene is used for haptic rendering. The scene depth map and visual display are re-evaluated only when the scene is modified.
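To illustrate the lazy re-evaluation described in the summary, the following is a minimal Python sketch, not taken from the paper: the class `MixedScene`, its methods, and `haptic_force` are hypothetical names, and the depth evaluation is only a placeholder for rendering the implicit meniscus models over the stitched panorama. It shows the cached depth map being recomputed only after the scene is modified, while a simple penalty force for haptic rendering is read from that cache.

```python
import numpy as np


class MixedScene:
    """Illustrative stand-in for the panorama augmented with implicit object
    models; the class and method names are hypothetical, not from the paper."""

    def __init__(self, width=640, height=480):
        self.width, self.height = width, height
        self._depth = None
        self.modified = True          # force an initial depth evaluation

    def cut_meniscus(self, region):
        """Simulate a resection that alters the implicit meniscus model."""
        # ... update the implicit surface here (omitted in this sketch) ...
        self.modified = True          # mark the cached depth map as stale

    def _evaluate_depth(self):
        """Placeholder: render the implicit surfaces over the stitched
        panorama and read back a per-pixel depth buffer."""
        return np.full((self.height, self.width), 1.0, dtype=np.float32)

    def depth_map(self):
        """Lazy re-evaluation: recompute only when the scene was modified."""
        if self.modified or self._depth is None:
            self._depth = self._evaluate_depth()
            self.modified = False
        return self._depth


def haptic_force(scene, tool_px, tool_depth, stiffness=300.0):
    """Penalty force along the view axis, computed from the cached depth map."""
    surface_depth = scene.depth_map()[tool_px[1], tool_px[0]]
    penetration = tool_depth - surface_depth   # > 0 once the tool passes the surface
    return stiffness * max(penetration, 0.0)   # push the tool back toward the camera


# The haptic loop can run at a high rate against the cached depth map,
# while the expensive re-evaluation happens only after a modification.
scene = MixedScene()
print(haptic_force(scene, tool_px=(320, 240), tool_depth=1.02))   # contact force
scene.cut_meniscus(region=None)                                   # invalidates the cache
print(haptic_force(scene, tool_px=(320, 240), tool_depth=0.98))   # no contact
```

In this sketch the depth map plays the role of a proxy geometry for haptic rendering, which is consistent with the summary's statement that the depth map and visual display are re-evaluated only when the scene changes; the actual force model and rendering pipeline in the paper may differ.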