The ARTMA Virtual Patient® System was the first system worldwide to introduce
augmented reality in medicine for the visualization of virtual anatomical structures
in endoscopic surgery (Lit.1). Interventional Video Tomography (IVT)
is a proprietary imaging modality invented and developed by Artma (Lit.2,
PCT patent). Virtual, computer-generated structures
are fused with the endoscopic video image in real time.
We have now extended this concept (Lit.3) to augmented reality remote-guided surgery. Recent developments in videoconferencing technology make it possible to broadcast video data over a network. Current medical concepts for remote stereotactic navigation share a common limitation: at the local operating theater, a certain degree of technical knowledge is necessary to correlate the patient's anatomical structures with the operating field and the 3D digitizer.
Our technology overcomes these limitations. The patient-image coordinate transformation is based on the IVT data set, without the use of a 3D digitizing probe.
The virtual representation of any surgical instrument tracked with 3D sensors is defined in the video overlay, independent of the physical sensor attachment. The only input the system needs to visualize the stereotactic navigation data is live video with synchronously recorded 3D sensor data.
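As a concrete illustration, this combined input, video frames paired with synchronously recorded digitizer readings, might be modeled as follows. This is a minimal sketch; the field names and sensor labels (`camera`, `patient`) are hypothetical and not taken from the ARTMA system:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class IVTFrame:
    """One entry of an IVT sequence: a video view plus the 3D digitizer
    readings recorded synchronously with it. Names are illustrative."""
    timestamp: float                      # acquisition time in seconds
    image: np.ndarray                     # video frame, e.g. (H, W, 3) uint8
    sensor_poses: dict                    # sensor label -> 4x4 homogeneous pose


def is_calibration_ready(sequence, required=("camera", "patient")):
    """A sequence can initialize the overlay only if every frame carries
    synchronized poses for all attached sensors."""
    return all(all(s in f.sensor_poses for s in required) for f in sequence)
```

Representing each frame this way makes the key property explicit: video and digitizer data are tied together per frame, so no separate registration step is needed at the remote site.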
This data is acquired at the local operating theater and processed in our standard surgical navigation system (Lit.4). The IVT data set is simultaneously transmitted over the network.
Therefore the steps needed to correlate the CT coordinate system with the IVT data set, which represents the real patient, can be performed at any remote location where this data is accessible on the network.
Prior to surgery, a short IVT video sequence is captured with all 3D sensors already securely attached. This IVT sequence contains video views of the patient's anatomical region from many different perspectives, together with synchronously recorded 3D digitizer data. This is the only information needed to calibrate the system and initialize the video overlay of anatomical structures.
An anatomical marker identified with a cursor in the CT is also identified in any of the endoscopic video images. After this step has been repeated at least six times, a direct linear transformation computes the original camera parameters (Lit.5). The backprojection is determined by the position of the endoscope camera's imaging plane relative to the anatomical structure tracked by the 3D sensor.
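The calibration step is the classical direct linear transformation: six or more CT-to-image marker correspondences determine the 3x4 projection matrix up to scale. A minimal sketch in Python with NumPy, assuming noiseless correspondences; this illustrates the standard DLT, not the ARTMA implementation itself:

```python
import numpy as np


def dlt_camera_matrix(points_3d, points_2d):
    """Estimate a 3x4 camera projection matrix from >= 6 point
    correspondences with the direct linear transformation (DLT).

    points_3d: (N, 3) marker positions, e.g. in CT coordinates.
    points_2d: (N, 2) corresponding pixel positions in the video image.
    """
    assert len(points_3d) >= 6, "DLT needs at least six correspondences"
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence contributes two linear equations in the
        # twelve entries of the projection matrix.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector of A with the smallest
    # singular value (the null vector for noiseless data).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    return P / P[-1, -1]


def project(P, point_3d):
    """Backproject a 3D point into image coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```

With real marker picks the correspondences are noisy, so the same null-vector solution becomes a least-squares estimate; normalizing the input coordinates before building `A` is the usual refinement.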
Because the relative change of the projection parameters is stored, by means of the 3D sensors, for every view in the IVT data set, the backprojection is valid for the complete IVT sequence. These spatial relations are stored in a document. As long as the position of the imaging plane relative to the anatomical structure can be determined, the backprojection is independent of the imaging modality. Additional intraoperative imaging devices (ultrasound, C-arm, microscope) equipped with a 3D sensor can therefore be integrated to visualize and define structures in volume imaging data, projective imaging data, and live video images simultaneously.
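Under the assumption that the digitizer reports each sensor as a 4x4 homogeneous pose in a common world frame, propagating the calibrated backprojection to every other view of the sequence could look like the following sketch (function and variable names are illustrative, not the ARTMA API):

```python
import numpy as np


def relative_pose(camera_pose, patient_pose):
    """Pose of the patient sensor expressed in the camera sensor frame.
    Both inputs are 4x4 homogeneous world-frame transforms from the
    3D digitizer readings of one frame."""
    return np.linalg.inv(camera_pose) @ patient_pose


def projection_for_frame(P_calib, M_calib, M_frame):
    """Carry the calibrated 3x4 projection over to a new frame using
    only the stored relative sensor poses, without re-calibrating.

    P_calib: projection estimated for the calibration view.
    M_calib: relative pose (patient in camera frame) at calibration.
    M_frame: relative pose at the new frame.
    """
    # Map patient coordinates in the new frame back to the geometry of
    # the calibration view, then apply the calibrated projection.
    return P_calib @ np.linalg.inv(M_calib) @ M_frame
```

Since only the relative camera-to-patient pose enters the update, the same mechanism works for any sensor-equipped imaging device once its own view has been calibrated.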
Assuming the sensor remains fixed on the endoscope for the duration of the surgical procedure, the backprojection computed for the IVT data set is also valid for the live video image.