European Initiative

for

Remote Knowledge Visualization

Copyright © 1997 Michael Truppe, MD

 

Summary

The discovery of X-rays was the starting point for research and development in medical imaging modalities.

Whereas current research activities concentrate on more technical issues, these technologically oriented developments neglect the increase of knowledge in medicine. There is steady progress in the development of imaging systems (Figure 1 Tele 3D Navigation) that contributes to continuous improvements in diagnosis and therapy. It should therefore be the objective of research to present this knowledge to the physician and the patient in diagnosis and therapy, and also intraoperatively.

The exponential increase of knowledge in medicine makes it necessary to develop new concepts for the visualization of medical information. Visualization in this sense includes the display of medical imaging data and image guided stereotactic navigation as well as the advice of an expert. In this proposal we present a technology for image fusion that overlays virtual data structures on live video (augmented reality) in real time during surgery. Simultaneously this data is transmitted over the Internet, enabling an expert to participate in the surgical procedure from any remote location.

On September 26, 1996 the EURODOC project was presented to the European Commission in Brussels by Stadtrat Rudolf Edlinger (City Councillor for Finance and Economic Policy of the City of Vienna). An image guided stereotactic procedure performed at the General Hospital (Clinic for Oral and Maxillofacial Surgery, University of Vienna, Head Prof. DDr. R. Ewers) was visualized remotely in Brussels via telecommunication. The interactive remote consultation of the surgical procedure by Prof. DDr. R. Ewers in Brussels was observed by Prof. Dr. Bangemann (Commissioner for Technology in the EC) and leading representatives of the EC.

The project goal is the visualization of medical knowledge in diagnosis and therapy, especially during surgery. The augmented reality technology makes it possible to transfer this knowledge via a computer network or the Internet. This proposal not only outlines a strategy but already presents first results of our research in remote knowledge visualization.

Within the EURODOC project this technology is made available to leading research institutions by the Telepresence Research Institute (TRI) for the purpose of further research in the field of telenavigation. The free software for interactive teleconsulting can be downloaded from http://www.artma.com/viewer/.

 

Figure 1 Tele 3D Navigation

 

State of the art

One of the first stereotactic instruments was developed in 1906 by Victor Horsley and Richard Clarke. They used it to generate reproducible intracerebral lesions for investigating the brain function of monkeys. In 1918 the neurologist Aubrey Mussen developed a stereotactic instrument to be positioned relative to the human skull based on an anatomical atlas. This idea was ahead of its time for neurosurgery, and it took another generation until it was finally applied in humans.

In 1987 Watanabe in Japan was the first to use the "Neuronavigator" intraoperatively [1]. The skull of the patient was immobilized, and an articulated arm was used to display its current position intraoperatively on the computer screen in the corresponding CT slices.

While working for the computer graphics company Evans and Sutherland, the medical student Russel A. Brown in 1978 introduced new concepts into the development of what was later called the Brown-Roberts-Wells System [2]. This work was continued by Peter Heilbrun [3], and, as at other research centers, new position sensor systems were explored to localize stereotactic instruments without the inherent limitations of mechanical stereotactic frames [4]. The research mission should be not so much to develop a navigation system as to transfer expert knowledge into the operating room. It is likely that medicine will see a development similar to that of information technology in general: what is desirable is not the local optimization of a technology but access to the information resources of other university centers. The research centers will be connected via a European interdisciplinary medical network. To achieve this goal a suitable software technology has to be developed.

For several years Dr. Michael Truppe (TRI) has been developing a technology to meet these requirements, Interventional Video Tomography (IVT) [5-10]. First clinical results have appeared in international publications [11, 12].

 

Knowledge visualization

Computer assisted navigation systems are already in use to localize and display tumors of the skull and to guide the surgical instruments [13, 14]. This information constitutes only one of the attributes used by an experienced surgeon; in addition, the experience from prior cases assists the surgeon in navigation. So instead of a surgical navigation system that guides to the patient's anatomy only, we want to display the medical expertise of a surgeon.

This is implemented by stereo photometric reconstruction of anatomical structures in 3D from various imaging modalities. These virtual graphical structures are then projected into the optical view of the surgeon so that they correlate with the anatomy visible in the operating field. As an example, the access path to a tumor can be predefined and becomes visible as a virtual path within the patient. The increase in minimally invasive procedures has put further emphasis on this concept, as the surgeon has only a limited overview of the operating field; here the primary display is already a video monitor.

The IVT technology analyzes video sequences based on stereo photometric mathematical methods [15-22] after the video frames have been recorded synchronously with position data from 3D sensors (Figure 2 3D Reconstruction of a video sequence).
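As a minimal sketch of this synchronous recording, assuming hypothetical names and a common clock for both streams (not the actual IVT implementation), each video frame can be paired with the sensor sample nearest in time:

```python
# Minimal sketch of synchronous frame/pose recording; names are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorPose:
    timestamp: float                                 # seconds since recording start
    position: Tuple[float, float, float]             # x, y, z in digitizer space (mm)
    orientation: Tuple[float, float, float, float]   # unit quaternion w, x, y, z

@dataclass
class IVTRecord:
    frame_index: int      # index of the video frame
    timestamp: float      # capture time of the frame
    pose: SensorPose      # sensor sample nearest in time

def pair_frames_with_poses(frame_times: List[float],
                           poses: List[SensorPose]) -> List[IVTRecord]:
    """Assign to each video frame the sensor sample nearest in time.

    In practice the pose stream is sampled much faster than 25/30 Hz video,
    so the nearest sample is a good approximation of the pose at exposure."""
    return [IVTRecord(i, t, min(poses, key=lambda p: abs(p.timestamp - t)))
            for i, t in enumerate(frame_times)]
```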

 

Figure 2 3D Reconstruction of a video sequence

 

These video sequences are spatially connected to existing imaging modalities like CT or MR. A tumor identified in the CT as a graphical outline therefore becomes visible in the video sequence or as a superimposition on the live video (Figure 3 Comparison CT to IVT). In this IVT data set the stereotactic position of surgical instruments relative to the patient's anatomy is also stored dynamically.

 

Figure 3 Comparison CT to IVT

 

In this way the surgical procedure is documented: for any moment of the operation the position of an instrument relative to a sensitive anatomical structure is known (Figure 4 IVT Stereotactic coordinates).

 

Figure 4 IVT Stereotactic coordinates

 

With the Virtual Patient System the surgeon has two visualization methods available intraoperatively (Figure 5 Intraoperative screenshot, University of Vienna, Clinic for Oral and Maxillofacial Surgery),

Figure 5 Intraoperative screenshot, University of Vienna, Clinic for Oral and Maxillofacial Surgery

 

by image fusion of virtual data with live video from:

the microscope, for stereotactic microscope navigation

the endoscope, for stereotactic endoscope navigation

 

Intraoperative control of accuracy

Most authors place great emphasis on the digitizer technology used and define the localization error by measuring the difference in mm between the computer-determined location of the target and the location of the point in the CT slice image. Several authors [23] have pointed out that the digitizer accuracy has only a limited effect on the overall accuracy.

The magnetic field digitizer we use has an accuracy of about 0.5 mm in an optimal environment. Because of the dynamic nature of the measurement errors induced in magnetic field digitizers by metal objects, we cannot rely on these laboratory benchmarks. Instead of comparing the digitizer values against a calibration object, they are compared against an optical coordinate reference system, thus separating the digitizer error from other error sources.

The digitizer source is rigidly mounted on an operating microscope with integrated video output. The position and orientation of the digitizer coordinate system relative to the focal plane of the microscope has to be determined. The representation of a stylus tip in optical space as an x,y coordinate on the microscope video image is correlated with the x,y,z coordinates in digitizer space as measured by the tip of this stylus. The virtual camera model is computed either by a direct linear transformation [15, 16] or by a nonlinear optimization method [21, 22]. After this step any sensor position in digitizer space is projected into the focal plane of the microscope, and a registration error induced by dynamic metal influence becomes visible immediately (Figure 6 Accuracy control, University of Vienna, Clinic for Oral and Maxillofacial Surgery).
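The following is a minimal sketch of the direct linear transformation variant [15, 16], assuming at least six stylus correspondences; variable names are illustrative, and a nonlinear method such as Tsai's [21, 22] would refine the result further:

```python
# Sketch of a DLT camera calibration in the spirit of Abdel-Aziz/Karara [15, 16].
import numpy as np

def dlt_calibrate(points_3d: np.ndarray, points_2d: np.ndarray) -> np.ndarray:
    """Estimate the 11 DLT parameters from >= 6 stylus-tip correspondences.

    points_3d: (N, 3) stylus positions in digitizer space
    points_2d: (N, 2) matching x,y positions on the microscope video image
    """
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L   # the 11 parameters of the virtual camera model

def dlt_project(L: np.ndarray, point_3d: np.ndarray) -> np.ndarray:
    """Project a digitizer-space point into microscope image coordinates."""
    X, Y, Z = point_3d
    w = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return np.array([(L[0] * X + L[1] * Y + L[2] * Z + L[3]) / w,
                     (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / w])
```

Projecting every tracked sensor through such a virtual camera model and comparing the result with the anatomy visible in the video image is what makes a dynamic registration error directly observable.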

Figure 6 Accuracy control, University of Vienna, Clinic for Oral and Maxillofacial Surgery

 

Augmented reality as visualization concept

The benefit of visualizing medical information in a head mounted display is not limited to surgical procedures; it can be applied universally in diagnosis and therapy. The head mounted displays currently in use (Figure 7 Monoscopic HMD) have their origin in the military [24].

Figure 7 Monoscopic HMD

The Virtual Patient software implements a calibration method to project virtual structures directly into the optical view of the eyes, using a see-through head mounted display [25]. We are working on a miniature optical system mounted on glasses, similar to a "Lupenbrille" (surgical loupes). It injects the computer generated structures into the optical path of the system rather than directly into the eye and offers an enlarged view of the operating field. The advantage is that this system is calibrated in itself; the virtual structures are superimposed correctly independent of a temporary misalignment of the device relative to the eyes of the observer.

 

3D Annotation

Various methods are being explored to annotate medical imaging data, for example with the second opinion of a colleague. The solutions presented so far [26, 27] are limited to combinations with various PACS systems. To manage the increase and subsequent overload of visual information, interesting concepts are being developed for managing the information content of video sequences [28-30].

In the US the Medical Advanced Technology Management Office (MATMO) develops the Mobile Medical Mentoring Vehicle (Figure 8 US ARMY Mobile Medical Mentoring Vehicle).

Figure 8 US ARMY Mobile Medical Mentoring Vehicle

The main project goals are:

1. medical mentoring

2. dynamic allocation of resources to the injured, to set a priority list for treatment

Here, too, the transfer of knowledge is the main issue.

The technology presented in this proposal makes possible a dramatically new concept of knowledge transfer: a 3D annotation to a dynamic process. A typical example is a surgical procedure with continuously tracked instruments.
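As an illustration only, with all names hypothetical, such an annotation can be thought of as a remark anchored to a point in patient coordinates and to a time interval of the recorded procedure:

```python
# Illustrative sketch of a 3D annotation tied to a dynamic process.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Annotation3D:
    anchor: Tuple[float, float, float]   # point in patient coordinates (mm)
    text: str                            # the expert's remark
    t_start: float                       # seconds into the procedure
    t_end: float

    def active(self, t: float) -> bool:
        """The annotation is rendered only while its interval is current."""
        return self.t_start <= t <= self.t_end
```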

 

Telementoring

Only the prior research and development of the IVT technology made it possible to realize teleassisted surgery via the Internet. For the future we envision the availability of the best possible medical treatment to any patient within the European Union, independent of the geographic distance to medical centers, for diagnosis and therapy but also for teleassisted surgery.

The ETC Lab of the University of Toronto uses augmented reality technology to control robots underwater [31] or in dangerous areas. In contrast to industrial robots, these robots have to operate in an environment about which insufficient information is available, a so-called unstructured environment [32]. To navigate the robot around obstacles, stereoscopic video is used [33], and the position of an obstacle relative to the robot can be extracted from the video [34].

The IVT technology makes it possible to extract information about the relative distance of objects from the video camera even from monoscopic video. This is very important, as monoscopic video is already routinely transmitted via the Internet.
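One reason this becomes possible is that the tracked sensor supplies a camera pose for every frame, so a point seen in two frames of a moving monoscopic camera can be located in 3D. The following is a textbook linear triangulation sketch under that assumption, not the IVT algorithm itself:

```python
# Linear two-view triangulation with known camera poses (e.g. from a 3D sensor).
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                uv1: np.ndarray, uv2: np.ndarray) -> np.ndarray:
    """Recover a 3D point from its pixel positions in two tracked frames.

    P1, P2: (3, 4) projection matrices of the two frames (pose from sensor)
    uv1, uv2: (2,) pixel coordinates of the same point in each frame
    """
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)          # homogeneous least-squares solution
    X = Vt[-1]
    return X[:3] / X[3]                  # back from homogeneous coordinates
```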

 

TRI Research in Austria

University of Vienna, Clinic for Oral and Maxillofacial Surgery

(A.Wagner, G.Enislidis, Head Prof. DDr.R.Ewers)

The "Augmented Reality Environment" technology developed by Michael Truppe is used in clinical applications at the University Clinic for Oral and Maxillofacial Surgery since several years. The system enables us to visualize structures of anatomy, physiology and function inside the living subject in a pseudo natural, less encumbering way by image fusion technology. This means fusion of cyberspace graphics and the real world space [35, 36, 11].

When employing intraoperative navigational assistance, the first task to fulfil is collecting the various preoperative radiological imaging modalities. Direct interfacing of our computer workstation with CT or MRI scanners allows direct transfer of the above mentioned data. The surgeon is then able to mark the identified pathological structures on screen as well as to mark surgical approaches by means of rectangles. Intraoperatively these graphical virtual structures are then matched with real world space (i.e. the real patient) and visualized by means of a head-mounted display (monoscopic, stereoscopic or see-through). Any preoperatively depicted structure can be seen inside the patient intraoperatively. For matching computer graphics structures with the real world we use an electromagnetic tracking system; the integration of optoelectronic tracking devices is in preparation.
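Point-based rigid registration is one common way to perform such a matching; the following is a generic sketch in the Kabsch/Horn style, assuming paired fiducials in both spaces, and is not necessarily the system's own method:

```python
# Generic rigid point-based registration (Kabsch/Horn style); illustrative only.
import numpy as np

def register_rigid(src: np.ndarray, dst: np.ndarray):
    """Find rotation R and translation t such that dst[i] ~ R @ src[i] + t.

    src: (N, 3) fiducial positions marked in the imaging data (CT/MRI space)
    dst: (N, 3) the same fiducials touched with the tracked stylus
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t   # apply as: point_in_patient_space = R @ point_in_image + t
```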

When trackers are attached to bone segments intraoperatively, any movement of these segments can be visualized in the augmented reality environment and at the same time put in spatial relation to other segments of the human skeleton in previously acquired imaging data. Instruments and endosseous implants can likewise be visualized inside the human body by means of superimposed graphical structures, which is of particular importance in minimally invasive surgery.

Intraoperative accuracy can be checked by comparing the projection of computer generated structures with anatomical fiducial markers.

We have evaluated the benefit of this technology in various procedures [37-53] and consider it essential, especially in difficult anatomical situations, in combination with a reduction of the operating time.

As presented at the European Forum Alpbach on August 26, 1996 and in Brussels on September 26, 1996, graphical tele-assistance can be provided directly in the intraoperative situation. This interactive teleconsultation concept works by interactively changing virtual computer graphics, which are then superimposed on the operating field by means of a see-through head-mounted display and serve intraoperatively as a virtual guideline [54]. An expert at a remote location is thus able to assist his colleague in the operating theater. Any change in the preoperative planning is visualized on the displays of the HMD in real time.

 

Telesurgery at the Clinic for Oral and Maxillofacial Surgery, University of Vienna

A patient suffering from a posttraumatic deformity was operated on at the Clinic for Oral and Maxillofacial Surgery, University of Vienna (Head Prof. DDr. R. Ewers), with the help of image guided surgery/augmented reality [55]. A unilateral zygomatic-orbital fracture, a central midface fracture and a comminuted maxillary fracture had led to marked facial asymmetry and functional impairment, following insufficient reduction in the primary repair. To restore facial symmetry and occlusal balance, a reosteotomy following the fracture lines in the zygoma and the orbit, as well as a Le Fort I osteotomy, was planned. The actual and desired positions of the bony structures were superimposed as overlay graphics on radiological and video images in real time during surgery, tracked by 3D sensors. Reduction could be performed according to contour graphics displaying the intraoperative movements on CT scans. In teleconsultation, the achieved position could be discussed with respect to symmetry, hard/soft tissue relation and occlusal details, with the possibility of on-screen planning interaction and real time evaluation of the results.

The digitizer source was rigidly mounted on an operating microscope with integrated video output. First the position and orientation of the digitizer coordinate system relative to the focal plane of the microscope had to be determined. The representation of a stylus tip in optical space as an x,y coordinate on the microscope video image is correlated with the x,y,z coordinates in digitizer space as measured by the tip of this stylus, and a virtual camera model is generated [15, 22]. After this step any sensor position in digitizer space is projected into the focal plane of the microscope, so a registration error induced by dynamic metal influence becomes visible immediately during surgery in real time. Sensors were rigidly attached to the structures manipulated intraoperatively [56] and to the surgical instruments. Their position and movement relative to the skull base was continuously displayed in various radiographic data sets. In addition, the virtual structures representing bone segments and instruments were visible as an overlay in the live video from the microscope, providing an augmented reality "see-through" visualization.

Remote consultation

The expert at a remote location receives this data almost in real time over standard transmission protocols (Figure 9 Interactive consultation, University of Vienna, Clinic for Oral and Maxillofacial Surgery). In addition, the stereotactic navigation data is sent over the network as rigid body coordinates. The actual graphic overlay structures are computed on the remote computer, dramatically reducing the bandwidth necessary for transmission. Through teleconsulting, the composite images and overlay graphics - instrument, target structure, landmark, contour - can be seen in the connected clinics, with the possibility of interactive graphical assistance.
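The bandwidth argument can be made concrete with a sketch; the wire format below is purely hypothetical, but it shows that a rigid body update amounts to a few dozen bytes rather than a video frame:

```python
# Hypothetical wire format for one rigid body update; illustrative only.
import struct

def pack_pose(body_id: int, t: float, pos: tuple, quat: tuple) -> bytes:
    """Serialize one update: body id, timestamp, x/y/z position, quaternion."""
    return struct.pack("<Id3d4d", body_id, t, *pos, *quat)

update = pack_pose(1, 12.34, (10.0, -5.2, 130.7), (1.0, 0.0, 0.0, 0.0))
print(len(update), "bytes per rigid body update")   # 68 bytes
```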

Figure 9 Interactive consultation, University of Vienna, Clinic for Oral and Maxillofacial Surgery

 

Related Research of TRI

University of Vienna, Clinic for Traumatology

The first clinical application of this technology was developed in the field of traumatology. In 1990 the University Clinic for Traumatology (Head Univ. Prof. Dr. V. Vecsei) started investigating "Dynamic 3D measurement of the kinetics of the human knee".

Project summary

The kinematics of the knee is quite different from that of the other human joints. The factors involved are the joint surface, tendons, menisci and dynamic stabilization by muscles.

Figure 10 Stereophotometric analysis of X-rays, University of Vienna, Clinic for Traumatology

Figure 11 Distance change during flexion, University of Vienna, Clinic for Traumatology

The complex motion cannot be reproduced in models [57] or cadaver specimens. An exact analysis under in vivo conditions [58-60] has so far been restricted by the limitations of the available instruments. Our goal is the dynamic recording of the knee kinematics under load to display the real spatial movements of the bone structures. First results from studies on cadaver specimens are available (Figure 10 Stereophotometric analysis of X-rays, University of Vienna, Clinic for Traumatology). After defining the insertion points of the cruciate ligament in 3D by stereophotometric X-ray analysis [61, 62], a flexion of a cadaver knee was recorded by 3D sensors and displayed as a graph (Figure 11 Distance change during flexion, University of Vienna, Clinic for Traumatology).

 

University of Innsbruck, Clinic for ENT

Figure 12 Sensor fixation, University of Innsbruck, Clinic for ENT

The system is used at the University of Innsbruck, Clinic for ENT (Head Prof. Dr. W. Thumfart) [63-67] for stereotactic image guided endoscopic navigation. A special project is the development of a patient head splint made of carbon fiber (Figure 12 Sensor fixation, University of Innsbruck, Clinic for ENT), similar to the stereotactic frames used in neurosurgery.

 

University of Vienna, Dental School

At the Dental School of the University of Vienna (Head Prof.Dr.R.Slavicek) TRI develops a software module to integrate electronic axiographic data into the medical imaging data.

The hinge axis defined in the electronic axiographic system [68] is made visible in CT and X-ray images by fiducial markers. A specially developed conversion algorithm maps the axiographic data as a rigid body movement of the lower jaw onto any medical imaging data of the patient. In this way it is possible for the first time to visualize the motion of the mandibular condyle relative to the mandibular fossa in three dimensions [69].
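As an illustrative sketch of such a mapping, with names and interface assumed rather than taken from the system, a recorded opening rotation can be applied to lower-jaw points about the marker-defined hinge axis:

```python
# Rigid rotation of lower-jaw points about a hinge axis (Rodrigues' formula).
import numpy as np

def rotate_about_axis(points: np.ndarray, axis_point: np.ndarray,
                      axis_dir: np.ndarray, angle_rad: float) -> np.ndarray:
    """Rigidly rotate (N, 3) points about the line through axis_point
    with direction axis_dir."""
    k = axis_dir / np.linalg.norm(axis_dir)
    p = points - axis_point                       # move axis to the origin
    rotated = (p * np.cos(angle_rad)
               + np.cross(k, p) * np.sin(angle_rad)
               + np.outer(p @ k, k) * (1.0 - np.cos(angle_rad)))
    return rotated + axis_point                   # move back
```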

In a different approach, 3D sensors rigidly attached to the forehead and lower jaw track the motion of the lower jaw (Figure 13 3D sensor on the skull and lower jaw, University of Vienna, Dental School).

At the Department for Oral Surgery at the Dental School of Vienna (Head: Prof.Dr.G.Watzek) the research is focused on the positioning of implants [70-72].

 

Figure 13 3D sensor on the skull and lower jaw, University of Vienna, Dental School

 

University of Zürich, Clinic for Oral and Maxillofacial Surgery

At the Clinic for Oral and Maxillofacial Surgery (Head: Prof. DDr H. Sailer), University of Zürich, research is focused on improving the accuracy of the Virtual Patient System [73, 74].

 

University of Berlin, Clinic for Oral and Maxillofacial Surgery

In Germany the Freie Universität Berlin (FUB) has extensive resources for research in biomedical imaging and visualization. TRI is starting a project with FUB for further development of the visualization technology of the Virtual Patient System. The Deutsche Forschungsgemeinschaft (DFG) is funding a research project for further development of this technology. In this context a Virtual Patient System is being purchased by the DFG to be used also at the Clinic for Oral and Maxillofacial Surgery, University of Berlin (Head Prof.DDr. B.Hoffmeister).

 

Relevance to projects in the European Union

Europe and the global information society

Recommendations to the European Council, 26 May 1994, excerpt from the report of Prof. Bangemann

Application VII, Healthcare Networks: Less costly and more effective healthcare systems for Europe's citizens

What should be done? Create a direct communication "network of networks" based on common standards linking general practitioners, hospitals and social centres on a European scale.

Who will do it? The private sector, insurance companies, medical associations and Member State healthcare systems, with the European Union promoting standards and portable applications. Once telecom operators make available the required networks at reduced rates, the private sector will create competitively priced services at a European level, boosting the productivity and cost effectiveness of the whole healthcare sector.

Who gains? Citizens as patients will benefit from a substantial improvement in healthcare (improvement in diagnosis through on-line access to European specialists, on-line reservation of analysis and hospital services by practitioners extended on a European scale, transplant matching, etc.). Tax payers and public administrations will benefit from tighter cost control and cost savings in healthcare spending and a speeding up of reimbursement procedures.

Issues to watch? Privacy and the confidentiality of medical records will need to be safeguarded.

What target? Major private sector health care providers linked on a European scale. First level implementation of networks in Member States linking general practitioners, specialists and hospitals at a regional and national level by end of 1995.

 

EURODOC: Pioneer project in the EU

The EURODOC project should contribute to APPLICATION VII, HEALTHCARE NETWORKS

The creation of networks on a European scale to exchange annotations to radiological data and a second opinion has already shown the feasibility of this approach. The Virtual Patient technology goes far beyond these concepts and has already been proven in practical field tests with the University Hospital of Vienna.

The goal of the EURODOC project is the further advancement of the technology leadership of Europe in the field of medical telenavigation [75]. By designing the software to work within existing communication bandwidth limitations, the available infrastructure is used to its full potential, thereby contributing to cost containment in the health care sector.

Intraoperatively, any medical imaging data is processed as digital data, in combination with user defined graphical structures defining a surgical simulation. These graphical computer structures represent the knowledge of the physician.

 

Transfer of knowledge via telecommunication

Any medical knowledge is visualized as a superimposition of virtual structures on recorded video sequences or as real time image fusion with live video sources. Especially with the increase of minimally invasive procedures, the primary display of the operating field is a video monitor. The information available to the surgeon in the operating room is already in digital form (Figure 14 Intraoperative integration of all imaging modalities).

 

Figure 14 Intraoperative integration of all imaging modalities

The transfer of this digital information via a network is possible with the existing communication infrastructure, e.g. the Internet. Only 3D coordinates defining rigid body movements are transmitted over the network, defining the stereotactic position of instruments relative to the patient's anatomy. In addition, a video stream is sent via the network to give an overview of the operating field, but this is only a supplement to the basic transmission of the 3D sensor data (Figure 15 Transfer of 3D data via Internet).

On any remote computer the graphic overlay is computed locally, with a complexity that depends on the available processing power.

It is important to note that the medical imaging data displayed at the remote computer might be completely different from the imaging data displayed in the operating room. Therefore the expert can evaluate the progress of the surgical procedure by visualizing the stereotactic position of instruments relative to imaging data of his preference.
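A receiver-side sketch, with all names hypothetical, shows how the transmitted pose can be chained with a purely local registration so that the overlay appears in whichever data set the expert prefers:

```python
# Chain the received pose with a local registration; illustrative only.
import numpy as np

def instrument_in_local_view(R_recv: np.ndarray, t_recv: np.ndarray,
                             tip_in_body: np.ndarray,
                             R_reg: np.ndarray, t_reg: np.ndarray) -> np.ndarray:
    """Map an instrument tip into the expert's locally chosen image volume.

    R_recv, t_recv: rigid body pose received over the network (patient space)
    tip_in_body:    instrument tip in its own rigid body frame
    R_reg, t_reg:   local registration from patient space to the chosen volume
    """
    tip_patient = R_recv @ tip_in_body + t_recv   # pose from the operating room
    return R_reg @ tip_patient + t_reg            # coordinates in local CT/MR
```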

 

Figure 15 Transfer of 3D data via Internet

 

Advantages of IVT technology for telenavigation

This approach offers several advantages, outlined in the following sections.

 

Medical knowledge server

 

Figure 16 Knowledge based navigation server

The current consultation of a remote expert is based on a point to point connection protocol. Although this concept can be expanded to connect to several experts during a surgical procedure, the management of this information needs special attention. We will use a knowledge based server [76] to filter the information reaching the surgeon (Figure 16 Knowledge based navigation server). The goal is to reduce the administrative load on the surgeon; we will integrate prior research from the General Hospital of Vienna [77-80].

The IVT technology makes it possible to digitally record a complete surgical procedure. Similar to the "black box" of an airliner, all data relevant to the surgical procedure is recorded. Not only the video of the operating field is recorded but also the movement and stereotactic position of instruments relative to the patient's anatomy, visualized as a graphical overlay on any medical imaging data.
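An append-only log with time-indexed lookup is one simple way to picture such a recording; the sketch below is illustrative, not the system's actual storage format:

```python
# Illustrative "black box": an append-only log of timestamped events.
import bisect

class ProcedureLog:
    def __init__(self):
        self._times, self._events = [], []

    def append(self, t: float, event: dict) -> None:
        """Record an event (pose update, frame reference, annotation);
        events must arrive in order of increasing time."""
        self._times.append(t)
        self._events.append(event)

    def at(self, t: float) -> dict:
        """Return the last event recorded at or before time t."""
        i = bisect.bisect_right(self._times, t) - 1
        return self._events[i] if i >= 0 else {}
```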

In certain situations it might be helpful to access, as a reference, a similar surgical procedure recorded in the past, at a similar stage of progress. All Virtual Patient Systems have an integrated database. It would therefore be easy to access a recorded surgical procedure via a secure protocol at a Virtual Patient System located in a different research center. It is not necessary to transfer the complete data set; it would be possible to access only a certain sequence of a surgical procedure (Figure 17 Secure communication between Virtual Patient Systems).

 

Figure 17 Secure communication between Virtual Patient Systems

 

EURODOC project goals

Open standard for knowledge transfer

In order to facilitate a European exchange of expert second opinions in telenavigation, it is necessary to define an open communication standard that extends already existing standardization initiatives.

Interactive medical knowledge transfer in Europe

The local stereotactic navigation system will be connected to European research institutions by a high bandwidth communication network (Figure 18 Interactive communication in Europe).

At each of these universities there is in turn a subnet (Local Area Network, LAN) for local communication between the experts of that university, so that a consensus can be reached before the opinion is transmitted back to the operating site.

Figure 18 Interactive communication in Europe

 

Visualization by augmented reality

Our vision is that any kind of medical information will be visualized directly at the patient, as an overlay of computer generated structures on the real world, the patient. The current visualization concepts have a significant disadvantage:

It is up to the surgeon to relate the position of surgical instruments in the operating field to the display of the same instruments as an overlay on medical imaging data on the computer monitor.

In contrast to this, the Virtual Patient System fuses any medically relevant information with the real world, either by superimposing this information on a video image of the operating field or by injecting these graphic structures into the optical view of a see-through head mounted display.

 

Bibliography

[1]

E. Watanabe, T. Watanabe, S. Manaka, Y. Mayanagi, and K. Takakura, "Three-dimensional digitizer (neuronavigator): new equipment for computed tomography-guided stereotaxic surgery," Surg Neurol, vol. 27, pp. 543-7, 1987.

[2]

R. A. Brown, "A computerized tomography-computer graphics approach to stereotaxic localization," J Neurosurg, vol. 50, pp. 715-20, 1979.

[3]

M. P. Heilbrun, T. S. Roberts, M. L. Apuzzo, T. J. Wells, and J. K. Sabshin, "Preliminary experience with Brown-Roberts-Wells (BRW) computerized tomography stereotaxic guidance system," J Neurosurg, vol. 59, pp. 217-22, 1983.

[4]

M. P. Heilbrun, P. McDonald, C. Wiker, S. Koehler, and W. Peters, "Stereotactic localization and guidance using a machine vision technique," Stereotact Funct Neurosurg, vol. 58, pp. 94-8, 1992.

[5]

M. Truppe, "Method for representing moving bodies," European Patent Office 1991, patent EP0488987B1.

[6]

M. Truppe, "Process for imaging the interior of bodies," European Patent Office 1993, patent WO94/03100.

[7]

M. Truppe, "Artma Virtual Patient," presented at Medicine Meets Virtual Reality, San Diego, 1994.

[8]

M. Truppe, "Interventional video tomography in computer assisted surgery concepts," J Craniomaxillofac Surg, vol. 24, Suppl. 1, 1996.

[9]

M. Truppe, F. Pongracz, W. Freysinger, A. Gunkel, and W. Thumfart, "Interventional Video Tomography," presented at Computer Assisted Radiology, Berlin, 1995.

[10]

M. Truppe, F. Pongracz, O. Ploder, A. Wagner, and R. Ewers, "Interventional Video Tomography," in Proceedings of Lasers in Surgery, vol. 2395. San Jose, CA: SPIE, 1995, pp. 150-152.

[11]

A. Wagner, O. Ploder, G. Enislidis, M. Truppe, and R. Ewers, "Virtual image guided navigation in tumor surgery--technical innovation," J Craniomaxillofac Surg, vol. 23, pp. 217-3, 1995.

[12]

A. Wagner, O. Ploder, G. Enislidis, M. Truppe, and R. Ewers, "Image-guided surgery," Int J Oral Maxillofac Surg, vol. 25, pp. 147-51, 1996.

[13]

S. Hassfeld, J. Muhling, and J. Zoller, "Intraoperative navigation in oral and maxillofacial surgery," Int J Oral Maxillofac Surg, vol. 24, pp. 111-9, 1995.

[14]

W. Schmitt, S. Pawelke, and T. Meissen, "["Aachen 3-D-finger". Development of a 3-D-digitizer for use in dental, oral and maxillary treatment]," Biomed Tech (Berlin), vol. 35, pp. 69-71, 1990.

[15]

Y. Abdel-Aziz, Karara, HM., "Direct linear transformation into object space coordinates in close-range photogrammetry," presented at Proceedings of the Symposium on Close-Range Photogrammetry, 1971.

[16]

Y. I. Abdel-Aziz, Karara, H.M, "Photogrammetric Potential of Non-Metric Cameras," in Civil Engineering Studies, Photogrammetry Series No. 36: University of Illinois at Urbana-Champaign, 1974.

[17]

L. E. Fencil and C. E. Metz, "Propagation and reduction of error in three-dimensional structure determined from biplane views of unknown orientation," Med Phys, vol. 17, pp. 951-61, 1990.

[18]

H. C. Longuet-Higgins, "A computer algorithm for reconstructing a scene from two projections.," Nature, vol. 293, pp. 133-5, 1981.

[19]

C. E. Metz and L. E. Fencil, "Determination of three-dimensional structure in biplane radiography without prior knowledge of the relationship between the two views: theory," Med Phys, vol. 16, pp. 45-51, 1989.

[20]

C. E. Metz and L. E. Fencil, "Extraction of three-dimensional information from biplane images without prior knowledge of the relative geometry of the two views," Prog Clin Biol Res, vol. 363, pp. 257-70, 1991.

[21]

R. Tsai, "An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision," IEEE Conference on Computer Vision and Pattern Recognition, pp. 364-374, 1986.

[22]

R. Tsai, "A Versatile Camera Calibration Technique for High Accuracy 3D Machine Vision Metrology Using Off-The-Shelf TV Cameras and Lenses," IEEE Journal of Robotics and Automation, pp. 323-344, 1987.

[23]

D. A. Simon, R. V. O'Toole, M. Blackwell, F. Morgan, A. M. DiGioia, and T. Kanade, "Accuracy Validation in Image-Guided Orthopaedic Surgery," presented at MRCAS 95, Pittsburgh, 1995.

[24]

R. M. Satava, "Medical applications of virtual reality," J Med Syst, vol. 19, pp. 275-80, 1995.

[25]

R. Azuma and G. Bishop, "Improving Static and Dynamic Registration in an Optical See-Through HMD," presented at Proceedings of Siggraph 94, Orlando, Florida, 1994.

[26]

G. Haufe, R. Jakob, D. Fuchs, and T. Meißner, "PACS at work: A multimedia e-mail tool for integration of images, speech and dynamic annotation," presented at Computer Assisted Radiology, Paris, 1996.

[27]

D. Lyche, F. Goeringer, J. Weiser, M. Williamson, J. Romlein, and C. Suitor, "Teleradiology in the Department of Defense," presented at RSNA, Chicago, 1995.

[28]

P. Aigrain, H. Zhang, and D. Petkovic, "Content-based representation and retrieval of visual media: A state-of-the-art review," Multimedia Tools and Applications, vol. 3, pp. 179-202, 1996.

[29]

R. Harmon, W. Patterson, W. Ribarsky, and J. Bolter, "Virtual annotation system," presented at Proceedings of the IEEE 1996 Virtual Reality Annual International Symposium, Santa Clara, CA, 1996.

[30]

R. Zabih, J. Woodfill, and M. Withgott, "Real-time system for automatically annotating unstructured image sequences," presented at Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Le Touquet, Fr, 1993.

[31]

P. Milgram, A. Rastogi, and J. Grodski, "Telerobotic Control Using Augmented Reality," presented at 4th IEEE International Workshop on Robot and Human Communication, Tokyo, Japan, 1995.

[32]

A. Rastogi, P. Milgram, and J. Grodski, "Augmented Telerobotic Control: a visual display for unstructured environments," presented at KBS/Robotics Conference, 1995.

[33]

D. Drascic and P. Milgram, "Positioning Accuracy of a Virtual Stereographic Pointer in a Real Stereoscopic Video World," presented at Stereoscopic Displays and Applications II, San Jose, California, 1991.

[34]

P. Milgram, S. Zhai, and D. Drascic, "Applications of Augmented Reality for Human-Robot Communication," presented at International Conference on Intelligent Robots and Systems, Yokohama Japan, 1993.

[35]

G. Enislidis, O. Ploder, A. Wagner, and M. Truppe, "Prinzipien der virtuellen Realität und deren Anwendung in intraoperativen Navigationshilfesystemen," ACA, vol. 28, 1996.

[36]

O. Ploder, A. Wagner, G. Enislidis, and R. Ewers, "Computer-assisted intraoperative visualization of dental implants. Augmented reality in medicine," Radiologe, vol. 35, pp. 569-72, 1995.

[37]

G. Enislidis, O. Ploder, A. Wagner, M. Truppe, and R. Ewers, "Advantages and Drawbacks of Intraoperative Navigation Systems in Oral and Maxillofacial Surgery," presented at Computer Assisted Radiology, Berlin, 1995.

[38]

G. Enislidis, A. Wagner, O. Ploder, M. Truppe, and R. Ewers, "Semiimmersive Umfelder in der Kiefer- und Gesichtschirurgie," presented at Symposium Operationssimulation und intraoperative Navigation mit 3-D Modellen und Augmented Reality, Wien, 1995.

[39]

R. Ewers, G. Enislidis, A. Wagner, O. Ploder, and B. Schumann, "Die "Augmentierte Realität" Intraoperative Navigationshilfe in der Mund-, Kiefer- und Gesichtschirurgie," presented at Symposium Digitale Bildverarbeitung im Neuen AKH Dez. 95, Wien, 1995.

[40]

R. Ewers, O. Ploder, A. Wagner, G. Enislidis, and M. Truppe, "Intraoperative 3D Navigation in der Implantologie," presented at Symposium Operationssimulation und intraoperative Navigation mit 3-D Modellen und Augmented Reality, Wien, 1995.

[41]

R. Ewers, W. F. Thumfart, M. Truppe, O. Ploder, A. Wagner, G. Enislidis, N. Fock, A. R. Gunkel, and W. Freysinger, "Computed navigation in cranio-maxillofacial and ORL-head and neck surgery. Principles, indications and potentials for telepresence," J Craniomaxillofac Surg, vol. 24, Suppl. 1, 1996.

[42]

O. Ploder, A. Wagner, G. Enislidis, and R. Ewers, "3-D-Navigation mit dem Endoskop - eine Fallpräsentation am stereolithographischen Modell," Stomatol, vol. 92, pp. 147-9, 1995.

[43]

O. Ploder, A. Wagner, G. Enislidis, M. Truppe, and R. Ewers, "3D Augmented Reality in der Mund-Kiefer-Gesichtschirurgie," presented at Symposium Operationssimulation und intraoperative Navigation mit 3-D Modellen und Augmented Reality, Wien, 1995.

[44]

O. Ploder, A. Wagner, G. Enislidis, M. Truppe, and R. Ewers, "3D Endoscopic Surgery," presented at Computer Assisted Radiology, Berlin, 1995.

[45]

O. Ploder, A. Wagner, M. Truppe, M. Rasse, and R. Ewers, "CT-guided Drill System for Osteointegrated Implant Placement," presented at International Meeting on Functional Surgery of the Head and Neck, Graz, 1995.

[46]

O. Ploder, A. Wagner, M. Truppe, M. Rasse, and R. Ewers, "Image Guided Stereotactic Endoscopic Navigation," presented at International Meeting on Functional Surgery of the Head and Neck, Graz, 1995.

[47]

M. Truppe, "3D Augmented Reality - Grundlagen der Bildfusion," presented at Symposium Operationssimulation und intraoperative Navigation mit 3-D Modellen und Augmented Reality, Wien, 1995.

[48]

A. Wagner, O. Ploder, G. Enislidis, B. Schumann, and R. Ewers, "Semi-Immersive Artificial Environments in Maxillofacial Surgery," Journal of Computer Aided Surgery, vol. 2, pp. 19, 1995.

[49]

A. Wagner, O. Ploder, G. Enislidis, M. Truppe, and R. Ewers, "3D Image Guided Surgery," presented at Symposium Operationssimulation und intraoperative Navigation mit 3-D Modellen und Augmented Reality, Wien, 1995.

[50]

A. Wagner, O. Ploder, G. Enislidis, M. Truppe, and R. Ewers, "Navigation Assistance by a Virtual Image Guiding Operation System. Initial description and operation simulation on a stereolithographic skull model," Stomatol, vol. 93, pp. 87-90, 1996.



contact: Michael Truppe, MD, email: m.truppe@eurodoc.at, URL: http://www.eurodoc.at
