• Title/Summary/Keyword: Image Guided System


Image-guided Surgery System Using the Stereo Matching Method (스테레오 매칭 기법을 이용한 영상유도시술 시스템)

  • 강현수;이호진;문찬홍;문원진;김형진;최근호;함영국;이수열;변홍식
    • Journal of Biomedical Engineering Research
    • /
    • v.24 no.4
    • /
    • pp.339-346
    • /
    • 2003
  • MRI provides anatomical structure information with superb spatial resolution that can be utilized in clinical surgery. Advanced image processing techniques in conjunction with MRI-guided surgery are expected to be of great importance in brain surgery in the near future. In this paper, we introduce an image-guided surgery technique using the stereo matching method. To perform image-guided biopsy operations, we made MRI markers, camera markers, and a detection probe marker. To evaluate the accuracy of the image-guided system, we made a silicone phantom. Using the phantom and markers, we performed MRI-guided experiments with a 1.5 Tesla MRI system. The phantom experiments verified that our system has a positioning error of less than 1.5%. Compared with other image-guided surgery systems, our system shows better positioning accuracy.
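The core geometric step behind a stereo matching system like the one above is recovering depth from the disparity of a marker seen by two calibrated cameras. A minimal sketch, assuming rectified cameras; the focal length, baseline, and pixel coordinates are illustrative values, not taken from the paper:

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_mm):
    """Depth (mm) of a matched point from its horizontal disparity.

    Assumes a rectified stereo pair: both image planes are coplanar and
    the point appears on the same scanline in both images.
    """
    disparity = x_left - x_right  # pixels
    if disparity <= 0:
        raise ValueError("point must lie in front of the camera rig")
    return focal_px * baseline_mm / disparity

# Example: a marker seen at x=420 px (left) and x=400 px (right),
# with an 800 px focal length and a 120 mm baseline.
depth = triangulate_depth(420, 400, focal_px=800, baseline_mm=120)
```

The marker positions recovered this way can then be compared against their known locations in the MR volume to estimate the positioning error.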

Clinical Application of Image Guided Surgery : Zeiss SMN System (영상유도 뇌수술 장비의 임상적 적용 : Zeiss SMN System)

  • Lee, Chea Heuck;Lee, Ho Yeon;Whang, Choong Jin
    • Journal of Korean Neurosurgical Society
    • /
    • v.29 no.1
    • /
    • pp.72-77
    • /
    • 2000
  • The authors describe their experience with the interactive image-guided Zeiss SMN system, which was applied to 20 patients with various intracranial lesions over one year. Preoperative radiologic evaluation was by CT scan in 6 cases and by MRI in 14 cases. In all except one case, the average fiducial registration error was less than 2 mm. There was no statistically significant difference in registration error between CT and MR images. The system was considered relatively stable with respect to both software and hardware. It was also useful for designing the scalp incision and bone flap and for assessing the extent of resection in tumors, especially gliomas. Moreover, it was helpful for evaluating complex surgical anatomy in skull base surgery.


Efficient VLSI Architecture of Full-Image Guided Filter Based on Two-Pass Model (양방향 모델을 적용한 Full-image Guided Filter의 효율적인 VLSI 구조)

  • Lee, Gyeore;Park, Taegeun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.11
    • /
    • pp.1507-1514
    • /
    • 2016
  • The full-image guided filter reflects all pixels of the image in filtering by using weight propagation and a two-pass model, whereas the existing guided filter is processed on a kernel-window basis. The computational complexity can therefore be reduced while maintaining the characteristics of the guided filter, such as edge preserving and smoothing. In this paper, we propose an efficient VLSI architecture for the full-image guided filter by analyzing the data dependency, the data frequency, and the PSNR of the image, in order to achieve sufficient speed for applications such as stereo vision and real-time systems. In addition, the proposed scheduling enables real-time processing by minimizing the idle period in weight computation. The proposed VLSI architecture achieves a maximum operating frequency of 214 MHz (image size: 384×288, 965 fps) with 76K gates (internal memory excluded).
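The two-pass weight-propagation model can be illustrated with a 1D recursive filter: a forward pass propagates each output into its neighbor with a weight that collapses across intensity edges, and a backward pass does the same in reverse, so every pixel influences every other while edges are preserved. This is a hedged sketch of the general idea only; the weight function and the `sigma_s`/`sigma_r` parameters are illustrative choices, not the paper's exact formulation or its hardware datapath:

```python
import math

def two_pass_filter(signal, sigma_s=0.9, sigma_r=0.1):
    """Edge-aware smoothing of a 1D signal via forward/backward weight propagation."""
    n = len(signal)
    out = list(signal)
    # Forward pass: propagate left-to-right; the weight shrinks
    # exponentially with the intensity difference, so propagation
    # effectively stops at edges.
    for i in range(1, n):
        w = sigma_s * math.exp(-abs(signal[i] - signal[i - 1]) / sigma_r)
        out[i] = (1 - w) * signal[i] + w * out[i - 1]
    # Backward pass: propagate right-to-left over the forward result,
    # so information from both sides reaches every sample.
    for i in range(n - 2, -1, -1):
        w = sigma_s * math.exp(-abs(signal[i] - signal[i + 1]) / sigma_r)
        out[i] = (1 - w) * out[i] + w * out[i + 1]
    return out

# A noisy step edge: the flat regions are smoothed, the step survives.
smoothed = two_pass_filter([0.0, 0.02, 0.01, 1.0, 0.98, 1.02])
```

Because each pass is a single recursive sweep, the work per pixel is constant regardless of the effective filter support, which is what makes the full-image formulation attractive for a streaming VLSI datapath.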

Implementation of Virtual Instrumentation based Realtime Vision Guided Autopilot System and Onboard Flight Test using Rotary UAV (가상계측기반 실시간 영상유도 자동비행 시스템 구현 및 무인 로터기를 이용한 비행시험)

  • Lee, Byoung-Jin;Yun, Suk-Chang;Lee, Young-Jae;Sung, Sang-Kyung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.9
    • /
    • pp.878-886
    • /
    • 2012
  • This paper investigates the implementation and flight test of a real-time vision-guided autopilot system based on a virtual instrumentation platform. A graphical design process via the virtual instrumentation platform is used for image processing, communication between systems, vehicle dynamics control, and vision-coupled guidance algorithms. A significant objective of the algorithm is to achieve an autopilot robust to wind and irregular image acquisition conditions. For robust vision-guided path tracking and hovering performance, the flight path guidance logic is combined on a multi-conditional basis with a position estimation algorithm coupled with the vehicle attitude dynamics. An onboard flight test of the developed real-time vision-guided autopilot system was conducted using a rotary UAV with full attitude control capability. Outdoor flight tests demonstrated that the designed vision-guided autopilot system succeeded in hovering the UAV within several meters of a ground target in generally windy conditions.

In-House Developed Surface-Guided Repositioning and Monitoring System to Complement In-Room Patient Positioning System for Spine Radiosurgery

  • Kim, Kwang Hyeon;Lee, Haenghwa;Sohn, Moon-Jun;Mun, Chi-Woong
    • Progress in Medical Physics
    • /
    • v.32 no.2
    • /
    • pp.40-49
    • /
    • 2021
  • Purpose: This study aimed to develop a surface-guided radiosurgery system customized for a neurosurgery clinic that could be used as an auxiliary system for improving accuracy, monitoring patient movement during hypofractionated radiosurgery, and minimizing geometric misses. Methods: RGB-D cameras were installed in the treatment room, and a monitoring system was constructed to perform a three-dimensional (3D) scan of the patient's body surface and express it as a point cloud. This could be used to confirm the exact position of the patient's body and monitor movement during radiosurgery. The image from the system was matched with the computed tomography (CT) image, and the positional accuracy was compared with that of the existing system to evaluate the setup accuracy. Results: The user interface was configured to register the patient and display the setup image, positioning the setup location by matching the 3D points on the patient's body with the CT image. The position-difference error was within a 1-mm distance (min, -0.21 mm; max, 0.63 mm). Compared with the existing system, the differences were x=0.08 mm, y=0.13 mm, and z=0.26 mm. Conclusions: We developed a surface-guided repositioning and monitoring system that can be customized and applied in a radiosurgery environment with an existing linear accelerator. It was confirmed that this system could easily be applied for accurate patient repositioning and inter-treatment motion monitoring.
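The motion check such a surface-guided system performs can be sketched as a point-cloud distance test: for each point of the live 3D surface scan, find the nearest point in the reference (setup) cloud and flag motion when the mean distance exceeds a tolerance. This is a minimal brute-force sketch under assumed inputs (the paper's actual matching pipeline and tolerance are not specified here); a KD-tree would replace the pairwise search in practice:

```python
import numpy as np

def mean_surface_distance(live_pts, ref_pts):
    """Mean nearest-neighbour distance from each live point to the reference cloud."""
    L = np.asarray(live_pts, float)[:, None, :]   # shape (n, 1, 3)
    R = np.asarray(ref_pts, float)[None, :, :]    # shape (1, m, 3)
    d = np.sqrt(((L - R) ** 2).sum(axis=2))       # (n, m) pairwise distances
    return float(d.min(axis=1).mean())

def patient_moved(live_pts, ref_pts, tol_mm=1.0):
    """Flag motion when the live surface drifts beyond tol_mm from the setup scan."""
    return mean_surface_distance(live_pts, ref_pts) > tol_mm
```

Run continuously against the RGB-D stream, this yields a simple inter-treatment motion alarm; the 1-mm default tolerance here is only an illustrative choice.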

Computer Integrated Surgical Robot System for Spinal Fusion

  • Kim Sungmin;Chung Goo Bong;Oh Se Min;Yi Byung-Ju;Kim Whee Kuk;Park Jong Il;Kim Young Soo
    • Journal of Biomedical Engineering Research
    • /
    • v.26 no.5
    • /
    • pp.265-270
    • /
    • 2005
  • A new computer-integrated surgical robot system is composed of a surgical robot, a surgical planning system, and an optical tracking system. The system assists the surgeon, and takes the surgeon's place, in inserting a pedicle screw in spinal fusion. Compared to pure surgical navigation systems as well as conventional methods for spinal fusion, it achieves better accuracy by compensating for impending movement of the surgical target area. Furthermore, the robot can position and guide needles, drills, and other surgical instruments, or conduct the drilling/screwing directly. Preoperatively, the desired entry point, orientation, and depth of the surgical tools for pedicle screw insertion are determined by the surgical planning system based on CT/MR images. Intraoperatively, position information on the surgical instruments and the targeted surgical area is obtained from the navigation system. Two exemplary experiments employing the developed image-guided surgical robot system are conducted.

Image-guided surgery and craniofacial applications: mastering the unseen

  • Wang, James C.;Nagy, Laszlo;Demke, Joshua C.
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.37
    • /
    • pp.43.1-43.5
    • /
    • 2015
  • Image-guided surgery potentially enhances intraoperative safety and outcomes in a variety of craniomaxillofacial procedures. We explore the efficiency of one intraoperative navigation system in a single complex craniofacial case, review the initial and recurring costs, and estimate the added cost (e.g., additional setup time, registration). We discuss the potential challenges and benefits of utilizing image-guided surgery in our specific case, including its value for educational and teaching purposes, and compare this with traditional osteotomies that rely on a surgeon's thorough understanding of anatomy coupled with tactile feedback to blindly guide the osteotome during surgery. A 13-year-old presented with untreated syndromic multi-suture synostosis, brachycephaly, severe exorbitism, and midface hypoplasia. For now, initial costs are high and recurring costs are relatively low, and image-guided surgery is perceived to be an excellent teaching tool for visualizing difficult and often unseen anatomy through computerized software and multi-planar real-time images.

Optical Imaging Technology for Real-time Tumor Monitoring

  • Shin, Yoo-kyoung;Eom, Joo Beom
    • Medical Lasers
    • /
    • v.10 no.3
    • /
    • pp.123-131
    • /
    • 2021
  • Optical imaging modalities with real-time, non-invasive, in vivo, and high-resolution properties have been widely studied for image-guided surgery. In this review, we introduce two optical imaging systems that could be the core of image-guided surgery, and describe their system configuration, implementation, and operation methods. First, we introduce the optical coherence tomography (OCT) system implemented by our research group. This system is implemented based on a swept source and has an axial resolution of 11 μm and a lateral resolution of 22 μm. Second, we introduce a fluorescence imaging system, implemented based on the absorption and fluorescence wavelengths of indocyanine green (ICG) with a light-emitting diode (LED) light source. To confirm the performance of the two imaging systems, human malignant melanoma cells were injected into BALB/c nude mice to create a xenograft model, and OCT images of the cancer were compared with pathological slide images. In addition, in a mouse model, an intravenous injection of indocyanine green was used with the fluorescence imaging system to image flow along blood vessels in real time and to detect sentinel lymph nodes, which could be very important for cancer staging. Finally, polarization-sensitive OCT for finding the boundaries of cancer in real time and real-time image-guided surgery using a developed contrast agent and the fluorescence imaging system were introduced.

The development of Frameless Image-Guided Surgery system based on magnetic field digitizers (마그네틱 센서를 이용한 영상유도 뇌정위 시스템 개발)

  • Woo, J.H.;Jang, D.P.;Kim, Y.S.;Kim, Sun-I.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1998 no.11
    • /
    • pp.269-270
    • /
    • 1998
  • The image-guided surgery (IGS) system has become well known in the fields of neurosurgery and spine surgery. The patient's anatomy is first registered to preoperatively acquired CT/MRI data using a point matching algorithm. A magnetic field digitizer was used to measure the physical-space data, and the system was based on a Unix workstation. To evaluate the spatial accuracy of the interactive IGS system, a phantom consisting of rods of varied height and known location was used. The RMS error between the CT/MR images and the real locations was 3-4 mm. For greater convenience during surgery, we provide various image display modules.
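The point-matching registration step described above is commonly solved with the Kabsch/SVD method: given corresponding fiducial positions in image space and in physical (digitizer) space, estimate the rigid transform between them and report the residual as an RMS error, as in the phantom evaluation. A hedged sketch of that standard approach (the paper's exact algorithm is not specified; the coordinates in the test are made up for illustration):

```python
import numpy as np

def register_points(image_pts, physical_pts):
    """Estimate rotation R and translation t mapping physical -> image space.

    Uses the Kabsch/SVD method on corresponding point pairs and returns
    (R, t, rms), where rms is the residual fiducial registration error.
    """
    P = np.asarray(physical_pts, float)
    Q = np.asarray(image_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    residuals = P @ R.T + t - Q                    # per-fiducial error vectors
    rms = float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
    return R, t, rms
```

With noise-free corresponding points the RMS residual is zero; digitizer noise and fiducial localization error are what produce the millimetre-scale RMS figures reported for phantom studies.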


Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients

  • Jeon, Wan;An, Hyun Joon;Kim, Jung-in;Park, Jong Min;Kim, Hyoungnyoun;Shin, Kyung Hwan;Chie, Eui Kyu
    • Journal of Radiation Protection and Research
    • /
    • v.44 no.4
    • /
    • pp.149-155
    • /
    • 2019
  • Background: A magnetic resonance (MR) image-guided radiation therapy system enables real-time MR-guided radiotherapy (RT) without additional radiation exposure to patients during treatment. However, MR images lack the electron density information required for dose calculation. An image fusion algorithm with deformable registration between MR and computed tomography (CT) images was developed to solve this issue. However, the delivered dose may differ due to volumetric changes during the image registration process. In this respect, a synthetic CT generated from the MR image would provide more accurate information for real-time RT. Materials and Methods: We analyzed 1,209 MR images from 16 patients who underwent MR-guided RT. Structures were divided into five tissue types (air, lung, fat, soft tissue, and bone) according to the Hounsfield units of the deformed CT. Using a deep learning model (U-NET), synthetic CT images were generated from the MR images acquired during RT. These synthetic CT images were compared to the deformed CT generated using deformable registration, with a pixel-to-pixel match between the two. Results and Discussion: In two test image sets, the average pixel match rate per section was more than 70% (67.9-80.3% and 60.1-79%; synthetic CT pixels/deformed planning CT pixels), and the average pixel match rate over the entire patient image set was 69.8%. Conclusion: The synthetic CT generated from the MR images was comparable to the deformed CT, suggesting possible use for real-time RT. The deep learning model may further improve the match rate of synthetic CT with larger MR imaging datasets.
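The two evaluation ingredients in the abstract above, assigning each CT voxel to one of the five tissue types by its Hounsfield unit and computing a pixel-to-pixel match rate between synthetic and deformed CT, can be sketched as follows. The HU thresholds are typical textbook ranges, not the values used in the paper:

```python
import numpy as np

# Illustrative HU ranges for the five tissue classes; the paper's exact
# thresholds are not given in the abstract.
TISSUE_THRESHOLDS = [(-1100, -950, "air"),
                     (-950, -200, "lung"),
                     (-200, -50, "fat"),
                     (-50, 150, "soft tissue"),
                     (150, 3000, "bone")]

def tissue_class(hu):
    """Map a Hounsfield unit value to one of the five tissue types."""
    for lo, hi, name in TISSUE_THRESHOLDS:
        if lo <= hu < hi:
            return name
    return "air" if hu < -1100 else "bone"  # clamp out-of-range values

def pixel_match_rate(synthetic_classes, deformed_classes):
    """Fraction of pixels assigned the same tissue class in both images."""
    s = np.asarray(synthetic_classes)
    d = np.asarray(deformed_classes)
    return float(np.mean(s == d))
```

Averaging this per-section match rate across slices and patients reproduces the kind of summary figure (e.g., ~70%) the study reports.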