• Title/Abstract/Keyword: Image Guided System

Search results: 119 (processing time: 0.034 s)

Image-Guided Surgery System Using the Stereo Matching Method

  • 강현수;이호진;문찬홍;문원진;김형진;최근호;함영국;이수열;변홍식
    • 대한의용생체공학회:의공학회지, Vol. 24, No. 4, pp.339-346, 2003
  • Magnetic resonance imaging provides anatomical information with excellent resolution and is therefore widely used in clinical surgery. Brain surgery that combines image-processing techniques with MRI-based image guidance can be of great help to neurosurgeons. This paper describes the development of an image-guided surgery system for interventional procedures based on a stereo matching method. To perform biopsies, MRI markers, camera markers, and probe markers were precisely fabricated, and a phantom was built to verify the accuracy of the system. Experiments were carried out on a 1.5 Tesla MRI system using the fabricated markers and phantom. The error of the implemented system was about 1.5% in the phantom experiment and within 3 mm in the animal experiment, confirming that it is accurate enough for clinical application. The image-guided surgery system using the proposed stereo matching method showed better performance than existing methods.
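
The abstract does not detail how marker positions are recovered from the two camera views. As a minimal sketch of the underlying stereo step, assuming two calibrated pinhole cameras, the snippet below triangulates one marker from its pixel coordinates in both images; the intrinsics, baseline, and pixel values are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical shared intrinsics (focal length 800 px, 640x480 images).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Camera 1 at the origin; camera 2 offset by a 10 cm baseline along x.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Pixel coordinates of the same marker detected in each view (2xN arrays).
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[240.0], [240.0]])

# Triangulate to homogeneous coordinates and convert to Euclidean 3D.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print("Marker position in the camera-1 frame [m]:", X)  # approx. (0, 0, 1)
```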

Clinical Application of Image-Guided Surgery: Zeiss SMN System

  • 이채혁;이호연;황충진
    • Journal of Korean Neurosurgical Society, Vol. 29, No. 1, pp.72-77, 2000
  • The authors describe their experience with the interactive image-guided Zeiss SMN system, which was applied to 20 patients with various intracranial lesions over one year. Preoperative radiologic evaluation was a CT scan in 6 cases and MRI in 14 cases. In all but one case, the average fiducial registration error was less than 2 mm, and there was no statistically significant difference in registration error between CT and MR images. The system was considered relatively stable with respect to both software and hardware. It was useful for designing the scalp incision and bone flap and for assessing the extent of resection in tumors, especially gliomas. Moreover, it was helpful for evaluating complex surgical anatomy in skull base surgery.

Efficient VLSI Architecture of Full-Image Guided Filter Based on Two-Pass Model

  • 이겨레;박태근
    • 한국통신학회논문지, Vol. 41, No. 11, pp.1507-1514, 2016
  • Unlike the conventional kernel-window-based guided filter, in which only the pixels inside the kernel window contribute to filtering, the full-image guided filter applies a weight-propagation scheme and a two-pass model so that every pixel of the image contributes to the filtering. This reduces the computational complexity while preserving the characteristic properties of the guided image filter, such as edge preservation and smoothing. In this paper, an efficient hardware architecture is proposed so that the full-image guided filter can be applied to stereo vision and other real-time systems that require faster processing. A suitable hardware structure was derived through analysis of the data dependencies arising in the filtering process, PSNR analysis of the images, and data-frequency analysis. In addition, efficient scheduling was applied to minimize the idle intervals of the weight-computation module that implements the two-pass model, enabling real-time processing. When synthesized with the Dongbu HiTek 0.11 um standard-cell library, the proposed hardware architecture showed a maximum operating frequency of 214 MHz (965 fps for a 384*288 image) and a hardware complexity of 76K gates (excluding internal memory).
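
For context, the conventional kernel-window guided filter that the full-image variant builds on can be expressed with a few box filters. The sketch below follows that standard formulation only; it does not reproduce the paper's two-pass weight-propagation scheme or its hardware scheduling.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Conventional kernel-window guided filter on 2D float arrays:
    guide image I, input image p, window radius r, regularization eps."""
    size = 2 * r + 1
    # Box-filtered local statistics over each (2r+1)x(2r+1) window.
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)

    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p

    # Per-window linear model q = a*I + b.
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I

    # Average the coefficients over all windows covering each pixel.
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```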

Implementation of a Virtual Instrumentation-Based Realtime Vision-Guided Autopilot System and Onboard Flight Test Using a Rotary UAV

  • 이병진;윤석창;이영재;성상경
    • 제어로봇시스템학회논문지, Vol. 18, No. 9, pp.878-886, 2012
  • This paper investigates the implementation and flight test of a realtime vision-guided autopilot system based on a virtual instrumentation platform. A graphical design process on the virtual instrumentation platform is used throughout for image processing, communication between systems, vehicle dynamics control, and vision-coupled guidance algorithms. A significant objective of the algorithm is to achieve an autopilot that is robust to wind and irregular image acquisition conditions. For robust vision-guided path tracking and hovering performance, the flight path guidance logic is combined on a multi-conditional basis with a position estimation algorithm coupled to the vehicle attitude dynamics. An onboard flight test of the developed realtime vision-guided autopilot system was conducted using a rotary UAV with full attitude control capability. The outdoor flight test demonstrated that the designed vision-guided autopilot system succeeded in hovering the UAV above the ground target to within several meters under generally windy conditions.
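
The paper's multi-conditional guidance logic is not spelled out in the abstract. As a rough illustration of vision-coupled hovering, the sketch below maps the target's pixel offset from the image center to a lateral velocity command under a downward-looking pinhole-camera assumption; the gain, focal length, and axis convention are hypothetical.

```python
import numpy as np

K_P = 0.4          # proportional gain: m/s of command per m of ground offset
FOCAL_PX = 600.0   # assumed focal length in pixels
IMG_W, IMG_H = 640, 480

def hover_velocity_command(target_px, altitude_m):
    """Drive the UAV back over the ground target seen at pixel target_px."""
    u, v = target_px
    # Pixel offset of the target from the image center.
    du, dv = u - IMG_W / 2.0, v - IMG_H / 2.0
    # Pinhole approximation of the metric offset on the ground plane.
    dx = du * altitude_m / FOCAL_PX
    dy = dv * altitude_m / FOCAL_PX
    # Proportional command toward the target, saturated for safety.
    # (The mapping of image axes to body axes depends on camera mounting.)
    return np.clip([K_P * dx, K_P * dy], -2.0, 2.0)

print(hover_velocity_command((400, 300), altitude_m=10.0))
```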

In-House Developed Surface-Guided Repositioning and Monitoring System to Complement In-Room Patient Positioning System for Spine Radiosurgery

  • Kim, Kwang Hyeon;Lee, Haenghwa;Sohn, Moon-Jun;Mun, Chi-Woong
    • 한국의학물리학회지:의학물리, Vol. 32, No. 2, pp.40-49, 2021
  • Purpose: This study aimed to develop a surface-guided radiosurgery system customized for a neurosurgery clinic that could serve as an auxiliary system for improving setup accuracy, monitoring patient movement during hypofractionated radiosurgery, and minimizing geometric misses. Methods: RGB-D cameras were installed in the treatment room, and a monitoring system was constructed to perform a three-dimensional (3D) scan of the patient's body surface and express it as a point cloud. This could be used to confirm the exact position of the patient's body and to monitor movement during radiosurgery. The image from the system was matched with the computed tomography (CT) image, and the positional accuracy was compared with that of the existing system to evaluate the setup accuracy. Results: The user interface was configured to register the patient and display the setup image so that the setup location could be positioned by matching the 3D points on the patient's body with the CT image. The positional error was within a 1-mm distance (min, -0.21 mm; max, 0.63 mm). Compared with the existing system, the differences were x=0.08 mm, y=0.13 mm, and z=0.26 mm. Conclusions: We developed a surface-guided repositioning and monitoring system that can be customized and applied in a radiosurgery environment with an existing linear accelerator. It was confirmed that this system could be easily applied for accurate patient repositioning and inter-treatment motion monitoring.
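
The abstract does not state how the scanned point cloud is matched to the CT image. One common choice for this kind of surface alignment is rigid ICP, sketched below with Open3D as an assumed toolkit; the correspondence threshold and coordinate conventions are illustrative, not the authors' implementation.

```python
import numpy as np
import open3d as o3d

def register_surface_to_ct(surface_xyz, ct_surface_xyz, init=np.eye(4)):
    """Rigidly align an RGB-D body-surface scan (Nx3, mm) to a surface
    extracted from the planning CT using point-to-point ICP."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(surface_xyz))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(ct_surface_xyz))

    result = o3d.pipelines.registration.registration_icp(
        src, tgt,
        max_correspondence_distance=5.0,  # mm, illustrative threshold
        init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # result.transformation maps scanner coordinates into CT coordinates;
    # its translation part indicates the couch shift needed for setup.
    return result.transformation, result.inlier_rmse
```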

Computer Integrated Surgical Robot System for Spinal Fusion

  • Kim Sungmin;Chung Goo Bong;Oh Se Min;Yi Byung-Ju;Kim Whee Kuk;Park Jong Il;Kim Young Soo
    • 대한의용생체공학회:의공학회지, Vol. 26, No. 5, pp.265-270, 2005
  • The new computer-integrated surgical robot system is composed of a surgical robot, a surgical planning system, and an optical tracking system. The system plays the role of an assisting surgeon and can take the place of the surgeon in inserting a pedicle screw during spinal fusion. Compared with pure surgical navigation systems as well as conventional methods for spinal fusion, it achieves better accuracy by compensating for movement of the surgical target area. Furthermore, the robot can position and guide needles, drills, and other surgical instruments, or conduct drilling/screwing directly. Preoperatively, the desired entry point, orientation, and depth of the surgical tools for pedicle screw insertion are determined by the surgical planning system based on CT/MR images. Intraoperatively, position information on the surgical instruments and the targeted surgical area is obtained from the navigation system. Two exemplary experiments employing the developed image-guided surgical robot system are conducted.
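
As a small illustration of the planning output described above, the sketch below converts a planned entry point and target point into the tool-axis direction and insertion depth that the robot would be commanded to realize; the coordinates and function name are hypothetical.

```python
import numpy as np

def pedicle_screw_trajectory(entry_mm, target_mm):
    """Return the unit drilling direction and insertion depth (mm)
    from a planned entry point to a planned target point."""
    entry = np.asarray(entry_mm, dtype=float)
    target = np.asarray(target_mm, dtype=float)
    axis = target - entry
    depth = float(np.linalg.norm(axis))   # required insertion depth
    return axis / depth, depth            # unit tool-axis direction, depth

direction, depth = pedicle_screw_trajectory([10.0, -32.5, 120.0], [18.0, -60.0, 95.0])
print(direction, depth)
```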

Image-guided surgery and craniofacial applications: mastering the unseen

  • Wang, James C.;Nagy, Laszlo;Demke, Joshua C.
    • Maxillofacial Plastic and Reconstructive Surgery, Vol. 37, pp.43.1-43.5, 2015
  • Image-guided surgery potentially enhances intraoperative safety and outcomes in a variety of craniomaxillofacial procedures. We explore the efficiency of one intraoperative navigation system in a single complex craniofacial case, review the initial and recurring costs, and estimate the added cost (e.g., additional setup time, registration). We discuss the potential challenges and benefits of utilizing image-guided surgery in our specific case, including its value for educational and teaching purposes, and compare this with traditional osteotomies that rely on a surgeon's thorough understanding of anatomy coupled with tactile feedback to blindly guide the osteotome during surgery. A 13-year-old presented with untreated syndromic multi-suture synostosis, brachycephaly, severe exorbitism, and midface hypoplasia. For now, initial costs are high and recurring costs are relatively low, and image-guided surgery is perceived as an excellent teaching tool for visualizing difficult and often unseen anatomy through computerized software and multi-planar real-time images.

Optical Imaging Technology for Real-time Tumor Monitoring

  • Shin, Yoo-kyoung;Eom, Joo Beom
    • Medical Lasers, Vol. 10, No. 3, pp.123-131, 2021
  • Optical imaging modalities that are real-time, non-invasive, in vivo, and high-resolution have been widely studied for image-guided surgery. In this review, we introduce two optical imaging systems that could form the core of image-guided surgery and describe their configuration, implementation, and operation. First, we introduce the optical coherence tomography (OCT) system implemented by our research group. The system is based on a swept source and has an axial resolution of 11 ㎛ and a lateral resolution of 22 ㎛. Second, we introduce a fluorescence imaging system, implemented around the absorption and fluorescence wavelengths of indocyanine green (ICG) with a light-emitting diode (LED) light source. To confirm the performance of the two imaging systems, human malignant melanoma cells were injected into BALB/c nude mice to create a xenograft model, and OCT images of the tumor were compared with pathological slide images. In addition, in a mouse model, intravenous injection of indocyanine green was combined with the fluorescence imaging system to capture real-time images of the dye moving along blood vessels and to detect sentinel lymph nodes, which can be very important for cancer staging. Finally, polarization-sensitive OCT for finding tumor boundaries in real time and real-time image-guided surgery using a developed contrast agent with the fluorescence imaging system are introduced.

Development of a Frameless Image-Guided Surgery System Based on Magnetic Field Digitizers

  • 우지환;장동표;김영수;김선일
    • 대한의용생체공학회:학술대회논문집, 1998 Fall Conference, pp.269-270, 1998
  • Image-guided surgery (IGS) systems have become well known in the fields of neurosurgery and spine surgery. The patient's anatomy is first registered to preoperatively acquired CT/MRI data using a point matching algorithm. A magnetic field digitizer was used to measure the physical-space data, and the system was implemented on a Unix workstation. To evaluate the spatial accuracy of the interactive IGS system, a phantom consisting of rods of varying heights at known locations was used. The RMS error between the CT/MR images and the physical locations was 3-4 mm. For greater convenience during surgery, various image display modules are provided.
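
The point-matching step can be illustrated with the classic paired-point (Kabsch/SVD) rigid registration, which also yields the RMS fiducial residual quoted above. The sketch below is a generic implementation, not the authors' code.

```python
import numpy as np

def point_match(fiducials_patient, fiducials_image):
    """Find the rotation R and translation t mapping digitizer-space
    fiducials onto their CT/MR counterparts, plus the RMS residual."""
    P = np.asarray(fiducials_patient, dtype=float)   # Nx3, physical space
    Q = np.asarray(fiducials_image, dtype=float)     # Nx3, image space

    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)  # center both point sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)              # SVD of cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation, det = +1
    t = Q.mean(axis=0) - R @ P.mean(axis=0)

    residuals = (R @ P.T).T + t - Q                  # per-fiducial errors
    rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return R, t, rms
```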

Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients

  • Jeon, Wan;An, Hyun Joon;Kim, Jung-in;Park, Jong Min;Kim, Hyoungnyoun;Shin, Kyung Hwan;Chie, Eui Kyu
    • Journal of Radiation Protection and Research, Vol. 44, No. 4, pp.149-155, 2019
  • Background: A magnetic resonance (MR) image-guided radiation therapy system enables real-time MR-guided radiotherapy (RT) without additional radiation exposure to patients during treatment. However, MR images lack the electron density information required for dose calculation. An image fusion algorithm with deformable registration between MR and computed tomography (CT) images was developed to address this issue. However, the delivered dose may differ because of volumetric changes during the image registration process. In this respect, a synthetic CT generated from the MR image would provide more accurate information for real-time RT. Materials and Methods: We analyzed 1,209 MR images from 16 patients who underwent MR-guided RT. Structures were divided into five tissue types (air, lung, fat, soft tissue, and bone) according to the Hounsfield units of the deformed CT. Using a deep learning model (U-Net), synthetic CT images were generated from the MR images acquired during RT. These synthetic CT images were compared with the deformed CT generated using deformable registration. A pixel-to-pixel comparison was conducted between the synthetic and deformed CT images. Results and Discussion: In the two test image sets, the average pixel match rate per section was more than 70% (67.9 to 80.3% and 60.1 to 79%; synthetic CT pixels/deformed planning CT pixels), and the average pixel match rate over the entire patient image set was 69.8%. Conclusion: The synthetic CT generated from the MR images was comparable to the deformed CT, suggesting possible use for real-time RT. The deep learning model may further improve the match rate of the synthetic CT with larger MR imaging datasets.
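
The abstract does not give the Hounsfield-unit cut-offs used for the five tissue classes. The sketch below shows one way the pixel-to-pixel match rate between a synthetic CT and the deformed planning CT could be computed; the HU thresholds are illustrative assumptions.

```python
import numpy as np

# Illustrative HU edges separating air | lung | fat | soft tissue | bone.
HU_EDGES = [-900.0, -250.0, -50.0, 150.0]

def tissue_labels(ct_hu):
    """Map a CT (or synthetic CT) array in Hounsfield units to classes
    0=air, 1=lung, 2=fat, 3=soft tissue, 4=bone."""
    return np.digitize(ct_hu, HU_EDGES)

def pixel_match_rate(synthetic_ct, deformed_ct):
    """Fraction of pixels whose tissue class agrees between the U-Net
    synthetic CT and the deformably registered planning CT."""
    return float(np.mean(tissue_labels(synthetic_ct) == tissue_labels(deformed_ct)))
```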