• Title/Summary/Keyword: Eye Image


Reduction of Radiation Dose to Eye Lens in Cerebral 3D Rotational Angiography Using Head Off-Centering by Table Height Adjustment: A Prospective Study

  • Jae-Chan Ryu;Jong-Tae Yoon;Byung Jun Kim;Mi Hyeon Kim;Eun Ji Moon;Pae Sun Suh;Yun Hwa Roh;Hye Hyeon Moon;Boseong Kwon;Deok Hee Lee;Yunsun Song
    • Korean Journal of Radiology
    • /
    • v.24 no.7
    • /
    • pp.681-689
    • /
    • 2023
  • Objective: Three-dimensional rotational angiography (3D-RA) is increasingly used for the evaluation of intracranial aneurysms (IAs); however, radiation exposure to the lens is a concern. We investigated the effect of head off-centering by adjusting table height on the lens dose during 3D-RA and its feasibility in patient examination. Materials and Methods: The effect of head off-centering during 3D-RA on the lens radiation dose at various table heights was investigated using a RANDO head phantom (Alderson Research Labs). We prospectively enrolled 20 patients (58.0 ± 9.4 years) with IAs who were scheduled to undergo bilateral 3D-RA. In all patients' 3D-RA, the lens dose-reduction protocol involving elevation of the examination table was applied to one internal carotid artery, and the conventional protocol was applied to the other. The lens dose was measured using photoluminescent glass dosimeters (GD-352M, AGC Techno Glass Co., Ltd.), and radiation dose metrics were compared between the two protocols. Image quality was quantitatively analyzed using source images for image noise, signal-to-noise ratio, and contrast-to-noise ratio. Additionally, three reviewers qualitatively assessed the image quality using a five-point Likert scale. Results: The phantom study showed that the lens dose was reduced by an average of 38% per 1 cm increase in table height. In the patient study, the dose-reduction protocol (elevating the table height by an average of 2.3 cm) led to an 83% reduction in the median dose from 4.65 mGy to 0.79 mGy (P < 0.001). There were no significant differences between dose-reduction and conventional protocols in the kerma area product (7.34 vs. 7.40 Gy·cm², P = 0.892), air kerma (75.7 vs. 75.1 mGy, P = 0.872), and image quality. Conclusion: The lens radiation dose was significantly affected by table height adjustment during 3D-RA. Intentional head off-centering by elevation of the table is a simple and effective way to reduce the lens dose in clinical practice.
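As a back-of-the-envelope check on the numbers above, the phantom result (roughly 38% lens-dose reduction per 1 cm of table elevation) can be applied to the mean 2.3 cm elevation used in the patient study. The multiplicative compounding model below is our own assumption, not the authors' fitted dose curve:

```python
# Sketch: predicted lens-dose reduction from the phantom result, assuming the
# ~38% reduction per 1 cm of table elevation compounds multiplicatively
# (an assumption; the paper reports only a per-cm average).

def predicted_dose_ratio(elevation_cm: float, reduction_per_cm: float = 0.38) -> float:
    """Fraction of the original lens dose remaining after elevating the table."""
    return (1.0 - reduction_per_cm) ** elevation_cm

ratio = predicted_dose_ratio(2.3)  # mean table elevation in the patient study
print(f"predicted remaining dose: {ratio:.0%}")  # ~33% remaining (~67% reduction)
# The patient study observed a larger reduction (83%: 4.65 -> 0.79 mGy),
# so this simple compounding model underestimates the clinical effect.
```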

Evaluation on the Usefulness of Alternative Radiopharmaceutical by Particle size in Sentinel Lymphoscintigraphy (감시림프절 검사 시 입자크기에 따른 대체 방사성의약품의 유용성평가)

  • Jo, Gwang Mo;Jeong, Yeong Hwan;Choi, Do Cheol;Shin, Ju Cheol
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.2
    • /
    • pp.36-41
    • /
    • 2016
  • Purpose: Sentinel lymphoscintigraphy (SLS) has been performed using only $^{99m}Tc-phytate$; if its supply is temporarily interrupted, there is no alternative radiopharmaceutical. The aim of this study was to measure the particle sizes of candidate radiopharmaceuticals and identify those that could substitute for $^{99m}Tc-phytate$. Materials and Methods: Particle sizes were analyzed with a nano-particle analyzer, and radiopharmaceuticals with particle sizes known to be useful for SLS were selected. $^{99m}Tc-DPD$, $^{99m}Tc-MAG3$, and $^{99m}Tc-DMSA$ were compared as experimental groups against $^{99m}Tc-phytate$ as the control. For the in-vivo experiment, each radiopharmaceutical was injected intradermally into both feet, dynamic and delayed static lymphoscintigraphy images were acquired, and the inguinal lymph nodes were inspected with the naked eye. Results: Measured particle sizes were phytate 105-255 nm (81.9%), MAG3 91-255 nm (98.7%), DPD 105-342 nm (77.3%), DMSA 164-342 nm (99.2%), MAA 1281-2305 nm (90.6%), DTPA 342-1106 nm (79.4%), and HDP 295-955 nm (94%). On the delayed static images, the inguinal lymph nodes were visible to the naked eye in all groups except $^{99m}Tc-MAG3$. Conclusion: Based on the particle-size analysis and in-vivo imaging, $^{99m}Tc-DPD$ and $^{99m}Tc-DMSA$ are possible emergency alternatives to $^{99m}Tc-phytate$.


Reconstruction of Stereo MR Angiography Optimized to View Position and Distance using MIP (최대강도투사를 이용한 관찰 위치와 거리에 최적화 된 입체 자기공명 뇌 혈관영상 재구성)

  • Shin, Seok-Hyun;Hwang, Do-Sik
    • Investigative Magnetic Resonance Imaging
    • /
    • v.16 no.1
    • /
    • pp.67-75
    • /
    • 2012
  • Purpose: We studied an enhanced method for viewing brain vessels using Magnetic Resonance Angiography (MRA). Noting that Maximum Intensity Projection (MIP) images are often used to evaluate the arteries of the neck and brain, we propose a new method for viewing brain vessels as stereo images in 3D space, more flexible and more accurate than the conventional method. Materials and Methods: Using a 3T Siemens Tim Trio MRI scanner with a 4-channel head coil, we acquired 3D MRA brain data from volunteers with the head fixed, using a phase-contrast pulse sequence. The MRA data are rotated in 3D according to the view angle of each eye; the optimal view (projection) angle is determined by the distance between the eye and the center of the data. The rotated data are projected along the projection line, keeping only the highest intensity values. The left- and right-view MIP images are then combined by anaglyph imaging to produce the optimal stereoscopic MIP image. Results: The resulting images show that the proposed method allows MIP images to be viewed from any direction of the MRA data, which is impossible with the conventional method. Moreover, by considering the disparity and the distance from the viewer to the center of the MRA data in spherical coordinates, a more realistic stereo image is obtained. Thus, optimal stereoscopic images can be generated for any viewing position and viewer-to-data distance. Conclusion: The proposed method overcomes the limitation of the conventional method, which shows only a specific projection (along the z-axis), and provides depth information by converting the mono MIP image into a stereoscopic image that accounts for the viewer's position; it can display any view of the MRA data in spherical coordinates. With an optimized algorithm and parallel processing, it may provide useful information for diagnosis and treatment planning in real time.
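The projection-and-anaglyph steps described in this abstract can be sketched as follows. The toy volume, the stand-in rotation, and the red-cyan channel assignment are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum Intensity Projection: keep only the highest value along each ray."""
    return volume.max(axis=axis)

def anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine left/right MIP views into a red-cyan anaglyph (assumed scheme)."""
    h, w = left.shape
    rgb = np.zeros((h, w, 3), dtype=left.dtype)
    rgb[..., 0] = left    # left view -> red channel
    rgb[..., 1] = right   # right view -> green channel
    rgb[..., 2] = right   # right view -> blue channel
    return rgb

# Toy 3D "angiogram": a bright diagonal vessel in a dark volume
vol = np.zeros((32, 32, 32))
for i in range(32):
    vol[i, i, i] = 1.0

left_view = mip(vol, axis=0)
# Stand-in for the per-eye rotation: project a rotated copy for the right eye
right_view = mip(np.rot90(vol, axes=(0, 2)), axis=0)
stereo = anaglyph(left_view, right_view)
print(stereo.shape)  # (32, 32, 3)
```

In the paper the per-eye views come from rotating the volume by an angle derived from the viewer distance in spherical coordinates; the `rot90` here is only a placeholder for that rotation.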

Recognition of Resident Registration Card using ART2-based RBF Network and face Verification (ART2 기반 RBF 네트워크와 얼굴 인증을 이용한 주민등록증 인식)

  • Kim Kwang-Baek;Kim Young-Ju
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.1-15
    • /
    • 2006
  • In Korea, a resident registration card contains personal information such as the current address, resident registration number, face picture, and fingerprint. The plastic resident card currently in use is easy to forge or alter, and forgery techniques grow more sophisticated over time, so whether a card is forged is difficult to judge by naked-eye examination alone. This paper proposes an automatic recognition method that recognizes the resident registration number using a newly proposed refined ART2-based RBF network and authenticates the face picture by template image matching. The method first extracts the areas containing the resident registration number and the date of issue by applying Sobel masking, median filtering, and horizontal smearing to the card image in turn. To improve the extraction of individual codes from these areas, the original image is binarized with a high-frequency-pass filter, and CDM masking is applied to the binarized image to enhance the code information. The individual codes, the targets of recognition, are then extracted by applying a 4-directional contour tracking algorithm to the extracted areas of the binarized image. To recognize the individual codes, this paper proposes a refined ART2-based RBF network that uses ART2 as the learning structure of the middle layer and dynamically adjusts the learning rate of the middle and output layers with a fuzzy control method to improve learning performance. For precise judgement of forgery, the method also supports face authentication using a face template database and template image matching. For performance evaluation, we created altered versions of original card images, including a forged face picture, added noise, variations of contrast and intensity, and image blurring, and used these together with the original images in experiments. The results show that the proposed method is excellent at recognizing individual codes and authenticating faces for the automatic recognition of a resident card.

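The horizontal smearing step in the area-extraction pipeline above is essentially run-length smoothing: short background gaps between dark glyphs on a row are filled so that a number field merges into one connected block. A minimal sketch, with the gap threshold as an assumed value:

```python
def horizontal_smear(row: list[int], max_gap: int = 3) -> list[int]:
    """Run-length smoothing on one binary row (1 = ink, 0 = background):
    fill background runs no longer than max_gap that lie between ink pixels."""
    out = row[:]
    n = len(row)
    i = 0
    while i < n:
        if row[i] == 0:
            j = i
            while j < n and row[j] == 0:
                j += 1
            # fill the gap only if it is short and bounded by ink on both sides
            if i > 0 and j < n and (j - i) <= max_gap:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out

digits = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1]
print(horizontal_smear(digits))  # the short gap is filled; the long gap stays
```

Applied row by row after Sobel masking and median filtering, this turns the digit row of the registration number into a single dark region that is easy to locate.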

ALL-SKY OBSERVATION OF THE 2001 LEONID METEOR STORM: 1. METEOR MAGNITUDE DISTRIBUTION (전천 카메라를 이용한 2001 사자자리 유성우 관측: 1. 유성 등급 분포)

  • 김정한;정종균;김용하;원영인;천무영;임홍서
    • Journal of Astronomy and Space Sciences
    • /
    • v.20 no.4
    • /
    • pp.283-298
    • /
    • 2003
  • The 2001 Leonid meteor storm was observed all over the world, and its most intense flux in decades attracted great interest among both laymen and experts. Its maximum occurred during the dawn hours of Nov. 19 in East Asia, when a moonless clear night at Mt. Bohyun gave us near-perfect observing conditions. Observations were carried out from 01:00 to 05:40 (KST), covering the predicted maximum hours, with an all-sky camera installed for upper-atmosphere airglow research. In this paper we analyze 68 all-sky images obtained during this period, which record 172 meteors. Using the zenithal hourly rate (ZHR) of 3000 and the magnitude distribution index of 2 reported to the International Meteor Organization by visual observers in East Asia, we estimate a limiting magnitude of about 3 for meteors detected in our all-sky images. We then derive magnitudes for the 83 of the 172 detected meteors that have clear pixel-brightness profiles, by comparison with neighboring standard stars. The angular velocity of a meteor, needed to compute its crossing time over an all-sky image, is expressed with a simple formula in the angle between the meteor head and the Leonid radiant. The derived magnitudes of the 83 meteors lie in the range of -6 to -1, and their distribution peaks near -3 mag. These magnitudes are much brighter than the limiting magnitude inferred from comparison with naked-eye observations; the difference may arise from the characteristic difference between nearly instantaneous naked-eye observations and CCD observations with long exposures. We therefore redetermine the meteor magnitudes by adjusting the meteor lasting time to be consistent with the naked-eye observations. The relative distribution of the redetermined magnitudes, which peaks at 0 mag., resembles that of the magnitudes determined by the original method. The relative distribution differs markedly from the monotonically decreasing distributions (with decreasing magnitude) seen for the meteors (magnitudes 1-6) accessible to naked-eye observation. We conclude from the magnitude distribution of our all-sky observation that meteors brighter than about 0 mag. appeared more frequently during the 2001 Leonid maximum hours. The frequent appearance of bright meteors has significant implications for meteor research. We note, however, considerably large uncertainties in magnitudes determined only by comparison with standard stars, due to the unknown lasting times of meteors and the non-linear sensitivity of the all-sky camera.
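Deriving a meteor's magnitude by comparison with neighboring standard stars, as done above, rests on the standard Pogson relation. The sketch below assumes background-subtracted pixel sums as flux proxies (the paper additionally corrects for the meteor's crossing time and the camera's non-linear sensitivity, which are omitted here):

```python
import math

def magnitude_from_reference(flux: float, ref_flux: float, ref_mag: float) -> float:
    """Pogson's relation: m = m_ref - 2.5 * log10(F / F_ref).
    flux and ref_flux are assumed background-subtracted pixel sums."""
    return ref_mag - 2.5 * math.log10(flux / ref_flux)

# A meteor trail 100x brighter than a 2.0-mag standard star:
m = magnitude_from_reference(100.0, 1.0, 2.0)
print(m)  # a 100x flux ratio is exactly 5 magnitudes brighter -> -3.0
```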

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.96-110
    • /
    • 2017
  • Most vehicle detection studies using a conventional or wide-angle lens leave a blind spot in rear detection, and their images are vulnerable to noise and varied external environments. In this paper, we propose a method for detection in harsh external environments with noise, blind spots, etc. First, a fish-eye lens is used to minimize blind spots compared with a wide-angle lens. Because nonlinear radial distortion increases as the lens angle grows, calibration was performed after initializing and optimizing the distortion constant to ensure accuracy. In addition, along with calibration, the original image was analyzed to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally takes considerable computation time, so the well-known Dark Channel Prior algorithm was used to reduce it. Gamma correction was used to correct brightness; a brightness and contrast evaluation was conducted to determine the gamma value needed for correction, using only part of the image rather than the whole to reduce computation time. The computed brightness and contrast values were then used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were registered into a single image to minimize the total processing time. The HOG feature extraction method was then used to detect the vehicle in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate over the existing vehicle detection method.
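Two of the correction steps above, the dark channel computation and gamma correction, can be sketched as follows. The patch size and gamma value are illustrative, not the authors' tuned pipeline:

```python
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 3) -> np.ndarray:
    """Dark Channel Prior statistic: per-pixel minimum over color channels,
    then a local minimum over a patch neighborhood. In fog-free regions this
    stays near 0; fog raises it, which is what dehazing exploits."""
    min_rgb = img.min(axis=2)
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def gamma_correct(img: np.ndarray, gamma: float) -> np.ndarray:
    """Brightness correction: out = 255 * (in/255)^(1/gamma).
    gamma > 1 brightens mid-tones, gamma < 1 darkens them."""
    return (255.0 * (img / 255.0) ** (1.0 / gamma)).astype(np.uint8)

frame = (np.random.default_rng(0).random((8, 8, 3)) * 255).astype(np.uint8)
dc = dark_channel(frame.astype(float))
bright = gamma_correct(frame, gamma=2.2)
print(dc.shape, bright.shape)  # (8, 8) (8, 8, 3)
```

In the paper these two corrections run in parallel and the results are registered into one image before HOG-based detection.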

Patient Setup Aid with Wireless CCTV System in Radiation Therapy (무선 CCTV 시스템을 이용한 환자 고정 보조기술의 개발)

  • Park, Yang-Kyun;Ha, Sung-Whan;Ye, Sung-Joon;Cho, Woong;Park, Jong-Min;Park, Suk-Won;Huh, Soon-Nyung
    • Radiation Oncology Journal
    • /
    • v.24 no.4
    • /
    • pp.300-308
    • /
    • 2006
  • Purpose: To develop a wireless CCTV system in semi-beam's-eye view (BEV) to monitor daily patient setup in radiation therapy. Materials and Methods: To obtain patient images in semi-BEV, CCTV cameras are installed in a custom-made acrylic applicator below the treatment head of a linear accelerator. The images from the cameras are transmitted via radio-frequency signal (~2.4 GHz, 10 mW RF output). An expected problem with this system is radio-frequency interference, which is solved with Cu-foil RF shielding and median-filtering software. The images are analyzed by our custom-made software: a user indicates three anatomical landmarks on the patient surface, and the 3-dimensional structures are automatically obtained and registered by a localization procedure consisting mainly of a stereo matching algorithm and Gauss-Newton optimization. This algorithm is applied to phantom images to investigate setup accuracy. A respiratory gating system based on real-time image processing is also investigated: a line-laser marker projected on the patient's surface is extracted by binary image processing, and the breathing pattern is calculated and displayed in real time. Results: More than 80% of the camera noise from the linear accelerator is eliminated by wrapping the camera in copper foil. The accuracy of the localization procedure is on the order of 1.5 ± 0.7 mm with a point phantom, and sub-millimeter and sub-degree with a custom-made head/neck phantom. With the line-laser marker, real-time respiratory monitoring is possible with a delay of ~0.17 sec. Conclusion: The wireless CCTV camera system is a novel tool for monitoring daily patient setup. The feasibility of respiratory gating with the wireless CCTV is promising.
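The median-filtering software used above to suppress residual RF interference can be sketched as a simple 3x3 filter; the kernel size is an assumption. Median filtering suits this kind of noise because isolated hot pixels never survive a neighborhood median, while edges are largely preserved:

```python
import statistics

def median_filter_3x3(img: list[list[int]]) -> list[list[int]]:
    """Replace each pixel with the median of its 3x3 neighborhood
    (edges are clamped), removing isolated RF-noise speckles."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ]
            out[y][x] = statistics.median(neigh)
    return out

# A single hot pixel (RF speckle) in a flat region is removed:
noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(median_filter_3x3(noisy)[1][1])  # -> 10
```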

Gaze Tracking System Using Feature Points of Pupil and Glints Center (동공과 글린트의 특징점 관계를 이용한 시선 추적 시스템)

  • Park Jin-Woo;Kwon Yong-Moo;Sohn Kwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.11 no.1 s.30
    • /
    • pp.80-90
    • /
    • 2006
  • A simple 2D gaze tracking method using a single camera and the Purkinje image is proposed. The method employs a single camera with an infrared filter to capture one eye, and two infrared light sources to create reflection points for estimating the corresponding gaze point on the screen. The camera, the infrared light sources, and the user's head may all move slightly, which makes the system simple and flexible, without inconvenient fixed equipment or the assumption of a fixed head. The system also includes a simple and accurate personal calibration procedure: before using the system, each user stares at two target points for a few seconds so that the system can initialize the individual factors of the estimation algorithm. The proposed system runs in real time at over 10 frames per second at XGA $(1024{\times}768)$ resolution. Test results on nine objects from three subjects show that the system achieves an average estimation error of less than 1 degree.
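The two-target personal calibration described above can be modeled as fitting a per-axis linear map from the pupil-glint vector to screen coordinates: two targets determine the two unknowns of each axis. The linear form and all the numbers below are our illustrative assumptions, not the paper's estimating algorithm:

```python
def fit_axis(v1: float, v2: float, s1: float, s2: float):
    """Fit s = a*v + b from two calibration samples: pupil-glint coordinate v,
    known screen coordinate s. Two targets fully determine a and b per axis."""
    a = (s2 - s1) / (v2 - v1)
    b = s1 - a * v1
    return a, b

# Calibration: the user stares at two known screen points (assumed values).
ax, bx = fit_axis(v1=-0.2, v2=0.3, s1=100.0, s2=900.0)   # horizontal axis
ay, by = fit_axis(v1=-0.1, v2=0.25, s1=80.0, s2=680.0)   # vertical axis

def gaze_point(vx: float, vy: float) -> tuple[float, float]:
    """Map a measured pupil-glint vector to an XGA screen coordinate."""
    return ax * vx + bx, ay * vy + by

print(gaze_point(-0.2, -0.1))  # reproduces the first calibration target, ~(100, 80)
```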

An Experimental Reproduction Study on Characteristics of Woodblock Printing on Traditional Korean Paper (Hanji) (목판인쇄 재현실험을 통한 한지상의 인출특성에 관한 연구)

  • Yoo, Woo Sik;Kim, Jung Gon;Ahn, Eun-Ju
    • Journal of Conservation Science
    • /
    • v.37 no.5
    • /
    • pp.590-605
    • /
    • 2021
  • The history of printing technology in Korea is studied by investigating extant ancient documents and records and by comparing accumulated data and knowledge. Cultural property research requires non-destructive examination, with the naked eye or aided by a microscope, and depends on researchers' experience and knowledge, which cannot guarantee the outcome. For ancient documents and records presumed to be produced by woodblock printing, wood movable type, metal movable type, or their combinations, individual researchers draw various opinions and conclusions, which often causes confusion and divides the opinions of ordinary citizens and field specialists. In particular, the criteria for judging whether a document was printed with woodblocks or with metal movable type are ambiguous. Academic research on the history of printing technology in ancient Korea has stagnated, conflicts among researchers have erupted, and the involvement of national investigative agencies not specialized in cultural properties has exacerbated the situation. In this study, we investigated printing characteristics likely to serve as more objective judgment criteria by printing several sheets of Korean paper (Hanji) from a replicated Hunminjeongeum (訓民正音) woodblock and quantitatively analyzing images of the printed papers. We also review the validity of, and open questions about, the typical phenomena presented as criteria for distinguishing woodblock from metal-type printing, and we examine the possibility of developing new objective judgment criteria through quantitative image analysis of the printing characteristics of Korean paper in this woodblock printing reproduction experiment.

Textile material classification in clothing images using deep learning (딥러닝을 이용한 의류 이미지의 텍스타일 소재 분류)

  • So Young Lee;Hye Seon Jeong;Yoon Sung Choi;Choong Kwon Lee
    • Smart Media Journal
    • /
    • v.12 no.7
    • /
    • pp.43-51
    • /
    • 2023
  • As online transactions increase, clothing images have a great influence on consumer purchasing decisions, and the importance of image information about clothing materials has been emphasized: it is important for the fashion industry to analyze clothing images and identify the materials used. Textile materials used in clothing are difficult to identify with the naked eye, and sorting them consumes much time and cost. This study classifies textile materials from clothing images using deep learning algorithms. Classifying materials can help reduce clothing production costs, increase the efficiency of the manufacturing process, and support services that recommend products of specific materials to consumers. We used the machine-vision deep learning algorithms ResNet and Vision Transformer to classify clothing images. A total of 760,949 images were collected and preprocessed to detect abnormal images; finally, 167,299 clothing images with 19 textile labels and 20 fabric labels were used. We classified clothing materials with ResNet and Vision Transformer and compared the performance of the algorithms using the Top-k Accuracy Score metric. The comparison showed that the Vision Transformer outperformed ResNet.
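The Top-k Accuracy Score used above to compare ResNet and the Vision Transformer counts a prediction as correct when the true label is among the model's k highest-scoring classes. A minimal sketch with made-up scores (the class scores and labels below are illustrative, not the study's data):

```python
def top_k_accuracy(scores: list[list[float]], labels: list[int], k: int) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for row, label in zip(scores, labels):
        top_k = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

# Three samples over four classes (e.g. four hypothetical textile labels):
scores = [
    [0.1, 0.6, 0.2, 0.1],   # true class 1: top-1 hit
    [0.4, 0.3, 0.2, 0.1],   # true class 1: hit only at top-2
    [0.7, 0.1, 0.1, 0.1],   # true class 3: miss even at top-2
]
labels = [1, 1, 3]
print(top_k_accuracy(scores, labels, k=1))  # 1/3
print(top_k_accuracy(scores, labels, k=2))  # 2/3
```

With 19-20 possible labels per image, top-k with k > 1 is a common way to credit near-miss predictions between visually similar materials.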