• Title/Summary/Keyword: Images, processing


Comparative evaluation of the methods of producing planar image results by using Q-Metrix method of SPECT/CT in Lung Perfusion Scan (Lung Perfusion scan에서 SPECT-CT의 Q-Metrix방법과 평면영상 결과 산출방법에 대한 비교평가)

  • Ha, Tae Hwan;Lim, Jung Jin;Do, Yong Ho;Cho, Sung Wook;Noh, Gyeong Woon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.1
    • /
    • pp.90-97
    • /
    • 2018
  • Purpose: The lung segment ratio obtained through quantitative analysis of lung perfusion scan images is calculated to evaluate lung function before and after surgery. In this study, planar image production methods are compared against the Q-Metrix (GE Healthcare, USA) program, which performs quantitative analysis and computes the segment ratio after SPECT/CT. Materials and Methods: Lung perfusion scans and SPECT/CT were performed with a Discovery 670 (GE Healthcare, USA) system on 50 pre-surgical lung cancer patients who visited our hospital from May 1, 2015 to September 13, 2016. The AP (Anterior Posterior) method divided the anterior and posterior planar images into three rectangular portions with the ROI tool, while the PO (Posterior Oblique) method computed the segment ratio by dividing the right lung into three parts and the left lung into two parts on the oblique image. For the Q-Metrix method, the segment ratio was computed by setting ROIs and VOIs on the CT images, and statistical analysis was performed with SPSS Ver. 23. Results: The correlation concordance rates between the Q-Metrix and AP methods were 0.224, 0.035, and 0.447 for the RUL (right upper lobe), RML (right middle lobe), and RLL (right lower lobe), and 0.643 and 0.456 for the LUL (left upper lobe) and LLL (left lower lobe), respectively. For the PO method, the right lobe values were 0.663, 0.623, and 0.702, and the left lobe values were 0.754 and 0.823. In the paired-sample t-test, the AP method gave right lobe values of 11.6±4.5, 26.9±6.2, and 17.8±4.2 and left lobe values of 28.4±4.8 and 15.4±5.6. The PO method gave right lobe values of 17.4±5.0, 10.5±3.6, and 27.3±6.0 and left lobe values of 21.6±4.8 and 23.1±6.6, showing statistically significant differences from the Q-Metrix method for each lobe (P<0.05), except for the right middle lobe in the PO method (P>0.05). Conclusion: The AP method showed a low concordance rate in correlation with the Q-Metrix method, whereas the PO method showed a high concordance rate overall. Although the AP method differed significantly in all lobes, the right middle lobe of the PO method showed no significant difference. Therefore, when producing lung perfusion scan results, the Q-Metrix method of SPECT/CT is useful for computing accurate values, and when only planar images are acquired, the PO method can be expected to yield more practical segmental values.
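A minimal sketch of the kind of per-lobe comparison described above, assuming the per-lobe counts from each method have already been extracted into arrays (rows = patients, columns = lobes). The study used SPSS; SciPy here is only an illustrative stand-in, and all variable names are hypothetical.

    # Compare per-lobe perfusion ratios from two methods
    # (e.g., a planar ROI method vs. Q-Metrix SPECT/CT).
    import numpy as np
    from scipy import stats

    def lobe_percentages(counts):
        """Convert raw ROI counts per lobe into percent of total lung counts."""
        counts = np.asarray(counts, dtype=float)
        return 100.0 * counts / counts.sum(axis=1, keepdims=True)

    def compare_methods(planar_counts, qmetrix_counts,
                        lobes=("RUL", "RML", "RLL", "LUL", "LLL")):
        planar = lobe_percentages(planar_counts)
        qmetrix = lobe_percentages(qmetrix_counts)
        for i, lobe in enumerate(lobes):
            r, _ = stats.pearsonr(planar[:, i], qmetrix[:, i])   # correlation between methods
            t, p = stats.ttest_rel(planar[:, i], qmetrix[:, i])  # paired t-test, as in the study
            diff = np.mean(planar[:, i] - qmetrix[:, i])
            print(f"{lobe}: r={r:.3f}, mean diff={diff:+.1f}%, p={p:.3f}")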

A Comparative Study of the Standard Uptake Values of the PET Reconstruction Methods; Using Contrast Enhanced CT and Non Contrast Enhanced CT (PET/CT 영상에서 조영제를 사용하지 않은 CT와 조영제를 사용한 CT를 이용한 감쇠보정에 따른 표준화섭취계수의 비교)

  • Lee, Seung-Jae;Park, Hoon-Hee;Ahn, Sha-Ron;Oh, Shin-Hyun;NamKoong, Heuk;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.12 no.3
    • /
    • pp.235-240
    • /
    • 2008
  • Purpose: In the early days of PET/CT, computed tomography was used mainly for attenuation correction (AC), but as CT performance improved it could also provide better diagnostic information with contrast media. However, it has been controversial whether contrast media affect AC on PET/CT scans. Some published studies report that contrast media lead to overestimation when used for AC data processing; others hold that contrast media may alter the SUV because of this overestimated AC, but without a definite effect on diagnosis. Thus, the effect of contrast media on AC was investigated in this study. Materials and Methods: Patient inclusion criteria required a history of malignancy and performance of an integrated PET/CT scan and contrast-enhanced CT scan within a 1-day period. Thirty oncologic patients who had PET/CT scans from December 2007 to June 2008 underwent staging evaluation and met these criteria. All patients fasted for at least 6 hr before the IV injection of approximately 5.6 MBq/kg (0.15 mCi/kg) of ¹⁸F-FDG and were scanned about 60 min after injection. All patients had a whole-body PET/CT performed without IV contrast media, followed by a contrast-enhanced CT, on the Discovery STe PET/CT scanner. Each CT data set was used for AC, and PET images were reconstructed after AC. ROIs were drawn and SUVs were measured. A paired t-test was performed to assess the significance of the difference between the SUVs obtained from the two attenuation-corrected PET images. Results: The mean and maximum standardized uptake values (SUVs) for different regions were averaged over all patients. Compared with non-contrast-enhanced CT, most ROIs showed increased SUVs when contrast-enhanced CT was used for AC. All regions showed increased SUVs with p-values below 0.05, except for the mean SUV of the heart region. Conclusion: The effect on SUV measurements when a contrast-enhanced CT is used for attenuation correction could have significant clinical ramifications. Some published studies argue that the percentage change in SUV is too small to determine or modify the clinical management of oncology patients, since the difference is rarely noticeable to the interpreter. Nevertheless, a numerical change clearly occurs, and when searching for a primary lesion even a small change can shift the baseline; regions with larger changes, such as the liver, therefore require particular attention.
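A minimal sketch of how SUVs in an ROI could be compared between the two attenuation-corrected PET volumes. The simple body-weight SUV definition, the unit conventions, and all variable names are illustrative assumptions, not the authors' implementation.

    # Compare ROI SUVs between PET images attenuation-corrected with
    # non-contrast (pet_nc) vs. contrast-enhanced (pet_ce) CT.
    import numpy as np

    def suv_image(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
        """Body-weight SUV: tissue activity / (injected dose / body weight), assuming 1 g/mL density."""
        dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
        weight_g = body_weight_kg * 1000.0      # kg  -> g
        return activity_kbq_per_ml / (dose_kbq / weight_g)

    def roi_suv_change(pet_nc, pet_ce, roi_mask, dose_mbq, weight_kg):
        """Mean/max SUV in one ROI for both AC variants and the percent change of the mean."""
        suv_nc = suv_image(pet_nc[roi_mask], dose_mbq, weight_kg)
        suv_ce = suv_image(pet_ce[roi_mask], dose_mbq, weight_kg)
        mean_change = 100.0 * (suv_ce.mean() - suv_nc.mean()) / suv_nc.mean()
        return suv_nc.mean(), suv_ce.mean(), suv_nc.max(), suv_ce.max(), mean_change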


Reconstruction of Stereo MR Angiography Optimized to View Position and Distance using MIP (최대강도투사를 이용한 관찰 위치와 거리에 최적화 된 입체 자기공명 뇌 혈관영상 재구성)

  • Shin, Seok-Hyun;Hwang, Do-Sik
    • Investigative Magnetic Resonance Imaging
    • /
    • v.16 no.1
    • /
    • pp.67-75
    • /
    • 2012
  • Purpose: We studied an enhanced method for viewing the vessels of the brain using Magnetic Resonance Angiography (MRA). Noting that the Maximum Intensity Projection (MIP) image is often used to evaluate the arteries of the neck and brain, we propose a new method that renders the brain vessels as a stereo image in 3D space, more flexibly and more accurately than the conventional method. Materials and Methods: We used a 3T Siemens Tim Trio MRI scanner with a 4-channel head coil and acquired 3D MRA brain data with the volunteer's head fixed, using a phase-contrast pulse sequence. The MRA brain data are rotated in 3D according to the view angle of each eye. The optimal view angle (projection angle) is determined by the distance between the eye and the center of the data. The rotated MRA data are projected along the projection line, keeping only the highest value along each ray. The left-eye and right-eye MIP images are then combined by anaglyph imaging to obtain the optimal stereoscopic MIP image. Results: The resulting images show that the proposed method makes it possible to view MIP images from any direction of the MRA data, which is impossible with the conventional method. Moreover, by considering the disparity and the distance from the viewer to the center of the MRA data in spherical coordinates, a more realistic stereo image is obtained. In short, we obtain optimal stereoscopic images according to the position from which the viewer wants to look and the distance between the viewer and the MRA data. Conclusion: The proposed method overcomes the limitation of the conventional method, which shows only a specific projected image (z-axis projection), and provides optimal depth information by converting the mono MIP image into a stereoscopic image that takes the viewer's position into account. It can display any view of the MRA data in spherical coordinates. If an optimization algorithm and parallel processing are applied, it may provide useful medical information for diagnosis and treatment planning in real time.
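A minimal sketch of a stereoscopic MIP under the assumptions that the MRA volume is already loaded as a NumPy array and that a simple rotation about one axis approximates the per-eye view angle; the red-cyan anaglyph packing and the disparity value are illustrative choices, not the paper's parameters.

    # Rotate the MRA volume once per eye, take the maximum along the projection
    # axis, and merge the two views into a red-cyan anaglyph.
    import numpy as np
    from scipy.ndimage import rotate

    def mip(volume, angle_deg, axes=(0, 2), proj_axis=2):
        """Rotate the volume by the per-eye view angle, then project the maximum intensity."""
        rotated = rotate(volume, angle_deg, axes=axes, reshape=False, order=1)
        return rotated.max(axis=proj_axis)

    def stereo_mip_anaglyph(volume, disparity_deg=3.0):
        left = mip(volume, -disparity_deg / 2)
        right = mip(volume, +disparity_deg / 2)
        # Normalize to 8-bit and pack as a red-cyan anaglyph (left -> red, right -> green/blue).
        to_u8 = lambda im: np.uint8(255 * (im - im.min()) / (np.ptp(im) + 1e-9))
        l, r = to_u8(left), to_u8(right)
        return np.dstack([l, r, r])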

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.17-23
    • /
    • 2020
  • Biometric information, which indicates measurable human characteristics, has attracted great attention as a highly reliable security technology because there is no fear of theft or loss. Among such biometric information, fingerprints are mainly used in fields such as identity verification and identification. If a fingerprint image has a problem such as a wound, wrinkle, or moisture that makes authentication difficult, a fingerprint expert must identify the problem directly in a preprocessing step and then apply an image processing algorithm appropriate to that problem. By implementing artificial intelligence software that distinguishes fingerprint images with cuts and wrinkles, it becomes easy to check whether such defects are present, and the fingerprint image can then be improved by selecting an appropriate algorithm. In this study, we built a database of 17,080 fingerprints by acquiring all the fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open data set, and the fingerprints of 98 Korean students. Criteria were established to determine whether injuries or wrinkles were present in the database, and the data were validated by experts. The training and test data sets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data from the 98 Korean students were set aside as a validation set. Using the constructed data set, five CNN-based architectures were implemented: a classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3. A study was conducted to find the model that performed best on this discrimination task. Among the five architectures, ResNet50 showed the best performance, with 81.51%.
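An illustrative sketch of the kind of ResNet50-based binary classifier the paper describes (damaged fingerprint with scar/wrinkle vs. clean). The input size, optimizer, directory layout, and data-loading helper are assumptions, not the authors' code.

    import tensorflow as tf

    def build_fingerprint_classifier(input_shape=(224, 224, 3)):
        # Transfer learning: ImageNet-pretrained ResNet50 backbone + sigmoid head.
        base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              input_shape=input_shape, pooling="avg")
        out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)  # 1 = damaged, 0 = clean
        model = tf.keras.Model(base.input, out)
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    # Typical usage with an 8:2 train/test split already on disk (paths hypothetical):
    # train_ds = tf.keras.utils.image_dataset_from_directory("fingerprints/train", image_size=(224, 224))
    # test_ds  = tf.keras.utils.image_dataset_from_directory("fingerprints/test",  image_size=(224, 224))
    # model = build_fingerprint_classifier()
    # model.fit(train_ds, validation_data=test_ds, epochs=10)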

Reproducibility of Adenosine Tc-99m sestaMIBI SPECT for the Diagnosis of Coronary Artery Disease (관동맥질환의 진단을 위한 아데노신 Tc-99m sestaMIBI SPECT의 재현성)

  • Lee, Duk-Young;Bae, Jin-Ho;Lee, Sang-Woo;Chun, Kyung-Ah;Yoo, Jeong-Soo;Ahn, Byeong-Cheol;Ha, Jeoung-Hee;Chae, Shung-Chull;Lee, Kyu-Bo;Lee, Jae-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.473-480
    • /
    • 2005
  • Purpose: Adenosine myocardial perfusion SPECT has proven useful in detecting coronary artery disease, in following up the success of various therapeutic regimens, and in assessing the prognosis of coronary artery disease. The purpose of this study was to define the reproducibility of myocardial perfusion SPECT using adenosine stress testing between two consecutive Tc-99m sestaMIBI (MIBI) SPECT studies in the same subjects. Methods: Thirty patients with suspected coronary artery disease in stable condition underwent sequential Tc-99m MIBI SPECT studies using intravenous adenosine. The gamma camera, acquisition, and processing protocols used for the two tests were identical, and no invasive procedures were performed between the two tests. The mean interval between tests was 4.1 days (range: 2-11 days). The left ventricular wall was divided into 18 segments, and the degree of myocardial tracer uptake was graded visually with a four-point scoring system. Images were interpreted by two independent nuclear medicine physicians, and a consensus reading was used for the final decision when the segmental scores disagreed. Results: Hemodynamic responses to adenosine did not differ between the two consecutive studies. There were no serious side effects requiring the adenosine infusion to be stopped, and the side-effect profiles were not different. When myocardial uptake was classified as normal or abnormal, 481 of 540 segments were concordant (agreement rate 89%, kappa index 0.74). With the four-grade scoring system, exact agreement was 81.3% (439 of 540 segments, tau-b = 0.73). One- and two-grade differences were observed in 97 segments (18%) and 4 segments (0.7%), respectively, and no segment showed a three-grade difference. Extent and severity scores did not differ between the two studies. The extent and severity scores of the perfusion defect showed excellent positive correlation between the two tests (r = 0.982 for percentage extent and 0.965 for severity score, p<0.001). Conclusion: Hemodynamic responses and side-effect profiles did not differ between two consecutive adenosine stress tests in the same subjects. Adenosine Tc-99m sestaMIBI SPECT is highly reproducible and could be used to assess temporal changes in myocardial perfusion in individual patients.
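A minimal sketch of the agreement statistics reported above, assuming the per-segment grades (0-3) from the two consecutive studies are flattened across all patients into two arrays; scikit-learn is used here only as an illustrative stand-in for the study's statistics software.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    def reproducibility_stats(scores_test1, scores_test2):
        s1 = np.asarray(scores_test1, dtype=int)
        s2 = np.asarray(scores_test2, dtype=int)
        # Binary normal (grade 0) vs. abnormal (grade >= 1): agreement rate and kappa index.
        b1, b2 = (s1 > 0).astype(int), (s2 > 0).astype(int)
        binary_agreement = np.mean(b1 == b2)
        kappa = cohen_kappa_score(b1, b2)
        # Exact agreement on the four-point scale and the size of the disagreements.
        exact_agreement = np.mean(s1 == s2)
        grade_diff_counts = np.bincount(np.abs(s1 - s2), minlength=4)
        return binary_agreement, kappa, exact_agreement, grade_diff_counts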

Local Shape Analysis of the Hippocampus using Hierarchical Level-of-Detail Representations (계층적 Level-of-Detail 표현을 이용한 해마의 국부적인 형상 분석)

  • Kim Jeong-Sik;Choi Soo-Mi;Choi Yoo-Ju;Kim Myoung-Hee
    • The KIPS Transactions:PartA
    • /
    • v.11A no.7 s.91
    • /
    • pp.555-562
    • /
    • 2004
  • Both global volume reduction and local shape changes of the hippocampus within the brain indicate abnormal neurological states. Hippocampal shape analysis consists of two main steps: first, construct a hippocampal shape representation model; second, compute shape similarity from this representation. This paper proposes a novel method for analyzing hippocampal shape using an integrated octree-based representation containing meshes, voxels, and skeletons. First, we create multi-level meshes by applying the Marching Cubes algorithm to the hippocampal region segmented from MR images. This model is converted to an intermediate binary voxel representation, and the 3D skeleton is extracted from these voxels using a slice-based skeletonization method. Then, to obtain a multiresolution shape representation, we store the meshes, voxels, and skeletons hierarchically in the nodes of the octree and extract sample meshes using a ray-tracing-based mesh sampling technique. Finally, as similarity measures between shapes, we compute the L2 norm and the Hausdorff distance for each sampled mesh pair by shooting rays from the extracted skeleton. Because a mouse-picking interface is used to analyze local shape interactively, we provide an interactive, multiresolution-based analysis of local shape changes. Our experiments show that the approach is robust to rotation and scale, is especially effective at discriminating changes between local shapes of the hippocampus, and increases the speed of analysis without degrading accuracy through the hierarchical level-of-detail approach.
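An illustrative sketch of the two similarity measures named above, computed on sampled mesh points stored as N x 3 arrays. Pairing the points by array index for the L2 term is an assumption; the paper pairs samples via rays shot from the skeleton.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def shape_similarity(points_a, points_b):
        a = np.asarray(points_a, dtype=float)
        b = np.asarray(points_b, dtype=float)
        # Symmetric Hausdorff distance between the two point sets.
        hausdorff = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
        # Mean L2 distance between index-corresponding samples (requires equal point counts).
        l2 = np.linalg.norm(a - b, axis=1).mean()
        return hausdorff, l2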

Development of Geometrical Quality Control Real-time Analysis Program using an Electronic Portal Imaging (전자포탈영상을 이용한 기하학적 정도관리 실시간 분석 프로그램의 개발)

  • Lee, Sang-Rok;Jung, Kyung-Yong;Jang, Min-Sun;Lee, Byung-Gu;Kwon, Young-Ho
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.24 no.2
    • /
    • pp.77-84
    • /
    • 2012
  • Purpose: To develop a geometrical quality control real-time analysis program using electronic portal imaging to replace the film evaluation method. Materials and Methods: Geometrical quality control items were established with the Eclipse treatment planning system (Version 8.1, Varian, USA) after resolving, with the Electronic Portal Imaging Device (EPID), the problems arising from the fixed substructure of the linear accelerator (CL-iX, Varian, USA). An electronic portal image (single exposure before plan) was created at the treatment room's 4DTC (Version 10.2, Varian, USA), and a beam was irradiated for each item. All electronic portal images were retrieved at Off-line Review and evaluated with the self-developed geometrical quality control real-time analysis program. For the evaluation, the intra-fraction error was analyzed by executing the procedure 5 times in a row under identical conditions on the same day, and to confirm the inter-fraction error, the procedure was executed for 10 days under identical conditions and compared with the film evaluation method using an Iso-align™ quality control device. Measurement and analysis times were recorded separately, from device setup to data acquisition and from data acquisition to completion of the analysis, and user convenience and the execution process were compared. Results: The intra-fraction errors averaged 0.1, 0.2, 0.3, and 0.2 mm for light-radiation field coincidence, collimator rotation axis, couch rotation axis, and gantry rotation axis, respectively. The inter-fraction errors over 10 days of continuous quality control averaged 1.7, 1.4, 0.7, and 1.1 mm for the same items. The average measurement times were 36 and 15 minutes for the film evaluation method and the electronic portal imaging system, respectively, and the average analysis times were 30 and 22 minutes. Conclusion: Geometrical quality control using electronic portal imaging proved to be an efficient quality control tool. It not only reduces costs by eliminating film, but also shortens measurement and analysis times, enhances user convenience, and streamlines the workflow by removing film development. In addition, images evaluated with the self-developed geometrical quality control real-time analysis program can be processed as data, which supports storage of the information.
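A minimal sketch of the error summary such a QC analysis program might produce: the mean absolute deviation of repeated measurements from a reference, per QC item. The data layout (item name mapped to a list of measured offsets in mm) is an assumption for illustration.

    import numpy as np

    QC_ITEMS = ("light-radiation field coincidence", "collimator rotation axis",
                "couch rotation axis", "gantry rotation axis")

    def error_summary(measurements_mm, reference_mm=0.0):
        """measurements_mm: dict mapping QC item name -> array of measured offsets (mm)."""
        summary = {}
        for item in QC_ITEMS:
            values = np.asarray(measurements_mm[item], dtype=float)
            summary[item] = np.mean(np.abs(values - reference_mm))  # mean absolute error in mm
        return summary

    # Intra-fraction: pass the 5 same-day repeats per item;
    # inter-fraction: pass one value per day over the 10-day period.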


DEVELOPMENT OF A LYMAN-α IMAGING SOLAR TELESCOPE FOR THE SATELLITE (인공위성 탑재용 자외선 태양카메라(LIST) 개발)

  • Jang, M.;Oh, H.S.;Rim, C.S.;Park, J.S.;Kim, J.S.;Son, D.;Lee, H.S.;Kim, S.J.;Lee, D.H.;Kim, S.S.;Kim, K.H.
    • Journal of Astronomy and Space Sciences
    • /
    • v.22 no.3
    • /
    • pp.329-352
    • /
    • 2005
  • Long-term observations of full-disk Lyman-α irradiance have been made by instruments on various satellites. In addition, several sounding rockets dating back to the 1950s and up through the present have measured the Lyman-α irradiance. Previous full-disk Lyman-α images of the Sun have been very interesting and scientifically useful, but have been only five-minute 'snapshots' obtained on sounding rocket flights. All of these observations to date have been snapshots, with no time resolution to observe changes in the chromospheric structure resulting from the evolving magnetic field and its effect on the Lyman-α intensity. The Lyman-α Imaging Solar Telescope (LIST) can provide a unique opportunity to study the Sun in the Lyman-α region with high time and spatial resolution for the first time. Through the second year of development, the preliminary design of the optics, mechanical structure, and electronics system has been completed. The mechanical structure analysis and thermal analysis were also performed, and the material for the structure was chosen as a result of these analyses. The test plan and the verification matrix were decided. The operation systems, covering both technical and scientific operation, were studied and finalized: the technical operation, including the mechanical working modes for observation and safety, and the scientific operation, including the processing of the acquired data. The basic techniques acquired through developing a satellite-based solar telescope are essential for constructing a space environment forecast system in the future. The mechanical, optical, and data processing techniques developed through this study could be applied extensively, not only to the future production of flight models of this kind but also to related industries. The scientific achievements obtained throughout the project can also be used to build high-resolution photometric detectors for military and commercial purposes. It is also believed that several of the acquired techniques can be applied to the development of future Korean satellite projects.

A Study on the Improvement of Skin-affinity and Spreadability in the Pressed Powder using Air Jet Mill Process and Mono-dispersed PMMA (Air Jet Mill 공법과 PMMA의 단분산성이 프레스드 파우더의 밀착성 및 발림성 향상에 대한 연구)

  • Song, Sang Hoon;Hong, Kyong Woo;Han, Jong Seob;Kim, Kyong Seob;Park, Sun Gyoo
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.43 no.1
    • /
    • pp.61-68
    • /
    • 2017
  • The key quality attributes of the pressed powder, one of the base makeup products, are skin-affinity and spreadability. In general, it has been difficult to satisfy skin-affinity and spreadability simultaneously, because the two attributes oppose each other. In this study, the air jet mill process was applied to satisfy both properties. Skin-affinity was improved by wet-coating sericite with a mixture of lauroyl lysine (LL) and sodium cocoyl glutamate (SCG). The application of mono-dispersed polymethyl methacrylate (PMMA) and diphenyl dimethicone/vinyl diphenyl dimethicone/silsesquioxane crosspolymer (DDVDDSC) improved both qualities. The air jet mill process has been applied mainly in the pharmaceutical and food industries and is used for processing powder materials in the cosmetics field. In this study, we were able to complete makeup cosmetics with an optimum particle size of 6.8 μm by applying the air jet mill process at the manufacturing stage. EDS mapping confirmed that the Ti element was uniformly distributed throughout the cosmetics, and SEM analysis showed that the corners of the tabular grains were rounded. This is expected to improve spreadability when the cosmetic is applied to the skin with a makeup tool. LL, with its excellent skin compatibility, and SCG, derived from coconut and causing little skin irritation, were wet-coated to further enhance the adhesion of the sericite. SEM images were analyzed to evaluate the effect of the dispersion and uniformity of PMMA on spreadability. With spherical particles of similar size, the spreading effect increased further when the distribution was homogeneously mono-dispersed. The dispersion and spreadability of PMMA were confirmed by measuring the kinetic friction, and the optimal content was determined. The silicone rubber powder, DDVDDSC, was evaluated by hardness, spreading value, and drop tests. Finally, it was found that the dispersion of PMMA and the silicone rubber powder affected spreadability. The resulting makeup cosmetics have appropriate hardness, excellent stability in the drop test, and good stability over time. Taken together, we conclude that the air jet mill process can be used as a method to improve the skin-affinity and spreadability of the pressed powder.

Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.4
    • /
    • pp.144-159
    • /
    • 2014
  • Techniques for making a single panoramic image from multiple pictures are widely studied in areas such as computer vision and computer graphics. Panoramic images can be applied in fields such as virtual reality and robot vision that require wide-angle shots, as a useful way to overcome the limitations in viewing angle, resolution, and image content of a single camera. They are also meaningful in that a panoramic image usually provides a better sense of immersion than a plain image. Although there are many ways to build a panoramic image, most of them extract feature points and matching points from each image to compose the panorama. These methods then apply the RANSAC (RANdom SAmple Consensus) algorithm to the matching points and use a homography matrix to transform the images. The SURF (Speeded Up Robust Features) algorithm, used in this paper to extract feature points, relies on an image's grayscale intensity and local spatial information. SURF is widely used because it is robust to changes in image scale and viewpoint and is faster than the SIFT (Scale Invariant Feature Transform) algorithm. However, SURF can produce erroneous matches, which slow down the RANSAC stage and increase the CPU usage rate. Errors in the detected matching points can be a critical cause of degraded accuracy and clarity in the panoramic image. In this paper, to minimize matching errors, we perform an intermediate filtering step that removes wrong matching points by comparing the RGB pixel values in a 3×3 region around each matching point's coordinates. We also present analysis and evaluation results on the speed of producing a panoramic image, CPU usage, the reduction rate of extracted matching points, and accuracy.
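An illustrative sketch of the SURF + RANSAC pipeline described above, using OpenCV. SURF lives in the non-free opencv-contrib build (cv2.xfeatures2d); cv2.ORB_create() is a possible substitute if it is unavailable. The paper's 3×3 RGB pre-filter is only indicated by a comment, and the thresholds and canvas size are assumptions.

    import cv2
    import numpy as np

    def stitch_pair(img_left, img_right):
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp1, des1 = surf.detectAndCompute(cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY), None)
        kp2, des2 = surf.detectAndCompute(cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY), None)

        # Match descriptors and keep only matches passing Lowe's ratio test.
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        # (The paper additionally filters matches by comparing the 3x3 RGB neighborhoods
        #  around each matched coordinate before running RANSAC.)

        src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust homography estimation

        # Warp the right image into the left image's frame and paste the left image on top.
        h, w = img_left.shape[:2]
        pano = cv2.warpPerspective(img_right, H, (w * 2, h))
        pano[0:h, 0:w] = img_left
        return pano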