• Title/Summary/Keyword: image-level fusion

Search results: 84

Implementation of GLCM/GLDV-based Texture Algorithm and Its Application to High Resolution Imagery Analysis (GLCM/GLDV 기반 Texture 알고리즘 구현과 고 해상도 영상분석 적용)

  • Lee Kiwon;Jeon So-Hee;Kwon Byung-Doo
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.2
    • /
    • pp.121-133
    • /
    • 2005
  • Texture imaging, the creation of texture images from co-occurrence relations, is a well-established image analysis methodology, and most commercial remote sensing software provides a texture analysis function based on the GLCM (Grey Level Co-occurrence Matrix). In this study, a texture-imaging program based on the GLCM algorithm was newly implemented, together with texture imaging modules for the GLDV (Grey Level Difference Vector). The GLCM/GLDV texture parameters comprise six second-order texture functions: Homogeneity, Dissimilarity, Energy, Entropy, Angular Second Moment, and Contrast. As for co-occurrence directionality, two direction modes newly implemented in this program, Omni mode and Circular mode, are provided alongside the basic eight-direction mode. Omni mode computes over all directions to avoid directional complexity at the practical level, and Circular mode computes texture parameters along a circular path surrounding the target pixel in a kernel. In the second phase of this study, case studies with an artificial image and actual satellite imagery were carried out to compare texture images under different parameters and modes by correlation matrix analysis. It is concluded that the selection of texture parameters and modes is the critical issue in any application based on texture image fusion.
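The six second-order measures named in this abstract are all simple sums over a normalized co-occurrence matrix. As a rough illustration (not the authors' program; function names and the toy images are hypothetical), a minimal NumPy sketch might look like:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Count co-occurrences of grey levels at offset (dy, dx), then normalize."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_measures(p):
    """The second-order measures defined on the normalized GLCM p."""
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)  # Angular Second Moment
    return {
        "contrast":      np.sum(p * (i - j) ** 2),
        "dissimilarity": np.sum(p * np.abs(i - j)),
        "homogeneity":   np.sum(p / (1.0 + (i - j) ** 2)),
        "ASM":           asm,
        "energy":        np.sqrt(asm),
        "entropy":       -np.sum(p[p > 0] * np.log(p[p > 0])),
    }

# Toy 4-level images: a uniform patch has zero contrast, a checkerboard does not.
uniform = np.ones((5, 5), dtype=int)
checker = np.indices((5, 5)).sum(axis=0) % 2  # alternating 0/1 pattern
feat_u = texture_measures(glcm(uniform))
feat_c = texture_measures(glcm(checker))
```

A GLDV-based variant would collapse the matrix to a vector over the absolute grey-level difference |i-j| before computing the same measures.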

Study on the Difference of Standardized Uptake Value in Fusion Image of Nuclear Medicine (핵의학 융합영상의 표준섭취계수 차이에 관한 연구)

  • Kim, Jung-Soo;Park, Chan-Rok
    • Journal of radiological science and technology
    • /
    • v.41 no.6
    • /
    • pp.553-560
    • /
    • 2018
  • PET-CT and PET-MRI, which integrate PET with CT (using ionizing radiation) and with MRI (using magnetic resonance phenomena) respectively, cannot apply the semi-quantitative index, the standardized uptake value (SUV), at the same level because of fundamental differences in image acquisition principle and reconstruction; their correlation was therefore analyzed to provide clinical guidance. Thirty subjects who had completed pre-scan preparation were injected with ¹⁸F-FDG (5.18 MBq/kg) and scanned consecutively without delay on an integrated Biograph mMR 3T (Siemens, Munich) and a Biograph mCT 64 (Siemens, Germany) under optimized conditions, apart from the structural differences between the two scanners. Measuring SUVmax with a volume region of interest placed over evenly distributed radiopharmaceutical uptake in the captured images, the mean SUVmax values of PET-CT and PET-MRI were 2.94±0.55 and 2.45±0.52, respectively; the PET-MRI value was 20.85±7.26% lower than that of PET-CT. There was also a statistically significant difference in SUV between the two scanners (P<0.001), so the SUVs of PET-CT and PET-MRI cannot be interpreted as carrying the same clinical meaning. Therefore, for patients who undergo cross follow-up tests with PET-CT and PET-MRI, diagnostic information should be analyzed with the SUV differences between the two scanners taken into account.
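The SUV compared in this study is defined as tissue activity concentration normalized by injected dose per body weight. A minimal sketch of that definition follows; the uptake values and the 70 kg weight below are purely illustrative, not measurements from the study:

```python
def suv(tissue_kbq_per_ml, injected_mbq, weight_kg):
    """SUV = tissue activity concentration / (injected dose / body weight).
    With kBq/mL, MBq, and kg, and assuming water-equivalent tissue density
    (1 g/mL), the units cancel and SUV is dimensionless."""
    return tissue_kbq_per_ml / (injected_mbq / weight_kg)

def percent_difference(suv_mri, suv_ct):
    """Relative difference of the PET-MRI SUV versus the PET-CT SUV, in percent."""
    return (suv_mri - suv_ct) / suv_ct * 100.0

# Illustrative values: a hypothetical 70 kg patient dosed per the study's
# 5.18 MBq/kg protocol, with made-up uptake concentrations.
dose = 5.18 * 70.0                   # injected activity in MBq
suv_ct_val = suv(15.2, dose, 70.0)   # hypothetical PET-CT uptake
suv_mri_val = suv(12.0, dose, 70.0)  # hypothetical PET-MRI uptake
diff = percent_difference(suv_mri_val, suv_ct_val)  # negative: MRI reads lower
```

Because the dose-per-weight term is identical for both scans, the percent difference reduces to the relative difference of the two uptake concentrations.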

Performance Enhancement of Virtual War Field Simulator for Future Autonomous Unmanned System (미래 자율무인체계를 위한 가상 전장 환경 시뮬레이터 성능 개선)

  • Lee, Jun Pyo;Kim, Sang Hee;Park, Jin-Yang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.10
    • /
    • pp.109-119
    • /
    • 2013
  • Unmanned ground vehicles (UGVs) today play a significant role in both civilian and military areas, predominantly replacing humans in hazardous situations. To take UGV systems to the next level and increase their capabilities and the range of missions they can perform on the battlefield, new technologies are needed in the area of command and control. For this reason, we present a war field simulator based on information fusion technology for efficient UGV control in future combat fields. The simulator is composed of three critical components: a simulation controller, a virtual image viewer, and a remote control device. Within the information fusion technology, improved methods for target detection, recognition, and localization are proposed, along with a method for reducing target detection time. Based on the results of the operational test, we expect the proposed war field simulator to play an important role in future military operations.

Development of Supplemental Equipment to Reduce Movement During Fusion Image Acquisition (융합영상(Fusion image)에서 움직임을 줄이기 위한 보정기구의 개발)

  • Cho, Yong Gwi;Pyo, Sung Jae;Kim, Bong Su;Shin, Chae Ho;Cho, Jin Woo;Kim, Chang Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.2
    • /
    • pp.84-89
    • /
    • 2013
  • Purpose: Patient movement during the long image acquisition required for PET-CT (Positron Emission Tomography-Computed Tomography) fusion images causes misregistration and greatly affects image quality and diagnosis. The arm support fixtures supplied by medical device companies are not designed with patient comfort and safety in mind; horizontal and vertical movements of the arms and head during PET/CT scans cause defects in the brain base images and often require retakes. This study therefore aimed to develop a patient-compensation device that minimizes head and arm movement during PET/CT scans, provides comfort and safety, and reduces retakes. Materials and Methods: From June to July 2012, 20 patients with no movement-related problems and another 20 patients who had difficulty raising their arms due to shoulder pain were recruited from those visiting the nuclear medicine department for a PET Torso scan. Using the Patient Holding System (PHS), different arm ranges of motion (ROM) (25°, 27°, 29°, 31°, 33°, 35°) were applied to find the most comfortable angle and posture. The manufacturer was consulted on the transmission properties of the support material, and the comfort of the bands (velcro type) used to fix the patient's head and arms was evaluated. To determine the retake frequency due to movement, retake cases before and after introducing the compensation device were analyzed using PET Torso scan data collected between January and December 2012. Results: Among the patients without movement disorders, 18 found the PHS with a 29° arm ROM the most comfortable, while one each answered 27° and 31°. Among the patients with shoulder pain, 15 picked 31° as the most comfortable angle, 2 picked 33°, and 3 picked 35°.
For this study, the handle was manufactured to be vertically adjustable. The transmission properties of the patient-compensation device were verified, and the PHS and the compensation device were fixed together with bands (velcro type) to prevent device movement. A furrow was cut for head fixation to minimize head and neck movement, and fixing bands were attached for the head, wrist, forearm, and upper arm to limit movement. The retake frequency of PET Torso scans due to patient movement was 11.06% (191 cases/1,808 patients) before using the movement control device and 2.65% (48 cases/1,732 patients) after, a reduction of 8.41 percentage points. Conclusion: With recent change and innovation in the medical environment, expensive medical imaging examinations are increasingly common, and providing differentiated services to customers is essential. To secure patient comfort and safety during PET/CT scans, ergonomically designed patient-compensation devices need to be provided. This study therefore manufactured a patient-compensation device with a vertically adjustable, ergonomic ROM tailored to the patient's body shape and condition during PET Torso scans. Defects in the brain base images due to arm movement were reduced, and retakes decreased.


Hierarchical Land Cover Classification using IKONOS and AIRSAR Images (IKONOS와 AIRSAR 영상을 이용한 계층적 토지 피복 분류)

  • Yeom, Jun-Ho;Lee, Jeong-Ho;Kim, Duk-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.4
    • /
    • pp.435-444
    • /
    • 2011
  • Land cover maps derived solely from the spectral features of high-resolution optical images suffer from low spectral resolution and heterogeneity within each land cover class. As a result, pixels belonging to the same class, especially in vegetated areas, can be assigned to various classes. To overcome these problems, detailed vegetation classification is applied to optical satellite imagery integrated with SAR (Synthetic Aperture Radar) data, restricted to the vegetation areas identified by a pre-classification of the optical image. Both the pre-classification and the vegetation classification were performed with MLC (Maximum Likelihood Classification). The proposed hierarchical land cover classification fuses the detailed vegetation classes with the non-vegetation classes from the pre-classification. We verify that the proposed method achieves higher accuracy than general SAR-data and GLCM (Gray Level Co-occurrence Matrix) texture integration methods, as well as a hierarchical GLCM integration method, and that it is accurate for both vegetation and non-vegetation classes.
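MLC, used for both classification steps in this abstract, assigns each pixel to the class whose estimated Gaussian likelihood is highest. A minimal NumPy sketch on synthetic two-band data follows; the function names and the toy "spectral" clusters are illustrative, not the authors' implementation:

```python
import numpy as np

def mlc_fit(X, y):
    """Estimate a per-class mean vector and covariance matrix from training pixels."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def mlc_predict(X, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, cov = params[c]
        diff = X - mu
        inv = np.linalg.inv(cov)
        # log N(x; mu, cov) up to an additive constant shared by all classes
        ll = -0.5 * np.log(np.linalg.det(cov)) \
             - 0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(ll)
    return np.array(classes)[np.argmax(scores, axis=0)]

# Two well-separated synthetic "spectral" classes (e.g. vegetation vs. non-vegetation).
rng = np.random.default_rng(0)
veg = rng.normal([0.8, 0.3], 0.05, size=(100, 2))
non = rng.normal([0.2, 0.6], 0.05, size=(100, 2))
X = np.vstack([veg, non])
y = np.array([0] * 100 + [1] * 100)
pred = mlc_predict(X, mlc_fit(X, y))
acc = (pred == y).mean()
```

In the hierarchical scheme the abstract describes, a first MLC pass would separate vegetation from non-vegetation, and a second pass with SAR-augmented features would refine only the vegetation pixels.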

Analysis of the Effect of Learned Image Scale and Season on Accuracy in Vehicle Detection by Mask R-CNN (Mask R-CNN에 의한 자동차 탐지에서 학습 영상 화면 축척과 촬영계절이 정확도에 미치는 영향 분석)

  • Choi, Jooyoung;Won, Taeyeon;Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.1
    • /
    • pp.15-22
    • /
    • 2022
  • To improve the accuracy of deep learning object detection, the effects of magnification conditions and seasonal factors on detection accuracy in aerial photographs and drone images were analyzed through experiments. Among deep learning object detection techniques, Mask R-CNN, which offers fast training and high accuracy, was used to detect vehicles at the pixel level. Training images were captured at different screen magnifications through Seoul's aerial photo service, and the accuracy of a model trained on each was analyzed. According to the experimental results, as the magnification level increased, the mean mAP rose to 60%, 67%, and 75%. When the magnifications of the train and test data were swapped, training on low-magnification data and testing on high-magnification data showed a difference of more than 20% compared to the opposite arrangement. For drone images taken four months apart across seasons, training on images from the same period yielded high accuracy (93% on average), confirming that seasonal differences also affect learning.

A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung;Han, Song-Yi;Kang, Byung-Jun;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.1-9
    • /
    • 2008
  • As the security requirements of mobile phones increase, there has been extensive research using a single biometric feature (e.g., an iris, a fingerprint, or a face image) for authentication. Due to the limitations of uni-modal biometrics, we propose a method that combines face and iris images to improve accuracy in mobile environments. This paper presents four advantages and contributions over previous research. First, to capture both face and iris images quickly and simultaneously, we use the phone's built-in conventional megapixel camera, modified to capture NIR (Near-InfraRed) face and iris images. Second, to increase the authentication accuracy of face and iris, we propose a score-level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-class variation of the face and iris data, we normalize the input face and iris data, respectively. For the face, an NIR illuminator and an NIR-passing filter on the camera reduce the illumination variance caused by ambient visible lighting, and the facial region saturated by the NIR illuminator is normalized by a low-complexity logarithmic algorithm suited to mobile phones. For the iris, transformation into polar coordinates and iris code shifting are used to obtain robust identification accuracy irrespective of capture conditions. Fourth, to increase processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments were conducted with face and iris images captured by the phone's megapixel camera. The results showed that the authentication accuracy of the SVM fusion was better than that of the uni-modal methods (face or iris) and of the SUM, MAX, MIN, and weighted SUM rules.
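The SUM, MAX, MIN, and weighted SUM baselines compared against above are fixed rules over the two matching scores; the SVM replaces them with a learned decision boundary in the same two-dimensional score space. A sketch of the fixed rules follows, with illustrative thresholds and weights (none of these values come from the paper):

```python
def fuse(face_score, iris_score, rule="sum", w=(0.5, 0.5)):
    """Combine two matching scores (higher = more likely genuine) into one."""
    if rule == "sum":
        return face_score + iris_score
    if rule == "max":
        return max(face_score, iris_score)
    if rule == "min":
        return min(face_score, iris_score)
    if rule == "wsum":  # weighted SUM
        return w[0] * face_score + w[1] * iris_score
    raise ValueError(rule)

def accept(face_score, iris_score, rule="wsum", threshold=0.6, w=(0.3, 0.7)):
    """Accept the claimed identity when the fused score clears a threshold."""
    return fuse(face_score, iris_score, rule, w) >= threshold

# Under weighted SUM, a strong iris score can rescue a weak face score:
decision = accept(0.4, 0.9)  # 0.3*0.4 + 0.7*0.9 = 0.75 >= 0.6
```

An SVM-based fusion would instead train on (face_score, iris_score) pairs labeled genuine/impostor and accept according to the sign of the decision function.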

Development of Advanced Personal Identification System Using Iris Image and Speech Signal (홍채와 음성을 이용한 고도의 개인확인시스템)

  • Lee, Dae-Jong;Go, Hyoun-Joo;Kwak, Keun-Chang;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.3
    • /
    • pp.348-354
    • /
    • 2003
  • This paper proposes a new algorithm for an advanced personal identification system using iris patterns and speech signals. Since the proposed algorithm adopts a fusion scheme that takes advantage of both iris recognition and speaker identification, it is robust in noisy environments. To evaluate its performance, we compared it with iris recognition and speaker identification individually. In the experiments, at a high security level the proposed method showed a 56.7% improvement over the iris recognition method and a 10% improvement over the speaker identification method. In noisy environments, it showed a 30% improvement over the iris recognition method and a 60% improvement over the speaker identification method at the same security level.

Pavement Crack Detection and Segmentation Based on Deep Neural Network

  • Nguyen, Huy Toan;Yu, Gwang Hyun;Na, Seung You;Kim, Jin Young;Seo, Kyung Sik
    • The Journal of Korean Institute of Information Technology
    • /
    • v.17 no.9
    • /
    • pp.99-112
    • /
    • 2019
  • Cracks on pavement surfaces are critical indicators of the degradation of pavement structures. Image-based pavement crack detection is a challenging problem due to intensity inhomogeneity, topological complexity, low contrast, and noisy texture backgrounds. In this paper, we address pavement crack detection and segmentation at the pixel level with a Deep Neural Network (DNN) operating on grayscale images. We propose a novel DNN architecture that contains a modified U-net network and a high-level features network; an important contribution of this work is the fusion layer through which these networks are combined. To the best of our knowledge, this is the first paper to introduce this combination for the pavement crack segmentation and detection problem, and the novel architecture enhances crack detection and segmentation performance dramatically. We thoroughly implement and evaluate the proposed system on two open data sets, the Crack Forest Dataset (CFD) and the AigleRN dataset, and experimental results demonstrate that it outperforms eight state-of-the-art methods on the same data sets.
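A common way to realize such a fusion layer is to concatenate the feature maps of the two networks along the channel axis and then project the stack back to a single crack map. The paper's exact layer definitions are not given in the abstract, so the following NumPy sketch only illustrates that concatenation idea; the shapes, the uniform weights standing in for a learned 1x1 convolution, and the function name are all hypothetical:

```python
import numpy as np

def fusion_layer(unet_features, highlevel_features):
    """Fuse two feature tensors of shape (N, H, W, C1) and (N, H, W, C2) by
    channel-wise concatenation, then mix the stacked channels with a single
    linear projection (a stand-in for a learned 1x1 convolution)."""
    fused = np.concatenate([unet_features, highlevel_features], axis=-1)
    c = fused.shape[-1]
    weights = np.full(c, 1.0 / c)  # uniform weights; a real layer learns these
    return fused @ weights         # contracts the channel axis -> (N, H, W)

# Toy tensors: batch of 1, a 4x4 spatial grid, 3 + 2 feature channels.
a = np.ones((1, 4, 4, 3))   # stand-in for modified U-net features
b = np.zeros((1, 4, 4, 2))  # stand-in for high-level network features
out = fusion_layer(a, b)    # per-pixel mix of both feature sources
```

In the actual network the projection weights would be trained end-to-end, so the fusion layer learns how much each source contributes to the final crack probability at every pixel.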

Automatic Extraction of Buildings using Aerial Photo and Airborne LIDAR Data (항공사진과 항공레이저 데이터를 이용한 건물 자동추출)

  • Cho, Woo-Sug;Lee, Young-Jin;Jwa, Yoon-Seok
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.4
    • /
    • pp.307-317
    • /
    • 2003
  • This paper presents an algorithm that automatically extracts buildings from among the many features on the earth's surface by fusing LIDAR data with panchromatic aerial images. The proposed algorithm consists of three stages: a point-level process, a polygon-level process, and a parameter-space-level process. In the first stage, we eliminate gross errors and apply a local maxima filter to detect building candidate points in the raw laser scanning data. A grouping procedure then segments the raw LIDAR data, and the segmented data are polygonized by the encasing polygon algorithm developed in this research. In the second stage, we eliminate non-building polygons using several constraints such as area and circularity. In the last stage, all polygons generated in the second stage are projected onto the aerial stereo images through the collinearity condition equations. Finally, we fuse the projected encasing polygons with edges detected by image processing to refine the building segments. The experimental results showed that the RMSEs of building corners in X, Y, and Z were 8.1 cm, 24.7 cm, and 35.9 cm, respectively.
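The local maxima filter in the first stage keeps points that stand above all of their neighbors, which is how rooftop candidates emerge from surrounding ground returns. The paper works on raw laser points; the toy NumPy sketch below only illustrates the idea on a rasterized surface, and the names and values are illustrative:

```python
import numpy as np

def local_maxima(height, win=1):
    """Mark grid cells whose height is strictly greater than every neighbor
    inside a (2*win+1)^2 window -- candidate building (rooftop) points."""
    h, w = height.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = height[y - win:y + win + 1, x - win:x + win + 1].copy()
            center = patch[win, win]
            patch[win, win] = -np.inf   # exclude the center from the comparison
            out[y, x] = center > patch.max()
    return out

# Toy rasterized surface: flat ground at 10 m with one 18 m rooftop cell.
dsm = np.full((7, 7), 10.0)
dsm[3, 3] = 18.0
peaks = local_maxima(dsm)  # only the rooftop cell is flagged
```

In the full pipeline, the flagged candidates would then be grouped into segments and wrapped by the encasing polygon algorithm before the polygon-level filtering.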