• Title/Summary/Keyword: Comparison of images

Comparison of Orthophotos and 3D Models Generated by UAV-Based Oblique Images Taken in Various Angles

  • Lee, Ki Rim; Han, You Kyung; Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.3, pp.117-126, 2018
  • Due to intelligent transport systems, location-based applications, and augmented reality, demand for image maps and 3D (Three-Dimensional) maps is increasing. As a result, data acquisition using UAVs (Unmanned Aerial Vehicles) has flourished in recent years. However, although orthophoto map production and research using UAVs are flourishing, few studies on 3D modeling have been conducted. In this study, orthophoto and 3D modeling research was performed using images acquired by a UAV at various angles. For the orthophotos, accuracy was evaluated against checkpoints acquired by a GPS (Global Positioning System) survey that employed VRS (Virtual Reference Station). The 3D models were evaluated by calculating the RMSE (Root Mean Square Error) of the differences between building outline heights obtained from the GPS survey and the corresponding heights in the 3D models. The orthophotos satisfied the accuracy tolerance of the NGII (National Geographic Information Institute) for a 1/500 scale map at all angles. For 3D modeling, models based on images taken at 45 degrees reproduced building outlines more accurately than models based on images taken at 30, 60, or 75 degrees. In summary, the orthophotos satisfied 1/500-map accuracy at all angles, while images taken at 45 degrees produced the most accurate 3D models.
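
The 3D-model check in this abstract comes down to an RMSE between surveyed and modelled building heights. A minimal sketch of that calculation, with hypothetical height values rather than the paper's data, might look like this:

```python
import numpy as np

def height_rmse(gps_heights, model_heights):
    """RMSE between GPS-surveyed building outline heights and 3D-model heights."""
    gps = np.asarray(gps_heights, dtype=float)
    model = np.asarray(model_heights, dtype=float)
    return float(np.sqrt(np.mean((model - gps) ** 2)))

# Hypothetical checkpoint heights in metres (not the paper's data)
gps = [12.31, 12.28, 15.02, 9.87]
model_45deg = [12.25, 12.40, 14.95, 9.91]
print(f"RMSE of 45-degree model: {height_rmse(gps, model_45deg):.3f} m")
```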

Mosaic image generation of AISA Eagle hyperspectral sensor using SIFT method (SIFT 기법을 이용한 AISA Eagle 초분광센서의 모자이크영상 생성)

  • Han, You Kyung; Kim, Yong Il; Han, Dong Yeob; Choi, Jae Wan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.31 no.2, pp.165-172, 2013
  • In this paper, a high-quality mosaic image is generated from high-resolution hyperspectral strip images using the scale-invariant feature transform (SIFT) algorithm, one of the representative image matching methods. The experiments are applied to AISA Eagle images geo-referenced using GPS/INS information acquired during the flight. Matching points between three strips of hyperspectral images are extracted using SIFT, and transformation models between the images are constructed from these points. A mosaic image is then generated using the transformation models constructed from the corresponding images. The band best suited for matching-point extraction is determined by selecting representative bands of the hyperspectral data and analyzing the matching results for each band. The mosaic image generated by the proposed method is visually compared with the mosaic generated from the initially geo-referenced AISA hyperspectral images. From this comparison, we estimate the geometric accuracy of the generated mosaic image and analyze the efficiency of our methodology.
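
A generic OpenCV sketch of the SIFT matching and strip-to-strip registration described above; the authors' exact transformation model and band selection are not given here, so the homography, ratio-test threshold, and 8-bit band inputs are assumptions:

```python
import cv2
import numpy as np

def match_and_warp(band_a, band_b):
    """Match a single 8-bit band of two strips with SIFT and warp the second
    strip into the first strip's frame via a RANSAC homography."""
    sift = cv2.SIFT_create()                      # requires OpenCV >= 4.4
    kp_a, des_a = sift.detectAndCompute(band_a, None)
    kp_b, des_b = sift.detectAndCompute(band_b, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_b, des_a, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # Lowe's ratio test

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = band_a.shape[:2]
    return cv2.warpPerspective(band_b, H, (w, h))
```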

The Applicability for Earth Surface Monitoring Based on 3D Wavelet Transform Using the Multi-temporal Satellite Imagery (다중시기 위성영상을 이용한 3차원 웨이블릿 변환의 지구모니터링 응용가능성 연구)

  • Yoo, Hee-Young; Lee, Ki-Won
    • Journal of the Korean Earth Science Society, v.32 no.6, pp.560-574, 2011
  • Satellite images that are acquired periodically and continuously are very effective data for monitoring changes of the Earth's surface. Traditionally, change-detection studies using satellite images have mainly compared two results after analyzing two images separately. Recently, however, interest has grown in capturing gradual trends and short-duration events from continuous multi-temporal image series. In this study, we introduce and test an approach based on the 3D wavelet transform for analyzing multi-temporal satellite images. The 3D wavelet transform can reduce the dimensionality of the data while preserving its main trends. It also makes it possible to extract important patterns and to analyze spatial and temporal relations with neighboring pixels. As a result, the 3D wavelet transform is useful for capturing long-term trends and short-term events rapidly. In addition, new information can be expected from the sub-bands of the 3D wavelet transform, which provide different information for each decomposition direction.
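
A small sketch of a one-level 3D wavelet decomposition of a multi-temporal image stack, using PyWavelets on a hypothetical (time, y, x) array; the wavelet choice and decomposition level are assumptions, not the paper's settings:

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical stack: 12 acquisition dates of a 64x64 image patch, ordered (t, y, x)
stack = np.random.rand(12, 64, 64).astype(np.float32)

# One-level 3D discrete wavelet transform
coeffs = pywt.wavedecn(stack, wavelet='haar', level=1)
approx = coeffs[0]             # low-pass approximation: smoothed trend in time and space
details = coeffs[1]            # dict of 7 detail sub-bands, one per decomposition direction

print(approx.shape)            # (6, 32, 32): every axis halved, dimensionality reduced
print(sorted(details.keys()))  # ['aad', 'ada', 'add', 'daa', 'dad', 'dda', 'ddd']
```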

Scene Change Detection and Key Frame Selection Using Fast Feature Extraction in the MPEG-Compressed Domain (MPEG 압축 영상에서의 고속 특징 요소 추출을 이용한 장면 전환 검출과 키 프레임 선택)

  • 송병철; 김명준; 나종범
    • Journal of Broadcast Engineering, v.4 no.2, pp.155-163, 1999
  • In this paper, we propose novel scene change detection and key frame selection techniques that use two feature images, i.e., DC and edge images, extracted directly from MPEG compressed video. For fast edge image extraction, we suggest utilizing the 5 lower AC coefficients of each DCT block. Based on this scheme, we present another edge image extraction technique using AC prediction. Although the former is superior to the latter in terms of visual quality, both methods can extract important edge features well. Simulation results indicate that scene changes such as cuts, fades, and dissolves can be correctly detected by using the edge energy diagram obtained from the edge images and histograms from the DC images. In addition, we find that our edge images are comparable to those obtained in the spatial domain while requiring a much lower computational cost. Based on the HVS, a key frame of each scene can also be selected. In comparison with an existing method using optical flow, our scheme can select semantic key frames because we use only the above edge and DC images.
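
A rough sketch of the two feature images the abstract describes: a DC image and an edge-energy map built from a few low-frequency AC coefficients of each 8×8 DCT block. In the compressed domain these coefficients come straight from the bitstream; the sketch below recomputes them from pixels purely for illustration, and the specific coefficient set is an assumption:

```python
import numpy as np
from scipy.fft import dctn

def dc_and_edge_images(frame, block=8):
    """DC image and a crude edge-energy map from the 5 lowest AC coefficients
    of each 8x8 DCT block of a grayscale frame."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block
    dc = np.empty((h // block, w // block))
    edge = np.empty_like(dc)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            c = dctn(frame[by:by + block, bx:bx + block], norm='ortho')
            dc[by // block, bx // block] = c[0, 0]
            # first 5 AC coefficients in zig-zag order (an assumed choice)
            ac = np.array([c[0, 1], c[1, 0], c[2, 0], c[1, 1], c[0, 2]])
            edge[by // block, bx // block] = np.sum(ac ** 2)
    return dc, edge
```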

A COMPARISON OF PERIAPICAL RADIOGRAPHS AND THEIR DIGITAL IMAGES FOR THE DETECTION OF SIMULATED INTERPROXIMAL CARIOUS LESIONS (모의 인접면 치아우식병소의 진단을 위한 구내 표준방사선사진과 그 디지털 영상의 비교)

  • Kim Hyun; Chung Hyun-Dae
    • Journal of Korean Academy of Oral and Maxillofacial Radiology, v.24 no.2, pp.279-290, 1994
  • The purpose of this study was to compare the diagnostic accuracy of periapical radiographs and their digitized images for the detection of simulated interproximal carious lesions. A total of 240 interproximal surfaces were used in this study. The case sample was composed of 80 anterior teeth, 80 bicuspids, and 80 molars, prepared so that the surfaces ranged from caries-free to those containing simulated carious lesions of varying depths (0.5㎜, 0.8㎜, and 1.2㎜). The periapical radiographs were taken with the paralleling technique on Kodak Ektaspeed (E group) film. All radiographs were evaluated by five dentists to establish the true status of the simulated carious lesions; they were asked to give a score of 0, 1, 2, or 3. Digitized images were obtained using a commercial video processor (FOTOVIX Ⅱ-XS), and the computer system was a 486 DX PC with a PC Vision frame grabber. The 17-inch display monitor had a resolution of 1280×1024 pixels (0.26㎜ dot pitch), while one frame of the intraoral radiograph had a resolution of 700×480 pixels with 256 grey levels per pixel. All radiographs and digital images were viewed under uniform subdued lighting in the same reading room, and a second interpretation was performed under the same conditions after one week. Detection of lesions on the monitor was compared with detection of the simulated interproximal carious lesions on the film images. The results were as follows. 1. When the scoring criterion was dichotomous (lesion present or not present): 1) the overall sensitivity, specificity, and diagnostic accuracy of periapical radiographs and their digital images showed no statistically significant difference; 2) the sensitivity and specificity according to the region of the teeth and the grade of the lesions showed no statistically significant difference between periapical radiographs and their digital images. 2. When the grade of the lesions was estimated (score 0, 1, 2, 3): 1) the overall diagnostic accuracy was 53.3% on the intraoral films and 52.9% on the digital images, with no significant difference; 2) the diagnostic accuracy according to the region of the teeth showed no statistically significant difference between periapical radiographs and their digital images. 3. Degree of agreement and reliability: 1) using the gamma value to express the degree of agreement, periapical films and digital images were similar; 2) the reliability of the two interpretations of periapical films and digital images showed no statistically significant difference. In all cases the P value was greater than 0.05, showing that both techniques can be used to detect incipient and moderate interproximal carious lesions with similar accuracy.
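
The dichotomous reading in this study reduces to standard sensitivity, specificity, and accuracy calculations. A minimal sketch with hypothetical readings (not the study's data) follows:

```python
import numpy as np

def dichotomous_accuracy(truth, scores):
    """Sensitivity, specificity and accuracy for the 'lesion present vs. absent'
    reading; scores 1-3 are read as 'present', 0 as 'absent'."""
    truth = np.asarray(truth, dtype=bool)        # True = simulated lesion actually present
    called = np.asarray(scores, dtype=int) > 0   # observer's call
    tp = np.sum(called & truth)
    tn = np.sum(~called & ~truth)
    fp = np.sum(called & ~truth)
    fn = np.sum(~called & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / truth.size,
    }

# Hypothetical readings for a handful of surfaces (not the study's data)
print(dichotomous_accuracy(truth=[1, 1, 0, 0, 1, 0], scores=[2, 0, 0, 1, 3, 0]))
```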

Comparison of static MRI and pseudo-dynamic MRI in tempromandibular joint disorder patients (측두하악관절장애 환자에서의 static MRI와 pseudo-dynamic MRI의 비교연구)

  • Lee, Jin-Ho; Yun, Kyoung-In; Park, In-Woo; Choi, Hang-Moon; Park, Moon-Soo
    • Imaging Science in Dentistry, v.36 no.4, pp.199-206, 2006
  • Purpose: The purpose of this study was to compare static MRI and pseudo-dynamic (cine) MRI in temporomandibular joint (TMJ) disorder patients. Materials and Methods: Thirty-three patients with TMJ disorders were examined using both conventional static MRI and pseudo-dynamic MRI. Multiple spoiled gradient recalled acquisition in the steady state (SPGR) images were obtained as the mouth opened and closed, and proton density weighted images were obtained at the closed and open mouth positions in static MRI. Two oral and maxillofacial radiologists evaluated the location of the articular disk, the movement of the condyle, and bony changes, and the posterior boundary of the articular disk was obtained. Results: No statistically significant difference was found between static MRI and pseudo-dynamic MRI in the observation of articular disk position, mandibular condylar movement, or the posterior boundary of the articular disk, whereas a statistically significant difference was noted in the bony changes of the condyle (P<0.05). Conclusion: This study showed that pseudo-dynamic MRI made no difference in diagnosing internal derangement of the TMJ in comparison with static MRI, but it can be considered an additional method to supplement the observation of bony changes.

The Similarity of the Image Comparison System utilizing OpenCV (OpenCV를 활용한 이미지 유사성 비교 시스템)

  • Ban, Tae-Hak; Bang, Jin-Suk; Yuk, Jung-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2016.05a, pp.834-835, 2016
  • In recent years, IT has been advancing rapidly. Accordingly, research on image processing technology using OpenCV, which provides real-time image processing and compatibility across multiple platforms, is actively in progress. At present, most systems that compare different images rely on people judging similarity from analogue figures, and their matching rates are low. In this paper, we present a system that uses the Template Matching and Feature Matching functions of OpenCV to express the similarity between different images as digital values. Features extracted at specific points of an image are matched to the same features at different sizes in a target image, and the recognized results are compared and verified. This allows matching rates to be checked in voice and image recognition and analysis more readily than with existing processing techniques. Further studies on OpenCV-based image processing technologies, for forensics and other fields, will be needed.
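
A small sketch of the two OpenCV routes the abstract names, template matching and feature matching, each reduced to a single similarity number; ORB stands in for the unspecified feature detector, and the ratio-test threshold is an assumption:

```python
import cv2
import numpy as np

def template_similarity(image, template):
    """Best normalized cross-correlation score (in [-1, 1]; higher is more similar)."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val

def feature_similarity(img_a, img_b, ratio=0.75):
    """Share of ORB keypoints in img_a that find a good match in img_b."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(kp_a), 1)
```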

Comparison of Deep Learning-based CNN Models for Crack Detection (콘크리트 균열 탐지를 위한 딥 러닝 기반 CNN 모델 비교)

  • Seol, Dong-Hyeon; Oh, Ji-Hoon; Kim, Hong-Jin
    • Journal of the Architectural Institute of Korea Structure & Construction, v.36 no.3, pp.113-120, 2020
  • The purpose of this study is to compare Deep Learning-based Convolutional Neural Network (CNN) models for concrete crack detection. The compared models are AlexNet, GoogLeNet, VGG16, VGG19, ResNet-18, ResNet-50, ResNet-101, and SqueezeNet, networks known from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). To train, validate, and test these models, we constructed 3,000 training images and 12,000 validation images with 256×256 pixel resolution consisting of cracked and non-cracked images, and 5 test images with 4160×3120 pixel resolution consisting of concrete images with cracks. To increase training efficiency, transfer learning was performed by taking the weights from the pre-trained networks supported by MATLAB. Using the trained networks, the validation data were classified into crack and non-crack images, yielding True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts, from which six performance indicators were calculated: False Negative Rate (FNR), False Positive Rate (FPR), Error Rate, Recall, Precision, and Accuracy. Each test image was scanned twice with a sliding window of 256×256 pixel resolution to classify cracks, resulting in a crack map. From the comparison of the performance indicators and the crack maps, it was concluded that VGG16 and VGG19 were the most suitable for detecting concrete cracks.
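
A PyTorch analogue of the transfer-learning step described above; the paper used MATLAB's pre-trained networks, so the framework, learning rate, and frozen-layer choice here are assumptions rather than the authors' setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and retrain only the classifier head
# for the two classes 'crack' / 'no crack'.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False          # freeze the convolutional feature extractor

model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

# One training step on a dummy 256x256 batch (replace with real crack images)
images = torch.randn(8, 3, 256, 256)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```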

Parallel Processing of k-Means Clustering Algorithm for Unsupervised Classification of Large Satellite Images: A Hybrid Method Using Multicores and a PC-Cluster (대용량 위성영상의 무감독 분류를 위한 k-Means Clustering 알고리즘의 병렬처리: 다중코어와 PC-Cluster를 이용한 Hybrid 방식)

  • Han, Soohee; Song, Jeong Heon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.6, pp.445-452, 2019
  • In this study, parallel processing codes for the k-means clustering algorithm were developed and implemented on a PC-cluster for unsupervised classification of large satellite images. We implemented an intra-node code using the multicores of the CPU (Central Processing Unit) based on OpenMP (Open Multi-Processing), an inter-node code using the PC-cluster based on the message passing interface (MPI), and a hybrid code using both. The PC-cluster consists of one master node and eight slave nodes, each equipped with eight cores. Two operating systems, Microsoft Windows and Canonical Ubuntu, were installed on the PC-cluster in turn and tested to compare parallel processing performance. Two multispectral satellite images were tested: a medium-capacity LANDSAT 8 OLI (Operational Land Imager) image and a high-capacity Sentinel-2A image. To evaluate the parallel processing performance, speedup and efficiency were measured. Overall, the speedup was over N/2 and the efficiency was over 0.5. From the comparison of the two operating systems, the Ubuntu system showed two to three times faster performance. To confirm that the results of sequential and parallel processing coincide with each other, the center value of each band and the number of classified pixels were compared, and the result images were examined by pixel-to-pixel comparison. It was found that care should be taken to avoid false sharing in the OpenMP intra-node implementation. To process large satellite images on a PC-cluster, code and hardware should be designed to reduce the performance degradation caused by file I/O. It was also found that performance can differ depending on the operating system installed on the PC-cluster.
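
A Python/mpi4py sketch of the inter-node part of a parallel k-means iteration like the one described above; the paper's OpenMP/MPI codes are not reproduced, and the cluster count, band count, and data here are placeholders:

```python
import numpy as np
from mpi4py import MPI

# Each node classifies its block of pixels against the current centres,
# then partial sums and counts are combined with Allreduce.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

k, bands = 8, 7
rng = np.random.default_rng(rank)
pixels = rng.random((100_000, bands), dtype=np.float32)   # this node's share of the image
centers = comm.bcast(rng.random((k, bands), dtype=np.float32) if rank == 0 else None, root=0)

for _ in range(20):
    # nearest-centre assignment for the local pixels
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)

    # local partial sums and counts, then global reduction
    local_sum = np.zeros((k, bands), dtype=np.float64)
    local_cnt = np.zeros(k, dtype=np.int64)
    for c in range(k):
        mask = labels == c
        local_sum[c] = pixels[mask].sum(axis=0)
        local_cnt[c] = mask.sum()
    global_sum = np.empty_like(local_sum)
    global_cnt = np.empty_like(local_cnt)
    comm.Allreduce(local_sum, global_sum, op=MPI.SUM)
    comm.Allreduce(local_cnt, global_cnt, op=MPI.SUM)
    centers = (global_sum / np.maximum(global_cnt, 1)[:, None]).astype(np.float32)
```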

Artifact Reduction in Sparse-view Computed Tomography Image using Residual Learning Combined with Wavelet Transformation (Wavelet 변환과 결합한 잔차 학습을 이용한 희박뷰 전산화단층영상의 인공물 감소)

  • Lee, Seungwan
    • Journal of the Korean Society of Radiology, v.16 no.3, pp.295-302, 2022
  • The sparse-view computed tomography (CT) imaging technique can reduce radiation dose, ensure the uniformity of image characteristics among projections, and suppress noise. However, images reconstructed with the sparse-view technique suffer from severe artifacts, which distort image quality and internal structures. In this study, we proposed a convolutional neural network (CNN) with wavelet transformation and residual learning for reducing artifacts in sparse-view CT images, and the performance of the trained model was quantitatively analyzed. The CNN consisted of wavelet transformation, convolutional, and inverse wavelet transformation layers, and the input and output images were configured as sparse-view CT images and residual images, respectively. For training the CNN, the loss function was calculated using the mean squared error (MSE), and the Adam function was used as the optimizer. Result images were obtained by subtracting the residual images predicted by the trained model from the sparse-view CT images. The quantitative accuracy of the result images was measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The results showed that the trained model improves the spatial resolution of the result images and effectively reduces artifacts in sparse-view CT images. The trained model also increased the PSNR and SSIM by 8.18% and 19.71%, respectively, in comparison to a model trained without wavelet transformation and residual learning. Therefore, the imaging model proposed in this study can restore the image quality of sparse-view CT images by reducing artifacts and improving spatial resolution and quantitative accuracy.
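
A minimal sketch of the evaluation step described in the abstract: the restored image is formed by subtracting the predicted residual from the sparse-view image, then scored with PSNR and SSIM via scikit-image. The arrays here are hypothetical placeholders, not the paper's data:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_restoration(sparse_view, predicted_residual, reference):
    """Restored image = sparse-view image minus predicted residual, scored
    against a reference with PSNR and SSIM."""
    restored = sparse_view - predicted_residual
    data_range = float(reference.max() - reference.min())
    return {
        "psnr": peak_signal_noise_ratio(reference, restored, data_range=data_range),
        "ssim": structural_similarity(reference, restored, data_range=data_range),
    }

# Hypothetical 2D slices (not the paper's data)
ref = np.random.rand(256, 256).astype(np.float32)
sparse = (ref + 0.05 * np.random.randn(256, 256)).astype(np.float32)
residual = 0.9 * (sparse - ref)     # an imperfect residual prediction, for illustration
print(evaluate_restoration(sparse, residual, ref))
```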