• Title/Summary/Keyword: image merging accuracy


Comparative Research of Image Classification and Image Segmentation Methods for Mapping Rural Roads Using a High-resolution Satellite Image (고해상도 위성영상을 이용한 농촌 도로 매핑을 위한 영상 분류 및 영상 분할 방법 비교에 관한 연구)

  • CHOUNG, Yun-Jae;GU, Bon-Yup
    • Journal of the Korean Association of Geographic Information Studies / v.24 no.3 / pp.73-82 / 2021
  • Rural roads are significant infrastructure for developing and managing rural areas; hence, the use of remote sensing datasets for managing rural roads is necessary for expanding rural transportation infrastructure and improving the quality of life of rural residents. In this research, two different methods, image classification and image segmentation, were compared for mapping rural roads from a given high-resolution satellite image acquired over rural areas. In the image classification method, deep learning with multiple neural networks was applied to the satellite image to generate an object classification map, and the rural roads were then mapped by extracting the road objects from that map. In the image segmentation method, multiresolution segmentation was applied to the same satellite image to generate a segment image, and the rural roads were then mapped by merging the segments located on the rural roads in the satellite image. We used 100 checkpoints to assess the accuracy of the two rural road maps produced by the different methods and drew the following conclusions. The image segmentation method performed better than the image classification method for mapping the rural roads from the given satellite image, because some of the rural roads mapped by the image classification method were not identified owing to misclassification errors in the object classification map, whereas all of the rural roads mapped by the image segmentation method were identified. However, some of the rural roads mapped by the image segmentation method also contained misclassification errors, because some rural road segments included non-road objects. In future research, object-oriented classification or convolutional neural networks, which are widely used for detecting precise objects from image sources, could be used to improve the accuracy of rural road mapping from high-resolution satellite images.
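A minimal sketch of the kind of checkpoint-based accuracy assessment described in this abstract (not the authors' code; the arrays, image size, and checkpoint layout are hypothetical placeholders):

```python
import numpy as np

def checkpoint_accuracy(road_map, checkpoints, reference_labels):
    """road_map: 2-D boolean array (True = mapped as rural road).
    checkpoints: (N, 2) integer array of (row, col) pixel positions.
    reference_labels: (N,) boolean array of ground-truth road / non-road."""
    rows, cols = checkpoints[:, 0], checkpoints[:, 1]
    predicted = road_map[rows, cols]
    return float(np.mean(predicted == reference_labels))

# Hypothetical usage with 100 checkpoints on a 1000 x 1000 scene.
rng = np.random.default_rng(0)
classification_map = rng.random((1000, 1000)) > 0.5   # stand-in for the classification-based road map
segmentation_map = rng.random((1000, 1000)) > 0.5     # stand-in for the segmentation-based road map
checkpoints = rng.integers(0, 1000, size=(100, 2))
truth = rng.random(100) > 0.5

print("classification accuracy:", checkpoint_accuracy(classification_map, checkpoints, truth))
print("segmentation accuracy:", checkpoint_accuracy(segmentation_map, checkpoints, truth))
```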

Region-based Building Extraction of High Resolution Satellite Images Using Color Invariant Features (색상 불변 특징을 이용한 고해상도 위성영상의 영역기반 건물 추출)

  • Ko, A-Reum;Byun, Young-Gi;Park, Woo-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing / v.27 no.2 / pp.75-87 / 2011
  • This paper presents a method for region-based building extraction from high-resolution satellite images (HRSI) using integrated spectral and color invariant features, without user intervention such as selecting training data sets. The purpose of this study is also to evaluate the effectiveness of the proposed method by applying it to IKONOS and QuickBird images. First, the image is segmented by the MSRG method, and the vegetation and shadow regions are automatically detected and masked to facilitate building extraction. Second, region merging is performed on the masked image using the integrated spectral and color invariant features. Finally, the building regions are extracted using a shape feature computed for the merged regions, and the boundaries of the extracted buildings are simplified using generalization techniques to improve the completeness of the extraction. The experimental results showed more than 80% accuracy for the two study areas, and visually satisfactory results were obtained. In conclusion, the proposed method shows great potential for building extraction from HRSI.
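As an illustration of the color invariant features this abstract relies on, the following sketch computes the c1c2c3 invariants of Gevers and Smeulders; the paper does not state which invariant model it uses, so this particular choice is an assumption:

```python
import numpy as np

def c1c2c3(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns per-pixel color invariant features that are largely
    insensitive to shading and illumination intensity."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6  # avoid division by zero on dark pixels
    c1 = np.arctan(r / (np.maximum(g, b) + eps))
    c2 = np.arctan(g / (np.maximum(r, b) + eps))
    c3 = np.arctan(b / (np.maximum(r, g) + eps))
    return np.stack([c1, c2, c3], axis=-1)

# Hypothetical usage: region merging could then compare mean c1c2c3
# vectors of adjacent segments instead of raw spectral values.
image = np.random.rand(256, 256, 3)
features = c1c2c3(image)
print(features.shape)  # (256, 256, 3)
```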

3D Image Mergence using Weighted Bipartite Matching Method based on Minimum Distance (최소 거리 기반 가중치 이분 분할 매칭 방법을 이용한 3차원 영상 정합)

  • Jang, Taek-Jun;Joo, Ki-See;Jang, Bog-Ju;Kang, Kyeang-Yeong
    • Journal of Advanced Navigation Technology / v.12 no.5 / pp.494-501 / 2008
  • In this paper, to merge the complete 3D information of a body that is partly occluded from any single viewpoint, a new image merging algorithm is introduced after acquiring images of the body on a turntable from four directions. Two images represented by polygon meshes are merged using a weighted bipartite matching method, with different weights assigned according to coordinates and axes based on minimum distance, since the two merged images do not present abrupt variation of 3D coordinates and the scan proceeds in a single direction. To obtain the entire 3D information of the body, this merging step is repeated three times, since four images are obtained. The proposed method is 200-300% faster in search time than the conventional branch and bound, dynamic programming, and Hungarian methods, although its matching accuracy rate is slightly lower than theirs.
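For comparison, the following sketch shows a conventional weighted bipartite matching of vertices from two partial scans using the Hungarian method, which the abstract lists as one of the slower baselines; the point sets are hypothetical and this is not the authors' faster algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
scan_a = rng.random((50, 3))                     # vertices from view 1
scan_b = scan_a + rng.normal(0, 0.01, (50, 3))   # vertices from view 2 (slightly shifted)

cost = cdist(scan_a, scan_b)                     # pairwise Euclidean distances as weights
rows, cols = linear_sum_assignment(cost)         # minimum-cost perfect matching
print("mean matched distance:", cost[rows, cols].mean())
```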


Text Area Extraction Method for Color Images Based on Labeling and Gradient Difference Method (레이블링 기법과 밝기값 변화에 기반한 컬러영상의 문자영역 추출 방법)

  • Won, Jong-Kil;Kim, Hye-Young;Cho, Jin-Soo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.511-521 / 2011
  • As the use of image input and output devices increases, the importance of extracting text areas from color images is also increasing. In this paper, to extract text areas from images efficiently, we present a text area extraction method for color images based on labeling and a gradient difference method. The proposed method first eliminates non-text areas using labeling and filtering processes. After generating candidate text areas by exploiting the property that gradient differences are high in text areas, the text area is extracted using post-processing steps of noise removal and text area merging. The benefits of the proposed method are its simplicity and an accuracy higher than that of conventional methods. Experimental results show that the precision, recall, and inverse ratio of non-text extraction (IRNTE) of the proposed method are 99.59%, 98.65%, and 82.30%, respectively.
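A rough sketch of the two ideas named in this abstract, connected-component labeling and a gradient-difference score, under assumed thresholds and a synthetic grayscale input (not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def text_candidates(gray, bin_thresh=0.5, min_size=20, grad_thresh=0.1):
    """gray: 2-D float array in [0, 1]. Returns a boolean mask of text-like regions."""
    binary = gray > bin_thresh
    labels, n = ndimage.label(binary)          # connected-component labeling
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)                    # gradient magnitude
    keep = np.zeros_like(binary)
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() < min_size:            # filtering: drop tiny noise components
            continue
        if grad[region].mean() > grad_thresh:  # keep regions with large gradient differences
            keep |= region
    return keep

print("candidate pixels:", text_candidates(np.random.rand(128, 128)).sum())
```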

Study on Heart Rate Variability and PSD Analysis of PPG Data for Emotion Recognition (감정 인식을 위한 PPG 데이터의 심박변이도 및 PSD 분석)

  • Choi, Jin-young;Kim, Hyung-shin
    • Journal of Digital Contents Society / v.19 no.1 / pp.103-112 / 2018
  • In this paper, we propose a method for recognizing emotions using a PPG sensor, which measures blood flow that varies with emotion. From the PPG signal, we determine positive and negative emotions in the frequency domain through PSD (Power Spectral Density) analysis. Based on Russell's two-dimensional circumplex model, we classify emotions as joy, sadness, irritability, and calmness and examine their association with the magnitude of energy in the frequency domain. A notable aspect of this study is that it used the same type of PPG sensor found in wearable devices to measure the four emotions above in the frequency domain through experiments with video stimuli. Through a questionnaire, accuracy, individual immersion level, emotional change, and biofeedback for each video were collected. The proposed method is expected to enable a variety of developments, such as commercial application services using PPG and mobile application prediction services, by merging it with the context information of existing smartphones.
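A minimal sketch of PSD analysis for heart rate variability, using Welch's method on a synthetic RR-interval series and the standard LF/HF bands; the abstract does not specify the exact bands or sensor settings, so these are assumptions:

```python
import numpy as np
from scipy.signal import welch

fs = 4.0                                  # resampling rate of the RR-interval series (assumed)
t = np.arange(0, 300, 1 / fs)             # five minutes of data
# Synthetic tachogram: a 0.8 s mean RR interval with a low-frequency (0.10 Hz)
# and a respiratory high-frequency (0.25 Hz) oscillation plus noise.
rr = (0.8
      + 0.02 * np.sin(2 * np.pi * 0.10 * t)
      + 0.01 * np.sin(2 * np.pi * 0.25 * t)
      + 0.005 * np.random.randn(t.size))

freqs, psd = welch(rr, fs=fs, nperseg=512)   # Power Spectral Density via Welch's method

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # rectangle-rule integral over the band

lf = band_power(freqs, psd, 0.04, 0.15)   # low-frequency power
hf = band_power(freqs, psd, 0.15, 0.40)   # high-frequency power
print("LF/HF ratio:", lf / hf)            # one common HRV feature used in affect analysis
```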

A Study on the extraction of activity obstacles to improve self-driving efficiency (자율주행 효율성 향상을 위한 활동성 장애물 추출에 관한 연구)

  • Park, Chang min
    • Journal of Platform Technology / v.9 no.4 / pp.71-78 / 2021
  • Self-driving vehicles are increasingly adopted as a new alternative for addressing problems such as human safety, the environment, and an aging society, and this technology development has a large ripple effect on other industries. However, various problems are occurring, and the number of casualties caused by self-driving is increasing. Although collisions with fixed obstacles are somewhat decreasing, the technology for handling active (moving) obstacles is still immature. Therefore, in this study, to address this core problem of self-driving vehicles, we propose a method of extracting active obstacles on the road. First, a center scene is extracted from a continuous image sequence. Active obstacles are then extracted using activity size and activity repeatability information for the objects included in the center scene. The center scene is computed using region segmentation and merging. Based on these results, the frequency of change of each pixel in a region is calculated, and the activity size of an obstacle is calculated from information on how frequently it appears in motion. Compared with results extracted manually by humans, the extraction accuracy was somewhat lower, but satisfactory results were obtained. The proposed method is therefore expected to contribute to solving self-driving problems and reducing human accidents.
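A loose, hypothetical interpretation of the per-pixel change-frequency idea described above (the frame data and thresholds are placeholders, not the author's method):

```python
import numpy as np

def activity_mask(frames, diff_thresh=0.1, freq_thresh=0.3):
    """frames: array of shape (T, H, W), grayscale in [0, 1].
    A pixel is 'active' if it changes between consecutive frames in more
    than freq_thresh of the sequence."""
    diffs = np.abs(np.diff(frames, axis=0)) > diff_thresh   # per-frame change flags
    change_freq = diffs.mean(axis=0)                        # frequency of change per pixel
    return change_freq > freq_thresh

frames = np.random.rand(30, 120, 160)
print("active pixels:", activity_mask(frames).sum())
```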

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • Proceedings of the KSRS Conference / 2002.10a / pp.65-65 / 2002
  • In remote sensing, images are acquired over the same area by sensors with different spectral ranges (from the visible to the microwave) and/or with different numbers, positions, and widths of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many image classification applications, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can support more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to classifying multisensor data, as a data fusion scheme at the pixel level, is to concatenate the data into one vector as if they were measurements from a single sensor. However, the multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence may make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. For modeling and eliminating the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the self-certainty of the individual bands, while the conditional information variation reflects the degree of dependence between the different bands. One data set might be much less reliable than the others and could even degrade the classification results; such unreliable data sets should be excluded from the analysis. To account for this, the self information variation is used to measure the degree of reliability. A group of positively dependent bands can jointly gather more information than a group of independent ones, but when bands are negatively dependent, the combined analysis of these bands may yield worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a data fusion scheme at the decision level is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. The hierarchical clustering algorithm is based on similarity measures between all pairs of candidates considered for merging. In the first level, the image is partitioned into a number of regions, which are sets of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from the first level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.
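A generic sketch of the second level of such a two-level scheme, agglomerative hierarchical clustering of region statistics; it omits the paper's self and conditional information variation measures and uses hypothetical region means:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Suppose the first level produced 200 spatially contiguous regions,
# each summarized by its mean vector over 4 spectral/SAR bands.
region_means = np.vstack([
    rng.normal(0.2, 0.05, (100, 4)),   # one land-cover group
    rng.normal(0.7, 0.05, (100, 4)),   # another land-cover group
])

Z = linkage(region_means, method="ward")         # merge the most similar regions first
labels = fcluster(Z, t=5, criterion="maxclust")  # cut the tree into ~5 classes
print("cluster sizes:", np.bincount(labels)[1:])
```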


Application of Hyperspectral Imagery to Decision Tree Classifier for Assessment of Spring Potato (Solanum tuberosum) Damage by Salinity and Drought (초분광 영상을 이용한 의사결정 트리 기반 봄감자(Solanum tuberosum)의 염해 판별)

  • Kang, Kyeong-Suk;Ryu, Chan-Seok;Jang, Si-Hyeong;Kang, Ye-Seong;Jun, Sae-Rom;Park, Jun-Woo;Song, Hye-Young;Lee, Su Hwan
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.4 / pp.317-326 / 2019
  • Salinity, which is often detected on reclaimed land, is a major detrimental factor for crop growth. It would be advantageous to develop an approach for assessing salinity and drought damage with a non-destructive method over large reclaimed-land areas. The objective of this study was to examine the applicability of a decision tree classifier using hyperspectral imagery for classifying spring potatoes (Solanum tuberosum) damaged by salinity or drought at vegetative growth stages. We focused on comparing the overall accuracy (OA) and Kappa coefficient (KC) obtained from simple reflectance with those obtained from band ratios, which minimize the effect of uneven illumination. Spectral merging based on commercial band widths, with full widths at half maximum (FWHM) of 10 nm, 25 nm, and 50 nm, was also considered with a view to designing a multispectral image sensor. For the classification based on simple reflectance, with the original 5 nm FWHM the number of selected bands ranged from 3 to 13, and the accuracy was below 66.7% OA and 40.8% KC for all FWHMs. The maximum OA and KC values were 78.7% and 57.7%, respectively, at 10 nm FWHM for classifying salinity and drought damage of spring potato. When the classifier was built on the band ratios, the accuracy was more than 95% for both OA and KC regardless of growth stage and FWHM. If a multispectral image sensor were built with the six bands (forming three band ratios) at 10 nm FWHM, it would be possible to classify spring potatoes damaged by salinity or drought from image reflectance with 91.3% OA and 85.0% KC.
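A minimal sketch of a decision tree classifier built on band-ratio features, reporting OA and KC as in the abstract; the synthetic reflectance values, band pairings, and class labels are hypothetical:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(3)
n_samples, n_bands = 300, 6
reflectance = rng.random((n_samples, n_bands))   # six merged spectral bands
labels = rng.integers(0, 3, n_samples)           # e.g. normal / salinity / drought

# Three band ratios built from pairs of the six bands.
ratios = np.column_stack([
    reflectance[:, 0] / reflectance[:, 1],
    reflectance[:, 2] / reflectance[:, 3],
    reflectance[:, 4] / reflectance[:, 5],
])

X_train, X_test, y_train, y_test = train_test_split(ratios, labels, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("OA:", accuracy_score(y_test, pred))       # overall accuracy
print("KC:", cohen_kappa_score(y_test, pred))    # Kappa coefficient
```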