• Title/Summary/Keyword: Class Extraction (클래스 추출)


Fire Severity Mapping Using a Single Post-Fire Landsat 7 ETM+ Imagery (단일 시기의 Landsat 7 ETM+ 영상을 이용한 산불피해지도 작성)

  • 원강영;임정호
    • Korean Journal of Remote Sensing / v.17 no.1 / pp.85-97 / 2001
  • The KT (Kauth-Thomas) and IHS (Intensity-Hue-Saturation) transformation techniques were introduced and compared for investigating fire-scarred areas with a single post-fire Landsat 7 ETM+ image. This study consists of two parts. First, using only geometrically corrected imagery, we examined whether different levels of fire damage could be detected by a simple slicing method applied to the IHS-enhanced image. Because the spectral distributions of the classes overlapped on each IHS component, simple slicing did not appear appropriate for delineating areas of different fire severity levels. Second, imagery corrected both radiometrically and topographically was enhanced by the KT transformation and the IHS transformation, respectively, and the resulting images were classified by the maximum likelihood method. Cross-validation was performed to compensate for the relatively small set of ground-truth data. The results showed that the KT transformation produced better accuracy than the IHS transformation. In addition, the KT feature spaces and the spectral distributions of the IHS components were analyzed graphically. The study shows that, for detecting different levels of fire severity, the KT transformation reflects ground physical conditions better than the IHS transformation. (A minimal sketch of the two transforms follows this entry.)
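A minimal sketch, assuming NumPy arrays of band reflectance: the KT transform is a fixed linear combination of the bands (the coefficient matrix below is a placeholder, not the published ETM+ coefficients), and a simple RGB-to-IHS conversion followed by slicing one component illustrates the thresholding idea. Function names and the slicing rule are illustrative only.

```python
# Hedged sketch: linear KT (tasseled cap) transform and an RGB->IHS conversion
# for a Landsat-like band stack, then slicing one component to flag burn scars.
import numpy as np

def kt_transform(bands, coeffs):
    """bands: (n_bands, H, W) reflectance stack; coeffs: (3, n_bands) placeholder
    rows for brightness/greenness/wetness. Returns (3, H, W) KT components."""
    n, h, w = bands.shape
    return (coeffs @ bands.reshape(n, -1)).reshape(3, h, w)

def rgb_to_ihs(r, g, b):
    """Simple intensity/hue/saturation conversion of three bands scaled to [0, 1]."""
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-6)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    return i, np.where(b > g, 2 * np.pi - h, h), s

# Example: slice the greenness component to flag candidate burn scars.
bands = np.random.rand(6, 128, 128)          # stand-in for 6 ETM+ reflective bands
coeffs = np.random.rand(3, 6)                # placeholder KT coefficients
brightness, greenness, wetness = kt_transform(bands, coeffs)
burn_mask = greenness < np.percentile(greenness, 20)   # crude slicing rule
```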

Deep Learning Structure Suitable for Embedded System for Flame Detection (불꽃 감지를 위한 임베디드 시스템에 적합한 딥러닝 구조)

  • Ra, Seung-Tak;Lee, Seung-Ho
    • Journal of IKEEE / v.23 no.1 / pp.112-119 / 2019
  • In this paper, we propose a deep learning structure suitable for embedded systems. The flame detection process of the proposed structure consists of four steps: flame area detection using a flame color model; flame image classification using a deep learning structure specialized for flame color; $N{\times}N$ cell separation within the detected flame area; and flame image classification using a deep learning structure specialized for flame shape. First, only flame-colored pixels are extracted from the input image and labeled to detect the flame area. Second, the detected flame area is fed to the color-specialized deep learning structure and is classified as a flame image only if the output probability of the flame class exceeds 75%. Third, flame areas whose flame-class probability is below 75% in the preceding step are divided into $N{\times}N$ cells. Fourth, each cell is fed to the shape-specialized deep learning structure and judged individually; the area is classified as a flame image if more than 50% of its cells are classified as flame. To verify the effectiveness of the proposed structure, we experimented with a flame database drawn from ImageNet. Experimental results show that the proposed structure has an average resource occupancy of 29.86% and a flame detection time of 8 seconds. The flame detection rate was on average 0.95% lower than that of the existing deep learning structure, a consequence of the lightweight design adopted for embedded systems. Therefore, the deep learning structure for flame detection proposed in this paper is shown to be suitable for embedded system applications. (A sketch of the four-step pipeline follows this entry.)
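A minimal sketch of the four-step pipeline, assuming `color_cnn` and `shape_cnn` are hypothetical callables that return a flame-class probability for an image patch; the color thresholds below are illustrative stand-ins, not the paper's flame color model.

```python
# Hedged sketch: flame colour mask -> colour-specialised CNN (75% threshold)
# -> N x N cell split -> shape-specialised CNN with a 50% cell vote.
import numpy as np

def flame_color_mask(img_rgb):
    """Very rough flame colour model: bright, red-dominant pixels."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return (r > 200) & (r > g) & (g > b)

def detect_flame(img_rgb, color_cnn, shape_cnn, n=4):
    mask = flame_color_mask(img_rgb)
    if not mask.any():
        return False
    ys, xs = np.where(mask)
    region = img_rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # step 1: flame area

    if color_cnn(region) > 0.75:               # step 2: colour-specialised CNN
        return True

    h, w = region.shape[:2]                    # step 3: split into n x n cells
    cells = [region[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
             for i in range(n) for j in range(n)]

    votes = sum(shape_cnn(c) > 0.5 for c in cells)   # step 4: shape-specialised CNN
    return votes > len(cells) * 0.5
```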

Contactless User Identification System using Multi-channel Palm Images Facilitated by Triple Attention U-Net and CNN Classifier Ensemble Models

  • Kim, Inki;Kim, Beomjun;Woo, Sunghee;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.33-43 / 2022
  • In this paper, we propose an ensemble model facilitated by multi-channel palm images with attention U-Net models and pretrained convolutional neural networks (CNNs) for establishing a contactless palm-based user identification system using conventional inexpensive camera sensors. Attention U-Net models are used to extract the areas of interest, including hands (i.e., with fingers), palms (i.e., without fingers), and palm lines, which are combined into three channels and fed into the ensemble classifier. The proposed palm-information-based user identification system then predicts the class using a classifier ensemble of three well-performing pretrained CNN models. The proposed model achieves classification accuracy, precision, recall, and F1-score of 98.60%, 98.61%, 98.61%, and 98.61%, respectively, indicating that it is effective even with inexpensive image sensors. We believe that under the circumstances of the COVID-19 pandemic, the proposed palm-based contactless user identification system can be a safe and reliable alternative to the currently dominant contact-based systems. (A sketch of the segmentation-plus-ensemble pipeline follows this entry.)
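A minimal sketch, assuming PyTorch/torchvision: three segmentation outputs (hand, palm, palm lines) are stacked into a 3-channel tensor and the softmax outputs of three pretrained CNN classifiers are averaged. The segmentation models are assumed to be given callables; the architectures and class count are illustrative, not necessarily the paper's choices.

```python
# Hedged sketch: multi-channel input from three segmentation masks,
# classified by an ensemble of pretrained CNNs whose probabilities are averaged.
import torch
import torch.nn.functional as F
from torchvision import models

def build_classifier(arch_fn, num_classes):
    net = arch_fn(weights="DEFAULT")           # loads ImageNet-pretrained weights
    net.fc = torch.nn.Linear(net.fc.in_features, num_classes)  # ResNet-style head
    return net

def identify(img, seg_hand, seg_palm, seg_lines, classifiers):
    """img: (1, 3, H, W) tensor; seg_*: callables returning (1, 1, H, W) masks."""
    channels = torch.cat([seg_hand(img), seg_palm(img), seg_lines(img)], dim=1)
    probs = torch.stack([F.softmax(c(channels), dim=1) for c in classifiers]).mean(0)
    return probs.argmax(dim=1)                  # predicted user class

classifiers = [build_classifier(models.resnet50, num_classes=100),
               build_classifier(models.resnet34, num_classes=100),
               build_classifier(models.resnet18, num_classes=100)]
```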

Classifying the Latent Group of Elementary School Students Based on Social Achievement Goals Types and the Exploration of Peer Status and Aggression (초등학생의 사회적 성취목표 유형에 따른 잠재집단 분류와 또래지위 및 공격성과의 관련성 탐색)

  • Choi, Eun-Young
    • Korean Journal of School Psychology / v.17 no.2 / pp.223-241 / 2020
  • The purpose of this study was to explore the latent profiles of social achievement goals and to investigate the differences in peer status (perceived popularity, social preference) and aggression (overt, relational, cyber) among the profile groups. Social achievement goals and cyber aggression were measured by self-report, and perceived popularity, social preference, and overt and relational aggression were assessed through peer nomination. Applying latent profile analysis (LPA) to 1,239 elementary school students, three distinct groups of social achievement goals were identified: a development-oriented achievement goal group, an average social goal group, and an overall-high social achievement goal group. Logistic regression analysis was used to examine the relationships between latent group membership, peer status, and aggression. The results indicated that the higher the social preference, the lower the probability of belonging to the overall-high social achievement goal group, and the higher the cyber aggression, the lower the probability of belonging to the development-oriented achievement goal group. In addition, the higher the relational aggression at the second time point, the higher the probability of belonging to the overall-high social achievement goal group relative to the average social goal group. (An analysis sketch follows this entry.)
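A minimal analysis sketch, approximating latent profile analysis with a Gaussian mixture in scikit-learn and relating profile membership to peer status and aggression via logistic regression; the data below are random placeholders and the variable names are illustrative only.

```python
# Hedged sketch: LPA-style profiling of goal scores, then logistic regression
# of profile membership on peer status and aggression predictors.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
goal_scores = rng.normal(size=(1239, 3))   # placeholder social achievement goal scores
predictors = rng.normal(size=(1239, 5))    # popularity, preference, overt/relational/cyber aggression

lpa = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
profiles = lpa.fit_predict(goal_scores)    # three latent goal profiles

clf = LogisticRegression(max_iter=1000)
clf.fit(predictors, profiles)              # log-odds of profile membership
print(clf.coef_)                           # sign/size of each predictor's association
```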

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotion images, researchers regularly propose CNN-based architectures tailored to emotion images. Studies on the relationship between color and human emotion have also shown that different emotions are induced by different colors, and studies using deep learning have applied color information to image sentiment classification. Using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both modify the result value based on statistics over the picture's colors. The first method finds the two-color combinations most frequent across all training data and, at test time, finds the two-color combination most frequent in each test image; the result values are then corrected according to the color-combination distribution. The second method weights the result value obtained after the model classifies an image's emotion using expressions based on logarithmic and exponential functions. Emotion6, classified into six emotions, and ArtPhoto, classified into eight categories, were used as image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using Scikit-learn's clustering, the seven colors most prevalent in each image are identified; the RGB coordinates of these colors are then compared with the RGB coordinates of the 16 reference colors, and each is converted to the closest reference color. If combinations of three or more colors are used, too many combinations occur, the distribution becomes scattered, and each combination has little influence on the result value; therefore, two-color combinations were used and weighted into the model. Before training, the most frequent color combinations were found for all training images, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During testing, the two-color combination most frequent in each test image is found, its distribution over the training data is checked, and the result is corrected. Several equations were devised to weight the model's result value based on the extracted colors, as described above.
  The data set was randomly split 80:20, and the model was verified using 20% of the data as a test set. The remaining 80% was split into five folds to perform 5-fold cross-validation, so the model was trained five times with different validation sets. Finally, performance was checked on the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, the experiment was stopped. Early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN architecture than when only the CNN architecture was used. (A sketch of the color-extraction and reweighting step follows this entry.)
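A minimal sketch of the color step, assuming scikit-learn: K-means extracts dominant colors, each is mapped to the nearest of the 16 reference colors, and the CNN's class probabilities are reweighted by how often each class co-occurred with the image's two-color combination in training. The reference RGB values shown and the log-based weighting are illustrative guesses, not the paper's equations.

```python
# Hedged sketch: dominant-colour extraction, nearest-reference-colour mapping,
# and a log-based reweighting of CNN class probabilities.
import numpy as np
from sklearn.cluster import KMeans

REFERENCE_RGB = {"red": (255, 0, 0), "blue": (0, 0, 255), "white": (255, 255, 255),
                 "black": (0, 0, 0)}   # ... the remaining 12 named colours would follow

def dominant_combination(img, k=7):
    """Return the image's two most frequent reference colours (sorted tuple)."""
    pixels = img.reshape(-1, 3).astype(float)
    centers = KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_
    names = list(REFERENCE_RGB)
    refs = np.array(list(REFERENCE_RGB.values()), float)
    nearest = [names[np.argmin(((c - refs) ** 2).sum(1))] for c in centers]
    top2 = sorted(set(nearest), key=nearest.count)[-2:]
    return tuple(sorted(top2))

def reweight(probs, combo, combo_class_freq):
    """probs: CNN class probabilities; combo_class_freq: {(combo, class): count} from training."""
    counts = np.array([combo_class_freq.get((combo, c), 0) for c in range(len(probs))], float)
    return probs * (1.0 + np.log1p(counts))    # illustrative log-based weighting
```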

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.253-266 / 2018
  • This paper proposes a methodology that applies sequence tagging to improve the performance of NER (Named Entity Recognition) in a QA system. To retrieve the correct answers stored in a database, the user's query must be converted into a database language such as SQL (Structured Query Language) so that the computer can interpret it; this requires identifying the class or data names contained in the database. The existing approach of looking up the query's words in the database and recognizing entities cannot disambiguate homonyms or multi-word phrases because it does not consider the context of the user's query. If there are multiple search results, all of them are returned, so the query admits many interpretations and the computational time complexity becomes large. To overcome these problems, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address the weakness of neural network models, which cannot identify untrained words, by using an ontology knowledge-based feature. Experiments were conducted on an ontology knowledge base of the music domain and the performance was evaluated. To evaluate the proposed L-Bidirectional LSTM-CRF accurately, we converted words included in the training queries into untrained words, to test whether words that exist in the database but were not seen during training could still be identified correctly. As a result, entities could be recognized with context taken into account, untrained words could be recognized without re-training the L-Bidirectional LSTM-CRF model, and the overall entity recognition performance was confirmed to improve. (A sketch of the tagger with an ontology feature follows this entry.)
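A minimal sketch of a BiLSTM-CRF tagger whose word embeddings are concatenated with an ontology-derived feature (for example, an indicator that the token matches a class or instance name in the knowledge base). The third-party pytorch-crf package is an assumption here; dimensions and names are illustrative, not the paper's configuration.

```python
# Hedged sketch: BiLSTM over [word embedding ; ontology feature], CRF on top.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class OntoBiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, onto_dim=8, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + onto_dim, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.fc = nn.Linear(hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, tokens, onto_feats):
        # tokens: (B, T) word ids; onto_feats: (B, T, onto_dim) knowledge-base features
        x = torch.cat([self.emb(tokens), onto_feats], dim=-1)
        out, _ = self.lstm(x)
        return self.fc(out)

    def loss(self, tokens, onto_feats, tags):
        return -self.crf(self._emissions(tokens, onto_feats), tags)

    def predict(self, tokens, onto_feats):
        return self.crf.decode(self._emissions(tokens, onto_feats))
```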