• Title/Summary/Keyword: Multi-level classification

Search Result 161

The Design Of Microarray Classification System Using Combination Of Significant Gene Selection Method Based On Normalization. (표준화 기반 유의한 유전자 선택 방법 조합을 이용한 마이크로어레이 분류 시스템 설계)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.12
    • /
    • pp.2259-2264
    • /
    • 2008
  • Significant genes are genes whose expression levels characterize a specific experimental condition. Genes whose expression levels differ significantly between groups are highly informative about the phenomenon under study. In this paper, the data are first normalized with the most widely used of the normalization methods proposed to date, and informative genes are then detected with the similarity-scale combination method proposed here. The performance of each normalization method is compared and analyzed with a multi-layer perceptron neural network. Classifying the 200 genes selected by the combination of PC (Pearson correlation coefficient) and ED (Euclidean distance coefficient) after Lowess normalization with a multi-layer perceptron classifier yielded an improved classification performance of 98.84%.
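The gene-ranking step described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the toy expression values, the rank-sum rule for combining the two similarity scales, and the use of the class labels as the ideal marker pattern are all assumptions.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0 or sy == 0:          # constant vector: undefined, treat as 0
        return 0.0
    return cov / (sx * sy)

def euclidean(x, y):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def rank_genes(expression, labels, top_k=2):
    """Score each gene against an ideal marker pattern (here: the class
    labels themselves) and combine both similarity scales by rank sum."""
    ideal = [float(l) for l in labels]
    pc = {g: abs(pearson(vals, ideal)) for g, vals in expression.items()}
    ed = {g: euclidean(vals, ideal) for g, vals in expression.items()}
    # higher |PC| is better, lower ED is better; add the two rank positions
    pc_rank = {g: r for r, g in enumerate(sorted(pc, key=pc.get, reverse=True))}
    ed_rank = {g: r for r, g in enumerate(sorted(ed, key=ed.get))}
    combined = sorted(expression, key=lambda g: pc_rank[g] + ed_rank[g])
    return combined[:top_k]

genes = {
    "g1": [0.9, 1.1, 0.1, 0.0],   # tracks the class labels closely
    "g2": [0.5, 0.5, 0.5, 0.5],   # uninformative
    "g3": [1.0, 0.8, 0.2, 0.1],
}
labels = [1, 1, 0, 0]
print(rank_genes(genes, labels))  # informative genes rank first
```

In the paper the selected genes then feed a multi-layer perceptron classifier; here the combination rule simply sums the ranks, which keeps the two scales comparable without normalizing their raw values.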

Weakly-supervised Semantic Segmentation using Exclusive Multi-Classifier Deep Learning Model (독점 멀티 분류기의 심층 학습 모델을 사용한 약지도 시맨틱 분할)

  • Choi, Hyeon-Joon;Kang, Dong-Joong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.6
    • /
    • pp.227-233
    • /
    • 2019
  • Recently, along with the development of deep learning techniques, neural networks have achieved success in the computer vision field. Convolutional neural networks have shown outstanding performance not only on simple image classification tasks but also on difficult tasks such as object segmentation and detection. However, many such deep learning models are based on supervised learning, which requires annotation labels richer than image-level labels. In particular, image semantic segmentation models require pixel-level annotations for training, which are very costly to obtain. To solve this problem, this paper proposes a weakly-supervised semantic segmentation method that requires only image-level labels to train the network. Existing weakly-supervised learning methods are limited to detecting only specific areas of an object. In this paper, by contrast, we use an exclusive multi-classifier deep learning architecture so that the model recognizes more distinct parts of objects. The proposed method is evaluated on the VOC 2012 validation dataset.

The Implementation of Multi-Port UTOPIA Level2 Controller for Interworking ATM Interface Module and MPLS Interface Module (MPLS모듈과 ATM모듈과의 Cell Mode 인터페이스를 위한 Multi-Port지원 UTOPIA-L2 Controller구현)

  • 김광옥;최병철;박완기
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.11C
    • /
    • pp.1164-1170
    • /
    • 2002
  • In the ACE2000 MPLS system, the MPLS Interface Module (MIM) is composed of an ATM interface module and an HFMA performing packet forwarding. In the MIM, the HFMA RSAR receives cells from the physical layer and reassembles them, and the IP lookup controller then performs packet forwarding after packet classification. The forwarded packet is segmented into cells in the HFMA TSAR and transferred to the ALMA for transmission to the ATM cell switch. When the MIM uses an ATM interface module, it connects the ALMA directly to a PHY layer through the UTOPIA Level 2 interface, so the ALMA operates in master mode. The HFMA TSAR in the MIM also operates in master mode. Therefore, a slave-mode UTOPIA-L2 controller is required to interface the ALMA with the HFMA TSAR. In this paper, we implement the architecture and cell-control mechanism of a UTOPIA-L2 controller supporting multiple ports.

Convolutional Neural Network with Expert Knowledge for Hyperspectral Remote Sensing Imagery Classification

  • Wu, Chunming;Wang, Meng;Gao, Lang;Song, Weijing;Tian, Tian;Choo, Kim-Kwang Raymond
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.3917-3941
    • /
    • 2019
  • The recent interest in artificial intelligence and machine learning has partly contributed to interest in using such approaches for hyperspectral remote sensing (HRS) imagery classification, as evidenced by the increasing number of deep frameworks with deep convolutional neural network (CNN) structures proposed in the literature. In these approaches, obtaining high-quality deep features with a CNN is not always easy or efficient because of the complex data distribution and the limited sample size. In this paper, conventional handcrafted multi-features based on expert knowledge are introduced as the input of a specially designed CNN to improve the pixel description and classification performance on HRS imagery. Introducing these handcrafted features can reduce the complexity of the original HRS data and reduce the sample requirements by eliminating redundant information and improving the starting point of deep feature training. It also provides concise and effective features that are not readily available from direct training with a CNN. Evaluations on three public HRS datasets demonstrate the utility of the proposed method for HRS classification.
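The core idea — feeding handcrafted descriptors alongside the raw spectrum as the CNN input — can be sketched minimally. The abstract does not enumerate the paper's expert-knowledge features, so the neighbourhood mean and variance used here are placeholder examples only.

```python
def augmented_pixel_features(spectrum, window_spectra):
    """Concatenate a pixel's raw spectral bands with simple handcrafted
    statistics of its spatial neighbourhood to form an augmented input
    vector (a stand-in for the expert-knowledge features in the paper)."""
    n = len(window_spectra)
    bands = len(spectrum)
    # per-band neighbourhood mean: a crude spatial smoothing descriptor
    means = [sum(s[b] for s in window_spectra) / n for b in range(bands)]
    # overall neighbourhood variance: a crude local-contrast descriptor
    flat = [v for s in window_spectra for v in s]
    mu = sum(flat) / len(flat)
    var = sum((v - mu) ** 2 for v in flat) / len(flat)
    return spectrum + means + [var]

center = [0.2, 0.5, 0.7]                      # 3 toy spectral bands
window = [[0.2, 0.5, 0.7], [0.3, 0.4, 0.6], [0.1, 0.6, 0.8]]
features = augmented_pixel_features(center, window)
print(len(features))   # 3 bands + 3 neighbourhood means + 1 variance = 7
```

The augmented vector would then be reshaped into whatever input layout the specially designed CNN expects; the point is only that handcrafted features shrink the description the network must learn from scratch.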

A Multi-Layer Perceptron for Color Index based Vegetation Segmentation (색상지수 기반의 식물분할을 위한 다층퍼셉트론 신경망)

  • Lee, Moon-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.43 no.1
    • /
    • pp.16-25
    • /
    • 2020
  • Vegetation segmentation in a field color image is the process of distinguishing vegetation objects of interest, such as crops and weeds, from a background of soil and/or other residues. The performance of this process is crucial in automatic precision agriculture, which includes weed control and crop status monitoring. To facilitate segmentation, color indices have predominantly been used to transform the color image into a gray-scale image; a thresholding technique such as the Otsu method is then applied to distinguish vegetation from the background. An obvious demerit of threshold-based segmentation is that each pixel is classified as vegetation or background solely by its own color feature, without taking the color features of neighboring pixels into account. This paper presents a new pixel-based segmentation method that employs a multi-layer perceptron neural network to classify the gray-scale image into vegetation and non-vegetation pixels. The network's input data for each pixel are the two-dimensional gray-level values surrounding that pixel. To generate a gray-scale image from a raw RGB color image, the well-known Excess Green minus Excess Red index was used. Experimental results on 80 field images of four vegetation species demonstrate the superiority of the neural network over existing threshold-based segmentation methods in terms of accuracy, precision, recall, and harmonic mean.
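The Excess Green minus Excess Red index mentioned above has a standard definition (ExG = 2g − r − b and ExR = 1.4r − g on chromaticity-normalised channels); a minimal sketch of the gray-scale transform, assuming that standard form:

```python
def exg_minus_exr(R, G, B):
    """Excess Green minus Excess Red colour index for one RGB pixel.
    Channels are chromaticity-normalised first, so the index is largely
    insensitive to overall brightness."""
    total = R + G + B
    if total == 0:
        return 0.0
    r, g, b = R / total, G / total, B / total
    exg = 2 * g - r - b          # Excess Green
    exr = 1.4 * r - g            # Excess Red
    return exg - exr

# a green (vegetation-like) pixel scores strongly positive,
# a brown (soil-like) pixel near zero or negative
print(exg_minus_exr(40, 180, 50))
print(exg_minus_exr(120, 90, 60))
```

Applying this per pixel yields the gray-scale image that the paper's multi-layer perceptron then classifies from each pixel's surrounding gray-level window, rather than thresholding it directly.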

Accuracy Assessment of Forest Degradation Detection in Semantic Segmentation based Deep Learning Models with Time-series Satellite Imagery

  • Woo-Dam Sim;Jung-Soo Lee
    • Journal of Forest and Environmental Science
    • /
    • v.40 no.1
    • /
    • pp.15-23
    • /
    • 2024
  • This research aimed to assess the possibility of detecting forest degradation using time-series satellite imagery and three deep learning-based change detection techniques. The dataset for the deep learning models comprised two sets: one based on surface reflectance (SR) spectral information from satellite imagery, and one combining it with texture information (GLCM; Gray-Level Co-occurrence Matrix) and terrain information. The deep learning models employed for land cover change detection were image differencing with the Unet semantic segmentation model, a multi-encoder Unet model, and a multi-encoder Unet++ model. The study found no significant difference in accuracy between the deep learning models for forest degradation detection; training and validation accuracies were approximately 89% and 92%, respectively. Among the three models, the multi-encoder Unet model showed the most efficient analysis time with comparable accuracy. Moreover, models that incorporated texture and terrain information in addition to spectral information achieved higher classification accuracy than models using spectral information alone. Overall, the accuracy of forest degradation extraction was outstanding, reaching 98%.

Hierarchical Land Cover Classification using IKONOS and AIRSAR Images (IKONOS와 AIRSAR 영상을 이용한 계층적 토지 피복 분류)

  • Yeom, Jun-Ho;Lee, Jeong-Ho;Kim, Duk-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.4
    • /
    • pp.435-444
    • /
    • 2011
  • A land cover map derived from the spectral features of high-resolution optical images suffers from low spectral resolution and heterogeneity within the same land cover class. For this reason, pixels belonging to the same land cover can be scattered across several classes, especially in vegetated areas. To overcome these problems, detailed vegetation classification is applied to integrated optical satellite and SAR (Synthetic Aperture Radar) data within the vegetation area obtained from a pre-classification of the optical image. Both the pre-classification and the vegetation classification were performed with the MLC (Maximum Likelihood Classification) method. The hierarchical land cover classification is proposed as the fusion of the detailed vegetation classes with the non-vegetation classes of the pre-classification. We verify that the proposed method achieves higher accuracy not only than general SAR-data and GLCM (Gray Level Co-occurrence Matrix) texture integrated methods, but also than a hierarchical GLCM-integrated method. In particular, the proposed method is highly accurate for both vegetation and non-vegetation classes.

Bearing Multi-Faults Detection of an Induction Motor using Acoustic Emission Signals and Texture Analysis (음향 방출 신호와 질감 분석을 이용한 유도전동기의 베어링 복합 결함 검출)

  • Jang, Won-Chul;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.4
    • /
    • pp.55-62
    • /
    • 2014
  • This paper proposes a fault detection method that utilizes images converted from acoustic emission signals, together with texture analysis, to identify the multiple bearing faults that frequently occur in induction motors. The proposed method extracts three texture features from the converted multi-fault images: entropy, homogeneity, and energy. These features are then used as inputs to a fuzzy-ARTMAP to identify each multi-fault, namely outer-inner, inner-roller, and outer-roller. Experimental results over ten trials indicate that the proposed method achieves 100% accuracy in fault classification.
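The three texture features named above are standard statistics of a gray-level co-occurrence matrix (GLCM). A small self-contained sketch — the pixel offset, gray-level count, and toy patches are illustrative choices, not the paper's settings:

```python
import math

def glcm(image, dx=1, dy=0, levels=4):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def texture_features(p):
    """Entropy, homogeneity and energy of a co-occurrence matrix p."""
    entropy = -sum(v * math.log2(v) for row in p for v in row if v > 0)
    homog = sum(v / (1 + abs(i - j))
                for i, row in enumerate(p) for j, v in enumerate(row))
    energy = sum(v * v for row in p for v in row)
    return entropy, homog, energy

uniform = [[1, 1], [1, 1]]   # perfectly homogeneous patch
noisy = [[0, 3], [2, 1]]     # highly varied patch
print(texture_features(glcm(uniform)))   # low entropy, high homogeneity/energy
print(texture_features(glcm(noisy)))
```

A uniform patch concentrates all co-occurrence mass in one cell (zero entropy, maximal energy), while a varied patch spreads it out — exactly the contrast that lets these three numbers discriminate fault signatures when fed to the fuzzy-ARTMAP.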

Radionuclide identification based on energy-weighted algorithm and machine learning applied to a multi-array plastic scintillator

  • Hyun Cheol Lee ;Bon Tack Koo ;Ju Young Jeon ;Bo-Wi Cheon ;Do Hyeon Yoo ;Heejun Chung;Chul Hee Min
    • Nuclear Engineering and Technology
    • /
    • v.55 no.10
    • /
    • pp.3907-3912
    • /
    • 2023
  • Radiation portal monitors (RPMs) installed at airports and harbors to prevent illicit trafficking of radioactive materials generally use large plastic scintillators. However, their energy resolution is poor, making radionuclide identification nearly unfeasible. In this study, to improve isotope identification, an RPM system based on a multi-array plastic scintillator and a convolutional neural network (CNN) was evaluated by measuring the spectra of radioactive sources. A multi-array plastic scintillator comprising an assembly of 14 hexagonal scintillators was fabricated within an area of 50 × 100 cm². The energy spectra of 137Cs, 60Co, 226Ra, and 40K (KCl) were measured at speeds of 10-30 km/h, and an energy-weighted algorithm was applied. For the CNN, 700 and 300 spectral images were used as training and testing images, respectively. Compared with a conventional plastic scintillator, the multi-array detector showed a high collection probability for the optical photons generated inside it. A Compton maximum peak was observed for the four moving radiation sources, and the CNN-based classification results showed that at least 70% were discriminated. Spectral fluctuations were higher while the sources were moving than while they were stationary; nevertheless, the machine learning results demonstrated that a considerably high level of nuclide discrimination was possible under source movement conditions.

A Study on the Algorithm for Estimating Rainfall According to the Rainfall Type Using Geostationary Meteorological Satellite Data (정지궤도 기상위성 자료를 활용한 강우유형별 강우량 추정연구)

  • Lee Eun-Joo;Suh Myoung-Seok
    • Proceedings of the KSRS Conference
    • /
    • 2006.03a
    • /
    • pp.117-120
    • /
    • 2006
  • Heavy rainfall events occur in exceedingly various forms through a complex interaction of synoptic and dynamic factors and atmospheric stability. As a result, quantitative precipitation forecasting is extraordinarily difficult, because heavy rainfall develops locally over a short time and has strong spatial and temporal variation. GOES-9 imagery provides continuous observation of clouds in time and space at the appropriate resolution. In this study, a power-law-type algorithm (KAE: Korea Auto Estimator) for estimating rainfall by rainfall type was developed using geostationary meteorological satellite data. GOES-9 imagery and automatic weather station (AWS) measurements were used to classify rainfall types and to develop the estimation algorithm. Subjective and objective classification of rainfall types from the GOES-9 and AWS data showed that most heavy rainfall is produced by the convective and mixed types. Statistical analysis between AWS rainfall and GOES-IR data by rainfall type showed that estimating rainfall amount from satellite data is possible only for convective and mixed-type rainfall. The quality of the KAE in estimating rainfall amount and rainfall area is similar or slightly superior to the National Environmental Satellite Data and Information Service's auto-estimator (NESDIS AE), especially for multi-cell convective and mixed-type heavy rainfalls. High estimation skill is also found during the mature as well as the decaying stages of a rainfall system.
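A power-law estimator of the kind described can be sketched as below. The coefficients are placeholders for illustration only — the calibrated KAE parameters are not given in the abstract — and the per-type lookup simply mirrors the finding that satellite estimation is feasible only for convective and mixed-type rainfall.

```python
# Illustrative placeholder coefficients (a, b) per rainfall type; the
# calibrated KAE values are not stated in the abstract.
POWER_LAW = {"convective": (5.0e8, -3.5), "mixed": (2.0e8, -3.5)}

def rain_rate(tb, rain_type):
    """Power-law rain-rate estimate R = a * Tb**b (mm/h) from cloud-top
    brightness temperature Tb in kelvin, applied only to the rainfall
    types for which a satellite-based estimate is feasible."""
    if rain_type not in POWER_LAW:
        return 0.0               # stratiform and other types: no estimate
    a, b = POWER_LAW[rain_type]
    return a * tb ** b

# colder (more convective) cloud tops yield heavier estimated rain
print(rain_rate(200.0, "convective"))
print(rain_rate(230.0, "convective"))
```

With a negative exponent, lower brightness temperature (colder, taller cloud tops) maps to a higher rain rate, which is the qualitative behaviour both the KAE and the NESDIS AE rely on.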
