• Title/Summary/Keyword: Cross-Entropy

Search results: 116

Missing Pattern Matching of Rough Set Based on Attribute Variations Minimization in Rough Set (속성 변동 최소화에 의한 러프집합 누락 패턴 부합)

  • Lee, Young-Cheon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.10 no.6 / pp.683-690 / 2015
  • In rough set theory, missing attribute values cause several problems, such as difficulties in reduct and core estimation; moreover, they yield no discernible pattern for decision tree construction. Existing remedies include substitution of typical attribute values, assignment of every possible value, event covering, C4.5, and the special LEMS algorithm. However, these mainly substitute frequently appearing or common attribute values, so when important attribute values are missing during pattern matching, the derived decision rules suffer high information loss. In particular, cross-validation of the decision rules is difficult to implement. In this paper we suggest a new method that substitutes missing attribute values with values of high information gain, computed from the entropy variation among the given attributes, thereby completing the information table. The suggested method is validated by conducting the same rough set analysis on the incomplete information system using the ROSE software.
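
The abstract does not give the paper's exact substitution rule; below is a minimal Python sketch, under stated assumptions, of the core idea: fill a missing attribute value with the candidate that minimizes the conditional entropy of the decision attribute (i.e., maximizes information gain). The helper names are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of decision-attribute values."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def fill_missing(records, attr, decision, missing=None):
    """Replace missing values of `attr` with the candidate whose conditional
    entropy of the decision attribute is lowest, i.e. the substitution that
    loses the least information (hypothetical helper; the paper's actual
    procedure also considers reduct/core stability)."""
    candidates = {r[attr] for r in records if r[attr] is not missing}
    best, best_h = None, float("inf")
    for v in candidates:
        filled = [r[attr] if r[attr] is not missing else v for r in records]
        # conditional entropy H(decision | attr) after substituting v
        h = 0.0
        for a in set(filled):
            group = [r[decision] for r, f in zip(records, filled) if f == a]
            h += len(group) / len(records) * entropy(group)
        if h < best_h:
            best, best_h = v, h
    for r in records:
        if r[attr] is missing:
            r[attr] = best
    return records

table = [{"a": "x", "d": 1}, {"a": None, "d": 1}, {"a": "y", "d": 0}]
fill_missing(table, "a", "d")  # the None entry becomes "x" (zero added entropy)
```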

Discharge Computation in Natural Rivers Using Chiu's Velocity Distribution and Estimation of Maximum Velocity (자연하천에서 Chiu의 유속분포와 최대유속 추정을 이용한 유량산정)

  • Kim, Chang-Wan;Lee, Min-Ho;Yoo, Dong-Hoon;Jung, Sung-Won
    • Journal of Korea Water Resources Association / v.41 no.6 / pp.575-585 / 2008
  • Accurate and highly reliable streamflow data are essential for water resources planning, evaluation, and management, as well as for the design of hydraulic structures. The discharge computation method proposed in this research uses Chiu's velocity distribution together with an estimate of the maximum velocity. The method yields channel discharges in acceptable agreement with those of the existing velocity-area method. The velocity-area method requires observing velocities at every specified point and vertical using a current meter such as the Price-AA; the proposed method, by contrast, does not require observing all of those point velocities. However, it cannot be applied to very complex or strongly asymmetric channel cross-sections, because Chiu's entropy-based velocity distribution may deviate considerably from that of natural rivers.
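
For reference, Chiu's entropy-based velocity distribution and the resulting discharge relation take roughly the following standard form (a sketch of the commonly cited formulation; the paper's exact parameterization is not given in the abstract):

```latex
u = \frac{u_{\max}}{M}\,\ln\!\left[1 + \left(e^{M}-1\right)\frac{\xi-\xi_{0}}{\xi_{\max}-\xi_{0}}\right],
\qquad
\phi(M) = \frac{\bar{u}}{u_{\max}} = \frac{e^{M}}{e^{M}-1} - \frac{1}{M},
\qquad
Q = \phi(M)\,u_{\max}\,A
```

Here M is the entropy parameter, \xi the isovel coordinate at which the velocity u occurs, \bar{u} the cross-sectional mean velocity, and A the flow area; estimating u_{\max} then yields the discharge Q without sampling every vertical.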

A Study on the Species Distribution Modeling using National Ecosystem Survey Data (전국자연환경조사 자료를 이용한 종분포모형 연구)

  • Kim, Jiyeon;Seo, Changwan;Kwon, Hyuksoo;Ryu, Jieun;Kim, Myungjin
    • Journal of Environmental Impact Assessment / v.21 no.4 / pp.593-607 / 2012
  • The Ministry of Environment has conducted the 'National Ecosystem Survey' since 1986. It is carried out nationwide every ten years and is the largest survey project in Korea. The second and third surveys produced a GIS-based inventory of species, but the survey methods differed among the surveys, and few studies in Korea have modeled species distributions from national survey data. The purposes of this study are to test species distribution models in order to find the most suitable modeling method for the National Ecosystem Survey data, and to investigate how the modeling results vary with survey method and taxonomic group. Occurrence data for nine species were extracted from the National Ecosystem Survey by taxonomic group (plant, mammal, and bird): the plants are Korean winter hazel (Corylopsis coreana), Iris odaesanensis, and Berchemia (Berchemia berchemiaefolia); the mammals are Korean Goral (Nemorhaedus goral), Marten (Martes flavigula koreana), and Leopard cat (Felis bengalensis); the birds are Black Woodpecker (Dryocopus martius), Eagle Owl (Bubo bubo), and Common Buzzard (Buteo buteo). Environmental variables comprised climate, topography, soil, and vegetation structure. Two modeling methods (GAM and Maxent) were tested across the nine species, and predictive distribution maps of the target species were produced. The results were as follows. First, Maxent showed five-fold cross-validated AUC values similar to those of GAM; because the National Ecosystem Survey provides presence-only data, Maxent is the more suitable model for these data (see the sketch below). Second, results from the second and third surveys sometimes differed because of their different survey methods, so the two datasets should be combined to produce reasonable results. Lastly, the predicted distribution patterns differed by taxonomic group. These points should be considered when developing a species distribution model from the National Ecosystem Survey and applying it to nationwide biodiversity research.
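
As an illustration of the evaluation above, a minimal sketch of five-fold cross-validated AUC (scikit-learn; a logistic regression on synthetic presence/background data stands in for Maxent, and all variable names are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: environmental variables (climate, topography, soil, vegetation structure)
# y: 1 = recorded occurrence, 0 = background point (presence-only setting)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))  # hypothetical predictor matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression(max_iter=1000)  # stand-in for Maxent/GAM
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(auc.mean())  # mean five-fold cross-validated AUC
```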

Deep Learning: High-quality Imaging through Multicore Fiber

  • Wu, Liqing;Zhao, Jun;Zhang, Minghai;Zhang, Yanzhu;Wang, Xiaoyan;Chen, Ziyang;Pu, Jixiong
    • Current Optics and Photonics / v.4 no.4 / pp.286-292 / 2020
  • Imaging through multicore fiber (MCF) is of great significance in the biomedical domain. Although several techniques have been developed to image an object from a signal passing through an MCF, these methods depend strongly on the surroundings, such as vibration and temperature fluctuations in the fiber's environment. In this paper, we apply a powerful technique, deep learning, to reconstruct the phase image transmitted through an MCF in which each core is multimode. We employ binary cross-entropy as the loss function of a convolutional neural network (CNN) with an improved U-net structure. High-quality reconstruction of input objects encoded by spatial light modulation (SLM) can be realized from the intensity speckle patterns, which contain the information about the objects. Moreover, we study the effect of MCF length on image recovery and show that the shorter the fiber, the better the imaging quality. Based on our findings, MCF may find applications in fields such as endoscopic imaging and optical communication.
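
A minimal PyTorch sketch of the loss described above: binary cross-entropy applied pixel-wise between the reconstructed and target phase images. The tiny two-layer network is a placeholder, not the paper's improved U-net, and the shapes are assumptions.

```python
import torch
import torch.nn as nn

# Placeholder encoder-decoder standing in for the improved U-net;
# input: intensity speckle pattern, output: per-pixel logits of the phase image.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

speckle = torch.rand(8, 1, 64, 64)          # hypothetical batch of speckle images
phase = torch.rand(8, 1, 64, 64).round()    # target phase image, normalized to [0, 1]

loss_fn = nn.BCEWithLogitsLoss()            # numerically stable sigmoid + BCE
loss = loss_fn(net(speckle), phase)
loss.backward()
```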

A study on the performance improvement of learning based on consistency regularization and unlabeled data augmentation (일치성규칙과 목표값이 없는 데이터 증대를 이용하는 학습의 성능 향상 방법에 관한 연구)

  • Kim, Hyunwoong;Seok, Kyungha
    • The Korean Journal of Applied Statistics / v.34 no.2 / pp.167-175 / 2021
  • Semi-supervised learning uses both labeled and unlabeled data. Recently, consistency regularization has become very popular in semi-supervised learning, and unsupervised data augmentation (UDA), which augments unlabeled data, is also based on it. In UDA, the Kullback-Leibler divergence is used as the loss for unlabeled data and cross-entropy as the loss for labeled data, together with techniques such as training signal annealing (TSA) and confidence-based masking to improve performance. In this study, we propose using the Jensen-Shannon divergence instead of the Kullback-Leibler divergence, applying reverse TSA, and dropping confidence-based masking. Experiments show that the proposed technique outperforms UDA.
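
A minimal sketch of the proposed consistency term (PyTorch; the tensors are hypothetical model outputs): the Jensen-Shannon divergence, a symmetric and bounded alternative to the KL divergence used by standard UDA.

```python
import torch
import torch.nn.functional as F

def kl(p, q, eps=1e-8):
    """KL(p || q) for batches of probability vectors."""
    return (p * ((p + eps) / (q + eps)).log()).sum(dim=1)

def js(p, q):
    """Jensen-Shannon divergence: JS(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m)."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

logits_orig = torch.randn(4, 10)            # unlabeled example
logits_aug = torch.randn(4, 10)             # its augmented version
p = F.softmax(logits_orig, dim=1).detach()  # fixed target, as in UDA
q = F.softmax(logits_aug, dim=1)
consistency_loss = js(p, q).mean()          # replaces KL(p || q) of standard UDA
```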

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young;You, Eun-Kyung;Kim, Hyeock-Jin
    • Journal of Digital Convergence / v.20 no.4 / pp.389-396 / 2022
  • Deep learning is based on the perceptron and is currently used in various fields such as image recognition, speech recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons in a neural network varies greatly among researchers. This study analyzed the learning characteristics of the widely used SGD, momentum, AdaGrad, RMSProp, and Adam methods as a function of the number of neurons. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was used as the activation function, cross-entropy error (CEE) as the loss function, and MNIST as the experimental dataset. The results suggest that 100-300 neurons, the Adam algorithm, and 200 training iterations are the most efficient settings for deep learning. These findings provide a reference for choosing an algorithm and the number of neurons when new training data are given.
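
A minimal PyTorch sketch of the setup described above, with three ReLU hidden layers, cross-entropy loss, and the five optimizers under comparison (the learning rates and the random stand-in batch are assumptions, not the paper's values):

```python
import torch
import torch.nn as nn

def make_net(n_neurons):
    """MLP with one input, three hidden, and one output layer, as above."""
    return nn.Sequential(
        nn.Linear(784, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, 10),  # MNIST: 10 classes
    )

optimizers = {  # the five methods compared in the study
    "SGD": lambda p: torch.optim.SGD(p, lr=0.01),
    "Momentum": lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "AdaGrad": lambda p: torch.optim.Adagrad(p, lr=0.01),
    "RMSProp": lambda p: torch.optim.RMSprop(p, lr=0.001),
    "Adam": lambda p: torch.optim.Adam(p, lr=0.001),
}

loss_fn = nn.CrossEntropyLoss()  # cross-entropy error (CEE)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))  # stand-in for MNIST
for name, make_opt in optimizers.items():
    net = make_net(200)          # within the 100-300 range found efficient
    opt = make_opt(net.parameters())
    for _ in range(200):         # 200 iterations, as concluded above
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
    print(name, float(loss))
```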

Skin Disease Classification Technique Based on Convolutional Neural Network Using Deep Metric Learning (Deep Metric Learning을 활용한 합성곱 신경망 기반의 피부질환 분류 기술)

  • Kim, Kang Min;Kim, Pan-Koo;Chun, Chanjun
    • Smart Media Journal / v.10 no.4 / pp.45-54 / 2021
  • The skin is the body's first line of defense against external infection. When a skin disease strikes, this protective role is compromised, necessitating quick diagnosis and treatment. Recently, as artificial intelligence has advanced, it has been applied in a variety of sectors, including dermatology, to reduce misdiagnosis rates and speed up treatment. Whereas previous studies have diagnosed skin diseases of low incidence, this paper proposes a method to classify common illnesses such as warts and corns using a convolutional neural network. The dataset consists of 3 classes and 2,515 images, but suffers from a shortage of training data and class imbalance. We trained the model with a deep metric loss function and with a cross-entropy loss function, and compared the two in terms of precision, recall, F1-score, and accuracy; the former performed better.
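
The abstract does not name the specific deep metric loss; a triplet-margin loss is one common choice. A minimal PyTorch sketch of it alongside the cross-entropy baseline (the embedder, shapes, and labels are illustrative assumptions):

```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))  # toy embedder

# Triplet loss: pull same-class (anchor, positive) pairs together,
# push different-class (anchor, negative) pairs apart by a margin.
metric_loss = nn.TripletMarginLoss(margin=1.0)
anchor = embed(torch.randn(16, 3, 64, 64))    # e.g., a wart image
positive = embed(torch.randn(16, 3, 64, 64))  # another wart image
negative = embed(torch.randn(16, 3, 64, 64))  # e.g., a corn image
loss = metric_loss(anchor, positive, negative)

# Cross-entropy baseline over the 3 skin-disease classes
classifier = nn.Linear(128, 3)
ce_loss = nn.CrossEntropyLoss()(classifier(anchor), torch.randint(0, 3, (16,)))
```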

Development of Semi-Supervised Deep Domain Adaptation Based Face Recognition Using Only a Single Training Sample (단일 훈련 샘플만을 활용하는 준-지도학습 심층 도메인 적응 기반 얼굴인식 기술 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1375-1385 / 2022
  • In this paper, we propose a semi-supervised domain adaptation solution for practical face recognition (FR) scenarios in which only a single face image per target identity (to be recognized) is available in the training phase. The main goal of the proposed method is to reduce the discrepancy between the target- and source-domain face images, which ultimately improves FR performance. The method is based on the Domain Adaptation Network (DAN), which uses a maximum mean discrepancy (MMD) loss to reduce the discrepancy between domains. To train more effectively, we develop a novel loss-weighting strategy in which the MMD loss and the cross-entropy loss are weighted differently according to the training epoch. The weighting focuses on the source domain in the initial phase so that the network learns facial features such as the eyes, nose, and mouth; after this initial phase, the resulting features are used to train the network on target-domain images. To evaluate the effectiveness of the proposed method, FR performance was compared against a pretrained model trained only on CASIA-WebFace (source images) and a fine-tuned model trained only on the FERET gallery (target images) under the same FR scenarios. The experimental results showed that the proposed semi-supervised domain adaptation improves performance by 24.78% over the pretrained model and by 28.42% over the fine-tuned model, and outperforms other state-of-the-art domain adaptation approaches by 9.41%.
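
A minimal sketch, under stated assumptions, of an RBF-kernel MMD loss combined with cross-entropy under an epoch-dependent weight, in the spirit of the strategy described above (the paper's exact schedule and kernel are not given in the abstract, so both are hypothetical):

```python
import torch

def mmd_rbf(xs, xt, sigma=1.0):
    """Biased estimate of the maximum mean discrepancy between source and
    target feature batches under a Gaussian (RBF) kernel."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(xs, xs).mean() + k(xt, xt).mean() - 2 * k(xs, xt).mean()

def combined_loss(ce, mmd, epoch, total_epochs):
    """Hypothetical schedule: weight cross-entropy (source supervision)
    heavily early on, then shift weight toward the MMD alignment term."""
    w = epoch / total_epochs          # grows from 0 to 1 over training
    return (1 - w) * ce + w * mmd

src_feat, tgt_feat = torch.randn(32, 512), torch.randn(32, 512)
ce = torch.tensor(1.2)                # stand-in cross-entropy value
loss = combined_loss(ce, mmd_rbf(src_feat, tgt_feat), epoch=5, total_epochs=50)
```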

Ensemble-based deep learning for autonomous bridge component and damage segmentation leveraging Nested Reg-UNet

  • Abhishek Subedi;Wen Tang;Tarutal Ghosh Mondal;Rih-Teng Wu;Mohammad R. Jahanshahi
    • Smart Structures and Systems / v.31 no.4 / pp.335-349 / 2023
  • Bridges constantly undergo deterioration and damage, the most common being concrete damage and exposed rebar. Periodic inspection of bridges to identify damage can aid in its quick remediation. Likewise, identifying components provides context for damage assessment and helps gauge a bridge's state of interaction with its surroundings. Current inspection techniques rely on manual site visits, which can be time-consuming and costly. More recently, robotic inspection assisted by autonomous data analytics based on Computer Vision (CV) and Artificial Intelligence (AI) has been viewed as a suitable alternative to manual inspection because of its efficiency and accuracy. To aid research in this direction, this study performs a comparative assessment of different architectures, loss functions, and ensembling strategies for the autonomous segmentation of bridge components and damage. The experiments lead to several interesting discoveries. The Nested Reg-UNet architecture, built by combining a Nested UNet style dense configuration with a pretrained RegNet encoder, outperforms five other state-of-the-art architectures in both the damage and component segmentation tasks. In terms of the mean Intersection over Union (mIoU) metric, it improves on the state-of-the-art UNet architecture by 2.86% on damage segmentation and 1.66% on component segmentation. Furthermore, incorporating the Lovasz-Softmax loss function to counter class imbalance boosts performance by 3.44% in the component segmentation task over the most commonly employed alternative, weighted Cross Entropy (wCE). Finally, weighted softmax ensembling proves quite effective when used in conjunction with the Nested Reg-UNet architecture, providing mIoU improvements of 0.74% in component segmentation and 1.14% in damage segmentation over a single-architecture baseline. Overall, the best mIoU of 92.50% for component segmentation and 84.19% for damage segmentation validates the feasibility of these techniques for autonomous bridge component and damage segmentation from RGB images.
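
A minimal sketch of weighted softmax ensembling as used above (PyTorch; the weights, model count, and shapes are illustrative assumptions): per-model softmax probability maps are combined with scalar weights before the per-pixel argmax.

```python
import torch
import torch.nn.functional as F

# Logits from three trained segmentation models on the same image batch:
# shape (batch, classes, height, width).
logits = [torch.randn(2, 4, 128, 128) for _ in range(3)]
weights = torch.tensor([0.5, 0.3, 0.2])  # e.g., tuned on a validation set

# Weighted average of per-model softmax probabilities, then per-pixel argmax.
probs = sum(w * F.softmax(l, dim=1) for w, l in zip(weights, logits))
pred = probs.argmax(dim=1)               # final segmentation mask
```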

Performance Evaluation of ResNet-based Pneumonia Detection Model with the Small Number of Layers Using Chest X-ray Images (흉부 X선 영상을 이용한 작은 층수 ResNet 기반 폐렴 진단 모델의 성능 평가)

  • Youngeun Choi;Seungwan Lee
    • Journal of Radiological Science and Technology / v.46 no.4 / pp.277-285 / 2023
  • In this study, pneumonia identification networks with a small number of layers were constructed using chest X-ray images. The networks had similar numbers of trainable parameters, and the performance of the trained models was quantitatively evaluated as the network architecture was modified. A total of 6 networks were constructed: a convolutional neural network (CNN), VGGNet, GoogleNet, a residual network (ResNet) with identity blocks, a ResNet with bottleneck blocks, and a ResNet with both identity and bottleneck blocks. The trainable parameters of the 6 networks were kept in the range 273,921-294,817 by adjusting the output channels of the convolution layers. Training used the binary cross-entropy (BCE) loss function, a sigmoid activation function, the adaptive moment estimation (Adam) optimizer, and 100 epochs. The performance of the trained models was evaluated in terms of training time, accuracy, precision, recall, specificity, and F1-score. The results showed that models with a small number of layers can precisely detect pneumonia in chest X-ray images. In particular, the overall quantitative performance of the ResNet-based models was above 0.9, similar or superior to that of the CNN, VGGNet, and GoogleNet models, and the choice of residual blocks affected their performance. Therefore, this study demonstrates that networks with a small number of layers are suitable for detecting pneumonia from chest X-ray images, and that ResNet-based models can be optimized by applying appropriate residual blocks.
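
For reference, minimal PyTorch sketches of the two residual block types compared above (channel counts and layer details are illustrative assumptions, not the paper's exact configuration):

```python
import torch.nn as nn

class IdentityBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (same channel count)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))

class BottleneckBlock(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand: fewer parameters at a given depth."""
    def __init__(self, ch, mid):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(),
            nn.Conv2d(mid, ch, 1), nn.BatchNorm2d(ch),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))
```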