• Title/Summary/Keyword: Cross-Entropy


Text Categorization Using TextRank Algorithm (TextRank 알고리즘을 이용한 문서 범주화)

  • Bae, Won-Sik; Cha, Jeong-Won
    • Journal of KIISE: Computing Practices and Letters, v.16 no.1, pp.110-114, 2010
  • We describe a new method for text categorization using the TextRank algorithm. Text categorization is the problem of assigning one or more pre-defined categories to a text document. TextRank is a graph-based ranking algorithm: if we consider each word a vertex and the co-occurrence of two adjacent words an edge, we obtain a graph from a document. We then find important words in this graph using TextRank and build features as pairs consisting of an important word and a word adjacent to it. We use four classifiers: SVM, naïve Bayesian classifier, maximum entropy model, and k-NN. We use the non-cross-posted version of the 20 Newsgroups data set. As a result, performance improved across all classifiers, which suggests the potential of the TextRank algorithm for text categorization.
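
As a rough sketch of the graph construction this abstract describes, the following Python code builds a co-occurrence graph over adjacent words, ranks vertices with a PageRank-style iteration, and forms the (important word, adjacent word) feature pairs; the damping factor, iteration count, and whitespace tokenizer are illustrative assumptions, not details taken from the paper.

```python
from collections import defaultdict

def textrank_keywords(tokens, d=0.85, iters=50, top_k=5):
    """Rank words with a TextRank-style iteration.

    Vertices are words; an undirected edge links words that occur
    adjacently in the text. d, iters, and top_k are illustrative.
    """
    graph = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        if a != b:
            graph[a].add(b)
            graph[b].add(a)
    score = {w: 1.0 for w in graph}
    for _ in range(iters):
        score = {w: (1 - d) + d * sum(score[u] / len(graph[u])
                                      for u in graph[w])
                 for w in graph}
    return sorted(score, key=score.get, reverse=True)[:top_k]

tokens = "the textrank algorithm ranks words in a graph of words".split()
important = set(textrank_keywords(tokens, top_k=3))

# Features: pairs of an important word and a word adjacent to it.
features = set()
for a, b in zip(tokens, tokens[1:]):
    if a in important:
        features.add((a, b))
    if b in important:
        features.add((b, a))
print(important, features)
```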

Mean-Variance-Validation Technique for Sequential Kriging Metamodels (순차적 크리깅모델의 평균-분산 정확도 검증기법)

  • Lee, Tae-Hee; Kim, Ho-Sung
    • Transactions of the Korean Society of Mechanical Engineers A, v.34 no.5, pp.541-547, 2010
  • The rigorous validation of metamodel accuracy is an important topic in research on metamodel techniques. The leave-k-out cross-validation technique involves a considerably high computational cost, yet it cannot measure the fidelity of metamodels. Recently, the mean₀ validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, using the mean₀ validation criterion may lead to premature termination of the sampling process even when the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than leave-k-out cross-validation because, instead of performing numerical integration, the kriging model is integrated explicitly to evaluate the mean and variance of the response accurately. The error in the proposed validation technique resembles a root mean squared error, so it can serve as a stopping criterion for the sequential sampling of metamodels.
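
A minimal sketch of the idea, assuming a Gaussian-process (kriging) metamodel from scikit-learn, a generic RMSE-like aggregation of the predictive variance, and a variance-based infill rule; the criterion, tolerance, and test function below are stand-ins for the paper's exact formulation, not reproductions of it.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0])                 # toy response
X = rng.uniform(0, 2, size=(6, 1))                # initial samples
X_check = np.linspace(0, 2, 50).reshape(-1, 1)    # validation points

while len(X) < 20:
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
    gp.fit(X, f(X))
    _, std = gp.predict(X_check, return_std=True)
    # RMSE-like measure: aggregate the kriging predictive variance.
    rmse_like = float(np.sqrt(np.mean(std ** 2)))
    if rmse_like < 1e-2:          # stop criterion met: model accurate
        break
    # Maximum-entropy-style infill: sample where variance is largest.
    X = np.vstack([X, X_check[[np.argmax(std)]]])
print(len(X), "samples, criterion =", rmse_like)
```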

A Study on the Estimation of Discharge in Unsteady Condition by Using the Entropy Concept (엔트로피 개념에 의한 부정류 유량 산정에 관한 연구)

  • Choo, Tai Ho; Chae, Soo Kwon
    • Journal of the Korea Academia-Industrial cooperation Society, v.13 no.12, pp.6159-6166, 2012
  • Discharge measurement, which is especially important in the water resources field, is difficult during flood season, and continuous discharge measurement for all rivers is impossible under the present system. The stage-discharge curve has therefore long been used to produce river discharge data. However, although the stage-discharge curve is convenient, its reliability is questionable because the method uses only the stage-discharge relationship. In this paper, a new mean velocity equation was derived using Chiu's 2D velocity formula based on the entropy concept. The derived equation reflects hydraulic characteristics such as depth, gravitational acceleration, hydraulic radius, energy slope, and kinematic viscosity, and also estimates the maximum velocity. The method verifies the relationship between the mean and maximum velocities and estimates the equilibrium state φ(M), which well represents the properties of a river cross section. The mean velocity was estimated using the equilibrium state φ(M), and the discharge was then estimated. To verify the accuracy of the equation, the estimated discharge was compared with discharge measured in the laboratory under unsteady flow conditions exhibiting a loop rating, and the results were consistent. If this study is extended with various laboratory and river data, the method can be widely utilized in the water resources field.
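
For reference, the entropy-based velocity distribution the abstract builds on (Chiu's 2D formula) and the mean-to-maximum velocity ratio it calls the equilibrium state φ(M) take the following standard forms in the entropy-velocity literature; the notation here is generic rather than copied from the paper.

```latex
% Chiu's entropy velocity distribution: M is the entropy parameter and
% \xi a normalized coordinate increasing from the channel bed toward
% the point of maximum velocity.
u = \frac{u_{\max}}{M}\,\ln\!\left[1 + \left(e^{M}-1\right)\xi\right]

% Equilibrium state: ratio of mean to maximum velocity.
\phi(M) = \frac{\bar{u}}{u_{\max}} = \frac{e^{M}}{e^{M}-1} - \frac{1}{M}
```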

Deep learning based crack detection from tunnel cement concrete lining (딥러닝 기반 터널 콘크리트 라이닝 균열 탐지)

  • Bae, Soohyeon; Ham, Sangwoo; Lee, Impyeong; Lee, Gyu-Phil; Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association, v.24 no.6, pp.583-598, 2022
  • Human-based tunnel inspections are affected by the subjective judgment of the inspector, which makes continuous history management difficult. There has recently been much research on deep learning-based automatic crack detection. However, the large public crack datasets used in most studies differ significantly from tunnel images, and building sophisticated crack labels requires additional work in current tunnel evaluation practice. Therefore, we present a method to improve crack detection performance by feeding existing datasets into a deep learning model. We evaluate and compare the performance of deep learning models trained on combinations of existing tunnel datasets, high-quality tunnel datasets, and public crack datasets. As a result, DeepLabv3+ with a cross-entropy loss function performed best when trained on public datasets together with patch-wise classification and oversampled tunnel datasets. In the future, we expect this work to contribute to a plan for efficiently using data from the tunnel image acquisition system for deep learning model training.
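
A minimal training-step sketch for the best-performing setup named above. Note that torchvision ships DeepLabv3 rather than DeepLabv3+, so it is used here as a close stand-in; the patch size, batch size, and two-class (background/crack) labeling are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Two classes: background and crack.
model = deeplabv3_resnet50(weights=None, num_classes=2)
criterion = nn.CrossEntropyLoss()             # pixel-wise cross-entropy

images = torch.randn(4, 3, 256, 256)          # a batch of image patches
masks = torch.randint(0, 2, (4, 256, 256))    # per-pixel class labels

logits = model(images)["out"]                 # (N, 2, H, W)
loss = criterion(logits, masks)
loss.backward()                               # one training step
print(float(loss))
```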

Development of Deep Learning Model for Detecting Road Cracks Based on Drone Image Data (드론 촬영 이미지 데이터를 기반으로 한 도로 균열 탐지 딥러닝 모델 개발)

  • Young-Ju Kwon; Sung-ho Mun
    • Land and Housing Review, v.14 no.2, pp.125-135, 2023
  • Drones are used in various fields, including land surveying, transportation, forestry/agriculture, marine, environment, disaster prevention, water resources, cultural assets, and construction, as their industrial importance and market size have increased. In this study, image data for deep learning was collected using a Mavic 3 drone at a shooting altitude of 20 m with ×7 magnification. Swin Transformer and UperNet were employed as the backbone and architecture of the deep learning model. About 800 labeled images were augmented to increase the amount of data. Training proceeded in three rounds: the cross-entropy loss function was used in the first and second rounds, and the Tversky loss function in the third. In the future, once the crack detection model is advanced through convergence with the Internet of Things (IoT) in further research, it will also be possible to detect patching and potholes. Real-time drone-based detection is likewise expected to quickly identify pavement sections requiring maintenance.
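
For reference, a common formulation of the Tversky loss used in the third training round; the α/β weights and the binary setting are generic defaults from the Tversky-loss literature, not the paper's settings.

```python
import torch

def tversky_loss(logits, targets, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss for binary segmentation (a generic formulation).

    Here alpha weights false negatives and beta false positives;
    alpha = beta = 0.5 reduces to the Dice loss.
    """
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum()
    fn = ((1 - probs) * targets).sum()
    fp = (probs * (1 - targets)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return 1.0 - tversky

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
targets = torch.randint(0, 2, (2, 1, 64, 64)).float()
loss = tversky_loss(logits, targets)
loss.backward()
print(float(loss))
```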

A Study on Reliability Analysis According to the Number of Training Data and the Number of Training (훈련 데이터 개수와 훈련 횟수에 따른 과도학습과 신뢰도 분석에 대한 연구)

  • Kim, Sung Hyeock; Oh, Sang Jin; Yoon, Geun Young; Kim, Wan
    • Korean Journal of Artificial Intelligence, v.5 no.1, pp.29-37, 2017
  • The range of problems that can be handled has rapidly expanded with the rise of big data and the development of hardware, and machine learning techniques such as deep learning have become very versatile. In this paper, the MNIST data set is used as experimental data and the cross-entropy function as the loss model for evaluating the efficiency of machine learning. We applied the gradient descent optimization algorithm to minimize the value of the loss function and updated the weights and biases via backpropagation. In this way we analyze the optimal reliability value corresponding to the number of training iterations, that is, the optimal reliability value without overfitting. Comparing the onset of overfitting as the amount of data changes, relative to the number of training iterations, we obtained a result of 92% when the number of training iterations was 1110, which is the optimal reliability value without overfitting.
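
A compact sketch of the training loop the abstract describes: softmax regression with a cross-entropy loss minimized by gradient descent. Synthetic data stands in for MNIST here, and the learning rate and iteration count are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy stand-in for MNIST: 784-dimensional inputs, 10 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 784))
y = rng.integers(0, 10, size=256)
W, b = np.zeros((784, 10)), np.zeros(10)
lr = 0.1

for step in range(100):                     # training iterations
    p = softmax(X @ W + b)                  # predicted probabilities
    loss = -np.log(p[np.arange(len(y)), y]).mean()   # cross-entropy
    g = p.copy()                            # gradient of loss w.r.t. logits
    g[np.arange(len(y)), y] -= 1.0
    g /= len(y)
    W -= lr * (X.T @ g)                     # gradient descent updates
    b -= lr * g.sum(axis=0)
print(f"final cross-entropy: {loss:.3f}")
```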

Improving Discriminative Feature Learning for Face Recognition utilizing a Center Expansion Algorithm (중심확장 알고리즘이 보강된 식별적 특징학습을 통한 얼굴인식 향상기법)

  • Kang, Myeong-Kyun; Lee, Sang C.; Lee, In-Ho
    • Annual Conference of KIPS, 2017.04a, pp.881-884, 2017
  • A neural network that can extract good features is a network that understands its target well. To classify highly similar images such as faces, however, the network must extract more discriminative features. In this paper, we add an error term called Center Expansion to the loss function in order to classify similar images such as faces. Center Expansion is proposed to address the problem that, when the extracted features are densely packed, it becomes difficult to find a manifold that separates the classes and classification performance drops; it forces features away from regions where they are likely to concentrate. The training loss combines the softmax cross-entropy loss commonly used for classification, a loss that reduces the variance of each class, and the proposed Center Expansion loss. We examine how models trained with and without the proposed Center Expansion loss differ in feature extraction and classification. To assess the effect of training with Center Expansion, we conduct classification experiments on Labeled Faces in the Wild. The experiments on Labeled Faces in the Wild confirmed a performance difference between the model using Center Expansion and the model without it.
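
The Center Expansion term is the paper's own contribution and its exact form is not given in the abstract, so the sketch below shows only the two standard components it is combined with: softmax cross-entropy and a center (intra-class variance) loss. The feature dimension, class count, and weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLossHead(nn.Module):
    """Softmax cross-entropy plus a per-class variance (center) loss.

    The paper's Center Expansion term would be added as a third
    component; it is not reproduced here.
    """
    def __init__(self, feat_dim=128, num_classes=10, lam=0.01):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.fc = nn.Linear(feat_dim, num_classes)
        self.lam = lam

    def forward(self, feats, labels):
        ce = F.cross_entropy(self.fc(feats), labels)
        # Center loss: pull each feature toward its class center,
        # shrinking intra-class variance.
        center = ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()
        return ce + self.lam * center

head = CenterLossHead()
feats = torch.randn(32, 128, requires_grad=True)   # backbone features
labels = torch.randint(0, 10, (32,))
loss = head(feats, labels)
loss.backward()
print(float(loss))
```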

Moving Picture Compression using Frame Classification by Luminance Characteristics (명암특성에 따른 프레임 분류를 이용한 동영상 압축기법)

  • Kim, Sang-Hyun
    • The Journal of the Korea Contents Association, v.11 no.4, pp.51-56, 2011
  • This paper proposes an efficient moving picture compression scheme for video sequences with luminance variations. In the proposed algorithm, the luminance variation parameters are estimated and local motions are compensated. To detect frames that require luminance compensation, we employ frame classification based on the cross entropy between the histograms of two successive frames, which reduces computational redundancy. Simulation results show that the proposed method yields a higher peak signal-to-noise ratio (PSNR) than conventional methods, with a low computational load, when the video scene contains large luminance variations.
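
A small sketch of the frame-classification test: the cross entropy between the luminance histograms of two successive frames, with compensation triggered above a threshold. The bin count and threshold are illustrative, not values from the paper.

```python
import numpy as np

def histogram_cross_entropy(frame_a, frame_b, bins=256, eps=1e-12):
    """Cross entropy H(p, q) = -sum p log q between the luminance
    histograms of two successive frames; a large value suggests a
    luminance change that warrants compensation."""
    p, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    q, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    p = p / p.sum()
    q = q / q.sum()
    return float(-(p * np.log(q + eps)).sum())

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(240, 320))     # luminance frame t-1
curr = np.clip(prev + 40, 0, 255)                # brightened frame t
if histogram_cross_entropy(prev, curr) > 6.0:    # illustrative threshold
    print("luminance compensation needed")
```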

A Simple Stopping Criterion for the MIN-SUM Iterative Decoding Algorithm on SCCC and Turbo code (반복 복호의 계산량 감소를 위한 간단한 복호 중단 판정 알고리즘)

  • Heo, Jun; Chung, Kyu-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea TC, v.41 no.4, pp.11-16, 2004
  • A simple stopping criterion for iterative decoding based on min-sum processing is presented. While most stopping criteria suggested in the literature are based on cross entropy (CE) and its simplifications, the proposed criterion checks whether a decoded sequence is a valid codeword along the encoder trellis structure. This new stopping criterion requires less computational complexity and saves memory compared to conventional stopping rules. Numerical results are presented for the 3GPP turbo code and a serially concatenated convolutional code (SCCC).
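
A generic illustration of the proposed check, assuming a toy rate-1/2 convolutional code: hard-decide the information bits, re-encode them, and stop iterating once the result matches the hard-decided parity, i.e. the decoded sequence lies on the encoder trellis. The (7,5) octal generators are an assumption; the paper applies the check to the 3GPP turbo code and an SCCC.

```python
def conv_encode(bits, g1=0o7, g2=0o5, k=3):
    """Rate-1/2 feedforward convolutional encoder, generators (7,5) octal."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append((bin(state & g1).count("1") % 2,   # first output stream
                    bin(state & g2).count("1") % 2))  # second output stream
    return out

def is_valid_codeword(info_hard, parity_hard):
    """Stop iterating once the hard-decided information bits re-encode
    to the hard-decided parity bits, i.e. the decoded sequence is a
    valid codeword on the encoder trellis."""
    return [p for _, p in conv_encode(info_hard)] == parity_hard

info = [1, 0, 1, 1, 0]
parity = [p for _, p in conv_encode(info)]
print(is_valid_codeword(info, parity))   # True: decoding can stop
```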

Analysis of Change Detection Results by UNet++ Models According to the Characteristics of Loss Function (손실함수의 특성에 따른 UNet++ 모델에 의한 변화탐지 결과 분석)

  • Jeong, Mila; Choi, Hoseong; Choi, Jaewan
    • Korean Journal of Remote Sensing, v.36 no.5_2, pp.929-937, 2020
  • In this manuscript, the UNet++ model, one of the representative deep learning techniques for semantic segmentation, was used to detect changes between temporal satellite images. To analyze the learning results under various loss functions, we evaluated the change detection results of UNet++ models trained with binary cross entropy and with the Jaccard coefficient. In addition, the results of the deep learning model were compared with those of existing pixel-based change detection algorithms using WorldView-3 images. The experiment confirmed that the performance of the deep learning model depends on the characteristics of the loss function, and that it outperforms the existing techniques.
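
For reference, minimal formulations of the two loss functions the paper compares: pixel-wise binary cross entropy and a soft Jaccard (IoU) loss. The tensor shapes are illustrative.

```python
import torch

def bce_loss(logits, targets):
    """Pixel-wise binary cross entropy for change/no-change maps."""
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets)

def jaccard_loss(logits, targets, eps=1e-6):
    """Soft Jaccard (IoU) loss: 1 - |A∩B| / |A∪B| on probabilities."""
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum()
    union = probs.sum() + targets.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)

logits = torch.randn(2, 1, 128, 128, requires_grad=True)
change = torch.randint(0, 2, (2, 1, 128, 128)).float()
print(float(bce_loss(logits, change)), float(jaccard_loss(logits, change)))
```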