• Title/Summary/Keyword: Binary learning

Optimal Method for Binary Neural Network using AETLA (AETLA를 이용한 이진 신경회로망의 최적 합성방법)

  • 성상규;정종원;이준탁
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.05a
    • /
    • pp.105-108
    • /
    • 2001
  • In this paper, a learning algorithm called the advanced expanded and truncate learning algorithm (AETLA) is proposed for training multilayer binary neural networks to approximate binary-to-binary mappings. AETLA combines the merits of the ETL and MTGA learning algorithms. The proposed algorithm reduces the number of neurons required in the hidden layer, and its learning speed is therefore much faster than that of other learning algorithms.

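AETLA itself is not specified in the abstract above, so the following is only a minimal sketch of the binary-to-binary mapping task such algorithms target, using a generic iteratively trained MLP (scikit-learn) as a stand-in; the mapping, network size, and solver are illustrative assumptions, not the paper's method.

```python
# Illustrative only: a 3-bit to 2-bit binary mapping given as a truth table,
# learned with a generic iterative MLP (NOT the AETLA algorithm itself).
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
Y = np.column_stack([X[:, 0] ^ X[:, 1],    # output bit 1: XOR of the first two inputs
                     X[:, 1] & X[:, 2]])   # output bit 2: AND of the last two inputs

# Constructive algorithms such as AETLA aim to realise mappings like this with
# fewer hidden units and without this kind of iterative training.
net = MLPClassifier(hidden_layer_sizes=(8,), activation='tanh',
                    solver='lbfgs', max_iter=2000, random_state=0)
net.fit(X, Y)
print(net.predict(X))   # ideally reproduces the truth table Y
```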

Supervised Learning-Based Collaborative Filtering Using Market Basket Data for the Cold-Start Problem

  • Hwang, Wook-Yeon;Jun, Chi-Hyuck
    • Industrial Engineering and Management Systems
    • /
    • v.13 no.4
    • /
    • pp.421-431
    • /
    • 2014
  • The market basket data in the form of a binary user-item matrix or a binary item-user matrix can be modelled as a binary classification problem. The binary logistic regression approach tackles the binary classification problem, where principal components are predictor variables. If users or items are sparse in the training data, the binary classification problem can be considered as a cold-start problem. The binary logistic regression approach may not function appropriately if the principal components are inefficient for the cold-start problem. Assuming that the market basket data can also be considered as a special regression problem whose response is either 0 or 1, we propose three supervised learning approaches: random forest regression, random forest classification, and elastic net to tackle the cold-start problem, comparing the performance in a variety of experimental settings. The experimental results show that the proposed supervised learning approaches outperform the conventional approaches.
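
As a rough illustration of the regression view described above, the sketch below regresses one item's binary purchase column on the remaining items with random forest regression and an elastic net; the synthetic basket matrix, hyperparameters, and choice of target item are assumptions, not the paper's experimental setup.

```python
# Sketch of the regression view of market-basket data: each item's binary
# purchase vector is regressed on the remaining items.  Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
basket = (rng.random((200, 30)) < 0.2).astype(float)   # binary user-item matrix

target_item = 0
X = np.delete(basket, target_item, axis=1)   # all other items as predictors
y = basket[:, target_item]                   # response is 0/1 but treated as numeric

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
enet = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(X, y)

# Predicted scores can be thresholded or ranked to recommend the target item.
print(rf.predict(X[:5]), enet.predict(X[:5]))
```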

NETLA Based Optimal Synthesis Method of Binary Neural Network for Pattern Recognition

  • Lee, Joon-Tark
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.216-221
    • /
    • 2004
  • This paper describes an optimal synthesis method of binary neural networks for pattern recognition. Our objective is to minimize the number of connections and the number of neurons in the hidden layer by using a Newly Expanded and Truncated Learning Algorithm (NETLA) for multilayered neural networks. The synthesis method in NETLA uses the Expanded Sum of Product (ESP) form of the Boolean expressions and is based on the multilayer perceptron. It is able to optimize a given binary neural network in the binary space without any iterative learning, unlike the conventional Error Back Propagation (EBP) algorithm. Furthermore, NETLA can reduce the number of required neurons in the hidden layer and the number of connections. Therefore, this learning algorithm can speed up training for pattern recognition problems. The superiority of NETLA over other learning algorithms is demonstrated by a practical application to the approximation problem of a circular region.
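
The abstract does not give NETLA's optimization procedure, but the underlying idea of realizing a Boolean sum-of-products expression with one hidden threshold neuron per product term and an OR-like output neuron can be sketched as below. The example function and helper names are illustrative, and the NETLA minimization of neurons and connections is not reproduced.

```python
# Minimal sketch of the classical sum-of-products -> threshold-network construction
# that synthesis methods such as NETLA start from.
import numpy as np

def sop_to_network(terms, n_inputs):
    """Each term is a dict {var_index: literal}, literal 1 = x_i, 0 = NOT x_i."""
    W_h = np.zeros((len(terms), n_inputs))   # hidden weights, one neuron per product term
    t_h = np.zeros(len(terms))               # hidden thresholds
    for j, term in enumerate(terms):
        for i, lit in term.items():
            W_h[j, i] = 1.0 if lit == 1 else -1.0
        t_h[j] = sum(1 for lit in term.values() if lit == 1)  # fires only if the term is satisfied
    return W_h, t_h

def forward(x, W_h, t_h):
    hidden = (W_h @ x >= t_h).astype(int)    # AND neurons, one per product term
    return int(hidden.sum() >= 1)            # OR output neuron

# f(x1, x2, x3) = x1*x2 + (NOT x1)*x3, expressed as two product terms
terms = [{0: 1, 1: 1}, {0: 0, 2: 1}]
W_h, t_h = sop_to_network(terms, n_inputs=3)
for x in [(0, 0, 1), (1, 1, 0), (1, 0, 1), (0, 0, 0)]:
    print(x, forward(np.array(x), W_h, t_h))
```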

Self-Organizing Feature Map with Constant Learning Rate and Binary Reinforcement (일정 학습계수와 이진 강화함수를 가진 자기 조직화 형상지도 신경회로망)

  • 조성원;석진욱
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.1
    • /
    • pp.180-188
    • /
    • 1995
  • A modified Kohonen self-organizing feature map (SOFM) algorithm that has a binary reinforcement function and a constant learning rate is proposed. In contrast to the time-varying adaptation gain of the original Kohonen SOFM algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the lowered learning ability of the SOFM due to the constant learning rate. Since the proposed algorithm does not require complicated multiplications, its digital hardware implementation is much easier than that of the original SOFM.

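A minimal sketch of the kind of update the abstract describes is given below, assuming the binary reinforcement acts as a 0/1 indicator over a fixed neighbourhood of the winning node; the map size, radius, and constant gain are illustrative assumptions rather than the paper's settings.

```python
# SOFM update with a constant learning rate and a binary reinforcement term
# (1 inside the winner's neighbourhood, 0 outside) on a small 1-D map.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 10, 2
W = rng.random((n_nodes, dim))          # weight vectors of the map nodes
alpha = 0.1                             # constant adaptation gain
radius = 1                              # fixed neighbourhood radius

for step in range(2000):
    x = rng.random(dim)                                   # input sample
    winner = np.argmin(np.linalg.norm(W - x, axis=1))     # best-matching unit
    # binary reinforcement: 1 for nodes within `radius` of the winner, else 0
    b = (np.abs(np.arange(n_nodes) - winner) <= radius).astype(float)
    W += alpha * b[:, None] * (x - W)                     # no time-varying gain

print(np.round(W, 2))   # weights should spread over the unit square
```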

Deep Learning based Dynamic Taint Detection Technique for Binary Code Vulnerability Detection (바이너리 코드 취약점 탐지를 위한 딥러닝 기반 동적 오염 탐지 기술)

  • Kwang-Man Ko
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.3
    • /
    • pp.161-166
    • /
    • 2023
  • In recent years, new and variant attacks on binary code have increased, exposing the limitations of techniques that detect malicious code in source programs and defend against attacks. Advanced software-security vulnerability detection technology that applies machine learning and deep learning to binary code, along with defense and response capabilities against attacks, is required. In this paper, we propose a malware clustering method that traces the execution path of binary code to collect dynamic taint information and then groups malware based on the characteristics of that taint information. Malware vulnerability detection was applied to a three-layer few-shot learning model, and F1-scores were calculated for each layer on CPU and GPU. We obtained 97~98% performance during learning and 80~81% detection performance during testing.
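
The paper's three-layer few-shot model and its taint features are not specified in the abstract, so the sketch below only illustrates a generic nearest-prototype few-shot classifier over hypothetical taint-derived feature vectors; the dimensions, class count, and feature construction are all assumptions.

```python
# Generic nearest-prototype few-shot baseline over hypothetical taint features.
import numpy as np

rng = np.random.default_rng(0)
n_classes, shots, feat_dim = 3, 5, 16

# Hypothetical taint-feature vectors: `shots` support samples per malware family.
support = rng.normal(size=(n_classes, shots, feat_dim)) + \
          np.arange(n_classes)[:, None, None]             # separate the classes a bit
prototypes = support.mean(axis=1)                          # one prototype per class

query = rng.normal(size=feat_dim) + 2.0                    # an unknown sample
dists = np.linalg.norm(prototypes - query, axis=1)
print("predicted family:", int(np.argmin(dists)))
```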

The dynamics of self-organizing feature map with constant learning rate and binary reinforcement function (시불변 학습계수와 이진 강화 함수를 가진 자기 조직화 형상지도 신경회로망의 동적특성)

  • Seok, Jin-Uk;Jo, Seong-Won
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.2 no.2
    • /
    • pp.108-114
    • /
    • 1996
  • We present proofs of the stability and convergence of a self-organizing feature map (SOFM) neural network with a time-invariant learning rate and a binary reinforcement function. One of the major problems of the self-organizing feature map neural network concerns the learning rate, which, like the "Kalman filter" gain in the stochastic control field, is a monotonically decreasing function converging to 0 in order to satisfy the minimum-variance property. In this paper, we show the stability and convergence of the self-organizing feature map neural network with a time-invariant learning rate. The analysis of the proposed algorithm shows that stability and convergence are guaranteed, with exponential stability and weak convergence properties as well.

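As a notational sketch of the contrast the abstract draws (the symbols below are assumed, not taken from the paper), the constant-gain update and the conventional decreasing-gain conditions it departs from can be written as:

```latex
% Assumed notation: w_i(t) is the weight vector of node i, x(t) the input,
% b_i(t) the binary reinforcement (1 inside the winner's neighbourhood,
% 0 otherwise), and \alpha the constant adaptation gain.
\[
  w_i(t+1) = w_i(t) + \alpha \, b_i(t) \bigl( x(t) - w_i(t) \bigr),
  \qquad \alpha = \mathrm{const.}
\]
% A conventional time-varying gain \alpha(t), by contrast, is a decreasing
% sequence chosen to satisfy the usual stochastic-approximation conditions:
\[
  \sum_{t=0}^{\infty} \alpha(t) = \infty ,
  \qquad
  \sum_{t=0}^{\infty} \alpha(t)^{2} < \infty .
\]
```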

RLDB: Robust Local Difference Binary Descriptor with Integrated Learning-based Optimization

  • Sun, Huitao;Li, Muguo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.9
    • /
    • pp.4429-4447
    • /
    • 2018
  • Local binary descriptors are well-suited for many real-time and/or large-scale computer vision applications, although their low computational complexity usually comes with limited performance. In this paper, we propose a new optimization framework, RLDB (Robust-LDB), to improve a typical region-based binary descriptor, LDB (local difference binary), while maintaining its computational simplicity. RLDB extends the multi-feature strategy of LDB and applies a more complete region-comparing configuration. A cascade bit-selection method is used to select the more representative patterns from the massive set of comparison pairs, and an online learning strategy further optimizes the descriptor for each specific patch separately. Both incorporate the LDP (linear discriminant projections) principle to jointly guarantee the robustness and distinctiveness of the features across various scales. Experimental results demonstrate that this integrated learning framework significantly enhances LDB. The improved descriptor achieves performance comparable to floating-point descriptors on many benchmarks and retains a high computing speed similar to most binary descriptors, which better satisfies the demands of applications.
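
A plain LDB-style bit extraction, which RLDB builds on, can be sketched as below: the patch is divided into grid cells, and every pair of cells is compared on mean intensity and mean x/y gradients to produce bits. The grid size is an assumption, and RLDB's cascade bit selection and LDP-based online learning are not reproduced.

```python
# Sketch of plain LDB-style bit extraction from an image patch.
import numpy as np
from itertools import combinations

def ldb_bits(patch, grid=4):
    h, w = patch.shape
    gy, gx = np.gradient(patch.astype(float))          # row (y) and column (x) gradients
    cells = []
    for r in range(grid):
        for c in range(grid):
            rs = slice(r * h // grid, (r + 1) * h // grid)
            cs = slice(c * w // grid, (c + 1) * w // grid)
            cells.append((patch[rs, cs].mean(), gx[rs, cs].mean(), gy[rs, cs].mean()))
    bits = []
    for a, b in combinations(range(len(cells)), 2):     # compare every cell pair
        bits.extend(int(cells[a][k] > cells[b][k]) for k in range(3))
    return np.array(bits, dtype=np.uint8)

patch = (np.random.default_rng(0).random((32, 32)) * 255).astype(np.uint8)
desc = ldb_bits(patch)
print(desc.shape, desc[:12])   # 3 bits per cell pair
```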

Binary clustering network for recognition of keywords in continuous speech (연속음성중 키워드(Keyword) 인식을 위한 Binary Clustering Network)

  • 최관선;한민홍
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1993.10a
    • /
    • pp.870-876
    • /
    • 1993
  • This paper presents a binary clustering network (BCN) and a heuristic pitch-detection algorithm for the recognition of keywords in continuous speech. In order to classify nonlinear patterns, BCN separates patterns into binary clusters hierarchically and links identical patterns at the root level using supervised and unsupervised learning. BCN has many desirable properties, such as a flexible dynamic structure, high classification accuracy, short learning time, and short recall time. The pitch-detection algorithm is a heuristic model that can cope with difficulties such as scaling invariance, time warping, time-shift invariance, and redundancy. This recognition algorithm has shown recognition rates as high as 95% in speaker-dependent as well as multispeaker-dependent tests.

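The abstract does not specify BCN's splitting rule or its root-level linking of identical patterns, so the sketch below shows only a generic recursive two-way clustering of feature vectors as a rough analogue of hierarchical binary clustering; the data, depth limit, and cluster sizes are illustrative assumptions.

```python
# Generic sketch of hierarchical binary clustering (recursive two-way k-means).
# BCN's actual splitting rule and supervised root-level linking are not modelled.
import numpy as np
from sklearn.cluster import KMeans

def binary_cluster_tree(X, depth=0, max_depth=3, min_size=5):
    if depth == max_depth or len(X) <= min_size:
        return {"leaf": True, "size": len(X)}
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    return {"leaf": False,
            "left":  binary_cluster_tree(X[labels == 0], depth + 1, max_depth, min_size),
            "right": binary_cluster_tree(X[labels == 1], depth + 1, max_depth, min_size)}

X = np.random.default_rng(0).normal(size=(100, 12))   # stand-in for speech feature vectors
print(binary_cluster_tree(X))
```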

Recognition of Hangul Characters with Input Noise (잡음성분을 포함한 한글 문자 인식)

  • Chang, Sun-Young;Cho, Dong-Sub
    • Proceedings of the KIEE Conference
    • /
    • 1990.11a
    • /
    • pp.465-469
    • /
    • 1990
  • This paper proposes a new scheme for the recognition of presegmented Hangul characters. The proposed approach is relatively insensitive to noise and variation because a two-dimensional convolution is applied to the learning patterns. A Hangul recognition neural network is implemented on the basis of this scheme, and the recognition rate is analyzed for two cases of learning: learning with binary patterns only, and learning with binary patterns and convolved patterns together.

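A minimal sketch of the preprocessing idea, assuming a simple averaging kernel: a binary character pattern is convolved in two dimensions so that training can use both the raw binary pattern and its convolved version, mirroring the two learning cases compared above.

```python
# Smoothing a binary character pattern with a small 2-D convolution kernel.
# The kernel size and the toy pattern are illustrative only.
import numpy as np
from scipy.signal import convolve2d

binary_pattern = np.zeros((16, 16), dtype=float)
binary_pattern[4:12, 7:9] = 1.0                      # a crude vertical stroke

kernel = np.ones((3, 3)) / 9.0                        # simple averaging kernel
convolved = convolve2d(binary_pattern, kernel, mode="same")

# A training set could contain the binary pattern alone, or binary + convolved
# pairs, corresponding to the two learning cases compared in the paper.
print(np.round(convolved[3:13, 5:11], 2))
```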

Weighted Least Squares Based on Feature Transformation using Distance Computation for Binary Classification (이진 분류를 위하여 거리계산을 이용한 특징 변환 기반의 가중된 최소 자승법)

  • Jang, Se-In;Park, Choong-Shik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.2
    • /
    • pp.219-224
    • /
    • 2020
  • Binary classification has been broadly investigated in machine learning, and binary classifiers can be easily extended to multi-class problems. To successfully apply machine learning methods to classification tasks, preprocessing and feature extraction steps are essential; they are important for improving classification performance. In this paper, we propose a new learning method based on weighted least squares. In weighted least squares, the design of the weights plays a significant role, so we also propose a new technique for obtaining weights that achieve a feature transformation. Based on this weighting technique, we further propose a method that combines the learning and feature extraction processes so that both are performed simultaneously in one step. The proposed method shows promising performance on five UCI machine learning data sets.
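
The paper's specific distance-based weight design is not given in the abstract, so the sketch below only illustrates the general weighted-least-squares machinery, with weights derived, as an assumption, from each sample's distance to its class mean.

```python
# Weighted least squares for binary classification with illustrative,
# distance-derived sample weights (not the paper's weighting scheme).
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(loc=-1.0, size=(50, 3))
X1 = rng.normal(loc=+1.0, size=(50, 3))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(50), np.ones(50)]

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
d_own = np.where(y[:, None] == 0, X - mu0, X - mu1)          # offset to own class mean
w = 1.0 / (1.0 + np.linalg.norm(d_own, axis=1))               # closer samples weigh more (assumption)

A = np.c_[X, np.ones(len(X))]                                 # add bias column
W = np.diag(w)
beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)              # weighted normal equations

pred = (A @ beta > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```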