• Title/Summary/Keyword: Quantized Learning


Study on Derivation and Implementation of Quantized Gradient for Machine Learning (기계학습을 위한 양자화 경사도함수 유도 및 구현에 관한 연구)

  • Seok, Jinwuk
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.15 no.1
    • /
    • pp.1-8
    • /
    • 2020
  • In this paper, a method for deriving a quantized gradient for machine learning on an embedded system is proposed. The proposed differentiation method applies the quantized gradient vector to an objective function and establishes the validity of the directional derivative. Moreover, mathematical analysis shows that the sequence yielded by the learning equation based on the proposed quantization converges to the optimal point of the quantized objective function when the quantization parameter is sufficiently large. Simulation results show that an optimization solver based on the proposed quantized method achieves performance comparable to the conventional method based on floating-point arithmetic.
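The abstract does not reproduce the derivation, but the core idea of descending along a quantized gradient can be illustrated roughly as follows; the uniform quantizer and the quadratic objective here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def quantize(v, q):
    """Uniform quantization with step 1/q; a larger q means finer quantization."""
    return np.round(v * q) / q

def quantized_gradient_descent(grad, x0, q, lr=0.1, steps=200):
    """Gradient descent in which each gradient is quantized before the update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * quantize(grad(x), q)
    return x

# Toy quadratic objective f(x) = ||x - 1||^2, with gradient 2(x - 1).
grad = lambda x: 2.0 * (x - np.ones_like(x))
x_opt = quantized_gradient_descent(grad, x0=[4.0, -3.0], q=256)
```

With a sufficiently large quantization parameter `q`, the iterate lands within a small neighborhood of the optimum before the quantized gradient rounds to zero, which matches the abstract's convergence claim qualitatively.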

Bit-width Aware Generator and Intermediate Layer Knowledge Distillation using Channel-wise Attention for Generative Data-Free Quantization

  • Jae-Yong Baek;Du-Hwan Hur;Deok-Woong Kim;Yong-Sang Yoo;Hyuk-Jin Shin;Dae-Hyeon Park;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.7
    • /
    • pp.11-20
    • /
    • 2024
  • In this paper, we propose BAG (Bit-width Aware Generator) and intermediate-layer knowledge distillation using channel-wise attention to reduce the knowledge gap between the quantized network, the full-precision network, and the generator in GDFQ (Generative Data-Free Quantization). Since the generator in GDFQ is trained only by feedback from the full-precision network, the reduced capability caused by the low bit-width of the quantized network has no effect on training the generator. To alleviate this problem, BAG is quantized to the same bit-width as the quantized network, so it can generate synthetic images that are effective for training the quantized network. The knowledge gap between the quantized network and the full-precision network is also important. To address it, we compute channel-wise attention over the outputs of the convolutional layers and minimize a loss function defined as the distance between them. As a result, the quantized network learns which channels to focus on by mimicking the full-precision network. To demonstrate the efficiency of the proposed methods, we quantize a network trained on CIFAR-100 to 3-bit weights and activations and train it together with the generator using our method. We achieve 56.14% Top-1 accuracy, 3.4% higher than our baseline AdaDFQ.
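A minimal sketch of the channel-wise attention distance used for distillation might look like the following; the attention definition (mean spatial activation magnitude per channel, normalized) and the L2 distance are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def channel_attention(feat):
    """Channel-wise attention: mean spatial activation magnitude per channel,
    normalized to a distribution over channels. feat has shape (C, H, W)."""
    a = np.abs(feat).mean(axis=(1, 2))          # (C,)
    return a / (a.sum() + 1e-8)

def attention_distance(feat_q, feat_fp):
    """L2 distance between the channel attentions of the quantized and
    full-precision networks' intermediate features."""
    return float(np.linalg.norm(channel_attention(feat_q) - channel_attention(feat_fp)))

rng = np.random.default_rng(0)
fp = rng.normal(size=(8, 4, 4))                 # stand-in for a conv layer output
loss_same = attention_distance(fp, fp)          # identical features give zero loss
loss_diff = attention_distance(rng.normal(size=(8, 4, 4)), fp)
```

Minimizing such a distance pulls the quantized network's per-channel emphasis toward that of the full-precision teacher, which is the effect the abstract describes.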

Study on the Effective Compensation of Quantization Error for Machine Learning in an Embedded System (임베디드 시스템에서의 양자화 기계학습을 위한 효율적인 양자화 오차보상에 관한 연구)

  • Seok, Jinwuk
    • Journal of Broadcast Engineering
    • /
    • v.25 no.2
    • /
    • pp.157-165
    • /
    • 2020
  • In this paper, we propose an effective compensation scheme for the quantization error arising from quantized learning in machine learning on an embedded system. In machine learning based on gradient descent or nonlinear signal processing, quantization error causes early vanishing of the gradient and degrades learning performance. To compensate for this quantization error, we derive a compensation vector orthogonal to the maximum component of the gradient vector. Moreover, instead of the conventional constant learning rate, we propose an adaptive learning-rate algorithm, based on a nonlinear optimization technique, that selects the step size without any inner loop. The simulation results show that an optimization solver based on the proposed quantized method achieves sufficient learning performance.
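One simple way to build a search direction orthogonal to the gradient's dominant component is sketched below; this particular construction (zeroing the largest-magnitude axis and normalizing) is an assumption for illustration, and the paper's exact derivation may differ:

```python
import numpy as np

def orthogonal_compensation(grad_q):
    """Return a unit vector orthogonal to the axis of the gradient's
    largest-magnitude component (one simple construction)."""
    g = np.asarray(grad_q, dtype=float)
    k = int(np.argmax(np.abs(g)))       # index of the dominant component
    c = g.copy()
    c[k] = 0.0                          # drop the dominant axis
    n = np.linalg.norm(c)
    if n == 0.0:                        # gradient lies entirely on one axis
        c = np.zeros_like(g)
        c[(k + 1) % g.size] = 1.0
        return c
    return c / n

g = np.array([0.5, -2.0, 0.25])         # dominant component at index 1
c = orthogonal_compensation(g)
```

Searching along such a direction can recover progress lost when quantization zeroes out the small gradient components.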

A Modified Deterministic Boltzmann Machine Learning Algorithm for Networks with Quantized Connection (양자화 결합 네트워크를 위한 수정된 결정론적 볼츠만머신 학습 알고리즘)

  • 박철영
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.7 no.3
    • /
    • pp.62-67
    • /
    • 2002
  • From the viewpoint of VLSI implementation, a new learning algorithm suited to networks with quantized connection weights is desired. This paper presents a new learning algorithm for the DBM (deterministic Boltzmann machine) network with quantized connection weights. The performance of the proposed algorithm is tested on the 2-input XOR problem and the 3-input parity problem through computer simulations. The simulation results show that our algorithm is effective for neural networks with quantized connections.

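One simple way to keep connection weights on a discrete grid during learning is to snap each weight back to its nearest allowed level after every real-valued update; this is an illustration of the constraint only, not the paper's modified DBM learning rule:

```python
import numpy as np

def quantize_weights(w, levels):
    """Snap each weight to its nearest allowed level."""
    levels = np.asarray(levels, dtype=float)
    idx = np.argmin(np.abs(np.asarray(w, dtype=float)[..., None] - levels), axis=-1)
    return levels[idx]

# Hypothetical learning step: apply a real-valued correction, then re-quantize.
levels = np.linspace(-1.0, 1.0, 9)                    # 9 allowed weight values
w = quantize_weights(np.array([0.13, -0.6, 0.91]), levels)
w_new = quantize_weights(w + 0.2, levels)             # -> [0.5, -0.25, 1.0]
```

Keeping weights on such a grid is what makes the network attractive for VLSI implementation, at the cost of the coarse updates the paper's algorithm is designed to handle.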

Distributed Estimation Using Non-regular Quantized Data

  • Kim, Yoon Hak
    • Journal of information and communication convergence engineering
    • /
    • v.15 no.1
    • /
    • pp.7-13
    • /
    • 2017
  • We consider a distributed estimation where many nodes remotely placed at known locations collect the measurements of the parameter of interest, quantize these measurements, and transmit the quantized data to a fusion node; this fusion node performs the parameter estimation. Noting that quantizers at nodes should operate in a non-regular framework where multiple codewords or quantization partitions can be mapped from a single measurement to improve the system performance, we propose a low-weight estimation algorithm that finds the most feasible combination of codewords. This combination is found by computing the weighted sum of the possible combinations whose weights are obtained by counting their occurrence in a learning process. Otherwise, tremendous complexity will be inevitable due to multiple codewords or partitions interpreted from non-regular quantized data. We conduct extensive experiments to demonstrate that the proposed algorithm provides a statistically significant performance gain with low complexity as compared to typical estimation techniques.
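The fusion step the abstract describes, choosing the most feasible combination of per-node codewords by weights accumulated during learning, can be sketched as follows; the data structures and the toy learning phase are assumptions for illustration:

```python
import numpy as np
from itertools import product
from collections import Counter

def fuse(candidates, weights):
    """Pick the most feasible combination of per-node codewords according to
    the weight each combination accumulated during the learning process."""
    best, best_w = None, -1.0
    for combo in product(*candidates):
        w = weights.get(combo, 0.0)
        if w > best_w:
            best, best_w = combo, w
    return best

# Toy learning phase: count how often each codeword combination occurred.
history = [(0, 1), (0, 1), (1, 1), (0, 0)]
counts = Counter(history)
total = sum(counts.values())
weights = {combo: c / total for combo, c in counts.items()}

# Each node reports a set of candidate codewords (non-regular quantization
# maps one measurement to multiple possible codewords).
combo = fuse([(0, 1), (1,)], weights)
```

Restricting the search to combinations seen in the learning process is what keeps the complexity low despite the ambiguity of non-regular quantization.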

Optical Implementation of Perceptron Learning Model using the Polarization Property of Commercial LCTV (상용 LCTV의 편광 특성을 이용한 Perceptron 학습 모델의 광학적 구현)

  • 한종욱;용상순;김동훈;김성배;박일종;김은수
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.27 no.8
    • /
    • pp.1294-1302
    • /
    • 1990
  • In this paper, an optical implementation of a single-layer perceptron that discriminates even and odd numbers using a commercial LCTV spatial light modulator is described. To overcome the low dynamic range of gray levels of the LCTV, a nonlinear quantized perceptron model is introduced, which computer simulation shows to converge faster with a small number of gray levels. The analog weights of the single-layer perceptron, which contain both positive and negative values, are represented using a polarization-based encoding method. Finally, an optical implementation of the nonlinear quantized perceptron learning model based on the polarization property of the commercial LCTV is proposed, and some experimental results are given.

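A perceptron whose weights are restricted to a few quantized levels, mimicking the limited gray levels of the modulator, can be sketched as below; the update rule and the level set are generic assumptions, not the paper's specific nonlinear quantized model:

```python
import numpy as np

def quantized_perceptron(X, y, levels, epochs=50, lr=1.0):
    """Perceptron learning with weights snapped to a small set of levels
    after each update (analogous to limited gray levels of an SLM)."""
    levels = np.asarray(levels, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if w @ xi > 0 else -1
            if pred != yi:
                w = w + lr * yi * xi
                w = levels[np.argmin(np.abs(w[:, None] - levels), axis=1)]
    return w

# Linearly separable toy task, with a constant bias input appended.
X = np.array([[1, 1, 1], [2, 2, 1], [-1, -1, 1], [-2, -1, 1]], dtype=float)
y = np.array([1, 1, -1, -1])
w = quantized_perceptron(X, y, levels=np.linspace(-3, 3, 7))
preds = np.sign(X @ w)
```

Positive and negative quantized weights map naturally onto the polarization-based encoding the abstract mentions.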

Cardio-Angiographic Sequence Coding Using Neural Network Adaptive Vector Quantization (신경회로망 적응 VQ를 이용한 심장 조영상 부호화)

  • 주창희;최종수
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.40 no.4
    • /
    • pp.374-381
    • /
    • 1991
  • As diagnostic images in hospitals, digital images are used increasingly. Image coding is indispensable for storing and compressing an enormous amount of diagnostic images economically and effectively. In this paper, adaptive two-stage vector quantization based on Kohonen's neural network is presented for the compression of cardioangiography, a typical type of radiographic image sequence, and the performance of the coding scheme is compared and discussed. To exploit the known characteristics of the changes in cardioangiography, relatively large image blocks are quantized in the first stage; in the next stage, the blocks subdivided according to a quantization-error threshold are vector quantized using a neural network with frequency-sensitive competitive learning. This scheme is employed because the changes in cardioangiography are due to two types of motion, that of the heart itself and that of the body, and to the injected contrast dye. Computer simulation shows that good reproduction of images can be obtained at a bit rate of 0.78 bits/pixel.

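Frequency-sensitive competitive learning, the codebook-update rule named in the abstract, can be sketched as follows; the win-count scaling and learning rate are standard choices used here for illustration, not the paper's exact parameters:

```python
import numpy as np

def fscl_quantize(blocks, codebook, epochs=5, lr=0.1):
    """Frequency-sensitive competitive learning: the winner is chosen by
    distance scaled by its win count, so rarely used codewords still get
    selected and updated instead of going unused."""
    cb = codebook.astype(float).copy()
    wins = np.ones(len(cb))
    for _ in range(epochs):
        for b in blocks:
            k = int(np.argmin(np.linalg.norm(cb - b, axis=1) * wins))
            wins[k] += 1
            cb[k] += lr * (b - cb[k])   # move the winner toward the input
    return cb

def mean_error(cb, blocks):
    """Average distance from each block to its nearest codeword."""
    return float(np.mean([np.min(np.linalg.norm(cb - b, axis=1)) for b in blocks]))

# Toy data: image blocks drawn from two well-separated clusters.
rng = np.random.default_rng(1)
blocks = np.concatenate([rng.normal(0.0, 0.1, (50, 4)),
                         rng.normal(5.0, 0.1, (50, 4))])
cb0 = rng.normal(0.0, 1.0, (2, 4))
cb = fscl_quantize(blocks, cb0)
```

The frequency scaling prevents one codeword from winning every block, which is what keeps the whole codebook utilized when the input distribution is uneven.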

Learning Algorithm for Deterministic Boltzmann Machine with Quantized Connections (양자화결합을 갖는 결정론적 볼츠만 머신 학습 알고리듬)

  • 박철영
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.11a
    • /
    • pp.409-412
    • /
    • 2000
  • In this paper, we modify the conventional learning algorithm of the deterministic Boltzmann machine and propose an algorithm that can also be applied to Boltzmann machines with quantized connections. The proposed algorithm was applied to the 2-input XOR problem and the 3-input parity problem to analyze its performance. The results showed that even networks with heavily quantized weights can learn: increasing the number of hidden neurons allows the weights to be kept within a limited range of values. We also confirmed that controlling the number of weights m_s updated per iteration has the effect of controlling the learning rate.


Study on Quantized Learning for Machine Learning Equation in an Embedded System (임베디드 시스템에서의 양자화 기계학습을 위한 양자화 오차보상에 관한 연구)

  • Seok, Jinwuk;Kim, Jeong-Si
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.11a
    • /
    • pp.110-113
    • /
    • 2019
  • In this paper, we propose a method to effectively compensate for the quantization error that arises when performing quantized machine learning on an embedded system. In machine learning or nonlinear signal-processing algorithms that use gradients, quantization error causes early vanishing of the gradient and degrades overall algorithm performance. To compensate for this, we derive a compensation search vector orthogonal to the maximum component of the gradient, so that the performance loss due to quantization error is compensated. In addition, instead of the conventional fixed learning rate, we propose an adaptive learning-rate algorithm based on nonlinear optimization without an inner loop. Experimental results confirm that applying the proposed algorithm to nonlinear optimization problems minimizes the performance degradation caused by quantization error.


A study on the application of residual vector quantization for vector quantized-variational autoencoder-based foley sound generation model (벡터 양자화 변분 오토인코더 기반의 폴리 음향 생성 모델을 위한 잔여 벡터 양자화 적용 연구)

  • Seokjin Lee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.2
    • /
    • pp.243-252
    • /
    • 2024
  • Among the Foley sound generation models that have recently begun to be studied, sound generation techniques using the Vector Quantized-Variational AutoEncoder (VQ-VAE) structure and a generation model such as Pixelsnail are an important research subject. Meanwhile, in the field of deep-learning-based acoustic signal compression, residual vector quantization is reported to be more suitable than the conventional VQ-VAE structure. Therefore, in this paper, we study whether residual vector quantization can be effectively applied to Foley sound generation. To tackle the problem, we apply the residual vector quantization technique to a conventional VQ-VAE-based Foley sound generation model and, in particular, derive a model that is compatible with existing models such as Pixelsnail and does not increase computational resource consumption. To evaluate the model, an experiment was conducted using the DCASE2023 Task 7 data. The results show that the proposed model improves the Fréchet audio distance by about 0.3. The performance gain was limited, however, which is believed to be due to the reduced time-frequency resolution adopted to avoid increasing computational resource consumption.
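The residual vector quantization idea, quantizing a vector coarsely and then quantizing the leftover residual with further codebooks, can be sketched as follows; the codebooks here are hand-picked toys for illustration, not trained codebooks from the paper's model:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual VQ: quantize, subtract, and quantize the residual with the
    next codebook; return the chosen indices and the reconstruction."""
    residual = np.asarray(x, dtype=float).copy()
    recon = np.zeros_like(residual)
    idx = []
    for cb in codebooks:
        k = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        idx.append(k)
        recon += cb[k]
        residual -= cb[k]
    return idx, recon

# Two-stage example: a coarse codebook, then a finer one for the residual.
cb_coarse = np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [0, 0]], dtype=float)
cb_fine = 0.25 * cb_coarse
x = np.array([0.9, -0.35])
_, recon1 = rvq_encode(x, [cb_coarse])          # one stage
_, recon2 = rvq_encode(x, [cb_coarse, cb_fine]) # two stages
err1 = np.linalg.norm(x - recon1)
err2 = np.linalg.norm(x - recon2)
```

Each added stage refines the reconstruction without enlarging any single codebook, which is why the technique can be slotted into a VQ-VAE latent space without increasing downstream computation.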