• Title/Summary/Keyword: amount of learning

Search results: 1,008 items (processing time: 0.027 sec)

강화학습을 통한 유전자 알고리즘의 성능개선 (Performance Improvement of Genetic Algorithms by Reinforcement Learning)

  • 이상환;전효병;심귀보
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 1998년도 춘계학술대회 학술발표 논문집
    • /
    • pp.81-84
    • /
    • 1998
  • Genetic Algorithms (GAs) are stochastic algorithms whose search methods model some natural phenomena. The procedure of GAs may be divided into two sub-procedures: operation and selection. Chromosomes produce new offspring by means of operation, and the fitter chromosomes produce more offspring than the less fit ones by means of selection. However, operation, which is executed randomly and has some limits to its execution, cannot be guaranteed to produce fitter chromosomes. Thus, we propose a method that gives directional information to the genetic operator through reinforcement learning. This is achieved by using neural networks to apply reinforcement learning to the genetic operator. We use the amount of fitness change, which can be regarded as the reinforcement signal, to calculate the error terms for the output units. The weights are then updated using the backpropagation algorithm. The performance improvement of GAs using reinforcement learning is measured by applying the proposed method to a GA-hard problem.

  • PDF
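
The entry above describes guiding a genetic operator with a neural network whose error signal is the fitness change, updated by backpropagation. A minimal sketch of that idea follows, assuming a toy one-max problem, a small softmax network that picks the mutation position, and a REINFORCE-style weight update scaled by the fitness change; the network sizes, learning rate, and problem are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BITS, POP, HID, LR = 20, 30, 8, 0.05

def fitness(c):                                # toy one-max objective (illustrative)
    return int(c.sum())

# small network mapping a chromosome to a mutation-position distribution
W1 = rng.normal(0.0, 0.1, (N_BITS, HID))
W2 = rng.normal(0.0, 0.1, (HID, N_BITS))

def forward(x):
    h = np.tanh(x @ W1)
    z = h @ W2
    p = np.exp(z - z.max())
    return h, p / p.sum()

pop = rng.integers(0, 2, (POP, N_BITS))
for gen in range(50):
    for i in range(POP):
        x = pop[i].astype(float)
        h, p = forward(x)
        pos = rng.choice(N_BITS, p=p)          # operator guided by the network
        child = pop[i].copy()
        child[pos] ^= 1                        # directed mutation
        delta_f = fitness(child) - fitness(pop[i])   # fitness change = reinforcement signal
        grad_z = -p.copy()
        grad_z[pos] += 1.0                     # gradient of log p[pos] w.r.t. the logits
        dW2 = np.outer(h, grad_z)
        dW1 = np.outer(x, (W2 @ grad_z) * (1.0 - h ** 2))
        W2 += LR * delta_f * dW2               # backpropagation-style update,
        W1 += LR * delta_f * dW1               # scaled by the fitness change
        if delta_f >= 0:                       # selection keeps the fitter offspring
            pop[i] = child

print("best fitness:", max(fitness(c) for c in pop))
```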

Research on Covert Communication Technology Based on Matrix Decomposition of Digital Currency Transaction Amount

  • Lejun Zhang;Bo Zhang;Ran Guo;Zhujun Wang;Guopeng Wang;Jing Qiu;Shen Su;Yuan Liu;Guangxia Xu;Zhihong Tian;Sergey Gataullin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 4
    • /
    • pp.1020-1041
    • /
    • 2024
  • With the development of covert communication technologies, the number of covert communication technologies using blockchain as a carrier is increasing. However, using the transaction amount of digital currency as a carrier for covert communication has problems such as low embedding rate, large consumption of transaction amount, and easy detection. In this paper, firstly, by experimentally analyzing the distribution of bitcoin transaction amounts, we determine the most suitable range of amounts for matrix decomposition. Secondly, we design a novel matrix decomposition method that can successfully decompose a large amount matrix into two small amount matrices and utilize the elements in the small amount matrices for covert communication. Finally, we analyze the feasibility of the novel matrix decomposition method in this scheme in detail from four aspects, and verify it by experimental comparison, which proves that our scheme not only improves the embedding rate and reduces the consumption of transaction amount, but also has a certain degree of resistance to detection.
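
The abstract above describes decomposing a large amount matrix into two smaller amount matrices whose elements carry the hidden message. The exact decomposition is not given here, so the following is only a toy stand-in, not the paper's method: an elementwise additive split in which the parity of one factor encodes a bit; the amounts, parity coding, and matrix shape are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(A, bits):
    """Elementwise split A = B + C, with each element of B forced to the
    parity of one hidden bit. A toy stand-in for the paper's decomposition."""
    B = np.empty_like(A)
    for idx, (a, b) in enumerate(zip(A.flat, bits)):
        part = int(rng.integers(2, a - 1))     # random, strictly smaller amount
        if part % 2 != b:
            part += 1                          # force the parity to match the bit
        B.flat[idx] = part
    return B, A - B

def extract(B):
    return [int(v) % 2 for v in B.flat]        # receiver reads the parities

A = np.array([[120_000, 95_000], [300_500, 74_321]])   # amounts in satoshi (illustrative)
bits = [1, 0, 1, 1]
B, C = embed(A, bits)
assert np.array_equal(B + C, A) and extract(B) == bits
```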

A Hybrid Selection Method of Helpful Unlabeled Data Applicable for Semi-Supervised Learning Algorithm

  • Le, Thanh-Binh;Kim, Sang-Woon
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 3, No. 4
    • /
    • pp.234-239
    • /
    • 2014
  • This paper presents an empirical study on selecting a small amount of useful unlabeled data to improve the classification accuracy of semi-supervised learning algorithms. In particular, a hybrid method of unifying the simply recycled selection method and the incrementally-reinforced selection method was considered and evaluated empirically. The experimental results, which were obtained from well-known benchmark data sets using semi-supervised support vector machines, demonstrated that the hybrid method works better than the traditional ones in terms of the classification accuracy.
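
The paper above evaluates selecting a small amount of helpful unlabeled data for semi-supervised SVMs. The sketch below is a generic confidence-based self-training loop, not the paper's hybrid of simply recycled and incrementally reinforced selection; the dataset, margin threshold, and number of rounds are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy data: a few labelled points, many unlabelled ones (illustrative).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)
labelled = rng.choice(len(X), size=30, replace=False)
mask = np.zeros(len(X), bool)
mask[labelled] = True
X_l, y_l, X_u = X[mask], y[mask], X[~mask]

clf = SVC(kernel="rbf", gamma="scale")
for _ in range(5):                        # simple self-training rounds
    clf.fit(X_l, y_l)
    margin = np.abs(clf.decision_function(X_u))
    helpful = margin > 1.0                # keep only confidently scored unlabeled samples
    if not helpful.any():
        break
    X_l = np.vstack([X_l, X_u[helpful]])
    y_l = np.concatenate([y_l, clf.predict(X_u[helpful])])
    X_u = X_u[~helpful]

print("final training-set size:", len(X_l))
```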

가지치기 기반 경량 딥러닝 모델을 활용한 해상객체 이미지 분류에 관한 연구 (A Study on Maritime Object Image Classification Using a Pruning-Based Lightweight Deep-Learning Model)

  • 한영훈;이춘주;강재구
    • 한국군사과학기술학회지
    • /
    • Vol. 27, No. 3
    • /
    • pp.346-354
    • /
    • 2024
  • Deep-learning models require high computing power because of their substantial amount of computation, which makes them difficult to use in devices with limited computing resources, such as coastal surveillance equipment. In this study, a lightweight model is constructed based on MobileNet by analyzing the weight changes of the convolutional layers during training and then pruning the layers that affect the model less. The performance comparison shows that the lightweight model maintains accuracy while reducing computational load, parameter count, model size, and data processing time. This study thus presents an effective pruning method for constructing lightweight deep-learning models and shows that such models make it possible to use equipment resources efficiently in limited computing environments such as coastal surveillance equipment.
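
The study above builds a lightweight model by tracking how much each convolutional layer's weights change during training and pruning the layers that change least. The sketch below shows that general idea with torchvision's MobileNetV2, a single dummy optimization step standing in for the real training loop, and PyTorch's pruning utility zeroing the least-changed layers; the pruning ratio and training step are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn as nn
from torch.nn.utils import prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2()                         # MobileNetV2 backbone (illustrative)
before = {n: p.detach().clone() for n, p in model.named_parameters()}

# An ordinary training loop would run here; one dummy step keeps this runnable.
x = torch.randn(4, 3, 224, 224)
loss = model(x).sum()
loss.backward()
with torch.no_grad():
    for p in model.parameters():
        p -= 1e-3 * p.grad

# Rank conv layers by how much their weights moved during training.
change = {}
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        change[name] = (module.weight.detach() - before[f"{name}.weight"]).norm().item()

least_changed = sorted(change, key=change.get)[: len(change) // 4]   # illustrative ratio
for name, module in model.named_modules():
    if name in least_changed:                  # prune (zero out) the low-impact layers
        prune.l1_unstructured(module, name="weight", amount=1.0)
```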

Bi-LSTM 기반 물품 소요량 예측을 통한 최적의 적재 위치 선정 (Selecting the Optimal Loading Location through Prediction of Required Amount for Goods based on Bi-LSTM)

  • 장세인;김여진;김근태;이종환
    • 반도체디스플레이기술학회지
    • /
    • Vol. 22, No. 3
    • /
    • pp.41-45
    • /
    • 2023
  • Currently, when items are loaded in a warehouse, the worker decides the loading location directly, and the most common practice is to load the product at the location closest to the entrance. This can be effective when there is no difference in the required amount for goods, but when such differences exist it is inefficient, because items with a small required amount are loaded near the entrance and occupy that space for a long time. Therefore, to minimize the release time of goods, it is essential to select an appropriate location when loading them. In this study, a method for determining the loading location by predicting the required amount of goods was studied in order to select the optimal loading location. A deep-learning-based bidirectional long short-term memory network (Bi-LSTM) was used to predict the required amount for goods. This study compares and analyzes the release time of goods under the conventional method of loading close to the entrance and under the loading method that uses the required amount predicted by the Bi-LSTM model.

  • PDF
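
The entry above predicts the required amount for goods with a Bi-LSTM and uses the prediction to choose loading locations. Below is a minimal Keras sketch, assuming synthetic 30-day demand windows, a single Bi-LSTM layer, and a rule that loads high-demand items nearest the entrance; the data, window length, and layer sizes are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

# Toy demand history per item: (samples, timesteps, features). Illustrative data.
rng = np.random.default_rng(0)
X = rng.random((200, 30, 1)).astype("float32")   # last 30 days of daily demand
y = X.sum(axis=1)                                # synthetic target: total required amount

model = models.Sequential([
    layers.Input(shape=(30, 1)),
    layers.Bidirectional(layers.LSTM(32)),       # Bi-LSTM encoder
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                             # predicted required amount
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Items predicted to move fastest are loaded nearest the warehouse entrance.
pred = model.predict(X[:10], verbose=0).ravel()
loading_order = np.argsort(-pred)                # high demand -> closest slot
print(loading_order)
```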

최적의 퍼지제어규칙을 얻기위한 퍼지학습법 (A Learning Algorithm for Optimal Fuzzy Control Rules)

  • 정병묵
    • 대한기계학회논문집A
    • /
    • Vol. 20, No. 2
    • /
    • pp.399-407
    • /
    • 1996
  • A fuzzy learning algorithm to obtain the optimal fuzzy rules is presented in this paper. The algorithm introduces a reference model to generate a desired output and a performance index function instead of the performance index table. The performance index function is a cost function based on the error and error rate between the reference and plant outputs. The cost function is minimized by a gradient method, and the control input is also updated. In this case, the control rules that generate the desired response can be obtained by changing the portion of the error rate in the cost function. In a SISO (Single-Input Single-Output) plant, it is possible to express the plant model and obtain the desired control rules using only the learning delay. In the long run, this algorithm yields good control rules with a minimal amount of prior information about the environment.
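
The abstract above minimizes a cost built from the error and error rate between a reference model and the plant output by a gradient method that updates the control input. A minimal numerical sketch follows, assuming a first-order toy plant, a constant reference, and a known sign of the plant gain; the gains and plant model are illustrative assumptions, not the paper's fuzzy rule representation.

```python
# First-order toy plant y[k+1] = 0.9*y[k] + 0.1*u[k]; a constant reference stands
# in for the reference model output. Gains and the plant are illustrative.
alpha, beta, eta, dt = 1.0, 0.2, 0.5, 1.0
y, u, e_prev = 0.0, 0.0, 0.0
y_ref = 1.0                                   # desired output from the reference model

for k in range(100):
    e = y_ref - y                             # error
    e_dot = (e - e_prev) / dt                 # error rate
    # cost J = 0.5*(alpha*e^2 + beta*e_dot^2); gradient step on u,
    # assuming dy/du > 0 so -dJ/du has the sign of (alpha*e + beta*e_dot)
    u += eta * (alpha * e + beta * e_dot)
    y = 0.9 * y + 0.1 * u                     # plant response
    e_prev = e

print(f"final output {y:.3f} (reference {y_ref})")
```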

인공 신경망 회귀 모델을 활용한 인버터 기반 태양광 발전량 예측 알고리즘 (Inverter-Based Solar Power Prediction Algorithm Using Artificial Neural Network Regression Model)

  • 박건하;임수창;김종찬
    • 한국전자통신학회논문지
    • /
    • Vol. 19, No. 2
    • /
    • pp.383-388
    • /
    • 2024
  • This paper studies how to derive solar power generation forecasts based on photovoltaic data measured in Jeollanam-do. To measure the generation, multivariate variables such as DC, AC, and environmental data were collected from the inverter, and preprocessing was performed to secure the stability and reliability of the measurements. For the correlation analysis, the partial autocorrelation function (PACF) was used so that only the time-series data highly correlated with the generation amount were used for prediction. A deep-learning model was used to predict the solar power output, and the correlation analysis results for each multivariate variable were used to improve the prediction accuracy. Training with the refined data was more stable than using the raw data as-is, and the solar power prediction algorithm was improved by using only the highly correlated variables among the multivariate variables, reflecting the correlation analysis results.
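
The study above keeps only the variables most correlated with generation (via the PACF) before training a neural-network regressor. The sketch below applies the same two steps to a synthetic single series: PACF-based lag selection with statsmodels, then an MLP regressor from scikit-learn; the series, threshold, and network size are illustrative assumptions rather than the paper's multivariate setup.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
power = np.sin(np.linspace(0, 60, 1000)) + 0.1 * rng.normal(size=1000)   # toy PV output

# Keep only lags whose partial autocorrelation is strong (illustrative threshold).
scores = pacf(power, nlags=24)
lags = [l for l in range(1, 25) if abs(scores[l]) > 0.2]

X = np.column_stack([power[24 - l:-l] for l in lags])   # lagged features
y = power[24:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X[:-100], y[:-100])
print("held-out R^2:", round(model.score(X[-100:], y[-100:]), 3))
```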

센서 네트워크에서 기계학습을 사용한 잔류 전력 추정 방안 (A Residual Power Estimation Scheme Using Machine Learning in Wireless Sensor Networks)

  • 배시규
    • 한국멀티미디어학회논문지
    • /
    • Vol. 24, No. 1
    • /
    • pp.67-74
    • /
    • 2021
  • As IoT (Internet of Things) devices such as smart sensors have constrained power sources, a power strategy is critical in WSNs (Wireless Sensor Networks). It is therefore necessary to know the residual power of each sensor node in order to manage power strategies in a WSN, which, however, requires additional data transmission and leads to more power consumption. In this paper, a residual power estimation method is proposed that consumes only a negligibly small amount of power in resource-constrained wireless networks, including WSNs. With this proposal, the residual power can be predicted with the least data transmission by using a machine learning method trained on a small amount of training data. The performance of the proposed scheme was evaluated by machine learning, simulation, and analysis.
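
The paper above estimates residual node power with machine learning so that nodes rarely need to spend energy reporting their battery level. The sketch below is a toy stand-in: a linear regression that maps features the sink already observes (packets sent, uptime, sleep ratio) to remaining power; the synthetic data and feature set are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic node logs: packets sent, uptime hours, sleep ratio -> residual power (%).
n = 500
packets = rng.integers(0, 2000, n)
hours = rng.uniform(0, 720, n)
sleep = rng.uniform(0.2, 0.9, n)
residual = 100 - 0.02 * packets - 0.05 * hours * (1 - sleep) + rng.normal(0, 1, n)

X = np.column_stack([packets, hours, sleep])
model = LinearRegression().fit(X[:400], residual[:400])

# The sink estimates each node's remaining power from data it already has,
# so nodes do not need to transmit their battery level every round.
print("held-out R^2:", round(model.score(X[400:], residual[400:]), 3))
```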

딥네트워크 기반 음성 감정인식 기술 동향 (Speech Emotion Recognition Based on Deep Networks: A Review)

  • 무스타킴;권순일
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2021년도 춘계학술발표대회
    • /
    • pp.331-334
    • /
    • 2021
  • In recent years, a significant amount of development and research has been done on the use of Deep Learning (DL) for speech emotion recognition (SER) based on Convolutional Neural Networks (CNNs). These techniques usually focus on utilizing CNNs for applications associated with emotion recognition. Moreover, numerous deep-learning-based mechanisms have been proposed, which are important in SER-based human-computer interaction (HCI) applications. Compared with other methods, DL-based methods are producing quite promising results in many fields, including automatic speech recognition, and therefore attract many studies and investigations. In this article, a review with evaluations is presented of the improvements that have taken place in the SER domain, along with a discussion of the existing studies on SER based on DL and CNN methods.

Sentiment Analysis to Classify Scams in Crowdfunding

  • Shafqat, Wafa;Byun, Yung-cheol
    • Soft Computing and Machine Intelligence
    • /
    • Vol. 1, No. 1
    • /
    • pp.24-30
    • /
    • 2021
  • The accelerated growth of the internet and the enormous amount of available data have become the primary reasons for applying machine learning to data analysis and, more specifically, to pattern recognition and decision making. In this paper, we focus on the crowdfunding site Kickstarter and collect its comments in order to apply neural networks to classify projects based on the sentiments of backers. The power of customer reviews and sentiment analysis motivated us to apply this technique in crowdfunding to find timely indications, identify suspicious activities, and mitigate the risk of losing money.
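
The paper above classifies Kickstarter projects by the sentiment of backer comments. The sketch below shows the general pipeline with a TF-IDF vectorizer and a small neural-network classifier, then aggregates per-project sentiment to flag a possible scam; the hand-made comments, threshold, and model choice are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

# Tiny hand-made comment set; real work would use scraped Kickstarter comments.
comments = [
    "got my reward quickly, great project", "creator is responsive and honest",
    "no updates for a year, looks like a scam", "took the money and disappeared",
    "love the product, shipped on time", "refund never arrived, avoid this one",
]
labels = [1, 1, 0, 0, 1, 0]              # 1 = positive backer sentiment, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(comments)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, labels)

# Aggregate sentiment per project; a low share of positive comments flags a possible scam.
project_comments = ["no shipping info, feels like fraud", "still waiting, no reply from creator"]
share_positive = np.mean(clf.predict(vec.transform(project_comments)))
print("possible scam" if share_positive < 0.5 else "looks legitimate")
```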