• Title/Summary/Keyword: Rate of Learning


Stock Trading Model using Portfolio Optimization and Forecasting Stock Price Movement (포트폴리오 최적화와 주가예측을 이용한 투자 모형)

  • Park, Kanghee;Shin, Hyunjung
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.39 no.6
    • /
    • pp.535-545
    • /
    • 2013
  • The goal of stock investment is to earn a high rate of return with stability. To accomplish this, a portfolio that allocates to stocks with a high rate of return and low variability, together with a highly accurate stock price prediction model, is required. In this paper, three methods are suggested to satisfy these conditions. First, for portfolio rebalancing, a Max-Return and Min-Risk (MRMR) model is suggested to earn the largest rate of return with stability. Second, an Entering/Leaving (E/L) rule is suggested to upgrade the portfolio when a particular stock's rate of return is low. Finally, for an outstanding stock price prediction model, a model based on Semi-Supervised Learning (SSL) proposed in previous research was applied. The suggested methods were validated on stocks listed in the KOSPI200 from January 2007 to August 2008 (a minimal rebalancing sketch follows below).
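A minimal sketch of the return-versus-risk trade-off behind rebalancing toward maximum return and minimum risk. This is a generic mean-variance formulation with synthetic returns and an assumed risk-aversion weight, not the paper's exact MRMR model or E/L rule:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative daily returns for four hypothetical stocks (rows: days, columns: stocks).
np.random.seed(0)
returns = np.random.normal(0.0005, 0.01, size=(250, 4))

mu = returns.mean(axis=0)               # expected return of each stock
sigma = np.cov(returns, rowvar=False)   # covariance matrix (risk)
risk_aversion = 5.0                     # assumed return/risk trade-off weight

def objective(w):
    # Maximize return minus a risk penalty, i.e. minimize the negative.
    return -(w @ mu - risk_aversion * w @ sigma @ w)

n = len(mu)
constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, 1.0)] * n                                       # long-only weights

result = minimize(objective, x0=np.full(n, 1.0 / n),
                  bounds=bounds, constraints=constraints)
print("rebalanced weights:", result.x.round(3))
```

A larger `risk_aversion` value shifts the weights toward low-variance stocks; a smaller value chases raw expected return.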

A Study on the Intention of Continuous use of MOOC Applying Self-Determination Theory and Learning Flow Theory : Focused on Differences between Korean and Chinese Culture (자기결정성이론과 학습몰입이론을 적용한 MOOC 지속사용의도에 관한 연구 : 한·중 문화차이 분석)

  • Jin, Qiuxiang;Chi, Yong Duk;Gim, Gwangyong
    • Journal of Information Technology Services
    • /
    • v.17 no.1
    • /
    • pp.121-134
    • /
    • 2018
  • A Massive Open Online Course (MOOC) is online education that anyone with internet access can register for free of charge. MOOCs are also called an education revolution and are spreading rapidly all over the world. Although recent high-quality MOOC classes may enhance the value of MOOCs, MOOC learning still needs much research. Although MOOCs suffer from low completion and continuous-use rates, studies on the reasons learners give up at the beginning of learning have rarely been attempted. This research studies the continuance intention of MOOC use by applying self-determination theory and learning flow theory based on the technology acceptance model. In particular, the research examines cultural differences between Korean and Chinese learners in continuous MOOC usage. The results show that self-determination theory, applied through perceived autonomy, perceived competence, and perceived relatedness, together with learning flow, is useful for explaining continuous use of MOOCs. The research also shows that Hofstede's theory works well in explaining the cultural difference between Korea and China in continuous MOOC usage. Korean learners are more influenced by external motivation, such as perceived usefulness, whereas Chinese learners are more influenced by internal motivation, such as learning flow, in their continuous use of MOOCs.

Socio-economic Indicators Based Relative Comparison Methodology of National Occupational Accident Fatality Rates Using Machine Learning (머신러닝을 활용한 사회 · 경제지표 기반 산재 사고사망률 상대비교 방법론)

  • Kim, Kyunghun;Lee, Sudong
    • Journal of the Korea Safety Management & Science
    • /
    • v.24 no.4
    • /
    • pp.41-47
    • /
    • 2022
  • A reliable prediction model of the national occupational accident fatality rate can be used to evaluate the level of safety and health protection for workers in a country. Moreover, the socio-economic aspects of occupational accidents can be identified through interpretation of a well-organized prediction model. In this paper, we propose a machine learning-based relative comparison methodology to predict and interpret a national occupational accident fatality rate based on socio-economic indicators. First, we collected 29 years of the relevant data from 11 developed countries. Second, we applied four types of machine learning regression models and evaluated their performance. Third, we interpreted the contribution of each input variable using Shapley Additive Explanations (SHAP). As a result, the Gradient Boosting Regressor showed the best predictive performance. We found that different patterns exist across countries in accordance with different socio-economic variables and the occupational accident fatality rate (a minimal sketch of this pipeline follows below).
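A minimal sketch of the described pipeline, gradient boosting regression followed by SHAP attribution, using made-up indicator names and synthetic data in place of the paper's socio-economic variables:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical socio-economic indicators; the paper's actual variables differ.
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "gdp_per_capita": rng.normal(40000, 8000, 300),
    "manufacturing_share": rng.uniform(10, 35, 300),
    "union_density": rng.uniform(5, 70, 300),
})
y = 5.0 - 0.00005 * X["gdp_per_capita"] + 0.05 * X["manufacturing_share"] + rng.normal(0, 0.3, 300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the regressor and check predictive performance on held-out data.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

# SHAP values show how each indicator pushes the predicted fatality rate up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```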

The structural relationships among adolescents' mobile phone dependency, trajectories of depression, and self-regulated learning abilities (청소년의 휴대전화의존도, 우울의 변화 궤적 및 자기조절학습 능력 간의 구조적 관계)

  • Hong, Yea-Ji
    • Human Ecology Research
    • /
    • v.59 no.3
    • /
    • pp.341-351
    • /
    • 2021
  • The purpose of this study was to examine the longitudinal relationships between Korean adolescents' mobile phone dependency, trajectories of depression, and self-regulated learning abilities. To achieve these goals, structural equation modeling analysis was conducted using the 3rd, 5th, and 7th waves of data on 4th graders taken from the Korean Children and Youth Panel Survey. The results can be summarized as follows. First, growth-curve longitudinal analysis indicates that depression increased from 6th through 10th grade. Second, mobile phone dependency among adolescents in 6th grade has a significant effect on both the initial value and the rate of change in depression. Also, the initial value and the rate of change in depression have significant relationships with mobile phone dependency in 10th grade. Moreover, both increased levels of mobile phone dependency and the rate of change in depression significantly influence adolescents' self-regulated learning abilities in 10th grade. Based on a longitudinal data set, these findings demonstrate the causal relationships between Korean adolescents' trajectories of depression and their mobile phone dependency. The findings also provide a comprehensive framework with implications for adolescents' development through an understanding of the relationships between adolescents' depression and mobile phone dependency, which impact their self-regulated learning abilities.

Improvement of the Gonu game using progressive deepening in reinforcement learning (강화학습에서 점진적인 심화를 이용한 고누게임의 개선)

  • Shin, YongWoo
    • Journal of Korea Game Society
    • /
    • v.20 no.6
    • /
    • pp.23-30
    • /
    • 2020
  • A game has many possible cases, so a game-playing agent has to learn a great deal. This paper uses reinforcement learning to improve the learning speed. However, because reinforcement learning must cover many cases, it is slow early in learning, so the learning speed was improved by using the minimax algorithm. To compare the improved performance, a Gonu game was produced and tested. In the experimental results, the win rate was high, but ties still occurred. The game tree was explored further using progressive deepening to reduce tie cases, and the win rate improved by about 75% (a minimal progressive-deepening sketch follows below).
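A minimal sketch of progressive (iterative) deepening layered on a plain minimax search. The `game` interface (`legal_moves`, `apply`, `evaluate`, `is_terminal`) is an assumed stand-in for the Gonu-specific rules, which are not reproduced here:

```python
import math
import time

def minimax(state, depth, maximizing, game):
    """Plain depth-limited minimax over the assumed `game` interface."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    values = (minimax(game.apply(state, m), depth - 1, not maximizing, game)
              for m in game.legal_moves(state))
    return max(values) if maximizing else min(values)

def progressive_deepening(state, game, time_limit=1.0, max_depth=16):
    """Search depth 1, 2, 3, ... until the time budget runs out, keeping the
    best move found at the deepest fully completed level."""
    best_move, deadline = None, time.time() + time_limit
    for depth in range(1, max_depth + 1):
        if time.time() >= deadline:
            break
        best_value = -math.inf
        for move in game.legal_moves(state):
            value = minimax(game.apply(state, move), depth - 1, False, game)
            if value > best_value:
                best_value, best_move = value, move
    return best_move
```

Because shallower searches finish first, the agent always has a usable move even when the deeper search is cut off, which is what lets deeper exploration reduce tie cases without slowing early play.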

Tidy-up Task Planner based on Q-learning (정리정돈을 위한 Q-learning 기반의 작업계획기)

  • Yang, Min-Gyu;Ahn, Kuk-Hyun;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.1
    • /
    • pp.56-63
    • /
    • 2021
  • As the use of robots in service areas increases, research has been conducted to replace human tasks in daily life with robots. Among them, this study focuses on the tidy-up task on a desk using a robot arm. The order in which tidy-up motions are carried out has a great impact on the success rate of the task. Therefore, in this study, a neural network-based method for determining the priority of the tidy-up motions from the input image is proposed. Reinforcement learning, which shows good performance in sequential decision-making processes, is used to train such a task planner. The training process is conducted in a virtual tidy-up environment that is configured the same as the actual tidy-up environment. To transfer the learning results from the virtual environment to the actual environment, the input image is preprocessed into a segmented image. In addition, the use of a neural network that excludes unnecessary tidy-up motions from the priority during the tidy-up operation increases the success rate of the task planner. Experiments were conducted in the real world to verify the proposed task planning method (a minimal motion-ordering sketch follows below).
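A minimal tabular Q-learning sketch of how a motion order can be learned when the wrong order makes the task fail. The objects, the precedence constraint, and the rewards are illustrative assumptions; the paper itself trains a neural-network planner on segmented images rather than a lookup table:

```python
import random
from collections import defaultdict

# Hypothetical tidy-up scene: the book must be shelved before the cup can be
# placed on it (an assumed precedence constraint, not the paper's scene).
OBJECTS = ("book", "cup", "pen")
PRECEDENCE = {("cup", "book")}  # (later_object, must_come_first)

def step(done, action):
    """Tidy `action` given already-tidied objects; return (done, reward, failed)."""
    for later, earlier in PRECEDENCE:
        if action == later and earlier not in done:
            return done, -1.0, True                      # wrong order -> task failure
    done = done | {action}
    reward = 1.0 if len(done) == len(OBJECTS) else 0.1   # small shaping reward per motion
    return done, reward, False

Q = defaultdict(float)
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):
    done = frozenset()
    while len(done) < len(OBJECTS):
        remaining = [o for o in OBJECTS if o not in done]
        action = (random.choice(remaining) if random.random() < epsilon
                  else max(remaining, key=lambda a: Q[(done, a)]))
        next_done, reward, failed = step(done, action)
        if failed or len(next_done) == len(OBJECTS):
            future = 0.0
        else:
            future = max(Q[(next_done, a)] for a in OBJECTS if a not in next_done)
        Q[(done, action)] += alpha * (reward + gamma * future - Q[(done, action)])
        if failed:
            break
        done = next_done

# The value of tidying the cup first (violating the assumed precedence) ends up negative.
print({a: round(Q[(frozenset(), a)], 2) for a in OBJECTS})
```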

Accelerating Levenberg-Marquardt Algorithm using Variable Damping Parameter (가변 감쇠 파라미터를 이용한 Levenberg-Marquardt 알고리즘의 학습 속도 향상)

  • Kwak, Young-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.4
    • /
    • pp.57-63
    • /
    • 2010
  • The damping parameter of the Levenberg-Marquardt algorithm switches the update between error backpropagation (gradient descent) and Gauss-Newton learning and thus affects learning speed. Fixing the damping parameter induces oscillation of the error and decreases learning speed. Therefore, we propose a variable damping parameter that refers to the change of the error. The proposed method increases the damping parameter if the error is large and decreases it if the error is small. This method plays a role similar to momentum, so it can improve learning speed. We tested both iris recognition and wine recognition for this paper. We found that this method improved learning speed in 67% of cases on iris recognition and in 78% of cases on wine recognition. The oscillation of the error with the proposed method was also smaller than that of other algorithms (a minimal adaptive-damping sketch follows below).
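A minimal sketch of a Levenberg-Marquardt step whose damping parameter adapts to the change of the error (raised when the error grows, lowered when it shrinks). The specific update schedule, the factor of 10, and the curve-fitting example are assumptions, not the paper's exact rule:

```python
import numpy as np

def levenberg_marquardt(residual_fn, jacobian_fn, w, mu=1e-2, factor=10.0, iters=100):
    """LM sketch: large mu behaves like gradient descent, small mu like Gauss-Newton;
    mu is adjusted each iteration according to whether the error shrank or grew."""
    error = 0.5 * np.sum(residual_fn(w) ** 2)
    for _ in range(iters):
        r, J = residual_fn(w), jacobian_fn(w)
        step = np.linalg.solve(J.T @ J + mu * np.eye(len(w)), J.T @ r)
        w_new = w - step
        new_error = 0.5 * np.sum(residual_fn(w_new) ** 2)
        if new_error < error:
            w, error, mu = w_new, new_error, mu / factor  # error shrank: trust the model more
        else:
            mu *= factor                                  # error grew: damp more heavily
    return w

# Tiny usage example: fit y = a * exp(b * x) to noisy data.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x) + np.random.normal(0.0, 0.05, x.size)
residual = lambda w: w[0] * np.exp(w[1] * x) - y
jacobian = lambda w: np.column_stack([np.exp(w[1] * x), w[0] * x * np.exp(w[1] * x)])
print(levenberg_marquardt(residual, jacobian, np.array([1.0, 1.0])))
```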

An Implementation of Embedded Linux System for Embossed Digit Recognition using CNN based Deep Learning (CNN 기반 딥러닝을 이용한 임베디드 리눅스 양각 문자 인식 시스템 구현)

  • Yu, Yeon-Seung;Kim, Cheong Ghil;Hong, Chung-Pyo
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.2
    • /
    • pp.100-104
    • /
    • 2020
  • Over the past several years, deep learning has been widely used for feature extraction in images and video for various applications such as object classification and facial recognition. This paper introduces an implementation of an embedded Linux system for embossed digit recognition using CNN-based deep learning methods. For this purpose, we implemented a coin recognition system based on deep learning with the Keras open-source library on a Raspberry Pi. The performance evaluation was made with the success rate of coin classification using images captured with an ultra-wide-angle camera on the Raspberry Pi. The simulation results show an average success rate of 98% (a minimal CNN sketch follows below).
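A minimal Keras CNN of the kind described, runnable on a Raspberry Pi with TensorFlow installed. The layer sizes, input shape, and class count are assumptions, since the paper's actual architecture is not listed here:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10           # assumed: one class per embossed digit
INPUT_SHAPE = (64, 64, 1)  # assumed grayscale crop size

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=INPUT_SHAPE),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# With labeled digit crops available:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```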

The Improvement of Convergence Rate in n-Queen Problem Using Reinforcement learning (강화학습을 이용한 n-Queen 문제의 수렴속도 향상)

  • Lim, SooYeon;Son, KiJun;Park, SeongBae;Lee, SangJo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.1
    • /
    • pp.1-5
    • /
    • 2005
  • The purpose of reinforcement learning is to maximize rewards from the environment, and reinforcement learning agents learn by interacting with the external environment through trial and error. Q-Learning, a representative reinforcement learning algorithm, is a type of TD-learning that exploits the difference in value estimates over time during learning. The method obtains the optimal policy through repeated evaluation of all state-action pairs in the state space. This study chose the n-Queen problem as an example to which reinforcement learning is applied and used Q-Learning as the problem-solving algorithm. This study compared the proposed method using reinforcement learning with existing methods for solving the n-Queen problem and found that the proposed method improves the convergence rate to the optimal solution by reducing the number of state transitions needed to reach the goal (a minimal Q-Learning sketch follows below).
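A minimal tabular Q-Learning sketch for placing queens row by row: conflicting placements are penalized and end the episode, so the learned values steer later episodes toward conflict-free columns with fewer state transitions. The board size, rewards, and hyperparameters are illustrative assumptions, not the paper's settings:

```python
import random
from collections import defaultdict

N = 6  # illustrative board size; the paper's n may differ

def conflicts(queens, col):
    """Count earlier queens attacking a queen placed in the next row at `col`."""
    row = len(queens)
    return sum(c == col or abs(c - col) == row - r for r, c in enumerate(queens))

Q = defaultdict(float)
alpha, gamma, epsilon = 0.3, 0.95, 0.1
solved_lengths = []

for _ in range(5000):
    queens, transitions = (), 0
    while len(queens) < N:
        col = (random.randrange(N) if random.random() < epsilon
               else max(range(N), key=lambda c: Q[(queens, c)]))
        transitions += 1
        if conflicts(queens, col):
            # Conflicting placement: penalize this state-action pair, end the episode.
            Q[(queens, col)] += alpha * (-1.0 - Q[(queens, col)])
            break
        next_state = queens + (col,)
        reward = 1.0 if len(next_state) == N else 0.0
        future = 0.0 if len(next_state) == N else max(Q[(next_state, c)] for c in range(N))
        Q[(queens, col)] += alpha * (reward + gamma * future - Q[(queens, col)])
        queens = next_state
    if len(queens) == N:
        solved_lengths.append(transitions)

print("episodes that reached a full solution:", len(solved_lengths))
```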

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology
    • /
    • v.30 no.2
    • /
    • pp.76-81
    • /
    • 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without further learning processes for the various light intensity ranges. If the proposed machine vision system fails to recognize object features, the system operates in a multiple-exposure sensing mode and detects the target object that is obscured in near-dark or overly bright regions. Furthermore, short- and long-exposure images from the multiple-exposure sensing mode are synthesized to obtain accurate object feature information, which results in image information with a wide dynamic range. Even though the object recognition resources for the deep learning process covered a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition performance over a light intensity range of up to 96 dB (a minimal exposure-fusion sketch follows below).
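A minimal sketch of synthesizing short- and long-exposure frames into one wide-dynamic-range image, here using OpenCV's Mertens exposure fusion as a stand-in for the paper's synthesis step; the synthetic gradient scene replaces real captures from the multiple-exposure sensor:

```python
import cv2
import numpy as np

# Synthetic stand-ins for the short- and long-exposure captures.
scene = np.tile(np.linspace(0.0, 4.0, 256, dtype=np.float32), (256, 1))  # wide-range scene
short_exp = np.clip(scene * 64, 0, 255).astype(np.uint8)    # preserves bright-region detail
long_exp = np.clip(scene * 255, 0, 255).astype(np.uint8)    # preserves dark-region detail

# Fuse the two exposures into a single frame covering both ends of the range.
merger = cv2.createMergeMertens()
fused = merger.process([cv2.cvtColor(short_exp, cv2.COLOR_GRAY2BGR),
                        cv2.cvtColor(long_exp, cv2.COLOR_GRAY2BGR)])

# `fused` is a float image roughly in [0, 1]; rescale before saving or feeding it
# to the recognition network in place of a single-exposure frame.
cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```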