• Title/Summary/Keyword: Self Learning Network


A Personal Agent for Combining the Home Appliance Services and Its Learning Mechanism

  • Takeda, Yuji;Sakamaki, Kazumi;Ootsu, Kanemitsu;Yokota, Takashi;Baba, Takanobu
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.74-77 / 2003
  • In this paper, we propose a new personal agent that generates combinational services from the usage history of appliances in a home network environment. In such an environment, flexible services must be provided by combining the services of individual appliances, and unskilled users should be able to use these services without specialized knowledge. The agent therefore needs to satisfy two requirements: (1) combinational services can be suggested automatically, and (2) newly added services can be accommodated. We propose a personal agent that suggests combinational services by learning the user's lifestyle. Its learning mechanism is based on the Self-Organizing Map (SOM) and can adapt as services are added. We implemented the agent and trained it on two weeks of a user's usage history. As a result, we confirmed that the agent can extract services related to time or location and can suggest combinational services. (A minimal SOM update sketch follows this entry.)

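The abstract above describes a learning mechanism based on a Self-Organizing Map trained on appliance usage history. Below is a minimal, generic SOM update sketch in Python; the feature encoding (hour of day, location, appliance id) and all parameter values are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Minimal Self-Organizing Map sketch (illustrative, not the paper's implementation).
# Each input vector is an assumed encoding of one usage event, e.g.
# [hour_of_day / 24, location_id / num_locations, appliance_id / num_appliances].

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3          # 5x5 map over 3-dimensional usage features
weights = rng.random((grid_h, grid_w, dim))

def train_som(events, epochs=20, lr0=0.5, radius0=2.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                    # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)  # shrinking neighbourhood
        for x in events:
            # 1) find the best-matching unit (BMU)
            dists = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
            # 2) move the BMU and its grid neighbours toward the input
            for i in range(grid_h):
                for j in range(grid_w):
                    grid_dist2 = (i - bi) ** 2 + (j - bj) ** 2
                    h = np.exp(-grid_dist2 / (2 * radius ** 2))  # neighbourhood weight
                    weights[i, j] += lr * h * (x - weights[i, j])

# Two weeks of (hypothetical) usage events, encoded as vectors in [0, 1].
events = rng.random((200, dim))
train_som(events)
# After training, events mapped to the same or adjacent units can be treated as
# candidates for a combinational service (services used at similar times or places).
```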

A Study on Modeling Digital library for Informatization of School Education (학교교육 정보화를 위한 디지털 도서관 모형에 관한 연구)

  • 유양근
    • Journal of Korean Library and Information Science Society / v.32 no.2 / pp.93-119 / 2001
  • This study had two purposes: to develop a digital library as a way of facilitating students' learning, and to design a model of a digital library for schools. To this end, the current Korean educational information network plan and the problems of school libraries were analyzed first. Second, the role of the school library was identified: to build a self-motivated learning environment in an information society and to help students develop creative learning abilities. Finally, a model of a digital school library was suggested.


Deep learning-based scalable and robust channel estimator for wireless cellular networks

  • Anseok Lee;Yongjin Kwon;Hanjun Park;Heesoo Lee
    • ETRI Journal / v.44 no.6 / pp.915-924 / 2022
  • In this paper, we present the two-stage scalable channel estimator (TSCE), a deep learning (DL)-based scalable and robust channel estimator for wireless cellular networks, which consists of two DL networks that efficiently support different resource allocation sizes and reference signal configurations. Both networks use the transformer, a state-of-the-art neural network architecture, as a backbone for accurate estimation. For computationally efficient global feature extraction, we propose window-based and window-averaging-based self-attention. Our results show that TSCE learns wireless propagation channels correctly and outperforms both traditional estimators and baseline DL-based estimators. Additionally, scalability and robustness evaluations show that TSCE is more robust in various environments than the baseline DL-based estimators.
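The abstract mentions window-based and window-averaging-based self-attention as computation-efficient alternatives to full self-attention. The rough sketch below illustrates those two ideas generically; the exact TSCE formulation (head counts, normalization, how the two branches are combined) is not given here, so everything beyond standard scaled dot-product attention is an assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention.
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def window_self_attention(x, win):
    # Attention restricted to non-overlapping windows: cheaper than full attention.
    n, d = x.shape
    assert n % win == 0
    xw = x.reshape(n // win, win, d)                 # (num_windows, win, d)
    return attention(xw, xw, xw).reshape(n, d)

def window_average_attention(x, win):
    # Global but cheap: every position attends only to per-window mean tokens.
    n, d = x.shape
    means = x.reshape(n // win, win, d).mean(axis=1)  # (num_windows, d)
    return attention(x, means, means)                 # (n, d)

# Example: 48 resource elements with 16-dimensional features (made-up sizes).
x = np.random.default_rng(1).standard_normal((48, 16))
local_feat = window_self_attention(x, win=8)
global_feat = window_average_attention(x, win=8)
out = local_feat + global_feat   # one plausible way to combine local and global features
```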

Unsupervised Learning with Natural Low-light Image Enhancement (자연스러운 저조도 영상 개선을 위한 비지도 학습)

  • Lee, Hunsang;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society / v.23 no.2 / pp.135-145 / 2020
  • Recently, deep-learning-based methods for low-light image enhancement have achieved great success through supervised learning. However, they still suffer from a lack of sufficient training data because large numbers of low-/normal-light image pairs are difficult to obtain in real environments. In this paper, we propose an unsupervised learning approach for single low-light image enhancement using the bright channel prior (BCP), which imposes the constraint that the brightest pixel in a small patch is likely to be close to 1. With this prior, a pseudo ground-truth is first generated to establish an unsupervised loss function. The proposed enhancement network is then trained using this unsupervised loss. To the best of our knowledge, this is the first attempt to perform low-light image enhancement through unsupervised learning. In addition, we introduce a self-attention map for preserving image details and naturalness in the enhanced result. We validate the proposed method on various public datasets, demonstrating that it achieves competitive performance compared with state-of-the-art methods.
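The bright channel prior assumes the brightest pixel in a small patch is close to 1, and the abstract says a pseudo ground-truth built from this prior drives the unsupervised loss. The sketch below shows one simple way to construct such a pseudo target and loss; the patch size and the L1 loss form are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(img, patch=15):
    # Bright channel: per-pixel max over colour channels, then a local max filter.
    return maximum_filter(img.max(axis=2), size=patch)

def pseudo_ground_truth(low, patch=15, eps=1e-3):
    # BCP says the bright channel of a well-exposed image should be close to 1,
    # so dividing by the (clipped) bright channel gives a rough enhanced target.
    bc = np.clip(bright_channel(low, patch), eps, 1.0)
    return np.clip(low / bc[..., None], 0.0, 1.0)

def unsupervised_loss(enhanced, low, patch=15):
    # L1 distance between the network output and the BCP-based pseudo target.
    target = pseudo_ground_truth(low, patch)
    return np.abs(enhanced - target).mean()

# Example with a random "low-light" image (H, W, 3) in [0, 1].
low = np.random.default_rng(2).random((64, 64, 3)) * 0.3
enhanced = pseudo_ground_truth(low)        # stand-in for a network prediction
print(unsupervised_loss(enhanced, low))    # ~0 for this trivial stand-in
```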

Effects of Utilization of Social Network Service on Collaborative Learning (소셜 네트워크 서비스 활용이 협력 학습에 미치는 효과)

  • Shin, Jin;Chon, Eunhwa
    • Journal of the Korea Society of Computer and Information / v.18 no.11 / pp.241-254 / 2013
  • The purpose of this study is to analyze the effects of social network services on collaborative learning. Four groups were formed according to the type of social network service used: Kakao Talk, Facebook, both Kakao Talk and Facebook, and a non-user group. A preliminary test revealed no differences in mobile efficacy, career decision-making self-efficacy, or course interest among the four groups. In the post-test, the Kakao Talk group and the group that used both Kakao Talk and Facebook scored significantly higher on the team collaboration scale than the Facebook-only and non-user groups. Analysis of the Facebook messages showed that the group using both Kakao Talk and Facebook generated more messages, reads, replies, and "Like" clicks than the group using only Facebook. These results were statistically significant.

Fuzzy neural network modeling using hyper elliptic gaussian membership functions (초타원 가우시안 소속함수를 사용한 퍼지신경망 모델링)

  • 권오국;주영훈;박진배
    • 제어로봇시스템학회:학술대회논문집 / 1997.10a / pp.442-445 / 1997
  • We present a hybrid self-tuning method for fuzzy inference systems with hyper-elliptic Gaussian membership functions, using a genetic algorithm (GA) and the back-propagation algorithm. The proposed self-tuning method has two phases: a coarse tuning phase based on the GA and a fine tuning phase based on back-propagation. Because the parameters obtained by the GA are only near-optimal solutions, the back-propagation algorithm, one of the standard learning algorithms for neural networks, is used to fine-tune them. We use the Box-Jenkins time series to evaluate the effectiveness of the proposed approach and compare it with the conventional method. (A sketch of such a membership function follows this entry.)

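The abstract refers to hyper-elliptic Gaussian membership functions tuned coarsely by a GA and finely by back-propagation. A minimal sketch of such a membership function, an axis-aligned multidimensional Gaussian with a separate center and width per input, is given below; the paper's exact parameterization (e.g. any rotation or shape exponent) is not reproduced here.

```python
import numpy as np

def hyper_elliptic_gaussian(x, center, width):
    # Axis-aligned multidimensional Gaussian membership function:
    #   mu(x) = exp( - sum_i (x_i - c_i)^2 / (2 * sigma_i^2) )
    # Different widths per axis make the equal-membership contours hyper-ellipses.
    x, center, width = map(np.asarray, (x, center, width))
    return float(np.exp(-np.sum((x - center) ** 2 / (2.0 * width ** 2))))

# Example: membership of a 2-D input in one fuzzy rule's antecedent (made-up values).
print(hyper_elliptic_gaussian([0.4, 1.2], center=[0.5, 1.0], width=[0.2, 0.5]))

# In the hybrid tuning scheme described above, a GA would search over the
# (center, width) parameters of all rules (coarse tuning), and back-propagation
# of the output error would then adjust them locally (fine tuning).
```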

Validity Study of Kohonen Self-Organizing Maps

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods / v.10 no.2 / pp.507-517 / 2003
  • The self-organizing map (SOM) has been developed mainly by T. Kohonen and his colleagues as an unsupervised learning neural network. Because of its topological ordering property, the SOM is known to be very useful in pattern recognition and text information retrieval. Recently, data miners have frequently used Kohonen's mapping method in exploratory analyses of large data sets. One problem facing the SOM builder is that there is no sensible criterion for evaluating the goodness-of-fit of the map at hand. In this short communication, we propose valid evaluation procedures for the Kohonen SOM of any size. The methods can be used to select the best map among several candidates.
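The abstract notes the absence of a sensible goodness-of-fit criterion for a trained SOM. The snippet below computes two commonly used baseline diagnostics, quantization error and topographic error; these are standard measures and not necessarily the evaluation procedures proposed in the paper.

```python
import numpy as np

def quantization_error(data, weights):
    # Average distance from each observation to its best-matching unit (BMU).
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=2)
    return d.min(axis=1).mean()

def topographic_error(data, weights):
    # Fraction of observations whose two nearest units are not grid neighbours.
    h, w, _ = weights.shape
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=2)
    nearest2 = np.argsort(d, axis=1)[:, :2]
    errors = 0
    for a, b in nearest2:
        (ai, aj), (bi, bj) = divmod(a, w), divmod(b, w)
        if abs(ai - bi) + abs(aj - bj) > 1:   # not adjacent on the map grid
            errors += 1
    return errors / len(data)

# Example with random data and a random 6x6 map of 4-dimensional units.
rng = np.random.default_rng(3)
data, weights = rng.random((100, 4)), rng.random((6, 6, 4))
print(quantization_error(data, weights), topographic_error(data, weights))
```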

Shadow Removal based on the Deep Neural Network Using Self Attention Distillation (자기 주의 증류를 이용한 심층 신경망 기반의 그림자 제거)

  • Kim, Jinhee;Kim, Wonjun
    • Journal of Broadcast Engineering / v.26 no.4 / pp.419-428 / 2021
  • Shadow removal plays a key role in the pre-processing for image processing techniques such as object tracking and detection. With advances in image recognition based on deep convolutional neural networks, research on shadow removal has been actively conducted. In this paper, we propose a novel method for shadow removal that utilizes self-attention distillation to extract semantic features. The proposed method gradually refines the shadow detection results extracted from each layer of the network via top-down distillation. Specifically, training can be performed efficiently by learning the contextual information for shadow removal without shadow masks. Experimental results on various datasets show the effectiveness of the proposed method for shadow removal in real-world environments.
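Self-attention distillation, as described in the abstract, refines intermediate results by letting earlier layers mimic attention maps derived from later layers in a top-down fashion. The sketch below computes channel-pooled attention maps from intermediate feature maps and an auxiliary distillation loss between adjacent layers; the pooling, normalization, and loss choices here are generic assumptions rather than the paper's exact design.

```python
import numpy as np

def attention_map(feat):
    # Channel-pooled spatial attention: sum of squared activations, normalized to sum 1.
    amap = (feat ** 2).sum(axis=0)            # (H, W)
    return amap / (amap.sum() + 1e-8)

def self_attention_distillation_loss(layer_feats):
    # Top-down distillation: each layer's attention map is pushed toward the
    # attention map of the next (deeper) layer, which acts as a soft target.
    loss = 0.0
    for shallow, deep in zip(layer_feats[:-1], layer_feats[1:]):
        a_s, a_d = attention_map(shallow), attention_map(deep)
        loss += float(((a_s - a_d) ** 2).mean())   # MSE between attention maps
    return loss

# Example: three intermediate feature maps (C, H, W) with matching spatial size
# (in practice deeper maps would be upsampled to a common resolution first).
rng = np.random.default_rng(4)
feats = [rng.standard_normal((c, 32, 32)) for c in (16, 32, 64)]
print(self_attention_distillation_loss(feats))
```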

Performance of Investment Strategy using Investor-specific Transaction Information and Machine Learning (투자자별 거래정보와 머신러닝을 활용한 투자전략의 성과)

  • Kim, Kyung Mock;Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.65-82 / 2021
  • Stock market investors are generally divided into foreign investors, institutional investors, and individual investors. Compared to individual investors, professional investor groups such as foreign investors have an advantage in information and financial power, and as a result foreign investors are known to show good investment performance among market participants. The purpose of this study is to propose an investment strategy that combines investor-specific transaction information and machine learning, and to analyze the portfolio investment performance of the proposed model using actual stock prices and investor-specific transaction data. The Korea Exchange provides securities firms with daily information on each investor group's purchase and sale volumes. We developed a data collection program in the C# programming language using an API provided by Daishin Securities Cybosplus and collected daily opening prices, closing prices, and investor-specific net purchase data for 151 of the KOSPI 200 stocks from January 2, 2007 to July 31, 2017. The self-organizing map is an artificial neural network that performs clustering by unsupervised learning and was introduced by Teuvo Kohonen in the early 1980s. Competition is implemented among the neurons on the map surface, and all connections are non-recursive, running from the input layer upward. The map can also be expanded to multiple layers, although a single layer is commonly used. Linear functions are used as the activation functions of the artificial neurons, and the learning rule is the Instar rule, as in general competitive learning. The backpropagation model, in contrast, is an artificial neural network that performs classification through supervised learning. We grouped and transformed the investor-specific transaction volume data with the self-organizing map and used the result to train a backpropagation model. Based on the predictions for the verification data, the portfolios were rebalanced monthly. For performance analysis, a passive portfolio was designated, and the KOSPI and KOSPI 200 index returns were obtained as proxies for market returns. Performance was analyzed using the equally weighted portfolio return, compound interest rate, annual return, maximum drawdown (MDD), standard deviation, and Sharpe ratio. The buy-and-hold returns of the top 10 stocks by market capitalization were designated as a benchmark; under the efficient market hypothesis, buy-and-hold is the best strategy. The prediction accuracy of the backpropagation model on the training data was significantly high at 96.61%, and the accuracy on the verification data was also relatively high at 57.1%. The quality of the self-organizing map grouping can be judged from the backpropagation results: if the grouping had been poor, the backpropagation model would have learned poorly. In this respect, the machine learning performance is judged to be better than in previous studies. Our portfolio doubled the benchmark return and outperformed the market returns of the KOSPI and KOSPI 200 indexes. The MDD and standard deviation risk indicators were also better than those of the benchmark, and the Sharpe ratio was higher than both the benchmark and the stock market indexes. Through this, we presented a direction for a portfolio composition program using machine learning and investor-specific transaction information and showed that it can be used to develop programs for real stock investment. The reported return is the result of monthly portfolio composition with rebalancing to equal proportions. Better outcomes are expected if the suggested stocks are held and rebalanced continuously rather than fully sold and repurchased each month. The approach therefore appears applicable to real transactions. (A hypothetical sketch of the SOM-plus-backpropagation pipeline follows this entry.)
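The abstract describes a two-stage pipeline: a self-organizing map groups investor-specific transaction data without supervision, and a backpropagation network then performs supervised classification on the grouped data. The following is a compact, hypothetical illustration of that pipeline using a tiny numpy SOM and scikit-learn's MLPClassifier standing in for the backpropagation model; the feature construction, map size, labels, and all hyperparameters are placeholders, not the study's actual settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)

# Hypothetical daily features per stock: net purchases by foreign, institutional,
# and individual investors (normalized), plus a daily return.
X = rng.standard_normal((1000, 4))
y = (rng.random(1000) > 0.5).astype(int)       # placeholder "up next month" labels

# --- Stage 1: unsupervised grouping with a small SOM -------------------------
def train_som(data, grid=(4, 4), epochs=10, lr=0.3):
    w = rng.random((grid[0] * grid[1], data.shape[1]))
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            w[bmu] += lr * (x - w[bmu])        # winner-take-all (Instar-style) update
    return w

def som_group(data, w):
    return np.argmin(np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2), axis=1)

w = train_som(X)
groups = som_group(X, w)

# --- Stage 2: supervised classification with a backpropagation network -------
# The SOM group index is appended to the raw features before supervised learning.
X_aug = np.hstack([X, groups[:, None].astype(float)])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_aug[:800], y[:800])
print("verification accuracy:", clf.score(X_aug[800:], y[800:]))
```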

Genetically Optimized Self-Organizing Polynomial Neural Networks (진화론적 최적 자기구성 다항식 뉴럴 네트워크)

  • 박호성;박병준;장성환;오성권
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.1 / pp.40-49 / 2004
  • In this paper, we propose a new architecture of Genetic Algorithm (GA)-based Self-Organizing Polynomial Neural Networks (SOPNN), discuss a comprehensive design methodology, and carry out a series of numerical experiments. The conventional SOPNN is based on the extended Group Method of Data Handling (GMDH) and uses a polynomial order (linear, quadratic, or modified quadratic) and a number of node inputs that are fixed in advance by the designer at the Polynomial Neurons (nodes) of each layer, which are generated through a growth process of the network. Moreover, it does not guarantee that the SOPNN produced through learning has an optimal network architecture. The proposed GA-based SOPNN yields a structurally more optimized, more flexible, and preferable network than the conventional SOPNN. To generate the structurally optimized SOPNN, the GA-based design procedure at each stage (layer) selects preferred nodes (PNs) with optimal parameters, such as the number of input variables, the particular input variables, and the order of the polynomial, from those available within the SOPNN. An aggregate performance index with a weighting factor is proposed to achieve a sound balance between the approximation and generalization (predictive) abilities of the model. The design procedure is discussed in detail. To evaluate the performance of the GA-based SOPNN, the model is tested on two time series data sets (gas furnace data and NOx emission process data from a gas turbine power plant). A comparative analysis shows that the proposed GA-based SOPNN achieves higher accuracy and better predictive capability than other intelligent models presented previously.
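In a GMDH-based SOPNN, each polynomial neuron fits a low-order polynomial of a small subset of inputs by least squares, and the GA in the proposed method selects which inputs and which polynomial order each node uses. The snippet below fits a single quadratic polynomial neuron with two inputs using numpy least squares; it is a generic GMDH-style node, not the paper's full growth or GA selection procedure.

```python
import numpy as np

def quadratic_features(x1, x2):
    # Quadratic polynomial neuron basis: 1, x1, x2, x1*x2, x1^2, x2^2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def fit_polynomial_neuron(x1, x2, y):
    # Least-squares fit of the polynomial coefficients (as in GMDH node estimation).
    coeffs, *_ = np.linalg.lstsq(quadratic_features(x1, x2), y, rcond=None)
    return coeffs

def polynomial_neuron(x1, x2, coeffs):
    return quadratic_features(x1, x2) @ coeffs

# Example: learn y = 1 + 2*x1 - x2 + 0.5*x1*x2 from noisy samples (made-up data).
rng = np.random.default_rng(6)
x1, x2 = rng.random(200), rng.random(200)
y = 1 + 2 * x1 - x2 + 0.5 * x1 * x2 + 0.01 * rng.standard_normal(200)
coeffs = fit_polynomial_neuron(x1, x2, y)
print(np.round(coeffs, 2))
# A GA, as in the proposed GA-based SOPNN, would choose which input pair and which
# polynomial order each node uses; the network then grows layer by layer, keeping
# the best-performing nodes according to the aggregate performance index.
```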