• Title/Summary/Keyword: matrix learning

Exercise Recommendation System Using Deep Neural Collaborative Filtering (신경망 협업 필터링을 이용한 운동 추천시스템)

  • Jung, Wooyong;Kyeong, Chanuk;Lee, Seongwoo;Kim, Soo-Hyun;Sun, Young-Ghyu;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.6
    • /
    • pp.173-178
    • /
    • 2022
  • Recently, recommendation systems that use deep learning in social network services have been actively studied. However, deep-learning-based recommendation systems suffer from the cold-start problem and from increased learning time caused by complex computation. In this paper, a user-tailored exercise routine recommendation algorithm that uses the user's metadata is proposed. Metadata (the user's height, weight, sex, etc.) is set as the input of the model designed in the proposed algorithm. The exercise recommendation system model proposed in this paper is designed on the basis of the neural collaborative filtering (NCF) algorithm, which combines a multi-layer perceptron with a matrix factorization algorithm. The model is trained on user metadata and exercise information, and once training is complete it returns a recommendation score when a specific exercise is given as input. Experimental results show that the proposed exercise recommendation system model achieves a 10% improvement in recommendation performance and a 50% reduction in learning time compared with the existing NCF model.
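
The following is a minimal illustrative sketch (not the authors' implementation) of an NCF-style scorer in PyTorch that combines a matrix-factorization branch with an MLP branch fed by user metadata; the layer sizes, metadata fields, and exercise count are assumptions made for the example.

```python
# Sketch: NCF-style model with a GMF branch and an MLP branch, where the
# "user" vector is derived from metadata (height, weight, sex) instead of an ID.
import torch
import torch.nn as nn

class MetaNCF(nn.Module):
    def __init__(self, n_exercises, meta_dim=3, emb_dim=8):
        super().__init__()
        self.ex_emb_mf = nn.Embedding(n_exercises, emb_dim)   # item factors for the MF branch
        self.ex_emb_mlp = nn.Embedding(n_exercises, emb_dim)  # item factors for the MLP branch
        self.user_proj = nn.Linear(meta_dim, emb_dim)         # metadata -> latent user vector
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 16), nn.ReLU(),
            nn.Linear(16, 8), nn.ReLU(),
        )
        self.out = nn.Linear(emb_dim + 8, 1)                  # fuse GMF and MLP parts

    def forward(self, meta, exercise_id):
        u = self.user_proj(meta)                              # latent user vector from metadata
        gmf = u * self.ex_emb_mf(exercise_id)                 # element-wise product (GMF part)
        mlp = self.mlp(torch.cat([u, self.ex_emb_mlp(exercise_id)], dim=-1))
        return torch.sigmoid(self.out(torch.cat([gmf, mlp], dim=-1))).squeeze(-1)

model = MetaNCF(n_exercises=50)
meta = torch.tensor([[175.0, 70.0, 1.0]])     # hypothetical height, weight, sex
score = model(meta, torch.tensor([3]))        # recommendation score for exercise id 3
```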

Supervised Learning-Based Collaborative Filtering Using Market Basket Data for the Cold-Start Problem

  • Hwang, Wook-Yeon;Jun, Chi-Hyuck
    • Industrial Engineering and Management Systems
    • /
    • v.13 no.4
    • /
    • pp.421-431
    • /
    • 2014
  • Market basket data in the form of a binary user-item matrix or a binary item-user matrix can be modelled as a binary classification problem. The binary logistic regression approach tackles this classification problem with principal components as predictor variables. If users or items are sparse in the training data, the binary classification problem can be considered a cold-start problem, and the binary logistic regression approach may not function appropriately when the principal components are inefficient for it. Assuming that the market basket data can also be treated as a special regression problem whose response is either 0 or 1, we propose three supervised learning approaches (random forest regression, random forest classification, and elastic net) to tackle the cold-start problem and compare their performance in a variety of experimental settings. The experimental results show that the proposed supervised learning approaches outperform the conventional approaches.
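
As a rough sketch of the setup described above, the snippet below treats one column of a synthetic binary user-item matrix as a 0/1 response and fits the three proposed learners with scikit-learn; the data and hyperparameters are invented for illustration and are not the paper's experimental configuration.

```python
# Illustrative only: synthetic binary market basket data, with one item
# treated as the "cold" response predicted from the remaining items.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
basket = (rng.random((200, 20)) < 0.2).astype(int)    # synthetic binary user-item matrix
X, y = basket[:, :-1], basket[:, -1]                  # predict one item from the rest

models = {
    "rf_regression": RandomForestRegressor(n_estimators=100, random_state=0),
    "rf_classification": RandomForestClassifier(n_estimators=100, random_state=0),
    "elastic_net": ElasticNet(alpha=0.01, l1_ratio=0.5),
}
for name, m in models.items():
    m.fit(X, y)
    print(name, m.predict(X[:3]))                     # scores usable as purchase propensities
```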

Smoothed Local PCA by BYY data smoothing learning

  • Liu, Zhiyong;Xu, Lei
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.109.3-109
    • /
    • 2001
  • The so-called curse of dimensionality arises when a Gaussian mixture is used on high-dimensional, small-sample-size data, since the number of free elements that need to be specified in each covariance matrix of the Gaussian mixture grows rapidly with the dimension d. In this paper, by constraining the covariance matrix to its decomposed orthonormal form, we obtain a local PCA model and thereby reduce the number of free elements that need to be specified. Moreover, to cope with the small-sample-size problem, we adopt BYY data smoothing learning, a regularization over maximum likelihood learning obtained from BYY harmony learning, to implement this local PCA model.
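
A minimal numpy sketch of the parameter-reduction idea (not the BYY learning algorithm itself): the covariance is represented in decomposed orthonormal form with only q leading directions plus an isotropic term, where q and the smoothing parameter h are assumed values standing in for what data smoothing would regularize.

```python
# Represent a covariance with q orthonormal directions plus a shared
# isotropic term, so far fewer free elements than the full d*(d+1)/2.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 50))            # small-sample, high-dimensional data
q, h = 3, 0.1                            # hypothetical local PCA rank and smoothing parameter

S = np.cov(X, rowvar=False)              # full covariance: d*(d+1)/2 free elements
evals, evecs = np.linalg.eigh(S)
U = evecs[:, -q:]                        # q orthonormal directions (local PCA subspace)
lam = evals[-q:]
sigma2 = evals[:-q].mean()               # shared variance for the discarded directions
S_local = U @ np.diag(lam) @ U.T + (sigma2 + h**2) * np.eye(S.shape[0])
print(S_local.shape)                     # same size, parameterized by far fewer free values
```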

Multi-Description Image Compression Coding Algorithm Based on Depth Learning

  • Yong Zhang;Guoteng Hui;Lei Zhang
    • Journal of Information Processing Systems
    • /
    • v.19 no.2
    • /
    • pp.232-239
    • /
    • 2023
  • To address the poor compression quality of the traditional image compression coding (ICC) algorithm, a multi-description ICC algorithm based on deep learning is put forward in this study. First, an image compression algorithm was designed based on multi-description coding theory; image compression samples were collected and the measurement matrix was calculated. The multi-description ICC sample set was then processed with a convolutional self-coding (autoencoder) neural network from deep learning. Compressing the coded wavelet coefficients and synthesizing the multi-description image band sparse matrix yields the multi-description ICC sequence, and averaging the multi-description image coding data according to the position of each effective point finally realizes the compression coding of multi-description images. According to the experimental results, the designed algorithm consumes less time for image compression and exhibits better compression quality and image reconstruction.
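
A toy numpy illustration of the multi-description principle only, not the paper's convolutional autoencoder pipeline: the image is split into two descriptions, each can be decoded on its own, and the decoder averages when both descriptions arrive.

```python
# Two descriptions (even / odd rows); either one alone gives a usable
# reconstruction, and combining both by averaging improves quality.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))

desc1, desc2 = img[0::2, :], img[1::2, :]            # two descriptions of the image

rec1 = np.repeat(desc1, 2, axis=0)                   # reconstruction if only description 1 arrives
rec2 = np.repeat(desc2, 2, axis=0)                   # reconstruction if only description 2 arrives
rec_both = 0.5 * (rec1 + rec2)                       # average when both descriptions arrive

for name, rec in [("desc1 only", rec1), ("desc2 only", rec2), ("both", rec_both)]:
    print(name, "MSE:", float(np.mean((rec - img) ** 2)))
```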

Deep Learning-based Product Recommendation Model for Influencer Marketing (인플루언서를 위한 딥러닝 기반의 제품 추천모델 개발)

  • Song, Hee Seok;Kim, Jae Kyung
    • Journal of Information Technology Applications and Management
    • /
    • v.29 no.3
    • /
    • pp.43-55
    • /
    • 2022
  • In this study, with the goal of developing a deep learning-based product recommendation model that effectively matches influencers and products, three models were developed and tested: a collaborative filtering model based on generalized matrix factorization (GMF), a collaborative filtering model based on a multi-layer perceptron (MLP), and neural matrix factorization (NeuMF), a hybrid model combining GMF and MLP. In particular, the one-class problem free boosting (OCF-B) method is used to handle the one-class problem that occurs when a deep learning-based collaborative filtering recommender is trained only on positive cases from implicit feedback. Regarding model selection, the MLP model showed the highest overall performance, with weighted average precision, weighted average recall, and F1 score all at 0.85 (n=3,000, term=15). This study is meaningful in practice because it attempts to commercialize a deep learning-based recommendation system in a setting where influencers' promotion data are being accumulated but a practical personalized recommendation service has not yet been commercially applied.
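
Because the one-class issue is central here, the sketch below shows the generic negative-sampling workaround for implicit feedback on made-up influencer-product data; it is not the OCF-B method used in the paper, and all names and sizes are illustrative.

```python
# Implicit feedback gives only positive (influencer, product) pairs; a common
# workaround is to sample unobserved pairs as negatives before training.
import numpy as np

rng = np.random.default_rng(0)
n_influencers, n_products = 100, 40
positives = {(int(rng.integers(n_influencers)), int(rng.integers(n_products)))
             for _ in range(300)}

def sample_negatives(k):
    """Draw k influencer-product pairs that never appear as positives."""
    negs = []
    while len(negs) < k:
        pair = (int(rng.integers(n_influencers)), int(rng.integers(n_products)))
        if pair not in positives:
            negs.append(pair)
    return negs

X = [(u, i, 1) for u, i in positives] + [(u, i, 0) for u, i in sample_negatives(len(positives))]
print(len(X), "training triples (influencer, product, label)")
```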

A study of MIMO Fuzzy system with a Learning Ability (학습기능을 갖는 MIMO 퍼지시스템에 관한 연구)

  • Park, Jin-Hyun;Bae, Kang-Yul;Choi, Young-Kiu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.3
    • /
    • pp.505-513
    • /
    • 2009
  • Z. Cao proposed NFRM (new fuzzy reasoning method), which performs detailed inference using a relation matrix. Despite using only a small number of inference rules, it shows better performance than Mamdani's fuzzy inference method. However, most fuzzy systems have difficulty constructing fuzzy inference rules for MIMO systems. We previously proposed a MIMO fuzzy inference method that extends Z. Cao's fuzzy inference to handle MIMO systems, but much time and effort were needed to determine the relation matrix elements heuristically, by trial and error, in order to improve inference performance. In this paper, we propose a MIMO fuzzy inference method with a learning ability that uses a gradient descent method to improve performance. Through computer simulation studies of the inverse kinematics problem of a 2-axis robot, we show that the proposed inference method using gradient descent performs well.
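
A compact numpy sketch of the core idea, learning a relation matrix by gradient descent on squared output error; this illustrates the mechanism only and is not the authors' full MIMO NFRM design (the rule count, outputs, and training pair are made up).

```python
# Rule firing strengths are mapped to MIMO outputs through a relation
# matrix R, and R is adjusted by gradient descent to reduce squared error.
import numpy as np

rng = np.random.default_rng(0)
n_rules, n_outputs = 9, 2
R = rng.random((n_rules, n_outputs))          # relation matrix to be learned

def infer(firing, R):
    w = firing / firing.sum()                 # normalized firing strengths of the rules
    return w @ R                              # weighted combination -> crisp MIMO output

firing = rng.random(n_rules)                  # firing strengths for one training input
target = np.array([0.3, 0.7])                 # desired MIMO output for that input
lr = 0.5
for _ in range(200):
    w = firing / firing.sum()
    err = w @ R - target
    R -= lr * np.outer(w, err)                # dE/dR = outer(w, y - t) for squared error
print(infer(firing, R), "~", target)
```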

Pseudoinverse Matrix Decomposition Based Incremental Extreme Learning Machine with Growth of Hidden Nodes

  • Kassani, Peyman Hosseinzadeh;Kim, Euntai
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.125-130
    • /
    • 2016
  • This study proposes a fast version of the conventional extreme learning machine (ELM), called pseudoinverse matrix decomposition based incremental ELM (PDI-ELM). One of the main problems in ELM is determining the number of hidden nodes; in this study, that number is determined automatically. The proposed model is an incremental version of ELM that adds neurons with the goal of minimizing the error of the ELM network. To speed up the model, the pseudoinverse information from the previous step is taken into account in the current iteration. To show the ability of PDI-ELM, it is applied to several benchmark classification datasets from the University of California Irvine (UCI) repository. Compared to the ELM learner and two other versions of incremental ELM, the proposed PDI-ELM is faster.
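
The sketch below shows a plain incremental ELM in numpy that grows hidden nodes and recomputes the pseudoinverse at every step; the paper's PDI-ELM instead reuses the pseudoinverse information from the previous step, which is the part omitted here. Data, activation, and the stopping threshold are assumptions.

```python
# Basic incremental ELM: random hidden weights, output weights from the
# pseudoinverse, and hidden nodes added until the training error is small.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
T = (X[:, :1] * X[:, 1:2] > 0).astype(float)       # synthetic binary target

W, b = np.empty((5, 0)), np.empty((1, 0))
for n_hidden in range(1, 51):                      # grow hidden nodes one by one
    W = np.hstack([W, rng.normal(size=(5, 1))])    # append a random hidden node
    b = np.hstack([b, rng.normal(size=(1, 1))])
    H = np.tanh(X @ W + b)                         # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                   # output weights via pseudoinverse
    err = np.mean((H @ beta - T) ** 2)
    if err < 0.05:                                 # stop once the fit is good enough
        break
print("hidden nodes:", n_hidden, "training MSE:", round(float(err), 4))
```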

Application of Matrix Thinking Method to Introduction Program in Engineering Education

  • Satoh, Yasuta;Kubota, Shusuke;Takahashi, Koji;Takahata, Yasuyuki;Kim, Yun-Hae
    • Journal of Engineering Education Research
    • /
    • v.13 no.2
    • /
    • pp.22-27
    • /
    • 2010
  • Numerous surveys make it clear that many university students lose their desire for learning soon after entering university. To solve this problem, we developed a novel educational tool for students, named "the thinking method based on matrix diagrams". If students apply themselves with the help of this tool, they can learn how to design and manage fulfilling university lives, in addition to acquiring basic knowledge and improving their basic abilities. It was also found that, after learning a common method, students can share knowledge with one another, which helps them improve their communication abilities drastically.

Face Recognition using Non-negative Matrix Factorization and Learning Vector Quantization (비음수 행렬 분해와 학습 벡터 양자화를 이용한 얼굴 인식)

  • Jin, Donghan;Kang, Hyunchul
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.3
    • /
    • pp.55-62
    • /
    • 2017
  • Non-negative matrix factorization (NMF) is a typical parts-based representation in which images are expressed as a linear combination of basis vectors that capture the local features or objects in the images. In this paper, we represent face images using various NMF methods and recognize face identities from the extracted features using learning vector quantization. We analyzed the various NMF methods by comparing the extracted basis vectors, and we confirmed the applicability of NMF to face recognition by verifying the recognition rates of the various NMF methods.
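
A hedged sketch of the pipeline on synthetic data: scikit-learn NMF for the parts-based features, followed by a hand-written LVQ1 step (scikit-learn provides NMF but no LVQ). Image sizes, class counts, and the learning rate are assumptions, not the paper's setup.

```python
# NMF features on flattened "face" vectors, then a one-prototype-per-class
# LVQ1 classifier trained on those features.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
faces = rng.random((60, 256))                      # 60 synthetic 16x16 images, flattened
labels = rng.integers(0, 3, size=60)               # 3 identities

feats = NMF(n_components=10, init="nndsvda", max_iter=500).fit_transform(faces)

# LVQ1: prototypes pulled toward same-class samples, pushed away otherwise
protos = np.stack([feats[labels == c].mean(axis=0) for c in range(3)])
lr = 0.05
for x, y in zip(feats, labels):
    c = np.argmin(np.linalg.norm(protos - x, axis=1))    # nearest prototype
    protos[c] += lr * (x - protos[c]) if c == y else -lr * (x - protos[c])

pred = np.argmin(np.linalg.norm(feats[:, None] - protos[None], axis=2), axis=1)
print("training accuracy:", float((pred == labels).mean()))
```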

Multilayer Neural Network Using Delta Rule: Recognitron III (델타규칙을 이용한 다단계 신경회로망 컴퓨터: Recognitron III)

  • 김춘석;박충규;이기한;황희영
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.40 no.2
    • /
    • pp.224-233
    • /
    • 1991
  • A multilayer expansion of the single-layer neural network (NN) was needed to solve the linear separability problem, as shown by the classic example using the XOR function. The EBP (error back propagation) learning rule often used in multilayer neural networks is not without its faults: 1) D. Rumelhart expanded the delta rule, but there is a problem in obtaining Ca from the linear combination of the weight matrix N between the hidden layer and the output layer and H, which is itself the result of a linear combination of the input pattern with the weight matrix M between the input layer and the hidden layer. 2) Even if using the difference between Ca and Da to adjust the weight matrix N between the hidden layer and the output layer is valid, using the same value to adjust the weight matrix M between the input layer and the hidden layer is wrong. Recognitron III is proposed to solve these faults. According to simulation results, since Recognitron III does not train the three-layer NN as a whole but divides it into several single-layer NNs and trains these with learning patterns, its learning time is 32.5 to 72.2 times faster than that of the EBP NN. The number of patterns learned by an EBP NN with n input and output cells and n+1 hidden cells is 2**n, but only n in a Recognitron III of the same size [5]. In terms of pattern generalization, however, the EBP NN falls short of Recognitron III.
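
Since the point above is that Recognitron III trains single-layer pieces with the delta rule rather than back-propagating through the whole stack, here is a short numpy sketch of the (generalized) delta rule on a single-layer network; it is a generic illustration, not the Recognitron III architecture, and the data are synthetic.

```python
# Generalized delta rule on a single-layer sigmoid network: the weight change
# is proportional to (target - output) times the activation derivative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
T = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float).reshape(-1, 1)

W = np.zeros((4, 1))
lr = 0.1
for _ in range(500):
    y = 1.0 / (1.0 + np.exp(-(X @ W)))                    # sigmoid output
    W += lr * X.T @ ((T - y) * y * (1 - y)) / len(X)      # delta-rule update
print("training accuracy:", float(((y > 0.5) == T).mean()))
```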
