• Title/Summary/Keyword: Improvement of prediction performance


Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems, v.21 no.1, pp.119-142, 2015
  • With the explosive growth in the volume of information, Internet users experience considerable difficulty in finding the information they need online. Against this backdrop, ever-greater importance is placed on recommender systems that provide information catered to user preferences and tastes in order to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains; CF, on the other hand, is widely used since it is relatively free of this domain constraint. CF techniques are broadly classified into memory-based, model-based and hybrid CF. Model-based CF addresses the drawbacks of CF by employing a Bayesian model, clustering model or dependency network model. It not only alleviates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model building and therefore a tradeoff between performance and scalability, attributable to reduced coverage, which is a type of sparsity issue. In addition, expensive model building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model; cumulative changes that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into CBCF to propose predictive clustering-based CF (PCCF), which addresses both reduced coverage and unstable performance. The method mitigates performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users.
Furthermore, reduced coverage is mitigated by expanding coverage on the basis of transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of change (propensities) in user preferences in propensity clustering. Lastly, a preference prediction model is built to predict user preferences for items. The proposed method was validated by testing robustness against performance instability and the scalability-performance tradeoff. The initial test compared the performance of recommender systems enabled by IBCF, CBCF, ICFEC and PCCF in an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF and compared the resulting changes in system performance. The results revealed that the suggested method produced only an insignificant improvement in performance over the existing techniques, and it failed to achieve a significant improvement in the standard deviation, which indicates the degree of data fluctuation. Nevertheless, it yielded a marked improvement over the existing techniques in range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and the level of performance fluctuation driven by changes in the number of clusters improved by 36.05% in the following test. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques.
Further research will be directed toward enhancing the recommendation performance, which failed to show significant improvement over the existing techniques, by introducing a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
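The preference transition step above can be illustrated with a first-order Markov model over discretized preference levels: transitions between a user's successive ratings are counted and normalized into transition probabilities on demand. This is a minimal sketch under that reading of the abstract; the level discretization and names are illustrative, not the authors' implementation.

```python
from collections import defaultdict

class PreferenceTransitionModel:
    """First-order Markov model over discretized preference levels.

    Counts transitions between consecutive ratings a user gives and
    converts them to transition probabilities when queried.
    """

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, prev_level, next_level):
        self.counts[prev_level][next_level] += 1

    def transition_prob(self, prev_level, next_level):
        total = sum(self.counts[prev_level].values())
        if total == 0:
            return 0.0
        return self.counts[prev_level][next_level] / total

    def predict_next(self, prev_level):
        """Most likely next preference level, or None if unseen."""
        if not self.counts[prev_level]:
            return None
        return max(self.counts[prev_level], key=self.counts[prev_level].get)

# Example: one user's successive ratings, discretized to {low, mid, high}
model = PreferenceTransitionModel()
ratings = ["low", "mid", "mid", "high", "mid", "high", "high"]
for prev, nxt in zip(ratings, ratings[1:]):
    model.observe(prev, nxt)
```

In the full PCCF method these transition probabilities would be combined with clustering probabilities to expand coverage; only the transition-tracking idea is shown here.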

Machine Learning-Based Transactions Anomaly Prediction for Enhanced IoT Blockchain Network Security and Performance

  • Nor Fadzilah Abdullah;Ammar Riadh Kairaldeen;Asma Abu-Samah;Rosdiadee Nordin
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.7, pp.1986-2009, 2024
  • The integration of blockchain technology with the rapid growth of Internet of Things (IoT) devices has enabled secure and decentralised data exchange. However, security vulnerabilities and performance limitations remain significant challenges in IoT blockchain networks. This work proposes a novel approach that combines transaction representation and machine learning techniques to address these challenges. Various clustering techniques, including k-means, DBSCAN, Gaussian Mixture Models (GMM), and hierarchical clustering, were employed to group unlabelled transaction data by their intrinsic characteristics. Classifier-based anomaly transaction prediction models were then developed using the labelled data. Performance metrics such as accuracy, precision, recall, and F1-measure were used to identify the minority class representing suspicious transactions or security threats. The classifiers were also evaluated on balanced and unbalanced data. Compared to unbalanced data, balanced data resulted in an overall average improvement of approximately 15.85% in accuracy, 88.76% in precision, 60% in recall, and 74.36% in F1-score, demonstrating that each classifier is robust, with consistently better predictive performance across the evaluation metrics. Moreover, the k-means and GMM clustering techniques outperformed the other techniques in identifying security threats, underscoring the importance of appropriate feature selection and clustering methods. The findings have practical implications for reinforcing security and efficiency in real-world IoT blockchain networks, paving the way for future investigations and advancements.
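The balanced-versus-unbalanced comparison reported above can be sketched with simple random oversampling of the minority class together with the standard precision/recall/F1 definitions. This is a toy illustration of the general technique, not the authors' pipeline; the helper names are ours.

```python
import random

def oversample_minority(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until classes are balanced."""
    rng = random.Random(seed)
    pos = [s for s, y in zip(samples, labels) if y == 1]
    neg = [s for s, y in zip(samples, labels) if y == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    min_label = 1 if minority is pos else 0
    maj_label = 1 - min_label
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = majority + minority + extra
    balanced_labels = ([maj_label] * len(majority)
                       + [min_label] * (len(minority) + len(extra)))
    return balanced, balanced_labels

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

In practice a library resampler and metric implementation would be used; the sketch only makes the balancing step and the reported metrics concrete.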

Analysis of streamflow prediction performance by various deep learning schemes

  • Le, Xuan-Hien;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference, 2021.06a, pp.131-131, 2021
  • Deep learning models, especially those based on long short-term memory (LSTM), have recently demonstrated their superiority in addressing time series problems. This study comprehensively evaluates the performance of supervised deep learning models in streamflow prediction. Six deep learning models were considered: standard LSTM, standard gated recurrent unit (GRU), stacked LSTM, bidirectional LSTM (BiLSTM), feed-forward neural network (FFNN), and convolutional neural network (CNN). The Red River system, one of the largest river basins in Vietnam, was adopted as a case study. The models were designed to forecast flowrate one and two days ahead at Son Tay hydrological station on the Red River, using observed flowrate series at seven hydrological stations on the system's three major branches (the Thao, Da, and Lo Rivers) as input data for training, validation, and testing. The comparison indicates that the four LSTM-based models exhibit significantly better performance and stability than the FFNN and CNN models. Moreover, LSTM-based models can produce impressive predictions even in the presence of upstream reservoirs and dams. For the stacked LSTM and BiLSTM models, the added complexity is not accompanied by a performance improvement, since their performance is no higher than that of the two standard models (LSTM and GRU). We therefore conclude that, for hydrological forecasting problems, simple architectures such as LSTM and GRU with one hidden layer are sufficient to produce highly reliable forecasts while minimizing computation time, given the sequential nature of the data.
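Whichever network is used, a one- or two-day-ahead forecast first requires framing the multi-station flow series as supervised (window, target) pairs. A minimal sketch of that sliding-window transform, with illustrative window length and horizon:

```python
def make_windows(inputs, target, window, lead):
    """Build (input, target) pairs for lead-day-ahead forecasting.

    inputs: list of per-day feature tuples (one value per upstream station)
    target: list of daily flowrates at the forecast station (same length)
    window: number of past days fed to the model
    lead:   forecast horizon in days (1 or 2 in the study above)
    """
    X, y = [], []
    for t in range(window, len(target) - lead + 1):
        X.append(inputs[t - window:t])       # past `window` days of features
        y.append(target[t + lead - 1])       # flowrate `lead` days later
    return X, y

# Example: one upstream station, six days of data, 3-day window
inputs = [(i,) for i in range(6)]
target = [10, 11, 12, 13, 14, 15]
X1, y1 = make_windows(inputs, target, window=3, lead=1)
X2, y2 = make_windows(inputs, target, window=3, lead=2)
```

The resulting arrays would then be fed to an LSTM, GRU, or any of the other architectures compared in the study.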


Assessment of the Prediction Derived from Larger Ensemble Size and Different Initial Dates in GloSea6 Hindcast (기상청 기후예측시스템(GloSea6) 과거기후 예측장의 앙상블 확대와 초기시간 변화에 따른 예측 특성 분석)

  • Kim, Ji-Yeong;Park, Yeon-Hee;Ji, Heesook;Hyun, Yu-Kyung;Lee, Johan
    • Atmosphere, v.32 no.4, pp.367-379, 2022
  • In this paper, the performance of the Korea Meteorological Administration (KMA) Global Seasonal forecasting system version 6 (GloSea6) is evaluated by assessing the effects of a larger ensemble size and by testing different initial conditions for the hindcast on sub-seasonal to seasonal scales. The number of ensemble members is increased from 3 to 7. The Ratio of Predictable Components (RPC) approaches the appropriate signal magnitude as the ensemble size increases. Annual variability improves for all basic variables, mainly in the mid-high latitudes. Over East Asia, there are enhancements especially in the 500 hPa geopotential height and 850 hPa wind fields, suggesting potential to improve East Asian monsoon prediction. Reliability also improves more with ensemble size in summer than in winter. To assess the effect of different initial conditions, area-mean normalized biases and correlation coefficients are compared for each basic variable across hindcasts from the four initial dates. Performance is better in summer when the initial date closest to the forecast time is used. On the seasonal scale, it is better to use all four initial dates, which increases the maximum ensemble size to 672, mainly in winter. With the larger ensemble size, therefore, it is most efficient to use two initial dates for 60-day prediction and four initial dates for 6-month prediction, similar to the current time-lagged ensemble method.

Dynamic Per-Branch History Length Fitting for High-Performance Processor (고성능 프로세서를 위한 분기 명령어의 동적 History 길이 조절 기법)

  • Kwak, Jong-Wook;Jhang, Seong-Tae;Jhon, Chu-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI, v.44 no.2 s.314, pp.1-10, 2007
  • Branch prediction accuracy is critical for overall system performance. The branch misprediction penalty is one of the most significant performance limiters as pipelines deepen and the number of instructions issued per cycle increases. In this paper, we propose a dynamic per-branch history length fitting method that tracks data dependencies among register-writing instructions. The proposed solution first identifies the key branches and then selectively uses their histories. To support this mechanism, we provide a history length adjustment algorithm and the required hardware module. Simulation results show that the proposed mechanism outperforms the previous fixed static method by up to 5.96% in prediction accuracy. Furthermore, our method improves performance even relative to profiled results, which are generally considered optimal.
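The per-branch history length idea can be illustrated with a toy gshare-style predictor in which each static branch indexes the pattern table using its own history length. This is a simplified software sketch; the paper's hardware mechanism, which derives the length from register-write dependencies, is not modeled here.

```python
class PerBranchGshare:
    """Toy gshare predictor with a selectable history length per branch."""

    def __init__(self, table_bits=10):
        self.table_bits = table_bits
        # 2-bit saturating counters, initialized weakly not-taken
        self.counters = [1] * (1 << table_bits)
        self.history = 0        # global branch history shift register
        self.hist_len = {}      # per-branch PC -> history length in bits

    def set_history_length(self, pc, bits):
        self.hist_len[pc] = bits

    def _index(self, pc):
        bits = self.hist_len.get(pc, self.table_bits)
        hist = self.history & ((1 << bits) - 1)   # use only this branch's bits
        return (pc ^ hist) & ((1 << self.table_bits) - 1)

    def predict(self, pc):
        return self.counters[self._index(pc)] >= 2

    def update(self, pc, taken):
        i = self._index(pc)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)
        self.history = ((self.history << 1) | int(taken)) & 0xFFFF
```

A branch given history length 0 behaves like a simple bimodal predictor, while longer lengths let it exploit correlation with recent branches; choosing that length per branch is the point of the method above.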

Plant breeding in the 21st century: Molecular breeding and high throughput phenotyping

  • Sorrells, Mark E.
    • Proceedings of the Korean Society of Crop Science Conference, 2017.06a, pp.14-14, 2017
  • The discipline of plant breeding is experiencing a renaissance that is impacting crop improvement as a result of new technologies; however, fundamental questions remain about predicting the phenotype and how the environment and genetics shape it. Inexpensive DNA sequencing, genotyping, new statistical methods, high throughput phenotyping and gene editing are revolutionizing breeding methods and strategies for improving both quantitative and qualitative traits. Genomic selection (GS) models use genome-wide markers to predict performance for both phenotyped and non-phenotyped individuals. Aerial and ground imaging systems generate data on correlated traits such as canopy temperature and normalized difference vegetation index that can be combined with genotypes in multivariate models to further increase prediction accuracy and reduce the cost of advanced trials with limited replication in time and space. The design of a GS training population is crucial to the accuracy of prediction models and can be affected by many factors, including population structure and composition. Prediction models can incorporate performance over multiple environments and assess GxE effects to identify a highly predictive subset of environments. We have developed a methodology for analyzing unbalanced datasets using genome-wide marker effects to group environments and identify outlier environments. Environmental covariates can be identified using a crop model and used in a GS model to predict GxE in unobserved environments and to predict performance under climate change scenarios. These new tools and knowledge challenge the plant breeder to ask the right questions and to choose the tools appropriate for their crop and target traits. Contemporary plant breeding requires teams with expertise in genetics, phenotyping and statistics to improve efficiency and increase prediction accuracy in terms of genotypes, experimental design and environment sampling.


A Case Study on Machine Learning Applications and Performance Improvement in Learning Algorithm (기계학습 응용 및 학습 알고리즘 성능 개선방안 사례연구)

  • Lee, Hohyun;Chung, Seung-Hyun;Choi, Eun-Jung
    • Journal of Digital Convergence, v.14 no.2, pp.245-258, 2016
  • This paper aims to show how significant results can be achieved by improving the performance of learning algorithms in machine learning research. Research papers reporting results from machine learning methods were collected as data for this case study, and suitable machine learning methods for each field were selected and suggested. As a result, SVM for engineering, decision tree algorithms for medical science, and SVM for other fields showed their efficiency in terms of frequency of use and classification/prediction performance. By analyzing these application cases, a general characterization of application plans is drawn. Machine learning application has three steps: (1) data collection; (2) data learning through an algorithm; and (3) significance testing of the algorithm. Performance is improved at each step by combining algorithms. Ways of improving performance are classified as multiple machine learning structure modeling, $+{\alpha}$ machine learning structure modeling, and so forth.

A Strategy for improving Performance of Q-learning with Prediction Information (예측 정보를 이용한 Q-학습의 성능 개선 기법)

  • Lee, Choong-Hyeon;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society, v.7 no.4, pp.105-116, 2007
  • Agent learning is becoming more and more useful in game environments, but it takes a long time to produce satisfactory results, so a good method is needed to shorten the learning time. In this paper, we present a strategy for improving the learning performance of Q-learning using prediction information. The method records the chosen action at each state in the Q-learning algorithm, storing the referred value in the P-table of a prediction module, and then searches the table for values with high frequency. These values are used to update a secondary compensation value in the Q-table. Our experiments show that this approach yields an average efficiency improvement of 9% after the midpoint of the learning experiments, and that the more actions there are in the state space, the higher the performance gain.
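The P-table idea described above, counting how often each action is chosen per state and favoring high-frequency entries, can be sketched as follows. The frequency bonus term and its weight are illustrative assumptions, not the paper's exact compensation update.

```python
from collections import defaultdict

class PredictiveQLearner:
    """Q-learning with a P-table of per-state action pick counts."""

    def __init__(self, alpha=0.1, gamma=0.9, bonus=0.05):
        self.q = defaultdict(float)   # Q-table: (state, action) -> value
        self.p = defaultdict(int)     # P-table: (state, action) -> pick count
        self.alpha, self.gamma, self.bonus = alpha, gamma, bonus

    def value(self, state, actions):
        """Greedy state value over the available actions."""
        return max(self.q[(state, a)] for a in actions)

    def choose(self, state, actions):
        # Prefer the highest Q-value plus a small bonus for
        # historically frequent (predicted) actions.
        return max(actions,
                   key=lambda a: self.q[(state, a)]
                   + self.bonus * self.p[(state, a)])

    def update(self, state, action, reward, next_state, next_actions):
        self.p[(state, action)] += 1
        target = reward + self.gamma * self.value(next_state, next_actions)
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

An exploration policy (e.g. epsilon-greedy around `choose`) would be layered on top in a real game agent.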


Resource Demand and Price Prediction-based Grid Resource Transaction Model (자원 요구량과 가격 예측 기반의 그리드 자원 거래 모델)

  • Kim, In-Kee;Lee, Jong-Sik
    • Journal of KIISE: Computing Practices and Letters, v.12 no.5, pp.275-285, 2006
  • This paper proposes an efficient market mechanism-based resource transaction model for grid computing. The model predicts the next resource demand of users and suggests a reasonable resource price for both customers and resource providers. It increases resource transactions between customers and providers and reduces the average transaction response time. To improve the accuracy of resource demand prediction and to suggest reasonable prices, the model introduces a statistics-based prediction model and a microeconomic price decision model. For performance evaluation, this paper measures the resource demand prediction accuracy, transaction response time, number of transactions, and resource utilization. With a reliable prediction accuracy of 87.45%, the model achieves a 72.39% lower response time than existing resource transaction models in a grid computing environment, while the number of transactions and resource utilization increase by up to 162.56% and 230%, respectively.
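The statistics-based demand prediction could be as simple as an exponentially weighted moving average over a user's recent requests, with a price rule that raises the suggested price when predicted demand exceeds capacity. Both functions are generic sketches; the abstract does not specify the paper's actual estimator or pricing model.

```python
def ewma_forecast(demands, alpha=0.3):
    """Exponentially weighted moving average forecast of the next demand."""
    forecast = demands[0]
    for d in demands[1:]:
        forecast = alpha * d + (1 - alpha) * forecast
    return forecast

def suggest_price(base_price, predicted_demand, capacity, elasticity=0.5):
    """Raise the price above base when predicted demand exceeds capacity."""
    ratio = predicted_demand / capacity
    return base_price * (1 + elasticity * max(0.0, ratio - 1.0))
```

Here `alpha` weights recent demand more heavily and `elasticity` controls how sharply scarcity raises the suggested price; both are illustrative parameters.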

Development of AI and IoT-based smart farm pest prediction system: Research on application of YOLOv5 and Isolation Forest models (AI 및 IoT 기반 스마트팜 병충해 예측시스템 개발: YOLOv5 및 Isolation Forest 모델 적용 연구)

  • Mi-Kyoung Park;Hyun Sim
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.19 no.4, pp.771-780, 2024
  • In this study, we implemented a real-time pest detection and prediction system for a strawberry farm using a computer vision model based on the YOLOv5 architecture and an Isolation Forest classifier. The performance evaluation showed that the YOLOv5 model achieved a mean average precision (mAP@0.5) of 78.7%, an accuracy of 92.8%, a recall of 90.0%, and an F1-score of 76%, indicating high predictive performance. The system was designed to be applicable not only to strawberry farms but also to other crops and environments. Based on data collected from a tomato farm, a new AI model was trained, achieving a prediction accuracy of over 85% for major diseases such as late blight and yellow leaf curl virus, an improvement of more than 10% over the previous model.