• Title/Summary/Keyword: Detection algorithm

Search Results: 6,891

Multi-resolution SAR Image-based Agricultural Reservoir Monitoring (농업용 저수지 모니터링을 위한 다해상도 SAR 영상의 활용)

  • Lee, Seulchan;Jeong, Jaehwan;Oh, Seungcheol;Jeong, Hagyu;Choi, Minha
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.497-510 / 2022
  • Agricultural reservoirs are essential structures for water supply during dry periods on the Korean peninsula, where water resources are unevenly distributed in time. For efficient water management, systematic and effective monitoring of medium-small reservoirs is required. Synthetic Aperture Radar (SAR), with its all-weather observation capability, provides a way to monitor such reservoirs continuously. This study evaluates the applicability of SAR for monitoring medium-small reservoirs using Sentinel-1 (10 m resolution) and Capella X-SAR (1 m resolution) at the Chari (CR), Galjeon (GJ), and Dwitgol (DG) reservoirs located in Ulsan, Korea. Water detection results obtained by applying a Z fuzzy function-based threshold (Z-thresh) and Chan-Vese (CV), an object detection-based segmentation algorithm, are quantitatively evaluated against UAV-detected water boundaries (UWB). Accuracy metrics from Z-thresh were 0.87, 0.89, and 0.77 (at CR, GJ, and DG, respectively) using Sentinel-1 and 0.78, 0.72, and 0.81 using Capella, and improvements were observed when CV was applied (Sentinel-1: 0.94, 0.89, 0.84; Capella: 0.92, 0.89, 0.93). Waterbody boundaries detected from Capella agreed relatively well with UWB; however, false detections and missed detections occurred because of speckle noise arising from its high resolution. When masked with optical sensor-based supplementary images, improvements of up to 13% were observed. Once more accurate and precise SAR-based water detection techniques are developed, continuous monitoring of the available water quantity is expected to enable more effective water resource management.
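
A minimal sketch of the Z fuzzy function-based thresholding idea mentioned above, assuming SAR backscatter in dB as input; the breakpoints a, b and the 0.5 membership cutoff are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

def z_membership(x, a, b):
    """Z-shaped fuzzy membership: 1 below a, 0 above b, smooth in between."""
    x = np.asarray(x, dtype=float)
    y = np.ones_like(x)
    mid = (a + b) / 2.0
    left = (x > a) & (x <= mid)
    right = (x > mid) & (x < b)
    y[left] = 1 - 2 * ((x[left] - a) / (b - a)) ** 2
    y[right] = 2 * ((x[right] - b) / (b - a)) ** 2
    y[x >= b] = 0.0
    return y

def detect_water(sigma0_db, a=-22.0, b=-15.0, cutoff=0.5):
    """Label pixels whose water membership exceeds the cutoff as water."""
    return z_membership(sigma0_db, a, b) >= cutoff
```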

The Accuracy Evaluation of Digital Elevation Models for Forest Areas Produced Under Different Filtering Conditions of Airborne LiDAR Raw Data (항공 LiDAR 원자료 필터링 조건에 따른 산림지역 수치표고모형 정확도 평가)

  • Cho, Seungwan;Choi, Hyung Tae;Park, Joowon
    • Journal of agriculture & life science / v.50 no.3 / pp.1-11 / 2016
  • There has been increasing interest in LiDAR (Light Detection And Ranging)-based DEMs (Digital Elevation Models) for acquiring three-dimensional topographic information. To produce a LiDAR DEM with better accuracy, the filtering process is crucial: only ground-reflected LiDAR points are kept to construct the DEM, while non-ground points must be removed from the raw LiDAR data. In particular, different input values for the parameters of the filtering algorithm are expected to produce different products. This study therefore aims to contribute to a better understanding of how the level of the GroundFilter algorithm's Mean parameter (GFmn), as implemented in the FUSION software, affects the accuracy of LiDAR DEM products, using LiDAR data collected for the Hwacheon, Yangju, Gyeongsan and Jangheung experimental watersheds. The effect of the GFmn level on product accuracy is estimated by measuring and comparing the residuals between field-surveyed elevations and the elevations of LiDAR DEMs produced at different GFmn levels at the same sample locations. To test whether there are any differences among the five GFmn levels (1, 3, 5, 7 and 9), a one-way ANOVA is conducted. The one-way ANOVA shows that the GFmn level significantly affects accuracy (F-value: 4.915, p < 0.01). Given this significant effect, a Tukey HSD test is conducted as a post hoc test to group the levels by their significant differences; the GFmn levels divide into two subsets ('7, 5, 9, 3' vs. '1'). Examining the residuals of each individual level, the LiDAR DEM is generated most accurately when GFmn is set to 7. Through this study, the most desirable parameter value can be suggested for producing filtered LiDAR DEM data that provides the most accurate elevation information.
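
As a rough illustration of the statistical comparison described above, the sketch below runs a one-way ANOVA on elevation residuals grouped by GFmn level; the residual values are synthetic placeholders standing in for field-surveyed elevations compared against each filtered DEM.

```python
import numpy as np
from scipy import stats

# Hypothetical residuals (field elevation minus DEM elevation, metres)
# for DEMs filtered with GFmn = 1, 3, 5, 7 and 9.
rng = np.random.default_rng(0)
residuals = {level: rng.normal(bias, 0.3, size=50)
             for level, bias in zip([1, 3, 5, 7, 9], [0.6, 0.25, 0.2, 0.15, 0.22])}

# One-way ANOVA: does the GFmn level affect DEM accuracy?
f_value, p_value = stats.f_oneway(*residuals.values())
print(f"F = {f_value:.3f}, p = {p_value:.4f}")
```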

Development of Position Encoding Circuit for a Multi-Anode Position Sensitive Photomultiplier Tube (다중양극 위치민감형 광전자증배관을 위한 위치검출회로 개발)

  • Kwon, Sun-Il;Hong, Seong-Jong;Ito, Mikiko;Yoon, Hyun-Suk;Lee, Geon-Song;Sim, Kwang-Souk;Rhee, June-Tak;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging / v.42 no.6 / pp.469-477 / 2008
  • Purpose: The goal of this paper is to present the design and performance of a position encoding circuit for a $16{\times}16$ multi-anode position-sensitive photomultiplier tube for small animal PET scanners. The circuit, which reduces the number of readout channels from 256 to 4, is based on a charge division method utilizing a resistor array. Materials and Methods: The position encoding circuit was simulated with PSpice before fabrication. It reads out the signals from H9500 flat panel PMTs (Hamamatsu Photonics K.K., Japan) on which $1.5{\times}1.5{\times}7.0\;mm^3$ $L_{0.9}GSO$ ($Lu_{1.8}Gd_{0.2}SiO_{5}:Ce$) crystals were mounted. For coincidence detection, two different PET modules were used: one consisted of a $29{\times}29\;L_{0.9}GSO$ crystal layer, and the other consisted of two layers, $28{\times}28$ and $29{\times}29\;L_{0.9}GSO$, offset relative to each other by half a crystal pitch in the x- and y-directions. A crystal mapping algorithm was also developed to identify crystals. Results: Each crystal was clearly visible in flood images. The crystal identification capability was enhanced further by changing the values of the resistors near the edge of the resistor array. The energy resolution of individual crystals was about 11.6% (SD 1.6). The flood images were segmented well with the proposed crystal mapping algorithm. Conclusion: The position encoding circuit yielded a clear separation of crystals and sufficient energy resolution with the H9500 flat-panel PMT and $L_{0.9}GSO$ crystals. The circuit is suitable for use in small animal PET scanners.
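
Charge-division readouts of this kind typically recover the interaction position from the four encoded outputs with Anger-type arithmetic; the sketch below shows only that generic formula and does not reproduce the paper's resistor network or edge corrections.

```python
def anger_position(a, b, c, d):
    """Estimate the interaction position from the four corner outputs of a
    resistor-based charge-division readout (Anger-logic style)."""
    total = a + b + c + d                 # approximates the deposited energy
    x = ((b + d) - (a + c)) / total
    y = ((a + b) - (c + d)) / total
    return x, y, total

# Example: x, y, energy = anger_position(0.2, 0.4, 0.1, 0.3)
```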

The Effect of Rain on Traffic Flows in Urban Freeway Basic Segments (기상조건에 따른 도시고속도로 교통류변화 분석)

  • 최정순;손봉수;최재성
    • Journal of Korean Society of Transportation / v.17 no.1 / pp.29-39 / 1999
  • An earlier study of the effect of rain found that the capacity of freeway systems is reduced, but did not address the effects of rain on the nature of traffic flows. Indeed, substantial variation with the intensity of adverse weather is to be expected, so its effects must be considered in freeway facility design; however, all of the data in the Highway Capacity Manual (HCM) come from ideal conditions. The primary objective of this study is to investigate the effect of rain on urban freeway traffic flows in Seoul. To do so, the relations between three key traffic variables (flow rate, speed, occupancy), their threshold values between congested and uncongested traffic flow regimes, and the speed distribution were investigated. The traffic data from the Olympic Expressway in Seoul were obtained from an image detection system (Autoscope) at 30-second and 1-minute aggregation intervals. The slope of the regression line relating flow to occupancy in the uncongested regime decreases when it is raining. In essence, this result indicates that the average service flow rate (which may be interpreted as the capacity of the freeway) is reduced as weather conditions deteriorate. The reduction is in the range of 10 to 20%, which agrees with the range proposed by the 1994 US HCM. It is noteworthy that the service flow rates of the inner lanes are relatively higher than those of the other lanes. The average speed is also reduced on rainy days, but the flow-speed relationship and the threshold values of speed and occupancy (called the critical speed and critical occupancy) are not very sensitive to the weather conditions.
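
The flow-occupancy slope comparison described above can be sketched as follows; the critical occupancy and the detector arrays are placeholders for the Autoscope records used in the study.

```python
import numpy as np

def uncongested_slope(occupancy, flow, critical_occupancy):
    """Least-squares slope of flow vs. occupancy below the critical occupancy,
    i.e. an estimate of the average service flow rate for that regime."""
    occupancy = np.asarray(occupancy)
    flow = np.asarray(flow)
    mask = occupancy < critical_occupancy
    slope, _intercept = np.polyfit(occupancy[mask], flow[mask], 1)
    return slope

# slope_dry and slope_rain would come from 30-s records on dry vs. rainy days:
# reduction = (slope_dry - slope_rain) / slope_dry   # paper reports roughly 10-20%
```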


The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.1-23 / 2018
  • Since the beginning of the 21st century, various high-quality services have emerged with the growth of the Internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, has been expanding rapidly. As E-commerce grows, customers can easily find what they want to buy while comparing various products, because more products are registered at online shopping malls. However, a problem has arisen with this growth: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search for desired products with a generalized keyword, too many products are returned; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing text in images can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be retrieved by the current text-based search systems. If the information in these images can be converted to text, customers can search for products using product details, which makes shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, for example when the text is small or the fonts are inconsistent. Therefore, this research proposes a way to recognize keywords in catalogs with a deep learning algorithm, the state of the art in image recognition since the 2010s. The Single Shot Multibox Detector (SSD), a well-established model for object detection, can be used with its structure redesigned to take into account the differences between text and generic objects. However, because deep learning algorithms must be trained by supervised learning, the SSD model needs a large amount of labeled training data. Training data could be collected by manually labeling the location and class of the text in catalogs, but manual collection raises several problems: some keywords would be missed due to human error, collecting the required amount of data is very time-consuming, and hiring many workers to shorten the time is costly. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures, and saves the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves. The SSD model achieved a recognition rate of 81.99% with 20,000 images created by the program. Moreover, this research tested the efficiency of the SSD model on different kinds of data to analyze which features of the data influence the performance of recognizing text in images. The results show that the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spacing among keywords, and differences in background images are all related to the performance of the SSD model. This test can guide performance improvement of the SSD model, or of other deep learning-based text-recognition models, through high-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in E-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in catalogs.
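
A minimal sketch of such an automatic training-data generator, assuming Pillow is available: it renders keywords onto a blank catalog-like canvas and records their bounding boxes as SSD-style labels. The fonts, background, and keyword list are placeholders; the program described in the paper also composites product pictures and varies layouts.

```python
import json
import random
from PIL import Image, ImageDraw, ImageFont

def make_training_image(keywords, size=(512, 512), path="sample.png"):
    """Draw keywords at random positions and return their bounding-box labels."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    labels = []
    for word in keywords:
        x = random.randint(0, size[0] - 120)
        y = random.randint(0, size[1] - 20)
        draw.text((x, y), word, fill="black", font=font)
        x0, y0, x1, y1 = draw.textbbox((x, y), word, font=font)
        labels.append({"keyword": word, "bbox": [x0, y0, x1, y1]})
    img.save(path)
    return labels

print(json.dumps(make_training_image(["cotton", "machine-washable"])))
```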

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, making it difficult to identify the cause. Previous studies predicting failures in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, user errors, and so on; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring inside servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure on one server can cause failures on other servers, or be triggered by them. In other words, while existing studies analyzed failures under the assumption of a single server that does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures of each device are sorted in chronological order, and when a failure of one device is followed by a failure of another device within 5 minutes, the failures are defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within those sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states, was used. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used to reflect the fact that each server contributes differently to a complex failure; this algorithm increases prediction accuracy by giving larger weights to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was analyzed under two assumptions, a single-server state and a multiple-server state, and the results were compared. The second experiment improved prediction accuracy for the complex-server case by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multiple-server assumption, all five servers were predicted to fail. This result supports the hypothesis that servers affect one another. The study confirms that prediction performance is better when multiple servers are assumed than when a single server is assumed. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that predicts failures occurring on servers in data centers. Using these results, failures are expected to be preventable in advance.
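
A compact PyTorch sketch of the hierarchical idea, under the assumption of one shared LSTM encoder per server and soft attention over the server representations; layer sizes are illustrative and the paper's exact Hierarchical Attention Network architecture is not reproduced.

```python
import torch
import torch.nn as nn

class ServerAttentionPredictor(nn.Module):
    """Encode each server's resource time series with an LSTM, weight the
    servers by attention (their estimated influence), and predict the
    probability of a complex failure."""
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, servers, time, features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)             # last hidden state per server
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over servers
        context = (w * h).sum(dim=1)            # influence-weighted summary
        return torch.sigmoid(self.head(context))

# prob = ServerAttentionPredictor()(torch.randn(4, 5, 60, 8))  # 4 samples, 5 servers
```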

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.1-16 / 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to the statistics of the World Health Organization, approximately 1.24 million deaths occurred on the world's roads in 2010. To reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, driver training programs, and so on. Records of traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationship between traffic accidents and related factors, including vehicle design, road design, weather, driver behavior, etc.; the insight derived from such analyses can be used in accident prevention. Traffic accident data mining is the activity of finding useful knowledge about such relationships that is not yet well known and that users may be interested in. Many studies on mining accident data have been reported over the past two decades. Most of them focused on predicting the risk of accidents from accident-related factors, using supervised learning methods such as decision trees, logistic regression, k-nearest neighbors, and neural networks. However, the prediction models derived by these algorithms are too complex for humans to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy for humans to understand, so additional analytic work is necessary. Rule-based learning methods are appropriate when we want to derive a comprehensible form of knowledge about the target domain. They derive a set of if-then rules that represent the relationship between the target feature and other features. Rules are fairly easy for humans to interpret, so they can provide insight and comprehensible results. Association rule learning and subgroup discovery are representative rule-based learning methods for descriptive tasks. These two approaches have been used in a wide range of areas, from transaction analysis and accident data analysis to the detection of statistically significant patient risk groups and the discovery of key persons in social communities. We use both the association rule learning method and the subgroup discovery method to discover useful patterns from a traffic accident dataset consisting of many features, including driver profile, accident location, accident type, vehicle information, regulation violations, and so on. The association rule learning method, an unsupervised learning method, searches for frequent item sets in the data and translates them into rules. In contrast, the subgroup discovery method is a supervised learning method that discovers rules for user-specified concepts satisfying a certain degree of generality and unusualness. Depending on which aspect of the data we focus on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, some postprocessing steps are taken to make the rule set more compact and easier to understand by removing uninteresting or redundant rules. We conducted a set of experiments mining our traffic accident data in both unsupervised and supervised modes to compare these rule-based learning algorithms. The experiments reveal that association rule learning, in its pure unsupervised mode, can discover some hidden relationships among the features. Under a supervised learning setting with a combinatorial target feature, however, the subgroup discovery method finds good rules much more easily than the association rule learning method, which requires considerable effort to tune its parameters.
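
For the unsupervised side, a toy association-rule sketch on a one-hot accident table, assuming the mlxtend library is available (not necessarily the toolkit used in the paper); the features and thresholds are illustrative only.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot accident records; real data would carry many more
# features (driver profile, location, vehicle, violation, severity, ...).
records = pd.DataFrame([
    {"rain": 1, "night": 1, "speeding": 0, "severe": 1},
    {"rain": 1, "night": 0, "speeding": 1, "severe": 1},
    {"rain": 0, "night": 1, "speeding": 1, "severe": 0},
    {"rain": 0, "night": 0, "speeding": 0, "severe": 0},
], dtype=bool)

itemsets = apriori(records, min_support=0.25, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```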

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases in obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is especially likely to be omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias such as Wikipedia are a major source for information extraction by intelligent applications such as information retrieval and question answering systems. However, the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system addresses is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in ordinary zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor make up the search space. The main technique used in previous research is binary classification over all noun phrases in the search space: the noun phrase classified as an antecedent with the highest confidence is selected. We instead propose to view antecedent search as the problem of assigning antecedent-indicator labels to a sequence of noun phrases; in other words, sequence labeling is employed for antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we use a structural SVM which receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values, indicating whether or not the corresponding noun phrase is the antecedent. The structural SVM we use is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization. To train and test the system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus, the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, the system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. The experiments showed that our system achieves F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. Future work that enables the system to utilize semantic information is expected to lead to a significant performance improvement.
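
The Pegasos-style subgradient update behind the structural SVM can be sketched as below for the simplified case where exactly one candidate noun phrase preceding the zero anaphor is the antecedent; the feature design and the full sequence-labeling formulation of the paper are abstracted away.

```python
import numpy as np

def pegasos_structural_step(w, phi, gold_idx, lam, t):
    """One subgradient step: phi[i] is the joint feature vector for choosing
    candidate i as the antecedent, gold_idx is the annotated antecedent."""
    # Loss-augmented inference: every wrong candidate gets a margin of 1.
    scores = phi @ w + (np.arange(len(phi)) != gold_idx)
    pred = int(np.argmax(scores))
    eta = 1.0 / (lam * t)                     # Pegasos step size
    w = (1 - eta * lam) * w                   # shrink (regularization term)
    if pred != gold_idx:                      # margin violated: move toward gold
        w = w + eta * (phi[gold_idx] - phi[pred])
    return w
```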

Reproducibility of Regional Pulse Wave Velocity in Healthy Subjects

  • Im Jae-Joong;Lee, Nak-Bum;Rhee Moo-Yong;Na Sang-Hun;Kim, Young-Kwon;Lee, Myoung-Mook;Cockcroft John R.
    • International Journal of Vascular Biomedical Engineering / v.4 no.2 / pp.19-24 / 2006
  • Background: Pulse wave velocity (PWV), which is inversely related to the distensibility of an arterial wall, offers a simple and potentially useful approach for the evaluation of cardiovascular diseases. In spite of the clinical importance and widespread use of PWV, there exists no standard either for pulse sensors or for the system requirements for accurate pulse wave measurement. The objective of this study was to assess the reproducibility of PWV values using a newly developed PWV measurement system in healthy subjects prior to a large-scale clinical study. Methods: The system used for the study was the PP-1000 (Hanbyul Meditech Co., Korea), which provides regional PWV values based on simultaneous measurements of electrocardiography (ECG), phonocardiography (PCG), and pulse waves from four different arterial sites (carotid, femoral, radial, and dorsalis pedis). Seventeen healthy male subjects with a mean age of 33 years (range 22 to 52 years) and without any cardiovascular disease participated in the experiment. Two observers (A and B) each performed two consecutive measurements on the same subject in a random order. To evaluate system reproducibility, two analyses (within-observer and between-observer) were performed and expressed in terms of mean difference ${\pm}2SD$, as described by Bland-Altman plots. Results: Mean and SD of PWVs for the aorta, arm, and leg were $7.07{\pm}1.48m/sec,\;8.43{\pm}1.14m/sec,\;and\;8.09{\pm}0.98m/sec$ measured by observer A and $6.76{\pm}1.00m/sec,\;7.97{\pm}0.80m/sec,\;and\;7.97{\pm}0.72m/sec$ by observer B, respectively. Between-observer differences ($mean{\pm}2SD$) for the aorta, arm, and leg were $0.14{\pm}0.62m/sec,\;0.18{\pm}0.84m/sec,\;and\;0.07{\pm}0.86m/sec$, and the correlation coefficients were high, especially 0.93 for aortic PWV. Within-observer differences ($mean{\pm}2SD$) for the aorta, arm, and leg were $0.01{\pm}0.26m/sec,\;0.02{\pm}0.26m/sec,\;and\;0.08{\pm}0.32m/sec$ for observer A and $0.01{\pm}0.24m/sec,\;0.04{\pm}0.28m/sec,\;and\;0.01{\pm}0.20m/sec$ for observer B, respectively. All the measurements showed significantly high correlation coefficients, ranging from 0.94 to 0.99. Conclusion: The PWV measurement system used for the study offers comfortable and simple operation and provides accurate analysis results with high reproducibility. Since the reproducibility of the measurement is critical for diagnosis in clinical use, it is necessary to provide an accurate algorithm for the detection of additional features such as the flow wave, reflection wave, and dicrotic notch from a pulse waveform. This study will be extended to the comparison of PWV values from patients with various vascular risks for clinical application. Data acquired from the study could be used to determine the appropriate sample size for further studies on various types of arteriosclerosis-related vascular disease.
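
Regional PWV itself reduces to a simple ratio of path length to pulse transit time, and the reproducibility analysis is a Bland-Altman computation; a minimal sketch with illustrative numbers follows.

```python
import numpy as np

def pulse_wave_velocity(distance_m, transit_time_s):
    """PWV = arterial path length between two sites / pulse transit time."""
    return distance_m / transit_time_s

def bland_altman_limits(measure_a, measure_b):
    """Mean difference and +/-2SD limits of agreement between two observers."""
    diff = np.asarray(measure_a) - np.asarray(measure_b)
    return diff.mean(), 2 * diff.std(ddof=1)

# Example: a 0.5 m carotid-femoral path travelled in 70 ms gives PWV ~ 7.1 m/s.
print(pulse_wave_velocity(0.5, 0.070))
```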


Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.119-142 / 2015
  • With the explosive growth in the volume of information, Internet users experience considerable difficulty in obtaining the information they need online. Against this backdrop, ever-greater importance is placed on recommender systems that provide information catering to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from domain constraints. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, clustering model or dependency network model. This filtering technique not only mitigates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such a tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved; cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into clustering-based CF (CBCF) to propose predictive clustering-based CF (PCCF), which addresses the issues of reduced coverage and unstable performance. The method improves performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users. Furthermore, the issue of reduced coverage is also improved by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of change (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of the subsequent changes in system performance. The test results revealed that the suggested method produced only an insignificant improvement in performance in comparison with the existing techniques, and it failed to achieve a significant improvement in the standard deviation, which indicates the degree of data fluctuation. Nonetheless, it resulted in marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques, and will consider introducing a high-dimensional, parameter-free clustering algorithm or a deep learning-based model.
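
A toy sketch of the preference-prediction step, assuming a fuzzy membership vector over preference clusters, a Markov transition matrix between clusters, and per-cluster centroid ratings; all numbers are illustrative, not taken from the paper.

```python
import numpy as np

def predict_preference(memberships, transition, cluster_centroids):
    """Propagate fuzzy cluster memberships one step through the Markov
    transition matrix, then predict ratings as the membership-weighted
    average of the cluster centroid ratings."""
    next_memberships = memberships @ transition
    next_memberships /= next_memberships.sum()
    return next_memberships @ cluster_centroids

memberships = np.array([0.7, 0.2, 0.1])                      # current fuzzy memberships
transition = np.array([[0.80, 0.15, 0.05],
                       [0.10, 0.80, 0.10],
                       [0.05, 0.15, 0.80]])                  # cluster-to-cluster Markov model
centroids = np.array([[4.5, 2.0], [3.0, 3.5], [1.5, 4.8]])   # mean ratings of 2 items per cluster
print(predict_preference(memberships, transition, centroids))
```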