• Title/Summary/Keyword: 예측 기법 (prediction techniques)

Comparison of the Mid-term Changes at the Remnant Distal Aorta after Aortic Arch Replacement or Ascending Aortic Replacement for Treating Type A Aortic Dissection (A형 급성대동맥박리증에서 대동맥궁치환술과 상행대동맥치환술 후 잔존 원위부 대동맥의 변화에 대한 중기 관찰 비교)

  • Cho, Kwang-Jo;Woo, Jong-Su;Bang, Jung-Hee;Choi, Pill-Jo
    • Journal of Chest Surgery
    • /
    • v.40 no.6 s.275
    • /
    • pp.414-419
    • /
    • 2007
  • Background: Replacement of the ascending aorta is the standard surgical option for treating acute type A aortic dissection, but aortic arch replacement has recently been reported as an acceptable procedure for this disease. We compared the effects of aortic arch replacement for treating acute type A aortic dissection with those of ascending aortic replacement. Material and Method: From 2002 to 2006, 25 patients underwent surgical treatment for acute type A aortic dissection: 12 patients underwent ascending aortic replacement and 13 patients underwent aortic arch replacement. In the aortic arch group, an additional distal stent-graft was inserted during the operation in 5 patients. Nineteen patients (11 arch replacement patients and 8 ascending aortic replacement patients) were followed up at the outpatient clinic for an average of 756±373 days. All of these patients underwent CT scanning, and we analyzed their distal aortic segments. Result: Four patients who underwent ascending aortic replacement died, so the overall mortality rate was 16%. Among the 11 arch replacement patients with long-term follow-up, 2 patients (18.1%) developed distal aortic dilatation, and one of them later underwent thoracoabdominal aortic replacement. However, among the 8 ascending aortic replacement patients, 5 patients (62.5%) developed distal aortic dilatation. Conclusion: Aortic arch replacement is a safe option for treating acute type A aortic dissection. It could contribute to a reduced distal aortic dilatation rate and fewer secondary aortic procedures.

Risk Ranking Analysis for the City-Gas Pipelines in the Underground Laying Facilities (지하매설물 중 도시가스 지하배관에 대한 위험성 서열화 분석)

  • Ko, Jae-Sun;Kim, Hyo
    • Fire Science and Engineering
    • /
    • v.18 no.1
    • /
    • pp.54-66
    • /
    • 2004
  • In this article, we suggest a hazard-assessment method for underground pipelines and identify cost-effective pipeline-maintenance schemes. Three kinds of methods are applied as approaches to ranking the hazards of underground pipelines. The first is RBI (Risk Based Inspection), which assesses the effect on the neighboring population together with the dimension, thickness, and operating time of the pipe, and enables us to estimate the risk exposure quantitatively. The second is a scoring system based on the environmental factors of the buried pipelines. Last, we quantify the frequency of releases using Thomas' theory. In this work, as a result of assessing the hazard using the SPC scheme, the hazard scores related to corrosion of the gas pipelines range from 30 to 70, which means that the assessment criteria capture the relative hazards of actual pipelines well. Therefore, even if one pipeline region has a relatively low score, it can still have a high frequency of leakage because of its greater length. The acceptable limit of the release frequency of a pipeline is 2.50E-2 to 1.00E-1/yr, beyond which appropriate actions must be taken to bring the consequence below the acceptable level. The prediction of total frequency using regression analysis shows that the limit operating time of a pipeline is in the range of 11 to 13 years, which is consistent with that of the actual pipeline. In conclusion, the hazard-ranking scheme suggested in this research can be applied very effectively to maintaining underground pipelines.
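As a rough illustration of the regression step mentioned in this abstract (not the authors' SPC model), the sketch below fits an exponential trend of release frequency against pipeline operating time and finds the age at which the frequency reaches the acceptable limit quoted above; the data points are made-up placeholders.

```python
# Illustrative sketch: regressing release frequency on pipeline age to find
# the limit operating time. The observations below are hypothetical.
import numpy as np

age_years = np.array([2, 4, 6, 8, 10, 12])                              # hypothetical pipeline ages
freq_per_year = np.array([0.004, 0.009, 0.018, 0.032, 0.055, 0.090])    # hypothetical release frequencies

# Fit ln(f) = a*t + b, i.e. an exponential growth of frequency with age.
a, b = np.polyfit(age_years, np.log(freq_per_year), 1)

def predicted_frequency(t: float) -> float:
    return float(np.exp(a * t + b))

acceptable_limit = 1.00e-1  # upper bound of the acceptable range quoted in the abstract [1/yr]
limit_age = (np.log(acceptable_limit) - b) / a
print(f"Predicted limit operating time: {limit_age:.1f} years "
      f"(frequency {predicted_frequency(limit_age):.3f}/yr)")
```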

Literature Review and Analysis on Research Trends of Sociology in the Journal of Korean Gerontological Society (한국노년학의 사회학 분야 연구동향)

  • Kim, Ju-Hyun;Yeom, Jihye;Kim, Tae-il
    • 한국노년학
    • /
    • v.38 no.3
    • /
    • pp.745-766
    • /
    • 2018
  • The purpose of this study is to examine research trends in the articles published in the Journal of Korean Gerontological Society over the past 10 years. This study is based on the article by Won and Mo (2008), which classified previously published studies by theme, method, and application of theory. Of the 187 articles published in the past 10 years, 11 were about social change and institutions, 94 about social issues, 12 about social problems and deviation, 42 about social culture, 14 about gerontological theory, and 13 about residence/architecture. In the last 10 years, the most popular topic was the various ways of aging. A newly emerging topic was the effect of IT and technology on the quality of life of older adults. Other topics that gained interest were age discrimination and prejudice against aging. Trends in research methods showed increased use of qualitative methods. In the future, more research is needed to theorize the results of quantitative research. Furthermore, the use of qualitative research methods needs to increase in order to understand the lives of older adults in depth. Through more meta-analysis, the results of past research should be synthesized to obtain a bigger picture of Korean older adults.

A Study on Improvement of Collaborative Filtering Based on Implicit User Feedback Using RFM Multidimensional Analysis (RFM 다차원 분석 기법을 활용한 암시적 사용자 피드백 기반 협업 필터링 개선 연구)

  • Lee, Jae-Seong;Kim, Jaeyoung;Kang, Byeongwook
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.139-161
    • /
    • 2019
  • Use of the e-commerce market has become part of everyday life, and it has become important for customers to know where and how to make reasonable purchases of good-quality products. This change in purchase psychology tends to make it difficult for customers to make purchasing decisions amid vast amounts of information. In this situation, a recommendation system reduces the cost of information retrieval and improves satisfaction by analyzing the purchasing behavior of the customer. Amazon and Netflix are well-known examples of sales marketing using recommendation systems: in the case of Amazon, 60% of recommendations lead to purchases and a 35% increase in sales was achieved, while Netflix found that 75% of movie views come through its recommendation service. This personalization technique is considered one of the key strategies for one-to-one marketing, which is useful in online markets where salespeople do not exist. The recommendation techniques mainly used today include collaborative filtering and content-based filtering; hybrid techniques and association rules that combine them are also used in various fields. Of these, collaborative filtering is the most popular today. Collaborative filtering recommends products preferred by neighbors with similar preferences or purchasing behavior, based on the assumption that users who exhibited similar tendencies in purchasing or evaluating products in the past will show similar tendencies toward other products. However, most existing systems recommend only within the same category of products, such as books or movies, because the recommendation system estimates purchase satisfaction for a new, never-purchased item from the customer's ratings of similar items in the transaction data. In addition, there is a serious problem with the reliability of the purchase ratings used in recommendation systems. In particular, a 'compensated review' refers to a customer purchase rating that is intentionally manipulated through company intervention. In fact, Amazon has cracked down on these compensated reviews since 2016 and has worked hard to reduce false information and increase credibility. A survey showed that the average rating for products with compensated reviews was higher than for those without, and it turned out that compensated reviews are about 12 times less likely to give the lowest rating and about 4 times less likely to leave a critical opinion. As such, customer purchase ratings are full of various kinds of noise. This problem is directly related to the performance of recommendation systems, which aim to maximize profits by attracting highly satisfied customers in most e-commerce transactions. In this study, to solve this series of problems, we propose new indicators that can objectively substitute for existing customer purchase ratings by using the RFM multidimensional analysis technique. RFM multidimensional analysis is the most widely used analytical method in customer relationship management (CRM) marketing and is a data analysis method for selecting customers who are likely to purchase goods.
When the proposed index was verified against actual purchase history data, the accuracy was about 55%. This is the result of recommending a total of 4,386 different types of products that had never been bought before, so the verification result represents relatively high accuracy and utilization value. This study also suggests the possibility of a general recommendation system that can be applied to various offline product data. If additional data are acquired in the future, the accuracy of the proposed recommendation system can be improved.
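As a rough sketch of the idea described in this abstract (not the authors' exact pipeline), the code below derives RFM-style scores from a transaction log, treats their sum as a pseudo-rating in place of explicit purchase ratings, and feeds the resulting user-item matrix to a simple item-based collaborative filter. The column names and the equal weighting of R, F, and M are assumptions.

```python
# Minimal sketch, assuming a transaction log with columns
# customer_id, item_id, order_date, amount (all names are assumptions).
import numpy as np
import pandas as pd

def rfm_pseudo_ratings(tx: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
    """Turn implicit purchase history into RFM-based pseudo-ratings (3-15)."""
    grouped = tx.groupby("customer_id").agg(
        recency=("order_date", lambda d: (now - d.max()).days),
        frequency=("item_id", "count"),
        monetary=("amount", "sum"),
    )
    # Rank-based 1-5 scores; more recent purchases (smaller recency) score higher.
    r = np.ceil(grouped["recency"].rank(pct=True, ascending=False) * 5)
    f = np.ceil(grouped["frequency"].rank(pct=True) * 5)
    m = np.ceil(grouped["monetary"].rank(pct=True) * 5)
    grouped["pseudo_rating"] = r + f + m
    return grouped[["pseudo_rating"]]

def item_based_cf(tx: pd.DataFrame, pseudo: pd.DataFrame) -> pd.DataFrame:
    """Score items for each user via item-item cosine similarity on pseudo-ratings."""
    tx = tx.merge(pseudo.reset_index(), on="customer_id")
    ui = tx.pivot_table(index="customer_id", columns="item_id",
                        values="pseudo_rating", aggfunc="mean").fillna(0.0)
    item_vecs = ui.T.values
    norms = np.linalg.norm(item_vecs, axis=1, keepdims=True) + 1e-9
    sim = (item_vecs / norms) @ (item_vecs / norms).T        # item-item cosine similarity
    scores = ui.values @ sim / (np.abs(sim).sum(axis=0) + 1e-9)
    return pd.DataFrame(scores, index=ui.index, columns=ui.columns)
```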

Estimation of spatial distribution of snow depth using DInSAR of Sentinel-1 SAR satellite images (Sentinel-1 SAR 위성영상의 위상차분간섭기법(DInSAR)을 이용한 적설심의 공간분포 추정)

  • Park, Heeseong;Chung, Gunhui
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.12
    • /
    • pp.1125-1135
    • /
    • 2022
  • Damage from heavy snow does not occur very often, but when it does, it affects a wide area. To mitigate snow damage, it is necessary to know in advance the depth of snow that causes damage in each region. However, snow depth is measured only at observatory locations, so it is difficult to understand the spatial distribution of the snow depth that causes damage in a region. To understand the spatial distribution of snow depth, the point measurements are interpolated; however, estimating the spatial distribution is not easy when the number of snow-depth measurements is small and topographic characteristics such as altitude are not similar. To overcome this limit, satellite images such as Synthetic Aperture Radar (SAR) images can be analyzed using the Differential Interferometric SAR (DInSAR) method. DInSAR uses two SAR images acquired at two different times and is generally used to track minor changes in topography. In this study, the spatial distribution of snow depth was estimated by DInSAR analysis using dual-polarimetric IW-mode C-band SAR data from the Sentinel-1B satellite operated by the European Space Agency (ESA). In addition, snow depth was estimated using the geostationary satellite Chollian-2A (GK-2A) for comparison with the snow depth from the DInSAR method. As a result, the grid-based accuracy of snow cover estimation was about 0.92 for DInSAR and about 0.71 for GK-2A, indicating the high applicability of the DInSAR method. Although there were cases where the snow depth was overestimated, sufficient information was provided for estimating its spatial distribution, and this will be helpful in understanding the regional damage-causing snow depth.
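As a rough sketch of the DInSAR principle mentioned above (not the authors' processing chain), the code below computes the differential interferometric phase between two co-registered complex SAR patches and converts it to a line-of-sight path change using the standard relation d = -λφ/(4π). The input arrays are assumptions, and the further conversion from path change to snow depth, which depends on incidence angle and snow permittivity, is deliberately left as a placeholder.

```python
# Minimal sketch, assuming `slc_ref` and `slc_sec` are two co-registered
# complex-valued Sentinel-1 SLC patches of equal shape and that the
# flat-earth/topographic phase has already been removed.
import numpy as np

C_BAND_WAVELENGTH_M = 0.0555  # Sentinel-1 C-band wavelength, roughly 5.55 cm

def differential_phase(slc_ref: np.ndarray, slc_sec: np.ndarray) -> np.ndarray:
    """Interferometric phase between the two acquisitions, wrapped to (-pi, pi]."""
    interferogram = slc_ref * np.conj(slc_sec)
    return np.angle(interferogram)

def line_of_sight_change_m(phase: np.ndarray) -> np.ndarray:
    """Convert (unwrapped) phase to line-of-sight path change in metres."""
    return -C_BAND_WAVELENGTH_M * phase / (4.0 * np.pi)

# Example with synthetic data standing in for real SLC patches.
rng = np.random.default_rng(0)
slc_ref = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
slc_sec = slc_ref * np.exp(1j * 0.5)        # pretend a uniform 0.5 rad phase shift occurred
phase = differential_phase(slc_ref, slc_sec)
los_change = line_of_sight_change_m(phase)  # phase unwrapping is omitted in this sketch
# Converting LOS change to snow depth would additionally require the local
# incidence angle and the snow refractive index (placeholders, not shown here).
```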

Sentiment Analysis of Movie Reviews Using an Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated with text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels, and it is one of the most active research areas in natural language processing and text mining. Real online reviews are not only easy for others to collect openly, but they also affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys. Whether the posts on a website are positive or negative is reflected in customer response and sales, so companies try to identify this information; however, reviews on a website are not always well written and can be difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, the paragraph, the direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, as comparative models, the text classification algorithms related to sentiment analysis adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; to solve this problem, LSTM is used. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and why. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing.
Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to capture long-term dependencies. Furthermore, when the LSTM follows the CNN's pooling layer, the network has an end-to-end structure, so spatial and temporal features can be modeled simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved when training the kernel step by step. CNN-LSTM can improve on the weaknesses of each model, and the end-to-end structure has the advantage of improving learning layer by layer. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
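A minimal sketch of the kind of CNN-LSTM stack described above, applied to the Keras IMDB review data set; it is not the authors' exact architecture, and the hyperparameters (vocabulary size, sequence length, filter and unit counts, epochs) are illustrative assumptions.

```python
# Minimal CNN-LSTM sentiment classifier on the IMDB data set (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

VOCAB_SIZE, MAX_LEN = 20000, 200   # assumed hyperparameters

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
x_train = pad_sequences(x_train, maxlen=MAX_LEN)
x_test = pad_sequences(x_test, maxlen=MAX_LEN)

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),            # word embeddings
    layers.Conv1D(64, 5, activation="relu"),      # CNN extracts local n-gram features
    layers.MaxPooling1D(4),                       # pooled feature maps feed the LSTM
    layers.LSTM(64),                              # LSTM models the remaining sequence
    layers.Dense(1, activation="sigmoid"),        # positive/negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128,
          validation_data=(x_test, y_test))
```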

A Study on a Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It structurally avoids investment risk, so it is stable in the management of large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only handles billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates the investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of the cumulative rate of return, and the long test period provided a large amount of sample data. Compared with the traditional risk parity model, this experiment recorded improvements in both cumulative return and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment showed improved portfolio performance through reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
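A rough sketch of the idea described above, not the paper's exact procedure: an XGBoost regressor forecasts each asset's next-period volatility from lagged realized volatilities, the predictions are plugged into the covariance estimate via the historical correlation matrix, and risk parity weights are then solved numerically. The feature construction and window lengths are illustrative assumptions.

```python
# Illustrative sketch: XGBoost-predicted volatilities feeding a risk parity optimizer.
import numpy as np
import pandas as pd
from scipy.optimize import minimize
from xgboost import XGBRegressor

def predicted_volatilities(returns: pd.DataFrame, lookback: int = 20) -> np.ndarray:
    """Forecast each asset's next-period volatility from current and lagged rolling volatilities."""
    preds = []
    for col in returns.columns:
        vol = returns[col].rolling(lookback).std().dropna()
        X = np.column_stack([vol.shift(k).values for k in range(0, 5)])  # current + 4 lagged vols
        y = vol.shift(-1).values                                          # next-period volatility
        mask = ~np.isnan(X).any(axis=1) & ~np.isnan(y)
        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(X[mask], y[mask])
        preds.append(float(model.predict(X[-1:])[0]))  # forecast from the most recent features
    return np.array(preds)

def risk_parity_weights(cov: np.ndarray) -> np.ndarray:
    """Long-only weights whose risk contributions are (approximately) equal."""
    n = cov.shape[0]
    def objective(w):
        port_var = w @ cov @ w
        contrib = w * (cov @ w) / port_var          # fractional risk contributions
        return np.sum((contrib - 1.0 / n) ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
    return res.x

def allocate(returns: pd.DataFrame) -> np.ndarray:
    corr = returns.corr().values                    # historical correlation structure
    sigma = predicted_volatilities(returns)         # XGBoost-predicted volatilities
    cov = np.outer(sigma, sigma) * corr             # rebuilt covariance matrix
    return risk_parity_weights(cov)
```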

A Study on Legal and Institutional Improvement Measures for the Effective Implementation of SMS -Focusing on Aircraft Accident Investigation-

  • Yoo, Kyung-In
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.32 no.2
    • /
    • pp.101-127
    • /
    • 2017
  • Even with the benefits of the most advanced aviation technology, aircraft accidents occur constantly, while air passenger transportation volume is expected to double in the next 15 years. Since aviation safety cannot be secured only by post-accident safety actions arising from accident investigations, it has been recognized, and a consensus has formed, that proactive and predictive prevention measures are necessary. In this sense, the aviation Safety Management System (SMS) was introduced in 2008 and has been implemented in earnest since 2011. SMS is a proactive and predictive aircraft accident prevention measure: a mechanism that eliminates fundamental risk factors by addressing organizational factors beyond the technological and human factors related to aviation safety. The methodology is to collect hazards at all sites required for aircraft operations, build a database, analyze the risks, and, through managing the risks, keep them at or below an acceptable level. Therefore, improper implementation of SMS indicates that aircraft accident prevention is insufficient and is directly connected with aircraft accidents. Reports of duty-performance-related hazards, including one's own errors, are essential and most important in SMS. Under a just-culture policy for voluntary reporting, the guarantee of information providers' anonymity, non-punishment, and non-blame should be basically secured; however, reporting remains stagnant due to a lack of trust in the reporters' own organizations. It is necessary for the accountable executive (CEO) and senior management to take a leading role in fostering a safety culture, starting from a just culture, with a safety consciousness that balances safety and profit for the organization. Although a Ministry of Land, Infrastructure and Transport order, "Guidance on SMS Implementation," states the training required for the accountable executive (CEO) and senior management, it is not legally binding. It is therefore suggested that the SMS training completion certificates of the accountable executive (CEO) and senior management be included in the SMS approval application form that is legally required by the "Korea Aviation Safety Program," in addition to other required documents such as a copy of the SMS manual. Also, SMS-related items are missing from aircraft accident investigations, so organizational factors associated with safety culture and risk management are not being investigated. This hinders the prevention of future accidents, as the root cause cannot be identified. The Aircraft Accident Investigation Manuals issued by ICAO cover SMS investigation, whereas it is not included in the final report form of Annex 13 to the Convention on International Civil Aviation. In addition, the US National Transportation Safety Board (NTSB), which has served as a substantial example of aircraft accident investigation for other accident investigation agencies worldwide, does not appear to have expanded the scope of its investigation activities to SMS. For these reasons, it is believed that investigation agencies conducting their investigations under Annex 13 do not include SMS in their investigation items, and aircraft accident investigators are hardly exposed to SMS investigation methods or techniques. In this respect, it is necessary to include SMS investigation in the organization and management information of the final report format of Annex 13.
In Korea as well, an SMS item should be added in the same manner to the final report format of the Operating Regulation of the Aircraft and Railway Accident Investigation Board. If such legal and institutional improvements are made, SMS will effectively serve the purpose of aircraft accident prevention and contribute to the improvement of aviation safety in the future.

Impact of Semantic Characteristics on Perceived Helpfulness of Online Reviews (온라인 상품평의 내용적 특성이 소비자의 인지된 유용성에 미치는 영향)

  • Park, Yoon-Joo;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.29-44
    • /
    • 2017
  • In Internet commerce, consumers are heavily influenced by product reviews written by other users who have already purchased the product. However, as product reviews accumulate, it takes a lot of time and effort for consumers to check the massive number of reviews individually; moreover, carelessly written product reviews actually inconvenience consumers. Thus, many online vendors provide mechanisms to identify the reviews that customers perceive as most helpful (Cao et al. 2011; Mudambi and Schuff 2010). For example, some online retailers, such as Amazon.com and TripAdvisor, allow users to rate the helpfulness of each review and use this feedback to rank and re-order them. However, many reviews have only a few feedbacks or none at all, which makes it hard to identify their helpfulness. It also takes time to accumulate feedback, so newly authored reviews do not have enough; for example, only 20% of the reviews in the Amazon Review Dataset (McAuley and Leskovec, 2013) have more than five feedbacks (Yan et al., 2014). The purpose of this study is to analyze the factors affecting the usefulness of online product reviews and to derive a forecasting model that selectively provides product reviews that can be helpful to consumers. To do this, we extracted the various linguistic, psychological, and perceptual elements included in product reviews using text-mining techniques and identified the determinants among these elements that affect the usefulness of product reviews. In particular, considering that the characteristics of product reviews and the determinants of usefulness can differ between apparel products (experiential goods) and electronic products (search goods), the characteristics of the product reviews were compared within each product group and the determinants were established for each. This study used 7,498 apparel product reviews and 106,962 electronic product reviews from Amazon.com. To understand a review text, we first extract linguistic and psychological characteristics, such as word count and the level of emotional tone and analytical thinking embedded in the text, using the widely adopted text analysis software LIWC (Linguistic Inquiry and Word Count). Then, we explore the descriptive statistics of the review texts for each category and statistically compare their differences using a t-test. Lastly, we perform regression analysis using the data mining software RapidMiner to find the determinant factors. Comparing the product review characteristics of electronic products and apparel products, we found that reviewers used more words as well as longer sentences when writing product reviews for electronic products. As for the content characteristics, electronic product reviews included many analytic words, carried more clout, and related more to cognitive processes (CogProc) than the apparel product reviews, in addition to including many words expressing negative emotions (NegEmo). On the other hand, the apparel product reviews included more personal, authentic, and positive emotions (PosEmo) and perceptual processes (Percept) than the electronic product reviews. Next, we analyzed the determinants of the usefulness of the product reviews for the two product groups.
As a result, in both product groups, reviews perceived as useful had high product ratings from the reviewers and contained a larger number of total words, many expressions involving perceptual processes, and fewer negative emotions. In addition, apparel product reviews with a large number of comparative expressions, a low expertise index, and concise content with fewer words per sentence were perceived as useful. In the case of electronic product reviews, those that were analytical, with a high expertise index, and that contained many authentic expressions, cognitive processes, and positive emotions (PosEmo) were perceived as useful. These findings are expected to help consumers effectively identify useful product reviews in the future.
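As a rough illustration of the modeling step described above (LIWC itself is proprietary, so simple surface statistics stand in for its output here, and the column names are assumptions), the sketch below extracts a few text features from reviews and regresses the helpfulness ratio on them.

```python
# Minimal sketch: surface text features regressed on perceived review helpfulness.
import re
import pandas as pd
from sklearn.linear_model import LinearRegression

def text_features(review: str) -> dict:
    """Crude stand-ins for LIWC-style linguistic features."""
    words = re.findall(r"[A-Za-z']+", review)
    sentences = [s for s in re.split(r"[.!?]+", review) if s.strip()]
    return {
        "word_count": len(words),
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "exclamations": review.count("!"),   # rough proxy for emotional tone
    }

def fit_helpfulness_model(df: pd.DataFrame) -> LinearRegression:
    """df is assumed to have 'review_text', 'helpful_votes', 'total_votes' columns."""
    feats = pd.DataFrame([text_features(t) for t in df["review_text"]])
    target = df["helpful_votes"] / df["total_votes"].clip(lower=1)  # perceived helpfulness ratio
    return LinearRegression().fit(feats, target)
```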

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends the items that a customer is expected to purchase in the future based on his or her previous purchase behavior, and it has served as a tool for realizing one-to-one personalization for e-commerce service companies. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using a single criterion, the 'overall rating'. However, this has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect feedback from their customers in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints, and they are easy to handle and analyze because they are quantitative. However, recommendation using multicriteria ratings also has the limitation that it may omit detailed information on a user's preference, because it only considers three to five predetermined criteria in most cases. Against this background, this study proposes a novel hybrid recommendation system that selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, our system is designed to use traditional CF with the overall rating for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset for POI (point-of-interest) recommendation. Providing personalized POI recommendations is getting more attention as the popularity of location-based services such as Yelp and Foursquare increases. The dataset was collected from university students via a Web-based online survey system. Using the survey system, we collected the overall ratings as well as the ratings for each criterion for 48 POIs located near K university in Seoul, South Korea. The criteria include 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 items (80%) were used as the training dataset, and the remaining 10 items (20%) were used as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to that of two comparison models: traditional CF and CF with multicriteria ratings. The performance of the recommender systems was evaluated using two metrics: average MAE (mean absolute error) and precision-in-top-N, where precision-in-top-N represents the percentage of truly high overall ratings among the items that the model predicted would be the N most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed traditional CF (avg. MAE = 0.591) as well as multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our data set, which contradicts the results of most previous studies.
This result supports the premise of our study that people have two different types of preference schemes: holistic and composite. Besides MAE, the proposed system outperformed all the comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. The results of the paired-samples t-test showed that, in terms of average MAE, our proposed system outperformed traditional CF at the 10% statistical significance level and multicriteria CF at the 1% level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
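A minimal sketch of the two evaluation metrics named above, average MAE and precision-in-top-N, computed per user from predicted and actual overall ratings; the cutoff defining a 'truly high' rating is an illustrative assumption.

```python
# Minimal sketch of the evaluation metrics: average MAE and precision-in-top-N.
import numpy as np

def average_mae(predicted: dict, actual: dict) -> float:
    """Mean absolute error averaged over users; both dicts map user -> np.ndarray of ratings."""
    per_user = [np.mean(np.abs(predicted[u] - actual[u])) for u in actual]
    return float(np.mean(per_user))

def precision_in_top_n(predicted: dict, actual: dict, n: int = 3, high: float = 4.0) -> float:
    """Share of the N items predicted most relevant per user whose true overall rating is 'high'.
    The cutoff `high` (4.0 on a 5-point scale) is an illustrative assumption."""
    precisions = []
    for u in actual:
        top_n = np.argsort(predicted[u])[::-1][:n]        # indices of the N highest predictions
        precisions.append(np.mean(actual[u][top_n] >= high))
    return float(np.mean(precisions))

# Tiny usage example with made-up ratings for two users.
pred = {"u1": np.array([4.2, 3.1, 4.8, 2.5]), "u2": np.array([3.9, 4.4, 2.0, 4.1])}
true = {"u1": np.array([4.0, 3.0, 5.0, 2.0]), "u2": np.array([4.0, 5.0, 2.0, 3.0])}
print(average_mae(pred, true), precision_in_top_n(pred, true, n=2))
```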