• Title/Summary/Keyword: Experimental Analysis Methods (실험 분석 방법)


Numerical Study on Thermochemical Conversion of Non-Condensable Pyrolysis Gas of PP and PE Using 0D Reaction Model (0D 반응 모델을 활용한 PP와 PE의 비응축성 열분해 기체의 열화학적 전환에 대한 수치해석 연구)

  • Eunji Lee;Won Yang;Uendo Lee;Youngjae Lee
    • Clean Technology
    • /
    • v.30 no.1
    • /
    • pp.37-46
    • /
    • 2024
  • Environmental problems caused by plastic waste have been growing continuously around the world, and plastic waste has been increasing even faster since COVID-19. In particular, PP and PE account for more than half of all plastic production, and the amount of waste from these two materials is at a serious level. As a result, researchers are searching for alternative recycling methods, and plastic pyrolysis is one such alternative. In this paper, a numerical study was conducted on the pyrolysis behavior of non-condensable gas to predict the chemical reaction behavior of the pyrolysis gas. Based on gas products estimated from the preceding literature, the behavior of non-condensable gas was analyzed according to temperature and residence time. Numerical analysis showed that as temperature and residence time increased, the production of H2 and heavy hydrocarbons increased through conversion of the non-condensable gas, while CH4 and C6H6 decreased by participating in the reaction. Analysis of the production rate showed that the decomposition of C2H4 was the dominant reaction for H2 generation, and that PE, with its higher C2H4 content, produced more H2. As future work, experiments are needed to confirm how the conversion rates of H2 and carbon in plastics can be increased under the various operating conditions derived from this study's numerical results.
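The temperature- and residence-time dependence described in the abstract can be illustrated with a toy 0D (perfectly stirred, isothermal) model. The study uses a detailed kinetic mechanism; here a single hypothetical first-order Arrhenius step stands in for it, and the rate constants are illustrative assumptions, not values from the paper.

```python
import math

# Toy 0D sketch: first-order conversion X = 1 - exp(-k * tau),
# with a hypothetical Arrhenius rate constant k = A * exp(-Ea / (R*T)).
R = 8.314    # J/(mol*K), gas constant
A = 1.0e10   # 1/s, hypothetical pre-exponential factor
Ea = 250e3   # J/mol, hypothetical activation energy

def conversion(T, residence_time):
    """Fractional conversion of a pyrolysis species at temperature T [K]
    after a given residence time [s], assuming first-order kinetics."""
    k = A * math.exp(-Ea / (R * T))
    return 1.0 - math.exp(-k * residence_time)

# Conversion rises with both temperature and residence time,
# mirroring the trend reported in the numerical analysis.
for T in (900.0, 1100.0, 1300.0):
    for tau in (0.5, 2.0):
        print(f"T={T:.0f} K  tau={tau:.1f} s  ->  X={conversion(T, tau):.3f}")
```

This reproduces only the qualitative trend (higher temperature and longer residence time give higher conversion), not the species-resolved chemistry of the 0D reaction model.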

The difference of image quality using other radioactive isotope in uniformity correction map of myocardial perfusion SPECT (심근 관류 SPECT에서 핵종에 따른 Uniformity correction map 설정을 통한 영상의 질 비교)

  • Song, Jae hyuk;Kim, Kyeong Sik;Lee, Dong Hoon;Kim, Sung Hwan;Park, Jang Won
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.19 no.2
    • /
    • pp.87-92
    • /
    • 2015
  • Purpose: When patients undergo myocardial perfusion SPECT, they are injected with 201Tl, but the uniformity correction map in the SPECT system is acquired with 99mTc. We therefore compared image quality between a 99mTc uniformity correction map and a 201Tl uniformity correction map. Materials and Methods: A phantom study was performed. Data were acquired under Asan Medical Center's daily QC conditions with a flood phantom containing 21.3 kBq/mL of 201Tl. After postprocessing, the CFOV integral uniformity (I.U) and differential uniformity (D.U) were analyzed. Data were also acquired with a Jaszczak ECT phantom containing 33.4 kBq/mL of 201Tl according to the American College of Radiology accreditation program instructions. After postprocessing, spatial resolution, integral uniformity (I.U), coefficient of variation (C.V), and contrast were analyzed with an Interactive Data Language program. Results: In the flood phantom test, with the 99mTc uniformity correction map, flood I.U was 3.6% and D.U was 3.0%; with the 201Tl uniformity correction map, flood I.U was 3.8% and D.U was 2.1%. The flood I.U worsened by about 5%, but the D.U improved by about 30%. In the Jaszczak ECT phantom test, with the 99mTc uniformity correction map, SPECT I.U, C.V, and contrast were 13.99%, 4.89%, and 0.69; with the 201Tl uniformity correction map, they were 11.37%, 4.79%, and 0.78, improvements of about 18%, 2%, and 13%, respectively. Spatial resolution showed no significant change. Conclusion: In the flood phantom test, flood I.U worsened while flood D.U improved, so it is uncertain whether image quality improves based on the flood phantom test alone. On the other hand, SPECT I.U, C.V, and contrast improved by about 18%, 2%, and 13% in the Jaszczak ECT phantom test. This study is limited in that it could not take all variables into account and used only two phantoms. These results suggest a beneficial effect when physicians interpret nuclear medicine images, and image quality may also be improved in other nuclear medicine examinations by using uniformity correction maps of radionuclides other than 99mTc and 201Tl.
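The uniformity metrics compared above follow the standard NEMA definitions. A minimal sketch, with hypothetical pixel counts rather than the study's phantom data:

```python
# NEMA-style uniformity metrics, as referenced in the abstract.
# Pixel values below are illustrative, not data from the study.

def integral_uniformity(pixels):
    """I.U = (max - min) / (max + min) * 100, over the CFOV pixels."""
    hi, lo = max(pixels), min(pixels)
    return (hi - lo) / (hi + lo) * 100.0

def differential_uniformity(row, window=5):
    """D.U = worst integral uniformity over sliding 5-pixel windows."""
    worst = 0.0
    for i in range(len(row) - window + 1):
        worst = max(worst, integral_uniformity(row[i:i + window]))
    return worst

counts = [980, 1000, 1015, 990, 1005, 1012, 995]   # hypothetical flood counts
print(round(integral_uniformity(counts), 2))
print(round(differential_uniformity(counts), 2))
```

In practice these are computed per the NEMA NU 1 procedure over the smoothed central and useful fields of view; this sketch shows only the core formulas.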


An influence of operator's posture on the shape of prepared tooth surfaces for fixed partial denture (진료자세가 고정성 국소의치의 지대치 삭제에 미치는 영향)

  • Won, In-Jae;Kwon, Kung-Rock;Pae, Ah-Ran;Choi, Dae-Gyun
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.49 no.1
    • /
    • pp.38-48
    • /
    • 2011
  • Purpose: Dentists suffer back, neck, and shoulder pain during their careers due to poor operating posture. An ergonomically sound operating posture would reduce pain and discomfort in the shoulder and back, so dentists should learn the Home position, which enables an ergonomically stable posture. This study compared tooth preparation in the Home position and the Random position and evaluated the clinical efficacy of the Home position. Materials and methods: Tooth preparation for a fixed partial denture was performed on the maxillary left 2nd premolar and maxillary left 2nd molar in the two operating positions, and the results were compared. The amount of occlusal reduction, marginal width, subgingival margin depth, and convergence angle were measured. T-tests were performed to compare the Random position and the Home position. Results: 1. The average occlusal reduction on the fossae was less than the prescribed amount in both the Random position and the Home position (P > .05). 2. The average subgingival margin depth on the maxillary left 2nd premolar and 2nd molar was more excessive in the Random position than in the Home position. On the maxillary left 2nd premolar, there was no statistical difference between the Random position and the Home position except at the distal midline, DL line angle, lingual midline, and ML line angle (P < .05). On the maxillary left 2nd molar, there was no statistical difference between the Random position and the Home position. 3. The average convergence angles in the Random position and the Home position were both excessive compared to the prescribed angle, with no statistical difference between the two positions (P > .05). 4. Pearson correlation analysis: In the Random position, average occlusal reduction, average subgingival margin depth, and convergence angle were significantly associated with each other (P < .05), but in the Home position they were not. 5. The time needed for preparation in the Home position became shorter than or equal to that in the Random position over time. Conclusion: There were no significant differences between the Home position and the Random position in occlusal reduction, marginal width, marginal depth, or convergence angle. However, preparation time and the incidence of damage to adjacent teeth were lower in the Home position. Therefore, with proper training, the more ergonomically stable Home position can be adopted for clinical use.

Evaluating the usefulness of BinkieRT™ (oral positioning stent) for Head and Neck Radiotherapy (두경부암 환자 방사선 치료 시 BinkieRT™(구강용 고정장치)에 대한 유용성 평가)

  • GyeongJin Lee;SangJun Son;GyeongDal Lim;ChanYong Kim;JeHee Lee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.34
    • /
    • pp.21-30
    • /
    • 2022
  • Purpose: The purpose of this study is to evaluate the effectiveness of the BinkieRT™ oral positioning stent in radiation treatment for head and neck cancer patients in terms of tongue position reproducibility, tongue dose, and material properties. Materials and Methods: 24 cases using BinkieRT™ during radiation treatment were enrolled. The tongue was contoured on planning CT and on CBCT images taken every 3 days during treatment, and the DSC and center-of-tongue shift values were analyzed to evaluate reproducibility. Tongue dose distributions were compared between BinkieRT™ and other types of oral stent (mouthpiece, paraffin wax); tongue doses from the initial treatment plan were measured in 10 randomly selected patients each for nasal cavity and unilateral parotid cancer. Finally, for material evaluation, HU and relative electron density were identified in the RTPS. Results: DSC analysis gave 0.8 ± 0.07, skewness -0.8, kurtosis 0.61, and a 95% CI of 0.79~0.82. To analyze the deviation of the central tongue shift during the treatment period, 95% confidence intervals for shifts in the LR, SI, and AP directions were computed, along with a one-sample t-test against 0, the ideal deviation (n=144). The mean and SD were 0.01 ± 0.14 cm in the LR direction (p > .05), 0.03 ± 0.25 cm in the SI direction (p > .05), and -0.08 ± 0.25 cm in the AP direction (p < .05). For unilateral parotid cancer patients, the mean tongue dose (Dmean) with BinkieRT™ was 16.92% ± 3.58% of the prescribed dose, versus 23.99% ± 10.86% with paraffin wax, indicating a relatively lower tongue dose with BinkieRT™ (p < .05). Among nasal cavity cancer patients, the Dmean with BinkieRT™ was 4.4% ± 5.6% versus 5.9% ± 6.8% with a mouthpiece, but this difference was not statistically significant (p > .05). The relative electron densities of paraffin wax, BinkieRT™, and putty are 0.94, 0.99, and 1.26, the mass densities 0.95, 0.99, and 1.32 g/cc, and the transmission factors 0.99, 0.98, and 0.96, respectively. Conclusion: The tongue DSC over the treatment period was about 0.8 and the deviation of the center-of-tongue shifts was within 0.2 cm, so reproducibility is likely excellent. For unilateral head and neck cancer patients, using BinkieRT™ rather than paraffin wax or putty can reduce unnecessary dose to the tongue. This study may be useful for understanding BinkieRT™'s properties and advantages, and BinkieRT™ could be considered as another oral stent option for maintaining tongue reproducibility and reducing dose during head and neck radiation treatment.
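The reproducibility metric used above is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), computed between the planning-CT and CBCT tongue contours. A minimal sketch with hypothetical voxel sets (not the study's imaging data):

```python
# Dice similarity coefficient between two contours treated as voxel sets.
# The voxel grids below are illustrative stand-ins for CT/CBCT contours.

def dice(a, b):
    """DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means identical contours."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

plan_ct = {(x, y) for x in range(10) for y in range(10)}      # 100 voxels
cbct    = {(x, y) for x in range(1, 10) for y in range(10)}   # contour shifted 1 voxel
print(round(dice(plan_ct, cbct), 3))
```

Real contours are 3D voxel masks, but the formula is identical; a DSC near 0.8, as reported, indicates substantial overlap between the planned and daily tongue positions.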

Radiosynthesis of [11C]6-OH-BTA-1 in Different Media and Confirmation of Reaction By-products ([11C]6-OH-BTA-1 조제 시 생성되는 부산물 규명과 반응용매에 따른 표지 효율 비교)

  • Lee, Hak-Jeong;Jeong, Jae-Min;Lee, Yun-Sang;Kim, Hyung-Woo;Lee, Eun-Kyoung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.241-246
    • /
    • 2007
  • Purpose: [11C]6-OH-BTA-1 ([N-methyl-11C]2-(4'-methylaminophenyl)-6-hydroxybenzothiazole, 1), a β-amyloid imaging agent for the diagnosis of Alzheimer's disease with PET, can be labeled in higher yield by a simple loop method. During the synthesis of [11C]1, we found by-product formation in various solvents, e.g., methyl ethyl ketone (MEK), cyclohexanone (CHO), diethyl ketone (DEK), and dimethylformamide (DMF). Materials and Methods: In an automated radiosynthesis module, 1 mg of 4-aminophenyl-6-hydroxybenzothiazole (4) in 100 μl of each solvent was reacted with [11C]methyl triflate in an HPLC loop at room temperature (RT). The reaction mixture was separated by semi-preparative HPLC. Aliquots eluted at 14.4, 16.3, and 17.6 min were collected and analyzed by analytical HPLC and LC/MS. Results: The labeling efficiencies of [11C]1 were 86.0 ± 5.5%, 59.7 ± 2.4%, 29.9 ± 1.8%, and 7.6 ± 0.5% in MEK, CHO, DEK, and DMF, respectively. The LC/MS spectra of the three products eluted at 14.4, 16.3, and 17.6 min showed m/z peaks at 257.3 (M+1), 257.3 (M+1), and 271.3 (M+1), respectively, indicating their structures as 1, 2-(4'-aminophenyl)-6-methoxybenzothiazole (2), and by-product (3). The ratios of labeling efficiencies for the three products ([11C]1 : [11C]2 : [11C]3) were 86.0 ± 5.5% : 5.0 ± 3.4% : 1.5 ± 1.3% in MEK, 59.7 ± 2.4% : 4.7 ± 3.2% : 1.3 ± 0.5% in CHO, 29.9 ± 1.8% : 2.0 ± 0.7% : 0.3 ± 0.1% in DEK, and 7.6 ± 0.5% : 0.0% : 0.0% in DMF. Conclusion: The labeling efficiency of [11C]1 was highest when MEK was used as the reaction solvent. By mass spectrometry, 1 and 2 were confirmed, and 3 was tentatively assigned.

Performance Characteristics of 3D GSO PET/CT Scanner (Philips GEMINI PET/CT) (3차원 GSO PET/CT 스캐너(Philips GEMINI PET/CT)의 특성 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Byeong-Il;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.4
    • /
    • pp.318-324
    • /
    • 2004
  • Purpose: The Philips GEMINI is a newly introduced whole-body GSO PET/CT scanner. In this study, the performance of the scanner, including spatial resolution, sensitivity, scatter fraction, and noise equivalent count rate (NECR), was measured using the NEMA NU2-2001 standard protocol and compared with the performance of LSO- and BGO-crystal scanners. Methods: The GEMINI combines the Philips ALLEGRO PET and MX8000 D multi-slice CT scanners. The PET scanner has 28 detector segments, each an array of 29 by 22 GSO crystals (4 × 6 × 20 mm), covering an axial FOV of 18 cm. PET data for measuring spatial resolution, sensitivity, scatter fraction, and NECR were acquired in 3D mode according to the NEMA NU2 protocols (coincidence window: 8 ns, energy window: 409~664 keV). For the spatial resolution measurement, images were reconstructed with FBP using a ramp filter and with an iterative reconstruction algorithm, 3D RAMLA. Data for the sensitivity measurement were acquired using the NEMA sensitivity phantom filled with F-18 solution and surrounded by 1~5 aluminum sleeves, after confirming that dead time loss did not exceed 1%. To measure NECR and scatter fraction, 1110 MBq of F-18 solution was injected into a NEMA scatter phantom 70 cm in length, and a dynamic scan with 20-min frame duration was acquired over 7 half-lives. Oblique sinograms were collapsed into transaxial slices using the single-slice rebinning method, and the true-to-background (scatter + random) ratio for each slice and frame was estimated. The scatter fraction was determined by averaging the true-to-background ratios of the last 3 frames, in which dead time loss was below 1%. Results: Transverse and axial resolutions at 1 cm radius were (1) 5.3 and 6.5 mm (FBP) and (2) 5.1 and 5.9 mm (3D RAMLA). Transverse radial, transverse tangential, and axial resolutions at 10 cm were (1) 5.7, 5.7, and 7.0 mm (FBP) and (2) 5.4, 5.4, and 6.4 mm (3D RAMLA). Attenuation-free sensitivity was 3,620 counts/sec/MBq at the center of the transaxial FOV and 4,324 counts/sec/MBq at 10 cm offset from the center. The scatter fraction was 40.6%, and the peak true count rate and NECR were 88.9 kcps @ 12.9 kBq/mL and 34.3 kcps @ 8.84 kBq/mL. These characteristics are better than those of the ECAT EXACT PET scanner with BGO crystals. Conclusion: The results of this field test demonstrate the high resolution, sensitivity, and count rate performance of the 3D PET/CT scanner with GSO crystals. The data provided here will be useful for comparative studies with other 3D PET/CT scanners using BGO or LSO crystals.
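The count-rate figures above follow the standard NEMA NU 2 definitions: NECR = T²/(T + S + R) for trues T, scatter S, and randoms R, and scatter fraction SF = S/(S + T). A minimal sketch with hypothetical rates (not the GEMINI measurements):

```python
# NEMA NU 2 count-rate metrics; all rates in kcps and purely illustrative.

def necr(trues, scatter, randoms):
    """Noise equivalent count rate: NECR = T^2 / (T + S + R)."""
    return trues ** 2 / (trues + scatter + randoms)

def scatter_fraction(trues, scatter):
    """Scatter fraction: SF = S / (S + T)."""
    return scatter / (trues + scatter)

print(round(necr(60.0, 40.0, 30.0), 1))           # kcps, hypothetical
print(round(scatter_fraction(60.0, 41.0), 3))     # ~0.4, matching the ~40% regime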

Smoking-Induced Dopamine Release Studied with [11C]Raclopride PET ([11C]Raclopride PET을 이용한 흡연에 의한 도파민 유리 영상 연구)

  • Kim, Yu-Kyeong;Cho, Sang-Soo;Lee, Do-Hoon;Ryu, Hye-Jung;Lee, Eun-Ju;Ryu, Chang-Hung;Jeong, In-Soon;Hong, Soo-Kyung;Lee, Jae-Sung;Seo, Hong-Gwan;Jeong, Jae-Min;Lee, Won-Woo;Kim, Sang-Eun
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.421-429
    • /
    • 2005
  • Purpose: It has been postulated that dopamine release in the striatum underlies the reinforcing properties of nicotine. Substantial evidence from animal studies demonstrates that nicotine interacts with dopaminergic neurons and regulates activation of the dopaminergic system. The aim of this study was to visualize smoking-induced dopamine release in the human brain using PET with [11C]raclopride. Materials and Methods: Five male non-smokers or ex-smokers with an abstinence period longer than 1 year (mean age 24.4 ± 1.7 years) were enrolled. [11C]raclopride, a dopamine D2 receptor radioligand, was administered as a bolus plus constant infusion. Dynamic PET was performed over 120 minutes (3 × 20 s, 2 × 60 s, 2 × 120 s, 1 × 180 s, and 22 × 300 s). After 50 minutes of scanning, subjects smoked a cigarette containing 1 mg of nicotine while in the scanner. Blood samples for measurement of plasma nicotine level were collected at 0, 5, 10, 15, 20, 25, 30, 45, 60, and 90 minutes after smoking. Regions for striatal structures were drawn on coronal summed PET images guided by co-registered MRI. Binding potential, calculated as (striatal - cerebellar)/cerebellar activity, was measured under equilibrium conditions at baseline and during the smoking session. Results: The mean decrease in [11C]raclopride binding potential between baseline and smoking in the caudate head, anterior putamen, and ventral striatum was 4.7%, 4.0%, and 7.8%, respectively, indicating striatal dopamine release by smoking. The reduction in binding potential in the ventral striatum was significantly correlated with the cumulative plasma nicotine level (Spearman's rho = 0.9, p = 0.04). Conclusion: These data demonstrate that in vivo imaging with [11C]raclopride PET can measure nicotine-induced dopamine release in the human brain, which shows a significant positive correlation with the amount of nicotine administered by smoking.
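The binding-potential calculation stated in the abstract, BP = (striatal − cerebellar)/cerebellar activity, reduces to a one-liner; the activity values below are hypothetical, chosen only to illustrate the baseline-vs-smoking percent change:

```python
# Binding potential from regional activities, as defined in the abstract.
# Activity values are hypothetical, not measured data from the study.

def binding_potential(striatal, cerebellar):
    """BP = (striatal - cerebellar) / cerebellar activity."""
    return (striatal - cerebellar) / cerebellar

baseline = binding_potential(2.30, 1.00)   # BP at baseline
smoking  = binding_potential(2.25, 1.00)   # BP during smoking session
pct_drop = (baseline - smoking) / baseline * 100
print(round(pct_drop, 1))                  # percent decrease in BP
```

A decrease in [11C]raclopride BP is interpreted as increased endogenous dopamine competing for D2 receptors, which is why the percent drop serves as the dopamine-release readout.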

Effects of Anti-thyroglobulin Antibody on the Measurement of Thyroglobulin : Differences Between Immunoradiometric Assay Kits Available (면역방사계수법을 이용한 Thyroglobulin 측정시 항 Thyroglobulin 항체의 존재가 미치는 영향: Thyroglobulin 측정 키트에 따른 차이)

  • Ahn, Byeong-Cheol;Seo, Ji-Hyeong;Bae, Jin-Ho;Jeong, Shin-Young;Yoo, Jeong-Soo;Jung, Jin-Hyang;Park, Ho-Yong;Kim, Jung-Guk;Ha, Sung-Woo;Sohn, Jin-Ho;Lee, In-Kyu;Lee, Jae-Tae;Kim, Bo-Wan
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.4
    • /
    • pp.252-256
    • /
    • 2005
  • Purpose: Thyroglobulin (Tg) is a valuable and sensitive marker for the diagnosis and follow-up of several thyroid disorders, especially in the follow-up of patients with differentiated thyroid cancer (DTC). Clinical decisions often rely entirely on the serum Tg concentration, but the Tg assay is one of the most challenging laboratory measurements to perform accurately owing to anti-thyroglobulin antibody (Anti-Tg). In this study, we compared the degree of Anti-Tg interference in Tg measurement between available Tg assay kits. Materials and Methods: Tg levels of a standard Tg solution were measured with two commercially available immunoradiometric assay kits (A and B), either in the absence or in the presence of three different concentrations of Anti-Tg. Tg in patient serum was also measured with the same kits. Patient serum samples were prepared as mixtures of a serum with a high Tg level and a serum with a high Anti-Tg concentration. Results: In the measurements of the standard Tg solution, the presence of Anti-Tg resulted in falsely low Tg levels with both kits. The underestimation was more prominent with kit A than with kit B; the underestimation by kit B was statistically significant but trivial and therefore clinically insignificant. Addition of Anti-Tg to patient serum resulted in falsely low Tg levels with kit A only. Conclusion: Tg levels can be underestimated in the presence of Anti-Tg, and the effect varies with the assay kit used. Therefore, accuracy testing should be performed for each individual Tg assay kit.

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand across various fields for market information at the specific-product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then used a vector dimension of 300 and a window size of 15 for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as the product-name dataset to cluster product groups more efficiently. Product names similar to KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, taken as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that require sampling or multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can address unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs improvement in accuracy and reliability. The semantic word-embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity. The product-group clustering step could also be replaced with other types of unsupervised machine learning. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model proposed in this study.
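The grouping-and-summation step described in the abstract can be sketched compactly. In the study, the product-name vectors come from a Word2Vec model trained on Statistics Korea microdata (dimension 300, window 15); here tiny hypothetical 3-dimensional vectors, product names, sales figures, and the similarity threshold are all illustrative assumptions:

```python
import math

# Sketch: group product names under a KSIC index term by cosine similarity
# of their embedding vectors, then sum the matched products' sales as the
# estimated market size. All vectors and numbers below are hypothetical.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

embeddings = {                       # stand-ins for Word2Vec output
    "led lamp":    [0.9, 0.1, 0.0],
    "led module":  [0.8, 0.2, 0.1],
    "rice cooker": [0.0, 0.9, 0.4],
}
sales = {"led lamp": 120.0, "led module": 80.0, "rice cooker": 300.0}
ksic_index = [0.85, 0.15, 0.05]      # hypothetical KSIC index-word vector

threshold = 0.95                     # tightens/loosens the product category
group = [p for p, vec in embeddings.items() if cosine(vec, ksic_index) >= threshold]
market_size = sum(sales[p] for p in group)
print(group, market_size)
```

Raising or lowering `threshold` is exactly the knob the abstract mentions for adjusting the level of the market category.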

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends the items a customer is expected to purchase in the future based on his or her previous purchase behavior, and has served as a tool for realizing one-to-one personalization in e-commerce. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, generate recommendation lists using a single criterion, the 'overall rating'. However, this has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun collecting customer feedback in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multiple dimensions, and they are easy to handle and analyze because they are quantitative. But recommendation using multicriteria ratings also has a limitation: it may omit detailed information on a user's preference, because in most cases it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system that selectively uses the results from traditional CF and from CF with multicriteria ratings. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, the system is designed to use traditional CF with overall ratings for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset on POI (point-of-interest) recommendation. Personalized POI recommendation is attracting growing attention as location-based services such as Yelp and Foursquare become more popular. The dataset was collected from university students via a Web-based online survey system. Using the survey system, we collected overall ratings as well as ratings for each criterion for 48 POIs located near K University in Seoul, South Korea. The criteria were 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 (80%) were used as the training dataset and the remaining 10 (20%) as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to that of two comparison models: traditional CF and CF with multicriteria ratings. The performance of the recommender systems was evaluated using two metrics: average MAE (mean absolute error) and precision-in-top-N, which represents the percentage of truly high overall ratings among the N items the model predicted would be most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed both traditional CF (avg. MAE = 0.591) and multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, contradicting the results of most previous studies; this supports our premise that people have two different types of preference scheme, holistic and composite. Besides MAE, the proposed system outperformed all comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. Paired-samples t-tests showed that, in terms of average MAE, our proposed system outperformed traditional CF at the 10% significance level and multicriteria CF at the 1% significance level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
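The selective-hybrid idea above can be sketched in a few lines: for each user, take the prediction from whichever CF variant matched that user's preference scheme, then score with MAE. The per-user predictions, ratings, and scheme assignments below are hypothetical placeholders for the study's two CF engines:

```python
# Sketch of the hybrid selective model: per-user choice between overall-rating
# CF and multicriteria CF, scored by MAE. All numbers are hypothetical.

def mae(pred, actual):
    """Mean absolute error over paired predictions and true ratings."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

# Predictions on validation items from each CF variant (placeholders).
overall_cf = {"u1": [4.0, 3.0], "u2": [2.0, 5.0]}
multi_cf   = {"u1": [3.5, 2.5], "u2": [2.5, 4.5]}
actual     = {"u1": [4.0, 3.5], "u2": [2.5, 4.5]}

# Which scheme fit each user better on training data:
# u1 behaves holistically (overall CF), u2 compositely (multicriteria CF).
best_model = {"u1": overall_cf, "u2": multi_cf}

preds, truth = [], []
for user, ratings in actual.items():
    preds += best_model[user][user]
    truth += ratings
print(round(mae(preds, truth), 3))
```

The point of the design is that the selective mix can beat either variant alone whenever the user population genuinely splits into holistic and composite preference schemes, which is what the paper's MAE comparison suggests.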