• Title/Summary/Keyword: The types of error


Characterization of compounds and quantitative analysis of oleuropein in commercial olive leaf extracts (상업용 올리브 잎 추출물의 화합물 특성과 이들의 oleuropein 함량 비교분석)

  • Park, Mi Hyeon;Kim, Doo-Young;Arbianto, Alfan Danny;Kim, Jung-Hee;Lee, Seong Mi;Ryu, Hyung Won;Oh, Sei-Ryang
    • Journal of Applied Biological Chemistry / v.64 no.2 / pp.113-119 / 2021
  • Olive (Olea europaea L.) leaves, a raw material for health functional foods and cosmetics, are rich in polyphenols, including oleuropein (the major bioactive compound), with various biological activities: antioxidant, antibacterial, antiviral, and anticancer activity, and inhibition of platelet activation. Oleuropein has been reported to have skin-protectant, antioxidant, anti-aging, anti-cancer, anti-inflammatory, anti-atherogenic, anti-viral, and anti-microbial activity. Although oleuropein is the key compound in olive leaves, there has been no quantitative approach to determining oleuropein content in commercial products; a validated analytical method therefore needs to be developed for oleuropein. In this study, the components and oleuropein content of 10 types of products were analyzed using a method developed with ultra-performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry, a charged aerosol detector, and a photodiode array. A total of 18 compounds, including iridoids (1, 3, 4, 14, and 16-18), a coumarin (2), phenylethanoids (5, 9, and 11), flavonoids (6-8, 10, 12, and 13), and a lignan (15), were tentatively identified in the leaf extracts based on high-resolution mass spectrometry data, and the oleuropein content of each product was nearly identical between the two detection methods. The oleuropein content of three commercial products (A, G, H) exceeded the suggested content, that of five products (B, E, H, I, J) fell within a 5-10% error range, and two products (C, D) were found to contain far less than the suggested content. These analytical results for oleuropein could provide useful information for the quality control of olive leaf extract as a manufactured functional food.
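
The comparison of measured against suggested (label) content described above can be sketched as a simple percent-deviation check. The thresholds and numbers below are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch: classifying a measured oleuropein content against a
# product's suggested (label) content. The 10% band is an assumed tolerance.

def percent_deviation(measured: float, suggested: float) -> float:
    """Signed deviation of the measured content from the label claim, in %."""
    return (measured - suggested) / suggested * 100.0

def classify(measured: float, suggested: float) -> str:
    dev = percent_deviation(measured, suggested)
    if dev > 10.0:
        return "above suggested content"
    if dev < -10.0:
        return "far below suggested content"
    return "within 10% of suggested content"

print(classify(118.0, 100.0))  # above suggested content
print(classify(93.0, 100.0))   # within 10% of suggested content
print(classify(40.0, 100.0))   # far below suggested content
```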

Long-term Clinical Consequences in Patients with Urea Cycle Disorders in Korea: A Single-center Experience (요소회로대사 질환 환자들의 장기적인 임상 경과에 대한 단일 기관 경험)

  • Lee, Jun;Kim, Min-ji;Yoo, Sukdong;Yoon, Ju Young;Kim, Yoo-Mi;Cheon, Chong Kun
    • Journal of The Korean Society of Inherited Metabolic disease / v.21 no.1 / pp.15-21 / 2021
  • Purpose: Urea cycle disorders (UCDs) are inherited inborn errors of metabolism affecting individual steps of the urea cycle and causing various phenotypes. The purpose of this study was to investigate and characterize the long-term clinical consequences in different groups of UCD. Methods: Twenty-two patients with genetically confirmed UCD were enrolled at Pusan National University Children's Hospital, and their clinical, biochemical, and genetic features were reviewed retrospectively. Results: The UCDs diagnosed in the present study included ornithine transcarbamylase deficiency (OTCD) (n=10, 45.5%), argininosuccinate synthase 1 deficiency (ASSD) (n=6, 27.3%), carbamoyl-phosphate synthetase 1 deficiency (CPS1D) (n=3, 13.6%), hyperornithinemia-hyperammonemia-homocitrullinuria syndrome (HHHS) (n=2, 9.1%), and arginase-1 deficiency (ARG1D) (n=1, 4.5%). The age at diagnosis was 32.7±66.2 months (range 0.1 to 228.0 months). Eight patients (36.4%) displayed short stature. Neurologic sequelae were observed in eleven patients (50%). Molecular analysis identified 37 different mutation types (14 missense, 6 nonsense, 6 deletion, 6 splicing, 3 delins, 1 insertion, and 1 duplication), including 14 novel variants. Progressive growth impairment and poor neurological outcomes were associated with plasma isoleucine and leucine concentrations, respectively. Conclusion: Although combinations of treatments, such as nutritional restriction of protein and use of alternative pathways for discarding excess nitrogen, are extensively employed, the prognosis of UCD remains unsatisfactory. Prospective clinical trials are necessary to evaluate whether supplementation with branched-chain amino acids (BCAAs) might improve growth or neurological outcomes and decrease metabolic crisis episodes in patients with UCD.

Development of Empirical Fragility Function for High-speed Railway System Using 2004 Niigata Earthquake Case History (2004 니가타 지진 사례 분석을 통한 고속철도 시스템의 지진 취약도 곡선 개발)

  • Yang, Seunghoon;Kwak, Dongyoup
    • Journal of the Korean Geotechnical Society / v.35 no.11 / pp.111-119 / 2019
  • The high-speed railway system is mainly composed of tunnels, bridges, and viaducts to provide the straight alignment needed to maintain speeds of up to 400 km/h. Seismic fragility of high-speed railway infrastructure can be assessed in two ways: one is to study each element of the infrastructure analytically or numerically, but this requires substantial research effort because of the wide extent of the railway system; alternatively, an empirical method can be used to assess the fragility of the entire system efficiently, which requires case history data. In this study, we collect case history data from the 2004 Mw 6.6 Niigata earthquake to develop an empirical seismic fragility function for a railway system. Five types of intensity measures (IMs) and damage levels are assigned to all segments of the target system, for which the unit length is 200 m. From statistical analysis, the probability of exceeding a certain damage level (DL) is calculated as a function of IM. A log-normal CDF is fitted to these probability data points using the maximum likelihood estimation (MLE) method, forming the fragility function for each damage level of exceedance. Evaluating the resulting fragility functions, we observe that the spectral acceleration at T=3.0 s (SAT3.0) is superior to the other IMs, with a lower standard deviation of the log-normal CDF and a lower error of fit. This indicates that long-period ground motion has a greater impact on railway infrastructure such as tunnels and bridges. We observe that when SAT3.0 = 0.1 g, P(DL>1) = 2%, and when SAT3.0 = 0.2 g, P(DL>1) = 23.9%.
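
The fitting step described above, a log-normal CDF fragility curve estimated by MLE from per-segment damage exceedance data, can be sketched as follows. The grid ranges, segment counts, and true parameters are illustrative assumptions, not the study's values.

```python
import math
import random

def lognormal_cdf(im, median, beta):
    """P(DL exceeded | IM) modeled as a log-normal CDF in the intensity measure."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))

def fit_fragility(ims, exceeded):
    """Coarse grid-search MLE for (median, beta) from binary exceedance data."""
    medians = [0.10 + 0.05 * i for i in range(11)]  # 0.10 .. 0.60 g (assumed grid)
    betas = [0.3 + 0.1 * j for j in range(7)]       # 0.3 .. 0.9 (assumed grid)
    best, best_ll = None, -float("inf")
    for m in medians:
        for b in betas:
            ll = 0.0
            for im, y in zip(ims, exceeded):
                p = min(max(lognormal_cdf(im, m, b), 1e-9), 1.0 - 1e-9)
                ll += math.log(p) if y else math.log(1.0 - p)  # Bernoulli likelihood
            if ll > best_ll:
                best, best_ll = (m, b), ll
    return best

# Synthetic check: segments whose true fragility has median 0.30 g, beta 0.5
random.seed(0)
ims = [0.05 + 0.55 * random.random() for _ in range(800)]
exceeded = [random.random() < lognormal_cdf(im, 0.30, 0.5) for im in ims]
median, beta = fit_fragility(ims, exceeded)
print(round(median, 2), round(beta, 2))
```

A real application would replace the grid search with a continuous optimizer, but the likelihood being maximized is the same.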

Methodological Comparison of the Quantification of Total Carbon and Organic Carbon in Marine Sediment (해양 퇴적물내 총탄소 및 유기탄소의 분석기법 고찰)

  • Kim, Kyeong-Hong;Son, Seung-Kyu;Son, Ju-Won;Ju, Se-Jong
    • Journal of the Korean Society for Marine Environment &amp; Energy / v.9 no.4 / pp.235-242 / 2006
  • The precise estimation of total and organic carbon contents in sediments is fundamental to understanding the benthic environment. To test the precision and accuracy of a CHN analyzer and of the procedure used to quantify total and organic carbon contents (using in-situ acidification with sulfurous acid, H2SO3) in sediment, reference materials such as acetanilide (C8H9NO), sulfanilamide (C6H8N2O2S), and BCSS-1 (a standard estuary sediment) were used. The results indicate that the CHN analyzer quantifies carbon and nitrogen content with high accuracy (percent error = 3.29%) and precision (relative standard deviation = 1.26%). Additionally, we compared carbon values analyzed using the CHN analyzer and a coulometric carbon analyzer. Total carbon contents measured with the two instruments were highly correlated (R² = 0.9993, n = 84, p < 0.0001) with a linear relationship and showed no significant differences (paired t-test, p = 0.0003). The organic carbon contents from the two instruments showed similar results, with a significant linear relationship (R² = 0.8867, n = 84, p < 0.0001) and no significant differences (paired t-test, p < 0.0001). Although it is possible to overestimate organic carbon contents for some sediment types with high inorganic carbon contents (such as calcareous ooze) due to procedural and analytical errors, analysis of organic carbon contents in sediments using a CHN analyzer and the current procedures appears to provide the best estimates. Therefore, we recommend that this method be applied to measure the carbon content of normal sediment samples and consider it one of the best procedures for routine analysis of total and organic carbon.
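
The two figures of merit used above can be computed directly: percent error (agreement of the mean with a certified value, i.e. accuracy) and relative standard deviation (scatter among replicates, i.e. precision). The replicate values below are illustrative, not the study's data.

```python
import math

def percent_error(measured_mean: float, true_value: float) -> float:
    """Accuracy metric: deviation of the mean from the certified value, in %."""
    return abs(measured_mean - true_value) / true_value * 100.0

def rsd(values) -> float:
    """Precision metric: relative standard deviation (coefficient of variation), in %."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample SD
    return sd / mean * 100.0

# Illustrative replicate carbon measurements (wt%) against a certified value
replicates = [2.19, 2.21, 2.18, 2.22, 2.20]
certified = 2.19
pe = percent_error(sum(replicates) / len(replicates), certified)
print(round(pe, 2), round(rsd(replicates), 2))
```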


Mutagenicity of Chloropropanols in SOS Chromotest and Ames Test (SOS Chromotest 및 Ames test에서의 Chloropropanol류의 변이원성)

  • Song, Geun-Seoup;Han, Sang-Bae;Uhm, Tae-Boong;Choi, Dong-Seong
    • Korean Journal of Food Science and Technology / v.30 no.6 / pp.1464-1469 / 1998
  • The SOS Chromotest and Ames test were carried out to evaluate the mutagenicity of three chloropropanols. In the SOS Chromotest, 3-monochloro-1,2-propanediol (3-MCPD) and 2,3-dichloro-1-propanol (2,3-DCP), but not 1,3-dichloro-2-propanol (1,3-DCP), induced an SOS response in Escherichia coli PQ37 with a dose-response relationship, and 2,3-DCP was far more genotoxic than 3-MCPD. The genotoxic activities of both compounds, however, were much lower in E. coli PQ35 (PQ37 uvrA+) than in E. coli PQ37, whereas they were much higher in E. coli PQ243 (PQ37 tagA alkA). These results indicate that there are at least two types of DNA lesions caused by these compounds: one is excision-repairable, and the other is 3-methyladenine or a similar lesion that is excision-unrepairable and can induce an adaptive response. In Salmonella typhimurium TA100, all the compounds showed strong mutagenicity, establishing the following genotoxic order: 2,3-DCP > 3-MCPD > 1,3-DCP. However, the mutagenic activities were very low in S. typhimurium TA98 and TA97a. These results suggest that mutation by chloropropanols can be induced by DNA lesions causing base-pair substitutions. From the finding that the mutagenicities of 3-MCPD and 2,3-DCP in S. typhimurium TA1535 were very low compared with those in S. typhimurium TA100, it appears that mutation by both compounds requires error-prone SOS repair.


Quantifying Uncertainty of Calcium Determination in Infant Formula by AAS and ICP-AES (AAS 및 ICP-AES에 의한 조제분유 중 칼슘 함량 분석의 측정불확도 산정)

  • Jun, Jang-Young;Kwak, Byung-Man;Ahn, Jang-Hyuk;Kong, Un-Young
    • Korean Journal of Food Science and Technology / v.36 no.5 / pp.701-710 / 2004
  • Uncertainty was quantified to evaluate the results of calcium determination in infant formula by AAS (atomic absorption spectrometry) and ICP-AES (inductively coupled plasma-atomic emission spectrometry). Uncertainty sources in the measurand, such as sample weight, final sample volume, sample dilution, and the instrumental result, were identified and used as parameters for the combined standard uncertainty based on the GUM (Guide to the Expression of Uncertainty in Measurement) and the draft EURACHEM/CITAC Guide. The uncertainty components of each source were identified as the resolution, reproducibility, and stability of the chemical balance; standard material purity; standard material molecular weight; standard solution concentration; standard solution dilution factor; sample dilution factor; calibration curve; recovery; and instrumental precision, reproducibility, and stability. Each uncertainty component was evaluated by uncertainty type and included in the calculation of the combined uncertainty. The uncertainty sources and components were the same for the AAS and ICP-AES methods, except for the sample dilution factor for AAS. The analytical results and combined standard uncertainties of the calcium content were within the certification range (367±20 mg/100 g) of the CRM (certified reference material) and did not differ significantly between the AAS method following ashing and the ICP-AES method following acid digestion, at 359.52±23.61 mg/100 g and 354.75±16.16 mg/100 g, respectively. Identifying the uncertainty sources related to precision, repeatability, and stability, and maintaining proper instrumental conditions as well as personal proficiency, is needed to reduce analytical error.
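
The GUM combination step described above can be sketched as a root-sum-of-squares of relative standard uncertainties for a multiplicative measurement model. The component names and values below are illustrative assumptions, not the study's uncertainty budget.

```python
import math

# Hypothetical relative standard uncertainties (u_i / x_i) for a few sources
components = {
    "sample weight":     0.0012,
    "final volume":      0.0020,
    "calibration curve": 0.0045,
    "recovery":          0.0030,
    "instrument":        0.0025,
}

def combined_relative_uncertainty(rel_uncertainties) -> float:
    """u_c(y)/y = sqrt(sum((u_i/x_i)^2)) for a purely multiplicative model."""
    return math.sqrt(sum(u * u for u in rel_uncertainties))

calcium = 359.52                        # mg/100 g, AAS result quoted above
u_rel = combined_relative_uncertainty(components.values())
u_c = calcium * u_rel                   # combined standard uncertainty, mg/100 g
U = 2.0 * u_c                           # expanded uncertainty, coverage factor k=2
print(round(u_c, 2), round(U, 2))
```

A full budget would also convert Type B components (e.g. balance resolution) to standard uncertainties via their assumed distributions before combining.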

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using origin-destination (O-D) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the O-D survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting O-D trip length frequency (OD TLF) distributions by trip type are applied to the gravity model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: internal-internal (I-I), internal-external (I-E), and external-external (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available O-D survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the O-D survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged, and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited O-D data, but the O-D data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, selected-link-based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
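
The gravity model with friction factors described above distributes each zone's productions among destination zones in proportion to attraction weighted by a distance-decay friction function. The sketch below uses a negative-exponential friction factor and toy zone data, both assumptions for illustration; the study calibrates empirical friction factor curves per trip type instead.

```python
import math

def gravity_trips(productions, attractions, dist, friction):
    """T_ij = P_i * A_j * F(d_ij) / sum_k(A_k * F(d_ik))."""
    n = len(productions)
    trips = [[0.0] * n for _ in range(n)]
    for i in range(n):
        denom = sum(attractions[k] * friction(dist[i][k]) for k in range(n))
        for j in range(n):
            trips[i][j] = productions[i] * attractions[j] * friction(dist[i][j]) / denom
    return trips

P = [100.0, 200.0, 150.0]                     # zonal truck trip productions (toy)
A = [120.0, 180.0, 150.0]                     # zonal truck trip attractions (toy)
D = [[5, 20, 40], [20, 5, 25], [40, 25, 5]]   # zone-to-zone distances (miles, toy)
T = gravity_trips(P, A, D, friction=lambda d: math.exp(-0.1 * d))
row_sums = [round(sum(row), 6) for row in T]
print(row_sums)  # each row sums to that zone's productions by construction
```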
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected-link-based analyses are conducted using both 16 selected links and 32 selected links. SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and vehicle miles of travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
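
The link adjustment factor computation described above, the ratio of ground count to assigned volume applied to the trip ends of every zone using the selected link, can be sketched as follows. The zone names and volumes are illustrative assumptions.

```python
# Minimal sketch of the SELINK adjustment: scale the trip-end totals of every
# zone whose assigned trips traverse the selected link by (ground count /
# assigned volume). Zone data below are hypothetical.

def selink_factor(ground_count: float, assigned_volume: float) -> float:
    return ground_count / assigned_volume

def adjust_zone_totals(zone_totals, zones_using_link, factor):
    """Apply the link adjustment factor to zones whose trips use the link."""
    return {z: (t * factor if z in zones_using_link else t)
            for z, t in zone_totals.items()}

productions = {"zone_1": 500.0, "zone_2": 800.0, "zone_3": 300.0}
factor = selink_factor(ground_count=950.0, assigned_volume=1000.0)  # 0.95
adjusted = adjust_zone_totals(productions, {"zone_1", "zone_3"}, factor)
print(factor, adjusted)
```

In the study this adjustment is iterated (with the GM possibly recalibrated) rather than applied once.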
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the state highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not exhibit any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the lowest value, while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with WisDOT's computed figure. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, implying potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.111-131 / 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction. The few that exist mainly focus on audited firms, because financial data collection is easier for these firms. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which mainly comprise small and medium-sized firms. The purpose of this paper is to classify non-audited firms under distress according to their financial ratios using a data mining technique, the self-organizing map (SOM). A SOM is a type of artificial neural network that is trained using unsupervised learning to produce a lower-dimensional discretized representation of the input space of the training samples, called a map. It differs from other artificial neural networks in that it applies competitive learning, as opposed to error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is one of the most popular and successful clustering algorithms. In this study, we classify types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns; it is speculated that the firms in this pattern may be under distress due to severe competition in their industries. Approximately 30% of the firms fell into this group.
In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio were not at the level of the growth ratio. It is concluded that the firms in this pattern were under distress in pursuit of expanding their business; about 25% of the firms were in this pattern. Finally, pattern 5 encompassed very solvent firms. Perhaps firms in this pattern were distressed due to a bad short-term strategic decision or due to problems with the firms' proprietors; approximately 18% of the firms fell under this pattern. This study makes both academic and empirical contributions. From the academic perspective, non-audited companies, which tend to go bankrupt easily and have unstructured or easily manipulated financial data, are classified by a data mining technique (the self-organizing map), rather than large audited firms with well-prepared and reliable financial data. From the empirical perspective, even though only the financial data of non-audited firms are analyzed, the analysis is useful for detecting the first symptoms of financial distress, which enables bankruptcy prediction and early-warning management. A limitation of this research is that only 100 companies are analyzed, owing to the difficulty of collecting financial data for non-audited firms, which makes it hard to proceed to analysis by category or firm size. Also, non-financial qualitative data are crucial for the analysis of bankruptcy; thus, non-financial qualitative factors should be taken into account in future work. This study sheds light on distress prediction for non-audited small and medium-sized firms.
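
The competitive learning and neighborhood function described above can be sketched as a minimal 1-D self-organizing map. The toy two-dimensional data, map size, and decay schedules below are assumptions for illustration, not the study's 10-ratio setup.

```python
import math
import random

def train_som(data, n_nodes=5, epochs=200, lr0=0.5, sigma0=2.0, seed=42):
    """Minimal 1-D SOM: competitive learning with a Gaussian neighborhood."""
    random.seed(seed)
    dim = len(data[0])
    nodes = [[random.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)              # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)
        for x in data:
            # Competition: best matching unit = node closest to the sample
            bmu = min(range(n_nodes),
                      key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(dim)))
            for i in range(n_nodes):
                # Gaussian neighborhood along the map preserves topology
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    nodes[i][d] += lr * h * (x[d] - nodes[i][d])
    return nodes

def assign(nodes, x):
    return min(range(len(nodes)),
               key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(len(x))))

# Two toy "financial ratio" clusters: distressed-looking vs healthy-looking
data = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]]
nodes = train_som(data)
labels = [assign(nodes, x) for x in data]
print(labels)  # samples from the same cluster map to the same or adjacent nodes
```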

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.85-109 / 2018
  • A recommender system recommends items that a customer is expected to purchase in the future, based on his or her previous purchase behavior. It has served as a tool for realizing one-to-one personalization for e-commerce service companies. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using a single criterion, the 'overall rating'. However, this has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect feedback from their customers in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints. Moreover, multidimensional ratings are easy to handle and analyze because they are quantitative. However, recommendation using multicriteria ratings also has the limitation that it may omit detailed information on a user's preference, because in most cases it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system that selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, our system is designed to use traditional CF with overall ratings for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset on POI (point-of-interest) recommendation.
Providing personalized POI recommendations is attracting more attention as the popularity of location-based services such as Yelp and Foursquare increases. The dataset was collected from university students via a Web-based online survey system. Using the survey system, we collected the overall ratings as well as the ratings for each criterion for 48 POIs located near K University in Seoul, South Korea. The criteria include 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 items (80%) are used as the training dataset, and the remaining 10 items (20%) are used as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to that of two comparison models: traditional CF and CF with multicriteria ratings. The performance of the recommender systems was evaluated using two metrics: average MAE (mean absolute error) and precision-in-top-N. Precision-in-top-N represents the percentage of truly high overall ratings among the N items that the model predicted would be most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed both traditional CF (avg. MAE = 0.591) and multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF showed worse performance than traditional CF on our dataset, which contradicts the results of most previous studies. This result supports the premise of our study that people have two different types of preference schemes: holistic and composite. Besides MAE, the proposed system outperformed all the comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7.
The results of a paired-samples t-test showed that the proposed system outperformed traditional CF at the 10% significance level, and multicriteria CF at the 1% significance level, in terms of average MAE. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
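A paired-samples t-test of the kind reported above compares per-user errors of two models on the same users. The sketch below computes the t statistic from scratch; the per-user MAE values are purely hypothetical, since the abstract publishes only the averages.

```python
import math
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired-samples t statistic: mean of per-user differences divided by
    the standard error of those differences."""
    diffs = [x - y for x, y in zip(xs, ys)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical per-user MAEs (illustrative only, not the study's data).
mae_hybrid = [0.55, 0.60, 0.58, 0.57, 0.62]
mae_traditional = [0.58, 0.61, 0.60, 0.59, 0.63]
t = paired_t(mae_hybrid, mae_traditional)  # negative t: hybrid error is lower
```

The resulting t statistic is compared against the t distribution with n-1 degrees of freedom to obtain the reported significance levels.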

A Study on the Dimensions, Surface Area and Volume of Grains (곡립(穀粒)의 치수, 표면적(表面積) 및 체적(體積)에 관(關)한 연구(硏究))

  • Park, Jong Min;Kim, Man Soo
    • Korean Journal of Agricultural Science
    • /
    • v.16 no.1
    • /
    • pp.84-101
    • /
    • 1989
  • An accurate measurement of the size, surface area, and volume of agricultural products is essential in many engineering operations such as handling and sorting, and in heat transfer studies of heating and cooling processes. Little information is available on these properties because of the grains' irregular shape, and very little has been published on rough rice, soybean, barley, and wheat. The physical dimensions of grain, such as length, width, thickness, surface area, and volume, vary with variety, environmental conditions, temperature, and moisture content. Recent research has especially emphasized the variation of these properties with important factors such as moisture content. The objectives of this study were to determine the physical dimensions (length, width, and thickness), surface area, and volume of rough rice, soybean, barley, and wheat as a function of moisture content, to investigate the effect of moisture content on these properties, and to develop exponential equations that predict the surface area and volume of the grains as a function of their physical dimensions. The rough rice varieties used in this study were Akibare, Milyang 15, Seomjin, Samkang, Chilseong, and Yongmun; the soybean samples were Jangyeobkong and Hwangkeumkong; the barley samples were Olbori and Salbori; and the wheat samples were Eunpa and Guru. The physical properties of the grain samples were determined at four levels of moisture content, with ten or fifteen replications at each moisture content level for each variety. The results of this study are summarized as follows. 1. Comparing the surface area and volume of the 0.0375 m diameter sphere measured in this study with the values calculated by formula, the percent errors were smallest, 0.65% and 0.77% respectively, at a rotational increment of 15 degrees. 2. 
The statistical tests (t-tests) of the physical properties between the types of rough rice, and between the varieties of soybean and wheat, indicated significant differences at the 5% level. 3. The physical dimensions varied linearly with moisture content; the ratios of length to thickness (L/T) and of width to thickness (W/T) decreased with increasing moisture content in rough rice, while they increased in soybean, and no consistent trend was observed in barley and wheat. In all the sample grains except Olbori, sphericity decreased with increasing moisture content. 4. Over the experimental moisture levels, the surface area and volume were in the ranges of about $45{\sim}51{\times}10^{-6}m^2$, $25{\sim}30{\times}10^{-9}m^3$ for Japonica-type rough rice, about $42{\sim}47{\times}10^{-6}m^2$, $21{\sim}26{\times}10^{-9}m^3$ for Indica${\times}$Japonica-type rough rice, about $188{\sim}200{\times}10^{-6}m^2$, $277{\sim}300{\times}10^{-9}m^3$ for Jangyeobkong, about $180{\sim}201{\times}10^{-6}m^2$, $190{\sim}253{\times}10^{-9}m^3$ for Hwangkeumkong, about $60{\sim}69{\times}10^{-6}m^2$, $36{\sim}45{\times}10^{-9}m^3$ for covered barley, about $47{\sim}60{\times}10^{-6}m^2$, $22{\sim}28{\times}10^{-9}m^3$ for naked barley, about $51{\sim}20{\times}10^{-6}m^2$, $23{\sim}31{\times}10^{-9}m^3$ for Eunpa wheat, and about $57{\sim}69{\times}10^{-6}m^2$, $27{\sim}34{\times}10^{-9}m^3$ for Guru wheat, respectively. 5. The rate of increase of surface area and volume with moisture content was higher in soybean than in the other sample grains, and in rough rice it was slightly higher for the Japonica type than for the Indica${\times}$Japonica type. 6. 
Regression equations for the physical dimensions, surface area, and volume were developed as functions of moisture content; exponential equations for surface area and volume were developed as functions of the physical dimensions; and regression equations for surface area as a function of volume were developed for all grain samples.
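The sphere check in result 1 and the sphericity trend in result 3 can be sketched as follows. The geometric-mean sphericity formula used here is a common definition for grain kernels; the abstract does not state which formula the authors used, so treat it as an assumption.

```python
import math

def sphericity(length, width, thickness):
    """Geometric-mean sphericity: (L*W*T)^(1/3) / L. A standard definition
    for grain kernels (assumed; the paper does not state its formula).
    Equals 1.0 for a sphere and decreases as the kernel elongates."""
    return (length * width * thickness) ** (1 / 3) / length

def sphere_percent_errors(diameter, measured_area, measured_volume):
    """Percent errors of a measured sphere's surface area and volume against
    the analytic values 4*pi*r^2 and (4/3)*pi*r^3, as in result 1."""
    r = diameter / 2
    area_true = 4 * math.pi * r ** 2
    vol_true = (4 / 3) * math.pi * r ** 3
    return (abs(measured_area - area_true) / area_true * 100,
            abs(measured_volume - vol_true) / vol_true * 100)
```

With the study's 0.0375 m calibration sphere, the reported best-case errors of 0.65% and 0.77% correspond to the outputs of `sphere_percent_errors` at the 15-degree rotational increment.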

  • PDF