• Title/Summary/Keyword: default coefficient

19 search results

Default Voting using User Coefficient of Variance in Collaborative Filtering System (협력적 여과 시스템에서 사용자 변동 계수를 이용한 기본 평가간 예측)

  • Ko, Su-Jeong
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1111-1120
    • /
    • 2005
  • In collaborative filtering systems, most users do not rate all items, so the User-Item matrix is highly sparse: it contains missing values for items not rated by users. Generally, such systems predict the preferences of an active user based on the preferences of a group of similar users. Default voting methods, however, first predict all missing values in the User-Item matrix. One common approach to predicting default voting values uses either the average rating of a user or the average rating of an item, but it does not consider the characteristics of items, users, or the distribution of the dataset. We replace the missing values in the User-Item matrix by a default voting method that uses the user coefficient of variance. We select the threshold of the user coefficient of variance automatically from equations and, according to that threshold, determine when to switch between user averages and item averages. However, the relation between the averages and the thresholds of the user coefficient of variance is not always regular across datasets. This is because the distribution of user coefficients of variance in a dataset, as well as their average, affects the threshold. We therefore decide the threshold of the user coefficient of variance by combining the two. We evaluate our method on the MovieLens dataset of user ratings for movies and show that it outperforms previous default voting methods.
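
The switching rule the abstract describes can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the threshold value and the small ratings matrix are made up (the paper derives its threshold automatically):

```python
import statistics

def coefficient_of_variance(ratings):
    """CV = population standard deviation / mean of a user's observed ratings."""
    mean = statistics.mean(ratings)
    return statistics.pstdev(ratings) / mean if mean else 0.0

def fill_missing(matrix, cv_threshold=0.5):
    """Fill missing entries (None): use the user's average when the user's
    rating variability (CV) is below the threshold, else the item's average."""
    n_users, n_items = len(matrix), len(matrix[0])
    item_avgs = []
    for j in range(n_items):
        col = [matrix[i][j] for i in range(n_users) if matrix[i][j] is not None]
        item_avgs.append(statistics.mean(col) if col else 0.0)
    filled = []
    for row in matrix:
        rated = [r for r in row if r is not None]
        user_avg = statistics.mean(rated) if rated else 0.0
        cv = coefficient_of_variance(rated) if rated else 0.0
        filled.append([
            r if r is not None
            else (user_avg if cv < cv_threshold else item_avgs[j])
            for j, r in enumerate(row)
        ])
    return filled
```

A low-variance user's gaps are filled with that user's own average; an erratic rater's gaps fall back to the item averages.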

Default Bayesian testing for the bivariate normal correlation coefficient

  • Kang, Sang-Gil;Kim, Dal-Ho;Lee, Woo-Dong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.22 no.5
    • /
    • pp.1007-1016
    • /
    • 2011
  • This article deals with the problem of testing for the correlation coefficient of the bivariate normal distribution. We propose Bayesian hypothesis testing procedures for the bivariate normal correlation coefficient under noninformative priors. Noninformative priors are usually improper, which yields a calibration problem: the Bayes factor is defined only up to a multiplicative constant. We therefore propose default Bayesian hypothesis testing procedures based on the fractional Bayes factor and the intrinsic Bayes factors under the reference priors. A simulation study and an example are provided.
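
The calibration fix the abstract alludes to can be written out. Under O'Hagan's fractional Bayes factor (notation here is the standard one, not necessarily the paper's), a fraction $b$ of the likelihood is used to normalize away the arbitrary constant in an improper prior $\pi_i$:

\[
B^{F}_{01} = \frac{q_0(b, \mathbf{x})}{q_1(b, \mathbf{x})},
\qquad
q_i(b, \mathbf{x}) = \frac{\int \pi_i(\theta_i)\, f(\mathbf{x}\mid\theta_i)\, d\theta_i}
                          {\int \pi_i(\theta_i)\, f(\mathbf{x}\mid\theta_i)^{b}\, d\theta_i}.
\]

Replacing $\pi_i$ by $c_i\pi_i$ leaves each $q_i$ unchanged, since $c_i$ cancels between numerator and denominator, so $B^{F}_{01}$ is well defined even for improper priors.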

An innovative method for determining the diffusion coefficient of product nuclide

  • Chen, Chih-Lung;Wang, Tsing-Hai
    • Nuclear Engineering and Technology
    • /
    • v.49 no.5
    • /
    • pp.1019-1030
    • /
    • 2017
  • Diffusion is a crucial mechanism that regulates the migration of radioactive nuclides. In this study, an innovative numerical method was developed to simultaneously calculate the diffusion coefficients of a parent nuclide and, subsequently, its series of daughter nuclides in a sequentially reactive through-diffusion model. Two constructed scenarios, a serial reaction (RN_1 → RN_2 → RN_3) and a parallel reaction (RN_1 → RN_2A + RN_2B), were proposed and calculated for verification. First, the accuracy of the proposed three-member reaction equations was validated using several default numerical experiments. Second, by applying the validated numerical experimental concentration-variation data, the as-determined diffusion coefficient of the product nuclide was observed to be identical to the default data. The results demonstrate the validity of the proposed method. The proposed numerical method will be particularly powerful for determining the diffusion coefficients of systems with extremely thin specimens, long diffusion times, and parent nuclides with fast decay constants.

Improvement of Efficient Tone-Mapping Curve using Adaptive Depth Range Coefficient (적응적 깊이 영역 변수를 활용한 효율적인 톤 매핑 커브 개선)

  • Lee, Yong-Hwan;Kim, Youngseop;Ahn, Byoung-Man
    • Journal of the Semiconductor & Display Technology
    • /
    • v.14 no.4
    • /
    • pp.92-97
    • /
    • 2015
  • The purpose of this work is to support a solution for optimizing the TMO (tone mapping operator). JPEG XT Profiles A and C utilize the Erik Reinhard TMO, which works well in most cases; however, detailed information of a scene is lost in some cases. The Reinhard TMO calculates the coefficient of its tone-mapping curve only from the log-average luminance, which in turn loses detail in the bright and dark areas of a scene. Thus, this paper proposes an enhancement of the default TMO for JPEG XT Profile C to optimize the tone-mapping curve. The main idea is to divide the tone-mapping curve into several ranges and set reasonable parameters for each range. Experimental results show that the proposed scheme obtains better performance in dark scenes than the default Reinhard TMO.
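
The global Reinhard curve that the paper starts from can be sketched as follows. The formula is the photographic tone reproduction operator of Reinhard et al.; the parameter values (`key`, `l_white`) and the input luminances are illustrative, and this sketch omits the paper's per-range parameter splitting:

```python
import math

def reinhard_tonemap(luminances, key=0.18, l_white=2.0, delta=1e-6):
    """Map HDR luminances through the global Reinhard curve.
    key: the scene 'key' value; l_white: smallest luminance mapped to white."""
    # Log-average luminance of the scene (the single statistic the default
    # curve is driven by, which is what the paper identifies as the weakness).
    log_avg = math.exp(sum(math.log(delta + l) for l in luminances) / len(luminances))
    out = []
    for l in luminances:
        scaled = key * l / log_avg                          # scale by the key value
        ld = scaled * (1 + scaled / l_white ** 2) / (1 + scaled)
        out.append(ld)
    return out
```

Because `log_avg` is a single global statistic, one coefficient governs the whole curve; the paper's enhancement instead assigns parameters per luminance range.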

Default Bayesian testing for normal mean with known coefficient of variation

  • Kang, Sang-Gil;Kim, Dal-Ho;Lee, Woo-Dong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.2
    • /
    • pp.297-308
    • /
    • 2010
  • This article deals with the problem of testing the mean when the coefficient of variation of a normal distribution is known. We propose Bayesian hypothesis testing procedures for the normal mean under the noninformative prior. The noninformative prior is usually improper, which yields a calibration problem: the Bayes factor is defined only up to a multiplicative constant. We therefore propose objective Bayesian hypothesis testing procedures based on the fractional Bayes factor and the intrinsic Bayes factor under the reference prior. In particular, we develop intrinsic priors that give asymptotically the same Bayes factor as the intrinsic Bayes factor under the reference prior. A simulation study and a real data example are provided.

Undecided inference using logistic regression for credit evaluation (신용평가에서 로지스틱 회귀를 이용한 미결정자 추론)

  • Hong, Chong-Sun;Jung, Min-Sub
    • Journal of the Korean Data and Information Science Society
    • /
    • v.22 no.2
    • /
    • pp.149-157
    • /
    • 2011
  • Undecided inference can be regarded as a missing data problem such as MAR or MNAR. Under the MAR assumption, undecided inference makes use of a logistic regression model: the probability of default for the undecided group is obtained with the regression coefficient vectors fitted on the decided group and compared with the probability of default for the decided group. Under the MNAR assumption, undecided inference makes use of a logistic regression model with an additional feature random vector. Simulation results based on two kinds of real data are obtained and compared. It is found that, under the MAR assumption, the misclassification rates are not much different from those of the raw data. However, the misclassification rates under the MNAR assumption are lower than those under the MAR assumption, and as the ratio of the undecided group increases, the misclassification rates decrease.
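
The MAR step described above (fit on the decided group, score the undecided group with the same coefficients) can be sketched minimally as follows; the toy single-feature data, learning rate, and function names are made up for illustration and are not from the paper:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by stochastic gradient descent on the
    decided group; returns weights [bias, w1, ..., wp]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted default probability
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def default_probability(w, x):
    """Score an applicant (decided or undecided) with the decided-group
    coefficient vector, as in the MAR inference step."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))
```

Under MNAR, the abstract's extra feature random vector would be appended to each `x` before fitting; the scoring step is otherwise unchanged.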

Evaluation of Operational Options of Wastewater Treatment Using EQPS Models (EQPS 모델을 이용한 하수처리장 운전 평가)

  • Yoo, Hosik;Ahn, Seyoung
    • Journal of the Korean Society of Urban Environment
    • /
    • v.18 no.4
    • /
    • pp.401-408
    • /
    • 2018
  • EQPS (Effluent Quality Prediction System, Dynamita, France) was applied to analyze the appropriateness of the bioreactor design of the A sewage treatment plant. The plant was designed by setting the design concentrations of the secondary clarifier effluent to 10 mg/L total nitrogen and 1.8 mg/L total phosphorus in order to comply with the target water quality at the hydrophilic-water level. The retention time of the 4-stage BNR reactor was 9.6 hours: 0.5 hours for the pre-anoxic tank, 1.0 hour for the anaerobic tank, 2.9 hours for the anoxic tank, and 5.2 hours for the aerobic tank. As a result of modeling the winter season, the retention time of the anaerobic tank had to be increased by 0.2 hours to satisfy the target water quality. The default coefficients of the one-step nitrification-denitrification model proposed by the software manufacturer were used to exclude distortion of the modeling results. Since process modeling generally presents optimal conditions, the retention time of the 4-stage BNR should be increased to 9.8 hours considering the bioreactor margin. Accurate use of process modeling in the design stage is a way to ensure the stability of treatment performance and efficiency after construction of the sewage treatment plant.

Optimal Associative Neighborhood Mining using Representative Attribute (대표 속성을 이용한 최적 연관 이웃 마이닝)

  • Jung Kyung-Yong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.4 s.310
    • /
    • pp.50-57
    • /
    • 2006
  • In electronic commerce, most recent personalized recommender systems have applied the collaborative filtering technique. This method calculates similarity weights among users who have similar preferences in order to predict and recommend items that match a user's propensity, commonly using the Pearson correlation coefficient. However, this method can calculate a correlation only when there are items that both users have rated in common; accordingly, prediction accuracy falls. The similarity weight affects not only the prediction of items matching a user's propensity but also the performance of the personalized recommender system. In this study, we examine similarity weights based on vector similarity, entropy, inverse user frequency, and default voting from the information retrieval field, and verify the improvement in prediction accuracy experimentally. The results show that the method combining the entropy-based similarity weight with default voting achieved the best performance.
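
The co-rated-items limitation of Pearson correlation, and the default voting remedy mentioned in the abstract, can be sketched as follows; the neutral default value of 3.0 (the midpoint of a 1-5 scale) is an assumption for illustration, not the paper's choice:

```python
import statistics

def pearson(u, v):
    """Pearson correlation coefficient over two equal-length rating lists."""
    mu, mv = statistics.mean(u), statistics.mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den if den else 0.0

def similarity_with_default_voting(u, v, default=3.0):
    """Substitute a default vote for missing ratings (None), so the
    correlation is computed over all items, not only the co-rated ones."""
    fu = [default if r is None else r for r in u]
    fv = [default if r is None else r for r in v]
    return pearson(fu, fv)
```

Without default voting, two users with few co-rated items yield a correlation estimated from almost no data; filling the gaps first stabilizes the similarity weight.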

Comparative Evaluation of User Similarity Weight for Improving Prediction Accuracy in Personalized Recommender System (개인화 추천 시스템의 예측 정확도 향상을 위한 사용자 유사도 가중치에 대한 비교 평가)

  • Jung Kyung-Yong;Lee Jung-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.6
    • /
    • pp.63-74
    • /
    • 2005
  • In electronic commerce, most recent personalized recommender systems have applied the collaborative filtering technique. This method calculates similarity weights among users who have similar preferences in order to predict and recommend items that match a user's propensity, commonly using the Pearson correlation coefficient. However, this method can calculate a correlation only when there are items that both users have rated in common; accordingly, prediction accuracy falls. The similarity weight affects not only the prediction of items matching a user's propensity but also the performance of the personalized recommender system. In this study, we examine similarity weights based on vector similarity, entropy, inverse user frequency, and default voting from the information retrieval field, and verify the improvement in prediction accuracy experimentally. The results show that the method combining the entropy-based similarity weight with default voting achieved the best performance.

Optimization of SWAN Wave Model to Improve the Accuracy of Winter Storm Wave Prediction in the East Sea

  • Son, Bongkyo;Do, Kideok
    • Journal of Ocean Engineering and Technology
    • /
    • v.35 no.4
    • /
    • pp.273-286
    • /
    • 2021
  • In recent years, as human casualties and property damage caused by hazardous waves have increased in the East Sea, precise wave prediction skills have become necessary. In this study, the Simulating WAves Nearshore (SWAN) third-generation numerical wave model was calibrated and optimized to enhance the accuracy of winter storm wave prediction in the East Sea. We used Source Term 6 (ST6), with physical observations from a large-scale experiment conducted in Australia, and compared its results to Komen's formula, the default in SWAN. As input wind data, we used the Korea Meteorological Administration's (KMA's) operational meteorological model, the Regional Data Assimilation and Prediction System (RDAPS); the European Centre for Medium-Range Weather Forecasts' newest 5th-generation re-analysis data (ERA5); and the Japan Meteorological Agency's (JMA's) meso-scale forecasting data. We analyzed the accuracy of each model's results by comparing them to observation data. For quantitative analysis and assessment, observed wave data at 6 locations from KMA and the Korea Hydrographic and Oceanographic Agency (KHOA) were used, and statistical analysis was conducted to assess model accuracy. As a result, the ST6 models had a smaller root mean square error and a higher correlation coefficient than the default model in significant wave height prediction. However, for peak wave period simulation, the results were incoherent across models and locations. In simulations with different wind data, the simulation using ERA5 as input wind data showed the most accurate results overall but underestimated the wave height in high wave events compared to the simulations using RDAPS and the JMA meso-scale model. In addition, the spatial resolution of the wind plays a more significant role in predicting high wave events. Nevertheless, the numerical model optimized in this study showed some limitations in predicting high waves that rise rapidly due to meteorological events. This suggests that further research is necessary to enhance the accuracy of wave prediction in various climate conditions, such as extreme weather.