• Title/Summary/Keyword: Probability Model (확률 모델)


Characteristics of the Graded Wildlife Dose Assessment Code K-BIOTA and Its Application (단계적 야생동식물 선량평가 코드 K-BIOTA의 특성 및 적용)

  • Keum, Dong-Kwon;Jun, In;Lim, Kwang-Muk;Kim, Byeong-Ho;Choi, Yong-Ho
    • Journal of Radiation Protection and Research / v.40 no.4 / pp.252-260 / 2015
  • This paper describes the technical background of the Korean wildlife radiation dose assessment code K-BIOTA and summarizes its application. K-BIOTA applies a graded approach with three levels: screening assessments (Level 1 & 2) and a detailed assessment based on site-specific data (Level 3). The screening-level assessment is a preliminary step to determine whether the detailed assessment is needed, and calculates the dose rate for grouped organisms rather than for individual biota. In the Level 1 assessment, the risk quotient (RQ) is calculated by comparing the actual media concentration with the environmental media concentration limit (EMCL) derived from a benchmark screening reference dose rate. If the RQ of the Level 1 assessment is less than 1, it can be concluded that the ecosystem would maintain its integrity, and the assessment is terminated. If the RQ is greater than 1, the Level 2 assessment, which calculates the RQ using the average values of the concentration ratio (CR) and equilibrium distribution coefficient (Kd) for the grouped organisms, is carried out for a more realistic assessment; the Level 2 assessment is thus less conservative than Level 1. If the RQ of the Level 2 assessment is less than 1, the ecosystem is again judged to maintain its integrity and the assessment is terminated; otherwise, the Level 3 detailed assessment is performed. In the Level 3 assessment, the radiation dose for the representative organism of a site is calculated using site-specific data on the occupancy factor, CR, and Kd. In addition, K-BIOTA optionally allows an uncertainty analysis of the dose rate with respect to CR, Kd, and the environmental medium concentration in the Level 3 assessment; four probability density functions (normal, lognormal, uniform, and exponential) can be applied. The applicability of the code was tested through participation in the IAEA EMRAS II (Environmental Modeling for Radiation Safety) environmental model comparison study, and as a result, K-BIOTA was shown to be very useful for assessing the radiation risk of wildlife living in various contaminated environments.
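The graded screening described in the abstract reduces to a simple risk-quotient cascade. The sketch below illustrates that control flow only; the function names, EMCL values, and concentrations are hypothetical and not taken from K-BIOTA itself.

```python
def risk_quotient(media_concentration, emcl):
    """Screening step: RQ = measured media concentration / EMCL."""
    return media_concentration / emcl

def graded_assessment(media_concentration, emcl_level1, emcl_level2):
    # Level 1: conservative screening against the benchmark-derived EMCL.
    if risk_quotient(media_concentration, emcl_level1) < 1.0:
        return "Level 1 passed: ecosystem integrity maintained"
    # Level 2: less conservative, using average CR and Kd for grouped organisms.
    if risk_quotient(media_concentration, emcl_level2) < 1.0:
        return "Level 2 passed: ecosystem integrity maintained"
    # Level 3: detailed site-specific dose calculation (occupancy factor, CR, Kd).
    return "Proceed to Level 3 site-specific assessment"

# Illustrative values only.
print(graded_assessment(media_concentration=0.8, emcl_level1=0.5, emcl_level2=1.2))
```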

Development of a Korean Standard Structural Brain Template in Cognitive Normals and Patients with Mild Cognitive Impairment and Alzheimer's Disease (정상노인 및 경도인지장애 및 알츠하이머성 치매 환자에서의 한국인 뇌 구조영상 표준판 개발)

  • Kim, Min-Ji;Jahng, Geon-Ho;Lee, Hack-Young;Kim, Sun-Mi;Ryu, Chang-Woo;Shin, Won-Chul;Lee, Soo-Yeol
    • Investigative Magnetic Resonance Imaging / v.14 no.2 / pp.103-114 / 2010
  • Purpose: To generate a Korean-specific brain template, especially for patients with Alzheimer's disease (AD), by optimizing voxel-based analysis. Materials and Methods: Three-dimensional T1-weighted images were obtained from 123 subjects: 43 cognitively normal subjects, 44 patients with mild cognitive impairment (MCI), and 36 patients with AD. The template and the corresponding a priori maps were created using the matched-pairs approach, considering differences in age, gender, and differential diagnosis (DDX). We measured several characteristics of both our template and the MNI template, including ventricle size. The fractions of gray matter and white matter voxels normalized by the total intracranial volume were also evaluated. Results: The high-resolution template and the corresponding a priori maps of gray matter, white matter (WM), and CSF were created with a voxel size of 1×1×1 mm. Mean distance measures and ventricle sizes differed between the two templates. Our brain template had smaller gray matter and white matter areas than the MNI template, with larger volume differences in gray matter than in white matter. Conclusion: Gray matter and/or white matter integrity studies in populations of Korean elderly and patients with AD should be investigated with this template.
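The tissue-fraction metric used above (GM and WM voxel sums normalized by total intracranial volume) is straightforward to compute from probability maps. A minimal sketch, assuming randomly generated stand-in maps; real templates would be loaded from image files (e.g., with nibabel):

```python
import numpy as np

# Hypothetical tissue probability maps (values in [0, 1]) on a 1x1x1 mm grid.
rng = np.random.default_rng(0)
gm, wm, csf = (rng.random((91, 109, 91)) for _ in range(3))

icv = gm.sum() + wm.sum() + csf.sum()   # total intracranial volume proxy
gm_fraction = gm.sum() / icv            # gray matter fraction
wm_fraction = wm.sum() / icv            # white matter fraction
print(f"GM fraction: {gm_fraction:.3f}, WM fraction: {wm_fraction:.3f}")
```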

A Study of Adjustment for Beginning & Ending Points of Climbing Lanes (오르막차로 시.종점 위치의 보정에 관한 연구)

  • 김상윤;오흥운
    • Journal of Korean Society of Transportation / v.24 no.5 s.91 / pp.35-44 / 2006
  • Acceleration and deceleration curves have been used for design purposes worldwide. At the design level, a single deterministic curve has been used for the design of climbing lanes. It should be noted that this curve was originally derived from an ideally driven truck, and that it is applied during design under the assumption that there is no difference between ideal and real driving conditions. However, observations show that aged vehicles and less attentive drivers may yield lower vehicle performance than the ideal. This paper presents the results of truck speed observations at climbing lanes and the resulting probabilistic variation of the acceleration and deceleration curves. For this purpose, vehicle makers and weights of trucks were identified at freeway toll gates, and vehicle-following speeds were then observed. The 85th-percentile results were compared with the deterministic performance curves for 180, 200, and 220 lb/hp. The performance indicated by the 85th-percentile results obtained from the vehicle-following-speed observations was lower than that of the deterministic performance curves. From these results, it may be concluded that an additional 16.19 to 67.94 m is necessary at the beginning point of climbing lanes and an extension of 53.12 to 103.24 m is necessary at the end point.
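The design statistic used here is the 85th-percentile speed of the observed trucks, which is then compared against the deterministic performance curves. A minimal sketch with made-up spot speeds (the values are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical vehicle-following spot speeds (km/h) of trucks on a climbing grade.
observed_speeds = np.array([62.1, 58.4, 65.0, 55.2, 60.3,
                            57.8, 63.5, 59.9, 61.2, 56.7])

# The 85th-percentile speed is the usual design statistic compared against
# the deterministic acceleration/deceleration performance curves.
v85 = np.percentile(observed_speeds, 85)
print(f"85th-percentile speed: {v85:.1f} km/h")
```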

Consideration of Normal Variation of Perfusion Measurements in the Quantitative Analysis of Myocardial Perfusion SPECT: Usefulness in Assessment of Viable Myocardium (심근관류 SPECT의 정량적 분석에서 관류정량값 정상변이의 고려: 생존심근 평가에서의 유용성)

  • Paeng, Jin-Chul;Lim, Il-Han;Kim, Ki-Bong;Lee, Dong-Soo
    • Nuclear Medicine and Molecular Imaging / v.42 no.4 / pp.285-291 / 2008
  • Purpose: Although automatic quantification software for myocardial perfusion SPECT provides highly objective and reproducible quantitative measurements, there are still some limitations in the direct use of these measurements. In this study we derived parameters using the normal variation of perfusion measurements and tested their usefulness. Materials and Methods: To calculate the normal variation of perfusion measurements on myocardial perfusion SPECT, 55 patients (M:F = 28:27) with a low likelihood of coronary artery disease were enrolled, and $^{201}Tl$ rest/$^{99m}Tc$-MIBI stress SPECT studies were performed. Using a 20-segment model, the mean (m) and standard deviation (SD) of perfusion were calculated for each segment. As a myocardial viability assessment group, another 48 patients with known coronary artery disease who underwent coronary artery bypass graft surgery (CABG) were enrolled. $^{201}Tl$ rest/$^{99m}Tc$-MIBI stress/$^{201}Tl$ 24-hr delayed SPECT was performed before CABG, and SPECT was repeated 3 months after CABG. From the preoperative 24-hr delayed SPECT, $Q_{delay}$ (the perfusion measurement), ${\Delta}_{delay}$ ($Q_{delay}$ - m), and $Z_{delay}$ (($Q_{delay}$ - m)/SD) were defined, and their diagnostic performance for myocardial viability was evaluated using the area under the curve (AUC) in receiver operating characteristic (ROC) curve analysis. Results: Segmental perfusion measurements showed considerable normal variation among segments. In men, the lowest segmental perfusion measurement was 51.8±6.5 and the highest was 87.0±5.9; in women they were 58.7±8.1 and 87.3±6.0, respectively. In the viability assessment, $Q_{delay}$ showed an AUC of 0.633, while those of ${\Delta}_{delay}$ and $Z_{delay}$ were 0.735 and 0.716, respectively. The AUCs of ${\Delta}_{delay}$ and $Z_{delay}$ were significantly higher than that of $Q_{delay}$ (p = 0.001 and 0.018, respectively). ${\Delta}_{delay}$, which showed the highest AUC, had a sensitivity of 85% and a specificity of 53% at the optimal cutoff of -24.7. Conclusion: In the automatic quantification of myocardial perfusion SPECT, the normal variation of perfusion measurements among segments was considerable. In the viability assessment, parameters that take normal variation into account showed better diagnostic performance than the direct perfusion measurement. This study suggests that considering normal variation is important in the analysis of measurements from quantitative myocardial perfusion SPECT.
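The two derived parameters are simple per-segment normalizations of the raw measurement against the reference-group statistics: ${\Delta}_{delay} = Q_{delay} - m$ and $Z_{delay} = (Q_{delay} - m)/SD$. A minimal sketch, with illustrative segmental means and SDs standing in for the low-likelihood group's values:

```python
import numpy as np

# Hypothetical per-segment normal statistics (20-segment model): mean m and SD
# of the perfusion measurement in the low-likelihood reference group.
m = np.full(20, 70.0)    # illustrative segmental means
sd = np.full(20, 6.0)    # illustrative segmental SDs

def viability_parameters(q_delay):
    """Normal-variation-adjusted parameters from the 24-hr delayed SPECT."""
    delta = q_delay - m          # Delta_delay = Q_delay - m
    z = (q_delay - m) / sd       # Z_delay = (Q_delay - m) / SD
    return delta, z

q = np.full(20, 55.0)            # hypothetical segmental measurements
delta, z = viability_parameters(q)
print(delta[0], z[0])            # -15.0 and -2.5 for these illustrative values
```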

Risk Ranking Analysis for the City-Gas Pipelines in the Underground Laying Facilities (지하매설물 중 도시가스 지하배관에 대한 위험성 서열화 분석)

  • Ko, Jae-Sun;Kim, Hyo
    • Fire Science and Engineering / v.18 no.1 / pp.54-66 / 2004
  • In this article, we suggest a hazard assessment method for underground pipelines and identify cost-effective pipeline maintenance schemes. Three methods are applied to rank the hazards of underground pipelines. The first is RBI (Risk Based Inspection), which assesses the effect of the neighboring population, the dimensions and thickness of the pipe, and the operating time, and enables a quantitative estimate of the risk exposure. The second is a scoring system based on the environmental factors of the buried pipelines. Last, the release frequency is quantified using Thomas' theory. In this work, as a result of assessing the hazard using the SPC scheme, the hazard scores related to corrosion of the gas pipelines range from 30 to 70, which means that the assessment criteria represent the relative hazards of actual pipelines well. Therefore, even if one pipeline section has a relatively low score, it can still have a high leakage frequency because of its greater length. The acceptable limit of the pipeline release frequency is 2.50E-2 to 1.00E-1/yr, beyond which appropriate actions must be taken to bring the consequences within the acceptable region. The prediction of total frequency using regression analysis shows that the limiting operating time of a pipeline is in the range of 11 to 13 years, which is consistent with that of actual pipelines. In conclusion, the hazard ranking scheme suggested in this research can be applied very effectively to the maintenance of underground pipelines.
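The point that a low-scoring but long section can dominate leak frequency follows from frequency scaling with length. A toy sketch of that scaling only, with entirely made-up base rates and factors (not Thomas' actual correlation):

```python
def release_frequency(base_rate_per_km_year, length_km, condition_factor=1.0):
    """Toy length-proportional release frequency: grows with length and condition."""
    return base_rate_per_km_year * length_km * condition_factor

# Two hypothetical sections: a short high-score section vs. a long low-score one.
short_bad = release_frequency(base_rate_per_km_year=5e-4, length_km=2.0,
                              condition_factor=3.0)
long_good = release_frequency(base_rate_per_km_year=5e-4, length_km=40.0,
                              condition_factor=1.0)
print(f"short/high-score: {short_bad:.2e}/yr, long/low-score: {long_good:.2e}/yr")
# The long, nominally "safer" section yields the higher expected leak frequency.
```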

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.33-49 / 2018
  • The impact of a TV drama's success on ratings and channel promotion effectiveness is very high, and its cultural and business impact has been demonstrated through the Korean Wave. Therefore, early prediction of a blockbuster TV drama is very important from the strategic perspective of the media industry. Previous studies have tried to predict audience ratings and drama success with various methods, but most have made simple predictions using intuitive factors such as the main actor and time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one that can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast on terrestrial channels over the 10 years from 2003 to 2012. From these data, we selected the 45 most highly ranked and the 45 least highly ranked dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, which we term similarity (the sum of distances). Through this similarity measure, we predicted the success of dramas from the viewers' initial viewing-time pattern distribution over episodes 1-5; a sketch of the measure follows this entry. To confirm that the model is robust to the choice of measurement method, various distance measures were applied and the model's stability was checked. Once the model was established, we improved its predictive power using a grid search. Furthermore, when a new drama was broadcast, we classified viewers who had watched more than 70% of the total airtime as "passionate viewers" and compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that the possibility of a blockbuster TV mini-series can be determined. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy using the initial viewing-time pattern analysis. This paper shows a high prediction rate while suggesting an audience measurement method different from existing ones. Currently, broadcasters rely heavily on a few famous actors (the so-called star system), and competition is more severe than ever because of rising production costs, a long-term recession, aggressive investment by comprehensive programming channels, and large corporations; everyone is in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and advertising is priced on audience ratings as a basic index. The drama market carries demand uncertainty that makes forecasting difficult given the nature of the commodity, while dramas contribute strongly to the financial success of a broadcaster's content; minimizing the risk of failure is therefore essential.
Thus, analyzing the distribution of initial viewing time can provide practical help in establishing a response strategy (scheduling, marketing, story changes, etc.) for the related company. We also found that audience behavior is crucial to the success of a program. In this paper, we define passionate viewing as a measure of how enthusiastically a program is watched, and we can predict the success of a program by calculating the loyalty of these passionate viewers. This way of calculating loyalty can also be used to calculate loyalty to various platforms, and it can support marketing programs such as highlights, script previews, making-of films, characters, games, and other marketing projects.
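As referenced above, the core of the model is a distance between early viewing-time patterns, computed with either a Euclidean or a correlation measure. A minimal sketch with hypothetical episode-1-to-5 patterns (the prototype values are illustrative, not the study's data):

```python
import numpy as np

def similarity(pattern_a, pattern_b, method="euclidean"):
    """Distance between two early viewing-time patterns (smaller = more alike)."""
    a, b = np.asarray(pattern_a, float), np.asarray(pattern_b, float)
    if method == "euclidean":
        return np.linalg.norm(a - b)
    if method == "correlation":
        return 1.0 - np.corrcoef(a, b)[0, 1]   # correlation distance
    raise ValueError(f"unknown method: {method}")

# Hypothetical cumulative viewing-time distributions over episodes 1-5.
hit_prototype = [0.30, 0.45, 0.60, 0.72, 0.85]
new_drama     = [0.28, 0.43, 0.58, 0.70, 0.83]
print(similarity(hit_prototype, new_drama))                  # small -> hit-like
print(similarity(hit_prototype, new_drama, "correlation"))
```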

A stratified random sampling design for paddy fields: Optimized stratification and sample allocation for effective spatial modeling and mapping of the impact of climate changes on agricultural system in Korea (농지 공간격자 자료의 층화랜덤샘플링: 농업시스템 기후변화 영향 공간모델링을 위한 국내 농지 최적 층화 및 샘플 수 최적화 연구)

  • Minyoung Lee;Yongeun Kim;Jinsol Hong;Kijong Cho
    • Korean Journal of Environmental Biology / v.39 no.4 / pp.526-535 / 2021
  • Spatial sampling design plays an important role in GIS-based modeling studies because it increases modeling efficiency while reducing the cost of sampling. In the field of agricultural systems, research demand for high-resolution spatial data-based modeling to predict and evaluate climate change impacts is growing rapidly, and the need for and importance of spatial sampling design are increasing accordingly. The purpose of this study was to design a spatial sampling of paddy fields in Korea (11,386 grids at 1 km spatial resolution) for use in agricultural spatial modeling. A stratified random sampling design was developed and applied for the 2030s, 2050s, and 2080s under two RCP scenarios (RCP 4.5 and RCP 8.5). Twenty-five weather and four soil characteristics were used as stratification variables. Stratification and sample allocation were optimized to ensure the minimum sample size under given precision constraints for 16 target variables, such as crop yield, greenhouse gas emission, and pest distribution. The precision and accuracy of the sampling were evaluated through sampling simulations based on the coefficient of variation (CV) and relative bias, respectively. As a result, the paddy fields could be optimally stratified into 5 to 21 strata with 46 to 69 samples. The evaluation showed that the target variables were within the precision constraints (CV < 0.05, except for crop yield) with low bias (below 3%). These results can contribute to reducing sampling cost and computation time while retaining high predictive power, and the design is expected to be widely used as a representative sample grid in various agricultural spatial modeling studies.
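Allocating samples across strata to minimize variance for a fixed total sample size is commonly done with Neyman allocation (n proportional to stratum size times within-stratum SD). The sketch below shows that standard rule, not the study's exact optimization; the stratum sizes and SDs are hypothetical:

```python
import numpy as np

def neyman_allocation(stratum_sizes, stratum_sds, total_n):
    """Neyman allocation: n_h proportional to N_h * S_h, with at least 1 per stratum."""
    weights = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
    return np.maximum(1, np.round(total_n * weights / weights.sum()).astype(int))

# Hypothetical strata over the 11,386 paddy-field grids.
sizes = [4000, 3500, 2500, 1386]   # grid counts per stratum
sds   = [1.2, 0.8, 2.0, 0.5]       # within-stratum SDs of a target variable
print(neyman_allocation(sizes, sds, total_n=60))
```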

Comparison between Uncertainties of Cultivar Parameter Estimates Obtained Using Error Calculation Methods for Forage Rice Cultivars (오차 계산 방식에 따른 사료용 벼 품종의 품종모수 추정치 불확도 비교)

  • Young Sang Joh;Shinwoo Hyun;Kwang Soo Kim
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.3 / pp.129-141 / 2023
  • Crop models have been used to predict yield under diverse environmental and cultivation conditions, which can support decisions on the management of forage crops. Cultivar parameters are one of the required inputs to crop models, representing the genetic properties of a given forage cultivar. The objectives of this study were to compare calibration and ensemble approaches in order to minimize the uncertainty of crop yield estimates from the SIMPLE crop model. Cultivar parameters were calibrated using the log-likelihood (LL) and the Generic Composite Similarity Measure (GCSM) as objective functions for the Metropolis-Hastings (MH) algorithm. In total, 20 sets of cultivar parameters were generated for each method. Two types of ensemble approaches were examined: the first (Eem) averages the model outputs obtained with the individual parameter sets, and the second (Epm) is the model output for the cultivar parameters obtained by averaging the 20 parameter sets. Comparisons were made for each cultivar and for each error calculation method. 'Jowoo' and 'Yeongwoo', forage rice cultivars used in Korea, were subject to the parameter calibration. Yield data were obtained from experimental fields at Suwon, Jeonju, Naju, and Iksan; data for 2013, 2014, and 2016 were used for parameter calibration, and yield data reported from 2016 to 2018 at Suwon were used for validation. The initial calibration indicated that the genetic coefficients obtained by LL were distributed over a narrower range than those obtained by GCSM. A two-sample t-test comparing the two ensemble approaches found no significant difference between them. The uncertainty of GCSM can be neutralized by adjusting the acceptance probability, and the second ensemble method (Epm) indicates that the uncertainty can be reduced with less computation.
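Calibrating cultivar parameters with Metropolis-Hastings amounts to random-walk sampling against a log objective (LL or GCSM). A minimal, self-contained sketch with a stand-in one-parameter "crop model" and a Gaussian log-likelihood; all data and functions are illustrative, not SIMPLE:

```python
import numpy as np

def metropolis_hastings(log_objective, theta0, n_iter=2000, step=0.05, seed=0):
    """Minimal MH sampler for parameter calibration (illustrative only)."""
    rng = np.random.default_rng(seed)
    theta, lp = np.array(theta0, float), log_objective(theta0)
    samples = []
    for _ in range(n_iter):
        cand = theta + rng.normal(0.0, step, size=theta.size)  # random-walk proposal
        lp_cand = log_objective(cand)
        if np.log(rng.random()) < lp_cand - lp:                # acceptance probability
            theta, lp = cand, lp_cand
        samples.append(theta.copy())
    return np.array(samples)

# Toy log-likelihood: Gaussian error between observed and simulated yields.
obs = np.array([8.1, 7.6, 9.0])
def log_like(theta):
    sim = theta[0] * np.array([1.0, 0.95, 1.1])  # stand-in for the crop model
    return -0.5 * np.sum((obs - sim) ** 2) / 0.5 ** 2

print(metropolis_hastings(log_like, [8.0]).mean(axis=0))  # posterior-mean estimate
```

Repeating such a run 20 times (with different seeds) would yield the kind of 20-member parameter-set pool from which the Eem and Epm ensembles are formed.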

SKU recommender system for retail stores that carry identical brands using collaborative filtering and hybrid filtering (협업 필터링 및 하이브리드 필터링을 이용한 동종 브랜드 판매 매장간(間) 취급 SKU 추천 시스템)

  • Joe, Denis Yongmin;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.77-110 / 2017
  • Recently, the diversification and individualization of consumption patterns through web and mobile devices have accelerated. As a result, the efficient operation of offline stores, a traditional distribution channel, has become more important. To raise both the sales and the profits of stores, stores need to supply and sell the most attractive products to consumers in a timely manner. However, there is a lack of research on which SKUs, out of many products, can increase the probability of sales and reduce inventory costs. In particular, if a company sells products through multiple stores across multiple locations, recommending SKUs that appeal to customers would help increase the sales and profitability of those stores. In this study, recommender systems that have been used for personalized recommendation (collaborative filtering and hybrid filtering) are applied as a store-level SKU recommendation method for a distribution company that handles the same brand through multiple stores across countries and regions. We calculated the similarity of each store using the purchase data of the items each store carries, applied collaborative filtering according to each store's sales history for each SKU, and finally recommended individual SKUs to each store. In addition, the stores were classified into four clusters through principal component analysis (PCA) and cluster analysis using store profile data. A hybrid-filtering recommendation system that applies collaborative filtering within each cluster was also implemented, and the performance of both methods was measured with actual sales data. Most existing recommendation systems have been studied by recommending items such as movies and music to users, and industrial applications have also become popular. However, there has been little research on recommending SKUs for each store by applying these recommendation systems, which have mainly been treated as personalization services, to the store units of distributors handling the same brand. Whereas existing recommendation methodologies targeted the individual domain, this study expands the scope beyond the individual to the store unit of a distribution company handling the same brand SKUs through multiple stores across countries and regions. In addition, whereas existing recommendation systems have been limited to online settings, this study applies data mining techniques to develop an algorithm suitable for offline, store-level recommendation rather than individual-level analysis. The significance of this study is that a personalization recommendation algorithm is applied to multiple sales outlets handling the same brand, a meaningful result is derived, and a concrete methodology that actual companies can build and use as a system is proposed. It is also the first attempt to expand the research area of recommendation systems, which has focused on the personalization domain, to the sales stores of a company handling the same brand.
Using sales data from 05 to 03 in 2014, the top 100 SKUs by store sales volume were limited to 52 SKUs, and SKUs were recommended by collaborative filtering and by the hybrid filtering method. We compared the performance of the two recommendation methods by totaling the sales results. The recommendation method of this study is compared against a reference model in which offline collaborative filtering is applied, to demonstrate higher performance than the existing recommendation method; its results are compared with the hybrid filtering method, a model that reflects the characteristics of the offline store. The proposed method showed higher performance than the existing recommendation method, as demonstrated with actual sales data from large Korean apparel companies. In this study, we propose a method to extend the recommendation system from the individual level to the group level and to approach it efficiently, which is of value in addition to the theoretical framework.
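The store-level collaborative filtering step (compute store-to-store similarity from sales vectors, then score each store's unsold SKUs by similarity-weighted sales of the other stores) can be sketched minimally as below; the store-by-SKU matrix is hypothetical, not the companies' data:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two stores' SKU sales vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical store-by-SKU sales matrix (rows: stores, columns: SKUs).
sales = np.array([[10, 0, 3, 5],
                  [ 8, 1, 2, 6],
                  [ 0, 9, 7, 0]], dtype=float)

target = 0   # recommend SKUs for store 0
sims = [cosine_similarity(sales[target], sales[j]) for j in range(len(sales))]

# Score each SKU by similarity-weighted sales across the other stores,
# then rank only the SKUs the target store does not yet carry.
scores = sum(s * sales[j] for j, s in enumerate(sims) if j != target)
candidates = np.where(sales[target] == 0)[0]
print(candidates[np.argsort(-scores[candidates])])   # SKUs to recommend
```

The hybrid variant described above would first assign stores to one of the four PCA/cluster groups and run the same filtering within each cluster.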

An Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.25-44 / 2013
  • Accessing movie contents has become easier and more frequent with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies. In this situation, searches for users' preferred movie contents are increasing. However, since the amount of available movie content is so large, users need considerable effort and time to search for movies. Hence, there has been much research on recommending personalized items through the analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between the metadata of movies but also relations between the metadata and the user's profile; the relations between metadata items indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we chose genre, actor/actress, keywords, and synopsis as the main metadata, since these affect users' choices of movies of interest. The user model contains demographic information about the user and relations between the user and the movie metadata. In our model, the movie ontology consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For the knowledge base, we entered individual data for 14,374 movies into each concept of the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata that interest a user, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches for candidate movies based on the user's demographic information; here we assign users to groups according to demographic information, define the rules that decide each user's group, and generate the query used to search for candidate movies. The second component searches for candidate movies based on user preferences: when choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords, so users enter their preferences and the system searches for movies accordingly. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of a recommended candidate movie has a weight that is used to decide the recommendation order. The third component merges the results of the first and second components: we calculate the weight of each movie from the weight values of its metadata and sort the movies by weight. The fourth component analyzes the result of the third component, decides the level of contribution of each metadata item, and applies the contribution weight to the metadata; the result of this step is the final recommendation for the user. We tested the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript, and the Protégé API.
In our experiment, we collected the results of 20 men and women ranging in age from 20 to 29, using 7,418 movies with ratings of at least 7.0. We provided Top-5, Top-10, and Top-20 recommended movies to each user, who then chose the movies that interested them. On average, users chose 2.1 interesting movies in the Top-5, 3.35 in the Top-10, and 6.35 in the Top-20, which is better than the results yielded by using each metadata type alone.
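The merge step (third component) is essentially a weighted sum of per-metadata match scores. A minimal sketch of that scoring; the weights, movies, and match values are hypothetical, not taken from the paper:

```python
# Hypothetical metadata weights and candidate-movie match scores, mirroring
# the merge step described above; all numbers are illustrative.
weights = {"genre": 0.4, "actor": 0.3, "keyword": 0.2, "synopsis": 0.1}

candidates = {
    "Movie A": {"genre": 1.0, "actor": 0.5, "keyword": 0.8, "synopsis": 0.2},
    "Movie B": {"genre": 0.6, "actor": 1.0, "keyword": 0.4, "synopsis": 0.9},
}

def score(movie_matches):
    """Weighted sum of per-metadata match scores for one candidate movie."""
    return sum(weights[k] * v for k, v in movie_matches.items())

ranked = sorted(candidates, key=lambda m: score(candidates[m]), reverse=True)
print(ranked)   # recommendation order by weighted metadata contribution
```

The fourth component would then adjust these weights by each metadata item's measured contribution before producing the final Top-N list.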