• Title/Summary/Keyword: Weight bias


Evaluation of Digoxin Dosing Methods (DIGOXIN 용량결정 방법들의 평가)

  • Ryu, Yunmi;Shin, Wan-Gyoon;Lee, Myung-kul;Lee, Min-Hwa
    • Korean Journal of Clinical Pharmacy / v.3 no.1 / pp.15-20 / 1993
  • The ability of 7 published methods to precisely predict serum digoxin concentration was evaluated in a group of 50 patients. Two methods of estimating creatinine clearance and two estimates of lean body weight were employed as input variables for the 7 dosing methods. TDX was used to determine the nadir SDCs (serum digoxin concentrations) in 50 inpatients meeting predetermined study criteria. All patients, whose ages ranged from 19 to 71 years, had steady-state digoxin levels, were on oral digoxin, and were free from liver dysfunction, thyroid dysfunction, and renal failure. The correlation coefficients (r) of predicted versus observed SDCs were determined, and the mean error (ME) was calculated for each method to reflect bias. No substantial difference in predictive reliability was evident among the methods in the total group. Poor correlations existed between predicted and observed SDCs (r<0.4), and these correlations were not significantly affected by age or gender. However, a relatively higher correlation and lower ME were found for the CHF group with the Jelliffe method (r=0.5, p<0.05).
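The two evaluation statistics this abstract relies on, the correlation coefficient (r) and the mean error (ME) as a measure of bias, can be sketched as follows. The SDC values below are hypothetical illustrations, not study data:

```python
import numpy as np

def mean_error(predicted, observed):
    """Mean error (ME): average of (predicted - observed); its sign shows bias."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean(predicted - observed))

def correlation(predicted, observed):
    """Pearson correlation coefficient r of predicted versus observed values."""
    return float(np.corrcoef(predicted, observed)[0, 1])

# Hypothetical SDC values in ng/mL -- illustration only, not study data.
pred = [1.2, 0.9, 1.5, 1.1, 0.8]
obs = [1.0, 1.0, 1.3, 1.2, 0.7]
me = mean_error(pred, obs)   # positive ME means the method overpredicts on average
r = correlation(pred, obs)
```

A method with r close to 1 tracks the observed concentrations well, while an ME near 0 indicates little systematic bias; the study's methods showed r<0.4 overall.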


Evolutionary Computing Driven Extreme Learning Machine for Objected Oriented Software Aging Prediction

  • Ahamad, Shahanawaj
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.232-240 / 2022
  • To fulfill user expectations, the rapid evolution of software techniques and approaches has necessitated reliable and flawless software operations. Aging prediction in software under operation is becoming a basic and unavoidable requirement for ensuring systems' availability, reliability, and operations. In this paper, an improved evolutionary computing-driven extreme learning scheme (ECD-ELM) is suggested for object-oriented software aging prediction. To perform aging prediction, we employed a variety of metrics, including program size, McCabe complexity metrics, Halstead metrics, runtime failure event metrics, and some unique aging-related metrics (ARM). In our suggested paradigm, OOP software metrics are extracted after pre-processing, which includes outlier detection and normalization. This technique improved our proposed system's ability to deal with instances with unbalanced biases and metrics. Further, different dimensionality reduction and feature selection algorithms, such as principal component analysis (PCA), linear discriminant analysis (LDA), and T-test analysis, have been applied. We have suggested a single-hidden-layer multi-feed-forward neural network (SL-MFNN) based ELM, where an adaptive genetic algorithm (AGA) is applied to estimate the weight and bias parameters for ELM learning. Unlike the traditional neural network model, the implementation of GA-based ELM with LDA feature selection outperformed other aging prediction approaches in terms of prediction accuracy, precision, recall, and F-measure. The results affirm that the implementation of outlier detection, normalization of imbalanced metrics, LDA-based feature selection, and GA-based ELM can be a reliable solution for object-oriented software aging prediction.
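A minimal sketch of the classic ELM baseline may help place this contribution: in a standard ELM, the hidden-layer weights and biases are chosen randomly and only the output weights are solved in closed form, whereas the proposed scheme replaces the random choice with an adaptive genetic algorithm. The network size and toy regression data below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=30):
    """Classic ELM training: random hidden weights and biases, then the
    output weights solved in closed form via the pseudoinverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random hidden weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression target (illustrative): y = x1 + x2.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] + X[:, 1]
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
```

The closed-form output-weight solution is what makes ELM training fast; the paper's AGA step searches over the hidden weights and biases instead of leaving them random.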

Development of Machine Learning Model of LTPO Devices (LTPO 소자의 머신 러닝 모델 개발)

  • Jungsoo Eun;Jinsoo Ahn;Minseok Lee;Wooseok Kwak;Jonghwan Lee
    • Journal of the Semiconductor & Display Technology / v.22 no.4 / pp.179-184 / 2023
  • We propose a modeling methodology for a CMOS inverter made of LTPO TFTs using machine learning. LTPO combines the advantages of the LTPS TFT, with its high electron mobility, as a driving TFT and the IGZO TFT, with its low off-current, as a switching TFT. However, since a unified model of both LTPS and IGZO TFTs is still lacking, it is necessary to develop a SPICE-compatible compact model to simulate the LTPO current-voltage characteristics. In this work, a generic framework for combining the existing formula for the I-V characteristics with an artificial neural network is presented. The weight and bias values of the ANN for the LTPS and IGZO TFTs are obtained and implemented in the PSPICE circuit simulator to predict CMOS inverter behavior. This methodology enables efficient modeling for predicting LTPO TFT circuit characteristics.


Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.141-166 / 2019
  • Recently, channels such as social media and SNS create enormous amounts of data, and among all kinds of data the portion of unstructured data represented as text has increased geometrically. Because it is difficult to check all text data, it is important to access the data rapidly and grasp the key points of the text. Owing to this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms have been proposed lately to generate summaries objectively and effectively, an approach called "automatic summarization". However, almost all text summarization methods proposed to date construct summaries focused on the frequency of contents in the original documents. Such summaries are limited in covering low-weight subjects that are mentioned less in the original text. If summaries include the contents of only the major subjects, bias occurs and causes loss of information, so that it is hard to ascertain every subject the documents have. To avoid this bias, it is possible to summarize with balance between the topics a document has so that all subjects in the document can be ascertained, but imbalance in the distribution between those subjects still remains. To retain the balance of subjects in a summary, it is necessary to consider the proportion of every subject the documents originally have and also to allocate portions to subjects equally, so that even sentences of minor subjects can be included in the summary sufficiently. In this study, we propose a "subject-balanced" text summarization method that secures balance between all subjects and minimizes the omission of low-frequency subjects. For the subject-balanced summary, we use two summary evaluation metrics, "completeness" and "succinctness".
Completeness is the property that a summary should fully include the contents of the original documents, and succinctness means that a summary has minimum internal duplication. The proposed method has three phases. The first phase is constructing subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights it is possible to identify highly related terms for every topic, and the subjects of the documents can be found from the various topics composed of terms with similar meanings. Then a few terms that represent each subject well are selected; in this method they are called "seed terms". However, these terms are too few to explain each subject sufficiently, so enough terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion, finding terms similar to the seed terms. Word vectors are created by Word2Vec modeling, and from those vectors the similarity between all terms can be derived using cosine similarity: the higher the cosine similarity between two terms, the stronger their relationship. Terms with high similarity values to the seed terms of each subject are selected, and by filtering these expanded terms the subject dictionary is finally constructed. The next phase is allocating subjects to every sentence of the original documents. To grasp the contents of all sentences, frequency analysis is first conducted with the specific terms that compose the subject dictionaries. The TF-IDF weight of each subject is then calculated, making it possible to determine how much each sentence explains each subject. However, the TF-IDF weight has the limitation that it can grow without bound, so the TF-IDF weights of every subject in each sentence are normalized to values between 0 and 1.
Then, by allocating to every sentence the subject with the maximum TF-IDF weight among all subjects, sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to determine the similarity between subject sentences, and a similarity matrix is formed. By repetitive sentence selection, it is possible to generate a summary that fully includes the contents of the original documents and minimizes internal duplication. For the evaluation of the proposed method, 50,000 TripAdvisor reviews were used for constructing the subject dictionaries and 23,087 reviews for generating summaries. A comparison between the proposed method's summary and a frequency-based summary was also performed; as a result, it was verified that the summary from the proposed method better retains the balance of all subjects that the documents originally have.
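The normalize-then-allocate step in the second phase can be sketched as follows. The per-subject min-max normalization and the toy sentence-by-subject TF-IDF matrix are illustrative assumptions, since the abstract does not give the exact normalization formula:

```python
import numpy as np

# Hypothetical sentence-by-subject raw TF-IDF weights (illustration only):
# rows = sentences, columns = subjects.
tfidf = np.array([
    [3.2, 0.5, 1.1],
    [0.2, 2.8, 0.9],
    [1.5, 1.4, 4.0],
])

# Rescale each subject's weights into [0, 1]; min-max scaling is one
# common choice, assumed here for illustration.
lo = tfidf.min(axis=0)
hi = tfidf.max(axis=0)
norm = (tfidf - lo) / (hi - lo)

# Allocate every sentence to the subject with its maximum normalized weight,
# yielding the per-subject sentence groups used for summary generation.
allocation = norm.argmax(axis=1)
```

Each entry of `allocation` is the index of the subject a sentence belongs to, so sentences of minor subjects still end up in their own group rather than being drowned out by high-frequency subjects.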

Effects for kangaroo care: systematic review & meta analysis (캥거루 케어가 미숙아와 어머니에게 미치는 효과 : 체계적 문헌고찰 및 메타분석)

  • Lim, Junghee;Kim, Gaeun;Shin, Yeonghee
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.3 / pp.599-610 / 2016
  • This paper reports the results of a systematic review (SR) and meta-analysis comparing the effects of kangaroo care on mothers and premature infants. Randomized clinical trial studies published through February 2015 were included; for the domestic literature, non-randomized clinical trial research was included without restriction on the level of study design. Ovid-Medline, CINAHL, PubMed, KoreaMed, the National Library of Korea, the National Assembly Library, NDSL, KISS, RISS, and KMbase were searched by combining the main terms ((kangaroo OR KC OR skin-to-skin) AND (care OR contact)) AND (infant OR preterm OR Low Birth Weight OR LBW), together with the equivalent Korean keyword combinations. Through the selection process, 25 studies (n=3,051) were finally included. The methodology checklist for randomized controlled trials (RCTs) designed by SIGN (Scottish Intercollegiate Guidelines Network) was used to assess the risk of bias. The overall risk of bias was regarded as low: 16 studies were rated "++" and 9 studies were rated "+". As a result of the meta-analysis, kangaroo care had an insignificant effect on premature mortality and severe infection/sepsis. Its effects on hyperthermia incidence, growth and development (height and weight), mother-infant attachment, hypothermia incidence, length of hospital stay, breastfeeding rate, sleep, anxiety, confidence, and gratification of the mothering role were significant. For satisfaction with role performance, depression, and stress, individual studies presented contradictory results while showing an overall significant difference. This study has some limitations because few RCTs on kangaroo care have been conducted in the country; therefore, further RCTs on kangaroo care should be conducted.

Estimation of Drought Rainfall by Regional Frequency Analysis Using L and LH-Moments (II) - On the method of LH-moments - (L 및 LH-모멘트법과 지역빈도분석에 의한 가뭄우량의 추정 (II)- LH-모멘트법을 중심으로 -)

  • Lee, Soon-Hyuk;Yoon, Seong-Soo;Maeng, Sung-Jin;Ryoo, Kyong-Sik;Joo, Ho-Kil;Park, Jin-Seon
    • Journal of The Korean Society of Agricultural Engineers / v.46 no.5 / pp.27-39 / 2004
  • In the first part of this study, five regions of Korea, homogeneous in topographical and geographical aspects and excluding the Jeju and Ulreung islands, were identified by the K-means clustering method. A total of 57 rain gauges were used for the regional frequency analysis with minimum rainfall series for the consecutive durations. The Generalized Extreme Value distribution was confirmed as optimal among the applied distributions. Drought rainfalls for the various return periods were estimated by at-site and regional frequency analysis using the L-moments method, and the design drought rainfalls estimated by regional frequency analysis were confirmed to be more appropriate than those from at-site frequency analysis. In the second part of this study, the LH-moment ratio diagram and the Kolmogorov-Smirnov test on the Gumbel (GUM), Generalized Extreme Value (GEV), Generalized Logistic (GLO), and Generalized Pareto (GPA) distributions were used to obtain the optimal probability distribution. Design drought rainfalls were then estimated by both at-site and regional frequency analysis using LH-moments and the GEV distribution, which was confirmed as optimal among the applied distributions, with the observed and simulated data resulting from Monte Carlo techniques. The design drought rainfalls derived by regional frequency analysis using the L1, L2, L3, and L4-moments (LH-moments) method showed higher reliability than those of at-site frequency analysis in view of the RRMSE (Relative Root-Mean-Square Error), RBIAS (Relative Bias), and RR (Relative Reduction) of the estimated design drought rainfalls. Relative efficiency was calculated to judge the relative merits of the design drought rainfalls derived by regional frequency analysis using L-moments and LH-moments, applied in the first and second reports of this study, respectively. Consequently, the design drought rainfalls derived by regional frequency analysis using L-moments were shown to be more reliable than those using LH-moments. Finally, design drought rainfalls for the five classified homogeneous regions and the various consecutive durations were derived by regional frequency analysis using L-moments, which this study confirmed as the more reliable method. Maps of these design drought rainfalls were produced by the inverse distance weighting method with Arc-View, one of the GIS techniques.
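As a rough illustration of the machinery underlying both reports, the first two sample L-moments can be computed from probability-weighted moments. This is a textbook sketch with made-up rainfall values, not the authors' implementation:

```python
import numpy as np

def sample_l_moments(data):
    """First two sample L-moments from probability-weighted moments.

    lambda1 is the sample mean; lambda2 (the L-scale) is the robust
    dispersion measure that L-moment ratio diagrams build on.
    """
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    b0 = x.mean()                                   # PWM beta_0
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # PWM beta_1
    lambda1 = b0
    lambda2 = 2.0 * b1 - b0
    return lambda1, lambda2

# Hypothetical minimum rainfall series (mm), illustration only.
l1, l2 = sample_l_moments([12.0, 35.0, 18.0, 44.0, 25.0])
```

Ratios built from these quantities (and their higher-order analogues) are what the L-moment and LH-moment ratio diagrams compare against candidate distributions such as GEV, GLO, and GPA.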

Development of Biomass Allometric Equations for Pinus densiflora in Central Region and Quercus variabilis (중부지방소나무 및 굴참나무의 바이오매스 상대생장식 개발)

  • Son, Yeong-Mo;Lee, Kyeong-Hak;Pyo, Jung-Kee
    • Journal of agriculture & life science / v.45 no.4 / pp.65-72 / 2011
  • The objective of this research is to develop biomass allometric equations for Pinus densiflora in the central region and for Quercus variabilis. To develop the biomass allometric equations by species and tree component, data were collected from 30 plots (70 trees) for Pinus densiflora in the central region and from 15 plots (32 trees) for Quercus variabilis. This study used two sets of independent variables: (1) diameter at breast height alone, and (2) diameter at breast height and height. The equation forms were divided into exponential, logarithmic, and quadratic functions. The biomass allometric equations were validated using the fitness index, standard error of estimate, and bias. From these methods, the most appropriate equations for estimating total tree biomass for each species are as follows: $W=aD^b$ and $W=aD^bH^c$, with fitness indices of 0.937 and 0.943, for Pinus densiflora stands in the central region; and $W=a+bD+cD^2$ and $W=aD^bH^c$, with fitness indices of 0.865 and 0.874, for Quercus variabilis stands. In addition, the best-performing biomass allometric equation for Pinus densiflora in the central region is $W=aD^b$, and for Quercus variabilis it is $W=a+bD+cD^2$. The results of this study could be useful for overcoming the disadvantages of the existing biomass allometric equations and for calculating reliable carbon stocks for Pinus densiflora in the central region and Quercus variabilis in Korea.
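Power-law forms such as $W=aD^b$ are commonly fitted by least squares in log-log space. A minimal sketch with hypothetical DBH-biomass pairs (not the study's measurements) is:

```python
import numpy as np

def fit_power_law(D, W):
    """Fit W = a * D**b by ordinary least squares in log-log space:
    ln W = ln a + b * ln D is a straight line in (ln D, ln W)."""
    b, log_a = np.polyfit(np.log(D), np.log(W), 1)
    return np.exp(log_a), b

# Hypothetical DBH (cm) / biomass (kg) pairs generated from a=0.1, b=2.4.
D = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
W = 0.1 * D ** 2.4
a, b = fit_power_law(D, W)   # recovers a ~ 0.1, b ~ 2.4
```

The same log-linear trick extends to $W=aD^bH^c$ by regressing $\ln W$ on both $\ln D$ and $\ln H$; quadratic forms like $W=a+bD+cD^2$ are instead fitted directly in the original scale.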

A news visualization based on an algorithm by journalistic values (저널리즘 가치에 기초한 알고리즘을 이용한 뉴스 시각화)

  • Park, Daemin;Kim, Gi-Nam;Kang, Nam-Yong;Suh, Bongwon;Ha, Hyo-Ji;On, Byung-Won
    • Journal of the HCI Society of Korea / v.9 no.2 / pp.5-12 / 2014
  • There has been widespread criticism of online news services due to their bias toward sensational and soft news; thus, news services based on journalistic values are socially demanded. News source network analysis (NSNA), an algorithm that clusters and weights news sources, quotes, and articles, was suggested in a previous study as a method to emphasize journalistic values such as facts, variety, depth, and criticism. This study proposes 'News Sources', a visualization tool for NSNA. 'News Sources' shows news as bar graphs, weighted by facts and criticism and arranged by organization and subject. This study designed a beta version using KINDS, a news archive of the Korean Press Foundation.

A Revised Benefit-Cost Analysis of the Korean TUR Program (우리나라 고독성물질 사용저감 규제의 수정 편익-비용분석)

  • Yoon, Daniel Jongsoo;Byun, Hun-Soo
    • Clean Technology / v.26 no.3 / pp.168-176 / 2020
  • The introduction of the Korea toxics use reduction (TUR) program to build a clean society is generally evaluated against socio-economic criteria. Among various techniques, benefit-cost analysis is the most commonly used; this method focuses on the calculation and comparison of all the benefits and costs attributable to the TUR program. However, since it is reasonable to consider not only economic criteria but also policy criteria in the evaluation process, it is necessary to reflect the criteria weights in the benefits and costs. This study aims at developing a new evaluation technique to achieve this purpose and at applying it to the Korean TUR program to be implemented in 2020. This study selected competitiveness, the emission reduction ratio of toxic substances, and health improvement as policy criteria. The Analytic Hierarchy Process (AHP) technique was first used to calculate the weights, and then, based on the results, the concept of information entropy introduced by Claude Shannon was used to eliminate subjective bias. As a result of the study, it was found that the revised benefit-cost analysis, considering the weights of the policy criteria as well as the existing economic criteria, could be a reasonable alternative for evaluating the feasibility of TUR regulations for highly toxic substances.
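The Shannon-entropy step mentioned above can be sketched as follows. This shows only the standard entropy-weighting calculation, not the preceding AHP step or the paper's actual data; the decision matrix is a hypothetical illustration:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from Shannon entropy.

    Columns of X are criteria, rows are alternatives; criteria whose
    values vary more across alternatives receive larger weights,
    which counteracts subjective bias in hand-assigned weights.
    """
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    p = X / X.sum(axis=0)                    # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)       # normalized entropy per criterion
    d = 1.0 - e                              # degree of diversification
    return d / d.sum()

# Hypothetical scores of 3 alternatives on 2 criteria: the second
# criterion is constant, so it carries no information and gets weight ~0.
X = [[0.2, 5.0],
     [0.4, 5.0],
     [0.4, 5.0]]
w = entropy_weights(X)
```

A constant criterion has maximum entropy and therefore zero diversification, so all weight shifts to the criterion that actually discriminates between alternatives.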

Evaluation of Coastal Sediment Budget on East Coast Maeongbang Beach by Wave Changes (파랑 변화에 따른 동해안 맹방 해수욕장 연안 표사수지 파악)

  • Kim, Gweon-Su;Ryu, Ha-Sang;Kim, Sang-Hoon
    • Journal of Ocean Engineering and Technology / v.33 no.6 / pp.564-572 / 2019
  • Numerical simulation of sediment transport with the Delft3D model was conducted to examine changes in the sediment budget caused by long-term wave changes at Maengbang beach. Representative waves were generated with input reduction tools using NOAA NCEP wave data covering about 40 years, from January 1979 to May 2019. To determine the adequacy of the model, wave and depth changes were compared and verified against wave and depth data observed for about 23 months beginning in March 2017. In the error analysis, the bias was 0.05 and the root mean square error was 0.23, which indicated that the numerical wave results were satisfactory; the observed depth changes and the numerical results were also similar. In addition, to examine the effect of long-term changes in the waves, the NOAA wave data were classified into representative wave grades, and the annual trend of the representative waves was analyzed. After deciding the weight of each wave class considering the changed wave environment in 2100, the amounts of sedimentation and deposition and the sediment transport budget were reviewed for the same period. The results indicated that the sedimentation pattern did not change significantly compared to the current state, and the local sediment budget was slightly smaller than in the present state. There were local increases in sediment budget transport, but no significant difference in the net amount of sediment movement.