• Title/Summary/Keyword: Accuracies


Development and Validation of Analytical Method and Antioxidant Effect for Berberine and Palmatine in P.amurense (황백의 지표성분 berberine과 palmatine의 분석법 개발과 검증 및 항산화 효능 평가)

  • Jang, Gill-Woong;Choi, Sun-Il;Han, Xionggao;Men, Xiao;Kwon, Hee-Yeon;Choi, Ye-Eun;Park, Byung-Woo;Kim, Jeong-Jin;Lee, Ok-Hwan
    • Journal of Food Hygiene and Safety / v.35 no.6 / pp.544-551 / 2020
  • The aim of this study was to develop and validate a simultaneous analytical method for berberine and palmatine, the representative marker substances of Phellodendron amurense, and to evaluate their antioxidant activity. We evaluated the specificity, linearity, precision, accuracy, limit of detection (LOD), and limit of quantification (LOQ) of the analytical methods for berberine and palmatine using high-performance liquid chromatography. Our results showed that the correlation coefficients of the calibration curves for berberine and palmatine were 0.9999. The LODs for berberine and palmatine were 0.32 and 0.35 µg/mL and the LOQs were 0.97 and 1.06 µg/mL, respectively. The inter-day and intra-day precision values for berberine and palmatine ranged from 0.12 to 1.93% and from 0.19 to 2.89%, respectively. The inter-day and intra-day accuracies were 98.43-101.45% and 92.39-100.60%, respectively. In addition, the simultaneous analytical method was validated for the detection of berberine and palmatine. Moreover, we conducted FRAP and NaNO2 scavenging activity assays to measure the antioxidant activities of berberine and palmatine, and both showed antioxidant activity. These results suggest that P. amurense could be a potential natural resource with antioxidant activity and that its efficacy can be confirmed by investigating the content of berberine and palmatine.
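Validation figures like the LOD and LOQ above are typically derived from the calibration curve. A minimal sketch of the ICH-style estimation (LOD = 3.3σ/S, LOQ = 10σ/S, where S is the slope and σ the residual standard deviation of the regression) might look like the following; the concentration/response values are hypothetical, not from the paper:

```python
import numpy as np

def ich_lod_loq(concentrations, responses):
    """Estimate LOD and LOQ from a calibration curve in the ICH style:
    LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with S the slope and sigma
    the residual standard deviation of the linear fit."""
    x = np.asarray(concentrations, dtype=float)
    y = np.asarray(responses, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)          # least-squares line
    residuals = y - (slope * x + intercept)
    sigma = np.sqrt(np.sum(residuals ** 2) / (len(x) - 2))  # residual SD
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical calibration data (µg/mL vs. peak area)
lod, loq = ich_lod_loq([1, 2, 5, 10, 25], [10.1, 19.8, 50.3, 99.7, 250.4])
```

By construction LOQ/LOD is fixed at 10/3.3 ≈ 3.0, which is consistent with the reported ranges (0.32-0.35 vs. 0.97-1.06 µg/mL).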

Modification and Validation of an Analytical Method for Dieckol in Ecklonia Stolonifera Extract (곰피추출물의 지표성분 Dieckol의 분석법 개선 및 검증)

  • Han, Xionggao;Choi, Sun-Il;Men, Xiao;Lee, Se-jeong;Oh, Geon;Jin, Heegu;Oh, Hyun-Ji;Kim, Eunjin;Kim, Jongwook;Lee, Boo-Yong;Lee, Ok-Hwan
    • Journal of Food Hygiene and Safety / v.37 no.3 / pp.143-148 / 2022
  • The aim of this study was to establish and validate an analytical method for determining the dieckol content in Ecklonia stolonifera extract. In accordance with the guidelines of the International Conference on Harmonisation (ICH), method validation was performed by measuring the specificity, linearity, precision, accuracy, limit of detection (LOD), and limit of quantification (LOQ) of dieckol using high-performance liquid chromatography with a photodiode array detector. The results showed that the correlation coefficient of the calibration curve (R2) for dieckol was 0.9997. The LOD and LOQ for dieckol were 0.18 and 0.56 µg/mL, respectively. The intra- and inter-day precision values of dieckol were approximately 1.58-4.39% and 1.37-4.64%, respectively. Moreover, the intra- and inter-day accuracies of dieckol were approximately 96.91-102.33% and 98.41-105.71%, respectively. Thus, we successfully validated the analytical method for estimating the dieckol content in E. stolonifera extract.

Clinical Characteristics and Comparison of the Various Methods Used for the Diagnosis of the New Influenza A Pandemic in Korea (한국에서의 2009 신종 인플루엔자 A의 임상양상과 다양한 진단 방법들의 비교)

  • Kwon, Min Jung;Lee, Chang Kyu;Roh, Kyoung Ho;Nam, Myung Hyun;Yoon, Soo Young;Lim, Chae Seung;Cho, Yun Jung;Kim, Young Kee;Lee, Kap No
    • Laboratory Medicine Online / v.1 no.1 / pp.26-34 / 2011
  • Background: Laboratory diagnosis of new influenza A (H1N1) is crucial for managing patients and establishing control and prevention measures. We compared the diagnostic accuracies of the real-time RT-PCR (rRT-PCR) test recommended for confirmation of the new flu and of the viral culture method conventionally used for viral diseases with that of the rapid antigen test (RAT). Methods: We performed RAT, R-mix culture, and rRT-PCR on 861 respiratory samples collected from December 2009 to January 2010 and evaluated the abilities of these methods to detect new influenza A. The relationship among the positive rates of RAT, the grades of culture, and the cycle threshold (Ct) values of rRT-PCR was also evaluated. Results: Of the 861 patients, 308 (35.8%) were diagnosed with new influenza A. The sensitivities, specificities, positive predictive values, and negative predictive values of the tests were as follows: 59.7%, 99.5%, 98.4%, and 81.6% for RAT; 93.2%, 100%, 100%, and 96.3% for R-mix culture; and 95.8%, 100%, 100%, and 97.7% for rRT-PCR. Samples with a weak positive grade in culture and those with Ct values of 30-37 in rRT-PCR showed RAT positivity rates as low as 25.3% and 2.3%, respectively. The hospitalization and death rates of the confirmed patients were 3.2% and 0.3%, respectively, and gastrointestinal symptoms were observed in 7.2% of the patients. Conclusions: The R-mix culture and rRT-PCR tests showed excellent reliability in the diagnosis of new influenza A and could be very useful, especially for samples with low viral load.
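The four reported measures all follow from a 2×2 confusion table. As a sketch, the counts below are back-calculated from the reported RAT rates (59.7% sensitivity on 308 positives, etc.), so they are approximate reconstructions rather than the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
    return (tp / (tp + fn),   # sensitivity: true positives detected
            tn / (tn + fp),   # specificity: true negatives ruled out
            tp / (tp + fp),   # positive predictive value
            tn / (tn + fn))   # negative predictive value

# Approximate RAT counts reconstructed from the reported rates
sens, spec, ppv, npv = diagnostic_metrics(tp=184, fp=3, fn=124, tn=550)
```

With these counts the function reproduces the reported 59.7% / 99.5% / 98.4% / 81.6% to within rounding.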

Assessing the Sensitivity of Runoff Projections Under Precipitation and Temperature Variability Using IHACRES and GR4J Lumped Runoff-Rainfall Models (집중형 모형 IHACRES와 GR4J를 이용한 강수 및 기온 변동성에 대한 유출 해석 민감도 평가)

  • Woo, Dong Kook;Jo, Jihyeon;Kang, Boosik;Lee, Songhee;Lee, Garim;Noh, Seong Jin
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.1 / pp.43-54 / 2023
  • Due to climate change, drought and flood occurrences have been increasing. Accurate projections of watershed discharges are imperative to effectively manage natural disasters caused by climate change. However, climate change and hydrological model uncertainty can lead to imprecise analysis. To address these issues, we used two lumped models, IHACRES and GR4J, to compare and analyze the changes in discharges under climate stress scenarios. The Hapcheon and Seomjingang dam basins were the study sites, and the Nash-Sutcliffe efficiency (NSE) and the Kling-Gupta efficiency (KGE) were used for parameter optimization. Twenty years of discharge, precipitation, and temperature data (1995-2014) were used and divided into training and testing data sets with a 70/30 split. The accuracies of the modeled results were relatively high during both the training and testing periods (NSE>0.74, KGE>0.75), indicating that both models could reproduce the previously observed discharges. To explore the impacts of climate change on modeled discharges, we developed climate stress scenarios by changing precipitation from -50% to +50% in 1% steps and temperature from 0℃ to 8℃ in 0.1℃ steps relative to two decades of weather data, resulting in 8,181 climate stress scenarios. We analyzed the yearly maximum, abundant, and ordinary discharges projected by the two lumped models. We found that the trends of the maximum and abundant discharges modeled by IHACRES and GR4J became more pronounced as the changes in precipitation and temperature increased. The opposite was true for ordinary water levels. Our study demonstrates that quantitative evaluation of model uncertainty is important for reducing the impacts of climate change on water resources.
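The two objective functions used for calibration can be sketched in a few lines (a minimal version; an actual calibration would wrap these in an optimizer over the model parameters):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: distance from the ideal point (1, 1, 1)
    in (correlation, variability ratio, bias ratio) space."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

The 8,181 scenarios correspond to 101 precipitation offsets (-50% to +50% in 1% steps) times 81 temperature offsets (0℃ to 8℃ in 0.1℃ steps).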

Water resources monitoring technique using multi-source satellite image data fusion (다종 위성영상 자료 융합 기반 수자원 모니터링 기술 개발)

  • Lee, Seulchan;Kim, Wanyub;Cho, Seongkeun;Jeon, Hyunho;Choi, Minhae
    • Journal of Korea Water Resources Association / v.56 no.8 / pp.497-508 / 2023
  • Agricultural reservoirs are crucial structures for water resources monitoring, especially in Korea, where water resources are unevenly distributed across seasons. Optical and Synthetic Aperture Radar (SAR) satellites, both utilized as tools for monitoring reservoirs, have their own limitations: optical sensors are sensitive to weather conditions, and SAR sensors are sensitive to noise and multiple scattering over dense vegetation. In this study, we tried to improve water body detection accuracy through optical-SAR data fusion and to quantitatively analyze the complementary effects. We first detected water bodies at the Edong and Cheontae reservoirs by applying the K-means clustering technique to the Normalized Difference Water Index (NDWI) derived from the Compact Advanced Satellite 500 (CAS500), Kompsat-3/3A, and Sentinel-2, and to the SAR backscattering coefficient from Sentinel-1. After that, the improvements in accuracy were analyzed by applying K-means clustering to the 2-D grid space consisting of NDWI and SAR backscatter. Kompsat-3/3A was found to have the best accuracy (0.98 at both reservoirs), followed by Sentinel-2 (0.83 at Edong, 0.97 at Cheontae), Sentinel-1 (0.93 at both), and CAS500 (0.69 and 0.78). By applying K-means clustering to the 2-D space at the Cheontae reservoir, the accuracy of CAS500 was improved by around 22% (resulting accuracy: 0.95), with an 85% improvement in precision and a 14% degradation in recall. The precision of Kompsat-3A (Sentinel-2) was improved by 3% (5%), and recall was degraded by 4% (7%). More precise water resources monitoring is expected to become possible with the development of high-resolution SAR satellites, including CAS500-5, and of image fusion and water body detection techniques.
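The fusion step can be sketched as K-means in the two-dimensional (NDWI, backscatter) feature space. The sample values below are hypothetical, and the deterministic initialization at the NDWI extremes is a simplification of the usual random seeding:

```python
import numpy as np

def kmeans_2class(points, iters=20):
    """Minimal 2-cluster K-means over (NDWI, SAR backscatter) features.
    Water pixels tend to have high NDWI and low backscatter, so the two
    centers are initialized at the NDWI extremes, then the assignment
    and center-update steps alternate until iters is exhausted."""
    pts = np.asarray(points, dtype=float)
    centers = np.stack([pts[pts[:, 0].argmin()], pts[pts[:, 0].argmax()]])
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                      # nearest center
        centers = np.stack([pts[labels == k].mean(axis=0) for k in (0, 1)])
    return labels

# Hypothetical pixels: three land-like, three water-like (NDWI, dB)
labels = kmeans_2class([[-0.2, -6.0], [-0.25, -5.5], [-0.15, -6.5],
                        [0.8, -20.0], [0.85, -19.0], [0.75, -21.0]])
```

The same routine applied to the 1-D NDWI or backscatter values alone corresponds to the single-sensor baselines compared in the abstract.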

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.59-77 / 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the predictive models from the perspective of two different analyses. The first is the analysis period. We divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction time. In order to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one year, two years, or three years later. Therefore, a total of six prediction models are developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is a widely used prediction method that builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed of these, which yields better performance than the others. We obtained data on rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices.
For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that, since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more discernible. The experimental results also show that the stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, meaning that companies in different types of industries show different patterns of rights issues. We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider variety of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared using other data mining techniques such as neural networks, logistic regression, and SVM.
Second, new prediction models should be developed and evaluated that include variables which research in capital structure theory has identified as relevant to rights issues.
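C5.0 itself is a proprietary algorithm, but the core of how such tree inducers grow a tree, choosing the split that maximizes information gain, can be sketched in a few lines (toy data; C5.0 additionally uses the gain ratio, boosting, and pruning):

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label vector, in bits."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(X, y):
    """Return (feature index, threshold, gain) for the single split that
    maximizes information gain -- the criterion family used by C4.5/C5.0
    when growing each node of the tree."""
    base = entropy(y)
    best = (None, None, 0.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:          # candidate thresholds
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            gain = base - (len(left) * entropy(left)
                           + len(right) * entropy(right)) / len(y)
            if gain > best[2]:
                best = (j, t, gain)
    return best
```

Recursing on each side of the chosen split until nodes are pure yields the full tree; converting root-to-leaf paths to rules corresponds to the "Rule set" output option mentioned above.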

The Measurement of Sensitivity and Comparative Analysis of Simplified Quantitation Methods to Measure Dopamine Transporters Using [I-123]IPT Pharmacokinetic Computer Simulations ([I-123]IPT 약역학 컴퓨터시뮬레이션을 이용한 민감도 측정 및 간편화된 운반체 정량분석 방법들의 비교분석 연구)

  • Son, Hye-Kyung;Nha, Sang-Kyun;Lee, Hee-Kyung;Kim, Hee-Joung
    • The Korean Journal of Nuclear Medicine / v.31 no.1 / pp.19-29 / 1997
  • Recently, [I-123]IPT SPECT has been used for the early diagnosis of Parkinson's disease patients (PP) by imaging dopamine transporters. The dynamic time activity curves in the basal ganglia (BG) and occipital cortex (OCC) were obtained for 2 hours without blood samples. These data were then used to measure dopamine transporters by the operationally defined ratio methods of (BG-OCC)/OCC at 2 hrs, the binding potential $R_v=k_3/k_4$ obtained using a graphical method, or $R_A$=(ABBG-ABOCC)/ABOCC over 2 hrs, where ABBG represents the accumulated binding activity in the basal ganglia (${\int}^{120min}_0$ BG(t)dt) and ABOCC the accumulated binding activity in the occipital cortex (${\int}^{120min}_0$ OCC(t)dt). The purpose of this study was to examine the IPT pharmacokinetics and investigate the usefulness of the simplified methods (BG-OCC)/OCC, $R_A$, and $R_v$, which are often assumed to reflect the true values of $k_3/k_4$. The rate constants $K_1$, $k_2$, $k_3$, and $k_4$ used for the simulations were derived using [I-123]IPT SPECT and arterialized blood data with a standard three-compartment model. The sensitivities and time activity curves in BG and OCC were computed by changing $K_1$ and $k_3$ (BG only) every 5 min over 2 hours. The values of (BG-OCC)/OCC, $R_A$, and $R_v$ were then computed from the time activity curves, and linear regression analysis was used to measure the accuracies of these methods. The rate constants $K_1$, $k_2$, $k_3$, $k_4$ at BG and OCC were $1.26{\pm}5.41%$, $0.044{\pm}19.58%$, $0.031{\pm}24.36%$, $0.008{\pm}22.78%$ and $1.36{\pm}4.76%$, $0.170{\pm}6.89%$, $0.007{\pm}23.89%$, $0.007{\pm}45.09%$, respectively. The sensitivities ((${\Delta}S/S$)/(${\Delta}k_3/k_3$)) and ((${\Delta}S/S$)/(${\Delta}K_1/K_1$)) at 30 min and 120 min were measured as (0.19, 0.50) and (0.61, 0.23), respectively. The correlation coefficients and slopes of ((BG-OCC)/OCC, $R_A$, and $R_v$) against $k_3/k_4$ were (0.98, 1.00, 0.99) and (1.76, 0.47, 1.25), respectively. 
These simulation results indicate that a late [I-123]IPT SPECT image may represent the distribution of dopamine transporters. Good correlations were shown between (BG-OCC)/OCC, $R_A$, or $R_v$ and the true $k_3/k_4$, although the slopes between them were not unity. Pharmacokinetic computer simulations may be a very useful technique for studying dopamine transporter systems.
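The two simplified ratio indices can be sketched directly from time-activity curves. The flat synthetic curves below only exercise the arithmetic; real BG curves rise over time as the tracer binds:

```python
import numpy as np

def ratio_indices(t, bg, occ):
    """End-point ratio (BG-OCC)/OCC and accumulated ratio
    R_A = (ABBG - ABOCC)/ABOCC from basal-ganglia (BG) and
    occipital-cortex (OCC) time-activity curves sampled at times t
    (minutes), with the integrals taken by the trapezoidal rule."""
    t, bg, occ = (np.asarray(a, dtype=float) for a in (t, bg, occ))
    end_ratio = (bg[-1] - occ[-1]) / occ[-1]
    trapz = lambda y: np.sum((y[1:] + y[:-1]) / 2 * np.diff(t))
    r_a = (trapz(bg) - trapz(occ)) / trapz(occ)
    return end_ratio, r_a

# Synthetic 0-120 min curves sampled every 5 min
t = list(range(0, 121, 5))
end_ratio, r_a = ratio_indices(t, [2.0] * len(t), [1.0] * len(t))
```

For these constant curves both indices equal 1.0; estimating $R_v=k_3/k_4$ itself would additionally require the graphical (compartmental) analysis described in the abstract.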


Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using only a single sensor, the smartphone accelerometer. The approach that we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since some classes may be correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window of the last 2 seconds. For experiments comparing the performance of END with those of other methods, accelerometer data were collected every 0.1 second for 2 minutes for each activity from 5 volunteers.
Among the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
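The nested-dichotomy construction described above can be sketched as a recursive random split of the class set; END would train one binary classifier (here, the random-forest base learner) at each internal node of each sampled tree and average the trees' predictions:

```python
import random

def build_dichotomy(classes, rng):
    """Randomly split a class set into two non-empty subsets, recursing
    until every leaf holds a single class. One such tree is a nested
    dichotomy; END samples several random trees and combines them."""
    if len(classes) == 1:
        return classes[0]                 # leaf: a single class
    shuffled = classes[:]
    rng.shuffle(shuffled)
    k = rng.randint(1, len(classes) - 1)  # guarantees both sides non-empty
    return (build_dichotomy(shuffled[:k], rng),
            build_dichotomy(shuffled[k:], rng))

# One random dichotomy over four of the paper's activity classes
tree = build_dichotomy(['Sitting', 'Standing', 'Walking', 'Running'],
                       random.Random(0))
```

An ensemble is obtained by calling this with different seeds; each internal tuple node is where a binary classifier would be fitted to separate the two class subsets.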

Clustering Method based on Genre Interest for Cold-Start Problem in Movie Recommendation (영화 추천 시스템의 초기 사용자 문제를 위한 장르 선호 기반의 클러스터링 기법)

  • You, Tithrottanak;Rosli, Ahmad Nurzid;Ha, Inay;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.57-77 / 2013
  • Social media has become one of the most popular media in web and mobile applications. In 2011, social networks and blogs were still the top destinations of online users, according to a study from the Nielsen Company: nearly 4 in 5 active users visited social networks and blogs, and social network and blog sites ruled Americans' internet time, accounting for 23 percent of time spent online. Facebook is the main social network, one on which U.S. internet users spend more time than on other services such as Yahoo, Google, AOL Media Network, Twitter, LinkedIn, and so on. As a recent trend, many companies promote their products on Facebook by creating "Facebook Pages" that refer to specific products. The "Like" option allows users to subscribe to a page and receive updates on their interests from it. Film makers around the world also market and promote their films by exploiting the advantages of the "Facebook Page". In addition, a great number of streaming service providers allow users to subscribe to their services to watch and enjoy movies and TV programs; subscribers can instantly watch movies and TV programs over the internet on PCs, Macs, and TVs. Netflix alone, the world's leading subscription service, has more than 30 million streaming members in the United States, Latin America, the United Kingdom, and the Nordics. In fact, a huge number of movies and TV programs of different genres are offered to subscribers. In contrast, users need to spend a lot of time finding the right movies related to their genres of interest. In recent years, many researchers have proposed methods to improve the prediction of ratings or preferences so as to recommend the most relevant items, such as books, music, or movies, to a target user or a group of users with the same interest in particular items.
One of the most popular methods for building a recommendation system is traditional Collaborative Filtering (CF). The method computes the similarity between the target user and other users, who are then clustered into groups with the same interests according to the items they have rated. It then predicts ratings for other items from the same group of users in order to make recommendations. There are many kinds of items that could be studied for recommendation, such as books, music, movies, news, videos, and so on; in this paper, however, we focus only on movies. There are several challenges for the CF task. The first is the "sparsity problem", which occurs when user preference information is insufficient; recommendation accuracy is then lower than for neighborhoods composed of users with a large number of ratings. The second is the "cold-start problem", which occurs whenever new users or items, each with no ratings or only a few, are added to the system. For instance, no personalized predictions can be made for a new user without any ratings on record. In this research, we propose a clustering method based on users' genre interests extracted from a social network service (SNS) and on users' movie rating information to solve the cold-start problem. Our proposed method clusters the target user together with other users by combining genre interest and rating information. A huge amount of interesting and useful user information can be obtained from the Facebook Graph; we extract information from the "Facebook Pages" that users have "Liked". Moreover, we use the Internet Movie Database (IMDb) as the main dataset. IMDb is an online database consisting of a large amount of information related to movies and TV programs, including actors.
This dataset is used not only to provide movie information in our Movie Rating System but also as a resource for the movie genre information extracted from the "Facebook Pages". The user must first log in to the Movie Rating System with their Facebook account; at the same time, our system collects their genre interests from the "Facebook Pages". We conducted several experiments to see how our method performs and compared it with other methods. First, we compared our proposed method in the normal recommendation setting to see how our system improves the recommendation results; then we tested the method in the cold-start setting. Our experiments show that our method outperforms the other methods, producing better results in both cases.
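The clustering described above relies on a similarity between combined user profiles. A minimal sketch follows; the feature layout is an assumption for illustration (binary genre-interest flags from "Liked" pages concatenated with movie ratings), not the paper's exact formulation:

```python
import math

def profile_similarity(u, v):
    """Cosine similarity between two user profile vectors, each assumed
    to concatenate genre-interest flags (from 'Liked' Facebook Pages)
    with the user's movie ratings. Returns 0.0 for an all-zero profile,
    e.g. a brand-new cold-start user with no 'Likes' or ratings yet."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical profiles: [action, comedy, drama flags, then two ratings]
sim = profile_similarity([1, 0, 1, 4.0, 0.0], [1, 0, 1, 5.0, 0.0])
```

For a cold-start user the rating entries are empty, so the genre flags alone drive the similarity, which is exactly the leverage the proposed method seeks.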