• Title/Summary/Keyword: inference rate


Implementation of Medical Information System for Korean by Tissue Mineral Analysis (모발분석 및 처리를 위한 한국형 의료 정보 시스템 구축)

  • 조영임
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.1
    • /
    • pp.148-160
    • /
    • 2003
  • TMA (Tissue Mineral Analysis) is a very popular method of hair mineral analysis for health care professionals at medical centers in over 48 countries. Assessment of nutritional minerals and toxic elements in the hair is important not only for determining adequacy, deficiencies, and imbalances, but also for assessing their relative relationships in the body. In Korea, the TMA method has some problems. Because there is no medical information database suitable for analyzing Koreans, requested TMA analyses must be sent to TEI-USA. However, since the TMA results from TEI-USA consist of English documents and graphic files that cannot be opened, their usability is very low and substantial fees must be paid. The reliability of the TMA results is also questionable, since they are based on a database of Western health and mineral standards. To solve these problems, I developed the first TMA medical information system in Korea. The system analyzes complex tissue mineral data with a multi-stage decision tree classifier. It is also built on multiple fuzzy databases and therefore analyzes TMA data by fuzzy inference methods. Effectiveness testing of the system showed increased business efficiency and satisfaction rates of 86% and 92%, respectively.
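The fuzzy inference the abstract mentions can be sketched roughly as below. This is a minimal illustration, not the paper's system: the triangular membership functions, the calcium reference ranges, and the rule outputs are all made-up assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_mineral_status(level):
    """Fuzzify a hypothetical hair calcium level into deficient/normal/excess
    degrees, then defuzzify to one risk score by a weighted average of
    illustrative rule outputs."""
    memberships = {
        "deficient": tri(level, 0, 20, 40),
        "normal":    tri(level, 30, 55, 80),
        "excess":    tri(level, 70, 100, 200),
    }
    risk = {"deficient": 0.8, "normal": 0.1, "excess": 0.7}  # assumed rule outputs
    total = sum(memberships.values())
    if total == 0:
        return memberships, None  # level outside all defined ranges
    score = sum(memberships[k] * risk[k] for k in memberships) / total
    return memberships, score
```

A level of 55 falls entirely in the "normal" set, so the defuzzified risk equals the "normal" rule output.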


Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.780-788
    • /
    • 2008
  • Modeling hand poses and tracking its movement are one of the challenging problems in computer vision. There are two typical approaches for the reconstruction of hand poses in 3D, depending on the number of cameras from which images are captured. One is to capture images from multiple cameras or a stereo camera. The other is to capture images from a single camera. The former approach is relatively limited, because of the environmental constraints for setting up multiple cameras. In this paper we propose a method of reconstructing 3D hand poses from a 2D input image sequence captured from a single camera by means of Belief Propagation in a graphical model and recognizing a finger clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing joints of a hand, and observable nodes with the features extracted from a 2D input image sequence. To track hand poses in 3D, we use a Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract the information for each finger's motion, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers to recognize a single finger's action. We applied the proposed method to a virtual keypad system and the result showed a high recognition rate of 94.66% with 300 test data.

Comparison of realized volatilities reflecting overnight returns (장외시간 수익률을 반영한 실현변동성 추정치들의 비교)

  • Cho, Soojin;Kim, Doyeon;Shin, Dong Wan
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.1
    • /
    • pp.85-98
    • /
    • 2016
  • This study makes an empirical comparison of various realized volatilities (RVs) in terms of overnight returns. In financial asset markets, little or no trading data are available during overnight hours or holidays, causing difficulty in computing RVs for the whole span of a day. Several RVs reflecting overnight return variations are reviewed. The comparison is made for the forecast accuracy of several RVs for some financial assets: the US S&P500 index, the US NASDAQ index, the KOSPI (Korean Stock Price Index), and the foreign exchange rate of the Korean won relative to the US dollar. The RV of a day is compared with the square of the next day's log-return, which is a proxy for the integrated volatility of the day. The comparison is made by investigating the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE). Statistical inference on MAE and RMSE is made by applying the model confidence set (MCS) approach and the Diebold-Mariano test. For the three index datasets, a specific RV emerges as the best one, which addresses overnight return variations by inflating the daytime RV.
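The quantities being compared can be sketched as follows. This is a generic illustration, not the paper's estimators: the inflation factor `c` is an arbitrary stand-in for the paper's daytime-RV scaling.

```python
import math

def realized_variance(intraday_log_returns):
    """Daytime RV: sum of squared intraday log-returns."""
    return sum(r * r for r in intraday_log_returns)

def scaled_rv(intraday_log_returns, c=1.2):
    """Daytime RV inflated by a constant c to proxy whole-day variance,
    including the overnight period; c = 1.2 is purely illustrative."""
    return c * realized_variance(intraday_log_returns)

def mae(forecasts, proxies):
    """Mean Absolute Error of volatility forecasts against a proxy."""
    return sum(abs(f - p) for f, p in zip(forecasts, proxies)) / len(forecasts)

def rmse(forecasts, proxies):
    """Root Mean Square Error of volatility forecasts against a proxy."""
    return math.sqrt(sum((f - p) ** 2 for f, p in zip(forecasts, proxies))
                     / len(forecasts))
```

In the paper's setup, the proxy would be the squared next-day log-return standing in for integrated volatility.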

The Analysis of Relationship between Error Types of Word Problems and Problem Solving Process in Algebra (대수 문장제의 오류 유형과 문제 해결의 관련성 분석)

  • Kim, Jin-Ho;Kim, Kyung-Mi;Kwean, Hyuk-Jin
    • Communications of Mathematical Education
    • /
    • v.23 no.3
    • /
    • pp.599-624
    • /
    • 2009
  • The purpose of this study was to investigate the relationship between error types and Polya's problem-solving process. To do this, we selected 106 sophomore students in a middle school and gave them an algebra word problem test, with which we analyzed the students' error types in solving algebra word problems. First, we classified the students' errors into the following six types, with the following rates: "misinterpreted language" (39.7%), "distorted theorem or solution" (38.2%), "technical error" (11.8%), "unverified solution" (7.4%), "misused data" (2.9%), and "logically invalid inference" (0%). We therefore found that most of the students' errors occur in the "misinterpreted language" and "distorted theorem or solution" types. According to the analysis of the relationship between students' error types and Polya's problem-solving process, students who made errors of the "misinterpreted language" and "distorted theorem or solution" types had problems in the "understanding", "planning", and "looking back" stages. Those who made errors of the "unverified solution" type showed problems in the "planning" and "looking back" steps. Finally, errors of the "misused data" and "technical error" types were related to the "carrying out" and "looking back" steps, respectively.


Development and Application of Convergence Education about Support Vector Machine for Elementary Learners (초등 학습자를 위한 서포트 벡터 머신 융합 교육 프로그램의 개발과 적용)

  • Yuri Hwang;Namje Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.95-103
    • /
    • 2023
  • This paper proposes an artificial intelligence convergence education program for teaching the main concepts and principles of Support Vector Machines (SVM) at elementary schools. The developed program, themed on Jeju's natural environment, explains the decision boundary and margin of an SVM through the concepts of perpendicular and parallel lines from the 4th-grade mathematics curriculum. When the developed program was applied to 3rd and 5th graders, most students intuitively inferred the location of the decision boundary. The overall accuracy and the rate of reasonable inference were higher among 5th graders. However, in the self-evaluation of understanding, the average value was higher in the 3rd grade, contrary to actual understanding, because junior learners had a greater tendency to feel satisfaction and achievement. On the other hand, senior learners posed more meaningful post-class questions based on their motivation for further exploration. We aim to find effective approaches to artificial intelligence convergence education for elementary school students.
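The two SVM concepts the program teaches, the decision boundary and the margin, reduce to simple geometry. The sketch below assumes a boundary w·x + b = 0 with hand-picked (not learned) weights.

```python
import math

def decision(w, b, x):
    """Classify point x as +1 or -1 by which side of the boundary
    w.x + b = 0 it falls on."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

def margin(w, b, x):
    """Geometric (perpendicular) distance from x to the decision boundary:
    |w.x + b| / ||w||."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return abs(s) / math.sqrt(sum(wi * wi for wi in w))
```

For the vertical boundary x = 2 (w = (1, 0), b = -2), points to the right classify as +1 and points to the left as -1, matching the perpendicular-line intuition used in the program.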

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and prevention of failure through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data. When we deal with multidimensional time series data, we have difficulty in considering both characteristics of multidimensional data and characteristics of time series data. When dealing with multidimensional data, correlation between variables should be considered. Existing methods such as probability and linear base, distance base, etc. are degraded due to limitations called the curse of dimensions. In addition, time series data is preprocessed by applying sliding window technique and time series decomposition for self-correlation analysis. These techniques are the cause of increasing the dimension of data, so it is necessary to supplement them. The anomaly detection field is an old research field, and statistical methods and regression analysis were used in the early days. Currently, there are active studies to apply machine learning and artificial neural network technology to this field. Statistically based methods are difficult to apply when data is non-homogeneous, and do not detect local outliers well. The regression analysis method compares the predictive value and the actual value after learning the regression formula based on the parametric statistics and it detects abnormality. Anomaly detection using regression analysis has the disadvantage that the performance is lowered when the model is not solid and the noise or outliers of the data are included. There is a restriction that learning data with noise or outliers should be used. The autoencoder using artificial neural networks is learned to output as similar as possible to input data. It has many advantages compared to existing probability and linear model, cluster analysis, and map learning. It can be applied to data that does not satisfy probability distribution or linear assumption. 
In addition, it is possible to learn non-mapping without label data for teaching. However, there is a limitation of local outlier identification of multidimensional data in anomaly detection, and there is a problem that the dimension of data is greatly increased due to the characteristics of time series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances the performance of anomaly detection by considering local outliers and time series characteristics. First, we applied Multimodal Autoencoder (MAE) to improve the limitations of local outlier identification of multidimensional data. Multimodals are commonly used to learn different types of inputs, such as voice and image. The different modal shares the bottleneck effect of Autoencoder and it learns correlation. In addition, CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimension of data. In general, conditional input mainly uses category variables, but in this study, time was used as a condition to learn periodicity. The CMAE model proposed in this paper was verified by comparing with the Unimodal Autoencoder (UAE) and Multi-modal Autoencoder (MAE). The restoration performance of Autoencoder for 41 variables was confirmed in the proposed model and the comparison model. The restoration performance is different by variables, and the restoration is normally well operated because the loss value is small for Memory, Disk, and Network modals in all three Autoencoder models. The process modal did not show a significant difference in all three models, and the CPU modal showed excellent performance in CMAE. ROC curve was prepared for the evaluation of anomaly detection performance in the proposed model and the comparison model, and AUC, accuracy, precision, recall, and F1-score were compared. In all indicators, the performance was shown in the order of CMAE, MAE, and AE. 
Especially, the reproduction rate was 0.9828 for CMAE, which can be confirmed to detect almost most of the abnormalities. The accuracy of the model was also improved and 87.12%, and the F1-score was 0.8883, which is considered to be suitable for anomaly detection. In practical aspect, the proposed model has an additional advantage in addition to performance improvement. The use of techniques such as time series decomposition and sliding windows has the disadvantage of managing unnecessary procedures; and their dimensional increase can cause a decrease in the computational speed in inference.The proposed model has characteristics that are easy to apply to practical tasks such as inference speed and model management.
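The dimension growth the abstract attributes to sliding windows, and the conditional alternative, can be made concrete. This is an illustrative sketch, not the authors' pipeline: the cyclic hour-of-day encoding is an assumed example of a time condition.

```python
import math

def sliding_windows(series, w):
    """series: list of d-dimensional observations (tuples).
    Returns flattened windows of dimension d*w, i.e. the input
    dimension grows by a factor of w."""
    out = []
    for i in range(len(series) - w + 1):
        flat = [v for obs in series[i:i + w] for v in obs]
        out.append(flat)
    return out

def conditional_input(obs, hour_of_day):
    """Keep the d raw features and append only a 2-dimensional cyclic
    time condition, so periodicity is available without windowing."""
    angle = 2 * math.pi * hour_of_day / 24
    return list(obs) + [math.sin(angle), math.cos(angle)]
```

With d = 2 variables and window length w = 2 the windowed input has dimension 4, while the conditional input stays at d + 2 regardless of how much history the model must capture.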

Environmental Change of High Moor in Mt. Dae-Am of Korean Peninsula (대암산 고층습원의 환경변천)

  • Yoshioka, Takahito;Kang, Sang-Joon
    • Korean Journal of Ecology and Environment
    • /
    • v.38 no.1 s.110
    • /
    • pp.45-53
    • /
    • 2005
  • The environmental change of Yong-nup on Mt. Dae-Am, located in the northern part of Kangwon-Do, Korea, was assessed with carbon and nitrogen isotope analysis of peat sediments. The surface layer of the peat (0 ${\sim}$ 5 cm) was 190 year BP, and the middle layers (30 ${\sim}$ 35 cm and 50 ${\sim}$ 55 cm) were 870 year BP and 1900 year BP, respectively. The bulk sedimentation rate was estimated to be about 0.4 mm $year^{-1}$ from 0 cm to 30 cm and 0.15 mm $year^{-1}$ from 35 cm to 50 cm. The $^{14}C$ age of the bottom sediment (75 ${\sim}$ 80 cm) collected and measured in this study was about 1900 year BP, although the $^{14}C$ age of the lowest bottom sediment in Yong-nup had previously been measured as 4105 ${\pm}$ 175 year BP (GX-23200). Since the $^{14}C$ ages of the 50 ${\sim}$ 55 cm and 75 ${\sim}$ 80 cm layers were almost the same, 1890 ${\pm}$ 80 year BP (NUTA 5364) and 1850 ${\pm}$ 90 year BP (NUTA 5462) respectively, we estimate that the deep layers (55 ${\sim}$ 80 cm) in the high moor were the original forest soil. The low organic C and N contents in the deeper layers support this inference. The sediment of the 50 ${\sim}$ 55 cm layer contains much sandy material and showed very low organic content, suggesting erosion (flooding) from the surrounding area. In this context, Yong-nup, the high moor of Mt. Dae-Am, might have developed at the sampling site about 1900 years BP. The ${\delta}^{13}C$ values of organic carbon and the ${\delta}^{15}N$ values of total nitrogen in the peat sediments fluctuated with depth. The ${\delta}^{13}C$ profile may indicate that Yong-nup has experienced cycles of dry-wet and cool-warm periods during the development of the high moor, and the ${\delta}^{15}N$ profile may indicate that the nitrogen cycling in Yong-nup changed from a closed (regeneration-dependent) system to an open (rain $NO_3\;^-$ and $N_2$ fixation dependent) system during that development.

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.23-46
    • /
    • 2017
  • Although there have been cases of evaluating the value of specific companies or projects since the early 2000s, concentrated in developed countries in North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have only gradually come into use. Several online systems do exist that qualitatively evaluate a technology's grade or patent rating, such as the KIBO's 'KTRS' and the Korea Invention Promotion Association's 'SMART 3.1'. However, a web-based technology valuation system, the 'STAR-Value system', which calculates quantitative values of a subject technology for various purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially launched and is now spreading. In this study, we introduce the types of methodologies and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that values anticipated future economic income at its present worth, and the relief-from-royalty method, which calculates the present value of royalties, treating the contribution of the subject technology to the business value created as the royalty rate. We examine how the models and related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications of the technology to be evaluated, such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as technology cycle time (TCT), the sales growth rate and profitability data of similar companies or the industry sector, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if the information on the potential market size of the target technology and the market share of the commercializing entity draws on data-driven sources, or if the estimated value range of similar technologies by industry sector is provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present a highly accurate value range in real time by intelligently linking its various support modules. Beyond the explanation of the various valuation models and primary variables presented in this paper, the STAR-Value system aims to operate more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, a similar-company-based market share prediction module, and so on. The research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system that validates and applies in practice the theoretical foundations of the technology valuation field, and it is expected to be utilized in various areas of technology commercialization.
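The two valuation methods named in the abstract can be sketched in a few lines. This is an illustrative sketch only: the cash flows, royalty rate, and discount rate below are made-up example inputs, not values from the STAR-Value system.

```python
def dcf_value(cash_flows, discount_rate):
    """Discounted cash flow: present value of future yearly cash flows,
    with cash_flows[0] received one year from now."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(revenues, royalty_rate, discount_rate):
    """Relief-from-royalty: treat royalty_rate * revenue as the royalty
    the owner is 'relieved' from paying, and discount it to present value."""
    return dcf_value([r * royalty_rate for r in revenues], discount_rate)
```

For example, a single cash flow of 110 a year from now discounted at 10% is worth 100 today; relief-from-royalty simply applies the same discounting to a royalty stream derived from projected revenues.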