• Title/Summary/Keyword: accuracy of index


Study on the Application of Ultrasound Traits as Selection Trait in Hanwoo (한우 선발형질로써 초음파 형질의 활용방안 연구)

  • Choi, Tae Jeong;Choy, Yun Ho;Park, Byoungho;Cho, Kwang Hyun;Alam, M;Kang, Ha Yeon;Lee, Seung Soo;Lee, Jae Gu
    • Journal of agriculture & life science / v.51 no.2 / pp.117-126 / 2017
  • Hanwoo young bulls are selected based on a performance test using weight at 12 months and a pedigree index comprising marbling score. The pedigree index was based not on progeny-tested data but on the breeding values of proven bulls, resulting in lower accuracy. Progeny testing of young bulls was carried out either on farms or at the test station, and farm-tested data were difficult to compare with test-station data because farm-tested bulls were slaughtered at different ages. Therefore, this study considered a different age at slaughter for the respective records on ultrasound traits. Records on body weight at 12 months, ultrasound measures at 12 and 24 months (uIMF, uEMA, uBFT, and uRFT), and carcass traits (CWT, EMA, BFT, and MS) were collected from steers and bulls of the Hanwoo national improvement scheme between 2008 and 2013. Fixed effects of batch, test date, test station, measuring personnel, judging personnel, and a linear covariate of weight at measurement were fitted in the animal models for ultrasound traits. Heritability estimates of the ultrasound traits at 12 and 24 months ranged over 0.21-0.43 and 0.32-0.47, respectively. Genetic correlations between ultrasound traits at 12 and 24 months and the corresponding carcass traits were 0.52-0.75 and 0.86-0.89, respectively.
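The heritability estimates above come from variance components fitted in an animal model. As a minimal sketch of the underlying relationship (not the authors' estimation pipeline, and with hypothetical variance values), narrow-sense heritability is the ratio of additive genetic variance to total phenotypic variance:

```python
# Narrow-sense heritability: h2 = var_a / (var_a + var_e),
# i.e. additive genetic variance over total phenotypic variance.
# The variance components below are hypothetical illustrations,
# not values estimated in the paper.

def heritability(var_additive: float, var_residual: float) -> float:
    """Return h^2 given additive-genetic and residual variance components."""
    return var_additive / (var_additive + var_residual)

# Example: a trait whose additive variance is 0.3 out of a total of 1.0
h2 = heritability(0.3, 0.7)
print(round(h2, 2))  # 0.3
```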

A New Method For Measuring Acupoint Pigmentation After Cupping Using Cross Polarization (교차편광 촬영술(Cross Polarization Photographic Technique)를 이용한 부항요법의 배수혈 피부 색소 침착 변화 측정 평가)

  • Kim, Soo-Byeong;Jung, Byungjo;Shin, Tae-Min;Lee, Yong-Heum
    • Korean Journal of Acupuncture / v.30 no.4 / pp.252-263 / 2013
  • Objectives : Skin color deformation induced by cupping has been widely used as a diagnostic parameter in Traditional Korean Medicine (TKM). Deformations such as ecchymoses and purpura are induced by the local vacuum in a suction cup. Since existing studies have relied on visual diagnosis, a quantitative measurement method is needed. Methods : We analyzed cross-polarization photographic images to assess changes in skin color deformation. Skin color variation was analyzed in CIE L*a*b* space and with the skin erythema index (E.I.). Meridian theory in TKM indicates that the condition of the primary internal organs is closely related to skin color deformation at specific acupoints. Before conducting such studies, it is necessary to evaluate whether skin color deformation is influenced by muscle condition. Hence, we applied cupping at BL13, BL15, BL18, BL20, and BL23 on the Bladder Meridian (BL) and measured blood lactate at every acupoint. Results : We confirmed high measurement accuracy of the system and observed diverse skin color deformations. Moreover, we confirmed that L*, a*, and E.I. had not changed after 40 minutes (p>0.05). Blood lactate levels were distributed differently across sites, and blood lactate level and skin color deformation at each site were independent of each other. Conclusions : The negative pressure produced by the suction cup induces a reduction in the volumetric fraction of melanosomes and a subsequent reduction in epidermal thickness. The relationship between variations in tissue and skin properties and the degree of skin color deformation must be investigated before considering the relationship between internal organ dysfunction and skin color deformation.
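Color change in L*a*b* space is conventionally quantified as a Euclidean distance (the CIE76 ΔE formula). The sketch below illustrates that standard formula, not the authors' exact analysis pipeline, and the sample colors are hypothetical:

```python
import math

# CIE76 color difference between two CIE L*a*b* colors:
#   Delta E = sqrt((L1-L2)^2 + (a1-a2)^2 + (b1-b2)^2)
# The sample colors are hypothetical, not measurements from the paper.

def delta_e(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

before = (62.0, 14.0, 18.0)   # skin patch before cupping (hypothetical)
after = (58.0, 22.0, 16.0)    # same patch after cupping (hypothetical)
print(round(delta_e(before, after), 2))  # 9.17
```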

The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models (도메인 특수성이 도메인 특화 사전학습 언어모델의 성능에 미치는 영향)

  • Han, Minah;Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.251-273 / 2022
  • Recently, research on applying deep learning to text analysis has steadily continued. In particular, studies have actively sought to understand the meaning of words and to perform tasks such as summarization and sentiment classification through pre-trained language models that learn from large datasets. However, existing pre-trained language models are limited in that they do not understand specific domains well. Therefore, research has recently shifted toward creating language models specialized for particular domains. Domain-specific pre-trained language models understand the knowledge of a particular domain better and show performance improvements on various tasks in that field. However, domain-specific further pre-training is expensive because corpus data of the target domain must be acquired, and many cases have been reported in which the performance improvement after further pre-training is insignificant. It is therefore difficult to decide whether to develop a domain-specific pre-trained language model when it is unclear whether performance will improve substantially. In this paper, we present a way to check the expected performance improvement from further pre-training in a domain before actually performing it. Specifically, after selecting three domains, we measured the increase in classification accuracy achieved through further pre-training in each domain. We also developed and present a new indicator that estimates the specificity of a domain based on the normalized frequency of the keywords used in it. Finally, we conducted classification using a general pre-trained language model and a domain-specific pre-trained language model for each of the three domains. As a result, we confirmed that the higher the domain specificity index, the greater the performance improvement through further pre-training.
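The abstract does not give the exact form of the specificity indicator, so the sketch below is only one plausible formulation of the idea it describes: keywords that are frequent in the domain corpus but rare in a general corpus raise the score. All names and corpora here are hypothetical:

```python
# Hedged sketch of a domain-specificity score based on normalized
# keyword frequencies. This is an illustrative formulation, not the
# paper's actual indicator.
from collections import Counter

def normalized_freq(tokens):
    """Map each token to its relative frequency in the token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def specificity_index(domain_tokens, general_tokens):
    """Mean excess of domain keyword frequency over general-corpus frequency."""
    dom = normalized_freq(domain_tokens)
    gen = normalized_freq(general_tokens)
    excess = [max(f - gen.get(w, 0.0), 0.0) for w, f in dom.items()]
    return sum(excess) / len(excess)

# Hypothetical medical-domain vs. general-language token streams
domain = "ecg arrhythmia ecg qrs patient ecg".split()
general = "the patient went to the hospital".split()
print(specificity_index(domain, general) > 0)  # True
```

A higher score under this sketch means the domain vocabulary diverges more from general usage, which is the intuition the paper links to larger gains from further pre-training.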

Comparison for the Optimal Pressure between Manual CPAP and APAP Titration with Obstructive Sleep Apnea Patients (한국인 폐쇄성 수면 무호흡 환자의 적정 양압을 위한 수동화 양압 측정법과 자동화 양압 측정법의 비교)

  • Kim, Dae Jin;Choi, Byoung Geol;Cho, Jae Wook;Mun, Sue Jean;Lee, Min Woo;Kim, Hyun-Woo
    • Korean Journal of Clinical Laboratory Science / v.51 no.2 / pp.191-197 / 2019
  • Although auto-adjusting positive airway pressure (APAP) titration at home has several advantages over CPAP titration in terms of convenience and time saving, there are still concerns as to whether it shows accuracy comparable to laboratory-based polysomnography (PSG) with CPAP titration. To obtain more evidence supporting home-based auto-titration, APAP titration was performed at home for patients who presented with OSA on laboratory-based diagnostic PSG followed by CPAP titration. A total of 79 patients were included in the study. They all underwent split-night PSG with CPAP titration, followed by APAP titration for more than 7 days, and patients with successful titration in both settings were selected. The optimal pressure and apnea-hypopnea index (AHI) of CPAP and APAP titration were compared. The optimal pressures for CPAP and APAP titration were 7.0±1.8 cmH2O and 7.6±1.6 cmH2O (P<0.001), whereas the corresponding AHIs were 1.3±1.5/h and 3.0±1.7/h (P<0.001). The achievement rates of optimal pressure for CPAP and APAP titration were 96.2% and 94.9% (r=-0.045, P=0.688), respectively. The results of this study did not differ meaningfully with regard to the optimal pressure between CPAP and APAP titration. Overall, CPAP or APAP titration should be chosen depending on the required situation.
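The study's agreement statistics rest on paired comparisons of the two titration pressures. As a minimal sketch of the Pearson correlation used for such paired data (the pressure pairs below are hypothetical, not the study's measurements):

```python
import math

# Pearson's r for paired CPAP vs. APAP optimal pressures.
# The pressure pairs below are hypothetical illustrations.

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cpap = [6.0, 7.0, 8.0, 9.0]   # hypothetical CPAP optimal pressures (cmH2O)
apap = [6.5, 7.4, 8.6, 9.5]   # hypothetical APAP optimal pressures (cmH2O)
print(round(pearson_r(cpap, apap), 3))  # 0.998
```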

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with an MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is constituted by 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 sample observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting, although the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. Profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH +526.4%; MLE-based asymmetric E-GARCH shows -72% and SVR-based asymmetric E-GARCH +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% and SVR-based asymmetric GJR-GARCH +126.3%. The linear kernel function shows higher trading returns than the radial kernel. The best performance of SVR-based IVTS is +526.4%, versus +150.2% for MLE-based IVTS. SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We also do not consider costs incurred in the trading process, including brokerage commissions and slippage costs.
IVTS trading performance is further unrealistic in that we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
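As a minimal sketch of the GARCH(1,1) conditional-variance recursion that both estimation approaches target (the parameter values below are hypothetical, fixed constants rather than MLE- or SVR-estimated values):

```python
# GARCH(1,1) conditional-variance recursion:
#   sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
# Parameters below are hypothetical; the paper estimates them by
# MLE and by support vector regression.

def garch_variance(returns, omega=0.00001, alpha=0.08, beta=0.90):
    """Return the conditional-variance series for a return series."""
    # Start from the unconditional variance omega / (1 - alpha - beta)
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

rets = [0.01, -0.02, 0.015, -0.005]   # hypothetical daily returns
vols = garch_variance(rets)
print(len(vols) == len(rets))  # True
```

Because alpha + beta < 1 here, the recursion is stationary and the variance forecast mean-reverts toward the unconditional level, which is the behavior exploited by the volatility trading rules above.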

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images or voices, have been widely applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes applying a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs are strong at interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named 'CNN-FG' (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn as a 40×40-pixel image, with each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained using the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer and a 2×2 max-pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the Softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on them can be effective in terms of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
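Step 1 of CNN-FG, slicing the series into 5-day intervals before rendering each as a 40×40 graph image, can be sketched as follows (the graph-rendering and CNN stages are omitted, and the price series is hypothetical):

```python
# Sketch of step 1 of CNN-FG: slicing a daily series into
# non-overlapping 5-day windows, each of which would then be rendered
# as a 40x40 RGB graph image for the CNN. Rendering is omitted here.

def five_day_windows(series, width=5):
    """Split a series into consecutive non-overlapping windows."""
    return [series[i:i + width]
            for i in range(0, len(series) - width + 1, width)]

prices = list(range(20))           # 20 hypothetical daily values
windows = five_day_windows(prices)
print(len(windows), len(windows[0]))  # 4 5
```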

The Photography as Technological Aesthetics (데크놀로지 미학으로서의 사진)

  • Jin, Dong-Sun
    • Journal of Science of Art and Design / v.11 / pp.221-249 / 2007
  • Today, photography faces a crisis of identity and an ontological dilemma arising from the digital imaging process in its new technological form. To say that the traditional photographic medium, which has changed the way we view the world and ourselves, needs rethinking is perhaps an understatement: photography has transformed our essential understanding of reality. Photographic images are no longer regarded as true automatic recordings, innocent evidence, or a mirror of reality. Rather, photography constructs the world for our entertainment, helping to create the comforting illusions by which we live. The recognition that photographs are constructions rather than reflections of reality is the basis of actual practice within the contemporary photographic world, and it is a shock. This thesis aims to examine the problems of photographic identity and the ontological crisis brought about by digital photographic imagery, which allows reproduction in the era of electronic simulations. Photography loses the special aesthetic status it held as true information and exclusive evidence on traditional film and paper, where it appeared both as technological accuracy and as a medium-specific aesthetic. As a result, photography faces two crises: one of photographic ontology (the introduction of computerized digital images) and the other of photographic epistemology (broader changes in ethics, knowledge, and culture). Taken together, these crises apparently threaten us with the death of photography, with the 'end' of photography and the culture it sustains. The meaning of this thesis is to look into the dilemma of photography's ontology and epistemology, especially the automatic index and digital codes, from the perspective of the medium's origin, meaning, and identity as a technological medium.
Thus, in particular, the thesis focuses on the presence of analog imagery, rooted in the material world, and the presence of digital imagery, rooted in the cultural situations of our society. It also examines how the main issues in the history of photography have concentrated on ontological arguments since the discovery of photography in 1839. Photography has never been a single static technological form; rather, its nearly two centuries of development have been marked by numerous competing technological innovations and self-revolutions on both sides. This thesis examines recent accounts of photography by analyzing the medium's concept, meaning, and identity between film-based and digital-based images from the aspects of photographic ontology and epistemology. The structure of the thesis is fairly straightforward: it examines what appear to be two opposing views of photographic conditions and ontological situations, contrasting accounts that locate the value of photography in its fundamental characteristics as a medium. It also seeks a possible solution to the dilemma of photographic ontology by tracing the medium's origins from the early years of the nineteenth century to the questions now raised about the different (analog/digital) meanings of photography. Finally, this thesis concludes that the photographic ontological crisis reflects a paradoxical dynamic structure, unresolved since the origins of the medium itself. Moreover, photography does not have a single ontological identity, nor can it be understood as having a static identity or singular status within the dynamic field of technologies, practices, and images.


Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.33-49 / 2018
  • The impact of a TV drama's success on ratings and channel promotion is very high, and its cultural and business impact has been demonstrated through the Korean Wave. Therefore, early prediction of a blockbuster TV drama is very important from the strategic perspective of the media industry. Previous studies have tried to predict the audience ratings and success of dramas with various methods, but most have made simple predictions using intuitive factors such as the main actor and time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one that actual broadcasting companies can use. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels for 10 years, from 2003 to 2012. From these data, we selected the most highly ranked and the least highly ranked 45 dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, which we term similarity (the sum of distances). Through this similarity measure, we predicted the success of dramas from the viewers' initial viewing-time pattern distribution over episodes 1-5. To confirm that the model is not overly sensitive to the measurement method, various distance measures were applied and the model was checked for robustness. Once the model was established, we refined it further using a grid search.
Furthermore, when a new drama is broadcast, we classified viewers who watched more than 70% of the total airtime as "passionate viewers" and compared the passionate-viewer percentages of the most highly ranked and least highly ranked dramas, so that the potential of a blockbuster TV mini-series can be assessed. We find that the initial viewing-time pattern is the key factor for predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy from the initial viewing-time pattern analysis. This paper shows a high prediction rate while suggesting an audience-rating method different from existing ones. Currently, broadcasters rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to rising production costs, a long-term recession, aggressive investment by comprehensive programming channels, and large corporations; everyone is in a financially difficult situation. The basic revenue model of broadcasters is advertising, and the execution of advertising is based on audience ratings as a basic index. The drama market carries demand uncertainty owing to the nature of the product, while dramas contribute substantially to the financial success of a broadcaster's content, so the risk of failure must be minimized. By analyzing the distribution of initial viewing time, the model can provide practical help in establishing a response strategy (scheduling, marketing, story changes, etc.) for the company concerned. We also found that audience behavior is crucial to the success of a program, and we define a viewing measure that captures how enthusiastically a program is watched.
By calculating the loyalty of these passionate viewers, we can successfully predict the success of a program. This way of calculating loyalty can also be applied to various platforms, and to marketing activities such as highlights, script previews, behind-the-scenes videos, characters, games, and other marketing projects.
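The viewing-time distance at the core of the model can be sketched as a Euclidean distance between two viewing-time pattern vectors, e.g. the share of viewers falling in each viewing-time bin. The vectors below are hypothetical, and the correlation-based variant is omitted:

```python
import math

# Sketch of the viewing-time distance in the drama-prediction model:
# Euclidean distance between two viewing-time pattern vectors.
# The pattern vectors below are hypothetical.

def euclidean_distance(p, q):
    """Euclidean distance between two equal-length pattern vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

hit_pattern = [0.05, 0.10, 0.15, 0.30, 0.40]   # hypothetical hit-drama pattern
new_drama = [0.06, 0.09, 0.16, 0.29, 0.40]     # hypothetical new-drama pattern
print(round(euclidean_distance(hit_pattern, new_drama), 2))  # 0.02
```

A small distance to known hit-drama patterns would then count toward classifying the new drama as a likely blockbuster.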

Quality Evaluation through Inter-Comparison of Satellite Cloud Detection Products in East Asia (동아시아 지역의 위성 구름탐지 산출물 상호 비교를 통한 품질 평가)

  • Byeon, Yugyeong;Choi, Sungwon;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Sim, Suyoung;Woo, Jongho;Jeon, Uujin;Han, Kyung-soo
    • Korean Journal of Remote Sensing / v.37 no.6_2 / pp.1829-1836 / 2021
  • Cloud detection determines the presence or absence of clouds in each pixel of a satellite image and is an important factor affecting the utility and accuracy of satellite products. In this study, among the satellites of various agencies that provide cloud detection data, we perform quantitative and qualitative comparative analyses of the differences between the cloud detection products of GK-2A/AMI, Terra/MODIS, and Suomi-NPP/VIIRS. In the quantitative comparison, the Proportion Correct (PC) index values in January were 74.16% for GK-2A & MODIS and 75.39% for GK-2A & VIIRS, while in April they were 87.35% for GK-2A & MODIS and 87.71% for GK-2A & VIIRS, with little difference between satellite pairs. In the qualitative comparison against RGB images, the April results detected clouds better than the January results, consistent with the quantitative comparison. However, where thin clouds or snow cover were present, there were some differences among the satellites' cloud detection results.
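The Proportion Correct index used above is the fraction of pixels on which two binary cloud masks agree. A minimal sketch with hypothetical pixel labels:

```python
# Proportion Correct (PC) for binary cloud-mask agreement:
#   PC = (both cloudy + both clear) / all compared pixels
# The pixel labels below are hypothetical (1 = cloud, 0 = clear).

def proportion_correct(ref, test):
    """Fraction of pixels where two cloud masks agree."""
    agree = sum(1 for r, t in zip(ref, test) if r == t)
    return agree / len(ref)

ref_mask = [1, 1, 0, 0, 1, 0, 1, 0]
test_mask = [1, 0, 0, 0, 1, 1, 1, 0]
print(proportion_correct(ref_mask, test_mask))  # 0.75
```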

Comparative assessment and uncertainty analysis of ensemble-based hydrologic data assimilation using airGRdatassim (airGRdatassim을 이용한 앙상블 기반 수문자료동화 기법의 비교 및 불확실성 평가)

  • Lee, Garim;Lee, Songhee;Kim, Bomi;Woo, Dong Kook;Noh, Seong Jin
    • Journal of Korea Water Resources Association / v.55 no.10 / pp.761-774 / 2022
  • Accurate hydrologic prediction is essential for analyzing the effects of drought, flood, and climate change on flow rates, water quality, and ecosystems. Disentangling the uncertainty of a hydrological model is one of the important issues in hydrology and water resources research. Hydrologic data assimilation (DA), a technique that updates the states or parameters of a hydrological model to produce the most likely estimates of its initial conditions, is one way to minimize uncertainty in hydrological simulations and improve predictive accuracy. In this study, two ensemble-based sequential DA techniques, the ensemble Kalman filter and the particle filter, are comparatively analyzed for daily discharge simulation at the Yongdam catchment using airGRdatassim. The results showed that Kling-Gupta efficiency (KGE) improved from 0.799 in the open-loop simulation to 0.826 with the ensemble Kalman filter and 0.933 with the particle filter. In addition, we analyzed the effects of hyper-parameters of the data assimilation methods, such as the precipitation and potential evaporation forcing error parameters and the selection of perturbed and updated states. Under the forcing error conditions, the particle filter was superior to the ensemble Kalman filter in terms of the KGE index, and the optimal forcing noise was relatively smaller for the particle filter. Moreover, performance improved as more state variables were included in the updating step, implying that an adequate selection of updated states can itself be treated as a hyper-parameter. The simulation experiments in this study indicate that DA hyper-parameters need to be carefully optimized to exploit the potential of DA methods.