• Title/Summary/Keyword: Data Evaluation Model (데이터 평가 모델)


Effectiveness of Two-dose Varicella Vaccination: Bayesian Network Meta-analysis

  • Kwan Hong;Young June Choe;Young Hwa Lee;Yoonsun Yoon;Yun-Kyung Kim
    • Pediatric Infection and Vaccine / v.31 no.1 / pp.55-63 / 2024
  • Purpose: A 2-dose varicella vaccination strategy has been introduced in many countries worldwide, aiming to increase vaccine effectiveness (VE) against varicella infection. In this network meta-analysis, we aimed to provide a comprehensive evaluation and an overall estimated effect of varicella vaccination strategies via a Bayesian model. Methods: For each eligible study, we collected trial characteristics, such as 1-dose vs. 2-dose schedules, demographic characteristics, and outcomes of interest. For studies involving different doses, we aggregated the data for the same number of doses delivered into one arm. The preventive effect of 1 dose vs. 2 doses of varicella vaccine was evaluated in terms of the odds ratio (OR) and the corresponding equal-tailed 95% confidence interval (95% CI). Results: A total of 903 studies were retrieved during our literature search, and 25 interventional or observational studies were selected for the Bayesian network meta-analysis, covering a total of 49,265 observed individuals. Compared to the 0-dose control group, the ORs for all varicella infections were 0.087 (95% CI, 0.046-0.164) for 2 doses and 0.310 (95% CI, 0.198-0.484) for 1 dose, corresponding to a VE of 91.3% (95% CI, 83.6-95.4) for 2 doses and 69.0% (95% CI, 51.6-81.2) for 1 dose. Conclusions: A 2-dose vaccine strategy was able to significantly reduce the varicella burden. The effectiveness of 2-dose vaccination in reducing the risk of infection was demonstrated by sound statistical evidence, which highlights the public health need for a 2-dose vaccine recommendation.
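
As a quick arithmetic check, VE is conventionally derived from the odds ratio as VE = (1 - OR) x 100%; the sketch below reproduces the abstract's point estimates (a minimal illustration, not the authors' analysis code):

```python
# Vaccine effectiveness (VE) from an odds ratio (OR): VE = (1 - OR) * 100%.
# Point estimates reported in the abstract above.
odds_ratios = {"2-dose": 0.087, "1-dose": 0.310}

for doses, or_value in odds_ratios.items():
    ve = (1 - or_value) * 100
    print(f"{doses}: OR = {or_value:.3f} -> VE = {ve:.1f}%")
# 2-dose: OR = 0.087 -> VE = 91.3%
# 1-dose: OR = 0.310 -> VE = 69.0%
```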

A User based Collaborative Filtering Recommender System with Recommendation Quantity and Repetitive Recommendation Considerations (추천 수량과 재 추천을 고려한 사용자 기반 협업 필터링 추천 시스템)

  • Jihoi Park;Kihwan Nam
    • Information Systems Review / v.19 no.2 / pp.71-94 / 2017
  • Recommender systems reduce information overload and enhance choice quality. The technology is used in many services and industries. Previous studies did not consider recommendation quantity or repeated recommendations of an item. This study is the first to examine recommender systems while considering both recommendation quantity and repeated recommendations. Only a limited number of items can be displayed in offline stores because of physical constraints, so determining the type and number of items to display is an important decision. In this study, I suggest a user-based recommender system that can recommend the most appropriate items for each store. The model is evaluated by MAE, Precision, Recall, and F1 measure, and shows higher performance than the baseline model. I also suggest a new set of performance evaluation measures, Quantity Precision, Quantity Recall, and Quantity F1, which penalize recommendation quantities that fall short of or exceed the appropriate amount. Novelty is defined as the proportion of items in a recommendation list that consumers may not have experienced. I evaluate the new revenue creation effect of the suggested model using this novelty measure. Previous research focused on recommendations for customers online, but I expand the recommender system to cover offline stores.
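
A minimal sketch of the user-based collaborative filtering core described above (cosine similarity between user rating vectors, then a similarity-weighted average of neighbors' ratings); the toy matrix and parameter values are illustrative, not from the paper:

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def predict(R, user, item, k=2):
    """Similarity-weighted average over the k most similar users who rated the item."""
    sims = [(cosine_sim(R[user], R[u]), u)
            for u in range(len(R)) if u != user and R[u, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    num = sum(s * R[u, item] for s, u in top)
    den = sum(abs(s) for s, _ in top)
    return num / den if den else 0.0

print(predict(R, user=0, item=2))  # predicted rating of user 0 for item 2
```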

A Study on Evaluation of Visual Factor for Measuring Subjective Virtual Realization (주관적인 가상 실감화 측정 방법에 대한 시각적 요소 평가 연구)

  • Won, Myeung-Ju;Park, Sang-In;Kim, Chi-Jung;Lee, Eui-Chul;Whang, Min-Cheol
    • Science of Emotion and Sensibility / v.15 no.3 / pp.389-398 / 2012
  • Virtual worlds have pursued reality as if they actually existed. To evaluate the sense of reality in computer-simulated worlds, several subjective questionnaires containing specific independent variables have been proposed in the literature. However, these questionnaires lack the reliability and validity necessary for defining and measuring virtual realization, and few studies have investigated the effect of visual factors on the sense of reality experienced in a virtual environment. Therefore, this study aimed to reinvestigate the variables and propose a more reliable and advisable questionnaire for evaluating virtual realization, focusing on visual factors. Twenty-one questions were gleaned from the literature and subjective interviews with focus groups. Exploratory factor analysis with oblique rotation was performed on data obtained from 200 participants (100 females) after exposure to a virtual character image depicted in an extreme way. After poorly loading items were removed, the remaining items were subjected to confirmatory factor analysis on data obtained from the same participants. As a result, 3 significant factors were determined to efficiently measure virtual realization: visual presence (3 items), visual immersion (7 items), and visual interactivity (4 items). The proposed factors were verified through a subjective evaluation in which participants were asked to rate a 3D virtual eyeball model in terms of visual presence. The results indicate that the measurement method is suitable for evaluating the degree of virtual realization and that the proposed method can measure it reasonably.
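
The analysis pipeline described (exploratory factor analysis with oblique rotation, retaining 3 factors) can be sketched with the factor_analyzer package; the synthetic responses below merely stand in for the study's data:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor-analyzer

# Synthetic stand-in for the study's data: 200 participants x 21 questionnaire items.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 8, size=(200, 21)),
                         columns=[f"Q{i+1}" for i in range(21)])

# Exploratory factor analysis with an oblique (oblimin) rotation,
# retaining the 3 factors reported in the study.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns).round(2)
print(loadings)  # inspect loadings; items loading poorly on all factors are dropped
```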


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia owing to the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains, it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even given such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings. The first uses the ConvNet as a fixed feature extractor; the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still challenging. We observe that features extracted from different ConvNet layers capture different characteristics of an image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation that carries more information about the image; concatenating the three fully connected layer features yields a 9,192-dimensional representation (4096+4096+1000). However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, as a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for multiple ConvNet layer representations. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on Caltech-256, 73.1% compared to 69.2% for the FC8 layer on VOC07, and 52.2% compared to 48.7% for the FC7 layer on SUN397. We also show that our approach achieves superior performance, improving accuracy by 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
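
A minimal PyTorch/scikit-learn sketch of the pipeline the abstract describes (concatenating AlexNet's three fully connected layer activations into a 9,192-dimensional vector, then reducing it with PCA); the sample data and the linear classifier are illustrative assumptions:

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def multi_layer_features(x):
    """Concatenate the FC6 (4096), FC7 (4096) and FC8 (1000) outputs -> 9192 dims."""
    with torch.no_grad():
        h = torch.flatten(alexnet.avgpool(alexnet.features(x)), 1)
        feats = []
        for layer in alexnet.classifier:
            h = layer(h)
            if isinstance(layer, torch.nn.Linear):  # capture each linear layer output
                feats.append(h)
    return torch.cat(feats, dim=1)  # shape: (batch, 9192)

# Placeholder batch of preprocessed images and labels.
X = torch.randn(8, 3, 224, 224)
y = [0, 1, 0, 1, 0, 1, 0, 1]

features = multi_layer_features(X).numpy()
reduced = PCA(n_components=4).fit_transform(features)  # keep salient components
clf = LinearSVC().fit(reduced, y)                      # train on reduced features
```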

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions: Part B / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method that utilizes only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent complex semantic structures in given contexts such as text passages. We construct linguistic semantic knowledge using the two techniques and apply this knowledge to target word selection in English-Korean machine translation, utilizing grammatical relationships stored in a dictionary. We use the k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection, estimating the distance between instances based on these models. In experiments, we use TREC AP news data to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, we show, using correlation analysis, the relationship between accuracy and two important factors: the dimensionality of the latent space and the k value in k-NN learning.
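
The LSA step, a latent semantic space built from raw text by truncated SVD of a term-document matrix, can be sketched as follows; the corpus and dimensionality are placeholders, not the TREC/WSJ setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpus standing in for the raw (untagged) news text.
docs = [
    "the bank approved the loan for the new branch",
    "the river bank was flooded after heavy rain",
    "interest rates at the bank rose sharply",
]

# LSA: tf-idf term-document matrix, then truncated SVD into a latent space.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vecs = lsa.fit_transform(X)

# Contexts close in the latent space are semantically related; this is the
# signal used to choose among candidate target words for an ambiguous source word.
print(cosine_similarity(doc_vecs[0:1], doc_vecs))
```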

Analysis of Systems Thinking Level of Pre-service Teachers about Carbon Cycle in Earth Systems using Rubrics of Evaluating Systems Thinking (시스템 사고 평가 루브릭을 활용한 예비교사들의 지구 시스템 내 탄소 순환에 대한 시스템 사고 수준 분석)

  • Park, Kyungsuk;Lee, Hyundong;Lee, Hyonyong;Jeon, Jaedon
    • Journal of The Korean Association For Science Education / v.39 no.5 / pp.599-611 / 2019
  • The purpose of this study is to analyze the systems thinking level of pre-service teachers using rubrics for evaluating systems thinking. For this purpose, a systems thinking level model applicable to education or science education was selected through literature analysis. Eight pre-service teachers' systems thinking was investigated with the systems thinking analysis tool used in domestic research. The systems thinking presented by the pre-service teachers was transformed into box-type causal maps following Sibley et al. (2007), and two researchers analyzed it using the evaluation rubrics. For data analysis, quantitative analysis was performed through correlation analysis using SPSS; in addition, a qualitative analysis of the box-type causal maps was conducted, and its consistency with the quantitative results was verified. The results indicated that the 5-point Likert systems thinking instrument and the rubric scores were highly correlated, with a Pearson product-moment coefficient of .762 (p < .05). For the hierarchy of systems thinking levels, the STH model showed very high correlations, with Pearson coefficients of .722-.791, while the 4-step model showed coefficients of .381-.730. The qualitative analysis suggested that concepts belonging to the lower levels of systems thinking were generally included, whereas the higher the level, the fewer concepts were presented properly. In conclusion, the results support the view that systems thinking levels are clearly hierarchical. Based on these results, it is necessary to develop a systems thinking level model applicable to science education and to develop and validate items that can measure systems thinking levels.
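
The reported agreement between the Likert instrument and the rubric scores is a plain Pearson correlation; here is a minimal scipy sketch with placeholder scores:

```python
from scipy.stats import pearsonr

# Placeholder scores for the eight pre-service teachers.
likert_scores = [3.2, 4.1, 2.8, 3.9, 4.4, 3.0, 3.6, 4.0]  # 5-point Likert instrument
rubric_scores = [10, 15, 8, 13, 16, 9, 12, 14]             # rubric-based evaluation

r, p = pearsonr(likert_scores, rubric_scores)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")  # the study reports r = .762 (p < .05)
```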

Protective Effect of Enzymatically Modified Stevia on C2C12 Cell-based Model of Dexamethasone-induced Muscle Atrophy (덱사메타손으로 유도된 근위축 C2C12 모델에서 효소처리스테비아의 보호 효과)

  • Geon Oh;Sun-Il Choi;Xionggao Han;Xiao Men;Se-Jeong Lee;Ji-Hyun Im;Ho-Seong Lee;Hyeong-Dong Jung;Moon Jin La;Min Hee Kwon;Ok-Hwan Lee
    • Journal of Food Hygiene and Safety / v.38 no.2 / pp.69-78 / 2023
  • This study aimed to investigate the protective effect of enzymatically modified stevia (EMS) in a C2C12 cell-based model of dexamethasone (DEX)-induced muscle atrophy, to provide baseline data for utilizing EMS in functional health products. C2C12 cells with DEX-induced muscle atrophy were treated with EMS (10, 50, and 100 µg/mL) for 24 h. C2C12 cells were treated with EMS and DEX to test their effects on cell viability and myotube formation (myotube diameter and fusion index) and to analyze the expression of muscle-strengthening and muscle-degrading protein markers. Schisandra chinensis extract, a common functional ingredient, was used as a positive control. EMS did not show any cytotoxic effect at any treatment concentration. Moreover, it exerted protective effects in the C2C12 cell-based model of DEX-induced muscle atrophy at all concentrations. In addition, the positive effect of EMS on myotube formation was confirmed by measuring and comparing the fusion index and myotube diameter against those of myotubes treated with DEX alone. EMS treatment reduced the expression of the muscle cell degradation-related proteins Fbx32 and MuRF1 and increased the expression of the muscle strengthening- and synthesis-related markers SIRT1 and pAkt/Akt. Thus, EMS is a potential ingredient for developing functional health foods and should be further evaluated in preclinical models.

Development of a prototype simulator for dental education (치의학 교육을 위한 프로토타입 시뮬레이터의 개발)

  • Mi-El Kim;Jaehoon Sim;Aein Mon;Myung-Joo Kim;Young-Seok Park;Ho-Beom Kwon;Jaeheung Park
    • The Journal of Korean Academy of Prosthodontics / v.61 no.4 / pp.257-267 / 2023
  • Purpose. The purpose of this study was to fabricate a prototype robotic simulator for dental education, to test whether it could simulate mandibular movements, and to assess the possibility of the simulator responding to stimuli during dental practice. Materials and methods. A virtual simulator model was developed based on segmentation of the hard tissues in cone-beam computed tomography (CBCT) data. The simulator frame was 3D printed in polylactic acid (PLA), and dentiforms and silicone face skin were added. Servo actuators were used to control the movements of the simulator, and the simulator's response to dental stimuli was driven by pressure and water level sensors. A water level test was performed to determine the threshold of the water level sensor. The mandibular movements and mandibular range of motion of the simulator were tested through computer simulation and on the physical model. Results. The prototype robotic simulator consisted of an operational unit, an upper body with an electric device, and a head with a temporomandibular joint (TMJ) and dentiforms. The TMJ of the simulator provided two degrees of freedom, implementing rotational and translational movements. In the water level test, the threshold of the water level sensor was 10.35 ml. The mandibular range of motion of the simulator was 50 mm in both the computer simulation and the physical model. Conclusion. Although further advancements are still required to improve its efficiency and stability, the upper-body prototype simulator has the potential to be useful in dental practice education.
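
A hypothetical sketch of how a sensor-threshold response like the one described might be wired up in control code; the sensor reading and the triggered reaction are illustrative assumptions, not the authors' implementation:

```python
WATER_LEVEL_THRESHOLD_ML = 10.35  # threshold measured in the study's water level test

def read_water_level_ml() -> float:
    """Placeholder for the simulator's water level sensor reading."""
    return 11.0  # hypothetical measurement

def update_simulator_response():
    # Trigger a response (e.g., a swallowing or gag reaction) once the
    # accumulated water exceeds the measured threshold.
    if read_water_level_ml() > WATER_LEVEL_THRESHOLD_ML:
        print("Threshold exceeded: trigger simulator response")

update_simulator_response()
```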

Region of Interest Extraction and Bilinear Interpolation Application for Preprocessing of Lipreading Systems (입 모양 인식 시스템 전처리를 위한 관심 영역 추출과 이중 선형 보간법 적용)

  • Jae Hyeok Han;Yong Ki Kim;Mi Hye Kim
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.189-198 / 2024
  • Lipreading is an important component of speech recognition, and several studies have attempted to improve the recognition performance of lipreading systems. Recent studies have typically done so by modifying the model architecture; unlike this prior work, we aim to improve recognition performance without any change to the model architecture. To that end, we draw on the cues humans use when lipreading: in addition to the lip region, the existing region of interest of lipreading systems, we set other regions such as the chin and cheeks as regions of interest, and we compare the recognition rate of each to identify the best-performing region of interest. In addition, assuming that differences in normalization results caused by the choice of interpolation method when normalizing the size of the region of interest affect recognition performance, we interpolate the same region of interest using nearest neighbor, bilinear, and bicubic interpolation and compare the recognition rates to identify the best-performing interpolation method. Each region of interest was detected by training an object detection neural network, and dynamic time warping templates were generated by normalizing each region of interest, extracting and combining features, and mapping the combined features into a low-dimensional space via dimensionality reduction. The recognition rate was evaluated by comparing the distances between the generated dynamic time warping templates and the data mapped into the low-dimensional space. In the comparison of regions of interest, the region containing only the lips achieved an average recognition rate of 97.36%, 3.44% higher than the 93.92% average of the previous study; in the comparison of interpolation methods, bilinear interpolation achieved 97.36%, 14.65% higher than nearest neighbor interpolation and 5.55% higher than bicubic interpolation. The code used in this study is available at https://github.com/haraisi2/Lipreading-Systems.
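
The interpolation comparison maps directly onto OpenCV's resize flags; here is a minimal sketch with a synthetic ROI and an illustrative target size:

```python
import numpy as np
import cv2

# Synthetic stand-in for a cropped region-of-interest (e.g., the lip region).
roi = np.random.randint(0, 256, size=(40, 60, 3), dtype=np.uint8)

# Normalize the ROI to a fixed size with the three interpolation methods compared;
# the 96x96 target size is illustrative, not the paper's setting.
TARGET = (96, 96)
nearest  = cv2.resize(roi, TARGET, interpolation=cv2.INTER_NEAREST)
bilinear = cv2.resize(roi, TARGET, interpolation=cv2.INTER_LINEAR)  # best performer in the study
bicubic  = cv2.resize(roi, TARGET, interpolation=cv2.INTER_CUBIC)
```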

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships among regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures tailored to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, finding that different emotions are induced by different colors, and some deep learning studies have applied color information to image sentiment classification. Using an image's color information in addition to the image itself improves the accuracy of classifying image sentiment over training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the output after the model classifies an image's sentiment. Both methods improve accuracy by modifying the result value based on statistics over the picture's colors. In the first method, the two-color combinations most prevalent across all training data are found; at test time, the two-color combination most prevalent in each test image is found, and the result values are corrected according to the color combination distribution. The second method weights the result value obtained after the model classifies an image's sentiment using an expression based on the log and exponential functions. Emotion6, labeled with six emotions, and Artphoto, labeled with eight categories, were used as image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors most prevalent in an image are identified; their RGB coordinates are then compared with the RGB coordinates of the 16 reference colors, i.e., each is converted to the closest reference color. If combinations of three or more colors were used, too many combinations would occur and the distribution would become scattered, so each combination would have little influence on the result value. To avoid this problem, two-color combinations were found and used to weight the model's output. Before training, the most prevalent color combinations were found for all training images, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During testing, the two-color combination most prevalent in each test image is found; we then check how that combination is distributed in the training data and correct the result accordingly. We devised several equations to weight the model's result value based on the colors extracted as described above.
The data set was randomly split 80:20, and the model was verified using the 20% portion as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, so the model was trained five times with different validation sets. Finally, performance was checked on the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, the experiment was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the information extracted from color properties was used together with the CNN architecture than when the CNN architecture was used alone.
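
The color-extraction step (K-means over RGB pixels, then snapping each cluster center to the nearest reference color) can be sketched as follows; the reference RGB values are conventional approximations, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

# A few of the 16 reference colors (RGB); values are conventional approximations.
REFERENCE = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "white": (255, 255, 255),
    "black": (0, 0, 0), "gray": (128, 128, 128),
}

def dominant_colors(image_rgb, n_clusters=7):
    """K-means over pixels, mapping each cluster center to its nearest reference color."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    centers = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels).cluster_centers_
    return [min(REFERENCE, key=lambda k: np.linalg.norm(c - np.array(REFERENCE[k])))
            for c in centers]

image = np.random.randint(0, 256, size=(64, 64, 3))  # placeholder image
print(dominant_colors(image))  # the two most frequent names form the color combination
```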