• Title/Summary/Keyword: random sets

Development of Decision Tree Software and Protein Profiling using Surface Enhanced Laser Desorption/Ionization - Time of Flight - Mass Spectrometry (SELDI-TOF-MS) in Papillary Thyroid Cancer (의사결정트리 프로그램 개발 및 갑상선유두암에서 질량분석법을 이용한 단백질 패턴 분석)

  • Yoon, Joon-Kee; Lee, Jun; An, Young-Sil; Park, Bok-Nam; Yoon, Seok-Nam
    • Nuclear Medicine and Molecular Imaging / v.41 no.4 / pp.299-308 / 2007
  • Purpose: The aim of this study was to develop bioinformatics software and to test it on serum samples of papillary thyroid cancer using mass spectrometry (SELDI-TOF-MS). Materials and Methods: The 'Protein analysis' software, which performs decision tree analysis, was developed by customizing C4.5. Sixty-one serum samples from 27 papillary thyroid cancer patients, 17 autoimmune thyroiditis patients, and 17 controls were applied to two types of protein chips, CM10 (weak cation exchange) and IMAC3 (metal binding - Cu). Mass spectrometry was performed to obtain the protein expression profiles. Decision trees were generated with the 'Protein analysis' software, which automatically detected biomarker candidates. Validation analysis was performed for the CM10 chip by random sampling. Results: Decision tree software that can perform training and validation on profiling data was developed. For the CM10 and IMAC3 chips, 23 of 113 and 8 of 41 protein peaks, respectively, differed significantly among the 3 groups (p<0.05). The decision tree correctly classified the 3 groups with an error rate of 3.3% for CM10 and 2.0% for IMAC3, and 4 and 7 biomarker candidates were detected, respectively. In 2-group comparisons, all cancer samples were correctly discriminated from non-cancer samples (error rate = 0%), by a single node for CM10 and by multiple nodes for IMAC3. Validation on 5 test sets showed that SELDI-TOF-MS and the decision tree correctly differentiated cancers from non-cancers (54/55, 98%), while predictability was moderate for the 3-group classification (36/55, 65%). Conclusion: Our in-house software successfully built decision trees and detected biomarker candidates, and could therefore be useful for biomarker discovery and clinical follow-up of papillary thyroid cancer.
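The classification step described above can be illustrated with a short sketch. This is not the authors' customized C4.5 'Protein analysis' software; it is a minimal Python/scikit-learn approximation in which entropy-based splits stand in for C4.5's information-gain criterion, and the peak matrix and group labels are placeholder assumptions.

```python
# Minimal sketch: train/validate a decision tree on SELDI-TOF-MS peak intensities.
# Placeholder data only; not the authors' C4.5-based in-house software.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((61, 113))                # 61 sera x 113 CM10 peak intensities (synthetic)
y = np.array([0]*27 + [1]*17 + [2]*17)   # 0=cancer, 1=thyroiditis, 2=control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# criterion="entropy" uses information gain, the splitting rule behind C4.5
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X_tr, y_tr)

print("error rate:", 1 - tree.score(X_te, y_te))
# Peaks chosen for splits play the role of biomarker candidates
print("candidate peak indices:", np.nonzero(tree.feature_importances_)[0])
```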

Development of Prevotella intermedia ATCC 49046 Strain-Specific PCR Primer Based on a Pig6 DNA Probe (Pig6 DNA probe를 기반으로 하는 Prevotella intermedia ATCC 49046 균주-특이 PCR primer 개발)

  • Jeong Seung-U; Yoo So-Young; Kang Sook-Jin; Kim Mi-Kwang; Jang Hyun-Seon; Lee Kwang-Yong; Kim Byung-Ok; Kook Joong-Ki
    • Korean Journal of Microbiology / v.42 no.2 / pp.89-94 / 2006
  • The purpose of this study was to develop strain-specific PCR primers for the identification of Prevotella intermedia ATCC 49046, which is frequently used in studies of the pathogenesis of periodontitis. The HindIII-digested genomic DNA of P. intermedia ATCC 49046 was cloned by a random cloning method. The specificity of the cloned DNA fragments was determined by Southern blot analysis, and the nucleotide sequences of the cloned DNA probes were determined by the chain termination method. The PCR primers were designed based on the nucleotide sequence of the cloned DNA fragment. The data showed that the Pig6 DNA probe hybridized with genomic DNA from the P. intermedia strains (ATCC 25611^T and 49046) isolated from Westerners, but not with the strains isolated from Koreans. The Pig6 DNA probe was 813 bp long. The Pig6-F3 and Pig6-R3 primers, designed based on the nucleotide sequence of the Pig6 DNA probe, were specific to both P. intermedia ATCC 25611^T and P. intermedia ATCC 49046. On the other hand, the Pig6-60F and Pig6-770R primers were specific only to P. intermedia ATCC 49046. The two PCR primer sets could detect as little as 4 pg of chromosomal DNA of P. intermedia. These results indicate that the Pig6-60F and Pig6-770R primers are useful for the identification of P. intermedia ATCC 49046, especially with regard to maintenance of the strain.
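Strain-specific primer pairs like these can also be screened in silico before wet-lab testing. The sketch below is a hedged illustration only: exact string matching stands in for real annealing behavior (real primers tolerate mismatches), and the genome fragments and primer sequences are hypothetical, not the actual Pig6-60F/Pig6-770R sequences.

```python
# Minimal in-silico check: a primer pair "amplifies" a template only if the forward
# primer and the reverse complement of the reverse primer both occur, in order.
# All sequences below are placeholders.

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplifies(template: str, fwd: str, rev: str) -> bool:
    """Return True if the primer pair would yield a product on this template."""
    start = template.find(fwd)
    end = template.find(revcomp(rev), start + len(fwd)) if start != -1 else -1
    return start != -1 and end != -1

genomes = {
    "target_strain": "ATGGCCGATTAAGCTTACGTCCGGATTACA",   # fwd site present
    "other_strain":  "ATGGCCGCTTAAGCTTACGTCCGGATTACA",   # one-base mismatch in fwd site
}
fwd, rev = "ATGGCCGATT", "TGTAATCCGG"   # hypothetical primer pair

for strain, genome in genomes.items():
    print(strain, "product" if amplifies(genome, fwd, rev) else "no product")
```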

A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min; Han, Kyung-Soo; Kim, Young-Seup
    • Korean Journal of Remote Sensing / v.21 no.6 / pp.445-456 / 2005
  • Land surface parameters should be determined with sufficient accuracy because they play an important role in climate change near the ground. Since surface reflectance exhibits strong anisotropy, off-nadir viewing results in a strong dependency of the observations on the Sun-target-sensor geometry, and these angular effects contribute random noise to the data. The principal objective of this study is to provide a database of accurate surface reflectance over Korea, with the angular effects removed, from MODIS 250 m reflective channel data. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250 m resolution on a daily basis. The successive analytic processing steps were first performed on a per-pixel basis to remove cloudy pixels. Geometric distortion was corrected by nearest-neighbor resampling using a 2nd-order polynomial obtained from the geolocation information of the MODIS data set. To correct the surface anisotropy effects, this paper applied a semiempirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model against the angular components, such as viewing zenith angle, solar zenith angle, viewing azimuth angle, and solar azimuth angle, from the reflectance observed by the satellite. First, sets of observations spanning a 31-day period are assembled to fit the BRDF model. In the next step, nadir-view reflectance normalization is carried out by adjusting the angular components separated by the BRDF model for each spectral band and each pixel. Modeled reflectance values show good agreement with measured reflectance values, with an RMSE (Root Mean Square Error) of about 0.01 overall (maximum = 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images for 2001 over Korea.
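The kernel-driven inversion and nadir normalization described above amount to a per-pixel linear least-squares fit. The snippet below is a simplified sketch, assuming the standard RossThick volumetric kernel; the geometric kernel values (e.g. LiSparse) are assumed to be precomputed and passed in, and cloud screening and resampling are not shown.

```python
# Minimal sketch of a kernel-driven BRDF fit and nadir normalization (synthetic data).
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric scattering kernel (angles in radians)."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v))) - np.pi / 4

def invert_brdf(rho, k_vol, k_geo):
    """Least-squares fit of rho ~ f_iso + f_vol*k_vol + f_geo*k_geo
    over one pixel's 31-day stack of observations."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    (f_iso, f_vol, f_geo), *_ = np.linalg.lstsq(A, rho, rcond=None)
    return f_iso, f_vol, f_geo

def nadir_reflectance(f_iso, f_vol, f_geo, theta_s, k_geo_nadir):
    """Evaluate the fitted model at viewing zenith = 0 (nadir normalization)."""
    return f_iso + f_vol * ross_thick(theta_s, 0.0, 0.0) + f_geo * k_geo_nadir

# toy usage: 31 synthetic observations of one pixel
rng = np.random.default_rng(0)
theta_s, theta_v, phi = rng.uniform(0.2, 0.9, (3, 31))
k_vol = ross_thick(theta_s, theta_v, phi)
k_geo = rng.normal(0.0, 0.1, 31)              # placeholder geometric-kernel values
rho = 0.2 + 0.05 * k_vol + 0.03 * k_geo       # synthetic reflectances
f_iso, f_vol, f_geo = invert_brdf(rho, k_vol, k_geo)
print(nadir_reflectance(f_iso, f_vol, f_geo, theta_s.mean(), k_geo_nadir=0.0))
```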

Regeneration of a Defective Railroad Surface for Defect Detection with Deep Convolutional Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho; Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study was carried out to generate various images of railroad surfaces with random defects as training data, in order to improve defect detection. Defects on the surface of railroads are caused by various factors, such as friction between track binding devices and adjacent tracks, and can cause accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and to reduce railroad maintenance costs. In general, the performance of image processing analysis methods and machine learning techniques is affected by the quantity and quality of data. For this reason, some studies require specific devices or vehicles to acquire images of the track surface at regular intervals in order to build a database of various railway surface images. In contrast, in this study, to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies on Generative Adversarial Networks (GANs). In this way, we aimed to detect defects on the railroad surface even without a dedicated database. The constructed model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, taking the ground truth of the railroad defects into account. The generated images of the railroad surface were then used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate its performance, we clustered and divided the railroad data into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, we used only the original texture images as the training set for the defect detection model. In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other images. Each defect detection model was evaluated against the ground truth in terms of intersection over union (IoU) and F1-score. As a result, the scores increased by about 10~15% when the generated images were used, compared to the case in which only the original images were used. This shows that defects can be detected using the existing data and a few different texture images, even for railroad surface images for which no dedicated training database has been constructed.
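The evaluation measures used above are straightforward to compute from binary masks. The snippet below is a minimal sketch of pixel-wise IoU and F1-score between a predicted defect mask and its ground truth; it does not reproduce the GAN-based regeneration model or the FCN detector themselves.

```python
# Minimal sketch: pixel-wise IoU and F1-score for binary defect masks.
import numpy as np

def iou_and_f1(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return iou, f1

# toy usage: compare a detector output against its labeled mask
pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [0, 1, 1]])
print(iou_and_f1(pred, gt))   # (0.5, ~0.667)
```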

The Validity Test of Statistical Matching Simulation Using the Data of Korea Venture Firms and Korea Innovation Survey (벤처기업정밀실태조사와 한국기업혁신조사 데이터를 활용한 통계적 매칭의 타당성 검증)

  • An, Kyungmin; Lee, Young-Chan
    • Knowledge Management Research / v.24 no.1 / pp.245-271 / 2023
  • The shift to the data economy requires new forms of analysis beyond conventional research in the management field. Data matching refers to a technique or processing method that combines data sets collected from different samples of the same population. In this study, statistical matching was performed using random hot-deck and Mahalanobis distance functions on the 2020 Survey of Korea Venture Firms and the 2020 Korea Innovation Survey data. Among the variables used for the statistical matching simulation, industry and the number of workers were required to match exactly, while region, business power, listed market, and sales were used as common variables. The simulation was verified by a mean test and kernel density comparison. The analysis confirmed that the statistical matching was appropriate: although the mean test showed a difference, the kernel densities showed a similar pattern. This study attempted to expand the spectrum of research methods by experimenting with a data matching methodology that has not been widely attempted in the management field, and suggests implications in terms of data utilization and diversity.
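The distance-based side of this matching can be sketched compactly. The code below is an illustrative Mahalanobis nearest-donor match, assuming records have already been restricted to cells with identical industry and worker count; the toy arrays stand in for the survey variables, and the random hot-deck component is not reproduced.

```python
# Minimal sketch: Mahalanobis-distance nearest-donor matching on common variables.
import numpy as np

def mahalanobis_match(recipients: np.ndarray, donors: np.ndarray) -> np.ndarray:
    """Return, for each recipient row, the index of the nearest donor row."""
    cov = np.cov(donors, rowvar=False)
    cov_inv = np.linalg.pinv(cov)                  # pinv guards against singular covariance
    diffs = recipients[:, None, :] - donors[None, :, :]
    d2 = np.einsum("rdk,kl,rdl->rd", diffs, cov_inv, diffs)
    return d2.argmin(axis=1)

# toy usage: 3 recipients, 4 donors, 2 common numeric variables
recipients = np.array([[1.0, 10.0], [2.0, 30.0], [5.0, 50.0]])
donors = np.array([[1.1, 12.0], [2.2, 28.0], [4.8, 55.0], [9.0, 90.0]])
print(mahalanobis_match(recipients, donors))       # nearest donor index per recipient
```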

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung; Song, Min-chae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise here. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing, such as correcting spelling or spacing errors, affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying deep learning models to Korean texts. As a starting point, we summarized these issues as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we achieve a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these research questions, we generate various types of morpheme vectors reflecting them and then compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ on the following three criteria. First, they come from two types of data sources: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. It appears that using text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, which was devised to handle the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme do not seem to have any definite influence on the classification accuracy.
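The morpheme-vector derivation described above (CBOW, context window 5, dimension 300) can be sketched with gensim. The tiny morpheme-tokenized corpus below is illustrative only; the paper's actual tokenizer, minimum-frequency setting, and POS-tag handling are not reproduced.

```python
# Minimal sketch: CBOW morpheme vectors with gensim (toy corpus, placeholder settings).
from gensim.models import Word2Vec

# each sentence is a list of morphemes (optionally with POS tags attached, e.g. "예쁘/VA")
corpus = [
    ["촉촉하", "고", "발림", "이", "좋", "아요"],
    ["향", "은", "좋", "지만", "가격", "이", "비싸", "요"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=300,   # morpheme vector dimension
    window=5,          # context window
    sg=0,              # 0 = CBOW
    min_count=1,       # minimum morpheme frequency to include
    seed=0,
)

vec = model.wv["좋"]   # 300-dimensional morpheme vector for one morpheme
print(vec.shape)        # (300,)
```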