• Title/Summary/Keyword: High Dimensionality Data


Network Graph Analysis of Gene-Gene Interactions in Genome-Wide Association Study Data

  • Lee, Sungyoung;Kwon, Min-Seok;Park, Taesung
    • Genomics & Informatics
    • /
    • v.10 no.4
    • /
    • pp.256-262
    • /
    • 2012
  • Most common complex traits, such as obesity, hypertension, diabetes, and cancers, are known to be associated with multiple genes, environmental factors, and their epistasis. Recently, the development of advanced genotyping technologies has allowed us to perform genome-wide association studies (GWASs). Many approaches have been proposed for detecting the effects of multiple genes on complex traits in GWASs. Multifactor dimensionality reduction (MDR) is a powerful and efficient method for detecting high-order gene-gene (G×G) interactions. However, the biological interpretation of G×G interactions identified by MDR analysis is not easy. To aid the interpretation of MDR results, we propose a network graph analysis to elucidate the meaning of the identified G×G interactions. The proposed network graph analysis consists of three steps. The first step performs G×G interaction analysis using MDR. The second step draws the network graph from the MDR result. The third step provides biological evidence for the identified G×G interactions using external biological databases. The proposed method was applied to Korean Association Resource (KARE) data, containing 8,838 individuals with 327,632 single-nucleotide polymorphisms, in order to perform G×G interaction analysis of body mass index (BMI). Our network graph analysis successfully showed that many of the identified G×G interactions have known biological evidence related to BMI. We expect that our network graph analysis will be helpful for interpreting the biological meaning of G×G interactions.
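The graph-drawing step described in the entry above can be illustrated with a small, hypothetical sketch: nodes are genes (or SNPs), and each edge is a G×G interaction pair reported by MDR, weighted by an assumed cross-validation balanced accuracy. The gene names and values below are placeholders, not KARE results.

```python
# Minimal sketch of the network-graph step: nodes are genes, edges are G x G
# interaction pairs from MDR, with edge weight = (hypothetical) balanced accuracy.
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical MDR output: (gene_1, gene_2, balanced_accuracy)
mdr_pairs = [
    ("GENE_A", "GENE_B", 0.61),
    ("GENE_B", "GENE_C", 0.58),
    ("GENE_A", "GENE_D", 0.55),
]

G = nx.Graph()
for g1, g2, acc in mdr_pairs:
    G.add_edge(g1, g2, weight=acc)

# Edge width proportional to interaction strength; node size to degree.
pos = nx.spring_layout(G, seed=42)
widths = [10 * (G[u][v]["weight"] - 0.5) for u, v in G.edges()]
sizes = [300 * G.degree(n) for n in G.nodes()]
nx.draw(G, pos, with_labels=True, width=widths, node_size=sizes)
plt.savefig("gxg_network.png")
```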

Physical Database Design for DFT-Based Multidimensional Indexes in Time-Series Databases (시계열 데이터베이스에서 DFT-기반 다차원 인덱스를 위한 물리적 데이터베이스 설계)

  • Kim, Sang-Wook;Kim, Jin-Ho;Han, Byung-ll
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.11
    • /
    • pp.1505-1514
    • /
    • 2004
  • Sequence matching in time-series databases is an operation that finds the data sequences whose changing patterns are similar to that of a query sequence. Typically, sequence matching employs a multi-dimensional index for efficient processing. In order to alleviate the curse-of-dimensionality problem of the multi-dimensional index in high-dimensional cases, previous methods for sequence matching apply the Discrete Fourier Transform (DFT) to data sequences and take only the first two or three DFT coefficients as organizing attributes of the multi-dimensional index. This paper first points out the problems of such simple methods that take the first two or three coefficients, and proposes a novel solution for constructing the optimal multi-dimensional index. The proposed method analyzes the characteristics of a target database and, based on this analysis, identifies the organizing attributes having the best discrimination power. It also determines the optimal number of organizing attributes for efficient sequence matching by using a cost model. To show the effectiveness of the proposed method, we perform a series of experiments. The results show that the proposed method significantly outperforms the previous ones.

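A rough sketch of the attribute-selection idea described above, under the assumption that coefficient variance across the database serves as a simple proxy for discrimination power (the paper's actual analysis and cost model are not reproduced here):

```python
# Pick the DFT coefficient positions with the largest variance across the
# database instead of always taking the first two or three (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 128))          # 1000 sequences of length 128 (synthetic)

coeffs = np.fft.rfft(db, axis=1)               # DFT of every data sequence
feats = np.hstack([coeffs.real, coeffs.imag])  # real/imag parts as candidate attributes

k = 4                                          # number of organizing attributes (assumed)
variances = feats.var(axis=0)
best = np.argsort(variances)[::-1][:k]         # attributes with the best spread
index_keys = feats[:, best]                    # low-dimensional keys for the index
print("selected attribute columns:", best)
```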

Development of Traffic State Classification Technique (교통상황 분류를 위한 클러스터링 기법 개발)

  • Woojin Kang;Youngho Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.1
    • /
    • pp.81-92
    • /
    • 2023
  • Traffic state classification is crucial for time-of-day (TOD) traffic signal control. This paper proposes a traffic state classification technique that applies the Deep Embedded Clustering (DEC) method to high-dimensional traffic data observed at all signalized intersections in a traffic signal control sub-area (SA). Until now, signal timing plans have been determined based on the traffic data observed at the critical intersection of the SA, a method limited in that it cannot consider the comprehensive traffic situation in the SA. The proposed method alleviates the curse of dimensionality and overcomes the shortcomings of the current signal timing plan practice.
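DEC, as referenced above, jointly trains an autoencoder and refines soft cluster assignments with a KL-divergence target distribution. The sketch below is only a simplified stand-in for its common initialization stage: pretrain an autoencoder on high-dimensional intersection data, then run K-means in the learned embedding. The data, layer sizes, and number of traffic states are assumptions.

```python
# Simplified stand-in for DEC (synthetic data; full DEC additionally refines
# soft cluster assignments, which is omitted here).
import numpy as np
from sklearn.cluster import KMeans
from tensorflow import keras

X = np.random.rand(2000, 120).astype("float32")     # detector readings across an SA (synthetic)

inp = keras.Input(shape=(120,))
z = keras.layers.Dense(64, activation="relu")(inp)
z = keras.layers.Dense(10, name="embedding")(z)      # 10-dim latent space (assumed)
out = keras.layers.Dense(64, activation="relu")(z)
out = keras.layers.Dense(120)(out)

autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=64, verbose=0)

encoder = keras.Model(inp, autoencoder.get_layer("embedding").output)
states = KMeans(n_clusters=4, n_init=10).fit_predict(encoder.predict(X))  # traffic-state labels
```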

A Comparison and Analysis on High-Dimensional Clustering Techniques for Data Mining (데이터 마이닝을 위한 고차원 클러스터링 기법에 관한 비교 분석 연구)

  • 김홍일;이혜명
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.12
    • /
    • pp.887-900
    • /
    • 2003
  • Many applications require the clustering of large amounts of high-dimensional data. Many automated clustering techniques have been developed, but most do not work effectively or efficiently on high-dimensional (numerical) data because of the so-called “curse of dimensionality.” Moreover, high-dimensional data often contain a significant amount of noise, which further degrades the effectiveness of algorithms. Therefore, it is necessary to examine the structure and various characteristics of high-dimensional data and to develop algorithms that support clustering adapted to applications of high-dimensional databases. In this paper, we investigate and classify the existing high-dimensional clustering methods by analyzing the strengths and weaknesses of each method for specific applications and comparing them. In particular, in terms of efficiency and effectiveness, we compare the traditional algorithms with CLIP, which we developed. This study will contribute to the development of algorithms more advanced than the current ones.

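The “curse of dimensionality” invoked in the entry above can be illustrated numerically: as dimensionality grows, the nearest and farthest neighbors of a point become almost equally distant, which undermines distance-based clustering. A minimal sketch with random data:

```python
# Distance concentration in high dimensions: the max/min distance ratio from a
# reference point shrinks toward 1 as the dimensionality d grows.
import numpy as np

rng = np.random.default_rng(1)
for d in (2, 10, 100, 1000):
    X = rng.random((500, d))
    dists = np.linalg.norm(X[1:] - X[0], axis=1)   # distances from one reference point
    print(f"d={d:4d}  max/min distance ratio = {dists.max() / dists.min():.2f}")
```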

High-resolution 1H NMR Spectroscopy of Green and Black Teas

  • Jeong, Ji-Ho;Jang, Hyun-Jun;Kim, Yongae
    • Journal of the Korean Chemical Society
    • /
    • v.63 no.2
    • /
    • pp.78-84
    • /
    • 2019
  • High-resolution $^1H$ NMR spectroscopic technique has been widely used as one of the most powerful analytical tools in food chemistry as well as to define molecular structure. The $^1H$ NMR spectra-based metabolomics has focused on classification and chemometric analysis of complex mixtures. The principal component analysis (PCA), an unsupervised clustering method and used to reduce the dimensionality of multivariate data, facilitates direct peak quantitation and pattern recognition. Using a combination of these techniques, the various green teas and black teas brewed were investigated via metabolite profiling. These teas were characterized based on the leaf size and country of cultivation, respectively.

Comparison of Hierarchical and Marginal Likelihood Estimators for Binary Outcomes

  • Yun, Sung-Cheol;Lee, Young-Jo;Ha, Il-Do;Kang, Wee-Chang
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2003.05a
    • /
    • pp.79-84
    • /
    • 2003
  • Likelihood estimation in random-effect models is often complicated because the marginal likelihood involves an analytically intractable integral. Numerical integration such as Gauss-Hermite quadrature is an option, but is generally not recommended when the dimensionality of the integral is high. An alternative is the use of hierarchical likelihood, which avoids such burdensome numerical integration. These two approaches for fitting binary data are compared and the advantages of using the hierarchical likelihood are discussed. Random-effect models for binary outcomes and for bivariate binary-continuous outcomes are considered.

  • PDF

Text Classification on Social Network Platforms Based on Deep Learning Models

  • YA, Chen;Tan, Juan;Hoekyung, Jung
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.1
    • /
    • pp.9-16
    • /
    • 2023
  • The natural language on social network platforms has a certain front-to-back dependency in structure, and the direct conversion of Chinese text into a vector makes the dimensionality very high, thereby resulting in the low accuracy of existing text classification methods. To this end, this study establishes a deep learning model that combines a big data ultra-deep convolutional neural network (UDCNN) and long short-term memory network (LSTM). The deep structure of UDCNN is used to extract the features of text vector classification. The LSTM stores historical information to extract the context dependency of long texts, and word embedding is introduced to convert the text into low-dimensional vectors. Experiments are conducted on the social network platforms Sogou corpus and the University HowNet Chinese corpus. The research results show that compared with CNN + rand, LSTM, and other models, the neural network deep learning hybrid model can effectively improve the accuracy of text classification.

Exploring trends in blockchain publications with topic modeling: Implications for forecasting the emergence of industry applications

  • Jeongho Lee;Hangjung Zo;Tom Steinberger
    • ETRI Journal
    • /
    • v.45 no.6
    • /
    • pp.982-995
    • /
    • 2023
  • Technological innovation generates products, services, and processes that can disrupt existing industries and lead to the emergence of new fields. Distributed ledger technology, or blockchain, offers novel transparency, security, and anonymity characteristics in transaction data that may disrupt existing industries. However, research attention has largely examined its application to finance. Less is known of any broader applications, particularly in Industry 4.0. This study investigates academic research publications on blockchain and predicts emerging industries using academia-industry dynamics. This study adopts latent Dirichlet allocation and dynamic topic models to analyze large text data with a high capacity for dimensionality reduction. Prior studies confirm that research contributes to technological innovation through spillover, including products, processes, and services. This study predicts emerging industries that will likely incorporate blockchain technology using insights from the knowledge structure of publications.

Comparison of the Performance of Clustering Analysis using Data Reduction Techniques to Identify Energy Use Patterns

  • Song, Kwonsik;Park, Moonseo;Lee, Hyun-Soo;Ahn, Joseph
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.559-563
    • /
    • 2015
  • Identification of energy use patterns in buildings has a great opportunity for energy saving. To find what energy use patterns exist, clustering analysis has been commonly used such as K-means and hierarchical clustering method. In case of high dimensional data such as energy use time-series, data reduction should be considered to avoid the curse of dimensionality. Principle Component Analysis, Autocorrelation Function, Discrete Fourier Transform and Discrete Wavelet Transform have been widely used to map the original data into the lower dimensional spaces. However, there still remains an ongoing issue since the performance of clustering analysis is dependent on data type, purpose and application. Therefore, we need to understand which data reduction techniques are suitable for energy use management. This research aims find the best clustering method using energy use data obtained from Seoul National University campus. The results of this research show that most experiments with data reduction techniques have a better performance. Also, the results obtained helps facility managers optimally control energy systems such as HVAC to reduce energy use in buildings.

  • PDF

Polyclass in Data Mining (데이터 마이닝에서의 폴리클라스)

  • 구자용;박헌진;최대우
    • The Korean Journal of Applied Statistics
    • /
    • v.13 no.2
    • /
    • pp.489-503
    • /
    • 2000
  • Data mining means data analysis and model selection using various types of data in order to explore useful information and knowledge for making decisions. Examples of data mining include scoring for credit analysis of a new customer and scoring for churn management, where the customers with high scores are given special attention. In this paper, scoring is interpreted as a modeling process of the conditional probability and polyclass scoring method is described. German credit data, a PC communication company data and a mobile communication company data are used to compare the performance of polyclass scoring method with that of the scoring method based on a tree model.

  • PDF