• Title/Summary/Keyword: Data Analysis

Search Results: 63,486

A FCA-based Classification Approach for Analysis of Interval Data (구간데이터분석을 위한 형식개념분석기반의 분류)

  • Hwang, Suk-Hyung;Kim, Eung-Hee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.1
    • /
    • pp.19-30
    • /
    • 2012
  • Based on internet-based infrastructures such as various information devices, social network systems, and cloud computing environments, distributed and sharable data are growing explosively. Recently, as a data analysis and mining technique for extracting, analyzing, and classifying inherent and useful knowledge and information, Formal Concept Analysis on binary or many-valued data has been successfully applied in many diverse fields. Within formal concept analysis, however, little research has been conducted on analyzing interval data, whose attributes take interval values. In this paper, we propose a new approach for the classification of interval data based on formal concept analysis. We present the development of a supporting tool (iFCA) that implements the proposed approach: binarization of the interval data table, concept extraction, and construction of concept hierarchies. Finally, with experiments on real-world data sets, we demonstrate that our approach provides useful and effective ways of analyzing and mining interval data.
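
The binarization-then-extraction pipeline described above can be illustrated with a small sketch. The toy interval table, the cut intervals used for binarization, and the naive concept enumeration below are assumptions for illustration only, not the iFCA tool itself:

```python
from itertools import combinations

# Toy interval data table: object -> {attribute: (low, high)}  (illustrative values only)
data = {
    "o1": {"temp": (10, 18), "humidity": (30, 45)},
    "o2": {"temp": (15, 25), "humidity": (60, 80)},
    "o3": {"temp": (30, 40), "humidity": (40, 60)},
}

# Assumed cut intervals per attribute used for binarization (not from the paper)
bins = {
    "temp": {"temp_low": (0, 20), "temp_high": (20, 45)},
    "humidity": {"hum_low": (0, 50), "hum_high": (50, 100)},
}

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# Step 1: binarize the interval table into a formal context (object -> set of binary attributes)
context = {
    obj: {name for attr, iv in vals.items()
          for name, cut in bins[attr].items() if overlaps(iv, cut)}
    for obj, vals in data.items()
}

objects = list(context)
attributes = sorted(set().union(*context.values()))

def intent(objs):
    """Attributes shared by all objects in objs."""
    sets = [context[o] for o in objs] or [set(attributes)]
    return set.intersection(*sets)

def extent(attrs):
    """Objects that have all attributes in attrs."""
    return {o for o in objects if attrs <= context[o]}

# Step 2: naive formal-concept enumeration via closures of object subsets
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        b = intent(set(objs))
        a = extent(b)
        concepts.add((frozenset(a), frozenset(b)))

for a, b in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(a), "->", sorted(b))
```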

Visualizing Article Material using a Big Data Analytical Tool R Language (빅데이터 분석 도구 R 언어를 이용한 논문 데이터 시각화)

  • Nam, Soo-Tai;Shin, Seong-Yoon;Jin, Chan-Yong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.326-327
    • /
    • 2021
  • Recently, big data utilization has attracted wide interest across a variety of industrial fields. Big data analysis is the process of discovering meaningful new correlations, patterns, and trends in large volumes of data stored in data stores, and of creating new value from them. Accordingly, most big data analysis methods draw on data mining, machine learning, natural language processing, and pattern recognition techniques long used in statistics and computer science. Using the R language, a big data analysis tool, analysis results can be expressed through various visualization functions applied to pre-processed text data. The data used in this study were 29 papers from a specific journal. In the final analysis results, the most frequently mentioned keyword was "Research", which ranked first with 743 occurrences. Based on the results of the analysis, the limitations of the study and theoretical implications are suggested.
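
The paper performs its keyword-frequency visualization in R; the following is a rough Python analogue of the same idea, with toy paper titles and an assumed stop-word list standing in for the pre-processed text:

```python
import re
from collections import Counter
import matplotlib.pyplot as plt

# Toy stand-in for the pre-processed paper text analyzed in the study (illustrative only)
titles = [
    "Research on big data analysis for smart factory",
    "A research framework for deep learning based research trends",
    "Big data visualization research using R",
]

stopwords = {"a", "on", "for", "the", "using", "based"}  # assumed stop-word list

words = [w for t in titles for w in re.findall(r"[a-z]+", t.lower()) if w not in stopwords]
freq = Counter(words).most_common(10)

# Bar chart of the most frequent keywords
terms, counts = zip(*freq)
plt.bar(terms, counts)
plt.xticks(rotation=45, ha="right")
plt.title("Keyword frequency (toy example)")
plt.tight_layout()
plt.show()
```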


Study on Application of Big Data in Packaging (패키징(Packaging) 분야에서의 빅데이터(Big data) 적용방안 연구)

  • Kang, WookGeon;Ko, Euisuk;Shim, Woncheol;Lee, Hakrae;Kim, Jaineung
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY
    • /
    • v.23 no.3
    • /
    • pp.201-209
    • /
    • 2017
  • Big data, an element of the Fourth Industrial Revolution, has been drawing attention since the Fourth Industrial Revolution was highlighted at the 2016 World Economic Forum. Big data is used in various fields because it helps predict the near future and can create new business. However, its utilization and study in the field of packaging are lacking. Today, packaging is expected to serve as a marketing element that affects consumer choice, and big data is actively used in marketing. In the marketing field, big data can be used to analyze sales information and consumer reactions to produce meaningful results. Therefore, this study proposes a method of applying big data to the field of packaging, focusing on marketing. It suggests utilizing private data and community data to analyze the interaction between consumers and products. Using social big data makes it possible to understand preferred packaging and consumer perceptions and emotions within the same product line. It can also be used to analyze the effects of packaging among the various components of a product. Packaging is only one of a product's many components, so it is not easy to isolate the impact of a single packaging element; nevertheless, this study presents the possibility of using big data to analyze consumers' perceptions and feelings about packaging.

A Study on Recognition of Artificial Intelligence Utilizing Big Data Analysis (빅데이터 분석을 활용한 인공지능 인식에 관한 연구)

  • Nam, Soo-Tai;Kim, Do-Goan;Jin, Chan-Yong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.129-130
    • /
    • 2018
  • Big data analysis is a technique for effectively analyzing unstructured data such as the Internet, social network services, web documents generated in the mobile environment, e-mail, and social data, as well as well-formed structured data in databases. Most big data analysis techniques build on data mining, machine learning, natural language processing, and pattern recognition methods from statistics and computer science. Global research institutes have identified the analysis of big data as the most noteworthy new technology since 2011, and companies in most industries are therefore making efforts to create new value through the application of big data. In this study, we performed the analysis using Social Metrics, a big data analysis tool from Daum Communications. We analyzed public perceptions of the keyword "Artificial Intelligence" over the one-month period ending May 19, 2018. The results of the big data analysis are as follows. First, the top related search keyword for "Artificial Intelligence" was found to be "technology" (4,122 occurrences). Based on these results, this study suggests theoretical implications.
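
As a rough illustration of the kind of related-keyword counting the abstract reports (Social Metrics itself is a commercial service and is not reproduced here), a simple co-occurrence count over toy posts might look like this:

```python
import re
from collections import Counter

# Toy stand-in for social posts collected over the analysis window (illustrative only)
posts = [
    "Artificial intelligence technology is changing the robot industry",
    "New artificial intelligence chatbot technology released",
    "Investment in artificial intelligence and machine learning grows",
]

target = "artificial intelligence"
related = Counter()
for post in posts:
    text = post.lower()
    if target in text:
        # count the other terms appearing alongside the target keyword
        tokens = re.findall(r"[a-z]+", text.replace(target, " "))
        related.update(tokens)

# Top related keywords, analogous to the "top related search keyword" in the abstract
print(related.most_common(5))
```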


A Method for Microarray Data Analysis based on Bayesian Networks using an Efficient Structural learning Algorithm and Data Dimensionality Reduction (효율적 구조 학습 알고리즘과 데이타 차원축소를 통한 베이지안망 기반의 마이크로어레이 데이타 분석법)

  • 황규백;장정호;장병탁
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.11
    • /
    • pp.775-784
    • /
    • 2002
  • Microarray data, obtained with DNA chip technologies, measure the expression levels of thousands of genes in cells or tissues. They are used for gene function prediction or cancer diagnosis based on gene expression patterns. Among diverse methods for data analysis, the Bayesian network represents the relationships among data attributes in the form of a graph structure. This property enables us to discover various relations among genes and characteristics of the tissue (e.g., the cancer type) through microarray data analysis. However, most present microarray data sets are so sparse that it is difficult to apply general analysis methods, including Bayesian networks, directly. In this paper, we harness an efficient structural learning algorithm and data dimensionality reduction in order to analyze microarray data using Bayesian networks. The proposed method was applied to the analysis of real microarray data, i.e., the NCI60 data set, and its usefulness was evaluated based on the accuracy of the learned Bayesian networks in representing known biological facts.
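
A minimal sketch of the kind of preprocessing the abstract describes: dimensionality reduction followed by discretization before Bayesian-network structure learning. The gene matrix, the variance-based filter, and the tertile discretization are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a microarray matrix: rows = samples, columns = genes (illustrative only)
n_samples, n_genes = 60, 500
X = rng.normal(size=(n_samples, n_genes))
labels = rng.integers(0, 2, size=n_samples)          # e.g., cancer type

# Step 1: dimensionality reduction -- keep the k most variable genes
k = 20
top = np.argsort(X.var(axis=0))[-k:]
X_red = X[:, top]

# Step 2: discretize each gene into low/medium/high by tertiles,
# a common preprocessing step before discrete Bayesian-network structure learning
def discretize(col):
    q1, q2 = np.quantile(col, [1 / 3, 2 / 3])
    return np.digitize(col, [q1, q2])                 # values in {0, 1, 2}

X_disc = np.column_stack([discretize(X_red[:, j]) for j in range(k)])

# Step 3 (not shown): run a score-based structure search (e.g., greedy hill climbing
# with a BIC score) over X_disc plus the class label to learn the network
print(X_disc.shape, labels.shape)
```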

The Method for Extracting Meaningful Patterns Over the Time of Multi Blocks Stream Data (시간의 흐름과 위치 변화에 따른 멀티 블록 스트림 데이터의 의미 있는 패턴 추출 방법)

  • Cho, Kyeong-Rae;Kim, Ki-Young
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.3 no.10
    • /
    • pp.377-382
    • /
    • 2014
  • Techniques for analyzing data collected over time from mobile environments and the IoT are mainly used to extract patterns from the collected data and find meaningful information. However, existing analytical methods assume that analysis is performed after data collection is complete, which makes it difficult to reflect changes in time series data as time passes. In this paper, we introduce an analysis method for multi-block stream data (AM-MBSD) for analyzing data streams with properties such as pattern variability, large volume, and continuity. Multi-block stream data is defined as a plurality of continuously generated data blocks, and meaningful patterns are extracted from each block using the proposed method. The extracted patterns are collected together with their generation time and frequency, with consideration of collection errors, and the method is validated through analysis experiments using time series data.
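
A minimal sketch of the block-wise pattern extraction idea (not the AM-MBSD implementation): split the stream into fixed-size blocks, then record each frequent pattern together with its block start time and frequency. The block size, minimum support, and bigram patterns are assumptions for illustration:

```python
from collections import Counter

# Toy event stream: (timestamp, value) pairs (illustrative only)
stream = [(t, v) for t, v in enumerate("ABABCABABDABAB")]

BLOCK_SIZE = 5      # assumed block length
MIN_SUPPORT = 2     # assumed minimum frequency for a "meaningful" pattern

def blocks(events, size):
    """Yield consecutive fixed-size blocks of the stream."""
    for i in range(0, len(events), size):
        yield events[i:i + size]

patterns = []
for block in blocks(stream, BLOCK_SIZE):
    values = [v for _, v in block]
    start_time = block[0][0]
    # count length-2 subsequences (bigrams) inside the block
    bigrams = Counter(tuple(values[i:i + 2]) for i in range(len(values) - 1))
    for pat, freq in bigrams.items():
        if freq >= MIN_SUPPORT:
            patterns.append({"pattern": "".join(pat), "block_start": start_time, "freq": freq})

for p in patterns:
    print(p)
```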

Firework plot as a graphical exploratory data analysis tool for evaluating the impact of outliers in skewness and kurtosis of univariate data (일변량 자료의 왜도와 첨도에서 특이점의 영향을 평가하기 위한 탐색적 자료분석 그림도구로서의 불꽃그림)

  • Moon, Sungho
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.2
    • /
    • pp.355-368
    • /
    • 2016
  • Outliers and influential data points distort many data analysis measures. Jang and Anderson-Cook (2014) proposed a graphical method called the firework plot for exploratory analysis, providing a visualization of the trace of the impact of possibly outlying and/or influential data points on univariate/bivariate data analysis and regression. They developed 3-D plots as well as pairwise plots for the appropriate measures of interest. This paper further extends their approach and identifies its strengths. Firework plots can be used as a graphical exploratory data analysis tool to evaluate the impact of outliers on the skewness and kurtosis of univariate data.
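
The per-observation impact trace that a firework plot visualizes can be approximated by a leave-one-out computation of skewness and kurtosis; the toy sample, injected outlier, and flagging thresholds below are assumptions for illustration:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)

# Toy univariate sample with one injected outlier (illustrative only)
x = np.append(rng.normal(size=30), 8.0)

base = (skew(x), kurtosis(x))
print("full sample:   skewness=%.3f  kurtosis=%.3f" % base)

# Leave-one-out trace: how much each point shifts skewness and kurtosis,
# i.e., the kind of per-observation impact a firework plot displays
for i in range(len(x)):
    xi = np.delete(x, i)
    ds = skew(xi) - base[0]
    dk = kurtosis(xi) - base[1]
    if abs(ds) > 0.5 or abs(dk) > 1.0:      # flag only the most influential points
        print(f"dropping x[{i}]={x[i]:.2f}: d_skew={ds:+.3f}, d_kurt={dk:+.3f}")
```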

Design of a Platform for Collecting and Analyzing Agricultural Big Data (농업 빅데이터 수집 및 분석을 위한 플랫폼 설계)

  • Nguyen, Van-Quyet;Nguyen, Sinh Ngoc;Kim, Kyungbaek
    • Journal of Digital Contents Society
    • /
    • v.18 no.1
    • /
    • pp.149-158
    • /
    • 2017
  • Big data present exciting opportunities and challenges for economic development. In the agriculture sector, for instance, combining various agricultural data (e.g., weather data, soil data) and subsequently analyzing these data delivers valuable and helpful information to farmers and agribusinesses. However, massive volumes of agricultural data are generated every minute through many kinds of devices and services, such as sensors and agricultural web markets. This leads to the challenges of the big data problem, including data collection, data storage, and data analysis. Although some systems have been proposed to address this problem, they are still restricted in the type of data, the type of storage, or the size of data they can handle. In this paper, we propose a novel design of a platform for collecting and analyzing agricultural big data. The proposed platform supports (1) multiple methods of collecting data from various data sources using Flume and MapReduce; (2) multiple choices of data storage, including HDFS, HBase, and Hive; and (3) big data analysis modules based on Spark and Hadoop.
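
A minimal sketch of the analysis layer only, assuming Spark reads raw data from HDFS and writes results to a Hive table; the path, schema, and table names are hypothetical, not from the paper:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# The HDFS path and column names (weather.csv, region, obs_date, rainfall_mm) are hypothetical
spark = SparkSession.builder.appName("agri-bigdata-sketch").getOrCreate()

weather = (spark.read
           .option("header", True)
           .option("inferSchema", True)
           .csv("hdfs:///agriculture/raw/weather.csv"))

# Example aggregation: average rainfall per region per month
monthly = (weather
           .withColumn("month", F.date_format(F.col("obs_date"), "yyyy-MM"))
           .groupBy("region", "month")
           .agg(F.avg("rainfall_mm").alias("avg_rainfall")))

# Persist the result as a Hive table for downstream queries
monthly.write.mode("overwrite").saveAsTable("agri.monthly_rainfall")
spark.stop()
```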

A Divisive Clustering for Mixed Feature-Type Symbolic Data (혼합형태 심볼릭 데이터의 군집분석방법)

  • Kim, Jaejik
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.6
    • /
    • pp.1147-1161
    • /
    • 2015
  • Nowadays we consider and analyze not only classical data, expressed as points in p-dimensional Euclidean space, but also new types of data such as signals, functions, images, and shapes. Symbolic data can be considered one of these new types. Symbolic data can take various formats, such as intervals, histograms, lists, tables, distributions, and models. To date, symbolic data studies have mainly focused on individual formats of symbolic data. In this study, the scope is extended to datasets containing both histogram-valued and multimodal-valued data; a divisive clustering method for such mixed feature-type symbolic data is introduced and applied to the analysis of industrial accident data.
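
One divisive step on histogram-valued symbolic data can be sketched as follows; the L1 histogram distance, the seed-based split, and the toy histograms are illustrative choices, not the paper's exact splitting criterion:

```python
import numpy as np

# Toy histogram-valued symbolic objects: each observation is a normalized histogram
# over the same bins (illustrative only; not the paper's industrial-accident data)
hists = {
    "s1": np.array([0.7, 0.2, 0.1]),
    "s2": np.array([0.6, 0.3, 0.1]),
    "s3": np.array([0.1, 0.2, 0.7]),
    "s4": np.array([0.2, 0.2, 0.6]),
}

def hist_dist(p, q):
    """L1 distance between two histograms (one simple choice of symbolic dissimilarity)."""
    return np.abs(p - q).sum()

def divisive_split(names):
    """One divisive step: use the two most distant objects as seeds, assign the rest."""
    pairs = [(hist_dist(hists[a], hists[b]), a, b)
             for i, a in enumerate(names) for b in names[i + 1:]]
    _, seed1, seed2 = max(pairs)
    left, right = [seed1], [seed2]
    for n in names:
        if n in (seed1, seed2):
            continue
        (left if hist_dist(hists[n], hists[seed1]) <= hist_dist(hists[n], hists[seed2])
         else right).append(n)
    return left, right

print(divisive_split(list(hists)))   # expected grouping: {s1, s2} vs {s3, s4}
```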

A Review of Time Series Analysis for Environmental and Ecological Data (환경생태 자료 분석을 위한 시계열 분석 방법 연구)

  • Mo, Hyoung-ho;Cho, Kijong;Shin, Key-Il
    • Korean Journal of Environmental Biology
    • /
    • v.34 no.4
    • /
    • pp.365-373
    • /
    • 2016
  • Much of the data used in environmental and ecological analysis is obtained over time. If the number of time points is small, the data do not carry enough information, so repeated measurements or data from multiple survey points should be used to perform a comprehensive analysis; the methods used in that case are longitudinal data analysis or mixed model analysis. However, if the amount of information is sufficient because the number of time points is large, repeated data are not needed and the data are analyzed using time series techniques. In particular, when many time points are available and we want to examine how the variables affect each other, or what trends to expect in the future, we should analyze the data using time series analysis techniques. In this study, we introduce univariate time series analysis, the intervention time series model, the transfer function model, and the multivariate time series model, and review research papers published in Korea. We also introduce the error correction model, which can be used to analyze environmental and ecological data.
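
As a small worked example of the multivariate time series analysis the review covers, the sketch below fits a vector autoregression on toy bivariate data with statsmodels (the error correction model the abstract mentions would use statsmodels' VECM instead); the series and lag order are assumptions for illustration:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)

# Toy bivariate "environmental" series, e.g., temperature and species abundance (illustrative only)
n = 200
temp = np.cumsum(rng.normal(0, 0.5, n)) + 15
abundance = 0.8 * temp + rng.normal(0, 1.0, n)
data = pd.DataFrame({"temp": temp, "abundance": abundance})

# Multivariate time series analysis: fit a vector autoregression on the differenced series
diffed = data.diff().dropna()
model = VAR(diffed)
result = model.fit(2)                 # assumed lag order of 2

print(result.summary())
# Forecast 5 steps ahead from the last observed lags
print(result.forecast(diffed.values[-result.k_ar:], steps=5))
```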