• Title/Abstract/Keyword: Data Extraction Techniques

Search results: 335 items (processing time: 0.026 s)

원천 시스템 환경을 고려한 데이터 추출 방식의 비교 및 Index DB를 이용한 추출 방식의 구현 -S 은행 사례를 중심으로- (A Comparison of Data Extraction Techniques and an Implementation of Data Extraction Technique using Index DB -S Bank Case-)

  • 김기운
    • 경영과학 / Vol. 20, No. 2 / pp.1-16 / 2003
  • Previous research on data extraction and integration for data warehousing has concentrated mainly on relational DBMSs or, in part, on object-oriented DBMSs. It mostly addresses issues related to change-data (delta) capture and incremental update using the triggering capability of active database systems. Little attention has been paid, however, to data extraction from other types of source systems, such as hierarchical DBMSs, or from source systems without triggering capability. This paper argues, from a practical point of view, that in order to find appropriate data extraction techniques for different source systems, we need to consider not only the types of information sources and the capabilities of ETT tools but also other factors of the source systems, such as operational characteristics (i.e., whether they support a DBMS log, a user log, or no log, and whether timestamps are available) and DBMS characteristics (i.e., whether they have triggering capability). Having applied several data extraction techniques (e.g., DBMS log, user log, triggering, timestamp-based extraction, and file comparison) to S Bank's source systems (e.g., IMS, DB2, ORACLE, and SAM files), we found that the data extraction techniques available in a commercial ETT tool do not fully support extraction from the DBMS log of an IMS system. For such IMS systems, a new data extraction technique is proposed that first creates an Index database and then updates the data warehouse using it. We illustrate this technique with an example application.
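The timestamp-based extraction compared in the abstract can be sketched in a few lines; the rows, column layout, and cutoff below are hypothetical stand-ins, not the bank's actual schema:

```python
from datetime import datetime

# Hypothetical source rows: (account_id, balance, last_modified).
# In a real system these would come from the operational DBMS.
SOURCE_ROWS = [
    ("A-100", 5000, datetime(2003, 3, 1, 9, 0)),
    ("A-101", 1200, datetime(2003, 3, 2, 14, 30)),
    ("A-102", 800,  datetime(2003, 3, 3, 8, 15)),
]

def extract_deltas(rows, last_run):
    """Timestamp-based extraction: keep only rows modified after the
    previous extraction run (the 'deltas' to load into the warehouse)."""
    return [r for r in rows if r[2] > last_run]

deltas = extract_deltas(SOURCE_ROWS, datetime(2003, 3, 2, 0, 0))
```

This only works when the source system maintains a reliable modification timestamp, which is exactly the kind of operational characteristic the paper argues must be checked per source system.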

데이터마이닝의 자동 데이터 규칙 추출 방법론 개발 : 계층적 클러스터링 알고리듬과 러프 셋 이론을 중심으로 (Development of Automatic Rule Extraction Method in Data Mining : An Approach based on Hierarchical Clustering Algorithm and Rough Set Theory)

  • 오승준;박찬웅
    • 한국컴퓨터정보학회논문지 / Vol. 14, No. 6 / pp.135-142 / 2009
  • Data mining is one of the new areas of computational intelligence that provides new theories, techniques, and analysis tools for analyzing large data sets. Its major techniques include association rule discovery, classification, and clustering. Rather than applying these techniques individually, as previous studies have done, a methodology is needed that integrates them to discover rules automatically. Such a rule extraction methodology can support successful decision making from large volumes of data and is therefore applicable in many fields. This paper proposes an automatic rule extraction methodology that discovers meaningful rules from large data sets using a hierarchical clustering algorithm and rough set theory. The proposed method is evaluated experimentally on data sets from the UCI KDD archive, and the actually generated rules are illustrated. These automatically generated rules support efficient decision making.
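The rough-set side of the proposed methodology, keeping only condition classes whose records agree on the decision (the lower approximation), can be illustrated on a toy decision table; the attributes and decisions below are invented for illustration:

```python
# Toy decision table: (condition attributes, decision). A rule is "certain"
# when every record sharing the same condition values has the same decision.
records = [
    ({"size": "big", "color": "red"}, "accept"),
    ({"size": "big", "color": "red"}, "accept"),
    ({"size": "small", "color": "red"}, "reject"),
    ({"size": "small", "color": "blue"}, "accept"),
]

def extract_certain_rules(table):
    by_cond = {}
    for cond, decision in table:
        key = tuple(sorted(cond.items()))
        by_cond.setdefault(key, set()).add(decision)
    # Keep only condition classes with a single, consistent decision.
    return {k: next(iter(v)) for k, v in by_cond.items() if len(v) == 1}

rules = extract_certain_rules(records)
```

In the paper's pipeline, hierarchical clustering would first form the groups from which such condition classes are derived; this sketch shows only the consistency check.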

트랜잭션 기반 머신러닝에서 특성 추출 자동화를 위한 딥러닝 응용 (A Deep Learning Application for Automated Feature Extraction in Transaction-based Machine Learning)

  • 우덕채;문현실;권순범;조윤호
    • 한국IT서비스학회지 / Vol. 18, No. 2 / pp.143-159 / 2019
  • Machine learning (ML) is a method of fitting given data to a mathematical model to derive insights or to make predictions. In the age of big data, where the amount of available data increases exponentially due to the development of information technology and smart devices, ML shows high prediction performance through unbiased pattern detection. Feature engineering, which generates the features that can explain the problem to be solved, has a great influence on the performance of the ML process, and its importance is continuously emphasized. Despite this importance, it is still considered a difficult task, as it requires a thorough understanding of the domain characteristics and the source data, as well as an iterative procedure. We therefore propose methods that apply deep learning to reduce the complexity and difficulty of feature extraction and to improve the performance of ML models. A key reason for the superior performance of deep learning on complex unstructured data is that it can extract features from the source data itself. To bring this advantage to business problems, we propose deep-learning-based methods that automatically extract features from transaction data or directly predict and classify target variables. In particular, we applied techniques that perform well in existing text processing, based on the structural similarity between transaction data and text data, and we verified the suitability of each method according to the characteristics of the transaction data. Our study not only explores the possibility of automated feature extraction but also provides a benchmark model with a certain level of performance before a human performs the feature extraction task. In addition, it is expected to provide guidelines for choosing a suitable deep learning model based on the business problem and the data characteristics.
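The structural similarity between transactions and text that the authors exploit can be made concrete with a minimal bag-of-items encoding; the items and transactions below are hypothetical, and the paper's own models are deep networks rather than this count vector:

```python
from collections import Counter

# Transactions as item sequences, analogous to token sequences in text.
transactions = [
    ["coffee", "milk", "coffee"],
    ["bread", "milk"],
]

def build_vocab(txns):
    return sorted({item for t in txns for item in t})

def to_feature_vector(txn, vocab):
    """Bag-of-items encoding: one count per vocabulary entry, the same
    representation bag-of-words gives a text document."""
    counts = Counter(txn)
    return [counts.get(item, 0) for item in vocab]

vocab = build_vocab(transactions)
vectors = [to_feature_vector(t, vocab) for t in transactions]
```

A text-style deep model (e.g., an embedding layer followed by a sequence encoder) would consume the same item sequences directly, which is the automation the paper investigates.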

Framework for Content-Based Image Identification with Standardized Multiview Features

  • Das, Rik;Thepade, Sudeep;Ghosh, Saurav
    • ETRI Journal / Vol. 38, No. 1 / pp.174-184 / 2016
  • Information identification with image data by means of low-level visual features has evolved as a challenging research domain. Conventional text-based mapping of image data has been gradually replaced by content-based techniques of image identification. Feature extraction from image content plays a crucial role in facilitating content-based detection processes. In this paper, the authors have proposed four different techniques for multiview feature extraction from images. The efficiency of extracted feature vectors for content-based image classification and retrieval is evaluated by means of fusion-based and data standardization-based techniques. It is observed that the latter surpasses the former. The proposed methods outclass state-of-the-art techniques for content-based image identification and show an average increase in precision of 17.71% and 22.78% for classification and retrieval, respectively. Three public datasets - Wang; Oliva and Torralba (OT-Scene); and Corel - are used for verification purposes. The research findings are statistically validated by conducting a paired t-test.
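The data-standardization step that the abstract reports as superior to fusion can be sketched as per-dimension z-scoring, so that feature views on different scales contribute comparably to distance computations; the feature values below are invented:

```python
import math

# Two feature "views" concatenated per image: e.g. a color feature on a
# 0-255 scale and a texture feature on a 0-1 scale (hypothetical values).
features = [
    [200.0, 0.10],
    [50.0,  0.90],
    [120.0, 0.50],
]

def zscore_columns(rows):
    """Standardize each feature dimension to zero mean / unit variance so
    that large-scale views do not dominate distance-based comparison."""
    cols = list(zip(*rows))
    out_cols = []
    for col in cols:
        mean = sum(col) / len(col)
        std = math.sqrt(sum((x - mean) ** 2 for x in col) / len(col))
        out_cols.append([(x - mean) / std for x in col])
    return [list(r) for r in zip(*out_cols)]

std_features = zscore_columns(features)
```

After standardization, a nearest-neighbor classifier or retrieval ranking over Euclidean distance treats both views on an equal footing.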

웹 기반 데이터베이스로부터의 유용한 데이터 추출 기법의 설계 및 응용 (Design and application of effective data extraction technique from Web databases)

  • 황두성
    • 한국산학기술학회논문지 / Vol. 6, No. 4 / pp.309-314 / 2005
  • This paper analyzes techniques for extracting target data, based on relevance, from distributed Web databases containing biotechnology information. Building on this analysis, we propose the design and implementation of a knowledge-expansion method for protein data. Data extractors for Web databases can be implemented as manual, semi-automatic, or automatic extraction methods. A Web data extractor typically uses identifiers to locate and extract the target data within the relevant Web pages. This paper describes the design and implementation of a database system for organism protein data that uses the proposed Web data extraction technique.
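The identifier-based extraction the abstract mentions can be sketched as a simple wrapper that locates target data between known landmark strings; the page snippet and identifiers below are hypothetical:

```python
# A wrapper locates target data between known landmark strings
# (identifiers) in the page source; page and identifiers are made up.
page = '<tr><td class="name">P53_HUMAN</td><td class="len">393</td></tr>'

def wrapper_extract(html, left, right):
    """Return the text between the left and right identifiers, or None."""
    start = html.find(left)
    if start < 0:
        return None
    start += len(left)
    end = html.find(right, start)
    return html[start:end] if end >= 0 else None

protein = wrapper_extract(page, '<td class="name">', '</td>')
length = wrapper_extract(page, '<td class="len">', '</td>')
```

Manual, semi-automatic, and automatic extractors differ mainly in how these identifiers are obtained: hand-written, learned from labeled pages, or induced from page structure.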


Web Page Segmentation

  • Ahmad, Mahmood;Lee, Sungyoung
    • 한국정보처리학회:학술대회논문집 / 2014 Fall Conference / pp.1087-1090 / 2014
  • This paper presents an overview of research work related to web page segmentation. Over time, various techniques have been used and proposed to extract meaningful information from web pages automatically. Due to the voluminous amount of data, this extraction demands state-of-the-art techniques that segment web pages as humans would, or close to it. The motivation is to facilitate applications that rely on meaningful data acquired from multiple web pages. Information extraction, search engines, and re-organized web display for small-screen devices are a few strong candidate areas where web page segmentation has considerable potential and utility.
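A crude version of DOM-based segmentation can be sketched with the standard-library HTML parser, splitting a page's text at block-level tags; this is a simplification of the visual and structural techniques the paper surveys:

```python
from html.parser import HTMLParser

class BlockSegmenter(HTMLParser):
    """Coarse segmentation: collect the text of each block-level element
    (a simplified stand-in for visual/DOM-based segmentation)."""
    BLOCK_TAGS = {"div", "p", "td", "li", "h1", "h2"}

    def __init__(self):
        super().__init__()
        self.segments = []
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK_TAGS:
            self._flush()

    def handle_data(self, data):
        if data.strip():
            self._buffer.append(data.strip())

    def handle_endtag(self, tag):
        if tag in self.BLOCK_TAGS:
            self._flush()

    def _flush(self):
        if self._buffer:
            self.segments.append(" ".join(self._buffer))
            self._buffer = []

seg = BlockSegmenter()
seg.feed("<div>Main article text</div><div>Sidebar links</div>")
seg.close()
```

Human-like segmenters additionally use rendered geometry and visual cues, which tag boundaries alone cannot capture.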

A Development Method of Framework for Collecting, Extracting, and Classifying Social Contents

  • Cho, Eun-Sook
    • 한국컴퓨터정보학회논문지 / Vol. 26, No. 1 / pp.163-170 / 2021
  • As big data is applied across many fields, the big data market is expanding from hardware into the service-software sector. In particular, it is growing into a large platform market that provides applications for holistic, intuitive visualization of big data, including semantic understanding and analysis results. Demand for extracting and analyzing big data from social media such as SNS (Social Network Services) is very active among companies and individuals alike. Despite this strong demand for collecting and analyzing social media data for user trend analysis and marketing, however, research remains insufficient on resolving the difficulty of dynamic interoperation caused by the heterogeneity of the various social media service interfaces, and on the complexity of building and operating the software platform. This paper therefore presents a method for developing a framework that integrates the collection, extraction, and classification of social media data into a single operational pipeline. The proposed framework resolves the heterogeneity of social media data collection channels through the adapter pattern, and improves the accuracy of social topic extraction and classification through a semantic-relatedness-based extraction technique and a topic-relatedness-based classification technique.
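The adapter-pattern solution to heterogeneous collection channels can be sketched as follows; the service classes and their payloads are hypothetical stand-ins for real social media APIs:

```python
# Adapter pattern sketch: each social media service exposes a different
# interface; adapters normalize them all to a single collect() method.

class TwitterLikeService:
    def fetch_tweets(self, query):
        return [{"text": f"tweet about {query}"}]

class BlogLikeService:
    def search_posts(self, keyword):
        return [{"body": f"post on {keyword}"}]

class CollectorAdapter:
    def collect(self, topic):
        raise NotImplementedError

class TwitterAdapter(CollectorAdapter):
    def __init__(self, service):
        self.service = service
    def collect(self, topic):
        return [d["text"] for d in self.service.fetch_tweets(topic)]

class BlogAdapter(CollectorAdapter):
    def __init__(self, service):
        self.service = service
    def collect(self, topic):
        return [d["body"] for d in self.service.search_posts(topic)]

adapters = [TwitterAdapter(TwitterLikeService()), BlogAdapter(BlogLikeService())]
collected = [text for a in adapters for text in a.collect("bigdata")]
```

Downstream extraction and classification stages then see one uniform stream of text, regardless of which channel produced it.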

지하수 부존 가능지역 추출을 위한 LANDSAT TM 자료와 GIS의 통합(I) - LANDSAT TM 자료에 의한 지하수 부존 가능지역 추출 - (The Integration of GIS with LANDSAT TM Data for Ground Water Potential Area Mapping (I) - Extraction of the Ground Water Potential Area using LANDSAT TM Data -)

  • 지종훈
    • 대한원격탐사학회지 / Vol. 7, No. 1 / pp.29-43 / 1991
  • The study was performed to extract ground water potential areas using LANDSAT TM data. The image processing techniques developed for the study are contrast transformation, differential filtering, and pseudo-stereoscopic image methods. These were examined for lineament extraction, lineament interpretation, and the integration of vector data with LANDSAT data. The differential filtering method is very useful for lineament extraction, and lineaments in all directions are clearly shown on the band 5 image of LANDSAT TM. The pseudo-stereoscopic images are produced using a color differential method, and the pair images are useful for lineament interpretation. The results of the analysis are as follows. 1) There is a close correlation between lineaments and cased wells in the study area, because 33 of the 45 developed cased wells coincide with the lineaments. 2) 21 sites in the study area were selected for pumping tests, and as a result 11 of them produce more than 200 tons/day.
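The differential filtering used for lineament extraction can be illustrated with a first-difference filter over a toy grid: abrupt brightness changes between neighboring pixels mark candidate linear features. The pixel values below are invented, not actual band-5 data:

```python
# Differential filtering sketch on a toy "band 5"-style brightness grid.
image = [
    [10, 10, 80, 80],
    [10, 10, 80, 80],
    [10, 10, 80, 80],
]

def horizontal_diff(img):
    """Absolute difference between neighboring columns; large values mark
    vertical edges, i.e. candidate linear geological features."""
    return [[abs(row[c + 1] - row[c]) for c in range(len(row) - 1)]
            for row in img]

edges = horizontal_diff(image)
```

Filtering along other directions (vertical, diagonal) highlights lineaments of the corresponding orientations, which is why the study examines all-direction responses.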

FERET DATA SET에서의 PCA와 ICA의 비교 (A Comparison of PCA and ICA on the FERET Data Set)

  • Kim, Sung-Soo;Moon, Hyeon-Joon;Kim, Jaihie
    • 대한전자공학회:학술대회논문집 / 2003 Summer Conference, Vol. IV / pp.2355-2358 / 2003
  • The purpose of this paper is to investigate two major feature extraction techniques based on a generic modular face recognition system. Detailed algorithms are described for principal component analysis (PCA) and independent component analysis (ICA). PCA and ICA are statistical techniques for feature extraction, and their incorporation into a face recognition system requires numerous design decisions. We state these design decisions explicitly by introducing a modular face recognition system, since some of them are not documented in the literature. We explore different implementations of each module and evaluate the statistical feature extraction algorithms based on the FERET performance evaluation protocol (the de facto standard method for evaluating face recognition algorithms). We perform two experiments: in the first, we report performance results on the FERET database based on PCA; in the second, we examine performance variations based on the ICA feature extraction algorithm. The experimental results are reported using four different categories of image sets, including frontal, lighting, and duplicate images.
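The PCA module at the core of the first experiment can be sketched with a covariance matrix and power iteration on toy 2-D data; a real face recognition system would operate on high-dimensional image vectors and use a linear algebra library:

```python
# PCA sketch via power iteration on the covariance matrix of centered
# toy 2-D points (invented data, not face images).
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
        (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]

def principal_component(points, iters=100):
    """Return the unit-length first principal axis of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(iters):  # power iteration -> dominant eigenvector
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

pc1 = principal_component(data)
```

Projecting face-image vectors onto the leading principal axes yields the compact "eigenface" features that the recognition modules then compare.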


Application Consideration of Machine Learning Techniques in Satellite Systems

  • Jin-keun Hong
    • International journal of advanced smart convergence / Vol. 13, No. 2 / pp.48-60 / 2024
  • With the exponential growth of satellite data utilization, machine learning has become pivotal in enhancing innovation and cybersecurity in satellite systems. This paper investigates the role of machine learning techniques in identifying and mitigating vulnerabilities and code smells within satellite software. We explore satellite system architecture and survey applications like vulnerability analysis, source code refactoring, and security flaw detection, emphasizing feature extraction methodologies such as Abstract Syntax Trees (AST) and Control Flow Graphs (CFG). We present practical examples of feature extraction and training models using machine learning techniques like Random Forests, Support Vector Machines, and Gradient Boosting. Additionally, we review open-access satellite datasets and address prevalent code smells through systematic refactoring solutions. By integrating continuous code review and refactoring into satellite software development, this research aims to improve maintainability, scalability, and cybersecurity, providing novel insights for the advancement of satellite software development and security. The value of this paper lies in its focus on addressing the identification of vulnerabilities and resolution of code smells in satellite software. In terms of the authors' contributions, we detail methods for applying machine learning to identify potential vulnerabilities and code smells in satellite software. Furthermore, the study presents techniques for feature extraction and model training, utilizing Abstract Syntax Trees (AST) and Control Flow Graphs (CFG) to extract relevant features for machine learning training. Regarding the results, we discuss the analysis of vulnerabilities, the identification of code smells, maintenance, and security enhancement through practical examples. This underscores the significant improvement in the maintainability and scalability of satellite software through continuous code review and refactoring.
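The AST-based feature extraction the paper describes can be sketched with Python's standard `ast` module, counting node types as a simple feature vector; the source snippet is a hypothetical example, not actual satellite code:

```python
import ast
from collections import Counter

# Feature extraction sketch: parse source into an AST and count node
# types, giving a simple feature vector for code-smell or vulnerability
# classifiers such as the Random Forests mentioned in the abstract.
source = """
def unsafe(cmd):
    import os
    os.system(cmd)
"""

def ast_node_counts(code):
    tree = ast.parse(code)
    return Counter(type(node).__name__ for node in ast.walk(tree))

features = ast_node_counts(source)
```

Control Flow Graph features would be built analogously, but from basic blocks and branch edges rather than syntax nodes, capturing execution structure that the AST alone does not.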