• Title/Abstract/Keyword: data extract

Search results: 3,913 items (processing time: 0.026 seconds)

데이터 마이닝을 위한 생산공정 데이터 추출 (Data Extraction of Manufacturing Process for Data Mining)

  • 박홍균;이근안;최석우;이형욱;배성민
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2005년도 춘계학술대회 논문집 / pp.118-122 / 2005
  • Data mining is the process of autonomously extracting useful information or knowledge from large data stores or sets. To analyze manufacturing-process data obtained from a database with data mining, the source data should be collected from the production process and transformed into an appropriate form. Extracting those data normally requires a separate computer program for each database. This paper presents a program that makes it easy to extract data from databases in industry. The advantage of this program is that users can extract data from all types of databases and database tables and interface with Teamcenter Manufacturing.

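The paper above describes a generic database-extraction utility but does not publish its source code. The following is only a minimal JDBC sketch of that idea, with an assumed connection URL, credentials, and table name; the real program's Teamcenter Manufacturing interface is not shown.

```java
import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.*;

// Hypothetical illustration: dump any table of a JDBC-accessible database to CSV
// so the rows can later be fed to a data mining tool.
public class GenericTableExtractor {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/mes";   // assumed connection URL
        String table = "process_log";                      // assumed table name
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM " + table);
             PrintWriter out = new PrintWriter(new FileWriter(table + ".csv"))) {

            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();

            // Header row: column names come from metadata, so the program
            // does not need to know the table's schema in advance.
            for (int i = 1; i <= cols; i++) {
                out.print(md.getColumnName(i));
                out.print(i < cols ? "," : "\n");
            }
            // Data rows (values are written as-is; CSV escaping is omitted in this sketch).
            while (rs.next()) {
                for (int i = 1; i <= cols; i++) {
                    out.print(rs.getString(i));
                    out.print(i < cols ? "," : "\n");
                }
            }
        }
    }
}
```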

SNS상의 비정형 빅데이터로부터 감성정보 추출 기법 (An Extraction Method of Sentiment Information from Unstructured Big Data on SNS)

  • 백봉현;하일규;안병철
    • 한국멀티미디어학회논문지 / Vol.17 No.6 / pp.671-680 / 2014
  • Recently, with the remarkable growth of social network services, it has become necessary to extract interesting information from the large amount of data about individual opinions and preferences on SNS (Social Network Service). Such sentiment information can be applied to many fields of society, such as politics, public opinion, economics, personal services, and entertainment. To extract sentiment information, processing techniques are needed that store a large amount of SNS data, extract meaningful data from it, and search for the sentiment information. This paper proposes an efficient method for extracting sentiment information from various kinds of unstructured big data on social networks using the HDFS (Hadoop Distributed File System) platform and MapReduce functions. In experiments, the proposed method collects and stores data steadily as the amount of data increases. When the proposed functions are applied to sentiment analysis, the system maintains load balancing, and the analysis results are very close to those of manual work.
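
The abstract describes an HDFS/MapReduce sentiment pipeline without giving code; the sketch below is only a hypothetical Hadoop mapper/reducer pair, assuming posts arrive one per line and that a stubbed-out keyword lookup stands in for the paper's actual sentiment extraction. Job configuration and input/output paths are omitted.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical sketch: count SNS posts per sentiment label with Hadoop MapReduce.
public class SentimentCount {

    public static class SentimentMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Stand-in for a real lexicon- or classifier-based sentiment decision.
            String post = value.toString().toLowerCase();
            String label = post.contains("good") || post.contains("love") ? "positive"
                         : post.contains("bad")  || post.contains("hate") ? "negative"
                         : "neutral";
            context.write(new Text(label), ONE);
        }
    }

    public static class SentimentReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));  // e.g. "positive  4213"
        }
    }
}
```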

Designing Summary Tables for Mining Web Log Data

  • Ahn, Jeong-Yong
    • Journal of the Korean Data and Information Science Society / Vol.16 No.1 / pp.157-163 / 2005
  • On the Web, data is generally gathered automatically by Web servers and collected in server or access logs. However, as users access larger and larger amounts of data, query response times for extracting information inevitably get slower. One method to resolve this issue is the use of summary tables. In this short note, we design a prototype of summary tables that can efficiently extract information from Web log data. We also present the relative performance of the summary tables against a sampling technique and a method that uses raw data.

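The note above argues for pre-aggregated summary tables instead of querying raw logs. As a hedged illustration only, the snippet below builds a daily per-page summary table from a raw access-log table via JDBC; the table names, columns, and MySQL-specific `REPLACE INTO` syntax are assumptions, not taken from the paper.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Illustrative sketch: maintain a daily per-page summary table for web-log reporting.
public class BuildLogSummary {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/weblog", "user", "password");
             Statement st = con.createStatement()) {

            // One row per (day, page): far fewer rows than the raw log,
            // so reporting queries read the summary instead of the raw data.
            st.executeUpdate(
                "CREATE TABLE IF NOT EXISTS daily_page_summary (" +
                "  log_date DATE, page VARCHAR(255), " +
                "  hits BIGINT, bytes_sent BIGINT, " +
                "  PRIMARY KEY (log_date, page))");

            // Rebuild the aggregates from the raw log (MySQL REPLACE INTO overwrites by key).
            st.executeUpdate(
                "REPLACE INTO daily_page_summary " +
                "SELECT DATE(request_time), page, COUNT(*), SUM(bytes) " +
                "FROM access_log GROUP BY DATE(request_time), page");
        }
    }
}
```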

Extraction of similar XML data based on XML structure and processing unit

  • Park, Jong-Hyun
    • 한국컴퓨터정보학회논문지 / Vol.22 No.4 / pp.59-65 / 2017
  • XML has established itself as a format for data exchange on the Internet, and the volume of XML instances is large. Extracting similar information from XML instances is therefore an active research topic, but existing work is still insufficient. In this paper, we extract similar information from various kinds of XML instances according to the same goal. We use only the structural information of an XML instance for extraction, because some XML instances are described without a schema. To extract similar information efficiently, we propose a minimum unit of processing and two approaches for finding that unit: a structure-based method, which uses only the structural information of the XML instance, and a measure-based method, which finds the unit by a numerical formula. Both approaches can be applied to any application that needs to extract similar information from XML data, and they can also be used for HTML instances.
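
The paper relies only on XML structure (element names and nesting), not on a schema. As a rough, hedged illustration, the sketch below derives a simple structural signature (the set of root-to-element paths) from an XML file with the standard Java DOM API and compares two instances; this path-set comparison is a simplification for the example, not the paper's actual processing unit.

```java
import java.io.File;
import java.util.Set;
import java.util.TreeSet;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Illustrative sketch: collect element paths of an XML instance as a structural signature.
public class XmlStructureSignature {

    public static Set<String> pathsOf(String file) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(file));
        Set<String> paths = new TreeSet<>();
        collect(doc.getDocumentElement(), "", paths);
        return paths;
    }

    private static void collect(Element e, String prefix, Set<String> paths) {
        String path = prefix + "/" + e.getTagName();
        paths.add(path);
        NodeList children = e.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() == Node.ELEMENT_NODE) {
                collect((Element) child, path, paths);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Set<String> a = pathsOf("a.xml");
        Set<String> b = pathsOf("b.xml");
        Set<String> common = new TreeSet<>(a);
        common.retainAll(b);
        // Jaccard-style overlap of the two structures, ignoring all text content.
        double similarity = (double) common.size() / (a.size() + b.size() - common.size());
        System.out.println("structural similarity = " + similarity);
    }
}
```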

하이브리드 데이터마이닝 메커니즘에 기반한 전문가 지식 추출 (Extraction of Expert Knowledge Based on Hybrid Data Mining Mechanism)

  • 김진성
    • 한국지능시스템학회논문지 / Vol.14 No.6 / pp.764-770 / 2004
  • This paper presents a hybrid data mining mechanism to extract expert knowledge from historical data and extend the reasoning capabilities of expert systems by using a fuzzy neural network (FNN)-based learning and rule extraction algorithm. The hybrid data mining mechanism is based on an association rule extraction mechanism, FNN learning, and a fuzzy rule extraction algorithm. Most traditional data mining mechanisms depend on an association rule extraction algorithm. However, basic association rule-based data mining systems have no learning ability, so there is a problem in extending the knowledge base adaptively. In addition, sequential patterns of association rules cannot represent the complex fuzzy logic of the real world. To resolve these problems, we suggest a hybrid data mining mechanism based on association rule-based data mining, FNN learning, and a fuzzy rule extraction algorithm. The mechanism consists of four phases. First, we use a general association rule mining mechanism to develop an initial rule base. Second, we adopt the FNN learning algorithm to extract the hidden relationships or patterns embedded in the historical data. Third, after the FNN has been trained, the fuzzy rule extraction algorithm is used to extract the implicit knowledge from the FNN. Fourth, we combine the association rules (initial rule base) and the fuzzy rules. Implementation results show that the hybrid data mining mechanism can reflect both association rule-based knowledge extraction and FNN-based knowledge extension.
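
The first phase of the mechanism above is ordinary association rule mining. As a hedged illustration of that step only (the FNN learning and fuzzy rule extraction phases are not sketched), the snippet below computes support and confidence for a single candidate rule over a toy transaction list; the item names and data are invented for the example.

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch of the association-rule phase: support and confidence of X -> Y.
public class RuleMetrics {

    // Fraction of transactions that contain every item in the given itemset.
    static double support(List<Set<String>> transactions, Set<String> items) {
        long hits = transactions.stream().filter(t -> t.containsAll(items)).count();
        return (double) hits / transactions.size();
    }

    public static void main(String[] args) {
        List<Set<String>> transactions = List.of(
                Set.of("defect", "high_temp", "line_b"),
                Set.of("defect", "high_temp"),
                Set.of("ok", "line_b"),
                Set.of("defect", "low_temp"),
                Set.of("ok", "high_temp"));

        Set<String> x = Set.of("high_temp");            // antecedent
        Set<String> xy = Set.of("high_temp", "defect"); // antecedent plus consequent

        double supportXY = support(transactions, xy);
        double confidence = supportXY / support(transactions, x);

        // Rules above chosen minimum support/confidence would enter the initial rule base.
        System.out.printf("support=%.2f confidence=%.2f%n", supportXY, confidence);
    }
}
```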

건조 방법에 따른 계피 Extract의 품질 변화 (Quality Change of Cinnamon Extract Prepared with Various Drying Methods)

  • 김나미;김동희
    • 한국식품영양학회지 / Vol.13 No.2 / pp.152-157 / 2000
  • In order to select the optimum drying method for the production of cinnamon extract, a water extract and a 70% ethanol extract of cinnamon were prepared and then dried by oven drying, vacuum evaporation, spray drying, and freeze drying. The contents of cinnamic acid, cinnamic aldehyde, eugenol, tannin, and free sugars, as well as antioxidant activity, degree of browning, pH, color value, turbidity, and solubility, were compared. In the water extract, the contents of cinnamic acid, cinnamic aldehyde, and eugenol were 29.45 mg/100 g, 94.86 mg/100 g, and 120.75 mg/100 g, and decreased to 4.76%∼44.21%, 5.30%∼48.05%, and 3.66%∼21.83% by oven drying, vacuum drying, and spray drying, respectively, while freeze drying caused only a small decrease in those components. In the 70% ethanol extract, the effective components decreased to 76.05%∼88.38% and 26.86%∼78.76% by freeze drying and vacuum evaporation, respectively. Antioxidant activity decreased with drying, and the rate of decrease was lower in the 70% ethanol extract than in the water extract. The degree of browning increased as the drying temperature increased. Tannin and free sugars were little affected by drying temperature. Solubility decreased with oven drying and in the 70% ethanol extract. Overall, the data suggest that the optimum drying method for cinnamon extract is freeze drying for the water extract, and freeze drying or vacuum drying for the 70% ethanol extract.


MEMS 설계용 2차원 데이터의 중복요소 제거를 통한 3차원 CAD 모델로의 변환 (Data Translation from 2D MEMS Design Data by the Removal of Superposed Entity to the 3D CAD Model)

  • 김용식;김준환
    • 한국CDE학회논문집 / Vol.11 No.6 / pp.447-454 / 2006
  • Although there are many needs for 3D models in the MEMS field, it is not easy to generate 3D models from MEMS CAD, because MEMS CAD is 2D-based and its popular format, the GDSII file format, has its own limits and problems. The differences between the GDSII file format and 3D CAD systems, such as (1) superposed modeling, (2) duplicated entities, and (3) restricted entity types, give rise to several problems in data exchange. These limits and problems of the GDSII file format have prevented 3D CAD systems from generating 3D models from MEMS CAD. To remove these limits and solve the problems, it is important to extract the silhouette of the data in MEMS CAD. The proposed method has two main processes for extracting the silhouette: one extracts a pseudo-silhouette from the original 2D MEMS data, and the other removes useless objects to complete the silhouette. The paper reports on the experience gained in exchanging data between 2D MEMS data and 3D models with the proposed method, and a case study is presented that employs the proposed method using the MEMS CAD IntelliMask and SolidWorks.
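
One of the GDSII problems listed above is duplicated (superposed) entities. As a rough, hedged illustration of that clean-up step only, the sketch below removes exactly duplicated axis-aligned rectangles from a 2D entity list before any silhouette extraction; the rectangle type and the sample layout are assumptions for the example, not the paper's data structures.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: drop exactly duplicated 2D rectangles from a mask layout.
public class RemoveDuplicatedEntities {

    // A simple axis-aligned rectangle; the record provides equals/hashCode for deduplication.
    record Rect(long x1, long y1, long x2, long y2) { }

    static List<Rect> dedup(List<Rect> entities) {
        // A LinkedHashSet keeps the first occurrence and preserves drawing order.
        Set<Rect> unique = new LinkedHashSet<>(entities);
        return List.copyOf(unique);
    }

    public static void main(String[] args) {
        List<Rect> layout = List.of(
                new Rect(0, 0, 10, 5),
                new Rect(0, 0, 10, 5),   // superposed duplicate
                new Rect(3, 2, 8, 9));
        System.out.println(dedup(layout)); // two rectangles remain
    }
}
```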

효모성장 측정을 이용한 인삼추출물의 생물학적 검정 (A Bioassay of Ginseng Extracts Based on Yeast Growth Determination)

  • Jung, Noh-Pal
    • Journal of Ginseng Research / Vol.5 No.1 / pp.24-34 / 1981
  • For a bioassay of various ginseng extracts, a growth determination method using Saccharomyces cerevisiae cultured with various doses of the extracts was studied. The water extract, powder, and ethanol extract were more effective (about 45∼110% increase) than the saponins or their fractions (about 20∼35% increase). The cold methanol residue showed an increasing effect, but it was not significant. Bioassay curves for the water extract, the ethanol extract, the butanol-extracted saponins, and the cold methanol residue were made from the experimental data. From these curves it is possible to find the relation between dose and effectiveness and the optimal doses of various ginseng extracts, and the amount of extract in a sample can be estimated. The ranges of sample amount were 0.01% (100 ppm)∼0.32% (3200 ppm) for the water extract, 0.025% (150 ppm)∼0.1% (1000 ppm) for the ethanol extract, and 0.008% (80 ppm)∼0.016% (160 ppm) for the saponins. It was impossible to determine the range for the cold methanol residue. The acceleration effects on cell proliferation by only 0.0008% (8 ppm) of the diol- and triol-saponins were measurable in the earlier period (24-hour treatment).


감성분석을 위한 병렬적 HDFS와 맵리듀스 함수 (A Parallel HDFS and MapReduce Functions for Emotion Analysis)

  • 백봉현;류윤규
    • 한국정보컨버전스학회논문지 / Vol.7 No.2 / pp.49-57 / 2014
  • Recently, opinion mining has been introduced to extract useful information from large amounts of SNS (Social Network Service) data and to evaluate users' true intentions. Opinion mining requires efficient techniques for collecting and analyzing large amounts of SNS data within a short period and extracting information suitable for the purpose. To extract sentiment information from the various kinds of unstructured data generated on SNS, this paper proposes a parallel HDFS (Hadoop Distributed File System) on a Hadoop system and MapReduce-based sentiment analysis functions. Experimental results confirm that the proposed system and functions process data collection and loading faster than O(n) and maintain stable load balancing of memory and CPU resources.

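The paper above concerns loading collected SNS data into HDFS before MapReduce analysis (a mapper/reducer pair is sketched under the earlier entry by the same author). The snippet below is only a minimal, hypothetical example of writing collected posts into HDFS with the standard Hadoop FileSystem API; the cluster address, file path, and sample posts are made up for the illustration.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: write collected SNS posts into HDFS for later MapReduce jobs.
public class HdfsIngest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000");   // assumed cluster address

        List<String> posts = List.of("love this phone", "worst service ever");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/sns/posts/batch-0001.txt"), true)) {
            for (String post : posts) {
                // One post per line, matching the line-oriented input a mapper expects.
                out.write((post + "\n").getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}
```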

Twitter Crawling System

  • Ganiev, Saydiolim;Nasridinov, Aziz;Byun, Jeong-Yong
    • Journal of Multimedia Information System / Vol.2 No.3 / pp.287-294 / 2015
  • We are living in an epoch of information in which the Internet touches all aspects of our lives. It provides plenty of services, each of which benefits people in different ways; Electronic Mail (E-mail), File Transfer Protocol (FTP), voice/video communication, and search engines are clear examples of Internet services. Among them, Social Network Services (SNS) have continuously gained popularity over the past years. The most popular SNSs, such as Facebook, Weibo, and Twitter, generate millions of data items every minute. Twitter is an SNS that allows its users to post short instant messages; its roughly 100 million users posted 340 million tweets per day (2012) [1]. Such a big amount of data often contains a lot of noisy data, which can be defined as uninteresting and unclassifiable data. However, researchers can take advantage of this huge amount of information to analyze and extract meaningful and interesting features. Collecting SNS data such as tweets is handled by crawlers, and the Twitter crawler has recently emerged as a useful tool for crawling Twitter data. In this project, we develop a Twitter crawling system that enables us to extract Twitter data. We implemented our system in Java along with MySQL, using Twitter4J, a Java library for communicating with the Twitter API. The application first connects to the Twitter API, then retrieves tweets and stores them in the database. We also develop crawling strategies to extract tweets efficiently in terms of time and amount.
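
The abstract names the building blocks (Twitter4J, the Twitter search API, and MySQL) but not the code itself, so the following is only a minimal sketch of that connect-retrieve-store loop; the credentials file, search keyword, and table schema are placeholders, and the original system's crawling strategies are not reproduced.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import twitter4j.Query;
import twitter4j.QueryResult;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;

// Illustrative sketch: fetch tweets with Twitter4J and store them in a MySQL table.
public class SimpleTwitterCrawler {
    public static void main(String[] args) throws Exception {
        // Credentials are read from twitter4j.properties on the classpath.
        Twitter twitter = TwitterFactory.getSingleton();

        Query query = new Query("data mining");  // placeholder search keyword
        query.setCount(100);
        QueryResult result = twitter.search(query);

        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/crawler", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                "INSERT INTO tweets (id, screen_name, created_at, text) VALUES (?, ?, ?, ?)")) {
            for (Status status : result.getTweets()) {
                ps.setLong(1, status.getId());
                ps.setString(2, status.getUser().getScreenName());
                ps.setTimestamp(3, new Timestamp(status.getCreatedAt().getTime()));
                ps.setString(4, status.getText());
                ps.executeUpdate();
            }
        }
    }
}
```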