• Title/Abstract/Keywords: Large Data Set

1,061 search results

대용량 RDF 데이터의 처리 성능 개선을 위한 효율적인 저장구조 설계 및 구현 (A Design and Implementation of Efficient Storage Structure for a Large RDF Data Processing)

  • 문현정;성정환;김영지;우용태
    • 한국전자거래학회지 / Vol. 12, No. 3 / pp.251-268 / 2007
  • In this paper, we propose a new storage structure that separates relation information from data information for the efficient storage of large RDF data. Compared with existing storage schemes, the proposed scheme minimizes data duplication and can store large volumes of RDF data efficiently. In addition, using the proposed storage scheme, query performance over RDF data can be improved by retrieving the required data separately from the triple-form relation-information relation and the data-information relation and then joining the results. The results of this study can serve as a base technology for improving query performance through the efficient management of large RDF data in application areas such as e-commerce, the Semantic Web, and knowledge management.

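As a rough illustration of the proposed separation, the sketch below (Python with SQLite; the two-table schema and all identifiers are illustrative assumptions, not the paper's implementation) stores each RDF term once in a data-information relation and keeps triples as integer keys in a relation-information relation, answering queries by joining the two:

```python
# A minimal sketch: terms stored once, triples as integer keys.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE term (id INTEGER PRIMARY KEY, value TEXT UNIQUE);
    CREATE TABLE triple (s INTEGER, p INTEGER, o INTEGER);  -- keys into term
""")

def term_id(value: str) -> int:
    # Each distinct term is stored exactly once, minimizing duplication.
    con.execute("INSERT OR IGNORE INTO term(value) VALUES (?)", (value,))
    return con.execute("SELECT id FROM term WHERE value = ?",
                       (value,)).fetchone()[0]

def add_triple(s: str, p: str, o: str) -> None:
    con.execute("INSERT INTO triple VALUES (?, ?, ?)",
                (term_id(s), term_id(p), term_id(o)))

add_triple("ex:book1", "dc:creator", "ex:alice")
add_triple("ex:book2", "dc:creator", "ex:alice")  # 'ex:alice' stored once

# Query: join the relation-information and data-information relations.
rows = con.execute("""
    SELECT ts.value, tp.value, to_.value
    FROM triple
    JOIN term ts ON ts.id = triple.s
    JOIN term tp ON tp.id = triple.p
    JOIN term to_ ON to_.id = triple.o
""").fetchall()
print(rows)
```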

Parallel Multithreaded Processing for Data Set Summarization on Multicore CPUs

  • Ordonez, Carlos;Navas, Mario;Garcia-Alvarado, Carlos
    • Journal of Computing Science and Engineering / Vol. 5, No. 2 / pp.111-120 / 2011
  • Data mining algorithms should exploit new hardware technologies to accelerate computations. Such a goal is difficult to achieve in a database management system (DBMS) due to its complex internal subsystems and because data mining numeric computations on large data sets are difficult to optimize. This paper explores taking advantage of the existing multithreaded capabilities of multicore CPUs, as well as caching in RAM, to efficiently compute summaries of a large data set, a fundamental data mining problem. We introduce parallel algorithms working on multiple threads, which overcome the row aggregation processing bottleneck of accessing secondary storage while maintaining linear time complexity with respect to data set size. Our proposal is based on a combination of table scans and parallel multithreaded processing among multiple cores in the CPU. We introduce several database-style and hardware-level optimizations: caching row blocks of the input table, managing available RAM, interleaving I/O and CPU processing, and tuning the number of working threads. We experimentally benchmark our algorithms with large data sets on a DBMS running on a computer with a multicore CPU. We show that our algorithms outperform existing DBMS mechanisms in computing aggregations of multidimensional data summaries, especially as dimensionality grows. Furthermore, we show that local memory allocation (RAM block size) does not have a significant impact when the thread management algorithm distributes the workload among a fixed number of threads. Our proposal is unique in the sense that we do not modify or require access to the DBMS source code; instead, we extend the DBMS with analytic functionality by developing User-Defined Functions.
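
In this line of work the summaries are typically the sufficient statistics n (count), L (linear sum), and Q (quadratic sum). The sketch below is a minimal Python illustration, not the authors' UDF-based implementation, and all names are assumptions: the table is split into cached row blocks, each block is aggregated on its own thread, and the partial results are merged.

```python
# Multithreaded one-pass summarization into n, L, Q partial aggregates.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def summarize_block(block):
    # Per-thread partial aggregation of one cached row block.
    # NumPy releases the GIL during array math, so threads can overlap.
    n = block.shape[0]
    L = block.sum(axis=0)            # linear sum per dimension
    Q = (block * block).sum(axis=0)  # quadratic sum per dimension
    return n, L, Q

def summarize(X, num_threads=4):
    blocks = np.array_split(X, num_threads)  # one row block per worker
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        parts = list(pool.map(summarize_block, blocks))
    n = sum(p[0] for p in parts)
    L = sum(p[1] for p in parts)
    Q = sum(p[2] for p in parts)
    return n, L, Q  # enough to derive mean and variance in one pass

X = np.random.rand(1_000_000, 8)
n, L, Q = summarize(X)
mean = L / n
variance = Q / n - mean ** 2
```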

How to improve oil consumption forecast using google trends from online big data?: the structured regularization methods for large vector autoregressive model

  • Choi, Ji-Eun;Shin, Dong Wan
    • Communications for Statistical Applications and Methods / Vol. 29, No. 1 / pp.41-51 / 2022
  • We forecast the US oil consumption level by taking advantage of Google Trends, the search volumes of specific terms that people search for on Google. We focus on whether proper selection of Google Trends terms leads to an improvement in forecast performance for oil consumption. As forecast models, we consider the least absolute shrinkage and selection operator (LASSO) regression and the structured regularization method for the large vector autoregressive (VAR-L) model of Nicholson et al. (2017), which automatically select the Google Trends terms and the lags of the predictors. An out-of-sample forecast comparison reveals that reducing the high-dimensional Google Trends data set to a low-dimensional one via the LASSO and VAR-L models produces better forecast performance for oil consumption than frequently used forecast models such as the autoregressive model, the autoregressive distributed lag model, and the vector error correction model.
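
As a rough sketch of the variable selection step, the following (synthetic data and illustrative shapes, not the authors' code; LassoCV from scikit-learn) shows how an L1 penalty zeroes out uninformative Google Trends predictors, leaving a low-dimensional forecasting model:

```python
# LASSO selection of a few informative predictors from a high-dimensional set.
import numpy as np
from sklearn.linear_model import LassoCV

# X: lagged predictors (own lags of oil consumption + Google Trends terms),
# y: next-period oil consumption. Data here are synthetic.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))   # 200 months, 50 candidate predictors
y = X[:, 0] * 0.8 + rng.standard_normal(200) * 0.1

model = LassoCV(cv=5).fit(X, y)         # L1 penalty drives weak terms to zero
selected = np.flatnonzero(model.coef_)  # indices of surviving predictors
print(f"{len(selected)} of {X.shape[1]} predictors kept:", selected)
```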

객체지향 데이타베이스를 이용한 주식데이타 관리에 관한 연구 (A Study on the Management of Stock Data with an Object Oriented Database Management System)

  • 허순영;김형민
    • 한국경영과학회지 / Vol. 21, No. 3 / pp.197-214 / 1996
  • Financial analysis of stock data usually involves extensive computation over large time series data sets. To handle the large size of the data sets and the complexity of the analyses, database management systems have been increasingly adopted for the efficient management of stock data. In particular, relational database management systems are employed widely due to their simple data management approach. However, the normalized two-dimensional tables and the structured query language of the relational system turn out to be less effective than expected in accommodating time series stock data as well as the various computational operations. This paper explores a new approach to stock data management on the basis of an object-oriented database management system (ODBMS), and proposes a data model supporting time series data storage and incorporating a set of financial analysis functions. In terms of functional stock data analysis, it focuses in particular on a primitive set of operations such as the variance of stock data. In accomplishing this, we first point out the problems of a relational approach to the management of stock data and show the strengths of the ODBMS. We then propose an object model delineating the structural relationships among objects used in stock data management and the behavioral operations involved in the financial analysis. A prototype system is developed using a commercial ODBMS.

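A minimal sketch of the object-oriented idea, with hypothetical class and method names (the paper's actual model targets a commercial ODBMS): the time series and its financial operations, such as variance, live together in one object rather than being reassembled from normalized tables.

```python
# A stock object encapsulating its own time series and analysis operations.
from dataclasses import dataclass, field
from statistics import mean, pvariance

@dataclass
class Stock:
    ticker: str
    prices: list[float] = field(default_factory=list)  # time-ordered closes

    def append(self, price: float) -> None:
        self.prices.append(price)

    def variance(self) -> float:
        # One of the primitive analysis operations bundled with the data.
        return pvariance(self.prices)

    def moving_average(self, window: int) -> list[float]:
        return [mean(self.prices[i - window:i])
                for i in range(window, len(self.prices) + 1)]

s = Stock("ACME")
for p in [10.0, 10.5, 10.2, 10.8, 11.1]:
    s.append(p)
print(s.variance(), s.moving_average(3))
```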

태평양 수역 우리나라 다랑어선망어업의 조업 특성 및 해양환경에 따른 어장 변동 (Changes in fishing characteristics and distributions of Korean tuna purse seine fishery by oceanographic conditions in the Pacific Ocean)

  • 이미경;이성일;이춘우;김장근;구정은
    • 수산해양기술연구 / Vol. 52, No. 2 / pp.149-161 / 2016
  • Fishing characteristics of the Korean tuna purse seine fishery in the Pacific Ocean were investigated using logbook data compiled by captains onboard and statistical data from 1980 to 2014. Changes in the fishing ground and the correlation between marine environmental factors and fishing patterns were investigated using oceanographic indices. The proportion of unassociated sets was higher than that of associated sets. The catch proportion of yellowfin was higher in unassociated sets, while that of skipjack and bigeye was higher in associated sets. Owing to the vessels, fishing gears, and Korean captains' high level of fishing skill optimized for the unassociated set, together with a preference for large fishes, especially large yellowfin tuna, the fishery showed unique characteristics focusing on the unassociated set. As for the fishing distribution of the Korean tuna purse seine fishery and the impacts of oceanographic conditions on it, the main fishing ground was concentrated in the area of 5°N~10°S, 140°E~180° through the decades. When a stronger El Niño occurred, the range of the fishing ground tended to expand, and the main fishing ground moved to the eastern part of the western and central Pacific Ocean. During such periods, yellowfin tuna had a high CPUE, and the catch proportion of yellowfin tuna in the eastern part also increased. As for the proportion of fishing effort by set type, the proportion of log-associated sets was high during El Niño periods, while that of FAD-associated sets was high during La Niña periods.

Predictive Analysis of Financial Fraud Detection using Azure and Spark ML

  • Priyanka Purushu;Niklas Melcher;Bhagyashree Bhagwat;Jongwook Woo
    • Asia Pacific Journal of Information Systems / Vol. 28, No. 4 / pp.308-319 / 2018
  • This paper aims at providing valuable insights into financial fraud detection for mobile money transactional activity. We predicted and classified transactions as normal or fraudulent with a small sample and a massive data set using Azure ML and Spark ML, which represent a traditional system and Big Data, respectively. Experimenting with the sample dataset in Azure, we found the Decision Forest model to be the most accurate in terms of the recall value. For the massive data set using Spark ML, the Random Forest classifier proved to be the best algorithm. We show that the Spark cluster builds and evaluates models much faster as more servers are added, at the same accuracy, which demonstrates that large-scale data sets can be handled predictively on a Big Data platform. Finally, we reached a recall score of 0.73, which implies satisfying prediction quality in detecting fraudulent transactions.
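
The Spark ML side might look like the following minimal sketch; the column names (amount, oldbalanceOrg, newbalanceOrig, isFraud) follow a typical mobile money data set and are assumptions, not taken from the paper:

```python
# Random Forest fraud classification with Spark ML, evaluated by recall.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("fraud-detection").getOrCreate()
df = spark.read.csv("transactions.csv", header=True, inferSchema=True)

# Assemble numeric columns into the single feature vector Spark ML expects.
features = ["amount", "oldbalanceOrg", "newbalanceOrig"]  # illustrative
df = VectorAssembler(inputCols=features, outputCol="features").transform(df)

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = RandomForestClassifier(labelCol="isFraud",
                               featuresCol="features").fit(train)

# Recall on the fraud class (label 1) of the held-out set.
recall = MulticlassClassificationEvaluator(
    labelCol="isFraud", metricName="recallByLabel",
    metricLabel=1.0).evaluate(model.transform(test))
print(f"fraud recall: {recall:.2f}")
```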

Debiasing Technique for Numerical Weather Prediction using Artificial Neural Network

  • Kang, Boo-Sik;Ko, Ick-Hwan
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2006년도 학술발표회 논문집 / pp.51-56 / 2006
  • Biases embedded in numerical weather precipitation forecasts by the RDAPS model were determined, quantified, and corrected. The ultimate objective is to enhance the reliability of reservoir operation by the Korea Water Resources Corporation (KOWACO), which is based on precipitation-driven forecasts of stream flow. Statistical post-processing, so-called MOS (Model Output Statistics), was applied to RDAPS to improve its performance. An Artificial Neural Network (ANN) model was applied to four cases: 'Probability of Precipitation (PoP) for wet and dry seasons' and 'Quantitative Precipitation Forecasts (QPF) for wet and dry seasons'. The reduction of the large systematic bias was especially remarkable. The performance of both networks may be improved by retraining, probably every month. In addition, it is expected that the performance of the networks will improve once atmospheric profile data are incorporated in the analysis. The key to the optimal performance of an ANN is to have a large data set relevant to the predictand variable. The more complex the process to be modeled by the ANN, the larger the data set needs to be.

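As an illustration of MOS-style bias correction with an ANN, the sketch below trains a small network to map raw quantitative precipitation forecasts to observations; the data are synthetic and the network shape is an assumption, not the paper's configuration.

```python
# MOS-style debiasing: an ANN maps raw model QPF to observed precipitation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
raw_qpf = rng.gamma(2.0, 5.0, size=500)            # raw model forecast (mm)
obs = 0.7 * raw_qpf + 2.0 + rng.normal(0, 1, 500)  # synthetic biased "truth"

X = raw_qpf.reshape(-1, 1)   # predictors; a real MOS adds more variables
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X[:400], obs[:400])

corrected = net.predict(X[400:])
print("raw bias:      ", float(np.mean(raw_qpf[400:] - obs[400:])))
print("corrected bias:", float(np.mean(corrected - obs[400:])))
```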

실내 3D 게임 장면의 잠재적 가시 집합을 위한 효과적인 하드웨어 압축 구조 (An Effective Structure of Hardware Compression for Potentially Visible Set of Indoor 3D Game Scenes)

  • 김영식
    • 한국게임학회 논문지 / Vol. 14, No. 6 / pp.29-38 / 2014
  • In large indoor 3D game scenes, the potentially visible set (PVS), which precomputes occlusion culling information, requires a considerable amount of data to be processed and stored, yet much of that data consists of zeros. This paper designs an effective hardware compression structure that compresses PVS data with zero run length encoding (ZRLE) during the construction of the 3D game scene tree in a mobile environment. Through 3D game simulations, we analyze the PVS data compression ratio of the proposed structure and the rendering speed (frames per second, FPS) under PVS culling and frustum culling.
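
A minimal sketch of ZRLE itself, assuming a byte-oriented PVS table (the paper targets a hardware structure; this software version only illustrates the encoding): runs of zero bytes collapse to (0, run length) pairs while nonzero bytes pass through, which suits PVS data that is mostly zeros.

```python
# Zero run length encoding / decoding over a mostly-zero PVS byte table.
def zrle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            run = 1
            while i + run < len(data) and data[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0, run])   # zero marker + run length
            i += run
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def zrle_decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            out += bytes(data[i + 1])  # expand the run of zero bytes
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

pvs = bytes([0] * 40 + [3] + [0] * 20 + [7])
packed = zrle_encode(pvs)
assert zrle_decode(packed) == pvs
print(len(pvs), "->", len(packed), "bytes")
```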

이동 컴퓨팅 환경에서 데이타 방송을 위한 동시성 제어 기법 (A Concurrency Control Method for Data Broadcasting in Mobile Computing Environment)

  • 윤혜숙;김영국
    • 한국정보과학회논문지:데이타베이스 / Vol. 31, No. 2 / pp.140-149 / 2004
  • In mobile environments with numerous mobile clients, data broadcasting has attracted attention as a highly effective data delivery method. In this scheme, the database server periodically disseminates data over a wireless channel, and clients run read-only transactions that selectively access the data they need. Meanwhile, since the server also performs database updates concurrently with the data broadcast, a concurrency control problem must be solved before clients can access consistent data. This study proposes SCDSC (Serialization Checking with DirtySet on Commit), an algorithm that solves this concurrency control problem efficiently. SCDSC is a kind of optimistic concurrency control scheme in which a mobile client, when committing a read transaction that accesses multiple data items, checks the DirtySet broadcast along with the ordinary data to maintain consistency. The DirtySet is the set of data items changed during a fixed number of broadcast cycles; at every broadcast cycle the server updates and disseminates it in a sliding-window fashion. We also analyze the performance of the proposed algorithm in terms of data consistency and data currency, and examine it through simulation.
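
A minimal sketch of the commit-time check and the sliding-window DirtySet, with hypothetical names (the paper's protocol also covers broadcast scheduling, which is omitted here):

```python
# Optimistic validation against a broadcast DirtySet, SCDSC-style.
from collections import deque

def can_commit(read_set: set[str], dirty_set: set[str]) -> bool:
    # Any overlap means a value the client read may have been updated.
    return read_set.isdisjoint(dirty_set)

class DirtySetWindow:
    """Server side: a sliding window of per-cycle update sets."""
    def __init__(self, cycles: int):
        self.window: deque = deque(maxlen=cycles)

    def end_cycle(self, updated: set[str]) -> set[str]:
        self.window.append(updated)       # the oldest cycle slides out
        return set().union(*self.window)  # DirtySet broadcast to clients

server = DirtySetWindow(cycles=3)
server.end_cycle({"x"})
server.end_cycle({"y"})
dirty = server.end_cycle({"z"})          # {"x", "y", "z"}
print(can_commit({"a", "b"}, dirty))     # True  -> commit
print(can_commit({"x", "b"}, dirty))     # False -> abort and restart
```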

벡터 양자화에서 시간 평균 왜곡치의 수렴 특성 I. 대수 법칙에 근거한 이론 (The Convergence Characteristics of The Time- Averaged Distortion in Vector Quantization: Part I. Theory Based on The Law of Large Numbers)

  • 김동식
    • 전자공학회논문지B / Vol. 33B, No. 7 / pp.107-115 / 1996
  • The average distortion of a vector quantizer is calculated using the probability function F of the input source for a given codebook. However, since the input source is unknown in general, a time-average operation over sample vectors realized from a random vector with probability function F is employed to obtain an approximation of the average distortion. In this case, the sample set should be large so that the sample vectors represent the true F reliably. The theoretical inspection of this approximation, however, has not been performed rigorously, so one might use the time-averaged distortion without any verification of the approximation. In this paper, the convergence characteristics of the time-averaged distortion are theoretically investigated as the number of sample vectors or the size of the codebook grows large. It is revealed that if the codebook size is large enough, then a small sample set suffices to approximate the average distortion by the calculated time-averaged distortion. Experimental results on synthetic data, which support the analysis, are also provided and discussed.

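The quantity under study can be stated concretely: for samples x_1, ..., x_n and codebook C, the time-averaged distortion is (1/n) Σ_i min_{c in C} ||x_i - c||². A minimal sketch, using synthetic Gaussian data (not the paper's experiments), shows how the estimate behaves as the sample set grows:

```python
# Time-averaged distortion of a vector quantizer over a sample set.
import numpy as np

def time_averaged_distortion(samples: np.ndarray,
                             codebook: np.ndarray) -> float:
    # Squared distance from every sample to every codeword; keep the nearest.
    d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).mean())

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 4))   # large codebook, dimension 4
for n in (100, 1_000, 10_000):             # growing sample sets
    samples = rng.standard_normal((n, 4))
    print(n, time_averaged_distortion(samples, codebook))
```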