• Title/Summary/Keyword: Time-series matching


Fast Index Construction in Distortion-Free Time-Series Subsequence Matching (왜곡 제거 시계열 서브시퀀스 매칭에서 빠른 인덱스 구성법)

  • Gil, Myeong-Seon;Kim, Bum-Soo;Moon, Yang-Sae;Kim, Jin-Ho
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06a
    • /
    • pp.73-76
    • /
    • 2011
  • In this paper, we analyze the index construction algorithm of existing single-index-based distortion-free time-series subsequence matching and propose a more efficient index construction algorithm. For large time-series data, the existing single-index construction algorithm takes a very long time to build the index because of the many windows for which distortion removal must be considered. Taking the index construction algorithm of existing linear-detrending subsequence matching as an example, we systematically analyze each step of index construction and propose ways to reduce the number of operations required at each step. To this end, we present the concept of the DF-bucket, which performs the duplicated operations arising in the lower-dimensional transformation step once in advance, stores the results in an array, and reuses them. Experimental results show that the approach that improves index construction efficiency according to this store-and-reuse principle reduces index construction time by 32% to 55% on average compared with the approach that does not.
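The abstract does not spell out the DF-bucket structure, but its store-once, reuse-many principle can be sketched. Below, a sliding-window PAA-style lower-dimensional transform (an assumed stand-in for the paper's actual transform) is computed once naively and once with cached prefix sums, so each segment sum becomes a constant-time lookup instead of a fresh summation:

```python
import numpy as np

def paa_all_windows_naive(series, w, f):
    # Naive: recompute each segment mean for every sliding window.
    seg = w // f
    return np.array([[series[s + i * seg : s + (i + 1) * seg].mean()
                      for i in range(f)]
                     for s in range(len(series) - w + 1)])

def paa_all_windows_cached(series, w, f):
    # Store-and-reuse: one pass of prefix sums, after which every
    # segment mean is a constant-time lookup instead of a summation.
    seg = w // f
    prefix = np.concatenate(([0.0], np.cumsum(series)))
    starts = np.arange(len(series) - w + 1)[:, None] + np.arange(f) * seg
    return (prefix[starts + seg] - prefix[starts]) / seg
```

Both functions return the same features; only the cached variant avoids the duplicated per-window arithmetic.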

Optimal Construction of Multiple Indexes for Time-Series Subsequence Matching (시계열 서브시퀀스 매칭을 위한 최적의 다중 인덱스 구성 방안)

  • Lim Seung-Hwan;Park Hee-Jin;Kim Sang-Wook
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.193-195
    • /
    • 2005
  • Subsequence matching is the operation that retrieves, from a time-series database, subsequences whose trends of change are similar to a given query sequence. In this paper, we quantitatively observe the serious performance degradation of subsequence matching caused by the window size effect, and show that using only a single index built for one window size cannot provide satisfactory performance in real applications. We also show that, because of this problem, index interpolation, which performs subsequence matching with multiple indexes built on various window sizes, needs to be applied. To perform subsequence matching with index interpolation, the window sizes for the multiple indexes must first be determined. In this work, we solve the problem of selecting the optimal set of window sizes by adopting a physical database design approach. For this, given a set of query sequences to be issued against the time-series database and a set of window sizes on which the indexes are built, we derive a formula that estimates the cost of performing all the subsequence matchings. Using this cost formula, we propose an algorithm that determines the optimal window sizes maximizing the performance of the entire set of subsequence matchings, and we formally prove the optimality and efficiency of the algorithm. Finally, performance evaluation through experiments shows the superiority of the proposed method.
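The paper's cost formula is not reproduced in the abstract; as an illustration of the physical-design idea, the sketch below uses a hypothetical per-query cost model and exhaustively picks the k window sizes that minimize the total estimated cost over an expected query workload:

```python
from itertools import combinations

def matching_cost(query_len, window):
    # Hypothetical per-query cost model: smaller windows relative to
    # the query mean more index probes and more false alarms.
    return query_len - window + 1

def best_window_sizes(query_lens, candidate_windows, k):
    # Exhaustive physical-design search: choose the k window sizes
    # minimizing total estimated cost over the query workload.
    feasible = [w for w in candidate_windows if w <= min(query_lens)]
    best = None
    for subset in combinations(sorted(feasible), k):
        total = sum(min(matching_cost(q, w) for w in subset)
                    for q in query_lens)
        if best is None or total < best[0]:
            best = (total, subset)
    return best[1]
```

The paper derives its cost formula analytically and proves optimality; this brute-force search only illustrates the selection problem being solved.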


4D full-field measurements over the entire loading history: Evaluation of different temporal interpolations

  • Ana Vrgoc;Viktor Kosin;Clement Jailin;Benjamin Smaniotto;Zvonimir Tomicevic;Francois Hild
    • Coupled systems mechanics
    • /
    • v.12 no.6
    • /
    • pp.503-517
    • /
    • 2023
  • Standard Digital Volume Correlation (DVC) approaches are based on pattern matching between two reconstructed volumes acquired at different stages. Such frameworks are limited by the number of scans (due to acquisition duration), and time-dependent phenomena generally cannot be captured. Projection-based Digital Volume Correlation (P-DVC) measures displacement fields from series of 2D radiographs acquired at different angles and loadings, thus resulting in richer temporal sampling (compared to standard DVC). The sought displacement field is decomposed over a basis of separated variables, namely, temporal and spatial modes. This study utilizes an alternative route in which spatial modes are constructed via scan-wise DVC, and thus only the temporal amplitudes are sought via P-DVC. This method is applied to a glass fiber mat reinforced polymer specimen containing a machined notch, subjected to in-situ cyclic tension, and imaged via X-ray Computed Tomography. Different temporal interpolations are exploited. It is shown that utilizing only one DVC displacement field (as spatial mode) is sufficient to properly capture the complex kinematics up to specimen failure.
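As a minimal sketch of the separated-variables idea (names and shapes are assumptions, not the authors' code), fixing a single spatial mode reduces each time step to a one-dimensional least-squares fit of its temporal amplitude:

```python
import numpy as np

def temporal_amplitudes(spatial_mode, measured_fields):
    # Separated-variables decomposition u(x, t) ≈ phi(x) * a(t):
    # with the spatial mode phi fixed (e.g., from one scan-wise DVC
    # result), each temporal amplitude a(t) is a 1D least-squares
    # projection of the measured field onto phi.
    phi = spatial_mode.ravel()
    denom = phi @ phi
    return np.array([phi @ u.ravel() / denom for u in measured_fields])
```

In the actual P-DVC framework the amplitudes are sought from radiograph residuals rather than full fields; this only illustrates the one-mode decomposition.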

Multiple Camera Based Imaging System with Wide-view and High Resolution and Real-time Image Registration Algorithm (다중 카메라 기반 대영역 고해상도 영상획득 시스템과 실시간 영상 정합 알고리즘)

  • Lee, Seung-Hyun;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.4
    • /
    • pp.10-16
    • /
    • 2012
  • For high-speed visual inspection in the semiconductor industry, it is essential to acquire two-dimensional images of regions of interest with a large field of view (FOV) and a high resolution simultaneously. In this paper, a new imaging system is proposed to achieve high image quality in terms of both precision and FOV; it is composed of a single lens, a beam splitter, two camera sensors, and a stereo image-grabbing board. For the object images acquired simultaneously from the two camera sensors, Zhang's camera calibration method is first applied to calibrate each camera. Second, to find a mathematical mapping function between the two images acquired from the different camera views, the matching matrix from multi-view camera geometry is calculated based on their image homography. Through the image homography, the two images are finally registered to secure a large inspection FOV. An inspection system using multiple images from multiple cameras needs a very fast processing unit for real-time image matching; for this purpose, parallel processing hardware and software, such as the Compute Unified Device Architecture (CUDA), are utilized. As a result, a matched image can be obtained from the two separate images in real time. Finally, the acquired homography is evaluated in terms of accuracy through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
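The registration step can be illustrated independently of the CUDA pipeline: once the homography between the two camera views is estimated, it defines a direct pixel-coordinate mapping. A minimal sketch (hypothetical names; the actual system code is not given in the abstract):

```python
import numpy as np

def apply_homography(H, points):
    # Map pixel coordinates from one camera view into the other via a
    # 3x3 homography H (as estimated during system calibration).
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # back to inhomogeneous
```

Warping every pixel of one image through such a mapping (and blending with the other image) is what produces the registered wide-FOV result.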

Chaotic Disaggregation of Daily Rainfall Time Series (카오스를 이용한 일 강우자료의 시간적 분해)

  • Kyoung, Min-Soo;Sivakumar, Bellie;Kim, Hung-Soo;Kim, Byung-Sik
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.9
    • /
    • pp.959-967
    • /
    • 2008
  • Disaggregation techniques are widely used to transform observed daily rainfall values into hourly ones, which serve as important inputs for flood forecasting purposes. However, an important limitation of most existing disaggregation techniques is that they treat the rainfall process as a realization of a stochastic process, raising questions about the lack of connection between the structure of the models on one hand and the underlying physics of the rainfall process on the other. The present study introduces a nonlinear deterministic (and specifically chaotic) framework to study the dynamic characteristics of rainfall distributions across different temporal scales (i.e., weights between scales), and thus the possibility of rainfall disaggregation. Rainfall data from the Seoul station (recorded by the Korea Meteorological Administration) are considered for the present investigation, and weights between only successively doubled resolutions (i.e., 24-hr to 12-hr, 12-hr to 6-hr, 6-hr to 3-hr) are analyzed. The correlation dimension method is employed to investigate the presence of chaotic behavior in the time series of weights, and a local approximation technique is employed for rainfall disaggregation. The results indicate the presence of chaotic behavior in the dynamics of weights between the successively doubled scales studied. The modeled (disaggregated) rainfall values are found to be in good agreement with the observed ones in their overall matching (e.g., high correlation coefficient and low mean square error). While the general trend (rainfall amount and time of occurrence) is clearly captured, an underestimation of the maximum values is found.
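A minimal sketch of the weight series between successively doubled resolutions, as read from the abstract (the chaotic local-approximation model that predicts these weights is not reproduced here): each coarse value is split among its children, and the weights are the fractions each child receives.

```python
import numpy as np

def scale_weights(fine_series, ratio=2):
    # Weights between successively doubled resolutions: each coarse
    # value splits among `ratio` children; the weight series is the
    # fraction each child receives (children of dry periods get 0).
    fine = np.asarray(fine_series, dtype=float).reshape(-1, ratio)
    coarse = fine.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        w = np.where(coarse > 0, fine / coarse, 0.0)
    return w.ravel()

def disaggregate(coarse_series, weights, ratio=2):
    # Reconstruct the fine-resolution series from coarse totals
    # and (observed or modeled) weights.
    return np.repeat(np.asarray(coarse_series, dtype=float), ratio) * weights
```

In the study, it is the dynamics of such weight series that is analyzed for chaos and then modeled to predict the weights used in disaggregation.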

Evaluation of Applicability of Impulse function-based Algorithm for Modification of Ground Motion to Match Target Response Spectrum (Impulse 함수 기반 목표응답스펙트럼 맞춤형 지진파 보정 알고리즘의 적용성 평가)

  • Kim, Hyun-Kwan;Park, Duhee
    • Journal of the Korean GEO-environmental Society
    • /
    • v.12 no.4
    • /
    • pp.53-63
    • /
    • 2011
  • Selection or generation of appropriate input ground motions is very important in performing a dynamic analysis. In Korea, it is common practice to use recorded strong ground motions or artificial motions. Recorded motions show non-stationary characteristics, a distinct property of all earthquake motions, but have the problem of not matching the design response spectrum. Artificial motions match the design spectrum but show stationary characteristics. This study generated ground motions that preserve the non-stationary characteristics of a real earthquake motion but also match the design spectrum. In the process, an impulse function-based algorithm that adjusts a given time series in the time domain so that it matches the target response spectrum is used. Application of the algorithm showed that it can successfully adjust any recorded motion to match the target spectrum while preserving the non-stationary characteristics. The modified motions are used to perform a series of nonlinear site response analyses. It is shown that the adjusted motions result in more reliable estimates of ground vibration. It is thus recommended that the newly adjusted motions be used in practice instead of the original recorded motions.

Modeling and Estimation of Cardiac Conduction System using Hidden Markov Model (HMM을 이용한 심장 전도 시스템의 모델화와 추정)

  • Halm, Zee-Hun;Park, Kwang-Suk
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1997 no.11
    • /
    • pp.222-227
    • /
    • 1997
  • To diagnose cardiac arrhythmia caused by the reentry mechanism, the cardiac conduction system was modeled with a modified Hidden Markov Model and evaluated. First, transient conduction states and output waves were simulated with initially assumed parameter values for cardiac muscle repolarization time, conduction velocity, and automaticity. The output was a series of onset times and wave names. The parameters determined the beating rate, the lengths of wave intervals, the rate of abnormal beats, and the like. Several parameter sets were found to simulate normal sinus rhythm, supraventricular/ventricular tachycardia, atrial/ventricular extrasystole, etc. Then, using the estimation theorems of the Hidden Markov Model, the best conduction path was estimated given the observed output. With this modified estimation method, close matching between the simulated conduction path and the estimated one was confirmed.
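The paper's modified estimation method is not detailed in the abstract; the standard Viterbi recursion it builds on, which recovers the most likely hidden-state (conduction) path from an observed output sequence, can be sketched as follows (parameter values hypothetical):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    # Most likely hidden-state path given an observed output sequence.
    # pi: initial state probabilities, A: transition matrix,
    # B: emission matrix, obs: list of observation indices.
    n, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)  # scores[i, j]: end in i, go i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 1 - 1, -1):
        if t > 0:
            path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Given the simulated wave sequence as observations, such a decoder returns the conduction path that best explains it, which is then compared with the simulated path.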


Development Of Qualitative Traffic Condition Decision Algorithm On Urban Streets (도시부도로 정성적 소통상황 판단 알고리즘 개발)

  • Cho, Jun-Han;Kim, Jin-Soo;Kim, Seong-Ho;Kang, Weon-Eui
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.10 no.6
    • /
    • pp.40-52
    • /
    • 2011
  • This paper develops a traffic condition decision algorithm to improve the reliability of traffic information on urban streets. The research reestablishes the criteria for qualitative traffic condition categorization and proposes new qualitative traffic condition decision types and decision measures. The developed algorithm classifies qualitative traffic conditions into nine types in consideration of the historical time series of speed changes and traffic patterns. The performance of the algorithm is verified through individual matching analysis using radar detector data from Ansan city. The results of this paper are expected to help promote traffic information processing systems, real-time traffic flow monitoring and management, the use of historical traffic information, etc.

Spatial Locality Preservation Metric for Constructing Histogram Sequences (히스토그램 시퀀스 구성을 위한 공간 지역성 보존 척도)

  • Lee, Jeonggon;Kim, Bum-Soo;Moon, Yang-Sae;Choi, Mi-Jung
    • Journal of Information Technology and Architecture
    • /
    • v.10 no.1
    • /
    • pp.79-91
    • /
    • 2013
  • This paper proposes a systematic methodology for deciding which space-filling curve (SFC) shows the best performance when lower-dimensional transformations are applied to histogram sequences. A histogram sequence represents a time-series converted from an image by a given SFC. Due to their high-dimensional nature, histogram sequences are very difficult to store and search in their original form. To solve this problem, we generally use lower-dimensional transformations, which produce lower bounds among high-dimensional sequences, but the tightness of those lower bounds is highly affected by the type of SFC. In this paper, we attack the challenging problem of evaluating which SFC shows better performance when we apply a lower-dimensional transformation to histogram sequences. For this, we first present the concept of spatial locality, which comes from the intuition that "if the entries are adjacent in a histogram sequence, their corresponding cells should also be adjacent in the original image." We also propose the spatial locality preservation metric (slpm in short), which quantitatively evaluates spatial locality, and present its formal computation method. We then evaluate five SFCs from the perspective of slpm and verify that this evaluation result concurs with the performance evaluation of lower-dimensional transformations in real image matching. Finally, we perform k-NN (k-nearest neighbors) search based on lower-dimensional transformations and validate the accuracy of the proposed slpm by showing that the Hilbert order with the highest slpm also shows the best performance in k-NN search.
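The formal slpm computation is not given in the abstract; a simple distance-based proxy for the same intuition (entries adjacent in the sequence should map to adjacent cells) can be sketched by averaging the grid distance between consecutively visited cells, here comparing a row-major scan with a snake-order scan (both stand-ins for the five SFCs evaluated in the paper):

```python
import numpy as np

def mean_step_distance(order):
    # order: sequence of (row, col) cells in SFC visiting order.
    # Proxy for spatial locality: average Euclidean distance between
    # cells adjacent in the sequence (lower = locality better kept).
    pts = np.asarray(order, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).mean())

def row_major(n):
    return [(r, c) for r in range(n) for c in range(n)]

def snake(n):
    # Boustrophedon scan: every consecutive pair of cells is adjacent.
    return [(r, c if r % 2 == 0 else n - 1 - c)
            for r in range(n) for c in range(n)]
```

Under this proxy the snake order scores better than the row-major scan, mirroring the paper's finding that SFCs with stronger locality (e.g., the Hilbert order) yield tighter lower bounds.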

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in a modern society, researchers have usually collected opinions from professional experts and scholars through online or offline surveys. However, such a method is often ineffective. Because of the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find experts dealing with specific social issues, so the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions because each expert has his or her own subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount these shortcomings, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting the news articles and extracting their texts, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA yields a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs. Meanwhile, using LDA, we extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so that each topic finally has several best-matched paragraphs. For example, suppose there are a topic (e.g., Unemployment Problem) and a best-matched paragraph (e.g., "Up to 300 workers lost their jobs at XXX company in Seoul"). In this case, we can grasp detailed information about the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly.
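A minimal sketch of such paragraph-to-topic matching (a smoothed unigram scorer with hypothetical topics, not the paper's full generative model):

```python
import math

def match_topic(paragraph, topics):
    # Assign a paragraph to the topic with the highest (smoothed)
    # log-probability of generating the paragraph's terms, using each
    # topic's term-probability table (e.g., as produced by LDA).
    words = paragraph.lower().split()
    best_topic, best_score = None, -math.inf
    for name, term_probs in topics.items():
        # Unseen terms get a small floor probability (smoothing).
        score = sum(math.log(term_probs.get(w, 1e-6)) for w in words)
        if score > best_score:
            best_topic, best_score = name, score
    return best_topic
```

A real pipeline would tokenize Korean text morphologically and normalize for paragraph length, but the ranking idea is the same.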
Through this prototype system, we have detected various social issues appearing in our society and shown the effectiveness of our proposed methods through experimental results. Note that our proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.