• Title/Summary/Keyword: time-warping


Optimization of Subsequence Matching Under Time-Warping in Time-Series Databases (시계열 데이터베이스에서 타임 워핑 하의 서브시퀀스 매칭의 성능 최적화)

  • Kim, Man-Soon;Kim, Sang-Wook
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.117-120 / 2004
  • In this paper, we discuss how to efficiently process subsequence matching under time warping in time-series databases. Time warping enables us to find sequences with similar patterns even when the sequences in the database differ in length. We propose a new technique that optimizes the CPU processing of Naive-Scan, the existing basic method for subsequence matching under time warping. The proposed technique maximizes CPU performance by eliminating in advance the redundant computations that arise while calculating the time-warping distances between the query sequence and the subsequences. We prove theoretically that the proposed technique incurs no false dismissals and that it is optimal for processing Naive-Scan. Performance evaluation through extensive experiments quantitatively verifies the improvement achieved by the proposed optimization. We also show that the proposed technique can be successfully applied to the post-processing steps of LB-Scan and ST-Filter, existing methods that include a filtering step.
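The time-warping distance the abstract refers to can be computed with the standard dynamic-programming recurrence. A minimal sketch in plain Python (illustrative only, not the authors' optimized Naive-Scan):

```python
def dtw_distance(q, s):
    """Time-warping distance between query q and subsequence s,
    via the standard O(len(q)*len(s)) dynamic program."""
    n, m = len(q), len(s)
    INF = float("inf")
    # d[i][j] = best cumulative cost aligning q[:i] with s[:j]
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(q[i - 1] - s[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch query point
                                 d[i][j - 1],      # stretch data point
                                 d[i - 1][j - 1])  # advance both
    return d[n][m]

# Sequences of different lengths can still match perfectly:
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
```

Because each subsequence shares most of its DP table with its neighbors under a sliding window, much of this computation is redundant across subsequences, which is exactly the redundancy the paper's optimization removes.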


Analyzing Growth Factors of Alley Markets Using Time-Series Clustering and Logistic Regression (시계열 군집분석과 로지스틱 회귀분석을 이용한 골목상권 성장요인 연구)

  • Kang, Hyun Mo;Lee, Sang-Kyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.6 / pp.535-543 / 2019
  • Recently, growing social interest in alley markets that have shown rapid growth, such as Gyeonglidan-gil street in Seoul, has created the need for an analysis of their growth factors. This paper explores growing alley markets through time-series clustering using DTW (Dynamic Time Warping) and examines their growth factors through logistic regression. According to the cluster analysis, the number of growing markets in the Northeast, the Southwest, and the Southeast was much larger than in the Northwest, but the regional proportion of growing markets in the Northwest, the Northeast, and the Southwest was much higher than in the Southeast. The logistic regression results show that people in their 20s and 30s have a lower impact on sales than those in their 50s, but a greater impact on the growth of alley markets. Alley markets located in high-income areas often reached their growth limits, showing a tendency to stagnate or decline. Proximity to a subway station had a positive effect on sales but a negative effect on growth. This research advances previous work in that it examines the causes of sales growth in alley markets, which preceding studies had not.
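The DTW-based grouping step can be illustrated with a toy assignment of sales trajectories to prototype patterns (all trajectories and prototypes below are hypothetical, not the paper's data):

```python
def dtw(a, b):
    """Standard dynamic-programming DTW distance between two series."""
    INF = float("inf")
    d = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]

def assign_clusters(series, prototypes):
    """Assign each sales trajectory to its nearest prototype under DTW."""
    return [min(range(len(prototypes)),
                key=lambda k: dtw(s, prototypes[k]))
            for s in series]

# Toy quarterly sales trajectories: two growing, one declining
markets = [[1, 2, 4, 8], [1, 2, 3, 7], [8, 6, 3, 1]]
protos = [[1, 2, 4, 8], [8, 4, 2, 1]]   # "growing" vs "declining" shapes
print(assign_clusters(markets, protos))  # → [0, 0, 1]
```

The resulting cluster labels (growing vs. non-growing) are the kind of binary outcome a logistic regression can then relate to explanatory variables such as age composition or subway proximity.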

Clustering of Smart Meter Big Data Based on KNIME Analytic Platform (KNIME 분석 플랫폼 기반 스마트 미터 빅 데이터 클러스터링)

  • Kim, Yong-Gil;Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.2 / pp.13-20 / 2020
  • One of the major issues surrounding big data is the availability of massive time-based or telemetry data. The appearance of low-cost capture and storage devices has now made it possible to obtain very detailed time data for further analysis. We can use these time data to gain more knowledge about the underlying system or to predict future events with higher accuracy. In particular, it is very important to define custom-tailored contract offers for the many households and businesses with smart meter records, and to predict future electricity usage to protect electricity companies from power shortages or surpluses. Identifying a few groups with common electricity behavior is required to make the creation of customized contract offers worthwhile. This study suggests a big data transformation and clustering technique for understanding electricity usage patterns, using open smart meter data together with KNIME, an open-source platform for data analytics that provides a user-friendly graphical workbench for the entire analysis process. While the big data components are not open source, they are also available for trial if required. After importing, cleaning, and transforming the smart meter big data, each meter's data can be interpreted in terms of electricity usage behavior through a dynamic time warping method.

Fixed Pattern Noise Reduction in Infrared Videos Based on Joint Correction of Gain and Offset (적외선 비디오에서 Gain과 Offset 결합 보정을 통한 고정패턴잡음 제거기법)

  • Kim, Seong-Min;Bae, Yoon-Sung;Jang, Jae-Ho;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.35-44 / 2012
  • Most recent infrared (IR) sensors have a focal-plane array (FPA) structure. Spatial non-uniformity of an FPA, however, introduces unwanted fixed pattern noise (FPN) into images. Non-uniformity correction (NUC) of an FPA can be categorized into target-based and scene-based approaches. In a target-based approach, FPN can be separated by using a uniform target such as a black body. Since the detector response drifts randomly along the time axis, however, several scene-based algorithms operating on a video sequence have been proposed. Among them, the state-of-the-art algorithm based on the Kalman filter uses one-directional warping for motion compensation and compensates only for the offset non-uniformity of IR camera detectors. A system model using one-directional warping cannot correct the boundary region where a new scene enters in the next video frame. Furthermore, offset-only correction may not completely remove the FPN from images that are considerably affected by gain non-uniformity. Therefore, for FPN reduction in IR videos, we propose a joint gain-and-offset correction algorithm based on bi-directional warping. Experimental results using simulated and real IR videos show that the proposed scheme provides better FPN reduction than the state of the art.
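The joint gain-and-offset model implies a per-pixel correction of the form (observed − offset) / gain once both non-uniformity maps are estimated. A minimal sketch with made-up calibration values (the paper's estimation of these maps from bi-directionally warped video is not shown here):

```python
def correct_fpn(frame, gain, offset):
    """Per-pixel non-uniformity correction. The sensor model is
    observed = gain * true + offset, so the corrected value is
    (observed - offset) / gain."""
    return [[(frame[r][c] - offset[r][c]) / gain[r][c]
             for c in range(len(frame[0]))]
            for r in range(len(frame))]

# 2x2 toy frame: every pixel images the same true radiance (10.0),
# but per-pixel gain/offset non-uniformity distorts the readings.
observed = [[11.0, 19.0], [5.0, 22.0]]
gain     = [[1.0,  2.0],  [0.5,  2.0]]
offset   = [[1.0, -1.0],  [0.0,  2.0]]
print(correct_fpn(observed, gain, offset))  # → [[10.0, 10.0], [10.0, 10.0]]
```

An offset-only scheme applied to this toy frame would leave the gain distortion (factors 0.5-2.0) uncorrected, which is the shortfall the abstract points out.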

Temporally-Consistent High-Resolution Depth Video Generation in Background Region (배경 영역의 시간적 일관성이 향상된 고해상도 깊이 동영상 생성 방법)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.20 no.3 / pp.414-420 / 2015
  • The quality of depth images is important in a 3D video system for representing complete 3D content. However, the original depth image from a depth camera has low resolution and suffers from flickering, i.e., depth values that vibrate over time. This problem causes discomfort when viewing 3D content. To solve the low-resolution problem, we employ 3D warping and a depth-weighted joint bilateral filter. A temporal mean filter can be applied to solve the flickering problem, but it introduces a residual spectrum problem in the depth image. Thus, after classifying foreground and background regions, we use the upsampled depth image for the foreground region and the temporal mean image for the background region. Test results show that the proposed method generates a temporally consistent depth video with high resolution.
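The per-region composition described above can be sketched as follows (toy 1×2 depth maps; the function name and list-of-lists representation are illustrative, not the authors' implementation):

```python
def stabilize_background(frames, fg_mask, upsampled):
    """Compose each pixel from the upsampled current depth (foreground)
    or the per-pixel temporal mean over all frames (background)."""
    h, w = len(frames[0]), len(frames[0][0])
    mean = [[sum(f[r][c] for f in frames) / len(frames) for c in range(w)]
            for r in range(h)]
    return [[upsampled[r][c] if fg_mask[r][c] else mean[r][c]
             for c in range(w)] for r in range(h)]

frames = [[[100, 52]], [[100, 48]], [[100, 50]]]  # 1x2 depth over 3 frames
fg     = [[True, False]]                          # first pixel is foreground
up     = [[105, 0]]                               # upsampled current frame
print(stabilize_background(frames, fg, up))       # → [[105, 50.0]]
```

The second pixel's flickering values (52, 48, 50) average to a stable 50.0, while the foreground pixel keeps its sharp upsampled value.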

3D Cloud Animation using Cloud Modeling Method of 2D Meteorological Satellite Images (2차원 기상 위성 영상의 구름 모델링 기법을 이용한 3차원 구름 애니메이션)

  • Lee, Jeong-Jin;Kang, Moon-Koo;Lee, Ho;Shin, Byeong-Seok
    • Journal of Korea Game Society / v.10 no.1 / pp.147-156 / 2010
  • In this paper, we propose 3D cloud animation using a cloud modeling method applied to 2D images retrieved from a meteorological satellite. First, we place numerous control points on the satellite images and perform thin-plate spline warping between consecutive frames to model cloud motion. In addition, the visible and infrared spectral channels are used to determine the amount and altitude of clouds for 3D cloud image reconstruction. A pre-integrated volume rendering method achieves seamless inter-laminar shading in real time using a small number of slices of the volume data. The proposed method successfully constructs continuously moving 3D clouds from 2D satellite images at an acceptable speed and image quality.

The Implementation of Video Library using VR (가상현실을 이용한 동화상 도서관의 구현)

  • Kim, Dong-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.7 / pp.1456-1461 / 2003
  • Recently, the quantity of information in use has been increasing geometrically. At the same time, information management affects the effective operation of most organizations, so many users demand powerful tools that provide access to more information. Since information retrieval technology has concentrated mainly on text-based objects, effective retrieval techniques for multimedia objects such as still images, video, and sound must be studied. Because a computer monitor is two-dimensional, it is difficult to grasp the whole layout of a space such as a library at a glance. Accordingly, two steps are necessary. First, virtual video is acquired by rotating a camera with a 15 mm lens in 30-degree increments and applying warping and distortion correction. Second, book searching is improved for users by adding this video to the existing book information system. This paper suggests a way to implement this video retrieval technology on a personal computer.

Black-Litterman Portfolio with K-shape Clustering (K-shape 군집화 기반 블랙-리터만 포트폴리오 구성)

  • Yeji Kim;Poongjin Cho
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.4 / pp.63-73 / 2023
  • This study explores modern portfolio theory by integrating the Black-Litterman portfolio with time-series clustering, specifically emphasizing the K-shape clustering methodology. K-shape clustering groups time-series data effectively and, combined with the Black-Litterman portfolio, enhances the ability to plan and manage investments in stock markets. Based on stock market patterns, the objective is to understand the relationship between past market data and future investment strategies through backtesting. Additionally, by examining diverse learning and investment periods, optimal strategies are identified that boost portfolio returns while efficiently managing the associated risks. For comparative analysis, the traditional Markowitz portfolio is also assessed in conjunction with clustering based on K-Means and on K-Means with Dynamic Time Warping. The results suggest that combining K-shape with the Black-Litterman model significantly enhances portfolio optimization in the stock market, providing valuable insights for making stable portfolio investment decisions. The achieved Sharpe ratio of 0.722 indicates significantly higher performance than the other benchmarks, underlining the effectiveness of the K-shape and Black-Litterman integration in portfolio optimization.
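K-shape clusters series under the shape-based distance (SBD): one minus the maximum normalized cross-correlation over all alignment shifts, so two series with the same shape at different phases are close. A minimal sketch of the distance itself (the full K-shape centroid update is not shown):

```python
import math

def sbd(x, y):
    """Shape-based distance used by K-shape: 1 minus the maximum
    normalized cross-correlation over all alignment shifts."""
    norm = (math.sqrt(sum(v * v for v in x)) *
            math.sqrt(sum(v * v for v in y)))
    best = max(
        sum(x[i] * y[i - s] for i in range(len(x)) if 0 <= i - s < len(y))
        for s in range(-(len(y) - 1), len(x))
    )
    return 1.0 - best / norm

a = [0, 1, 2, 1, 0]
b = [1, 2, 1, 0, 0]   # same shape as a, shifted by one step
print(round(sbd(a, b), 6))  # → 0.0
```

The shift-invariance is why K-shape can group return series whose patterns are similar but out of phase, in contrast to K-Means on raw values.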

Decoupled Parametric Motion Synthesis Based on Blending (상.하체 분리 매개화를 통한 블렌딩 기반의 모션 합성)

  • Ha, Dong-Wook;Han, Jung-Hyun
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.439-444 / 2008
  • Techniques that locate example motions in an abstract parameter space and interpolate them to generate new motion with given parameters are widely used in real-time animation systems for their controllability and efficiency. However, as the dimension of the parameter space increases to allow more complex control, the number of example motions required for parameterization grows exponentially. This paper proposes a method that uses two separate parameter spaces to obtain decoupled control over upper-body and lower-body motion. At each frame time, each parameterized motion space produces a source frame that satisfies the constraints involving the corresponding body part. The target frame is then synthesized by splicing the upper body of one source frame onto the lower body of the other. To generate source frames that correspond to each other, we present a novel time-warping scheme. This decoupled parameterization alleviates the problems caused by the dimensional complexity of the parameter space and provides users with layered control over the character. However, when the examples are parameterized based on the spatial properties of the upper body, the parameters of the examples vary individually with every change of the lower body. To handle this, we provide an approximation technique that rapidly updates the positions of the examples in the parameter space.
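The splicing step, which takes the upper body from one source frame and the lower body from the other, can be sketched as follows (the joint names and dict-based frame representation are illustrative, not the paper's data format):

```python
# Hypothetical partition of the skeleton into upper- and lower-body joints.
UPPER = {"spine", "neck", "l_arm", "r_arm"}

def splice(upper_frame, lower_frame):
    """Build the target frame from the upper body of one source frame
    and the lower body of the other."""
    return {j: (upper_frame[j] if j in UPPER else lower_frame[j])
            for j in upper_frame}

a = {"spine": 10, "neck": 11, "l_arm": 12, "r_arm": 13, "l_leg": 14, "r_leg": 15}
b = {"spine": 20, "neck": 21, "l_arm": 22, "r_arm": 23, "l_leg": 24, "r_leg": 25}
print(splice(a, b))  # upper joints from a, leg joints from b
```

For the spliced frame to look natural, the two source frames must be at corresponding phases of their motions, which is what the paper's time-warping scheme ensures.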


Recurrent Neural Network Modeling of Etch Tool Data: a Preliminary for Fault Inference via Bayesian Networks

  • Nawaz, Javeria;Arshad, Muhammad Zeeshan;Park, Jin-Su;Shin, Sung-Won;Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference / 2012.02a / pp.239-240 / 2012
  • With advancements in semiconductor device technologies, manufacturing processes are becoming more complex, and it has become more difficult to maintain tight process control. As the number of processing steps for fabricating complex chip structures increases, potential fault-inducing factors become prevalent and their allowable margins are continuously reduced. Therefore, one of the keys to success in semiconductor manufacturing is highly accurate and fast fault detection and classification at each stage, both to reduce undesired variation and to identify the cause of each fault. Sensors in the equipment are used to monitor the state of the process. The idea is that whenever there is a fault in the process, it appears as some variation in the output of one of the sensors monitoring the process. These sensors may report pressure, RF power, gas flow, and so on. By relating the data from these sensors to the process condition, abnormalities in the process can be identified, though only with some degree of certainty. Our hypothesis in this research is that we can capture the features of equipment condition data from a healthy process library. The healthy data can then serve as a reference for upcoming processes, which is made possible by mathematically modeling the acquired data. In this work we demonstrate the use of a recurrent neural network (RNN), a dynamic neural network whose output is a function of previous inputs. In our case we have etch equipment tool-set data consisting of 22 parameters and 9 runs. The data were first synchronized using the Dynamic Time Warping (DTW) algorithm. The synchronized sensor data, in the form of time series, were then provided to the RNN, which trains and restructures itself according to the input and predicts a value one step ahead in time based on the past values of the data. Eight runs of process data were used to train the network, while one run was used as a test input to check the network's performance. Next, a mean-squared-error-based probability-generating function was used to assign a probability of fault to each parameter by comparing the predicted and actual values of the data. In future work we will use Bayesian networks to classify the detected faults. Bayesian networks use directed acyclic graphs that relate different parameters through their conditional dependencies in order to draw inferences among them. The relationships between parameters in the data will be used to generate the structure of the Bayesian network, and the posterior probabilities of different faults will then be calculated using inference algorithms.
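The residual-based fault scoring can be sketched as follows; the abstract does not specify the exact probability-generating function, so the normalized per-parameter MSE below is only an assumed stand-in:

```python
def fault_scores(predicted, actual):
    """Per-parameter fault score from one-step-ahead prediction residuals:
    the mean squared error of each parameter, normalized so the scores
    sum to 1. (Illustrative scoring; the paper's exact probability
    function is not given in the abstract.)"""
    mses = [sum((p - a) ** 2 for p, a in zip(pred, act)) / len(pred)
            for pred, act in zip(predicted, actual)]
    total = sum(mses)
    return [m / total for m in mses] if total else [0.0] * len(mses)

# Two monitored parameters over four time steps (hypothetical data):
pred   = [[1.0, 1.0, 1.0, 1.0], [5.0, 5.0, 5.0, 5.0]]
actual = [[1.0, 1.1, 0.9, 1.0], [5.0, 5.0, 8.0, 5.0]]
scores = fault_scores(pred, actual)
print(scores.index(max(scores)))  # → 1 (the drifting parameter)
```

A parameter whose actual trace departs sharply from the RNN's prediction accumulates a large residual and therefore receives most of the fault probability mass.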
