• Title/Summary/Keyword: traveltime decomposition (주행시간 분해기법)

Search Results: 2

Application of Residual Statics to Land Seismic Data: traveltime decomposition vs stack-power maximization (육상 탄성파자료에 대한 나머지 정적보정의 효과: 주행시간 분해기법과 겹쌓기제곱 최대화기법)

  • Sa, Jinhyeon; Woo, Juhwan; Rhee, Chulwoo; Kim, Jisoo
    • Geophysics and Geophysical Exploration / v.19 no.1 / pp.11-19 / 2016
  • Two representative residual-statics methods, traveltime decomposition and stack-power maximization, are discussed in terms of their application to land seismic data. For model data with synthetic shot/receiver statics (time shifts) applied and random noise added, the continuity of reflection events is much improved by the stack-power maximization method, with the derived time shifts approximately equal to the synthetic statics. Optimal parameters for residual statics (maximum allowable shift, correlation window, number of iterations) are effectively chosen using diagnostic displays of the CSP (common shot point) and CRP (common receiver point) stacks as well as the CMP gather. In addition to the removal of long-wavelength time shifts by refraction statics prior to residual statics, processing steps of f-k filtering, predictive deconvolution, and time-variant spectral whitening are employed to attenuate noise and thereby minimize error in the correlation process. The reflectors, including the horizontal reservoir layer, are imaged more clearly in the variable-density section by repicking velocities after residual statics and inverse NMO correction.
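The correlation step underlying both residual-statics methods can be sketched as follows: each trace's static is estimated as the lag that maximizes its cross-correlation with a pilot (stack) trace, subject to a maximum allowable shift. This is a minimal illustration assuming NumPy, not the paper's implementation; the Ricker wavelet, noise level, and synthetic statics are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n) - 100.0
f = 0.05  # hypothetical wavelet breadth parameter (per sample)
# Ricker wavelet used as the pilot (stack) trace
pilot = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

true_statics = [3, -5, 0, 7, -2]  # synthetic shot/receiver statics, in samples
traces = [np.roll(pilot, s) + 0.02 * rng.standard_normal(n) for s in true_statics]

def residual_static(trace, pilot, max_shift):
    """Lag (within +/- max_shift samples) maximizing cross-correlation with the pilot."""
    lags = np.arange(-max_shift, max_shift + 1)
    corr = [np.dot(np.roll(trace, -lag), pilot) for lag in lags]
    return int(lags[np.argmax(corr)])

estimated = [residual_static(tr, pilot, max_shift=10) for tr in traces]
print(estimated)  # → [3, -5, 0, 7, -2], matching the synthetic statics
```

The maximum-allowable-shift parameter mirrors the diagnostic-parameter choice described in the abstract: too small a window misses large statics, while too large a window risks cycle skipping onto a wavelet side lobe.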

Multi-DNN Acceleration Techniques for Embedded Systems with Tucker Decomposition and Hidden-layer-based Parallel Processing (터커 분해 및 은닉층 병렬처리를 통한 임베디드 시스템의 다중 DNN 가속화 기법)

  • Kim, Ji-Min; Kim, In-Mo; Kim, Myung-Sun
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.842-849 / 2022
  • With the development of deep learning technology, DNNs are increasingly used in embedded systems such as unmanned vehicles, drones, and robots. In an autonomous driving system, for example, it is crucial to run several highly accurate but computationally heavy DNNs at the same time. However, running multiple DNNs simultaneously on an embedded system with relatively low performance increases inference time, which can lead to abnormal behavior because actions that depend on the inference results are not performed in time. To solve this problem, the solution proposed in this paper first reduces computation by applying Tucker decomposition to the DNN models with large computation amounts, and then runs the DNN models in parallel as much as possible, at the granularity of hidden layers, inside the GPU. Experimental results show that DNN inference time decreases by up to 75.6% compared to the case before applying the proposed technique.
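The computation savings from Tucker decomposition can be illustrated with a Tucker-2 (HOSVD) factorization of a convolution kernel over its channel modes, a common approach to DNN compression; this is a hedged NumPy sketch, not the paper's implementation, and the kernel sizes and ranks are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker2(W, r_out, r_in):
    """Tucker-2 (HOSVD) of a conv kernel W[out, in, kh, kw]:
    W ~ core x_0 U_out x_1 U_in, truncating only the channel modes."""
    U_out = np.linalg.svd(unfold(W, 0), full_matrices=False)[0][:, :r_out]
    U_in = np.linalg.svd(unfold(W, 1), full_matrices=False)[0][:, :r_in]
    core = np.einsum('oikl,or,is->rskl', W, U_out, U_in)
    return core, U_out, U_in

def reconstruct(core, U_out, U_in):
    return np.einsum('rskl,or,is->oikl', core, U_out, U_in)

# Build an exactly low-multilinear-rank kernel so truncation is lossless (synthetic check)
rng = np.random.default_rng(0)
r_out, r_in = 4, 3
O, I, k = 16, 8, 3
core0 = rng.standard_normal((r_out, r_in, k, k))
A = np.linalg.qr(rng.standard_normal((O, r_out)))[0]
B = np.linalg.qr(rng.standard_normal((I, r_in)))[0]
W = reconstruct(core0, A, B)

core, U_out, U_in = tucker2(W, r_out, r_in)
err = np.linalg.norm(W - reconstruct(core, U_out, U_in)) / np.linalg.norm(W)

# Multiply-accumulates per output pixel: one full conv vs the three smaller stages
# (1x1 channel projection, small-rank k x k conv, 1x1 channel expansion)
full = O * I * k * k                                   # 16*8*3*3 = 1152
decomposed = I * r_in + r_in * r_out * k * k + r_out * O  # 24 + 108 + 64 = 196
print(full, decomposed)  # → 1152 196
```

On a real pretrained kernel the truncation would be lossy, so the ranks trade accuracy against computation; here the kernel is constructed low-rank so the reconstruction error is essentially zero.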