• Title/Summary/Keyword: Matching algorithm

Modelling of Fault Deformation Induced by Fluid Injection using Hydro-Mechanical Coupled 3D Particle Flow Code: DECOVALEX-2019 Task B (수리역학적연계 3차원 입자유동코드를 사용한 유체주입에 의한 단층변형 모델링: DECOVALEX-2019 Task B)

  • Yoon, Jeoung Seok;Zhou, Jian
    • Tunnel and Underground Space
    • /
    • v.30 no.4
    • /
    • pp.320-334
    • /
    • 2020
  • This study presents an application of the hydro-mechanically coupled Particle Flow Code 3D (PFC3D) to the simulation of a fluid-injection-induced fault slip experiment conducted at Mont Terri, Switzerland, as part of a task in the international research project DECOVALEX-2019. We also aimed at identifying the current limitations of the modelling method and issues for further development. A fluid flow algorithm was developed and implemented as a pore-pipe network model in a 3D bonded particle assembly using PFC3D v5, and was applied to the Mont Terri Step 2 minor fault activation experiment. The simulated results showed that the injected fluid migrates through the permeable fault zone and induces fault deformation, demonstrating fully hydro-mechanically coupled behavior. The simulated results, however, only partially matched the field measurements. The simulated pressure build-up at the monitoring location showed a linear and progressive increase, whereas the field measurement showed an abrupt increase associated with the fault slip. We conclude that this difference between the modelling and the field test is due to the structure of the fault in the model, which was represented as a combination of damage zone and core fractures. The modelled fault is likely larger than the real fault at the Mont Terri site. Therefore, the modelled fault allows several pathways of fluid flow from the injection location to the pressure monitoring location, leading to smooth pressure build-up at the monitoring location while the injection pressure increases, and an early start of pressure decay even before the injection pressure reaches its maximum. We also conclude that the clay filling in the real fault could have acted as a fluid barrier, which may have caused local fluid over-pressurization in the fault. Unlike the pressure results, the simulated fault deformations matched the field measurements. A better way of modelling a heterogeneous clay-filled fault structure with a narrow zone should be studied further to improve the applicability of the modelling method to fluid-injection-induced fault activation. A schematic sketch of a pore-pipe network flow update is given below.
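
The pore-pipe flow scheme described in this abstract can be illustrated with a minimal sketch (Python is used here and in the sketches that follow). This is not the PFC3D implementation: the network topology, conductivity, pore volume, fluid bulk modulus, and time step are all hypothetical placeholder values.

```python
import numpy as np

# Minimal explicit pore-pipe network flow update, loosely following the
# coupled scheme described in the abstract. All parameter values are
# hypothetical placeholders, not the PFC3D model's values.

def flow_step(p, pipes, k, V, Kf, dt):
    """One explicit update of pore pressures p through a pipe network.

    pipes : list of (i, j) pore index pairs connected by a pipe
    k     : pipe conductivity, V : pore volume, Kf : fluid bulk modulus
    """
    dp = np.zeros_like(p)
    for i, j in pipes:
        q = k * (p[i] - p[j])          # laminar flow rate along the pipe
        dp[i] -= q * dt * Kf / V       # pressure drop at the upstream pore
        dp[j] += q * dt * Kf / V       # pressure rise at the downstream pore
    return p + dp

# Toy network: injection at pore 0, monitoring at pore 3, with two parallel
# pathways (the situation the abstract blames for the smooth pressure
# build-up at the monitoring point).
p = np.array([1.0e6, 0.0, 0.0, 0.0])               # pressures in Pa
pipes = [(0, 1), (1, 2), (2, 3), (0, 2)]
for _ in range(2000):
    p = flow_step(p, pipes, k=1e-12, V=1e-6, Kf=2e9, dt=1e-4)
    p[0] = 1.0e6                                   # constant-pressure injection
print(p)   # pressure at pore 3 builds up smoothly toward the injection pressure
```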

The Study on New Radiating Structure with Multi-Layered Two-Dimensional Metallic Disk Array for Shaping Flat-Topped Element Pattern (구형 빔 패턴 형성을 위한 다층 이차원 원형 도체 배열을 갖는 새로운 방사 구조에 대한 연구)

  • 엄순영;스코벨레프;전순익;최재익;박한규
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.13 no.7
    • /
    • pp.667-678
    • /
    • 2002
  • In this paper, a new radiating structure with a multi-layered two-dimensional metallic disk array was proposed for shaping a flat-topped element pattern. It is an infinite periodic planar array structure with metallic disks finitely stacked above the radiating circular waveguide apertures. The theoretical analysis was performed in detail using rigorous full-wave analysis, based on modal representations for the fields in the partial regions of the array structure and for the currents on the metallic disks. The final system of linear algebraic equations was derived using the orthogonality of vector wave functions, the mode-matching method, boundary conditions, and Galerkin's method, and the unknown modal coefficients needed for calculating the array characteristics were determined by the Gaussian elimination method; a schematic sketch of this final solve is given below. The application of the algorithm was demonstrated in an array design shaping flat-topped element patterns of $\pm$20$^{\circ}$ beam width in Ka-band. The optimal design parameters, normalized by a wavelength for general applications, are presented; they were obtained through an optimization process based on simulation and design experience. A Ka-band experimental breadboard with nineteen symmetric elements was fabricated to compare simulation results with experimental results. The metallic disk array structure stacked above the radiating circular waveguide apertures was realized using an ion-beam deposition method on thin polymer films. The calculated and measured element patterns of the breadboard were in very close agreement within the beam scanning range. Side lobe and grating lobe results were analyzed, and a blindness phenomenon, which may be caused by the multi-layered metallic disk structure at broadside, was discussed. The input VSWR of the breadboard was less than 1.14, and its gains measured at 29.0 GHz, 29.5 GHz, and 30 GHz were 10.2 dB, 10.0 dB, and 10.7 dB, respectively. The experimental and simulation results showed that the proposed multi-layered metallic disk array structure can shape an efficient flat-topped element pattern.
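
The last step the abstract describes, determining the modal coefficients from the Galerkin system by Gaussian elimination, amounts to a dense complex linear solve. A schematic sketch follows, with a random matrix standing in for the actual mode-coupling integrals and an arbitrary mode count:

```python
import numpy as np

# Schematic only: once mode matching and Galerkin testing reduce the
# boundary-value problem to a dense system A @ c = b, the modal
# coefficients c follow from Gaussian elimination. The random complex
# matrix below merely stands in for the true coupling integrals.

rng = np.random.default_rng(0)
n_modes = 40                                    # number of retained modes (arbitrary)
A = rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))
b = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)

c = np.linalg.solve(A, b)                       # LU-based Gaussian elimination
print(np.allclose(A @ c, b))                    # True: coefficients reproduce the excitation
```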

A Study on Matching Method of Hull Blocks Based on Point Clouds for Error Prediction (선박 블록 정합을 위한 포인트 클라우드 기반의 오차예측 방법에 대한 연구)

  • Li, Runqi;Lee, Kyung-Ho;Lee, Jung-Min;Nam, Byeong-Wook;Kim, Dae-Seok
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.29 no.2
    • /
    • pp.123-130
    • /
    • 2016
  • With the shift toward fast construction in the shipbuilding market, demands on hull accuracy management keep rising in the shipbuilding industry. To enhance production efficiency and reduce the manufacturing cycle time, it is important for shipyards to evaluate the accuracy of ship components efficiently throughout the whole manufacturing cycle. In an accurate shipbuilding process, block accuracy is the key element, playing a significant role in shortening the shipbuilding period, decreasing cost, and improving ship quality. The key to block accuracy control is an integrated accuracy control system, which supports comprehensive accuracy control, higher block accuracy, standardized control procedures, "zero-defect transfer", and allowance-free shipbuilding. Generally, accuracy managers measure the vital points on block section surfaces with a heavy total station, which is inconvenient and time-consuming. In this paper, a new measurement method based on the point cloud technique is proposed. The method measures the 3D coordinates of the vital points on a block section surface with a 3D scanner and then compares each measured point with its design point using the ICP algorithm, including an allowable-error check that verifies whether the deviation between the design point and the measured point is within the margin of error. A minimal ICP sketch with such a check is given below.
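
The following sketch shows point-to-point ICP with an allowable-error check, as the abstract outlines. The SVD-based alignment and nearest-neighbor steps are standard ICP ingredients; the tolerance value and the toy block corners are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.spatial import cKDTree

# Point-to-point ICP with an allowable-error check. The tolerance and the
# toy "block" geometry below are invented for illustration.

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(measured, design, iters=30, tol=0.005):
    """Align measured vital points to design points, then flag errors."""
    tree = cKDTree(design)
    src = measured.copy()
    for _ in range(iters):
        _, idx = tree.query(src)      # closest design point per measured point
        R, t = best_rigid_transform(src, design[idx])
        src = src @ R.T + t
    _, idx = tree.query(src)
    errors = np.linalg.norm(src - design[idx], axis=1)
    return errors, errors <= tol      # per-point error and within-margin mask

# Toy block section corners, "measured" with a small rigid offset.
design = np.array([[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 3.0]])
measured = design + np.array([0.02, -0.01, 0.015])
err, ok = icp(measured, design)
print(err.round(4), ok)               # near-zero errors, all within tolerance
```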

Detecting Cadastral Discrepancy Method based on MMAS (MMAS 기법에 의한 지적불부합지 탐색기법)

  • Cho, Sung-Hwan;Huh, Yong
    • Journal of Cadastre & Land InformatiX
    • /
    • v.45 no.2
    • /
    • pp.149-160
    • /
    • 2015
  • This paper suggests the MMAS (Map Matching using Additional Surveying) method to improve the cadastral discrepancy search algorithm, which currently does not include corrections of mis-represented parcel data. MMAS searches for cadastral discrepancies after correcting mis-represented parcel data using nearby anchor points confirmed by surveys. MMAS first transforms the coordinate system of the digital cadastral map by matching anchor points obtained in the field surveying process to the corresponding building edges and facility points on the digital topographic map. Then, it searches for cadastral discrepancies by checking whether the area differences exceed the tolerance limit; a sketch of this transform-then-check procedure is given below. This improves the current search method by performing the process after correcting the distortion of the digital cadastral map, which helps to identify cadastral discrepancies that are not detectable within the distorted map. In our experiment, this method identified more discrepancies than the same method without distortion correction. We believe this method can support the national cadastral re-survey by identifying potential cadastral discrepancies more accurately.
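
A sketch of the transform-then-check procedure might look like the following. A 2D similarity (Helmert) transform is one plausible choice for registering the cadastral map to surveyed anchor points; the paper's exact transform and tolerance rule may differ, and all values below are illustrative guesses.

```python
import numpy as np

# MMAS-style two-step sketch: (1) fit a 2D similarity (Helmert) transform
# from cadastral-map anchor points to their surveyed counterparts, (2) flag
# parcels whose post-transform area disagrees with the register beyond a
# tolerance. Transform type, tolerance, and data are illustrative guesses.

def fit_similarity(src, dst):
    """Least squares for x' = a*x - b*y + tx, y' = b*x + a*y + ty."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.c_[src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)]
    A[1::2] = np.c_[src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)]
    a, b, tx, ty = np.linalg.lstsq(A, dst.ravel(), rcond=None)[0]
    M = np.array([[a, -b], [b, a]])
    return lambda pts: pts @ M.T + np.array([tx, ty])

def polygon_area(pts):
    """Shoelace formula for a simple polygon given as (n, 2) vertices."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Anchor points: distorted cadastral coordinates vs. surveyed positions.
cad = np.array([[0, 0], [100, 0], [100, 100], [0, 100.0]])
surveyed = cad * 1.001 + np.array([2.0, -1.5])     # scale + shift distortion
warp = fit_similarity(cad, surveyed)

parcel = np.array([[10, 10], [60, 10], [60, 40], [10, 40.0]])
area_after = polygon_area(warp(parcel))            # ~1503 m^2 after correction
area_register = 1450.0                             # area recorded in the register
TOL = 5.0                                          # m^2, hypothetical tolerance
print(round(area_after - area_register, 1), abs(area_after - area_register) > TOL)
```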

Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis (도로 주행환경 분석을 위한 빅데이터 플랫폼 구축 정보기술 인프라 개발)

  • Jung, In-taek;Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.3
    • /
    • pp.669-678
    • /
    • 2018
  • This study developed information technology infrastructures for building a driving environment analysis platform using various big data, such as vehicle sensing data and public data. First, on the hardware side, a small platform server with a parallel structure for distributed big data processing was built. Next, on the software side, programs for big data collection/storage, processing/analysis, and information visualization were developed. The collection software was developed as a collection interface using Kafka, Flume, and Sqoop. The storage software was divided into the Hadoop distributed file system and Cassandra DB according to how the data are used. The processing software performs spatial-unit matching and time-interval interpolation/aggregation of the collected data by applying the grid index method; a sketch of this step is given below. The analysis software was developed as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization software was developed as a Web GIS engine program for providing and visualizing various driving environment information. The performance evaluation yielded the number of executors, the optimal memory capacity, and the number of cores for the development server, and the computation performance was superior to that of a comparable cloud computing environment.
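
The grid index matching and time-interval aggregation step can be sketched in miniature. Plain pandas stands in here for the platform's distributed processing; the cell size, interval, and sample records are invented.

```python
import numpy as np
import pandas as pd

# Sketch of grid-index spatial matching plus fixed-interval aggregation.
# The cell size and interval are hypothetical parameters; the records are
# made-up vehicle sensing samples.

CELL = 0.001          # grid cell size in degrees (hypothetical)
INTERVAL = "5min"     # aggregation interval (hypothetical)

records = pd.DataFrame({
    "ts":    pd.to_datetime(["2018-03-01 08:01", "2018-03-01 08:03",
                             "2018-03-01 08:04", "2018-03-01 08:09"]),
    "lon":   [127.0231, 127.0234, 127.0312, 127.0233],
    "lat":   [37.5012, 37.5011, 37.5055, 37.5013],
    "speed": [42.0, 38.0, 55.0, 47.0],     # km/h from vehicle sensing data
})

# Grid index: integer cell coordinates become a compact join/partition key.
records["cell"] = (np.floor(records["lon"] / CELL).astype(int).astype(str)
                   + "_" + np.floor(records["lat"] / CELL).astype(int).astype(str))

# Time-interval aggregation: mean speed and sample count per cell and bin.
agg = (records.set_index("ts")
              .groupby(["cell", pd.Grouper(freq=INTERVAL)])["speed"]
              .agg(["mean", "count"]))
print(agg)
```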

Low Complexity Motion Estimation Based on Spatio-Temporal Correlations (시간적-공간적 상관성을 이용한 저 복잡도 움직임 추정)

  • Yoon Hyo-Sun;Kim Mi-Young;Lee Guee-Sang
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.9
    • /
    • pp.1142-1149
    • /
    • 2004
  • Motion estimation (ME) has been developed to reduce temporal redundancy in digital video signals and increase the data compression ratio. ME is an important part of video encoding systems, since it can significantly affect the output quality of encoded sequences. However, because ME has high computational complexity, it is difficult to apply to real-time video transmission. For this reason, motion estimation algorithms with low computational complexity are viable solutions. In this paper, we present an efficient method with low computational complexity based on the spatial and temporal correlations of motion vectors. The proposed method uses temporally and spatially correlated motion information, namely the motion vector of the block at the same coordinates in the reference frame and the motion vectors of neighboring blocks around the current block in the current frame, to decide the search pattern and the location of the search starting point adaptively; a sketch of this idea is given below. Experiments show that the proposed method improves image quality over MVFAST (Motion Vector Field Adaptive Search Technique) and PMVFAST (Predictive Motion Vector Field Adaptive Search Technique) by 0.01~0.3 dB, while running about 1.12~1.33 times faster owing to its lower computational complexity.
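
The adaptive start-point idea can be sketched as follows: predict from the spatial neighbors' motion vectors and the co-located block in the reference frame, then refine with a small local search. The SAD cost and the small diamond pattern are standard building blocks; the paper's exact pattern-selection rule is not reproduced, and the frames below are synthetic.

```python
import numpy as np

# Spatio-temporal start-point prediction plus small-diamond refinement.
# Standard SAD cost; synthetic frames; not the paper's full adaptive rule.

def sad(cur, ref, bx, by, mv, B=8):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by mv = (dx, dy)."""
    x, y = bx + mv[0], by + mv[1]
    if x < 0 or y < 0 or x + B > ref.shape[1] or y + B > ref.shape[0]:
        return np.inf
    return np.abs(cur[by:by+B, bx:bx+B].astype(int)
                  - ref[y:y+B, x:x+B].astype(int)).sum()

def predict_start(spatial_mvs, temporal_mv):
    """Start point: per-component median of the neighboring blocks' MVs
    and the co-located block's MV from the reference frame."""
    cands = np.array(spatial_mvs + [temporal_mv])
    return tuple(int(v) for v in np.median(cands, axis=0))

def small_diamond_search(cur, ref, bx, by, start):
    mv = start
    while True:   # descend while any diamond neighbor improves the SAD
        best = min([mv, (mv[0]+1, mv[1]), (mv[0]-1, mv[1]),
                    (mv[0], mv[1]+1), (mv[0], mv[1]-1)],
                   key=lambda v: sad(cur, ref, bx, by, v))
        if best == mv:
            return mv
        mv = best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(-2, -3), axis=(0, 1))   # content moved by dx=3, dy=2
start = predict_start([(3, 2), (2, 2), (3, 1)], (3, 2))
print(small_diamond_search(cur, ref, bx=16, by=16, start=start))   # (3, 2)
```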

Design and Analysis of Data File Protection based on the Stream Cipher (데이터파일의 보호를 위한 스트림 암호방식 설계와 해석)

  • 이경원;이중한;김정호;오창석
    • The Journal of the Korea Contents Association
    • /
    • v.4 no.1
    • /
    • pp.55-66
    • /
    • 2004
  • Recently, as personal computers have spread rapidly, they have become the nucleus of computing systems. However, because anyone can easily access them, the security of personal computers is weak. In this paper, I therefore propose a technical method that minimizes the loss and leakage of important data. This paper implemented a crypto system for securing data files on personal computers and auxiliary storage media. Encryption/decryption uses a composite method that combines the Diffie-Hellman key exchange protocol, the RC4 (Rivest Cipher version 4) stream cipher, and the MD5 (Message Digest version 5) hash function; a sketch of this combination is given below. To evaluate the implemented crypto system, three criteria are presented: cryptographic complexity, processing time, and pattern matching. Analysis against these three criteria verifies the system's security, efficiency, and usefulness. The crypto system was programmed in Microsoft Visual C++; since it is a software system, it provides technical security at minimum cost for any personal computer.
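
The cipher combination the abstract names can be sketched end to end: a Diffie-Hellman shared secret is hashed with MD5 to derive an RC4 key, whose keystream is XORed over the file bytes. The modulus below is a toy 64-bit prime (real DH needs far larger parameters), and RC4 and MD5 are broken by today's standards; the sketch only mirrors the 2004-era design described.

```python
import hashlib
import secrets

# Toy Diffie-Hellman + MD5 key derivation + RC4 stream encryption, mirroring
# the combination described in the abstract. Not secure by modern standards.

P = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a small prime (toy; real DH needs >= 2048 bits)
G = 5

def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # pseudo-random generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Diffie-Hellman key exchange (toy modulus).
a, b = secrets.randbelow(P - 2) + 2, secrets.randbelow(P - 2) + 2
A, B = pow(G, a, P), pow(G, b, P)
shared = pow(B, a, P)                          # both sides compute the same value
assert shared == pow(A, b, P)

key = hashlib.md5(shared.to_bytes(8, "big")).digest()   # 128-bit RC4 key
ciphertext = rc4(key, b"important data file contents")
print(rc4(key, ciphertext))                    # stream cipher: same op decrypts
```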

Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.;Calderbank, Robert;Jafarpour, Sina
    • Journal of Communications and Networks
    • /
    • v.12 no.4
    • /
    • pp.289-307
    • /
    • 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence among the columns of a design matrix, termed the worst-case coherence and the average coherence. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection (sketched below) and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to rank deficiency of submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals, irrespective of the phases of the nonzero entries, even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
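
The OST algorithm itself is a single correlate-and-threshold pass, as this sketch shows on synthetic data. The dimensions, signal levels, and threshold value are invented for illustration and are not the paper's constants.

```python
import numpy as np

# One-step thresholding (OST) on synthetic data: correlate every column of
# the design matrix with the observations and keep the columns whose
# correlation magnitude clears a threshold. All constants are illustrative.

rng = np.random.default_rng(2)
n, p, k, sigma = 256, 1024, 4, 0.1

X = rng.normal(size=(n, p)) / np.sqrt(n)        # columns have roughly unit norm
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)  # true model: k nonzero entries
beta[support] = rng.choice([-2.0, 2.0], size=k)
y = X @ beta + sigma * rng.normal(size=n)

corr = np.abs(X.T @ y)                          # one matrix-vector product
estimate = np.flatnonzero(corr > 1.2)           # hypothetical fixed threshold
print(np.sort(support), estimate)               # typically identical supports
```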

A Link Travel Time Estimation Algorithm Based on Point and Interval Detection Data over the National Highway Section (일반국도의 지점 및 구간검지기 자료의 융합을 통한 통행시간 추정 알고리즘 개발)

  • Kim, Sung-Hyun;Lim, Kang-Won;Lee, Young-Ihn
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.5 s.83
    • /
    • pp.135-146
    • /
    • 2005
  • Up to now, studies on fusing travel times from various detectors have been conducted based on the variance ratio of intermittent data, mainly collected by GPS or probe vehicles. A fusion model based on the variance ratio of intermittent data is not suitable for license plate recognition AVIs, which can handle vast amounts of data. This study was carried out to develop a fusion model for travel times acquired from license plate recognition AVIs and point detectors. To fuse these travel times, an optimized fusion model and a proportional fusion model were developed. Verification showed that the optimized fusion model has superior estimation performance. The optimized fusion model is a dynamic fusion-ratio estimation model that calculates fusion weights from recent historic data in real time and applies them to the current time period; a sketch of this idea is given below. The results of this study are expected to be used effectively in the National Highway Traffic Management System to provide traffic information in the future. However, further studies are needed on the proper spacing between AVI installations and on the license plate matching rate according to the lanes in which AVIs are installed.
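
A minimal sketch of the dynamic fusion-ratio idea: choose the weight that would have minimized squared error against ground truth in the previous period, then fuse the current period's point-detector and AVI travel times with it. The closed-form weight and all data values are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Dynamic fusion-ratio sketch: learn the weight w on the previous period's
# data, apply it to the current period. All travel-time values are made up.

def optimal_weight(point_tt, avi_tt, true_tt):
    """w minimizing sum((w*point + (1-w)*avi - true)^2), clipped to [0, 1]."""
    d = point_tt - avi_tt
    denom = np.dot(d, d)
    if denom == 0:
        return 0.5
    w = np.dot(d, true_tt - avi_tt) / denom
    return float(np.clip(w, 0.0, 1.0))

# Previous period (historic): both estimates plus ground-truth travel times (s).
hist_point = np.array([312.0, 298.0, 305.0, 320.0])
hist_avi   = np.array([295.0, 290.0, 300.0, 310.0])
hist_true  = np.array([300.0, 292.0, 301.0, 313.0])

w = optimal_weight(hist_point, hist_avi, hist_true)

# Current period: fuse with the weight learned from the previous period.
fused = w * 308.0 + (1 - w) * 299.0
print(round(w, 3), round(fused, 1))
```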

On-line Handwriting Chinese Character Recognition for PDA Using a Unit Reconstruction Method (유닛 재구성 방법을 이용한 PDA용 온라인 필기체 한자 인식)

  • Chin, Won;Kim, Ki-Doo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.1
    • /
    • pp.97-107
    • /
    • 2002
  • In this paper, we propose an on-line handwritten Chinese character recognition system for mobile personal digital assistants (PDAs). We focus on developing an algorithm with high recognition performance under the constraint that a PDA offers little memory and less computational power than a PC. We therefore use an index matching method, which has a computational advantage for fast recognition, and we propose a unit reconstruction method that minimizes the memory needed to store the character models and accommodates per-writer variations in stroke order and stroke number when handwriting Chinese characters. We set up standard models for 1800 characters using a set of pre-defined units. After preprocessing and feature extraction, input data are scored by similarity against candidate characters selected on the basis of stroke number and region features; a sketch of this two-stage flow is given below. We consider the 1800 Chinese characters taught in middle and high schools in Korea, using character sets from five people, written in printed style, irrespective of stroke order and stroke number. Experimentally, we obtained an average recognition time of 0.16 seconds per character and a recognition rate of 94.3% on a PDA with a MIPS R4000 CPU.
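
The two-stage matching flow can be sketched as follows: filter the model set by stroke count (absorbing per-writer stroke-number variation), then rank the survivors by sequence similarity. The model dictionary and direction-code features below are invented stand-ins for the paper's unit-based models.

```python
from difflib import SequenceMatcher

# Two-stage recognition sketch: stroke-count filter, then similarity
# ranking. The model dictionary and per-stroke direction codes are
# hypothetical stand-ins, not the paper's 1800-character unit models.

MODELS = {
    "山": (3, "26 2 26"),
    "川": (3, "2 2 2"),
    "口": (3, "2 60 6"),
    "中": (4, "2 60 6 2"),
}

def recognize(stroke_count, codes, tolerance=1):
    # Stage 1: stroke-count filter absorbs per-writer stroke-number variation.
    candidates = [(ch, m) for ch, (n, m) in MODELS.items()
                  if abs(n - stroke_count) <= tolerance]
    # Stage 2: rank candidates by similarity of the direction-code sequences.
    scored = [(ch, round(SequenceMatcher(None, codes, m).ratio(), 2))
              for ch, m in candidates]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

print(recognize(3, "26 2 26"))   # "山" ranks first with score 1.0
```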