• Title/Summary/Keyword: fault estimation

Search Results: 393

Analysis of MT Data Acquired in Victoria, Australia (호주 Victoria주 MT 탐사 자료 해석)

  • Lee, Seong-Kon;Lee, Tae-Jong;Uchida, Toshihiro;Park, In-Hwa;Song, Yoon-Ho;Cull, Jim
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.3
    • /
    • pp.184-196
    • /
    • 2008
  • We performed MT soundings in Bendigo, in the northern part of Victoria, Australia, to investigate the deep subsurface geologic structure. The primary purpose of the survey was to determine whether discontinuities such as faults extend northward. Time series of the MT signal were measured over 11 days at 71 measurement stations together with a remote reference station, which helps enhance the quality of impedance estimation and its interpretation. The impedances were estimated by robust processing using the remote reference technique and then inverted with a 2D MT inversion. Known faults are clearly imaged in the 2D inversion results, and the resistivity images match well with boundaries interpreted from reflection seismic exploration.

Joint Overlapped Block Motion Compensation Using Eight-Neighbor Block Motion Vectors for Frame Rate Up-Conversion

  • Li, Ran;Wu, Minghu;Gan, Zongliang;Cui, Ziguan;Zhu, Xiuchang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.10
    • /
    • pp.2448-2463
    • /
    • 2013
  • Traditional block-based motion compensation methods in frame rate up-conversion (FRUC) use only a single, uniquely determined motion vector field. However, there will always be some mistakes in the motion vector field, whether or not advanced motion estimation (ME) and motion vector analysis (MA) algorithms are applied. Once the motion vector field has many mistakes, the quality of the interpolated frame is severely degraded. To solve this problem, this paper proposes a novel joint overlapped block motion compensation method (8J-OBMC), which adopts the motion vectors of the interpolated block and its 8-neighbor blocks to jointly interpolate the target block. Since the smoothness of the motion field makes the motion vectors of the 8-neighbor blocks quite close to the true motion vector of the interpolated block, the proposed compensation algorithm has better fault tolerance than traditional ones. Besides, the annoying blocking artifacts can also be effectively suppressed by using overlapped blocks. Experimental results show that, in comparison with existing popular compensation methods, the proposed method is not only robust to wrongly estimated motion vectors but also reduces blocking artifacts.
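The joint interpolation idea in the abstract can be sketched as follows: the target block is predicted bidirectionally with each of the nine candidate motion vectors (its own plus the 8 neighbors') and the predictions are blended. Uniform blending weights are an assumption here; the paper's exact weighting windows are not given in the abstract, and the function name is illustrative.

```python
import numpy as np

def joint_obmc_block(prev_frame, next_frame, block_xy, mvs, block=8):
    """Sketch of joint overlapped block motion compensation (8J-OBMC).

    The target block at `block_xy` in the interpolated (temporally
    middle) frame is the average of bidirectional predictions made with
    the motion vectors of the block itself and its 8 neighbours
    (`mvs`, a list of (dy, dx) pairs). Uniform weights are an
    assumption, not the paper's exact window.
    """
    y0, x0 = block_xy
    acc = np.zeros((block, block), dtype=np.float64)
    for dy, dx in mvs:
        # halve the vector to land on the temporally middle frame
        fy, fx = y0 + dy // 2, x0 + dx // 2        # position in previous frame
        by, bx = y0 - dy // 2, x0 - dx // 2        # position in next frame
        fwd = prev_frame[fy:fy + block, fx:fx + block]
        bwd = next_frame[by:by + block, bx:bx + block]
        acc += 0.5 * (fwd + bwd)                   # bidirectional prediction
    return acc / len(mvs)                          # blend the 9 predictions
```

With all nine candidate vectors agreeing, the result reduces to plain bidirectional averaging, which is why the method degrades gracefully when only a few neighbor vectors are wrong.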

The Analysis of Underground Utility Tunnel Positions using Lineament and GPR (선구조와 지하 투과 레이더를 이용한 지하공동구 위치 해석)

  • Jang, Ho-Sik;Seo, Dong-Ju
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.9 no.4
    • /
    • pp.142-150
    • /
    • 2006
  • In this study, GPR and lineament methods are used for effective construction. GPR is a non-destructive testing method for locating underground utility tunnels, while the lineament method characterizes the locational environment. First, the soil condition of the subject area was surveyed by location analysis. From the GPR survey, the locations and scales of the small- and large-scale underground utility tunnels were estimated. From these results, it was found that the main cause of the underground cavities was not landslides or ground disturbance from the excavation work, but shear and tension cracks produced by fault movement under the surrounding conditions. This investigation method should prove very useful in the on-site survey and design stages for effective construction and maintenance.


Seismic Design of Structures in Low Seismicity Regions

  • Lee, Dong-Guen;Cho, So-Hoon;Ko, Hyun
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.11 no.4
    • /
    • pp.53-63
    • /
    • 2007
  • Seismic design codes are developed mainly based on observations of the behavior of structures in high seismicity regions, where structures may experience significant inelastic deformation and major earthquakes may cause structural damage over a vast area. Therefore, seismic loads are reduced in current building design codes using response modification factors, which depend on the ductility capacity and overstrength of a structural system. However, structures in low seismicity regions such as Korea, subjected to minor earthquakes, will behave almost elastically because of their larger overstrength. Structures in low seismicity regions may also have longer periods, since they are designed for smaller seismic loads, and the main design targets will be minor or moderate earthquakes occurring nearby. Ground accelerations recorded at stations near the epicenter may have somewhat different response spectra from those recorded at distant stations. Therefore, it is necessary to verify whether seismic design methods based on high seismicity would be applicable to low seismicity regions. In this study, the adequacy of design spectra, period estimation, and response modification factors is discussed for seismic design in low seismicity regions. The response modification factors are verified based on the ductility and overstrength of building structures estimated from the force-displacement relationship. For the same response modification factor, the ductility demand in low seismicity regions may be smaller than in high seismicity regions because the overstrength of structures may be larger. The ductility demands in example structures designed to UBC97 for high, moderate, and low seismicity regions were compared. Plastic rotation demands in connections were much lower in low seismicity regions than in high seismicity regions when the structures were designed with the same response modification factor. Therefore, in low seismicity regions, connection details with large ductility capacity would not be required even for structures designed with a large response modification factor.
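The argument above follows from the standard textbook decomposition of the response modification factor into a ductility-related factor and an overstrength factor, R = R_mu * Omega (assumed here; the abstract does not state its exact decomposition):

```python
def ductility_related_factor(R, overstrength):
    """For a fixed response modification factor R, the ductility-related
    factor is R_mu = R / Omega (standard decomposition R = R_mu * Omega,
    assumed here). A larger overstrength Omega leaves a smaller R_mu,
    i.e. a smaller ductility demand, which is the paper's point about
    low-seismicity regions.
    """
    return R / overstrength
```

For example, with R = 8, doubling the overstrength from 2 to 4 halves the ductility-related factor from 4 to 2.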

Condition Assessment for Wind Turbines with Doubly Fed Induction Generators Based on SCADA Data

  • Sun, Peng;Li, Jian;Wang, Caisheng;Yan, Yonglong
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.2
    • /
    • pp.689-700
    • /
    • 2017
  • This paper presents an effective approach for wind turbine (WT) condition assessment based on data collected from a wind farm supervisory control and data acquisition (SCADA) system. Three types of assessment indices are determined from the monitoring parameters obtained from the SCADA system. Neural networks (NNs) are used to establish prediction models for the assessment indices that depend on environmental conditions such as ambient temperature and wind speed. An abnormal level index (ALI) is defined to quantify the abnormal level of the proposed indices. Prediction errors of the prediction models follow a normal distribution, so these ALIs can be calculated from the probability density function of the normal distribution. For the other assessment indices, the ALIs are calculated from a cumulative probability density function obtained by nonparametric estimation. A back-propagation NN (BPNN) is used for the overall WT condition assessment, with the ALIs of the proposed indices as its inputs. The network structure and the number of nodes in the hidden layer are carefully chosen when the BPNN model is trained. The condition assessment method has been applied to real 1.5 MW WTs with doubly fed induction generators. Results show that the proposed method can effectively predict changes in operating conditions prior to fault occurrence and provide early alarms for developing WT faults.
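For the normally distributed prediction errors, one natural way to turn an error into an abnormal level index is the two-sided probability mass closer to the mean than the observed error, so 0 means "typical" and values near 1 mean "highly abnormal". This is a sketch of one common choice; the paper's exact ALI formula is not given in the abstract, and the function name is illustrative.

```python
import math

def abnormal_level_index(error, mu, sigma):
    """Sketch of an abnormal level index (ALI) for a prediction error.

    Assumes errors ~ N(mu, sigma^2), with mu and sigma estimated from
    healthy-condition training data. Returns
    P(|E - mu| <= |error - mu|), i.e. 0 for a typical error and
    values approaching 1 for errors far out in the tails.
    """
    z = abs(error - mu) / sigma
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF at z
    return 2.0 * phi - 1.0
```

A 1.96-sigma error maps to an ALI of about 0.95, matching the familiar two-sided 95% interval of the normal distribution.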

A comparative study on learning effects based on the reliability model depending on Makeham distribution (Makeham분포에 의존한 신뢰성모형에 근거한 학습효과 특성에 관한 비교 연구)

  • Kim, Hee-Cheul;Cheul, Shin-Hyun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.5
    • /
    • pp.496-502
    • /
    • 2016
  • In this study, we investigate comparative NHPP software reliability models based on learning techniques that operators can apply in the software testing process and that can be applied to software test tools in software product development. The lifetime distribution is the Makeham distribution, based on a finite-fault NHPP. For software error detection, we compare models in which the testing manager sets the error-detection factor precisely by considering both the factors influencing errors found automatically and the learning factors known from prior experience. As a result, a well-organized model could be established when the learning factor is larger than the autonomous error-detection factor. Reliability is characterized using the times between failures, with parameters approximated by maximum likelihood estimation; after confirming the effectiveness of the data through trend analysis, model selection was carried out using the mean squared error and $R^2$. This paper should help software operators identify failure modes by considering the lifetime distribution as basic knowledge of the software.
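A finite-fault NHPP with a Makeham lifetime distribution has mean value function m(t) = θ·F(t), where F is the Makeham CDF; the model-selection statistics named in the abstract (mean squared error and R²) compare m(t) against the observed cumulative failure counts. The parameter names below are illustrative, as the abstract does not fix a notation.

```python
import math

def makeham_mvf(t, theta, alpha, beta, lam):
    """Mean value function of a finite-fault NHPP with Makeham lifetimes.

    m(t) = theta * F(t), with the Makeham CDF
    F(t) = 1 - exp(-alpha*t - (beta/lam) * (exp(lam*t) - 1)).
    theta is the expected total number of faults; alpha, beta, lam are
    Makeham parameters (illustrative notation).
    """
    F = 1.0 - math.exp(-alpha * t - (beta / lam) * (math.exp(lam * t) - 1.0))
    return theta * F

def mse_and_r2(observed, predicted):
    """Model-selection statistics used in the paper: mean squared error
    and the coefficient of determination R^2, computed between observed
    cumulative failures and the model's mean value function."""
    n = len(observed)
    mean_obs = sum(observed) / n
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return sse / n, 1.0 - sse / sst
```

In practice the parameters would first be fitted by maximum likelihood from the failure times; the model with the smaller MSE and larger R² is then preferred.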

Design of Kinematic Position-Domain DGNSS Filters (차분 위성 항법을 위한 위치영역 필터의 설계)

  • Lee, Hyung Keun;Jee, Gyu-In;Rizos, Chris
    • Journal of Advanced Navigation Technology
    • /
    • v.8 no.1
    • /
    • pp.26-37
    • /
    • 2004
  • Consistent and realistic error covariance information is important for position estimation, error analysis, fault detection, and integer ambiguity resolution in differential GNSS. In designing a position-domain carrier-smoothed-code filter, where incremental carrier phases are used for time propagation, formulating consistent error covariance information is not easy due to the boundedness and temporal correlation of the propagation noises. To provide consistent and correct error covariance information, this paper proposes two recursive filter algorithms based on carrier-smoothed-code techniques: (a) the stepwise optimal position projection filter and (b) the stepwise unbiased position projection filter. A Monte Carlo simulation shows that the proposed filter algorithms indeed generate consistent error covariance information, and that neglecting carrier phase noise yields optimistic error covariance information. It is also shown that the stepwise unbiased position projection filter is attractive, since its performance is good and its computational burden is moderate.


Switch-Level Binary Decision Diagram (SLBDD) for Circuit Design Verification (회로 설계 검증을 위한 스위치-레벨 이진 결정 다이어그램)

  • 김경기;이동은;김주호
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.36C no.5
    • /
    • pp.1-12
    • /
    • 1999
  • A new algorithm for constructing binary decision diagrams (BDDs) for design verification of switch-level circuits is proposed in this paper. In a switch-level circuit, functions are characterized by serial and parallel connections of switches, and the final logic values may take high-impedance and unstable states in addition to the logic values 0 and 1. We extend the BDD to represent functions of switch-level circuits as acyclic graphs, called switch-level binary decision diagrams (SLBDDs). The function representation of the graph is, in the worst case, exponential in the number of inputs, so the ordering of decision variables plays a major role in graph size. In the presence of pass transistors and domino logic with precharging circuitry, we also propose an input ordering algorithm for efficiency in graph size. We conducted several experiments on various benchmark circuits, and the results show that our algorithm is efficient enough to apply to functional simulation, power estimation, and fault simulation of switch-level designs.


Estimation of Tsunami Risk Zoning on the Coasts Adjacent to the East Sea from Hypothetical Earthquakes (공백역 지진에 의한 동해에 연한 해안에서의 지진해일 위험도 산정)

  • 최병호;에핌페리놉스키;이제신;우승범
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.6 no.5
    • /
    • pp.1-17
    • /
    • 2002
  • Prognostic characteristics of hypothetical tsunamis in the East Sea are discussed further than in the previous paper (Choi et al.), based on numerical simulations using linear long wave theory. For the choice of source zones, we used 28 cases based on fault parameters of hypothetical earthquakes and 76 cases based on simple initial surface shapes of tsunamigenic earthquakes selected by the seismic gap theory. As a result, the wave heights of tsunamis due to these hypothetical earthquakes are presented along the whole coastline adjacent to the East Sea. The analyses also lead us to conclude that the selection of geographical zones with low tsunami risk can be used as a tool for coastal disaster mitigation planning.

Improved full-waveform inversion of normalised seismic wavefield data (정규화된 탄성파 파동장 자료의 향상된 전파형 역산)

  • Kim, Hee-Joon;Matsuoka, Toshifumi
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.1
    • /
    • pp.86-92
    • /
    • 2006
  • The full-waveform inversion algorithm using normalised seismic wavefields can avoid potential inversion errors due to source estimation required in conventional full-waveform inversion methods. In this paper, we have modified the inversion scheme to install a weighted smoothness constraint for better resolution, and to implement a staged approach using normalised wavefields in order of increasing frequency instead of inverting all frequency components simultaneously. The newly developed scheme is verified by using a simple two-dimensional fault model. One of the most significant improvements is based on introducing weights in model parameters, which can be derived from integrated sensitivities. The model-parameter weighting matrix is effective in selectively relaxing the smoothness constraint and in reducing artefacts in the reconstructed image. Simultaneous multiple-frequency inversion can almost be replicated by multiple single-frequency inversions. In particular, consecutively ordered single-frequency inversion, in which lower frequencies are used first, is useful for computation efficiency.