• Title/Summary/Keyword: Laplace

A Comparative Study on the Infinite NHPP Software Reliability Model Following Chi-Square Distribution with Lifetime Distribution Dependent on Degrees of Freedom (수명분포가 자유도에 의존한 카이제곱분포를 따르는 무한고장 NHPP 소프트웨어 신뢰성 모형에 관한 비교연구)

  • Kim, Hee-Cheul;Kim, Jae-Wook
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.5
    • /
    • pp.372-379
    • /
    • 2017
  • Software reliability is an elementary factor during the software development process. In the infinite-failure NHPP framework for identifying software failures, the occurrence rate per fault (hazard function) may be constant, increasing, or decreasing. In this paper, we propose a reliability model using the chi-square distribution, whose degree of freedom governs its applicability to software reliability. Parameters were estimated with the maximum likelihood estimator combined with the bisection method, and model selection was based on the mean square error (MSE) and the coefficient of determination ($R^2$). The proposed model was applied to failure analysis using actual failure-interval data, and the fault data were compared through the intensity function for each degree of freedom of the chi-square distribution. To assure the reliability of the data, the Laplace trend test was employed. The chi-square model depending on the degree of freedom proved efficient in terms of reliability, with a coefficient of determination of 90% or more, and can therefore be used as an applied model alongside the basic models. Software development designers should apply a suitable lifetime distribution, informed by basic knowledge of the software, to identify the failure modes that may occur.
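
The Laplace trend test mentioned in the abstract can be sketched numerically: a negative trend factor suggests reliability growth (lengthening inter-failure times), while a value near zero suggests a stable homogeneous Poisson process. A minimal sketch; the failure times and observation horizon below are hypothetical, not the paper's data:

```python
import math

def laplace_trend(failure_times, horizon):
    """Laplace trend factor for cumulative failure times observed on [0, horizon].

    Negative values indicate reliability growth; values near 0 indicate
    a trend-free (homogeneous Poisson) failure process.
    """
    n = len(failure_times)
    mean_time = sum(failure_times) / n
    return (mean_time - horizon / 2) / (horizon * math.sqrt(1.0 / (12 * n)))

# Illustrative data: failures arriving later and later (reliability growth)
u = laplace_trend([10, 30, 70, 150, 310], horizon=400)
```

For these illustrative times the factor is clearly negative, consistent with reliability growth.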

Exploration of underground utilities using method predicting an anomaly (이상대 판정기법을 활용한 지하매설물 탐사)

  • Ryu, Hee-Hwan;Kim, Kyoung-Yul;Lee, Kang-Ryel;Lee, Dae-Soo;Cho, Gye-Chun
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.17 no.3
    • /
    • pp.205-214
    • /
    • 2015
  • Rapid urbanization and industrialization have increased the demand for underground structures such as cable tunnels and other utility tunnels. Recently, it has become very difficult to construct new underground structures in downtown areas because of civil complaints and engineering problems stemming from insufficient information about existing underground structures, cable tunnels in particular. This lack of information about the location and direction of travel of cable tunnels causes many problems. To solve them, this study focused on a geophysical exploration of the ground that is theoretically different from previous electrical resistivity surveys. An electric field analysis was performed on ground containing cable tunnels using Gauss's law and the Laplace equation. Through this analysis, an electrical resistivity equation was obtained as a function of the cable tunnel direction, the cable tunnel location, and the electrical conductivity of the cable tunnel. A field test was performed to verify this theoretical approach, and its results provided meaningful data.

Derivation of the Instantaneous Unit Hydrograph and Estimation of the Direct Runoff by Using the Geomorphologic Parameters (지상인자에 의한 순간단위도 유도와 유출량 예측)

  • 천만복;서승덕
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.32 no.3
    • /
    • pp.87-101
    • /
    • 1990
  • The purpose of this study is to estimate the flood discharge and runoff volume of a stream by using geomorphologic parameters obtained from topographic maps, following the law of stream classification and ordering by Horton and Strahler. The present model is modified from Cheng's model, which derives the geomorphologic instantaneous unit hydrograph, and uses the Laplace transformation and convolution integral of the probability density function of the travel time at each state. The stream flow velocity parameters are determined as a function of the rainfall intensity, and the effective rainfall is calculated by the SCS method. The total direct runoff volume until the time to peak is estimated by assuming a triangular hydrograph. The model is used to estimate the time to peak, the flood discharge, and the direct runoff at the Andong, Imha, Geomchon, and Sunsan basins in the Nakdong River system. The results of the model application are as follows: 1. For each basin, as the rainfall intensity doubles from 1 mm/h to 2 mm/h with the same rainfall duration of 1 hour, the hydrographs show that the runoff volume doubles while the duration of the base flow and the time to peak stay the same, which agrees with the theory of the unit hydrograph. 2. Comparisons of the predicted and observed values show small relative errors of 0.44-7.4% in the flood discharge and a 1 hour difference in the time to peak, except for the Geomchon basin, which shows 10.32% and 2 hours, respectively. 3. When the rainfall intensity is small, the error of the flood discharge estimated by this model is relatively large; this might be due to introducing the flood velocity concept into the stream flow velocity. 4. The total direct runoff volume until the time to peak estimated by this model shows a small relative error compared with the observed data. 5. The sensitivity analysis of the velocity parameters shows that the flood discharge is sensitive to the velocity coefficient, while it is insensitive to the ratio of the arrival time of the moving portion to that of the storage portion of a stream and to the ratio of the arrival time of the stream to that of overland flow.
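
The convolution step at the heart of unit-hydrograph runoff estimation can be sketched as a discrete convolution of effective rainfall pulses with unit-hydrograph ordinates. The rainfall and ordinate values below are illustrative, not the paper's data:

```python
def direct_runoff(effective_rain, unit_hydrograph):
    """Discrete convolution: runoff ordinate q[k] = sum_i p[i] * u[k - i]."""
    q = [0.0] * (len(effective_rain) + len(unit_hydrograph) - 1)
    for i, p in enumerate(effective_rain):
        for j, u in enumerate(unit_hydrograph):
            q[i + j] += p * u
    return q

# Illustrative: two hours of effective rainfall against a 3-ordinate unit hydrograph
q = direct_runoff([2.0, 1.0], [0.2, 0.5, 0.3])
```

Because the unit-hydrograph ordinates sum to one, the total runoff volume equals the total effective rainfall, mirroring the volume-doubling behavior in result 1 above.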

Simulation comparison of standardization methods for interview scores (면접점수 표준화 방법 모의실험 비교)

  • Park, Cheol-Yong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.22 no.2
    • /
    • pp.189-196
    • /
    • 2011
  • In this study, we perform a simulation to compare frequently used standardization methods for interview scores: the trimmed mean, the rank mean, and the z-score mean. We assume that an interviewer's score is a weighted average of the interviewee's true score and independent noise, where the weight is determined by the interviewer's professionality: as professionality increases, the observed score moves closer to the true score, and as it decreases, the observed score moves closer to the noise. The final observed score is obtained by adding the interviewer's tendency bias to this weighted average. For each method, the interviewees' scores are computed, and the best method is the one whose rank correlation between its scores and the true scores is highest. Simulation results show that when the true scores come from normal distributions, the z-score mean is best in general; when the true scores come from Laplace distributions, the z-score mean beats the rank mean in a full interview system, where every interviewer meets every interviewee, while the rank mean beats the z-score mean in a half-split interview system, where each interviewer meets only half of the interviewees. The trimmed mean is worst in general.
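
The simulation design described above can be sketched as follows. The weighting scheme and parameter values are illustrative assumptions, not the paper's exact settings, and only the z-score mean method is shown:

```python
import random
import statistics

def simulate_panel(true_scores, n_interviewers, professionality, bias_sd, rng):
    """Each interviewer scores every interviewee:
    score = w * true + (1 - w) * noise + interviewer-specific bias,
    where w is the interviewer's professionality."""
    panel = []
    for _ in range(n_interviewers):
        bias = rng.gauss(0, bias_sd)
        panel.append([professionality * t
                      + (1 - professionality) * rng.gauss(0, 1)
                      + bias for t in true_scores])
    return panel  # panel[i][j]: interviewer i's score for interviewee j

def zscore_mean(panel):
    """Standardize each interviewer's row to mean 0, sd 1, then average per interviewee."""
    z = [[(x - statistics.mean(row)) / statistics.pstdev(row) for x in row]
         for row in panel]
    return [statistics.mean(col) for col in zip(*z)]

rng = random.Random(0)
true_scores = [rng.gauss(0, 1) for _ in range(20)]
final = zscore_mean(simulate_panel(true_scores, 5, 0.8, 0.5, rng))
```

Standardizing each interviewer's row removes the tendency bias before averaging, which is why the z-score mean tracks the true ranking well when professionality is high.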

Magnetization structure of Aogashima Island using vector magnetic anomalies obtained by a helicopter-borne magnetometer (항공 벡터 자기이상 자료를 이용한 아오가시마섬(청도)의 자화구조 연구)

  • Isezaki, Nobuhiro;Matsuo, Jun
    • Geophysics and Geophysical Exploration
    • /
    • v.12 no.1
    • /
    • pp.17-26
    • /
    • 2009
  • On Aogashima Island, a volcanic island located in the southernmost part of the Izu Seven Islands Chain, vector magnetic anomalies were obtained in a helicopter-borne magnetic survey. The purpose of this study was to understand the volcanic structure of Aogashima Island in order to mitigate future disasters. Commonly, total intensity anomalies (TIA) have been used to obtain the magnetic structure of a volcanic island, even though they carry intrinsic errors that have not been evaluated correctly. Because the TIA is not a physical value, it does not satisfy Maxwell's equations, Laplace's equation, etc., and so it is not suitable for physical analyses. In addition, analyses of TIA have conventionally assumed that the TIA equals the projected total intensity anomaly vector (PTA), without taking into account the effect of the intrinsic error ($\varepsilon_T$ = TIA $-$ PTA) on the results. To avoid this effect, vector magnetic anomalies were measured so that a reliable analysis of the magnetization of Aogashima Island could be carried out. In this study, we evaluated the error in TIA and used the vector anomalies to avoid this erroneous effect, thereby obtaining reliable three-dimensional vector magnetization distributions. An area of less than 1 A/m magnetization was found in the south-west part of Aogashima Island at a depth of 1.2 km. Taking the location of fumarolic activity into consideration, this lower-magnetization area is inferred to be the source of the island's fumarolic activity.

Generating Motion- and Distortion-Free Local Field Map Using 3D Ultrashort TE MRI: Comparison with T2* Mapping

  • Jeong, Kyle;Thapa, Bijaya;Han, Bong-Soo;Kim, Daehong;Jeong, Eun-Kee
    • Investigative Magnetic Resonance Imaging
    • /
    • v.23 no.4
    • /
    • pp.328-340
    • /
    • 2019
  • Purpose: To generate phase images free of motion-induced artifacts and susceptibility-induced distortion using 3D radial ultrashort TE (UTE) MRI. Materials and Methods: The field map was theoretically derived by solving Laplace's equation with appropriate boundary conditions, and was used to simulate the image distortion in conventional spin-warp MRI. The manufacturer's 3D radial imaging sequence was modified to acquire the maximum number of radial spokes in a given time by removing the spoiler gradient and sampling during both the ramp-up and ramp-down gradients. The spoke direction jumps randomly, so that each readout gradient acts as a spoiling gradient for the previous spoke. The custom raw data were reconstructed using homemade image reconstruction software written in Python. The method was applied to a phantom and to the in-vivo human brain and abdomen. The performance of UTE was compared with 3D GRE for phase mapping, and local phase mapping was compared with T2* mapping using UTE. Results: The phase map from UTE mimicked the theoretically calculated true field map, while that from 3D GRE revealed both motion-induced artifacts and geometric distortion. Motion-free imaging is particularly crucial for phase mapping of the abdomen, which typically requires multiple breath-hold acquisitions. Air pockets trapped within the digestive tract induce a large, spatially varying background field. The T2* map calculated from the UTE data suffers from non-uniform T2* values due to this background field, whereas this effect does not appear in the local phase map of the UTE data. Conclusion: The phase map generated using UTE mimicked the true field map even when objects of non-zero susceptibility were present, while the phase map generated by 3D GRE did not, owing to the significant field distortion, as theoretically calculated. UTE thus allows phase maps free of susceptibility-induced distortion without any post-processing.

Derivation of Asymptotic Formulas for the Signal-to-Noise Ratio of Mismatched Optimal Laplacian Quantizers (불일치된 최적 라플라스 양자기의 신호대잡음비 점근식의 유도)

  • Na, Sang-Sin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.5C
    • /
    • pp.413-421
    • /
    • 2008
  • The paper derives asymptotic formulas for the MSE distortion and the signal-to-noise ratio of a mismatched fixed-rate minimum-MSE Laplacian quantizer. These closed-form formulas are expressed in terms of the number N of quantization points, the mean displacement $\mu$, and the ratio $\rho$ of the standard deviation of the source to that for which the quantizer is optimally designed. Numerical results show that the principal formula is accurate in that, for rate $R = \log_2 N \geq 6$, it predicts signal-to-noise ratios within 1% of the true values for a wide range of $\mu$ and $\rho$. The new findings herein include the fact that, for heavy variance mismatch of $\rho > 3/2$, the signal-to-noise ratio increases at the rate of $9/\rho$ dB/bit, which is slower than the usual 6 dB/bit, and the fact that an optimal uniform quantizer, though optimally designed, is slightly more than critically mismatched to the source. It is also found that the signal-to-noise ratio loss due to $\mu$ is moderate. The derived formulas can be useful in the quantization of speech or music signals, which are well modeled as Laplacian sources and have changing short-term variances.
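
The paper's closed-form results are specific to its derivation, but the baseline behavior it references, that quantizer SNR on a Laplacian source grows with rate, can be checked with a rough Monte-Carlo sketch. The mid-rise uniform quantizer, the overload point, and the sample size here are assumptions for illustration, not the paper's formulas:

```python
import math
import random

def snr_db(rate, rng, load=4.0, n=100_000):
    """Monte-Carlo SNR (dB) of a mid-rise uniform quantizer with 2**rate
    levels on a unit-variance Laplacian source, clipped at +/- `load`."""
    b = 1 / math.sqrt(2)                  # Laplacian scale for unit variance
    step = 2 * load / (2 ** rate)
    sig = err = 0.0
    for _ in range(n):
        # Laplacian sample = exponential magnitude with a random sign
        x = rng.expovariate(1 / b) * rng.choice((-1, 1))
        q = (math.floor(x / step) + 0.5) * step
        q = max(min(q, load - step / 2), -load + step / 2)  # clip overload
        sig += x * x
        err += (x - q) ** 2
    return 10 * math.log10(sig / err)

rng = random.Random(1)
snr4, snr6 = snr_db(4, rng), snr_db(6, rng)
```

With a fixed loading factor, raising the rate shrinks the granular error until the overload error dominates, which is one reason mismatch analyses such as the paper's matter in practice.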

The Study for Performance Analysis of Software Reliability Model using Fault Detection Rate based on Logarithmic and Exponential Type (로그 및 지수형 결함 발생률에 따른 소프트웨어 신뢰성 모형에 관한 신뢰도 성능분석 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.3
    • /
    • pp.306-311
    • /
    • 2016
  • Software reliability in the software development process is an important issue. Infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, a software reliability cost model considering logarithmic and exponential fault detection rates, based on observations from software product testing, was studied. A new fault detection probability is introduced into the Goel-Okumoto model, which is widely used in reliability analysis, yielding a finite-failure non-homogeneous Poisson process model for the case where the software is corrected or modified. To analyze the software reliability models with time-dependent fault detection rates, the parameters were estimated by maximum likelihood from inter-failure time data. The logarithmic and exponential fault detection models proved efficient in terms of reliability, with coefficients of determination of 80% or more, and can be used as alternatives to conventional models. Software developers should consider the lifetime distribution, informed by prior knowledge of the software, to identify possible failure modes.
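
For the NHPP estimation step, the Goel-Okumoto log-likelihood for failure times observed on [0, T] can be sketched as below. The failure data are hypothetical, and the crude grid search stands in for a proper maximum likelihood routine; the paper's logarithmic and exponential detection-rate variants would swap in different m(t) and λ(t):

```python
import math

def go_loglik(a, b, times, horizon):
    """Goel-Okumoto NHPP log-likelihood on [0, horizon]:
    intensity λ(t) = a*b*exp(-b*t), mean value m(t) = a*(1 - exp(-b*t)),
    log L = Σ_i log λ(t_i) - m(horizon)."""
    return (sum(math.log(a * b) - b * t for t in times)
            - a * (1 - math.exp(-b * horizon)))

# Hypothetical inter-failure data converted to cumulative failure times
times = [8, 21, 40, 77, 130, 210]
best = max(((a, b) for a in range(5, 30)
            for b in (i / 1000 for i in range(1, 40))),
           key=lambda p: go_loglik(p[0], p[1], times, horizon=250))
```

The grid maximizer is only a sketch; in practice one would solve the likelihood equations numerically (e.g., with a bisection or Newton step on the score function).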

The Study of Infinite NHPP Software Reliability Model from the Intercept Parameter using Linear Hazard Rate Distribution (선형위험률분포의 절편모수에 근거한 무한고장 NHPP 소프트웨어 신뢰모형에 관한 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.3
    • /
    • pp.278-284
    • /
    • 2016
  • Software reliability in the software development process is an important issue. In infinite-failure NHPP software reliability models, the fault occurrence rate may be constant, monotonically increasing, or monotonically decreasing. In this paper, infinite-failure NHPP models reflecting faults that occur during repair time are presented and their properties compared. The infinite-failure software model based on the linear hazard rate distribution with an intercept parameter, a distribution also used in business economics and actuarial modeling, is presented for the comparison. The results show that a relatively large intercept parameter appears effective. Parameters were estimated by maximum likelihood, and model selection was performed using the mean square error and the coefficient of determination. The linear hazard rate distribution model proved efficient in terms of reliability, with a coefficient of determination of 90% or more, and can be used as an alternative to conventional models. Software developers should consider the intercept parameter of the lifetime distribution, informed by prior knowledge of the software, to identify possible failure modes.

Electrical Impedance Tomography for Material Profile Reconstruction of Concrete Structures (콘크리트 구조의 재료 물성 재구성을 위한 전기 임피던스 단층촬영 기법)

  • Jung, Bong-Gu;Kim, Boyoung;Kang, Jun Won;Hwang, Jin-Ha
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.32 no.4
    • /
    • pp.249-256
    • /
    • 2019
  • This paper presents an optimization framework of electrical impedance tomography for characterizing the two-dimensional electrical conductivity profile of concrete structures. The framework uses a partial-differential-equation (PDE)-constrained optimization approach that obtains the spatial distribution of electrical conductivity from electrical potentials measured at several electrodes located on the boundary of the concrete domain. The forward problem is formulated with a complete electrode model (CEM) for the electrical potential of a medium due to a current input. The CEM consists of a Laplace equation for the electrical potential and boundary conditions representing the current inputs at the electrodes on the surface. To validate the forward solution, the electrical potential calculated by the finite element method is compared with that obtained using TCAD software. The PDE-constrained optimization approach seeks the optimal values of electrical conductivity on the domain of investigation while minimizing a Lagrangian function, which consists of a least-squares objective functional and regularization terms, augmented by the weak imposition of the governing equation and boundary conditions via Lagrange multipliers. Enforcing the stationarity of the Lagrangian leads to the Karush-Kuhn-Tucker conditions, whose solution yields the optimal electrical conductivity within the target medium. Numerical inversion results are reported, showing the reconstruction of the two-dimensional electrical conductivity profile of a concrete specimen.
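
The core of the forward problem, Laplace's equation under boundary conditions, can be sketched with a simple finite-difference Jacobi iteration. This assumes uniform conductivity and Dirichlet boundaries; the grid size and boundary potentials are illustrative, and the CEM electrode terms are omitted:

```python
def solve_laplace(u0, fixed, iters=2000):
    """Jacobi iteration for Laplace's equation on a 2D grid.
    `fixed[i][j]` marks Dirichlet (boundary-condition) nodes held at u0."""
    u = [row[:] for row in u0]
    n, m = len(u), len(u[0])
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                if not fixed[i][j]:
                    # each free node relaxes toward the mean of its neighbors
                    nxt[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                        + u[i][j - 1] + u[i][j + 1])
        u = nxt
    return u

# 5x5 grid: potential 1 on the left edge, 0 on the right, linear on top/bottom
n = 5
u0 = [[1 - j / (n - 1) if (i in (0, n - 1) or j in (0, n - 1)) else 0.0
       for j in range(n)] for i in range(n)]
fixed = [[i in (0, n - 1) or j in (0, n - 1) for j in range(n)]
         for i in range(n)]
u = solve_laplace(u0, fixed)
```

With these boundary values the interior converges to a linear potential drop, the same kind of boundary-value solution the CEM forward solver computes (there with electrode contact terms and a heterogeneous conductivity field).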