• Title/Summary/Keyword: Mean Value Function (평균치 함수)

Reflection and Transmission of Electromagnetic Waves at the Oscillating Dielectric Plane Surface (Transverse Electric Wave) (진동하는 유전체면에서 전자파의 반사와 투과 (TE파에 대하여))

  • 구자건
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.10 no.4
    • /
    • pp.193-200
    • /
    • 1985
  • In the reflection and transmission of a plane TE wave at a dielectric plane surface oscillating sinusoidally perpendicular to its surface, one can assume that the boundary moves with a uniform velocity equal to its instantaneous oscillating velocity. The reflected and transmitted fields are then obtained as functions of the incidence angle, the dielectric permittivity, and the oscillating velocity by means of the extended Lorentz transformation. (A brief quasi-stationary sketch follows this entry.)

  • PDF
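
The abstract describes the quasi-stationary (instantaneous-velocity) treatment only in words. The LaTeX block below is a minimal sketch of that idea for the simplest case of normal incidence on a boundary receding at the instantaneous speed $v$; it is not the paper's oblique-incidence TE result, and the symbols $\beta$, $\gamma$, $\omega_i$ are introduced only for this illustration.

```latex
% Quasi-stationary approximation, normal incidence, boundary receding at speed v.
% omega_i: incident frequency in the laboratory frame.
\begin{align}
  \beta &= \frac{v}{c}, \qquad \gamma = \frac{1}{\sqrt{1-\beta^{2}}} \\
  % frequency seen in the instantaneous rest frame of the boundary,
  % where the ordinary Snell/Fresnel relations apply to the transformed fields
  \omega' &= \gamma\,\omega_i\,(1-\beta) \\
  % transforming the reflected wave back to the laboratory frame
  \omega_r &= \gamma\,\omega'\,(1-\beta) = \omega_i\,\frac{1-\beta}{1+\beta}
\end{align}
```

The reflected amplitude and angle follow the same route: transform the incident field into the boundary frame, apply the ordinary boundary conditions there, and transform back, which is why the results depend on the incidence angle, the permittivity, and the oscillating velocity.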

A Study on Minimum-Cost Estimation for Software under Development Considering Fault Introduction (결함도입을 고려한 개발 소프트웨어의 최저비용 산출에 관한 연구)

  • Choe, Gyu-Sik
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.05a
    • /
    • pp.345-348
    • /
    • 2005
  • Software faults are hard to locate, their exact fixes are not easy to find, and, depending on the tester's skill, new faults may be introduced during correction, so a detected fault is rarely removed perfectly. Fault removal efficiency therefore strongly affects the reliability growth of the software under development and its testing and correction costs. It is a very useful measure throughout the development process: it helps developers evaluate debugging efficiency and allows the additional workload to be predicted. Studying the effect of imperfect debugging on the reliability and cost of software under development is thus important, and it can also influence the optimal delivery time and the operating budget. Under the premise that debugging of software under development is imperfect and that new faults may be introduced during debugging, this paper extends a commonly used reliability model to the imperfect-debugging setting and studies the resulting reliability and cost problems. (A minimal cost-model sketch follows this entry.)

  • PDF
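
The abstract above does not give the model equations, so the following Python sketch is only an assumed illustration: a Goel-Okumoto-type NHPP mean value function crudely adjusted by a fault-removal-efficiency parameter, combined with the classical three-term release-cost function. Every symbol and value (`a`, `b`, `p`, `c1`, `c2`, `c3`, `t_lc`) is hypothetical and not taken from the paper.

```python
import numpy as np

def mean_value_function(t, a=100.0, b=0.1, p=0.9):
    """Expected cumulative number of removed faults by time t.

    Goel-Okumoto-type NHPP, crudely adjusted by a fault-removal
    efficiency p (p < 1 models imperfect debugging). Illustrative only.
    """
    return a * (1.0 - np.exp(-p * b * t))

def total_cost(t_release, t_lc=500.0, c1=1.0, c2=10.0, c3=0.5):
    """Classical release-time cost: fixing a fault during testing (c1),
    fixing it in the field after release (c2 > c1), plus testing effort
    per unit time (c3); t_lc is the assumed software life-cycle length."""
    m_rel = mean_value_function(t_release)
    m_lc = mean_value_function(t_lc)
    return c1 * m_rel + c2 * (m_lc - m_rel) + c3 * t_release

if __name__ == "__main__":
    grid = np.linspace(1.0, 300.0, 3000)
    costs = np.array([total_cost(t) for t in grid])
    t_opt = grid[costs.argmin()]
    print(f"approximate minimum-cost release time: {t_opt:.1f}")
```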

Defect-Limited Yield Difference Model (결함 제한적 수율변화 모델)

  • Lee, Hoong-Joo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.9 no.6
    • /
    • pp.1614-1618
    • /
    • 2008
  • This paper proposes a novel yield-difference model for layout modification. The difference in the average number of faults caused by layout modifications that increase or decrease the spacing between geometries is formulated for both short and open faults. Complex modifications, including wire bending with jogs, are also modeled by dividing patterns into segments and redefining spaces and widths. The model helps monitor yield changes and quickly generates a cost function for defect-limited yield. (A minimal critical-area sketch follows.)
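
The abstract does not state the underlying yield formulas, so the sketch below is a hedged stand-in: it uses the textbook critical-area model for short faults with a $1/x^3$ defect-size density and a Poisson yield $Y = e^{-\lambda}$ to show how a spacing change translates into a difference in the average number of faults. All parameter values (wire length, defect density, size bounds) are invented for the example.

```python
import numpy as np

def critical_area_short(spacing, wire_length, x_min=0.05, x_max=5.0, n_pts=4000):
    """Critical area for shorts between two parallel wires at the given spacing.

    Assumes the textbook model: a defect of diameter x > spacing is fatal over a
    width of roughly (x - spacing) along the wires, and defect sizes follow the
    classical 1/x^3 density normalized on [x_min, x_max]. Illustrative only.
    """
    x = np.linspace(x_min, x_max, n_pts)
    dx = x[1] - x[0]
    norm = 0.5 * (1.0 / x_min**2 - 1.0 / x_max**2)  # integral of 1/x^3 over [x_min, x_max]
    size_density = (1.0 / x**3) / norm
    fatal_width = np.clip(x - spacing, 0.0, None)
    return float(np.sum(wire_length * fatal_width * size_density) * dx)

def average_faults(spacing, wire_length=1000.0, defect_density=1e-4):
    """lambda = defect density (defects per unit area) x critical area."""
    return defect_density * critical_area_short(spacing, wire_length)

if __name__ == "__main__":
    lam_narrow = average_faults(spacing=0.2)
    lam_wide = average_faults(spacing=0.3)   # spacing increased by the layout change
    delta = lam_wide - lam_narrow
    print(f"change in average short-fault count: {delta:+.4e}")
    print(f"Poisson yield ratio exp(-delta):      {np.exp(-delta):.6f}")
```

An analogous open-fault term would use the wire width instead of the spacing; summing both would give a total average-fault difference of the kind the paper formulates.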

A Temporal Decomposition Method Based on a Rate-distortion Criterion (비트율-왜곡 기반 음성 신호 시간축 분할)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.315-322
    • /
    • 2002
  • In this paper, a new temporal decomposition method is proposed that takes into consideration not only spectral distortion but also bit rate. The interpolation functions, which are among the parameters required for temporal decomposition, are obtained from a training speech corpus. Since the interval between two targets uniquely defines the interpolation function, the interpolation can be represented without additional information. The locations of the targets are determined by minimizing the bit rate while the maximum spectral distortion is kept below a given threshold. The proposed method has been applied to compressing the LSP coefficients, which are widely used as spectral parameters. Simulation results show that an average spectral distortion of about 1.4 dB can be achieved at an average bit rate of about 8 bits/frame. (A greedy target-placement sketch follows.)
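
The exact target-placement procedure is not given in the abstract; the Python sketch below is a simplified stand-in that places targets greedily so that linear interpolation between consecutive targets keeps a per-frame error below a threshold, with plain Euclidean distance standing in for a true spectral-distortion measure. Function names, the threshold, and the toy data are assumptions.

```python
import numpy as np

def frame_error(actual, interpolated):
    """Stand-in for a spectral distortion measure (Euclidean distance here)."""
    return float(np.linalg.norm(actual - interpolated))

def place_targets(frames, max_error=0.5):
    """Greedy temporal decomposition: keep extending the current segment as long
    as linear interpolation between its end targets stays within max_error for
    every intermediate frame. Returns the indices of the selected targets."""
    n = len(frames)
    targets = [0]
    start = 0
    for end in range(2, n):
        t = np.linspace(0.0, 1.0, end - start + 1)[1:-1]
        interp = [(1 - a) * frames[start] + a * frames[end] for a in t]
        ok = all(frame_error(frames[start + i + 1], f) <= max_error
                 for i, f in enumerate(interp))
        if not ok:
            targets.append(end - 1)   # last frame that still satisfied the bound
            start = end - 1
    targets.append(n - 1)
    return targets

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy "LSP trajectories": a slowly drifting 10-dimensional vector per frame
    frames = np.cumsum(0.1 * rng.standard_normal((100, 10)), axis=0)
    idx = place_targets(frames, max_error=0.8)
    print(f"{len(idx)} targets for {len(frames)} frames:", idx[:10], "...")
```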

Performance Enhancement of On-Line Scheduling Algorithm for IRIS Real-Time Tasks using Partial Solution (부분 해를 이용한 IRIS 실시간 태스크용 온-라인 스케줄링 알고리즘의 성능향상)

  • 심재홍;최경희;정기현
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.1
    • /
    • pp.12-21
    • /
    • 2003
  • In this paper, we propose an on-line scheduling algorithm whose goal is to maximize the total reward of IRIS (Increasing Reward with Increasing Service) real-time tasks that have reward functions and arrive dynamically in the system. We focus on enhancing the performance of the scheduling algorithm, based on two main ideas. First, we show that the problem of maximizing the total reward of dynamic tasks can also be solved by finding the minimum of the maximum derivatives of the reward functions. Second, we observe that only a few of the scheduled tasks are serviced before a new task arrives, and the remaining tasks are rescheduled together with the new task. Based on this observation, the proposed algorithm does not schedule all tasks in the system at every scheduling point, but only a part of them. The performance of the proposed algorithm is verified through simulations for various cases. The simulation results show that the computational complexity of the proposed algorithm is $O(N^2)$ in the worst case, equal to that of previous algorithms, but close to $O(N)$ on average. (A marginal-reward allocation sketch follows.)
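
The abstract's first idea, recasting reward maximization as finding the minimum of the maximum derivatives of the reward functions, corresponds to the classical marginal-reward (water-filling) argument for concave reward functions. The Python sketch below illustrates only that argument, not the paper's on-line algorithm; the concrete reward functions and tolerances are assumptions.

```python
import numpy as np

# Concave IRIS-style reward functions r_i(t) = w_i * (1 - exp(-k_i * t)),
# whose derivatives r_i'(t) = w_i * k_i * exp(-k_i * t) are decreasing.
WEIGHTS = np.array([3.0, 2.0, 5.0])
RATES = np.array([0.5, 1.0, 0.2])

def time_for_marginal(lam):
    """Service time t_i at which r_i'(t_i) equals the marginal level lam
    (0 if even r_i'(0) is already below lam)."""
    t = np.log(WEIGHTS * RATES / lam) / RATES
    return np.clip(t, 0.0, None)

def allocate(total_time, tol=1e-9):
    """Bisection on the common marginal-reward level lam so that the allocated
    service times sum to total_time (a lower lam allocates more time)."""
    lo, hi = 1e-12, float((WEIGHTS * RATES).max())
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        if time_for_marginal(lam).sum() > total_time:
            lo = lam          # too much time allocated: raise the marginal level
        else:
            hi = lam
        if hi - lo < tol:
            break
    return time_for_marginal(0.5 * (lo + hi))

if __name__ == "__main__":
    t = allocate(total_time=6.0)
    reward = (WEIGHTS * (1.0 - np.exp(-RATES * t))).sum()
    print("allocated times:", np.round(t, 3), "total reward:", round(reward, 3))
```

At the returned allocation every task with positive service time has the same derivative value, and that common value is the smallest achievable maximum derivative for the given total service time.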

Nonlinear Approximations Using Modified Mixture Density Networks (변형된 혼합 밀도 네트워크를 이용한 비선형 근사)

  • Cho, Won-Hee;Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.7
    • /
    • pp.847-851
    • /
    • 2004
  • In the original mixture density network (MDN) introduced by Bishop and Nabney, the parameters of the conditional probability density function are represented by the output vector of a single multi-layer perceptron. Among the recent modifications of the MDN is the so-called modified mixture density network, in which each of the priors, conditional means, and covariances is represented by an independent multi-layer perceptron. In this paper, we consider a further simplification of the modified MDN, in which the conditional means are linear with respect to the input variable, together with the development of a MATLAB program for this simplification. We first briefly review the original mixture density network, then the modified mixture density network in which independent multi-layer perceptrons learn the parameters of the conditional probability, and finally present the further modification in which the conditional means are linear in the input. The applicability of the presented method is shown via an illustrative simulation example. (A density-evaluation sketch follows.)
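
The paper's implementation is in MATLAB and is not reproduced in the abstract; the NumPy sketch below merely evaluates the conditional density of the simplified form described, with softmax priors and log-variances produced by small single-hidden-layer networks and conditional means that are linear in the input. Layer sizes, shapes, and the random (untrained) weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D_IN = 3, 2          # mixture components, input dimension (scalar output y)

# small one-hidden-layer nets for priors and log-variances (random, untrained)
W1p, W2p = rng.normal(size=(8, D_IN)), rng.normal(size=(K, 8))
W1v, W2v = rng.normal(size=(8, D_IN)), rng.normal(size=(K, 8))
# conditional means are linear in the input: mu_k(x) = A_k @ x + b_k
A, b = rng.normal(size=(K, D_IN)), rng.normal(size=K)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def conditional_density(y, x):
    """p(y | x) = sum_k pi_k(x) * N(y; A_k x + b_k, sigma_k(x)^2)."""
    pi = softmax(W2p @ np.tanh(W1p @ x))          # mixing coefficients
    sigma = np.exp(W2v @ np.tanh(W1v @ x))        # positive standard deviations
    mu = A @ x + b                                # linear conditional means
    gauss = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return float(pi @ gauss)

if __name__ == "__main__":
    x = np.array([0.3, -1.2])
    print("p(y=0.5 | x) =", conditional_density(0.5, x))
```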

A Comparison Study between Uniform Testing Effort and Weibull Testing Effort during Software Development (소프트웨어 개발시 일정테스트노력과 웨이불 테스트 노력의 비교 연구)

  • 최규식;장원석;김종기
    • Journal of Information Technology Application
    • /
    • v.3 no.3
    • /
    • pp.91-106
    • /
    • 2001
  • In this paper we propose a software reliability growth model incorporating the amount of uniform and Weibull testing effort spent during the software testing phase. The time-dependent behavior of the testing effort is described by uniform and Weibull curves. Assuming that the error detection rate per unit of testing effort spent during the testing phase is proportional to the current error content, the model is formulated as a nonhomogeneous Poisson process. Using this model, a data-analysis method for software reliability measurement is developed. The optimum release time is determined by considering the value of the initial reliability $R(x_0)$. For the uniform testing effort the conditions are $R(x_0) > R_0$, $R_0 > R(x_0) > R_0^{\,d}$, and $R(x_0) < R_0^{\,d}$; the ideal case is $R_0 > R(x_0) > R_0^{\,d}$. Likewise, for the Weibull testing effort the conditions are $R(x_0) \ge R_0$, $R_0 > R(x_0) > R(\cdot)$, and $R(x_0) < R(\cdot)$, where the lower bound $R(\cdot)$ is omitted in the abstract; the ideal case is $R_0 > R(x_0) > R(\cdot)$. (A mean-value-function sketch follows this entry.)

  • PDF
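
Since the search keyword of this page is the mean value function, the block below writes out the conventional Yamada-type NHPP mean value function driven by a cumulative testing-effort function, with the usual uniform and Weibull-type effort curves. These are the standard textbook forms and are only assumed to match the paper's formulation; $a$, $r$, $\alpha$, $\beta$, $\gamma$ are the conventional symbols.

```latex
% NHPP mean value function driven by cumulative testing effort W(t):
\begin{align}
  m(t) &= a\left(1 - e^{-r\,W(t)}\right), \qquad a > 0,\; r > 0 \\
  % uniform (constant-rate) testing effort
  W_{\text{uniform}}(t) &= \alpha\, t \\
  % Weibull-type testing-effort function
  W_{\text{Weibull}}(t) &= \alpha\left(1 - e^{-\beta t^{\gamma}}\right)
\end{align}
% a: expected total number of errors; r: error detection rate per unit effort.
```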

A Critical Evaluation of Dichotomous Choice Responses in Contingent Valuation Method (양분선택형 조건부가치측정법 응답자료의 실증적 쟁점분석)

  • Eom, Young Sook
    • Environmental and Resource Economics Review
    • /
    • v.20 no.1
    • /
    • pp.119-153
    • /
    • 2011
  • This study reviews various aspects of the model-formulation process for dichotomous choice responses in the contingent valuation method (CVM), which has been increasingly used in the preliminary feasibility tests of Korean public investment projects. The theoretical review emphasizes consistency between the WTP estimation process and the WTP measurement process. The empirical analysis suggests that the two common parametric models for dichotomous choice responses (RUM and RWTP) and the two commonly used probability distributions for the random component (probit and logit) result in almost the same empirical WTP distributions, as long as the WTP function is specified as a linear function of the bid amount. However, the efficiency gain of the DB response over the SB response was supported only on the ground that the two CV responses are derived from the same WTP distribution. Moreover, for the exponential WTP function, which guarantees non-negative WTP measures, the sample mean WTP was quite different from the median WTP when the scale parameter of the WTP function turned out to be large. (A short mean-versus-median illustration follows this entry.)

  • PDF
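
As a hedged illustration of the abstract's last point, the block below shows why the mean and median diverge for an exponential (log-linear) WTP specification as the scale parameter grows. This is the standard log-normal identity, not a result taken from the paper.

```latex
% Exponential WTP specification with a normal error (log-normal WTP):
\begin{align}
  \mathrm{WTP} &= \exp\!\left(\mathbf{x}'\boldsymbol{\beta} + \sigma\varepsilon\right),
      \qquad \varepsilon \sim N(0,1) \\
  \operatorname{median}(\mathrm{WTP}) &= \exp\!\left(\mathbf{x}'\boldsymbol{\beta}\right) \\
  \operatorname{mean}(\mathrm{WTP})   &= \exp\!\left(\mathbf{x}'\boldsymbol{\beta}
      + \tfrac{1}{2}\sigma^{2}\right)
      = \operatorname{median}(\mathrm{WTP})\cdot e^{\sigma^{2}/2}
\end{align}
% The gap factor e^{sigma^2/2} grows rapidly with the scale parameter sigma.
```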

Expansion of Sensitivity Analysis for Statistical Moments and Probability Constraints to Non-Normal Variables (비정규 분포에 대한 통계적 모멘트와 확률 제한조건의 민감도 해석)

  • Huh, Jae-Sung;Kwak, Byung-Man
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.34 no.11
    • /
    • pp.1691-1696
    • /
    • 2010
  • Efforts have been made to reflect a system's uncertainties at the design stage, and robust optimization and reliability-based design optimization are two of the best-known methodologies. The statistical moments of a performance function and constraints corresponding to probability conditions appear in the formulation of these methodologies, so it is essential to calculate them effectively and accurately. Their sensitivities must also be determined when nonlinear programming is used during the optimization process. The sensitivity of statistical moments and probability constraints has been expressed in integral form but limited to normal random variables; we aim to expand the sensitivity formulation to non-normal variables. No additional function evaluations are required when the statistical moments and the failure or satisfaction probabilities have already been obtained at a design point. On the other hand, the accuracy of the sensitivity results can be worse than that of the moments, because the target function is expressed as a product of the performance function and explicit functions derived from the probability density functions. (A score-function sketch follows.)
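
The abstract describes the sensitivity as an integral of the performance function multiplied by functions derived from the probability density. The block below writes the generic score-function (likelihood-ratio) identity that has exactly this product form; it is a standard identity and only assumed to correspond to the paper's formulation, with $\theta$ denoting a distribution parameter of the random variable $X$ and $g$ the performance function.

```latex
% Sensitivity of a statistical moment with respect to a distribution parameter:
\begin{align}
  \frac{\partial}{\partial \theta}\, \mathbb{E}\!\left[g(X)\right]
    &= \int g(x)\, \frac{\partial f(x;\theta)}{\partial \theta}\, dx
     = \int g(x)\, \frac{\partial \ln f(x;\theta)}{\partial \theta}\, f(x;\theta)\, dx \\
    &= \mathbb{E}\!\left[\, g(X)\, \frac{\partial \ln f(X;\theta)}{\partial \theta} \,\right]
\end{align}
% The same samples or quadrature points used for the moment itself can be
% reused, so no additional evaluations of g are needed; accuracy depends on the
% product g(x) * score, which can be less smooth than g alone.
```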

Risk and the Economics of Acid Chemical Use in Korean Laver Farming (김 양식에 있어서 산 이용의 생산위험과 경제성에 관한 연구)

  • Park Seong Kwae
    • The Journal of Fisheries Business Administration
    • /
    • v.32 no.1
    • /
    • pp.41-55
    • /
    • 2001
  • The purpose of this paper is to develop a theoretical framework for examining the use of inorganic versus organic acid in laver (seaweed) farming and laver farmers' risk-averse behavior, and to draw policy implications. Production risk or price risk is one of the most important decision variables facing laver farmers, and the inorganic acid (or waste hydrochloric acid) and organic acid at issue can be viewed, like pesticides in agriculture, as a kind of insurance production input used to minimize production risk. Laver farmers' production risk can be measured by the mean (first moment), variance (second moment), and skewness (third moment) of output, and farmers know from experience that, even if the probability is low, a widespread and severe outbreak of laver disease or weed algae (e.g., green laver) causes a serious decline in product quality. Farmers therefore adopt production technologies that minimize not only the variance of production but also its downside probability distribution. To analyze this risk-averse behavior, expected utility theory is adopted, and the unknown true utility function is approximated by a Taylor series expansion up to the third moment. From the first-order conditions for maximizing the expected utility of profit, which acid (inorganic or organic) is used, and in what quantity, is determined by the magnitude of the farmers' risk-aversion coefficients with respect to the variance and downside distribution of output and by the elasticities of the input. In particular, when the downside-risk-aversion coefficient is high and the elasticity of the acid input with respect to the third moment is large, farmers use more of the relatively strong and cheap acid to reduce downside risk. If the two acids are equally effective, the market prices of inorganic and organic acid and the government's acid-price policy will significantly affect farmers' choice of acid and the amount used. If inorganic acid is used extensively and intensively, the reduction of production risk obtained by using waste industrial hydrochloric acid in laver farming may lead to an increased risk of damage to the marine ecosystem, and herein lies the dilemma of the government's acid policy. A firm policy objective of balancing productivity growth in laver farming against environmental conservation is therefore needed; if this objective wavers, government policy on acid use may seriously distort either productivity or environmental conservation. (A third-moment expansion sketch follows this entry.)

  • PDF
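
The abstract states that the unknown utility function is approximated by a Taylor series up to the third moment; the block below writes that standard expansion explicitly. The notation ($\pi$ for profit; $\mu$, $\sigma^2$, $m_3$ for the first three central moments) is chosen for this illustration and is not taken from the paper.

```latex
% Third-order Taylor expansion of expected utility around mean profit mu:
\begin{align}
  \mathbb{E}\!\left[U(\pi)\right]
    &\approx U(\mu) + \tfrac{1}{2}\,U''(\mu)\,\sigma^{2}
            + \tfrac{1}{6}\,U'''(\mu)\,m_{3}
\end{align}
% where sigma^2 = E[(pi - mu)^2] and m_3 = E[(pi - mu)^3].
% U'' < 0 captures aversion to variance; U''' > 0 captures aversion to
% downside (negatively skewed) outcomes, which is why an input that raises m_3
% can be attractive even when it does little for mean output.
```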