• Title/Summary/Keyword: Deterministic models


Formant Synthesis of Haegeum Sounds Using Cepstral Envelope (캡스트럼 포락선을 이용한 해금 소리의 포만트 합성)

  • Hong, Yeon-Woo;Cho, Sang-Jin;Kim, Jong-Myon;Chong, Ui-Pil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.526-533
    • /
    • 2009
  • This paper proposes a formant synthesis method for Haegeum sounds that uses the cepstral envelope for spectral modeling. Spectral modeling synthesis (SMS) is a technique that models time-varying spectra as a combination of sinusoids (the "deterministic" part) and a time-varying filtered noise component (the "stochastic" part). SMS is appropriate for synthesizing the sounds of string and wind instruments, whose harmonics are evenly distributed over the whole frequency band. Formants extracted from the cepstral envelope are parameterized for synthesis of the sinusoids. A resonator obtained by the Impulse Invariant Transform (IIT) is applied to synthesize the sinusoids, and the results are bandpass filtered to adjust their magnitudes. The noise is calculated by first generating the sinusoids with formant synthesis, subtracting them from the original sound, and then removing the remaining harmonics. Linear interpolation is used to model the noise. The synthesized sounds, made by summing the sinusoids and the modeled noise, are shown to be similar to the original Haegeum sounds.
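The deterministic/stochastic split that SMS performs can be illustrated with a toy decomposition. The sketch below is not the authors' Haegeum pipeline (no cepstral envelope, formant parameters, or IIT resonator); it only shows the core SMS idea of subtracting synthesized sinusoids from a signal to isolate the noise residual, with made-up partial frequencies and amplitudes.

```python
import numpy as np

def synth_sinusoids(freqs, amps, sr, n):
    """Deterministic part: a sum of constant-frequency partials."""
    t = np.arange(n) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))

sr, n = 8000, 4096
rng = np.random.default_rng(0)

# Toy string-like signal: three harmonics plus low-level breath noise.
harmonics, amps = [220.0, 440.0, 660.0], [1.0, 0.5, 0.25]
signal = synth_sinusoids(harmonics, amps, sr, n) + 0.01 * rng.standard_normal(n)

# SMS split: synthesize the sinusoids, subtract to get the stochastic residual.
deterministic = synth_sinusoids(harmonics, amps, sr, n)
stochastic = signal - deterministic

# Resynthesis sums both parts (here exact; in practice the noise is modeled).
resynth = deterministic + stochastic
```

In the actual method the partial parameters would come from the cepstral-envelope formants and the residual would be smoothed by linear interpolation rather than kept verbatim.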

Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.131-145
    • /
    • 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the area of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. Accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing the fill rate. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes accurate quotation of ATP quite difficult. Various alternative models for an ATP system with time lags have been developed and evaluated. In most cases, these models have assumed that the time lags are integer multiples of a unit time grid. However, integer time lags are very rare in practice, and models developed using integer time lags therefore only approximate real systems. The differences introduced by this approximation frequently result in significant accuracy degradation. To introduce an ATP model with time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated the research directly related to the topic of this paper.
They propose a modeling framework for a system with non-integer time lags and show how to apply the framework to a variety of systems, including continuous time series, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and is capable of representing real-world systems more accurately. Previously, to cope with non-integer time lags, a system was usually modeled either by rounding lags to the nearest integers or by subdividing the time grid to make the lags become integer multiples of the grid. Each approach has a critical weakness: the first either underestimates lead times, potentially leading to infeasibilities, or overestimates them, potentially resulting in excessive work-in-process; the second drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management. We focus on a globally networked system of a worldwide headquarters, distribution centers, and manufacturing facilities. We develop a mixed integer programming (MIP) model for the ATP process, including the definition of the required data flow. The illustrative ATP module shows how the proposed system affects SCM. The system we are concerned with is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers. For this system, we consider an ATP scheduling and capacity allocation problem. In this study, we proposed a model for the ATP system in SCM using the dynamic production function and considering non-integer time lags. The model is developed under a framework suitable for non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm.
We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose an efficient heuristic procedure using an evolutionary system to solve it. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real variables. The proposed regeneration procedures, which evaluate each infeasible chromosome, make the solutions converge to the optimum quickly.
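The evolutionary procedure described above can be sketched in miniature. The code below is a hypothetical illustration, not the paper's ATP formulation: it evolves real-valued chromosomes (mirroring the real-variable representation the authors mention) against a stand-in quadratic cost, using elitist selection, arithmetic crossover, and bounded mutation.

```python
import random

random.seed(1)

def fitness(x):
    # Stand-in for the ATP objective: quadratic cost around an assumed target 0.7.
    return sum((xi - 0.7) ** 2 for xi in x)

def crossover(a, b):
    # Arithmetic crossover: average the two parent chromosomes gene-wise.
    return [(ai + bi) / 2 for ai, bi in zip(a, b)]

def mutate(x, rate=0.2):
    # Perturb each gene with probability `rate`, clamped to [0, 1].
    return [min(1.0, max(0.0, xi + random.uniform(-0.1, 0.1)))
            if random.random() < rate else xi for xi in x]

pop = [[random.random() for _ in range(4)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[:10]                       # elitist selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = min(pop, key=fitness)
```

A real ATP solver would replace the toy fitness with the MIP objective and add the regeneration step for infeasible chromosomes.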

Life Cycle Cost Analysis at Design Stage of Cable Stayed Bridges based on the Performance Degradation Models (성능저하모델에 기초한 사장교의 설계단계 생애주기비용 분석)

  • Koo, Bon Sung;Han, Sang Hoon;Cho, Choong Yuen
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.33 no.5
    • /
    • pp.2081-2091
    • /
    • 2013
  • Recently, the demand for the practical application of life-cycle cost effectiveness in the design and rehabilitation of civil infrastructure has been growing at an unprecedented rate in civil engineering practice. Accordingly, in the 21st century, it is almost certain that life-cycle cost, together with value engineering, will become a new paradigm for all engineering decision problems in practice. However, in spite of impressive progress in research on LCC, most studies have focused only on deterministic or probabilistic LCC analysis approaches and on general bridges at the design stage. Thus, the goal of this study is to develop a practical and realistic methodology for life-cycle cost (LCC)-effective optimum decision-making based on reliability analysis of bridges at the design stage. The proposed updated methodology is based on the concept of Life Cycle Performance (LCP), expressed as the sum of the present values of the expected direct/indirect maintenance costs under the expected optimal maintenance scenario. The updated LCC methodology proposed in this study is applied to the optimum design problem of an actual cable-stayed highway bridge. In conclusion, based on the application of the proposed methods to an actual example bridge, it is demonstrated that the updated methodology for performance-based LCC analysis can be applied in practice as an efficient and practical LCC analysis method at the design stage.
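The core LCC quantity, a present-value sum of expected costs, can be written compactly. This is a generic discounting sketch with assumed numbers, not the paper's bridge model or its optimal maintenance scenario.

```python
def life_cycle_cost(initial_cost, annual_costs, rate):
    """Present value of initial plus expected future costs: sum of C_t / (1+r)^t."""
    return initial_cost + sum(c / (1 + rate) ** t
                              for t, c in enumerate(annual_costs, start=1))

# Hypothetical example: 100 up front, 5 per year of expected maintenance
# for 3 years, discounted at 5%.
lcc = life_cycle_cost(100.0, [5.0] * 3, 0.05)
```

Probabilistic LCC analysis would replace the fixed `annual_costs` with expected values weighted by failure or degradation probabilities from the reliability analysis.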

Application of an Automated Time Domain Reflectometry to Solute Transport Study at Field Scale: Transport Concept (시간영역 광전자파 분석기 (Automatic TDR System)를 이용한 오염물질의 거동에 관한 연구: 오염물질 운송개념)

  • Kim, Dong-Ju
    • Economic and Environmental Geology
    • /
    • v.29 no.6
    • /
    • pp.713-724
    • /
    • 1996
  • The time-series resident solute concentrations, monitored at two field plots using the automated 144-channel TDR system by Kim (this issue), are used to investigate the dominant transport mechanism at the field scale. Two models, based on contradictory assumptions for describing solute transport in the vadose zone, are fitted to the measured mean breakthrough curves (BTCs): the deterministic one-dimensional convection-dispersion model (CDE) and the stochastic-convective lognormal transfer function model (CLT). In addition, moment analysis has been performed using the probability density functions (pdfs) of the travel time of resident concentration. Results of the moment analysis have shown that the first and second time moments of the resident pdf are larger than those of the flux pdf. Based on the time moments, expressed as functions of the model parameters, the variance and dispersion of resident solute travel times are derived. The relationship between the variance or dispersion of solute travel time and depth has been found to be identical for both the time-series flux and resident concentrations. Based on these relationships, the two models have been tested. However, due to the significant variation of transport properties across depth, the test has led to unreliable results. Consequently, model performance has been evaluated based on the predictability of the time-series resident BTCs at other depths after calibration at the first depth. The evaluation of model predictability has led to a clear conclusion: for both experimental sites the CLT model gives more accurate predictions than the CDE model. This suggests that solute transport in natural field soils is more likely governed by a stream-tube model concept with correlated flow than by a complete-mixing model. The poor prediction of the CDE model is attributed to its underestimation of solute spreading, which results in an overprediction of the peak concentration.
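The moment analysis of travel-time pdfs can be sketched numerically. The snippet below computes the first two time moments for an assumed lognormal travel-time pdf (the functional form underlying the CLT model); the parameters are illustrative, not the paper's fitted values.

```python
import numpy as np

def time_moments(t, pdf):
    """First two time moments (mean, variance) of a travel-time pdf on grid t."""
    dt = t[1] - t[0]
    pdf = pdf / (pdf * dt).sum()          # normalize numerically
    m1 = (t * pdf * dt).sum()             # first moment: mean travel time
    m2 = (t ** 2 * pdf * dt).sum()        # second raw moment
    return m1, m2 - m1 ** 2               # mean, variance

# Assumed lognormal travel-time pdf with illustrative parameters.
mu, sigma = 1.0, 0.4
t = np.linspace(0.01, 20, 4000)
pdf = (np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2))
       / (t * sigma * np.sqrt(2 * np.pi)))
mean, var = time_moments(t, pdf)
```

For the lognormal the analytic mean is exp(mu + sigma^2/2) and the variance is (exp(sigma^2) - 1) * exp(2*mu + sigma^2), which the numerical moments should reproduce.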


A Reliability Analysis of Shallow Foundations using a Single-Mode Performance Function (단일형 거동함수에 의한 얕은 기초의 신뢰도 해석 -임해퇴적층의 토성자료를 중심으로-)

  • 김용필;임병조
    • Geotechnical Engineering
    • /
    • v.2 no.1
    • /
    • pp.27-44
    • /
    • 1986
  • The measured soil data are analyzed with descriptive statistics and classified into four models: uncorrelated-normal (UNNO), uncorrelated-nonnormal (UNNN), correlated-normal (CONO), and correlated-nonnormal (CONN). This paper presents comparisons of the reliability index and check points using the advanced first-order second-moment method for the four models, together with a BASIC program. A single-mode performance function is constructed from the basic design variables of bearing capacity and settlement of shallow foundations, taking the analyzed soil data as input. The main conclusions of this study are summarized as follows: 1. In the bearing-capacity mode, cohesion and the bearing-capacity factors from the C-U test are accepted as normal and lognormal distributions, respectively, and are weakly negatively correlated with each other. Since the reliability index of the CONN model is the lowest of the four models, it could be recommended for reliability-based design, whereas the other models might overestimate the geotechnical conditions. 2. In the settlement mode, the virgin compression ratio and preconsolidation pressure are fitted to normal and lognormal distributions, respectively. Constraining settlements to the lower values computed by the deterministic method, the CONN model again gives the lowest reliability of the four models.
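The effect of correlation on the reliability index can be shown with a minimal second-moment calculation. This sketch assumes a generic linear performance function g = R - S with invented moments; it is not the paper's bearing-capacity or settlement formulation, but it reproduces the qualitative finding that accounting for (negative) correlation lowers the computed reliability index.

```python
from math import sqrt

def reliability_index(mu_r, sd_r, mu_s, sd_s, rho=0.0):
    """Second-moment reliability index beta for g = R - S,
    allowing correlation rho between resistance R and load S."""
    var_g = sd_r ** 2 + sd_s ** 2 - 2 * rho * sd_r * sd_s
    return (mu_r - mu_s) / sqrt(var_g)

# Hypothetical moments: resistance 150 +/- 20, load 100 +/- 15.
beta_uncorr = reliability_index(150.0, 20.0, 100.0, 15.0)            # 2.0
beta_corr = reliability_index(150.0, 20.0, 100.0, 15.0, rho=-0.3)    # lower
```

With negative correlation the variance of g grows, so the correlated model yields the smaller (more conservative) reliability index, consistent with the conclusion above.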


Statics corrections for shallow seismic refraction data (천부 굴절법 탄성파 탐사 자료의 정보정)

  • Palmer, Derecke;Nikrouz, Ramin;Spyrou, Andreur
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.1
    • /
    • pp.7-17
    • /
    • 2005
  • The determination of seismic velocities in refractors for near-surface seismic refraction investigations is an ill-posed problem. Small variations in the computed time parameters can result in quite large lateral variations in the derived velocities, which are often artefacts of the inversion algorithms. Such artefacts are usually not recognized or corrected with forward modelling. Therefore, if detailed refractor models are sought with model based inversion, then detailed starting models are required. The usual source of artefacts in seismic velocities is irregular refractors. Under most circumstances, the variable migration of the generalized reciprocal method (GRM) is able to accommodate irregular interfaces and generate detailed starting models of the refractor. However, where the very-near-surface environment of the Earth is also irregular, the efficacy of the GRM is reduced, and weathering corrections can be necessary. Standard methods for correcting for surface irregularities are usually not practical where the very-near-surface irregularities are of limited lateral extent. In such circumstances, the GRM smoothing statics method (SSM) is a simple and robust approach, which can facilitate more-accurate estimates of refractor velocities. The GRM SSM generates a smoothing 'statics' correction by subtracting an average of the time-depths computed with a range of XY values from the time-depths computed with a zero XY value (where the XY value is the separation between the receivers used to compute the time-depth). The time-depths to the deeper target refractors do not vary greatly with varying XY values, and therefore an average is much the same as the optimum value. However, the time-depths for the very-near-surface irregularities migrate laterally with increasing XY values and they are substantially reduced with the averaging process. 
As a result, the time-depth profile averaged over a range of XY values is effectively corrected for the near-surface irregularities. In addition, the time-depths computed with a zero XY value are the sum of both the near-surface effects and the time-depths to the target refractor. Therefore, their subtraction generates an approximate 'statics' correction, which in turn is subtracted from the traveltimes. The GRM SSM is essentially a smoothing procedure, rather than a deterministic weathering-correction approach, and it is most effective with near-surface irregularities of quite limited lateral extent. Model and case studies demonstrate that the GRM SSM substantially improves the reliability of determining detailed seismic velocities in irregular refractors.
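The averaging step at the heart of the GRM SSM can be sketched on synthetic profiles. Everything below is invented for illustration: a smooth refractor, plus a narrow near-surface bump that migrates laterally and attenuates with increasing XY. The sketch only demonstrates that subtracting the XY-averaged time-depths from the zero-XY time-depths isolates an approximate statics correction.

```python
import numpy as np

def grm_ssm_statics(time_depths_by_xy):
    """GRM smoothing statics: subtract the average of the time-depth profiles
    over a range of XY values from the zero-XY profile."""
    zero_xy = time_depths_by_xy[0]
    averaged = np.mean(time_depths_by_xy, axis=0)
    return zero_xy - averaged      # approximate near-surface 'statics'

x = np.arange(50)
refractor = 0.1 + 0.001 * x                        # smooth target time-depths
anomaly = np.where(np.abs(x - 25) < 2, 0.02, 0.0)  # narrow near-surface bump

# Toy behavior: with increasing XY the anomaly shifts and is attenuated,
# while the deep-refractor time-depths stay essentially constant.
profiles = np.stack([refractor + np.roll(anomaly, k) / (k + 1) for k in range(5)])
statics = grm_ssm_statics(profiles)
```

The correction is nonzero only around the bump and leaves the smooth refractor untouched, which is the intended smoothing behavior.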

A Relief Method to Obtain the Solution of Optimal Problems (최적화문제를 해결하기 위한 완화(Relief)법)

  • Song, Jeong-Young;Lee, Kyu-Beom;Jang, Jigeul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.1
    • /
    • pp.155-161
    • /
    • 2020
  • In general, optimization problems are difficult to solve directly. A simple problem can be solved at once, but the more complex the problem, the larger the number of cases that must be examined. This study concerns the optimization of AI neural networks; what we deal with here is the relief method for constructing an AI network. The main topics are non-deterministic issues such as the stability and instability of the overall network state, and the reduction of cost and energy. To this end, we discuss associative memory models, i.e., methods in which memory information at local minima does not settle on spurious information; simulated annealing, a method of estimating the direction with the lowest possible value and combining it with the previous state to move to a lower value; and nonlinear programming problems, where the input/output is checked and corrected by applying an appropriate gradient descent method to minimize a very large number of objective functions. This research suggests a useful theoretical approach to the relief method for solving optimization problems. It will therefore be a good proposal for efficient application when constructing a new AI neural network.
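Simulated annealing as described, accepting occasional uphill moves to escape local minima while the temperature cools, can be sketched on a toy one-dimensional energy function. The function and parameters below are invented; the paper's setting is a neural-network energy landscape.

```python
import math
import random

random.seed(42)

def energy(x):
    # Toy multimodal objective with two unequal minima near x = +/-1.27.
    return x ** 4 - 3 * x ** 2 + 0.5 * x

def simulated_annealing(x, temp=2.0, cooling=0.995, steps=4000):
    best = x
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if (energy(cand) < energy(x)
                or random.random() < math.exp((energy(x) - energy(cand)) / temp)):
            x = cand
        if energy(x) < energy(best):
            best = x
        temp *= cooling                    # gradually freeze the search
    return best

x_min = simulated_annealing(0.0)
```

Early in the run the high temperature lets the state cross the barrier at x = 0, so the search is not trapped in the shallower of the two basins.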

Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.;Calderbank, Robert;Jafarpour, Sina
    • Journal of Communications and Networks
    • /
    • v.12 no.4
    • /
    • pp.289-307
    • /
    • 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence, termed the worst-case coherence and the average coherence, among the columns of a design matrix. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to rank deficiency of the submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries.
In particular, this part of the analysis in the paper implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals irrespective of the phases of the nonzero entries even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
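The two coherence measures can be computed directly from the Gram matrix of a column-normalized design matrix. A minimal sketch follows, using the definitions as commonly stated in this line of work (worst-case: largest absolute off-diagonal inner product; average: largest row-average of signed off-diagonal inner products); the demo matrix is a random Gaussian one, the class for which the paper bounds the average coherence.

```python
import numpy as np

def coherences(A):
    """Worst-case and average coherence of a matrix with unit-norm columns."""
    A = A / np.linalg.norm(A, axis=0)    # normalize columns
    G = A.T @ A                          # Gram matrix of column inner products
    p = G.shape[1]
    off = G - np.eye(p)                  # zero out the unit diagonal
    worst = np.abs(off).max()                        # max_ij |<a_i, a_j>|
    avg = np.abs(off.sum(axis=1)).max() / (p - 1)    # max_i |mean_j <a_i, a_j>|
    return worst, avg

rng = np.random.default_rng(0)
worst, avg = coherences(rng.standard_normal((64, 128)))
```

For random matrices the signed inner products largely cancel, so the average coherence is typically far smaller than the worst-case coherence, which is what makes the OST coherence property easy to satisfy.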

Rolling Horizon Implementation for Real-Time Operation of Dynamic Traffic Assignment Model (동적통행배정모형의 실시간 교통상황 반영)

  • SHIN, Seong Il;CHOI, Kee Choo;OH, Young Tae
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.4
    • /
    • pp.135-150
    • /
    • 2002
  • The basic assumption of analytical Dynamic Traffic Assignment (DTA) models is that traffic demand and network conditions are known a priori and unchanging over the whole planning horizon. This assumption may not be realistic in practice, because traffic demand and network conditions may vary from time to time. The rolling horizon implementation recognizes the fact that the prediction of origin-destination (OD) matrices and network conditions is usually more accurate over a short period of time, while further into the horizon there exists substantial uncertainty. In the rolling horizon implementation, therefore, rather than assuming that time-dependent OD matrices and network conditions are known at the beginning of the horizon, it is assumed that deterministic information on OD and traffic conditions is available for a short period, whereas information beyond this short period does not become available until time rolls forward. This paper introduces a rolling horizon implementation that enables a multi-class analytical DTA model to respond operationally to dynamic variations in both traffic demand and network conditions. The implementation procedure is discussed in detail, and practical solutions are proposed for two issues that arise: 1) unfinished trips and 2) the rerouting strategy for these trips. Computational examples and results are presented and analyzed.
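The rolling mechanism itself, committing only the reliable head of each short-horizon forecast before advancing, can be sketched generically. The `predict` callable, demand values, and window lengths below are all hypothetical; the paper's DTA model would supply the forecasts and perform the assignment within each window.

```python
def rolling_horizon(total_horizon, roll_period, predict):
    """Commit only the first roll_period of each short-horizon prediction,
    then roll time forward and re-predict with updated conditions."""
    assigned = []
    t = 0
    while t < total_horizon:
        window = predict(t)                    # short-horizon OD/network forecast
        assigned.extend(window[:roll_period])  # commit only the reliable head
        t += roll_period
    return assigned[:total_horizon]

# Hypothetical forecaster: demand is known exactly only near the current time,
# so each call sees just a 4-period window starting at t.
demand = [10, 12, 9, 14, 11, 13, 8, 10]
plan = rolling_horizon(len(demand), roll_period=2,
                       predict=lambda t: demand[t:t + 4])
```

Because each window is re-predicted as time rolls forward, the committed plan tracks the true demand even though no single forecast covers the whole horizon.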

Chaotic Disaggregation of Daily Rainfall Time Series (카오스를 이용한 일 강우자료의 시간적 분해)

  • Kyoung, Min-Soo;Sivakumar, Bellie;Kim, Hung-Soo;Kim, Byung-Sik
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.9
    • /
    • pp.959-967
    • /
    • 2008
  • Disaggregation techniques are widely used to transform observed daily rainfall values into hourly ones, which serve as important inputs for flood forecasting. However, an important limitation of most existing disaggregation techniques is that they treat the rainfall process as a realization of a stochastic process, raising questions about the lack of connection between the structure of the models on one hand and the underlying physics of the rainfall process on the other. The present study introduces a nonlinear deterministic (specifically, chaotic) framework to study the dynamic characteristics of rainfall distributions across different temporal scales (i.e., the weights between scales), and thus the possibility of rainfall disaggregation. Rainfall data from the Seoul station (recorded by the Korea Meteorological Administration) are considered for the present investigation, and only weights between successively doubled resolutions (i.e., 24-hr to 12-hr, 12-hr to 6-hr, 6-hr to 3-hr) are analyzed. The correlation dimension method is employed to investigate the presence of chaotic behavior in the time series of weights, and a local approximation technique is employed for rainfall disaggregation. The results indicate the presence of chaotic behavior in the dynamics of weights between the successively doubled scales studied. The modeled (disaggregated) rainfall values are found to be in good agreement with the observed ones in their overall matching (e.g., high correlation coefficient and low mean square error). While the general trend (rainfall amount and time of occurrence) is clearly captured, an underestimation of the maximum values is found.
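The correlation dimension method rests on the Grassberger-Procaccia correlation sum: the fraction of pairs of delay-embedded vectors closer than a radius r. A minimal sketch follows, run on a logistic-map series as a stand-in for the weight series; the embedding parameters and radii are illustrative, not those used in the study.

```python
import numpy as np

def correlation_sum(series, m, tau, r):
    """Grassberger-Procaccia correlation sum C(r) for embedding dimension m
    and delay tau: fraction of vector pairs within distance r."""
    n = len(series) - (m - 1) * tau
    vecs = np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    dists = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=2)
    iu = np.triu_indices(n, k=1)           # each pair counted once
    return np.mean(dists[iu] < r)

# Toy deterministic (chaotic) series standing in for the weight time series.
x = np.empty(500)
x[0] = 0.4
for i in range(499):
    x[i + 1] = 4 * x[i] * (1 - x[i])       # logistic map at r = 4

c_small = correlation_sum(x, m=2, tau=1, r=0.1)
c_large = correlation_sum(x, m=2, tau=1, r=0.5)
```

The dimension estimate is the slope of log C(r) versus log r over the scaling region, computed for increasing embedding dimensions; a saturating slope signals low-dimensional deterministic dynamics, which is the evidence the study seeks in the weights.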