• Title/Summary/Keyword: Stochastic Approach

Robust Design Method for Complex Stochastic Inventory Model

  • Hwang, In-Keuk;Park, Dong-Jin
    • Proceedings of the Korean Operations and Management Science Society Conference / 1999.04a / pp.426-426 / 1999
  • There are many sources of uncertainty in a typical production and inventory system: uncertainty as to how many items customers will demand during the next day, week, month, or year, and uncertainty about delivery times of the product. Uncertainty exacts a toll from management in a variety of ways. A spurt in demand or a delay in production may lead to stockouts, with the potential for lost revenue and customer dissatisfaction. Firms typically hold inventory to provide protection against uncertainty; a cushion of inventory on hand allows management to face unexpected demands or delays in delivery with a reduced chance of incurring a stockout. The proposed strategies are used for the design of a probabilistic inventory system. In the traditional approach, the goal is to find the best setting of the various inventory control policy parameters, such as the re-order level, review period, and order quantity, that minimizes the total inventory cost. The goals of the analysis need to be defined so that robustness becomes an important design criterion, and appropriate noise variables have to be conceptualized and identified. There are two main goals for the inventory policy design: one is to minimize the average inventory cost and the stockouts; the other is to minimize the variability of the average inventory cost and the stockouts. The total average inventory cost is the sum of three components: the ordering cost, the holding cost, and the shortage cost, where the shortage cost includes the cost of lost sales, loss of goodwill, customer dissatisfaction, etc. The noise factors for this design problem are identified to be the mean demand rate and the mean lead time; both the demand and the lead time are assumed to be normal random variables. Robustness for this inventory system is thus interpreted as insensitivity of the average inventory cost and the stockouts to uncontrollable fluctuations in the mean demand rate and mean lead time. To make the inventory system robust, the concept of utility theory is used. Utility theory is an analytical method for making a decision concerning an action to take, given a set of multiple criteria upon which the decision is to be based. It is appropriate for designs whose attributes have different scales, such as demand rate and lead time, since it maps each attribute onto a zero-to-one scale, with higher preference modeled by a higher rank. Using utility theory, three design strategies for the robust inventory system are developed: a distance strategy, a response strategy, and a priority-based strategy.
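
The following is a minimal Monte Carlo sketch of the kind of evaluation this design problem requires: a continuous-review (r, Q) policy simulated under normally distributed demand and lead time, with the mean demand rate and mean lead time varied as noise factors. All cost coefficients and policy parameters are illustrative placeholders, not values from the paper.

```python
# Monte Carlo evaluation of a continuous-review (r, Q) inventory policy
# under noise in the mean demand rate and mean lead time.
# All numbers below are illustrative placeholders, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def simulate_policy(r, Q, mean_demand, mean_lead, n_days=365,
                    order_cost=50.0, hold_cost=0.1, short_cost=2.0):
    """Return (total cost, number of stockout days) for one replication."""
    inventory, cost, stockouts = float(r + Q), 0.0, 0
    pipeline = []                      # outstanding orders: (arrival_day, qty)
    for day in range(n_days):
        arrived = sum(q for t, q in pipeline if t <= day)
        pipeline = [(t, q) for t, q in pipeline if t > day]
        inventory += arrived
        demand = max(0.0, rng.normal(mean_demand, 0.2 * mean_demand))
        if demand > inventory:         # stockout: demand exceeds stock on hand
            stockouts += 1
            cost += short_cost * (demand - inventory)
            inventory = 0.0
        else:
            inventory -= demand
        cost += hold_cost * inventory
        on_order = sum(q for _, q in pipeline)
        if inventory + on_order <= r:  # re-order point reached
            lead = max(1, round(rng.normal(mean_lead, 0.3 * mean_lead)))
            pipeline.append((day + lead, Q))
            cost += order_cost
    return cost, stockouts

# Robustness check: evaluate one policy across a grid of noise-factor settings.
results = [simulate_policy(r=60, Q=200, mean_demand=d, mean_lead=l)
           for d in (8, 10, 12) for l in (3, 5, 7) for _ in range(20)]
costs, outs = np.array(results).T
print(f"mean cost {costs.mean():.0f}, cost std {costs.std():.0f}, "
      f"mean stockout days {outs.mean():.1f}")
```

The mean and the standard deviation of cost and stockouts across the noise grid correspond directly to the two robustness criteria the paper trades off.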

Human Motion Tracking by Combining View-based and Model-based Methods for Monocular Video Sequences (하나의 비디오 입력을 위한 모습 기반법과 모델 사용법을 혼용한 사람 동작 추적법)

  • Park, Ji-Hun;Park, Sang-Ho;Aggarwal, J.K.
    • The KIPS Transactions:PartB / v.10B no.6 / pp.657-664 / 2003
  • Reliable tracking of moving humans is essential to motion estimation, video surveillance and human-computer interfaces. This paper presents a new approach to human motion tracking that combines appearance-based and model-based techniques. Monocular color video is processed at both the pixel level and the object level. At the pixel level, a Gaussian mixture model is used to train and classify individual pixel colors. At the object level, a 3D human body model projected onto a 2D image plane is used to fit the image data. Our method does not use inverse kinematics because of the singularity problem. While many others use stochastic sampling for model-based motion tracking, our method depends purely on nonlinear programming: we convert the human motion tracking problem into a nonlinear programming problem. A cost function for parameter optimization is used to estimate the degree of overlap between the foreground input image silhouette and the projected 3D model body silhouette. The overlap is computed with computational geometry by converting a set of pixels from the image domain to a polygon in the real projection plane domain. Our method is used to recognize various human motions, and motion tracking results from video sequences are very encouraging.
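
As a rough illustration of that nonlinear-programming formulation, the toy sketch below minimizes a silhouette-mismatch cost with a derivative-free optimizer, using shapely for the polygon geometry. A single rectangle stands in for the projected articulated 3D body model, and the pose parameters (x, y, angle) are a drastic simplification of the paper's full parameter set.

```python
# Silhouette-overlap cost minimized by nonlinear programming (no inverse
# kinematics): a toy 2D stand-in for the projected 3D body model.
import numpy as np
from scipy.optimize import minimize
from shapely.geometry import Polygon
from shapely.affinity import rotate, translate

# Foreground silhouette from segmentation (here: a fixed reference polygon).
image_silhouette = Polygon([(0, 0), (4, 0), (4, 10), (0, 10)])

def model_silhouette(params):
    """Project the body model for pose params (x, y, angle in degrees)."""
    x, y, ang = params
    limb = Polygon([(-2, -5), (2, -5), (2, 5), (-2, 5)])
    return translate(rotate(limb, ang), x, y)

def cost(params):
    """Area of the symmetric difference: small when silhouettes coincide."""
    proj = model_silhouette(params)
    overlap = proj.intersection(image_silhouette).area
    return proj.area + image_silhouette.area - 2.0 * overlap

res = minimize(cost, x0=[1.0, 3.0, 20.0], method="Nelder-Mead")
print(res.x)   # recovered pose parameters
```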

Evaluation of Subsystem Importance Index considering Effective Supply in Water Distribution Systems (유효유량 개념을 도입한 상수관망 Subsystem 별 중요도 산정)

  • Seo, Min-Yeol;Yoo, Do-Guen;Kim, Joong-Hoon;Jun, Hwan-Don;Chung, Gun-Hui
    • Journal of the Korean Society of Hazard Mitigation / v.9 no.6 / pp.133-141 / 2009
  • The main objective of a water distribution system is to supply enough water to users at proper pressure. Hydraulic analysis of a water distribution system can be divided into Demand Driven Analysis (DDA) and Pressure Driven Analysis (PDA). Demand-driven analysis can give unrealistic results, such as negative pressures at nodes, because of the assumption that nodal demands are always satisfied. Pressure-driven analysis, often used as an alternative, requires a Head-Outflow Relationship (HOR) to estimate the amount of possible water supply at a certain pressure level, but the lack of data makes it difficult to develop that relationship. In this study, effective supply, the amount of water that can be supplied while meeting the nodal pressure requirement, is proposed to estimate the serviceability and user convenience of the network. The effective supply is used to calculate a Subsystem Importance Index (SII) that indicates the effect of isolating a subsystem on the entire network. Harmony Search, a stochastic search algorithm, is linked with EPANET to maximize the effective supply. The proposed approach is applied to example networks to evaluate the capability of the network when a subsystem is isolated, which can also be utilized to prioritize rehabilitation order or evaluate the reliability of the network.
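
A compact sketch of the Harmony Search loop is given below. The `effective_supply` function is a hypothetical stand-in for the EPANET-based pressure-driven evaluation; in practice it would run a hydraulic simulation of the network with one subsystem isolated and return the supplied flow that meets the pressure requirement.

```python
# Harmony Search maximizing effective supply; the hydraulic evaluation is a
# hypothetical stand-in for the EPANET simulation used in the paper.
import numpy as np

rng = np.random.default_rng(1)
LOW, HIGH, DIM = 0.0, 1.0, 5          # bounds and size of the decision vector

def effective_supply(x):
    """Placeholder objective: replace with an EPANET-based PDA evaluation
    of supplied flow meeting the nodal pressure requirement."""
    return -np.sum((x - 0.3) ** 2)    # toy surrogate, maximum at x = 0.3

HMS, HMCR, PAR, BW, ITERS = 10, 0.9, 0.3, 0.05, 2000
memory = rng.uniform(LOW, HIGH, (HMS, DIM))           # harmony memory
fitness = np.array([effective_supply(h) for h in memory])

for _ in range(ITERS):
    new = np.empty(DIM)
    for j in range(DIM):
        if rng.random() < HMCR:                        # memory consideration
            new[j] = memory[rng.integers(HMS), j]
            if rng.random() < PAR:                     # pitch adjustment
                new[j] = np.clip(new[j] + rng.uniform(-BW, BW), LOW, HIGH)
        else:                                          # random selection
            new[j] = rng.uniform(LOW, HIGH)
    f = effective_supply(new)
    worst = np.argmin(fitness)
    if f > fitness[worst]:                             # replace worst harmony
        memory[worst], fitness[worst] = new, f

print(memory[np.argmax(fitness)], fitness.max())
```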

Modeling of Near Fault Ground Motion due to Moderate Magnitude Earthquakes in Stable Continental Regions (안정대륙권역의 중규모지진에 의한 근단층지반운동의 모델링)

  • Kim, Jung-Han;Kim, Jae-Kwan
    • Journal of the Earthquake Engineering Society of Korea / v.10 no.3 s.49 / pp.101-111 / 2006
  • This paper proposes, for the first time, a method for modeling near fault ground motion due to moderate size earthquakes in Stable Continental Regions (SCRs). Near fault ground motion is characterized by a single long period velocity pulse of large amplitude. In order to model the velocity pulse, its period and peak amplitude need to be determined in terms of earthquake magnitude and distance from the causative fault. Because very few near fault ground motions have been recorded in SCRs, it is difficult to derive the model directly from recorded data, so an indirect approach is adopted in this work. The two parameters, the period and peak amplitude of the velocity pulse, are known to be functions of the rise time and the slip velocity. For the Western United States (WUS), which belongs to the active tectonic regions, empirical formulas exist for these functions. The dependence of rise time and slip velocity on magnitude in SCRs is derived by comparing related data between the Western United States and the Central-Eastern United States, which belongs to the SCRs. From these relations, the pulse parameters for near fault ground motion in SCRs can be expressed in terms of earthquake magnitude and distance. A time history of near fault ground motion for a moderate magnitude earthquake in stable continental regions is synthesized by superposing the velocity pulse on far field ground motion generated by the stochastic method. As a demonstrative application, the response of a single degree of freedom elasto-plastic system is studied.
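
A schematic of the superposition step, assuming a single-cycle sine for the velocity pulse and band-limited noise as a crude stand-in for the stochastic-method far field motion; the pulse period Tp and amplitude Ap would come from the magnitude- and distance-dependent relations derived in the paper.

```python
# Superpose a long-period velocity pulse on a stochastic far-field motion.
# Tp and Ap stand in for the magnitude/distance relations from the paper;
# the "far-field" trace here is just band-limited noise for illustration.
import numpy as np

dt, n = 0.01, 2000
t = np.arange(n) * dt
rng = np.random.default_rng(2)

# Crude far-field velocity: filtered white noise (stochastic-method stand-in).
noise = rng.standard_normal(n)
far_field = np.convolve(noise, np.ones(20) / 20, mode="same") * 0.05  # m/s

# Near-fault velocity pulse: one sine cycle of period Tp and amplitude Ap.
Tp, Ap, t0 = 1.5, 0.4, 5.0            # s, m/s, pulse arrival time
pulse = np.where((t >= t0) & (t < t0 + Tp),
                 Ap * np.sin(2 * np.pi * (t - t0) / Tp), 0.0)

near_fault_velocity = far_field + pulse
acceleration = np.gradient(near_fault_velocity, dt)  # input for SDOF response
```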

Nonlinear Autoregressive Modeling of Southern Oscillation Index (비선형 자기회귀모형을 이용한 남방진동지수 시계열 분석)

  • Kwon, Hyun-Han;Moon, Young-Il
    • Journal of Korea Water Resources Association / v.39 no.12 s.173 / pp.997-1012 / 2006
  • We present a nonparametric stochastic approach for the SOI (Southern Oscillation Index) series that uses a nonlinear methodology called Nonlinear AutoRegressive (NAR) modeling, based on a conditional kernel density function and CAFPE (Corrected Asymptotic Final Prediction Error) lag selection. The fitted linear AR model exhibits heteroscedasticity and is, moreover, rejected by the BDS (Brock-Dechert-Scheinkman) statistic, so we applied the NAR model to the SOI series. Lags 1, 2 and 4 are identified as the appropriate ones, and the conditional mean function is estimated. The Portmanteau test finds no autocorrelation in the residuals; however, the null hypotheses of normality and of no heteroscedasticity are rejected by the Jarque-Bera test and the ARCH-LM test, respectively. The CAFPE lag selection for the conditional standard deviation function yields lags 3, 8 and 9. After the conditional standard deviation analysis, all i.i.d. assumptions on the residuals are accepted; in particular, the BDS statistic is accepted at the 95% and 99% significance levels. Finally, we split the SOI set into a sample for estimating the model and a sample for out-of-sample prediction, conducting one-step-ahead forecasts for the last 97 values (15%). The NAR model shows an MSEP of 0.5464, which is 7% lower than that of the linear model. These results support the relevance of the NAR model, and the nonparametric NAR model is more encouraging than a linear one for reflecting the nonlinearity of the SOI series.
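
A minimal sketch of the nonparametric conditional-mean step: a Nadaraya-Watson kernel regression at the selected lags (1, 2, 4) with one-step-ahead out-of-sample forecasts. The series is synthetic, the bandwidth is fixed by hand, and the CAFPE criterion and the conditional standard deviation modeling are omitted.

```python
# Nadaraya-Watson estimate of the NAR conditional mean at lags 1, 2 and 4,
# with one-step-ahead out-of-sample forecasts; synthetic data, fixed bandwidth.
import numpy as np

rng = np.random.default_rng(3)
x = np.zeros(700)
for t in range(4, 700):               # synthetic nonlinear AR series
    x[t] = 0.6 * np.tanh(x[t - 1]) - 0.3 * x[t - 2] * x[t - 4] \
           + 0.5 * rng.standard_normal()

lags = (1, 2, 4)
p = max(lags)
X = np.column_stack([x[p - l:-l] for l in lags])   # lagged regressors
y = x[p:]

split = int(0.85 * len(y))                         # last 15% out of sample
h = 0.8                                            # kernel bandwidth (fixed)

def nw_predict(query, Xtr, ytr, h):
    """Gaussian-kernel conditional mean E[y | X = query]."""
    w = np.exp(-np.sum((Xtr - query) ** 2, axis=1) / (2 * h ** 2))
    return np.dot(w, ytr) / w.sum()

preds = np.array([nw_predict(X[i], X[:split], y[:split], h)
                  for i in range(split, len(y))])
msep = np.mean((y[split:] - preds) ** 2)
print(f"one-step-ahead MSEP: {msep:.4f}")
```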

Improvement in facies discrimination using multiple seismic attributes for permeability modelling of the Athabasca Oil Sands, Canada (캐나다 Athabasca 오일샌드의 투수도 모델링을 위한 다양한 탄성파 속성들을 이용한 상 구분 향상)

  • Kashihara, Koji;Tsuji, Takashi
    • Geophysics and Geophysical Exploration / v.13 no.1 / pp.80-87 / 2010
  • This study was conducted to develop a reservoir modelling workflow that reproduces the heterogeneous distribution of effective permeability, which impacts the performance of SAGD (Steam Assisted Gravity Drainage), the in-situ bitumen recovery technique used in the Athabasca Oil Sands. Lithologic facies distribution is the main cause of the heterogeneity in bitumen reservoirs in the study area. The target formation consists of sand with mudstone facies in a fluvial-to-estuary channel system, where the mudstone interrupts fluid flow and reduces effective permeability. In this study, the lithologic facies is classified into three classes with different effective-permeability characteristics, depending on the shapes of the mudstones. The reservoir modelling workflow consists of two main modules: facies modelling and permeability modelling. The facies modelling identifies, using a stochastic approach, the three lithologic facies that mainly control the effective permeability. The permeability modelling populates the mudstone volume fraction first and then transforms it into effective permeability; a series of flow simulations applied to mini-models of the lithologic facies yields the transformation functions from mudstone volume fraction to effective permeability. Seismic data contribute to the facies modelling by providing the prior probability of facies, which is incorporated into the facies models by geostatistical techniques. In particular, this study employs a probabilistic neural network utilising multiple seismic attributes in facies prediction, which improves the prior probability of facies. The result of using the improved prior probability in facies modelling is compared to the conventional method using a single seismic attribute to demonstrate the improvement in facies discrimination. Using P-wave velocity in combination with density among the multiple seismic attributes is the essence of the improved facies discrimination. This paper also discusses the sand matrix porosity that makes P-wave velocity differ between the facies in the study area, where the sand matrix porosity is uniquely evaluated using log-derived porosity, P-wave velocity and photographically-predicted mudstone volume.
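
A small sketch of a probabilistic neural network, i.e. a Parzen-window classifier, producing facies probabilities from multiple seismic attributes such as P-wave velocity and density. The training data, class layout and smoothing parameter are illustrative, not taken from the study.

```python
# Probabilistic neural network (Parzen-window classifier): facies posterior
# probabilities from multiple seismic attributes, e.g. Vp and density.
import numpy as np

def pnn_proba(query, X, y, sigma=0.1):
    """Posterior P(facies | attributes) via Gaussian Parzen windows."""
    classes = np.unique(y)
    scores = np.empty(len(classes))
    for k, c in enumerate(classes):
        d2 = np.sum((X[y == c] - query) ** 2, axis=1)
        scores[k] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))  # class density
    return scores / scores.sum()

# Illustrative training set: standardized (Vp, density) pairs per facies class.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 0.15, (50, 2)) for m in ((-0.5, -0.4),
                                                      (0.0, 0.1),
                                                      (0.6, 0.5))])
y = np.repeat([0, 1, 2], 50)          # three mudstone-shape facies classes

print(pnn_proba(np.array([0.1, 0.0]), X, y))  # prior probability of each facies
```

These posteriors would then serve as the seismic-derived facies prior fed into the geostatistical facies simulation.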

PCA­based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics / v.14 no.4 / pp.211-217 / 2003
  • Principal component analysis (PCA) is a well-known data analysis method that is useful in linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance; it is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g. neurons). PCA provides, in the mean-squared error sense, an optimal linear mapping of the signals which are spread across a group of variables. These signals are concentrated into the first few components, while the noise, i.e. variance which is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings, and because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached with the ganglion cell side down to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated golden connection lanes terminating in an 8×8 array (spacing 200 μm, electrode diameter 30 μm) in the center of the plate. The MEA 60 system was used for recording retinal ganglion cell activity. The action potentials of each channel were sorted with an off-line analysis tool: spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in the waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
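
A condensed sketch of that pipeline: threshold detection on a synthetic trace, PCA of the spike waveforms via an SVD, and clustering of the PC1/PC2 scores. K-means is used here as a simple stand-in for whatever cluster cutting the off-line tool performs.

```python
# PCA-based spike sorting sketch: threshold detection, PC1/PC2 projection,
# and clustering of the projected waveforms (k-means as a simple stand-in).
import numpy as np
from sklearn.cluster import KMeans

def detect_spikes(trace, thresh, win=32):
    """Cut fixed-length windows starting at negative threshold crossings."""
    idx = np.where((trace[1:] < -thresh) & (trace[:-1] >= -thresh))[0]
    return np.array([trace[i:i + win] for i in idx if i + win <= len(trace)])

def pca_scores(waveforms, n_pc=2):
    """Project mean-centered waveforms onto the first n_pc components."""
    centered = waveforms - waveforms.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_pc].T      # columns: PC1, PC2 scores

rng = np.random.default_rng(5)
trace = rng.normal(0, 0.2, 50_000)     # synthetic channel; real data from MEA
spikes = detect_spikes(trace, thresh=0.7)
if len(spikes) >= 2:                   # cluster the (PC1, PC2) scores
    scores = pca_scores(spikes)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(scores)
```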

The Phenomenological Comparison between Results from Single-hole and Cross-hole Hydraulic Test (균열암반 매질 내 단공 및 공간 간섭 시험에 대한 현상적 비교)

  • Kim, Tae-Hee;Kim, Kue-Young;Oh, Jun-Ho;Hwang, Se-Ho
    • Journal of Soil and Groundwater Environment / v.12 no.5 / pp.39-53 / 2007
  • Generally, a fractured medium can be described with a few key parameters, such as hydraulic conductivities or a random field of hydraulic conductivities (continuum model), or the spatial and statistical distribution of permeable fractures (discrete fracture network model). To investigate the practical applicability of the well-known conceptual models for describing groundwater flow in fractured media, various types of hydraulic tests were applied to highly fractured media in Geumsan, Korea. Results from single-hole packer tests show that the horizontal hydraulic conductivities in the permeable media are between 7.67×10^-10 and 3.16×10^-6 m/sec, with an arithmetic mean of 7.70×10^-7 m/sec and a geometric mean of 2.16×10^-7 m/sec. The total number of test intervals is 110 across 8 holes; 9 intervals are completely impermeable and 14 are of low permeability (below 1.0×10^-8 m/sec), so most test intervals are permeable. The vertical distribution of hydraulic conductivities correlates well with the results of the flowmeter test. The results from the cross-hole test, however, show different features: they are highly related to the connectivity and/or the binary character of the fractured medium, permeable versus impermeable. From the viewpoint of connectivity, applying the general stochastic approach with a single continuum model may not be appropriate even in a moderately or highly permeable fractured medium. Further studies on investigation methods and analysis procedures are therefore required for a reasonable and practical design of a conceptual model that can describe the binary permeable/impermeable character of the medium.
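
A short note on the summary statistics quoted above: hydraulic conductivity is roughly log-normal, so the geometric mean (the exponential of the mean log-conductivity) sits well below the arithmetic mean, which is why both are reported. The sketch below reproduces the computation on synthetic values.

```python
# Arithmetic vs geometric mean of hydraulic conductivities (synthetic values;
# K is roughly log-normal, so the two means differ substantially).
import numpy as np

rng = np.random.default_rng(6)
K = 10 ** rng.normal(-6.7, 0.8, 110)   # m/sec, ~110 packer-test intervals

arith = K.mean()
geo = np.exp(np.log(K).mean())         # geometric mean = exp(mean of logs)
print(f"arithmetic mean {arith:.2e} m/sec, geometric mean {geo:.2e} m/sec")
```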