• Title/Summary/Keyword: optimal probability density


The Principle of Justifiable Granularity and an Optimization of Information Granularity Allocation as Fundamentals of Granular Computing

  • Pedrycz, Witold
    • Journal of Information Processing Systems
    • /
    • v.7 no.3
    • /
    • pp.397-412
    • /
    • 2011
  • Granular Computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are underlying commonalities articulated in terms of fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal allocation of information granularity, where information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies focuses on knowledge management, for which we identify several key categories of schemes.
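
A minimal sketch of the principle of justifiable granularity referred to above, under assumed definitions: an interval granule is grown around the median of one-dimensional data so as to maximize the product of coverage (fraction of data captured) and specificity (narrowness of the interval). The data and the particular specificity measure are illustrative choices, not the paper's formulation.

    import numpy as np

    def justifiable_granule(data, num_candidates=200):
        # Grow an interval [a, b] around the median of 1-D data, choosing each
        # bound to maximize coverage (fraction of data captured on that side)
        # times specificity (narrowness of the interval on that side).
        data = np.sort(np.asarray(data, dtype=float))
        med = np.median(data)
        lo, hi = data.min(), data.max()

        def best_bound(candidates, side):
            best, best_score = med, -np.inf
            for c in candidates:
                if side == "upper":
                    coverage = np.mean((data >= med) & (data <= c))
                    specificity = 1.0 - (c - med) / (hi - med + 1e-12)
                else:
                    coverage = np.mean((data >= c) & (data <= med))
                    specificity = 1.0 - (med - c) / (med - lo + 1e-12)
                score = coverage * specificity
                if score > best_score:
                    best, best_score = c, score
            return best

        b = best_bound(np.linspace(med, hi, num_candidates), "upper")
        a = best_bound(np.linspace(lo, med, num_candidates), "lower")
        return a, b

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        sample = rng.normal(loc=5.0, scale=1.5, size=500)
        print("granule:", justifiable_granule(sample))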

Determination of Optimal Sensor Locations for Modal System Identification-based Damage Detection on Structures (주파수영역 손상식별 SI 기법에 적응할 최적센서 위치결정법)

  • 권순정;신수봉;박영환
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2003.04a
    • /
    • pp.95-102
    • /
    • 2003
  • To define an analytical model for a structural system or to assess damage in the system, system identification (SI) methods have been developed and widely applied. This paper presents a method of determining optimal sensor locations (OSL) based on the maximum likelihood approach, which is applicable to modal SI methods. To estimate unknown parameters reliably, the information provided by the experiment should be maximized. By applying the Cramer-Rao inequality, a Fisher information matrix expressed in terms of the probability density function of the measurements is obtained from a lower bound on the estimation error. The paper also proposes a scheme for determining OSL on damaged structures by using a maximum strain energy factor. Simulation studies have been carried out to investigate the proposed OSL algorithm for both undamaged and damaged structures.
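
The Fisher-information flavor of the approach can be pictured with a generic greedy sensor selection that adds the candidate degree of freedom maximizing det(PhiᵀPhi) built from mode-shape rows; the random mode-shape matrix, the ridge term, and the greedy strategy are assumptions for illustration, not the authors' exact OSL algorithm.

    import numpy as np

    def greedy_osl(mode_shapes, num_sensors):
        # Greedy sensor placement: repeatedly add the candidate DOF (row of the
        # mode-shape matrix) that maximizes det(Phi^T Phi), a common proxy for
        # the Fisher information of the modal parameters. A tiny ridge term keeps
        # the determinant well defined while fewer rows than modes are selected.
        n_dof, n_modes = mode_shapes.shape
        chosen, remaining = [], list(range(n_dof))
        for _ in range(num_sensors):
            best_dof, best_det = None, -np.inf
            for dof in remaining:
                phi = mode_shapes[chosen + [dof], :]
                det = np.linalg.det(phi.T @ phi + 1e-9 * np.eye(n_modes))
                if det > best_det:
                    best_dof, best_det = dof, det
            chosen.append(best_dof)
            remaining.remove(best_dof)
        return sorted(chosen)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        phi = rng.normal(size=(20, 4))   # 20 candidate DOFs, 4 target modes
        print("selected sensor DOFs:", greedy_osl(phi, num_sensors=6))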


Tolerance Optimization with Markov Chain Process (마르코프 과정을 이용한 공차 최적화)

  • Lee, Jin-Koo
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.13 no.2
    • /
    • pp.81-87
    • /
    • 2004
  • This paper deals with a new approach to tolerance optimization problems. Optimal tolerance allotment problems can be formulated as stochastic optimization problems. Most schemes for solving stochastic optimization problems have been found to exhibit difficulties in the multivariate integration of the probability density function, and the optimal tolerance allotment problem, as a typical example of stochastic optimization, shares these difficulties. In this stochastic model, the manufacturing system is represented by a Gauss-Markov stochastic process, and manufacturing unit availability is characterized for realistic optimization modeling. The new algorithm performed robustly for a large-deviation approximation, and a significant reduction in computation time was observed compared to the results obtained in previous studies.
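
The multivariate-integration difficulty mentioned above is often sidestepped in practice by Monte Carlo estimation. The sketch below is an illustrative stand-in, not the paper's Gauss-Markov formulation: it estimates the probability that a linear stack-up of toleranced dimensions stays within an assembly specification, with the normal model and sigma = tolerance/3 assumed for illustration.

    import numpy as np

    def stack_up_yield(nominals, tolerances, spec_lo, spec_hi,
                       n_samples=200_000, seed=0):
        # Monte Carlo estimate of assembly yield for a linear tolerance stack-up,
        # modeling each dimension as normal with sigma = tolerance / 3.
        rng = np.random.default_rng(seed)
        sigmas = np.asarray(tolerances, dtype=float) / 3.0
        dims = rng.normal(np.asarray(nominals, dtype=float), sigmas,
                          size=(n_samples, len(nominals)))
        assembly = dims.sum(axis=1)
        return np.mean((assembly >= spec_lo) & (assembly <= spec_hi))

    if __name__ == "__main__":
        y = stack_up_yield(nominals=[10.0, 20.0, 5.0],
                           tolerances=[0.05, 0.08, 0.03],
                           spec_lo=34.85, spec_hi=35.15)
        print(f"estimated yield: {y:.4f}")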

Contingency Estimation Method based on Stochastic Earned Value Management System (추계적 EVMS 기반 예비비 산정 방법론)

  • Gwak, Han-Seong;Choi, Byung-Youn;Yi, Chang-Yong;Lee, Dong-Eun
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2018.05a
    • /
    • pp.72-73
    • /
    • 2018
  • The accuracy of contingency estimation plays an important role in dealing with the uncertainty surrounding the financial success of a construction project. The estimate may be used for various purposes such as schedule control, emergency resolution, and quality expenses. This paper presents a contingency estimation method that is specific to schedule control. The method 1) implements stochastic EVMS, 2) detects a specific timing for schedule compression, 3) identifies an optimal strategy for shortening the planned schedule, 4) finds a probability density function (PDF) of the project cost overrun, and 5) estimates the optimal contingency cost based on the level of confidence. The method facilitates expeditious decisions involved in project budgeting. The validity of the method is confirmed by performing a test case.
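
Steps 4 and 5 of the method (obtaining a cost-overrun PDF and reading off the contingency at a confidence level) can be illustrated generically as a percentile of simulated overruns; the lognormal cost model and its parameters below are assumptions, not the paper's stochastic EVMS implementation.

    import numpy as np

    def contingency_at_confidence(baseline_cost, confidence=0.90,
                                  n_runs=100_000, seed=0):
        # Simulate the project cost overrun PDF and return the contingency that
        # covers overruns at the requested confidence level (a percentile).
        rng = np.random.default_rng(seed)
        # Assumed illustrative cost model: lognormal multiplicative risk factor.
        simulated = baseline_cost * rng.lognormal(mean=0.02, sigma=0.08, size=n_runs)
        overrun = np.maximum(simulated - baseline_cost, 0.0)
        return np.quantile(overrun, confidence)

    if __name__ == "__main__":
        c = contingency_at_confidence(baseline_cost=1_000_000, confidence=0.90)
        print(f"contingency at 90% confidence: {c:,.0f}")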


Artificial Neural Network for Stable Robotic Grasping (안정적 로봇 파지를 위한 인공신경망)

  • Kim, Kiseo;Kim, Dongeon;Park, Jinhyun;Lee, Jangmyung
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.2
    • /
    • pp.94-103
    • /
    • 2019
  • The optimal grasping point of an object varies depending on properties such as its shape, weight, and material, as well as the grasping contact with the robot hand and the grasping force. In order to derive the optimal grasping point for each object with a three-fingered robot hand, the optimal point and posture have been derived based on the geometry of the object and the hand using an artificial neural network. The optimal grasping cost function has been derived by constructing the cost function based on the probability density function of the normal distribution. Considering the characteristics of the object and the robot hand, the optimum height and width have been set for grasping the object with the robot hand. The resultant force between the contact area of the robot finger and the object has been estimated from the grasping force of the robot finger and the gravitational force of the object. In addition, the geometrical and gravitational center points of the object have been considered in obtaining the optimum grasping position of the robot finger on the object using the artificial neural network. To show the effectiveness of the proposed algorithm, the friction cone for stable grasping operation has been modeled through grasping experiments.
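
The idea of a grasp cost built from a normal-distribution PDF can be pictured with the toy scoring function below, which rates a candidate grasp height and width against assumed optimal values; the parameters and the product-of-Gaussians form are illustrative only, not the network or cost actually derived in the paper.

    from scipy.stats import norm

    def grasp_score(height, width, opt_height, opt_width,
                    sigma_h=0.01, sigma_w=0.02):
        # Toy grasp score: product of Gaussian PDFs around the assumed optimal
        # grasp height and width (higher means a better candidate grasp).
        return norm.pdf(height, loc=opt_height, scale=sigma_h) * \
               norm.pdf(width, loc=opt_width, scale=sigma_w)

    if __name__ == "__main__":
        print(grasp_score(height=0.052, width=0.081,
                          opt_height=0.05, opt_width=0.08))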

Optimization of Gaussian Mixture in CDHMM Training for Improved Speech Recognition

  • Lee, Seo-Gu;Kim, Sung-Gil;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.7-21
    • /
    • 1999
  • This paper proposes an improved training procedure for speech recognition based on the continuous density Hidden Markov Model (CDHMM). Of the three parameters (initial state distribution probability, state transition probability, and output probability density function (p.d.f.) of each state) governing the CDHMM, we focus on the third and propose an efficient algorithm that determines the p.d.f. of each state. It is known that the resulting CDHMM converges to a local maximum of the parameter estimation via the iterative Expectation Maximization procedure. Specifically, we propose two independent algorithms that can be embedded in the segmental K-means training procedure by replacing relevant key steps: adaptation of the number of mixture Gaussian p.d.f.s and initialization using the CDHMM parameters previously estimated. The proposed adaptation algorithm searches for the optimal number of Gaussian mixture components so that the p.d.f. is consistently re-estimated, enabling the model to converge toward the global maximum. By applying an appropriate threshold value, which measures the amount of collective change of the weighted variances, the optimized number of mixture Gaussian branches is determined. The initialization algorithm exploits the CDHMM parameters previously estimated and uses them as the basis for the current initial segmentation subroutine; it captures the trend of the previous training history, whereas uniform segmentation discards it. The recognition performance of the proposed adaptation procedure, together with the suggested initialization, is verified to be consistently better than that of the existing training procedure using a fixed number of mixture Gaussian p.d.f.s.
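
The mixture-adaptation idea (growing the number of Gaussian components until the collective change of weighted variances falls below a threshold) can be sketched with an off-the-shelf GMM fit; scikit-learn's GaussianMixture is used here as a stand-in, so this is not the segmental K-means procedure of the paper, and the stopping rule and synthetic data are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def adapt_num_mixtures(features, max_components=8, threshold=0.05, seed=0):
        # Grow the number of Gaussian components until the relative change in
        # the collective weighted variance falls below the threshold.
        prev = None
        for k in range(1, max_components + 1):
            gmm = GaussianMixture(n_components=k, covariance_type="diag",
                                  random_state=seed).fit(features)
            weighted_var = float(np.sum(gmm.weights_[:, None] * gmm.covariances_))
            if prev is not None and abs(prev - weighted_var) / prev < threshold:
                return k, gmm
            prev = weighted_var
        return max_components, gmm

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        data = np.vstack([rng.normal(-2.0, 0.5, size=(300, 2)),
                          rng.normal(+2.0, 0.5, size=(300, 2))])
        k, _ = adapt_num_mixtures(data)
        print("selected number of mixture components:", k)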


Characteristics of Kill Probability Distribution of Air Track Within the Engagement Space Using Multivariate Probability Density Function & Bayesian Theorem (다변량 확률밀도함수와 베이지안 정리를 이용한 교전공간내 공중항적의 격추확률 분포 특성)

  • Hong, Dong-Wg;Aye, Sung-Man;Kim, Ju-Hyun
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.49 no.6
    • /
    • pp.521-528
    • /
    • 2021
  • In order to allocate an appropriate interceptor weapon to an air track for which threat assessment has been completed, it is necessary to evaluate the suitability of engagement in consideration of the expected point of engagement. In this paper, a method of calculating the kill probability according to the position in the engagement space is proposed, using Bayes' theorem with multivariate attribute information such as the relative distance, approach azimuth angle, and altitude of the air track as it passes through the engagement space. As a result of the calculation, it was confirmed that the distribution of the kill probability over the engagement space follows a multivariate normal distribution centered on the optimal predicted intercept point. The method is expected to be applicable to the engagement suitability evaluation of the engagement space.
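
A generic form of the Bayes computation described above (a kill probability at a point from multivariate-normal likelihoods of the track attributes) can be sketched as follows; the class-conditional means, covariances, and prior are invented for illustration and do not come from the paper.

    import numpy as np
    from scipy.stats import multivariate_normal

    def kill_probability(x, mean_kill, cov_kill, mean_miss, cov_miss,
                         prior_kill=0.5):
        # Bayes' theorem with multivariate normal likelihoods, where
        # x = [relative distance, approach azimuth, altitude] of the air track.
        like_kill = multivariate_normal.pdf(x, mean=mean_kill, cov=cov_kill)
        like_miss = multivariate_normal.pdf(x, mean=mean_miss, cov=cov_miss)
        num = like_kill * prior_kill
        return num / (num + like_miss * (1.0 - prior_kill))

    if __name__ == "__main__":
        x = [12.0, 5.0, 3.0]   # km, deg, km -- illustrative track attributes
        p = kill_probability(
            x,
            mean_kill=[10.0, 0.0, 3.0], cov_kill=np.diag([4.0, 25.0, 1.0]),
            mean_miss=[25.0, 30.0, 6.0], cov_miss=np.diag([36.0, 100.0, 4.0]))
        print(f"P(kill | x) = {p:.3f}")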

An Optimal Ordering policy on Both Way Substitutable Two-Commodity Inventory Control System

  • Tanaka, Masatoshi;Yoshikawa, Shin-ichi;Tabata, Yoshio
    • Industrial Engineering and Management Systems
    • /
    • v.4 no.2
    • /
    • pp.145-157
    • /
    • 2005
  • Manufacturing industry handles large amounts of raw materials, work-in-process, and finished goods. The less stock of materials and work-in-process a manufacturer holds, the lower its production rate; conversely, the more it holds, the higher the cost of supporting that stock. Thus, it is important to balance them efficiently. In general, inventory problems are to decide appropriate times to produce goods and to determine appropriate quantities of goods, and they therefore require as much useful information as possible, for example on demand, lead time, and ordering points. In this paper, we deal with an optimal ordering policy for a both-way substitutable two-commodity inventory control system. That is, the problem is how to allocate two kinds of goods produced in a factory to m areas so as to minimize the total expected inventory cost. The demand of each area is probabilistic, and we adopt the exponential distribution as the probability density function of demand. Moreover, we provide numerical examples of the problem.
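
A stripped-down version of the allocation problem (one commodity, no substitution) gives the flavor: exponential demand makes the expected shortage and holding costs available in closed form, and a greedy marginal allocation splits a fixed quantity across m areas. The cost coefficients, demand rates, and greedy scheme below are assumptions for illustration, not the paper's model.

    import numpy as np

    def expected_cost(stock, rate, holding=1.0, shortage=4.0):
        # Expected holding + shortage cost when demand D ~ Exponential(rate):
        # E[(D - s)+] = exp(-rate*s)/rate, E[(s - D)+] = s - 1/rate + E[(D - s)+].
        mean = 1.0 / rate
        exp_short = mean * np.exp(-rate * stock)
        exp_hold = stock - mean + exp_short
        return holding * exp_hold + shortage * exp_short

    def greedy_allocation(total_units, rates, step=1.0):
        # Allocate a fixed quantity across areas one step at a time, always giving
        # the next unit to the area whose expected cost drops the most (optimal up
        # to the step size because the expected cost is convex in the stock level).
        stocks = np.zeros(len(rates))
        for _ in range(int(total_units / step)):
            gains = [expected_cost(s, r) - expected_cost(s + step, r)
                     for s, r in zip(stocks, rates)]
            stocks[int(np.argmax(gains))] += step
        return stocks

    if __name__ == "__main__":
        # Three areas with mean demands of 10, 20 and 40 units.
        print(greedy_allocation(total_units=60, rates=[1 / 10, 1 / 20, 1 / 40]))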

Uncertainty Assessment of Emission Factors for Pinus densiflora using Monte Carlo Simulation Technique (몬테 카를로 시뮬레이션을 이용한 소나무 탄소배출계수의 불확도 평가)

  • Pyo, Jung Kee;Son, Yeong Mo;Jang, Gwang Min;Lee, Young Jin
    • Journal of Korean Society of Forest Science
    • /
    • v.102 no.4
    • /
    • pp.477-483
    • /
    • 2013
  • The purpose of this study was to calculate the uncertainty of emission factors from the collected data and to evaluate the applicability of the Monte Carlo simulation technique. To estimate the distributions of the emission factors (basic wood density, biomass expansion factor, and root-to-shoot ratio), four probability density functions (normal, lognormal, gamma, and Weibull) were used. The two-sample Kolmogorov-Smirnov test and cumulative density plots were used to select the optimal probability density function. It was observed that, for Pinus densiflora in the Gangwon region, the basic wood density followed the gamma distribution, the biomass expansion factor the lognormal distribution, and the root-to-shoot ratio the normal distribution; for Pinus densiflora in the central region, the basic wood density followed the normal distribution, and the biomass expansion factor and root-to-shoot ratio followed the gamma distribution. The uncertainty of the emission factors was assessed as upper 62.1% and lower -52.6% for Pinus densiflora in the Gangwon region, and upper 43.9% and lower -34.5% for Pinus densiflora in the central region.
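
The distribution-selection and uncertainty steps described above can be reproduced generically with scipy: fit the candidate distributions, compare them with a Kolmogorov-Smirnov test, then propagate the chosen PDFs through a Monte Carlo run. The synthetic samples stand in for the actual Pinus densiflora measurements, the one-sample KS goodness-of-fit test is used in place of the paper's two-sample comparison, and the combined-factor formula is an illustrative assumption.

    import numpy as np
    from scipy import stats

    CANDIDATES = {"normal": stats.norm, "lognormal": stats.lognorm,
                  "gamma": stats.gamma, "weibull": stats.weibull_min}

    def best_fit(sample):
        # Fit each candidate PDF and keep the one with the largest KS p-value.
        results = {}
        for name, dist in CANDIDATES.items():
            params = dist.fit(sample)
            _, p_value = stats.kstest(sample, dist.cdf, args=params)
            results[name] = (p_value, dist, params)
        name = max(results, key=lambda k: results[k][0])
        return name, results[name]

    def mc_uncertainty(density, expansion, root_ratio, n_runs=50_000, seed=0):
        # Propagate the fitted PDFs through a combined factor
        # density * expansion * (1 + root_ratio) and report the upper/lower
        # percentage deviation of the 95% interval from the Monte Carlo mean.
        rng = np.random.default_rng(seed)

        def draw(sample):
            _, (_, dist, params) = best_fit(sample)
            return dist.rvs(*params, size=n_runs, random_state=rng)

        combined = draw(density) * draw(expansion) * (1.0 + draw(root_ratio))
        mean = combined.mean()
        lo, hi = np.percentile(combined, [2.5, 97.5])
        return (hi - mean) / mean * 100.0, (lo - mean) / mean * 100.0

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        density = rng.gamma(shape=40.0, scale=0.011, size=60)      # ~0.44
        expansion = rng.lognormal(mean=0.25, sigma=0.1, size=60)   # ~1.3
        root_ratio = rng.normal(loc=0.26, scale=0.04, size=60)
        print(mc_uncertainty(density, expansion, root_ratio))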

Power Allocation and Splitting Algorithm with Low-complexity for SWIPT in Energy Harvesting Networks (에너지 하베스팅 네트워크에서 SWIPT를 위한 저복잡도를 갖는 파워 할당 및 분할 알고리즘)

  • Lee, Kisong;Ko, JeongGil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.5
    • /
    • pp.917-922
    • /
    • 2016
  • Recently, energy harvesting, in which energy is collected from RF signals, has been regarded as a promising technology for improving the lifetime of sensors by alleviating the power supply problem. In this paper, we propose an efficient algorithm for simultaneous wireless information and power transfer (SWIPT). First, we find a lower bound on the water level using the probability density function of the channel and derive the power allocation solution in energy harvesting networks. In addition, we derive an efficient power splitting method for satisfying the minimum required harvested energy constraint. The simulation results confirm that the proposed scheme improves the average data rate while guaranteeing the minimum required harvested energy constraint, compared with the conventional scheme. Moreover, the proposed algorithm reduces the computational complexity remarkably, with performance degradation of less than 10% compared to the optimal solution.
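
The water-filling part of the scheme can be illustrated with a standard allocation over known channel gains, followed by a simple power-splitting check against a minimum harvested-energy constraint; the bisection water-filling, the linear harvesting model, and all numerical values below are generic textbook assumptions, not the authors' low-complexity algorithm.

    import numpy as np

    def water_filling(gains, total_power, noise=1.0, iters=100):
        # Standard water-filling: p_i = max(mu - noise/g_i, 0), with the water
        # level mu found by bisection so the allocated powers sum to the budget.
        gains = np.asarray(gains, dtype=float)
        lo, hi = 0.0, total_power + noise / gains.min()
        for _ in range(iters):
            mu = 0.5 * (lo + hi)
            if np.maximum(mu - noise / gains, 0.0).sum() > total_power:
                hi = mu
            else:
                lo = mu
        return np.maximum(lo - noise / gains, 0.0)

    def power_split(received_power, min_harvest, efficiency=0.6):
        # Fraction of the received power routed to the energy harvester: just
        # enough to meet the minimum harvested-energy constraint (linear model).
        return min(min_harvest / (efficiency * received_power), 1.0)

    if __name__ == "__main__":
        g = np.array([0.9, 0.5, 0.2, 0.05])   # illustrative channel gains
        p = water_filling(g, total_power=10.0)
        print("allocated powers:", np.round(p, 3), "| sum:", round(p.sum(), 3))
        print("splitting ratio:",
              round(power_split(received_power=2.0, min_harvest=0.5), 3))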