• Title/Summary/Keyword: 영역일반적 확률 (domain-general probability)


A Simple Stereo Matching Algorithm using PBIL and its Alternative (PBIL을 이용한 소형 스테레오 정합 및 대안 알고리즘)

  • Han Kyu-Phil
    • The KIPS Transactions: Part B / v.12B no.4 s.100 / pp.429-436 / 2005
  • A simple stereo matching algorithm using population-based incremental learning (PBIL) is proposed in this paper to reduce the general problems of genetic algorithms, such as memory consumption and search inefficiency. PBIL is a variant of genetic algorithms that uses stochastic search and competitive learning based on a probability vector. The structure of PBIL is simpler than that of other genetic algorithm families, such as serial and parallel ones, due to the use of a probability vector. The PBIL strategy is simplified and adapted to the stereo matching setting: the gene pool, chromosome crossover, and gene mutation are removed, while the evolution rule that fitter chromosomes should have higher survival probabilities is preserved. As a result, memory space is decreased, the matching rules are simplified, and the computation cost is reduced. In addition, a scheme that controls the distance of neighbors used for disparity smoothness is inserted to obtain wide-area consistency of disparities, similar to the result of coarse-to-fine matchers. Because of this scheme, the proposed algorithm can produce a stable disparity map with a small, fixed-size window. Finally, an alternative version of the proposed algorithm that does not use a probability vector is also presented for simpler set-ups.
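
A rough sketch of the kind of probability-vector update the abstract describes, not the authors' implementation: each pixel keeps a probability vector over candidate disparities that is pulled toward the currently fittest (lowest-cost) candidate, with a crude neighbor-based smoothness term standing in for the paper's neighbor-distance scheme. The SAD cost, learning rate, window size, and smoothness weighting are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pbil_disparity(left, right, max_disp=16, lr=0.1, iters=50, win=3):
    """Illustrative PBIL-style disparity estimation (parameters are assumptions)."""
    h, w = left.shape
    prob = np.full((h, w, max_disp), 1.0 / max_disp)   # per-pixel probability vectors

    # Window-based SAD matching cost for every candidate disparity.
    cost = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d if d else w].astype(float))
        cost[:, d:, d] = uniform_filter(diff, size=win)

    for _ in range(iters):
        map_d = np.argmax(prob, axis=2).astype(float)
        neigh = uniform_filter(map_d, size=win)         # mean disparity of neighbors
        smooth = np.abs(np.arange(max_disp)[None, None, :] - neigh[:, :, None])
        best = np.argmin(cost + smooth, axis=2)         # "fittest" candidate per pixel
        target = np.eye(max_disp)[best]                 # one-hot winner
        prob = (1.0 - lr) * prob + lr * target          # PBIL probability-vector update
        prob /= prob.sum(axis=2, keepdims=True)

    return np.argmax(prob, axis=2)                      # final disparity map
```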

The Legal Probability as Causal Responsibility founded on the Probabilistic Theory of Causality: On the Legal Responsibility of Autonomous Vehicles (인과적 책임으로서 법적 상당성에 대한 확률 인과 이론의 해명: 자율주행 자동차의 법적 책임을 중심으로)

  • Kim, Joonsung
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.6 no.12 / pp.587-594 / 2016
  • Autonomous A.I. vehicles seem to be nearly ready for everyday use. One of the critical problems with autonomous vehicles is how one could assign responsibility for accidents to them. We can envisage that autonomous vehicles may confront an ethical dilemma. A question then arises of how we are able to assign legal responsibility to autonomous vehicles. In this paper, I first introduce what the ethical dilemma of autonomous vehicles is about. Second, I show how we could assign legal responsibility to autonomous vehicles. Legal probability is the received criterion for causal responsibility that most legal theorists consider, but it remains vague. I articulate the concept of legal probability in terms of the probabilistic theory of individual-level causality while considering how one can assign causal responsibility to autonomous vehicles. My theory of causal responsibility may help one to assign legal responsibility not just to autonomous vehicles but also to people.
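
Probabilistic theories of causality are usually built on a probability-raising condition; the minimal check below illustrates that condition only, not the paper's full account of individual-level causation, and the numbers are hypothetical.

```python
def raises_probability(p_effect_given_cause, p_effect_given_not_cause):
    """Probability-raising condition: C is prima facie causally relevant to E
    when P(E | C) > P(E | not-C)."""
    return p_effect_given_cause > p_effect_given_not_cause

# Hypothetical figures: probability of a collision given the vehicle's actual
# maneuver versus given the alternative maneuver it could have chosen.
print(raises_probability(p_effect_given_cause=0.30, p_effect_given_not_cause=0.05))  # True
```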

Time Domain Response of Random Electromagnetic Signals for Electromagnetic Topology Analysis Technique

  • Han, Jung-hoon
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.135-144 / 2022
  • The electromagnetic topology (EMT) technique is a method that analyzes each component of an electromagnetic propagation environment and combines the components in the form of a network in order to model a complex propagation environment effectively. In typical commercial communication channel models, since the propagation environment is complex and difficult to predict, a probabilistic propagation channel model that relies on an average solution is used despite its low accuracy. However, EMT-based modeling is considered for applications that require relatively high-accuracy electromagnetic wave propagation characteristics, such as propagation and coupling analysis of threat electromagnetic waves (e.g., electromagnetic pulses), radio wave models used in electronic warfare, and local communication channel models used in 5G and 6G communications. This paper describes an effective implementation method, algorithm, and program implementation of the EMT method analyzed in the frequency domain. A method of deriving the time-domain response to an arbitrary applied signal source from the frequency-domain EMT analysis result is also discussed.
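
A minimal sketch of the frequency-to-time conversion step the abstract mentions, assuming the EMT analysis yields a transfer function sampled on the FFT frequency grid of the source waveform; the double-exponential test pulse and the one-pole placeholder response are assumptions, not the paper's models.

```python
import numpy as np

def time_domain_response(h_freq, source_time):
    """Response at an EMT network port: multiply the source spectrum by the
    frequency-domain result, then transform back to the time domain."""
    x_freq = np.fft.rfft(source_time)                 # spectrum of the applied source
    return np.fft.irfft(h_freq * x_freq, n=len(source_time))

# Example: an EMP-like double-exponential pulse through a placeholder low-pass response.
dt = 1e-10
t = np.arange(4096) * dt
source = np.exp(-t / 5e-8) - np.exp(-t / 5e-9)
f = np.fft.rfftfreq(len(t), dt)
h = 1.0 / (1.0 + 1j * f / 1e8)                        # hypothetical transfer function
y = time_domain_response(h, source)
```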

Probabilistic Service Life Analysis of GGBFS Concrete Exposed to Carbonation Cold Joint and Loading Conditions (탄산화에 노출된 GGBFS 콘크리트의 콜드 조인트 및 하중 재하를 고려한 확률론적 내구수명 해석)

  • Kim, Tae-Hoon; Kwon, Seung-Jun
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.24 no.3 / pp.39-46 / 2020
  • Carbonation is a deterioration mechanism that degrades structural and material performance by allowing CO2 ingress and inducing corrosion of the embedded steel. Service life evaluation through deterministic methods is conventional; however, probabilistic studies of service life that consider loading and cold joint effects on carbonation have been very limited. In this study, a probabilistic service life evaluation was carried out through MCS (Monte Carlo Simulation), adopting random variables such as cover depth, CO2 diffusion coefficient, exterior CO2 concentration, and the amount of internal carbonatable materials. The probabilistic service life was derived by changing the mean value from 100% to 300% and the COV (coefficient of variation) from 0.1 to 0.2. From the analysis, the maximum reduction ratio of service life (47.7%) was obtained for cover depth and the minimum (11.4%) for the diffusion coefficient. Under compressive and tensile stress levels of 30~60%, GGBFS concrete was effective in reducing the cold joint effect on carbonation. Under the tensile condition, service life decreased linearly regardless of material type. Additionally, service life decreased rapidly due to micro-crack propagation in all cases when 60% loading was considered under the compressive condition.
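
To make the MCS workflow concrete, here is a generic sketch using a simple square-root-of-time carbonation model, not the paper's GGBFS-specific formulation; the distributions, means, COVs, and failure target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcs_service_life(n=100_000, target_pf=0.1, years=range(1, 101)):
    """Monte Carlo service-life sketch: service life is the first year at which
    the probability of the carbonation front reaching the cover depth exceeds
    the target failure probability."""
    cover = rng.normal(40.0, 0.10 * 40.0, n)       # cover depth [mm], COV 0.10 (assumed)
    coeff = rng.normal(4.0, 0.15 * 4.0, n)         # carbonation rate [mm/sqrt(yr)] (assumed)
    for t in years:
        depth = coeff * np.sqrt(t)                 # carbonation depth at year t
        pf = np.mean(depth >= cover)               # probability of reaching the steel
        if pf >= target_pf:
            return t, pf
    return None, pf

print(mcs_service_life())                          # e.g., (service life in years, P_f)
```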

A Study on the Construction of Computerized Algorithm for Proper Construction Cost Estimation Method by Historical Data Analysis (실적자료 분석에 의한 적정 공사비 산정방법의 전산화 알고리즘 구축에 관한 연구)

  • Chun Jae-Youl
    • Korean Journal of Construction Engineering and Management / v.4 no.4 s.16 / pp.192-200 / 2003
  • The objective of this research is to develop a computerized cost estimation algorithm that forecasts the total construction cost at the bidding stage from historical elemental work cost data. Traditional cost models used to prepare Bills of Quantities in the Korean construction industry since 1970 are not helpful for forecasting the total project cost at the bidding stage, because the BOQ is fixed by the design factors of a particular project. On the contrary, statistical models can provide cost estimates more quickly and more reliably than traditional ones if the collected cost data are sufficient to analyze the trends of the variables. The estimation system considers a non-deterministic method, referred to as Monte Carlo simulation, which interprets the cost data to generate a probabilistic distribution of the total cost from the limited elemental historical cost distributions.
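
A minimal sketch of the Monte Carlo step described above: sample each elemental cost from its historical distribution and sum to obtain a total-cost distribution. The work items, triangular distributions, and figures below are hypothetical, not the paper's fitted data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical elemental work items: (low, most likely, high) cost in million KRW.
elements = {
    "earthwork":  (120, 150, 210),
    "structure":  (800, 900, 1100),
    "finishes":   (300, 350, 480),
    "mechanical": (200, 230, 320),
}

def simulate_total_cost(n=50_000):
    """Sum independently sampled elemental costs to build a probabilistic
    total-cost distribution for the bidding stage."""
    totals = np.zeros(n)
    for low, mode, high in elements.values():
        totals += rng.triangular(low, mode, high, n)
    return totals

totals = simulate_total_cost()
print(np.percentile(totals, [10, 50, 90]))   # P10 / P50 / P90 total-cost estimates
```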

Light-weight Signal Processing Method for Detection of Moving Object based on Magnetometer Applications (이동 물체 탐지를 위한 자기센서 응용 신호처리 기법)

  • Kim, Ki-Taae; Kwak, Chul-Hyun; Hong, Sang-Gi; Park, Sang-Jun; Kim, Keon-Wook
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.6 / pp.153-162 / 2009
  • This paper proposes a novel lightweight signal processing algorithm for wireless sensor network applications that require low computational complexity and low power consumption. An exponential average (EA) method is applied in real time to the magnetometer signal, whose physical characteristics are first analyzed in the time domain. The EA provides robustness against noise, temperature-induced magnetic drift, and interference, and it incurs low memory consumption and computational complexity on an embedded processor. The optimal parameters of the proposed algorithm are extracted through statistical analysis. Using general-purpose and precision magnetometers, a detection probability of over 90% is obtained at a 5% false alarm rate in simulation, and with the authors' own magnetometer hardware, a detection probability of 60~70% is obtained at a 1~5% false alarm rate in simulation and experiment.
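
A minimal sketch of an exponential-average detector in the spirit of the abstract: the EA tracks the slowly drifting magnetic baseline, and a sample is flagged when it deviates from that baseline by more than a multiple of a running noise estimate. The smoothing factor, threshold, and warm-up are assumptions, not the paper's statistically derived parameters.

```python
import numpy as np

def ema_detector(signal, alpha=0.05, threshold=3.0):
    """Flag moving-object signatures as large deviations from an
    exponentially averaged magnetometer baseline."""
    baseline = float(signal[0])
    noise = float(np.std(signal[:20])) + 1e-9      # crude warm-up noise estimate
    hits = np.zeros(len(signal), dtype=bool)
    for i, x in enumerate(signal):
        dev = abs(x - baseline)
        hits[i] = dev > threshold * noise          # detection decision
        baseline = (1 - alpha) * baseline + alpha * x   # exponential average update
        noise = (1 - alpha) * noise + alpha * dev       # running deviation estimate
    return hits
```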

Optimization of Data Recovery using Non-Linear Equalizer in Cellular Mobile Channel (셀룰라 이동통신 채널에서 비선형 등화기를 이용한 최적의 데이터 복원)

  • Choi, Sang-Ho; Ho, Kwang-Chun; Kim, Yung-Kwon
    • Journal of IKEEE / v.5 no.1 s.8 / pp.1-7 / 2001
  • In this paper, we investigate a CDMA (Code Division Multiple Access) cellular system with a non-linear equalizer on the reverse link channel. In general, due to the unknown characteristics of the wireless channel, the distribution of the observables cannot be specified by a finite set of parameters; instead, we partition the m-dimensional sample space into a finite number of disjoint regions by using quantiles and a vector quantizer built from training samples. The proposed algorithm is based on a piecewise approximation to the regression function using quantiles and conditional partition moments, which are estimated by the Robbins-Monro stochastic approximation (RMSA) algorithm. The resulting equalizers and detectors are robust in the sense that they are insensitive to variations in the noise distribution. The main idea is that robust equalizers and robust partition detectors, operating on an equiprobably partitioned subspace of the observations, yield better performance than a conventional equalizer operating on the unpartitioned observation space under any condition. We also apply this idea to the CDMA system and analyze the BER performance.
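
A simplified one-dimensional stand-in for the scheme described above: an equiprobable quantile partition of the observation space, with the conditional first moment of each region updated by a Robbins-Monro recursion. The bin count and the 1/n gain sequence are assumptions; the paper works in an m-dimensional space with a vector quantizer.

```python
import numpy as np

def fit_partition_equalizer(x_train, d_train, n_bins=8):
    """Quantile-partitioned piecewise estimator with Robbins-Monro updates of
    the conditional partition moments."""
    edges = np.quantile(x_train, np.linspace(0, 1, n_bins + 1)[1:-1])  # equiprobable partition
    means = np.zeros(n_bins)          # conditional first moment per region
    counts = np.zeros(n_bins)
    for x, d in zip(x_train, d_train):
        k = np.searchsorted(edges, x)             # region containing the observation
        counts[k] += 1
        gain = 1.0 / counts[k]                    # Robbins-Monro gain a_n = 1/n
        means[k] += gain * (d - means[k])         # stochastic approximation update
    return edges, means

def equalize(x, edges, means):
    """Piecewise-constant regression estimate of the transmitted symbol."""
    return means[np.searchsorted(edges, x)]
```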


Quadratic Sigmoid Neural Equalizer (이차 시그모이드 신경망 등화기)

  • Choi, Soo-Yong; Ong, Sung-Hwan; You, Cheol-Woo; Hong, Dae-Sik
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.1 / pp.123-132 / 1999
  • In this paper, a quadratic sigmoid neural equalizer (QSNE) is proposed to improve the bit error probability performance of conventional neural equalizers by using a quadratic sigmoid function as the activation function of the neural network. Conventional neural equalizers used to compensate for nonlinear distortions adopt the sigmoid function, so each neuron has one linear decision boundary and many neurons are required when the equalizer must separate complicated decision regions. In the proposed QSNE and quadratic sigmoid neural decision feedback equalizer (QSNDFE), each neuron separates the decision region with two parallel lines. Therefore, QSNE and QSNDFE achieve better bit error probability performance with a simpler structure than conventional neural equalizers. When the proposed QSNDFE is applied to communication systems and digital magnetic recording systems, it achieves an improvement of approximately 1.5 dB~8.3 dB in signal-to-noise ratio (SNR) over the conventional decision feedback equalizer (DFE) and the neural decision feedback equalizer (NDFE). As intersymbol interference (ISI) and nonlinear distortions become more severe, QSNDFE shows an even larger SNR performance gain over the conventional equalizers at the same bit error probability.
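
One plausible form of such an activation, shown only to illustrate why a single neuron can then produce two parallel decision boundaries; the exact parameterisation used in the paper may differ.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def quadratic_sigmoid(v, theta=1.0, beta=4.0):
    """Sigmoid applied to a quadratic function of the net input: thresholding
    the output at 0.5 yields the two parallel boundaries v = +/- sqrt(theta)."""
    return sigmoid(beta * (v**2 - theta))

# For a neuron with weights w and bias b:
#   sigmoid(w @ x + b)            -> one linear decision boundary
#   quadratic_sigmoid(w @ x + b)  -> two parallel decision boundaries
```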


Statistical Estimation for Hazard Function and Process Capability Index under Bivariate Exponential Process (이변량 지수 공정 하에서 위험함수와 공정능력지수에 대한 통계적 추정)

  • Cho, Joong-Jae; Kang, Su-Mook; Park, Byoung-Sun
    • Communications for Statistical Applications and Methods / v.16 no.3 / pp.449-461 / 2009
  • A higher sigma quality level is generally perceived by customers as improved performance, to which they assign a correspondingly higher satisfaction score. Process capability indices and the sigma level $Z_{st}$ have been widely used in six sigma industries to assess process performance. Most evaluations of process capability indices focus on statistical estimation under a normal process, which may result in unreliable assessments of process performance. In this paper, we consider statistical estimation of the bivariate VPCI (vector-valued process capability index) $C_{pkl}=(C_{pklx},\;C_{pkly})$ under Marshall and Olkin's (1967) bivariate exponential process. First, we derive a limiting distribution for statistical inference on the bivariate VPCI $C_{pkl}$, and we propose two asymptotic normal confidence regions for it. The proposed method may be very useful under a bivariate exponential process. A numerical result based on the proposed method shows it to be more reliable.
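
As background for the setting, the sketch below simulates a Marshall-Olkin bivariate exponential process via the standard shock construction and computes a component-wise plug-in capability estimate. The plug-in $C_{pk}$ used here is a generic stand-in, since the paper's exact $C_{pkl}$ definition and its limiting distribution are not reproduced; the rate parameters and specification limits are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def marshall_olkin_bexp(n, lam1=1.0, lam2=1.5, lam12=0.5):
    """Marshall-Olkin bivariate exponential: X = min(Z1, Z12), Y = min(Z2, Z12)
    with independent exponential shocks Z1, Z2, Z12."""
    z1 = rng.exponential(1.0 / lam1, n)
    z2 = rng.exponential(1.0 / lam2, n)
    z12 = rng.exponential(1.0 / lam12, n)
    return np.minimum(z1, z12), np.minimum(z2, z12)

def cpk_plugin(sample, lsl, usl):
    """Generic plug-in C_pk for one component (stand-in for the paper's index)."""
    mu, sigma = sample.mean(), sample.std(ddof=1)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

x, y = marshall_olkin_bexp(10_000)
print((cpk_plugin(x, 0.0, 3.0), cpk_plugin(y, 0.0, 3.0)))   # vector-valued estimate
```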

Classification of a Volumetric MRI Using Gibbs Distributions and a Line Model (깁스분포와 라인모델을 이용한 3차원 자기공명영상의 분류)

  • Junchul Chun
    • Investigative Magnetic Resonance Imaging / v.2 no.1 / pp.58-66 / 1998
  • Purpose: This paper introduces a new three-dimensional magnetic resonance image classification method based on a Markov Random Field-Gibbs Random Field model with a line model. Materials and Methods: The performance of a Gibbs classifier over a statistically heterogeneous image can be improved if the locally stationary regions in the image are disassociated from each other through the interaction parameters defined at the local neighborhood level. This usually involves constructing a line model for the image. In this paper, we construct a line model for multisignature images based on the differential of the image, which provides an a priori estimate of the unobservable line field that may lie between regions with significantly different statistics. The line model estimated from the original image data can in turn be used to alter the values of the interaction parameters of the Gibbs classifier. Results: An MRF-Gibbs classifier for volumetric MR images is developed under the condition that the domain of the image classification is $E^{3}$ space rather than the conventional $E^{2}$ space. Compared with context-free classification, the MRF-Gibbs classifier performed better in homogeneous regions and along boundaries, since contextual information is used during classification. Conclusion: We construct a line model for multisignature, multidimensional images and derive the interaction parameters that determine the energy function of the MRF-Gibbs classifier.
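
To illustrate how a gradient-derived line field can gate the interaction term of a Gibbs energy, here is a 2-D, ICM-style simplification; the paper's exact 3-D energy, neighborhood system, and estimated interaction parameters are not reproduced, and the Potts-like pairwise term, threshold, and weights are assumptions.

```python
import numpy as np

def icm_mrf_classify(data_cost, grad_mag, beta=1.0, line_pct=90, iters=5):
    """MRF-Gibbs labelling sketch with a line field that suspends smoothing
    across likely boundaries.

    data_cost : (H, W, K) per-pixel cost of each of K classes (e.g., -log likelihood)
    grad_mag  : (H, W) image gradient magnitude, used as an a priori line-field estimate
    """
    k = data_cost.shape[2]
    line = grad_mag > np.percentile(grad_mag, line_pct)   # "on" line sites
    labels = np.argmin(data_cost, axis=2)                 # context-free initialisation

    for _ in range(iters):
        pair_cost = np.zeros_like(data_cost)
        for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
            neigh = np.roll(labels, shift, axis=axis)
            disagree = (np.arange(k)[None, None, :] != neigh[:, :, None]).astype(float)
            pair_cost += beta * disagree                   # Potts-like smoothness term
        pair_cost *= (~line)[:, :, None]                   # line field gates the interaction
        labels = np.argmin(data_cost + pair_cost, axis=2)  # ICM-style update
    return labels
```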
