• Title/Summary/Keyword: Probability Theory

Search Results: 834

Pushkin-Chebyshev-Markov-Kolmogorov-Perelman, St. Petersburg, Russia: 2011 IEEE International Symposium on Information Theory

  • Lee, Mun-Ho
    • Information and Communications Magazine
    • /
    • v.28 no.9
    • /
    • pp.84-95
    • /
    • 2011
  • The 2011 IEEE International Symposium on Information Theory (ISIT) was held from July 31 to August 5 in St. Petersburg, Russia. A total of 1,562 papers were accepted; although the acceptance rate was high, at about 60%, it remains a top-tier symposium, and some 1,500 researchers attended. Academically, the program reflected on the original ideas at the core of information theory, including Chebyshev polynomials, Markov chains, Kolmogorov entropy, and Perelman's proof of the Poincaré conjecture, and briefly introduced polar codes, a recent hot topic. It also interpreted the Jeju Jeongnang, a Korean traditional cultural heritage, from the viewpoint of channel coding, derived the corresponding channel capacity, and showed that the Jeju Jeongnang is a forerunner of today's relay networks.

Distributed Localization Using Bayes' Rule in Wireless Sensor Networks

  • Kong, Young-Bae;Park, Gui-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2007.07a
    • /
    • pp.1821-1822
    • /
    • 2007
  • In wireless sensor networks, localization is essential to tasks such as data collection, routing, and location-based services. This paper proposes a distributed, grid-based localization technique using Bayes' rule. The technique builds a grid from the signal strengths received by the sensor nodes and, applying Bayes' rule, takes the grid cell with the highest probability as the node's own position. Simulations show that the proposed algorithm localizes more accurately and computes more efficiently than existing methods.

  • PDF

An Empirical Study on Dividend Initiation Decisions of Firms

  • Shin, Min-Shik;Song, Joon-Hyup
    • The Korean Journal of Financial Management
    • /
    • v.24 no.4
    • /
    • pp.135-161
    • /
    • 2007
  • In this paper, we empirically study the dividend initiation decisions of IPO firms listed on the Korea Securities Market and the KOSDAQ Market. Specifically, we study three aspects of the decision: (a) whether to initiate dividends, (b) the dividend level, and (c) the time to initiation. The main results of this study can be summarized as follows. First, the determinants suggested by the major theories of dividends, namely residual dividend, dividend signaling, agency, catering, and transaction cost theory, significantly explain the dividend initiation decision. Second, those determinants also significantly explain the dividend level decision; that is, most of the findings for the initiation decision also hold for the level decision. Third, most of the factors that increase (decrease) the probability of dividend initiation reduce (increase) the time to initiation. Almost all dividend-initiating firms start paying dividends within two years of the IPO. Thus, if an IPO firm does not initiate dividends early in its life, it is highly likely that it never will.

  • PDF

A Framework for Assessing Probability Knowledge and Skills for Middle School Students: A Case of the U.S.

  • Park, Ji-Yoon;Lee, Kyung-Hwa
    • School Mathematics
    • /
    • v.11 no.1
    • /
    • pp.1-15
    • /
    • 2009
  • Some researchers (Jones et al., 1997; Tarr & Jones, 1997; Tarr & Lannin, 2005) have developed frameworks for students' probabilistic thinking. These studies contributed to an understanding of students' thinking in probability by depicting its levels. However, their treatment of middle school students' probabilistic thinking is limited to the concepts of conditional probability and independence. In this study, a framework for understanding middle school students' thinking in probability is built by integrating the works of Jones et al. (1997), Polaki (2005), and Tarr and Jones (1997). As in those works, the depiction of levels of probabilistic thinking focuses on the concepts and skills relevant to middle school students. The concepts and skills considered necessary for middle school students were drawn from NCTM documents and NAEP frameworks.

  • PDF

A Probabilistic Detection Algorithm for Noiseless Group Testing

  • Seong, Jin-Taek
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.10
    • /
    • pp.1195-1200
    • /
    • 2019
  • This paper proposes a detection algorithm for group testing. Group testing is the problem of finding a very small number of defective samples among a large number of samples, and is closely related to compressed sensing. We define a noiseless group testing problem and propose a probabilistic algorithm for detecting the defective samples. The proposed algorithm exchanges extrinsic probabilities between the input and output signals so that the posterior probability of the output signal is maximized. Defective samples are then found by simulating the detection algorithm on the group testing problem. The simulation results are compared with an information-theoretic lower bound to see how the failure probability varies with the input and output signal sizes.
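
The paper's probabilistic message-passing detector is not reproduced here, but the noiseless setting it targets can be illustrated with the much simpler COMP decoder, a standard group testing baseline rather than the authors' algorithm; the pool size, number of defectives, and matrix density below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 3, 30                 # items, defectives, tests (hypothetical)
defective = np.zeros(n, bool)
defective[rng.choice(n, k, replace=False)] = True

# Random Bernoulli pooling matrix: A[t, i] = True if item i is in test t.
A = rng.random((m, n)) < 0.1
y = (A & defective).any(axis=1)      # noiseless OR outcome of each test

# COMP decoding: any item appearing in at least one negative test is
# definitely non-defective; everything else is declared defective.
in_negative_test = A[~y].any(axis=0)
estimate = ~in_negative_test
```

In the noiseless model COMP never misses a true defective; it can only over-declare, which is why more refined probabilistic detectors like the paper's can approach the information-theoretic lower bound more closely.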

Fast Bayesian Inversion of Geophysical Data

  • Oh, Seok-Hoon;Kwon, Byung-Doo;Nam, Jae-Cheol;Kee, Duk-Kee
    • Journal of the Korean Geophysical Society
    • /
    • v.3 no.3
    • /
    • pp.161-174
    • /
    • 2000
  • Bayesian inversion is a stable approach for inferring subsurface structure from the limited data of geophysical exploration. Because field data and the modeling process are finite and discrete, some uncertainties are inherent in geophysical inversion, so a probabilistic approach is required. The Bayesian framework provides the theoretical basis for confidence and uncertainty analysis of the inference. However, most Bayesian inversions require high-dimensional integration, demanding massive computation such as Monte Carlo integration. Although such methods seem well suited to geophysical problems, which are highly nonlinear, promptness and convenience are demanded in field work. In this study, by applying a Gaussian approximation to the observed data and the a priori information, a fast Bayesian inversion scheme is developed and applied to model problems with electric well logging and dipole-dipole resistivity data. Each covariance matrix is derived by geostatistical methods, and optimization yields the maximum a posteriori solution. In particular, the a priori information is evaluated by cross-validation. Finally, an uncertainty analysis of the resistivity structure is performed by simulating the a posteriori covariance matrix.

  • PDF
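
The Gaussian approximation at the core of such fast schemes reduces the inversion to linear algebra: with Gaussian data noise and a Gaussian prior, the posterior is Gaussian, with a closed-form MAP estimate and a posteriori covariance. A minimal linear-forward sketch; the operator G, noise level, and prior are hypothetical stand-ins for the geophysical forward model:

```python
import numpy as np

rng = np.random.default_rng(1)
m_true = np.array([1.0, -0.5, 2.0])        # "true" model parameters
G = rng.normal(size=(20, 3))               # hypothetical linear forward operator
d = G @ m_true + 0.05 * rng.normal(size=20)

Cd = 0.05**2 * np.eye(20)                  # data covariance (Gaussian noise)
Cm = np.eye(3)                             # a priori model covariance
m0 = np.zeros(3)                           # a priori mean

# Under the Gaussian approximation, the posterior is Gaussian with:
#   post_cov = (G^T Cd^-1 G + Cm^-1)^-1
#   m_map    = post_cov (G^T Cd^-1 d + Cm^-1 m0)
Cd_inv, Cm_inv = np.linalg.inv(Cd), np.linalg.inv(Cm)
post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)   # a posteriori covariance
m_map = post_cov @ (G.T @ Cd_inv @ d + Cm_inv @ m0)   # MAP estimate
```

The diagonal of `post_cov` quantifies the parameter uncertainties without any high-dimensional Monte Carlo integration, which is what makes the Gaussian approximation fast.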

Probability-based IoT Management Model Using Blockchain to Expand Multilayered Networks

  • Jeong, Yoon-Su
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.4
    • /
    • pp.33-39
    • /
    • 2020
  • Interest in 5G communication security has been growing recently amid rising expectations for 5G technology, which offers faster speed and greater stability than LTE. However, because 5G spans disparate areas, it does not yet fully address security issues. This paper proposes a blockchain-based IoT management model to efficiently authenticate users of IoT in 5G. To efficiently fuse the authentication of IoT users with probabilistic theory and the physical structure, the proposed model uses two random keys in reverse directions at different layers, so that two-way authentication is achieved between the managers of the layers. After IoT users are authenticated in the 5G environment, the proposed model stratifies the layer information of IoT devices on a probabilistic basis, assigns weights to it, and applies a blockchain between grouped IoT devices. In particular, the proposed model performs better than existing blockchain schemes because it divides the IoT network into layered, multilayered networks.

A Critical Evaluation of Dichotomous Choice Responses in Contingent Valuation Method

  • Eom, Young Sook
    • Environmental and Resource Economics Review
    • /
    • v.20 no.1
    • /
    • pp.119-153
    • /
    • 2011
  • This study reviews various aspects of the model formulation process for dichotomous choice responses in the contingent valuation method (CVM), which has been increasingly used in the preliminary feasibility tests of Korean public investment projects. The theoretical review emphasizes consistency between the WTP estimation process and the WTP measurement process. The empirical analysis suggests that the two common parametric models for dichotomous choice responses (RUM and RWTP) and the two commonly used probability distributions for the random component (probit and logit) result in almost the same empirical WTP distributions, as long as the WTP function is specified as linear in the bid amount. However, the efficiency gain of double-bounded (DB) over single-bounded (SB) responses was supported only on the grounds that the two CV responses are derived from the same WTP distribution. Moreover, for the exponential WTP function, which guarantees non-negative WTP measures, the sample mean WTP was quite different from the median WTP when the scale parameter of the WTP function turned out to be large.

  • PDF
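
For the linear-in-bid logit case the abstract discusses, WTP estimation from single-bounded yes/no responses reduces to a logistic regression of the response on the bid amount, with mean (= median) WTP recovered as -intercept/slope. A simulated sketch under assumptions; all numbers are hypothetical, and IRLS is written out to avoid external dependencies:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
true_mean_wtp = 30.0
bids = rng.choice([10, 20, 30, 40, 50], size=n).astype(float)

# RWTP model: WTP_i = mu + e_i with e_i ~ logistic(scale s); answer "yes"
# iff WTP_i exceeds the offered bid.
s = 8.0
wtp = true_mean_wtp + rng.logistic(scale=s, size=n)
yes = (wtp > bids).astype(float)

# P(yes | bid) is logistic in (a + b*bid); fit by Newton/IRLS, then mu = -a/b.
X = np.column_stack([np.ones(n), bids])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (yes - p))

mu_hat = -beta[0] / beta[1]   # mean = median WTP under the linear logit model
```

Because the linear logit WTP distribution is symmetric, mean and median coincide; the abstract's point is that this convenient equality breaks down for the exponential WTP specification when the scale parameter is large.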

Reduction of Approximate Rules Based on Probabilistic Rough Sets

  • Kwon, Eun-Ah;Kim, Hong-Gi
    • The KIPS Transactions:PartD
    • /
    • v.8D no.3
    • /
    • pp.203-210
    • /
    • 2001
  • These days, data are being collected and accumulated in a wide variety of fields. The stored data themselves form an information system that helps us make decisions. An information system includes both necessary and unnecessary attributes, so many algorithms have been developed for finding useful patterns in the data and for approximately classifying new objects. We are interested in simple, understandable rules that can represent useful patterns. In this paper, we propose an algorithm, based on probabilistic rough set theory, that reduces the information in the system to a minimum. The proposed algorithm uses a value that tolerates some loss of classification accuracy; this tolerance helps minimize the conditional attributes needed to classify a new object, with the advantage of reducing the time needed to generalize rules. We evaluate the proposed algorithm on the IRIS data and the Wisconsin Breast Cancer data. The experimental results show that the algorithm retrieves a small reduct and minimizes rule size under the tolerated classification rate.

  • PDF
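
The idea of dropping conditional attributes while tolerating a bounded loss of classification quality can be sketched as a greedy reduct search on a toy decision table. This is an illustrative variable-precision-style sketch, not the paper's algorithm; the quality measure is the standard rough-set positive-region ratio.

```python
def quality(data, labels, attrs):
    """Fraction of objects in the positive region of the partition induced
    by `attrs`, i.e. objects whose equivalence class is label-pure."""
    blocks = {}
    for row, y in zip(data, labels):
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(y)
    pure = sum(1 for row, y in zip(data, labels)
               if len(blocks[tuple(row[a] for a in attrs)]) == 1)
    return pure / len(data)

def reduce_attrs(data, labels, tol=0.95):
    """Greedily drop conditional attributes while the classification
    quality stays at or above the tolerated rate `tol`."""
    attrs = list(range(len(data[0])))
    for a in list(attrs):
        trial = [x for x in attrs if x != a]
        if trial and quality(data, labels, trial) >= tol:
            attrs = trial
    return attrs

# Toy table: attributes 0 and 1 determine the label; attribute 2 is noise.
data = [(0, 0, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1), (0, 0, 1), (1, 1, 0)]
labels = [0, 1, 1, 0, 0, 0]
reduct = reduce_attrs(data, labels)   # the noisy attribute is dropped
```

Rules generated over the reduct condition on fewer attributes, which is the source of the simpler, more general rules the abstract reports.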

Calculation of Probabilistic Damage Stability Based on Grid Model

  • Jong-Ho Nam;Won-Don Kim;Kwang-Wook Kim
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.31 no.1
    • /
    • pp.14-21
    • /
    • 1994
  • Studies on the stability of damaged ships have been carried out continuously to prevent the frequent damage or sinkings that cause large losses of life and property. For dry cargo ships, continuing losses have resulted in new probabilistic damage stability regulations: IMO has developed requirements for the subdivision and damage stability of dry cargo ships based on probabilistic concepts. Calculating probabilistic damage stability is a complicated, iterative job, so computer programs are indispensable. In this research, the probabilistic damage stability calculation was programmed according to the new requirements, and the results were compared with those of foreign packages. A new algorithm using a grid model of the transverse section was introduced to reduce the effort of preparing input data for damage scenarios and, as a result, brought a significant improvement in efficiency and performance.

  • PDF
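
At the heart of the probabilistic concept the abstract refers to, an attained subdivision index is accumulated over damage cases, A = sum of p_i * s_i, and compared against a required index R. A toy check with purely hypothetical numbers (real p and s factors come from the regulations and the hull geometry):

```python
# p[i]: probability that damage case i occurs (flooding of that compartment
#       group); s[i]: probability of surviving that flooding.
p = [0.30, 0.25, 0.20, 0.15, 0.10]
s = [1.00, 0.95, 0.80, 0.60, 0.40]

A = sum(pi * si for pi, si in zip(p, s))   # attained subdivision index
R = 0.70                                   # hypothetical required index
passes = A >= R                            # the subdivision is acceptable iff A >= R
```

The iterative burden the abstract mentions comes from evaluating s for every damage case at every draught, which is where the grid model's cheaper damage-scenario input pays off.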