• Title/Summary/Keyword: Bayes' Theorem

Forecasting Demand of 5G Internet of things based on Bayesian Regression Model (베이지안 회귀모델을 활용한 5G 사물인터넷 수요 예측)

  • Park, Kyung Jin;Kim, Taehan
    • Journal of Information Technology Applications and Management / v.26 no.2 / pp.61-73 / 2019
  • In 2019, 5G mobile communication technology will be commercialized. From the viewpoint of technological innovation, 5G service can be applied to other industries or developed further. It is therefore important to measure the demand for Internet of things (IoT) services, because IoT is predicted to be widely commercialized in the 5G era and its demand greatly affects the economic value of the 5G industry. In this paper, we applied a Bayesian method to a regression model to estimate the demand for 5G IoT services, wearable services in particular. As a result, we confirmed that the Bayesian regression model is closer to the actual values than the existing regression model. These findings can be utilized for predicting the future demand of new industries.
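
A back-of-the-envelope illustration of the kind of Bayesian regression update the paper relies on is sketched below. This is not the authors' model: the conjugate Gaussian prior, the known noise variance, and all data values are assumptions made purely for the example.

```python
import numpy as np

# Toy demand history (hypothetical): year index vs. demand in arbitrary units.
X = np.array([[1, 0], [1, 1], [1, 2], [1, 3]], dtype=float)  # [bias, year]
y = np.array([1.0, 1.8, 3.1, 4.2])                           # observed demand

sigma2 = 0.25          # assumed known observation-noise variance
prior_cov = np.eye(2)  # N(0, I) prior on [intercept, slope]

# Conjugate posterior: Sigma_n = (S0^-1 + X'X / s2)^-1, mu_n = Sigma_n X'y / s2
post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + X.T @ X / sigma2)
post_mean = post_cov @ (X.T @ y) / sigma2

# Predictive mean and variance for the next period (year index 4).
x_new = np.array([1.0, 4.0])
pred_mean = x_new @ post_mean
pred_var = sigma2 + x_new @ post_cov @ x_new
print(f"forecast: {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f} (1 sd)")
```

Unlike a point forecast from ordinary least squares, the posterior predictive variance quantifies how uncertain the forecast is, which is the practical advantage the abstract reports.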

Research on optimal port cargo vehicle arrival scheduling system using Monte Carlo simulation, AlphaGo Zero, and Bayes' theorem (몬테카를로 시뮬레이션, 알파고 제로, 베이즈 정리를 이용한 최적의 항만 화물차 입항 스케줄링 시스템에 대한 연구)

  • Min-Gyeong Kim;Sua Park;Hae-Young Lee;Na-Young Kim;Sang-Oh Yoo
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1096-1097 / 2023
  • To address port traffic congestion, this study presents mid-to-long-term and real-time scheduling models that consider, in a new way, both optimization factors and factors related to negotiation between truck drivers and terminals. The mid-to-long-term scheduling model was implemented using Monte Carlo simulation, and the real-time scheduling model using the principles of AlphaGo Zero and Bayes' theorem. Experimental results demonstrated that the proposed AlphaGo Zero-based real-time scheduling system minimizes delay, dramatically reducing the average truck delay time from 30 minutes to 4 minutes. The code for the experiments is available at: https://github.com/yulleta/Application_of_AlphaGo-Zero_to_port_arrival_scheduling
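
The abstract does not detail how Bayes' theorem enters the real-time scheduler, so the sketch below is only a hedged guess at the general pattern: updating a belief that an arrival slot is congested as truck check-in reports stream in. Every probability here is invented for illustration.

```python
# Hypothetical Bayes update: belief that a berth slot is "congested"
# given noisy early check-in reports. All numbers are illustrative.
p_congested = 0.30                  # prior, e.g., from a mid-term model
p_late_if_congested = 0.80          # P(truck reports late | congested)
p_late_if_free = 0.10               # P(truck reports late | not congested)

for report_late in [True, True, False]:   # stream of check-in reports
    like_c = p_late_if_congested if report_late else 1 - p_late_if_congested
    like_f = p_late_if_free if report_late else 1 - p_late_if_free
    evidence = like_c * p_congested + like_f * (1 - p_congested)
    p_congested = like_c * p_congested / evidence   # Bayes' theorem
    print(f"report late={report_late}: P(congested)={p_congested:.3f}")
```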

Estimating the Likelihood of Malignancy in Solitary Pulmonary Nodules by Bayesian Approach (Bayes식 접근법에 의한 고립성 폐결절의 악성도 예측)

  • Shin, Kyeong-Cheol;Chung, Jin-Hong;Lee, Kwan-Ho;Kim, Chang-Ho;Park, Jae-Yong;Jung, Tae-Hoon;Han, Sung-Beom;Jeon, Young-Jun
    • Tuberculosis and Respiratory Diseases / v.47 no.4 / pp.498-506 / 1999
  • Background : The causes of a solitary pulmonary nodule are many, but the main concern is whether the nodule is benign or malignant. Because a solitary pulmonary nodule is the initial manifestation of the majority of lung cancers, accurate clinical and radiologic interpretation is important. Bayes' theorem is a simple method of combining clinical and radiologic findings to estimate the probability that a nodule in an individual patient is malignant. We estimated the probability of malignancy of solitary pulmonary nodules with specific combinations of features by a Bayesian approach. Method : One hundred and eighty patients with solitary pulmonary nodules were identified from a multi-center analysis. The hospital records of these patients were reviewed, and patient age, smoking history, original radiologic findings, and diagnosis of the solitary pulmonary nodule were recorded. The diagnosis was established pathologically in all patients. We used Bayes' theorem to devise a simple scheme for estimating the likelihood that a solitary pulmonary nodule is malignant based on radiologic and clinical characteristics. Results : Among patient characteristics, the probability of malignancy increases with advancing age, peaking in patients older than 66 years of age (LR : 3.64), and is higher in patients with a smoking history of more than 46 pack-years (LR : 8.38). Among radiologic features, the likelihood ratios increased with increasing nodule size and with a lobulated or spiculated margin. Conclusion : The likelihood ratios of malignancy may improve the accuracy of the estimated probability of malignancy and can guide the management of solitary pulmonary nodules.
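
The odds form of Bayes' theorem behind such schemes is easy to reproduce. The sketch below combines the two likelihood ratios quoted in the abstract (LR 3.64 for age over 66, LR 8.38 for more than 46 pack-years); the 20% prior prevalence of malignancy is an invented figure used only for illustration.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * product of LRs.
# LR values 3.64 and 8.38 are from the abstract; the 20% prior prevalence
# of malignancy is an assumed, illustrative figure.
prior_prob = 0.20
lrs = [3.64, 8.38]   # age > 66 years, smoking > 46 pack-years

odds = prior_prob / (1 - prior_prob)
for lr in lrs:
    odds *= lr
posterior_prob = odds / (1 + odds)
print(f"P(malignant | findings) = {posterior_prob:.2f}")   # ~0.88
```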

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, much of it written in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses; as a result, it is very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, producing incorrect results that are far from users' intentions. Even though much progress has been made over the years in enhancing the performance of search engines, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method which automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, thereby avoiding expensive sense-tagging processes. It tests the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiments, the dictionary and the corpus were evaluated both combined and separately, using cross validation. Only nouns, the target of word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, and 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created, and the terms used in creating them were added to the named-entity dictionary of a Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense-vector model built during pre-processing, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. The experiments show that better precision and recall are obtained with the merged corpus, suggesting the method can practically enhance the performance of Internet search engines and support more accurate interpretation of sentences in natural language processing tasks pertinent to search, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem; it assumes that all features are independent given the sense. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations and/or partial combinations of the senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
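
As a hedged, toy-scale sketch of the Naïve Bayes disambiguation step described above (not the paper's actual pipeline), the code below picks a sense by comparing log P(sense) plus the sum of log P(word | sense) over context words, with add-one smoothing. The tiny sense-tagged "corpus" is invented.

```python
import math
from collections import Counter, defaultdict

# Invented toy corpus: context words for two senses of "bank".
tagged = [
    ("bank_river", ["water", "fish", "shore"]),
    ("bank_river", ["river", "shore", "mud"]),
    ("bank_money", ["loan", "money", "account"]),
    ("bank_money", ["money", "deposit", "teller"]),
]

sense_counts = Counter(s for s, _ in tagged)
word_counts = defaultdict(Counter)
vocab = set()
for sense, words in tagged:
    word_counts[sense].update(words)
    vocab.update(words)

def classify(context):
    best, best_lp = None, -math.inf
    for sense, n in sense_counts.items():
        lp = math.log(n / len(tagged))            # log prior
        total = sum(word_counts[sense].values())
        for w in context:                         # add-one-smoothed likelihoods
            lp += math.log((word_counts[sense][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

print(classify(["money", "deposit"]))   # -> bank_money
```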

Reliability Based Pile Bearing Capacity Evaluation (신뢰도에 근거한 말뚝의 지지력 평가)

  • Lee, In-Mo;Jo, Guk-Hwan;Lee, Jeong-Hak
    • Geotechnical Engineering / v.11 no.1 / pp.9-22 / 1995
  • The purpose of this study is to propose safety factors for pile bearing capacity based on reliability analysis. Each prediction method involves various degrees of uncertainty. To account for these uncertainties in a systematic way, the ratios of the bearing capacity measured in pile load tests to the predicted bearing capacity are represented in the form of a probability density function. The safety factor for each design method is obtained so that the probability of pile foundation failure is less than 10^-3. Bayes' theorem is applied by taking the distribution from static formulae as the prior and the distribution from dynamic formulae or wave-equation-based methods as the likelihood; the two are combined to obtain a posterior with reduced uncertainty. The results of this study show that static formulae of pile bearing capacity using the S.P.T. N-value, as well as dynamic formulae, are highly unreliable and require a safety factor of more than 7.4; wave equation analysis using the PDA (Pile Driving Analyzer) system is the most reliable, with a safety factor close to 2.7. The safety factor could be reduced by a certain amount by adopting the Bayesian methodology in pile design.
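
As a hedged numerical sketch of the prior-likelihood combination described above (not the paper's actual distributions), the code below fuses two normal estimates of a capacity ratio, one from a static formula (prior) and one from a dynamic method (likelihood), into a lower-variance posterior. All means and variances are invented.

```python
import numpy as np

# Normal-normal conjugate update: both the prior (static formula) and the
# likelihood (dynamic / wave-equation method) are modeled as Gaussians over
# the measured-to-predicted capacity ratio. All numbers are illustrative.
mu_prior, var_prior = 1.10, 0.40**2   # static formula: very uncertain
mu_like, var_like = 0.95, 0.15**2     # PDA-type measurement: tighter

var_post = 1.0 / (1.0 / var_prior + 1.0 / var_like)
mu_post = var_post * (mu_prior / var_prior + mu_like / var_like)

print(f"posterior: mean={mu_post:.3f}, sd={np.sqrt(var_post):.3f}")
# The posterior sd is smaller than either input sd: reduced uncertainty.
```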

Non-chemical Risk Assessment for Lifting and Low Back Pain Based on Bayesian Threshold Models

  • Pandalai, Sudha P.;Wheeler, Matthew W.;Lu, Ming-Lun
    • Safety and Health at Work / v.8 no.2 / pp.206-211 / 2017
  • Background: Self-reported low back pain (LBP) has been evaluated in relation to material-handling lifting tasks, but little research has focused on relating quantifiable stressors to LBP at the individual level. The National Institute for Occupational Safety and Health (NIOSH) Composite Lifting Index (CLI) has been used to quantify stressors for lifting tasks. A chemical exposure can be readily used as an exposure metric or stressor for chemical risk assessment (RA); defining and quantifying non-chemical lifting stressors and related adverse responses is more difficult. Stressor-response models appropriate for CLI and LBP associations do not fit easily into common chemical RA modeling techniques (e.g., Benchmark Dose methods), so different approaches were tried. Methods: This work used prospective data from 138 manufacturing workers to consider the linkage of the occupational stressor of material lifting to LBP. The final model used a Bayesian random-threshold approach to estimate the probability of an increase in LBP as a threshold step function. Results: Using maximal and mean CLI values, a significant increase in the probability of LBP was found for values above 1.5. Conclusion: A risk of LBP associated with CLI values > 1.5 existed in this worker population. The relevance for other populations requires further study.
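
As a hedged sketch of a Bayesian threshold step-function model in the spirit of the study above (not NIOSH's actual model), the code below puts a uniform grid prior on an unknown CLI threshold, assumes the probability of LBP jumps from a low to a high level at the threshold, and computes the posterior over the threshold from simulated worker data. All rates and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated (invented) data: CLI values and binary LBP outcomes, with a
# true threshold at 1.5: P(LBP) = 0.1 below it, 0.4 above it.
cli = rng.uniform(0.5, 3.0, size=138)
lbp = rng.random(138) < np.where(cli > 1.5, 0.4, 0.1)

p_low, p_high = 0.1, 0.4                     # assumed step levels
grid = np.linspace(0.6, 2.9, 100)            # candidate thresholds, flat prior

log_post = np.zeros_like(grid)
for i, t in enumerate(grid):
    p = np.where(cli > t, p_high, p_low)     # step-function risk
    log_post[i] = np.sum(np.where(lbp, np.log(p), np.log(1 - p)))

post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"posterior mean threshold ~= {np.sum(grid * post):.2f}")
```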

Influence of decorrelation on phase sensitivity in a Mach-Zehnder interferometer (매개하향변환 과정에서 발생하는 두광자의 상관관계가 Mach-Zehnder 간섭계의 분해능에 미치는 영향)

  • 김헌오;고정훈;박구동;김태수
    • Korean Journal of Optics and Photonics / v.12 no.4 / pp.251-256 / 2001
  • The influence of decorrelation on phase sensitivity is studied with a computer simulation based on Bayes' theorem, for the case where correlated photons produced by parametric down-conversion are incident on a Mach-Zehnder interferometer. Although the down-converted photons show a perfect correlation in the production process, this degree of correlation may be decreased by reflection, absorption, and scattering during propagation. It is found that this decorrelation degrades the phase sensitivity, and that the sensitivity is related to the detector quantum efficiency. The results show that the phase sensitivity is better when the phase difference between the two paths is smaller.
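
As a hedged sketch of Bayesian phase estimation in an idealized Mach-Zehnder interferometer (ignoring the decorrelation and detector-efficiency effects that are the paper's actual subject), the code below updates a grid posterior over the phase from simulated single-photon detections; the true phase and photon count are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealized Mach-Zehnder: a single photon exits port A with probability
# (1 + cos(phi)) / 2. This is only an illustrative skeleton.
true_phi = 0.8
n_photons = 500
p_a = (1 + np.cos(true_phi)) / 2
hits_a = (rng.random(n_photons) < p_a).sum()   # simulated port-A counts

phi = np.linspace(0.01, np.pi - 0.01, 400)     # uniform prior on the grid
p = (1 + np.cos(phi)) / 2
log_post = hits_a * np.log(p) + (n_photons - hits_a) * np.log(1 - p)

post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"true phi={true_phi:.3f}, MAP estimate={phi[np.argmax(post)]:.3f}")
```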

New Inference for a Multiclass Gaussian Process Classification Model using a Variational Bayesian EM Algorithm and Laplace Approximation

  • Cho, Wanhyun;Kim, Sangkyoon;Park, Soonyoung
    • IEIE Transactions on Smart Processing and Computing / v.4 no.4 / pp.202-208 / 2015
  • In this study, we propose a new inference algorithm for a multiclass Gaussian process classification model using a variational EM framework and the Laplace approximation (LA) technique. This is performed in two steps, called expectation and maximization. First, in the expectation step (E-step), using Bayes' theorem and the LA technique, we derive the approximate posterior distribution of the latent function, which indicates the possibility that each observation belongs to a certain class in the Gaussian process classification model. In the maximization step, we compute the maximum likelihood estimators for the hyper-parameters of the covariance matrix needed to define the prior distribution of the latent function, using the posterior distribution derived in the E-step. These steps are repeated iteratively until a convergence condition is satisfied. Moreover, we conducted experiments using synthetic data and the Iris data in order to verify the performance of the proposed algorithm. Experimental results reveal that the proposed algorithm shows good performance on these datasets.
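
The Laplace approximation step is the easiest piece to illustrate in isolation. The sketch below is a deliberately minimal one-dimensional stand-in, a Gaussian prior on a single latent value with one Bernoulli observation, not the paper's multiclass Gaussian process E-step.

```python
import numpy as np

# 1-D Laplace approximation: prior f ~ N(0, s2) and a single Bernoulli
# observation y with P(y=1 | f) = sigmoid(f). Newton's method finds the
# posterior mode; the curvature there gives the Gaussian approximation.
s2, y = 4.0, 1.0
sig = lambda f: 1.0 / (1.0 + np.exp(-f))

f = 0.0
for _ in range(20):                       # Newton iterations on log posterior
    grad = (y - sig(f)) - f / s2
    hess = -sig(f) * (1 - sig(f)) - 1.0 / s2
    f -= grad / hess

var_lap = -1.0 / hess                     # inverse negative curvature at mode
print(f"Laplace approx: N(mean={f:.3f}, var={var_lap:.3f})")
```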

Emotion Classification Using EEG Spectrum Analysis and Bayesian Approach (뇌파 스펙트럼 분석과 베이지안 접근법을 이용한 정서 분류)

  • Chung, Seong Youb;Yoon, Hyun Joong
    • Journal of Korean Society of Industrial and Systems Engineering / v.37 no.1 / pp.1-8 / 2014
  • This paper proposes an emotion classifier for EEG signals based on Bayes' theorem and machine learning using a perceptron convergence algorithm. The emotions are represented on the valence and arousal dimensions. Fast Fourier transform spectrum analysis is used to extract features from the EEG signals. To verify the proposed method, we use an open database for emotion analysis using physiological signals (DEAP) and compare the method with C-SVC, a support vector machine variant. Emotions are defined as two-level and three-level classes in both the valence and arousal dimensions. For the two-level case, the accuracy of the valence and arousal estimation is 67% and 66%, respectively. For the three-level case, the accuracy is 53% and 51%, respectively. Compared with the best case of the C-SVC, the proposed classifier gave 4% and 8% more accurate estimations of valence and arousal for the two-level classes. In the estimation of three-level classes, the proposed method showed performance similar to the best case of the C-SVC.
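
As a hedged sketch of the Bayes-based pipeline described above (not the authors' exact classifier or feature set), the code below extracts band-power features from synthetic "EEG" epochs with NumPy's FFT and classifies a binary valence label with a Gaussian naive-Bayes rule. All signals and labels are simulated.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
fs, n = 128, 256                       # sampling rate (Hz), samples per epoch

def band_power(x, lo, hi):
    """Mean FFT power of signal x in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(x.shape[-1], d=1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    return power[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)

# Simulated epochs: "high valence" epochs get extra alpha-band (10 Hz) power.
labels = rng.integers(0, 2, size=200)
t = np.arange(n) / fs
epochs = rng.normal(size=(200, n)) + \
    labels[:, None] * 2.0 * np.sin(2 * np.pi * 10 * t)

X = np.column_stack([band_power(epochs, 4, 8),     # theta
                     band_power(epochs, 8, 13),    # alpha
                     band_power(epochs, 13, 30)])  # beta
clf = GaussianNB().fit(X[:150], labels[:150])
print("held-out accuracy:", (clf.predict(X[150:]) == labels[150:]).mean())
```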

Boundary-adaptive Despeckling: Simulation Study

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.25 no.3 / pp.295-309 / 2009
  • In this study, an iterative maximum a posteriori (MAP) approach using a Bayesian model with a Markov random field (MRF) is proposed for despeckling images that contain speckle. The image process is assumed to combine the random fields associated with the observed intensity process and the image texture process, respectively. The objective measure for determining the optimal restoration of this "double compound stochastic" image process is based on Bayes' theorem, and the MAP estimation employs Point-Jacobian iteration to obtain the optimal solution. In the proposed algorithm, the MRF is used to quantify the spatial interaction probabilistically, that is, to provide a type of prior information on the image texture, and a neighbor window of any size is defined to supply contextual information on a local region. However, a window of fixed size may draw wrong information into the estimate from adjacent regions with different characteristics at pixels close to or on a boundary. To overcome this problem, the new method is designed to use less information from more distant neighbors as a pixel gets closer to a boundary, reducing the chance of involving pixel values from adjacent regions with different characteristics. The proximity to a boundary is estimated using a non-uniformity measure based on the standard deviation of the local region. The new scheme has been extensively evaluated using simulated data, and the experimental results show a considerable improvement in despeckling images that contain speckle.
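
A structural sketch of iterative MAP restoration with an MRF smoothness prior is given below. It uses a simple quadratic prior and plain Jacobi updates, and omits the paper's boundary-adaptive weighting and its speckle likelihood; the noise model and test image are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal MAP smoothing with a quadratic MRF prior, solved by Jacobi
# iteration. Energy: sum (x - y)^2 + lam * sum over 4-neighbors (x_i - x_j)^2.
truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0
y = truth * rng.gamma(4.0, 1 / 4.0, truth.shape)      # mean-1 speckle-like noise

lam, x = 2.0, y.copy()
for _ in range(100):
    # Sum of the four neighbors (edge pixels reuse themselves via padding).
    p = np.pad(x, 1, mode="edge")
    nbr = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    # Jacobi update: each pixel set to the minimizer of its local energy.
    x = (y + lam * nbr) / (1 + 4 * lam)

print(f"noisy MSE={np.mean((y - truth)**2):.4f}, "
      f"restored MSE={np.mean((x - truth)**2):.4f}")
```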