• Title/Summary/Keyword: subjective probability model

Search Results: 48

Influence of Cushioning Variables in the Workplace and in the Family on the Probability of Suffering Stress

  • Gonzalo, David Cardenas
    • Safety and Health at Work
    • /
    • v.7 no.3
    • /
    • pp.175-184
    • /
    • 2016
  • Stress at work and in the family is a very common issue in our society that generates many health-related problems. During recent years, numerous studies have sought to define the term stress, raising many contradictions that various authors have studied. Other authors have attempted to establish some criteria, in subjective and not very quantitative ways, in an attempt to reduce and even to eliminate stressors and their effects at work and in the family context. The purpose of this study was to quantify so-called cushioning variables, such as control, social support, home/work life conciliation, and even sports and leisure activities, with the purpose of, as much as possible, reducing the negative effects of stress, which seriously affects the health of workers. The study employs data from the Fifth European Working Conditions Survey, in which nearly 44,000 interviewees from 34 countries in the European Union participated. We constructed a probabilistic model based on a Bayesian network, using variables from both the workplace and the family, the aforementioned cushioning variables, as well as the variable stress. If action is taken on the above variables, then the probabilities of suffering high levels of stress may be reduced. Such action may improve the quality of life of people at work and in the family.
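The Bayesian-network idea in the abstract, that raising the probability of cushioning variables such as control and social support lowers the marginal probability of high stress, can be sketched as follows. All probabilities here are hypothetical placeholders, not the paper's fitted values.

```python
# Minimal discrete Bayesian-network sketch: two cushioning parents -> stress.
# The conditional probability table below is hypothetical, for illustration only.
CPT_HIGH_STRESS = {
    (True,  True):  0.15,   # support present, control present
    (True,  False): 0.35,
    (False, True):  0.40,
    (False, False): 0.70,   # neither cushioning variable present
}

def p_high_stress(p_support: float, p_control: float) -> float:
    """Marginal P(high stress), treating the two parents as independent."""
    total = 0.0
    for support in (True, False):
        for control in (True, False):
            weight = ((p_support if support else 1 - p_support)
                      * (p_control if control else 1 - p_control))
            total += weight * CPT_HIGH_STRESS[(support, control)]
    return total

# "Acting on the variables" = raising the cushioning probabilities.
baseline = p_high_stress(0.4, 0.4)   # little support/control
improved = p_high_stress(0.8, 0.8)   # after intervention
```

With these placeholder numbers the marginal risk drops from 0.456 to 0.244; the paper's actual network has more variables and data-driven tables.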

Development of Fuzzy Model for Analyzing Construction Risk Factors (건설공사의 리스크분석을 위한 퍼지평가모형 개발)

  • Park Seo-Young;Kang Leen-Seok;Kim Chang-Hak;Son Chang-Bak
    • Proceedings of the Korean Institute of Construction Engineering and Management
    • /
    • autumn
    • /
    • pp.519-524
    • /
    • 2001
  • Recently, the Korean construction market has recognized the necessity of risk management; however, the application of a practical system is still limited on construction sites because the methodology for analyzing and quantifying construction risk and for identifying actual risk factors is not easy. This study suggests a risk management method based on fuzzy theory, which uses the subjective knowledge of experts and linguistic values to analyze and quantify risk. The result of the study is expected to improve the accuracy of risk analysis because three factors (probability, impact, and frequency) for estimating membership functions are introduced to quantify each risk factor.
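As a sketch of how the three factors could be combined under fuzzy theory: triangular membership functions map each factor to a degree of membership in a "high risk" set, aggregated with a min t-norm. The set boundaries and the t-norm choice are illustrative assumptions, not the paper's calibrated model.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function supported on (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def high(v: float) -> float:
    """Degree to which a [0, 1]-scaled factor counts as 'high' (assumed shape)."""
    return tri(v, 0.4, 1.0, 1.6)   # right half never reached on [0, 1]

def risk_membership(probability: float, impact: float, frequency: float) -> float:
    """Aggregate the three factor memberships with a min t-norm."""
    return min(high(probability), high(impact), high(frequency))

degree = risk_membership(0.9, 0.7, 0.8)  # limited by the weakest factor (impact)
```

The min t-norm makes the weakest of the three factors the bottleneck; other aggregation operators (product, weighted average) are equally common in practice.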


Probabilistic Risk Assessment for Construction Projects (건설공사의 확률적 위험도분석평가)

  • 조효남;임종권;김광섭
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 1997.10a
    • /
    • pp.24-31
    • /
    • 1997
  • Recently, in Korea, demand for the establishment of systematic risk assessment techniques for construction projects has increased, especially after large failures occurred during construction, such as the New Haengju Bridge construction project, subway construction projects, and gas explosion accidents. Most existing risk analysis modeling techniques, such as Event Tree Analysis and Fault Tree Analysis, may not be suitable for realistic risk assessment of construction projects, because it is very complex and difficult to estimate occurrence frequency and failure probability precisely due to a lack of data on the various risks inherent in construction projects: natural disasters, financial and economic risks, political risks, and environmental risks, as well as design- and construction-related risks. Therefore, the main objective of this paper is to suggest a systematic probabilistic risk assessment model and to demonstrate an approach for probabilistic risk assessment using an advanced Event Tree Analysis that introduces fuzzy set theory concepts. Fuzzy Event Tree Analysis may be very useful for the systematic and rational risk assessment of real construction problems, because the approach can effectively deal with all the related construction risks in terms of linguistic variables that systematically incorporate experts' experience and subjective judgment.
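One way to see why fuzzy set theory helps an Event Tree Analysis when precise frequencies are unavailable: branch probabilities become triangular fuzzy numbers elicited from linguistic judgments, and a path probability is their component-wise product (a standard approximation for positive triangular fuzzy numbers). The numbers below are hypothetical.

```python
# Triangular fuzzy number as (low, mode, high); the component-wise product is the
# usual approximation when multiplying positive triangular fuzzy numbers.
def fuzzy_mul(p: tuple, q: tuple) -> tuple:
    return tuple(a * b for a, b in zip(p, q))

# Hypothetical linguistic estimates expressed as fuzzy probabilities:
initiating_event = (0.01, 0.02, 0.05)   # "rare" initiating-event frequency
barrier_failure  = (0.10, 0.20, 0.40)   # "unlikely" barrier failure

# Fuzzy probability of the accident path through both branches:
path = fuzzy_mul(initiating_event, barrier_failure)
```

The result carries the experts' uncertainty through the tree instead of forcing a single point estimate at each branch.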


Applying Theory of Planned Behavior to Examine Users' Intention to Adopt Broadband Internet in Lower-Middle Income Countries' Rural Areas: A Case of Tanzania

  • Sadiki Ramadhani Kalula;Mussa Ally Dida;Zaipuna Obeid Yonah
    • Journal of Information Science Theory and Practice
    • /
    • v.12 no.1
    • /
    • pp.60-76
    • /
    • 2024
  • Broadband Internet has proven to be vital for economic growth in developed countries. Developing countries have implemented several initiatives to increase their broadband access. However, its full potential can only be realized through adoption and use. With lower-middle-income countries accounting for the majority of the world's unconnected population, this study employs the theory of planned behavior (TPB) to investigate users' intentions to adopt broadband. Rural Tanzania was chosen as a case study. A cross-sectional study was conducted over three weeks, using 155 people from seven villages with the lowest broadband adoption rates. Non-probability voluntary response sampling was used to recruit the participants. Using the TPB constructs: attitude toward behavior (ATB), subjective norms (SN), and perceived behavioral control (PBC), ordinal regression analysis was employed to predict intention. Descriptive statistical analysis yielded mean scores (standard deviation) as 3.59 (0.46) for ATB, 3.34 (0.40) for SN, 3.75 (0.29) for PBC, and 4.12 (0.66) for intention. The model adequately described the data based on a comparison of the model with predictors and the null model, which revealed a substantial improvement in fit (p<0.05). Moreover, the predictors accounted for 50.3% of the variation in the intention to use broadband Internet, demonstrating the predictive power of the TPB constructs. Furthermore, the TPB constructs were all significant positive predictors of intention: ATB (β=1.938, p<0.05), SN (β=2.144, p<0.05), and PBC (β=1.437, p=0.013). The findings of this study provide insight into how behavioral factors influence the likelihood of individuals adopting broadband Internet and could guide interventions through policies meant to promote broadband adoption.
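A sketch of the ordinal (proportional-odds) regression the study uses, plugging in the reported coefficients for ATB, SN, and PBC; the cut-points below are hypothetical, since the abstract does not report them.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

BETA = {"ATB": 1.938, "SN": 2.144, "PBC": 1.437}   # coefficients from the abstract
CUTPOINTS = [18.0, 20.0, 22.0, 24.0]               # hypothetical thresholds

def intention_probs(atb: float, sn: float, pbc: float) -> list:
    """Category probabilities for a 5-point intention scale under the
    proportional-odds model: P(Y <= k) = sigmoid(theta_k - eta)."""
    eta = BETA["ATB"] * atb + BETA["SN"] * sn + BETA["PBC"] * pbc
    cum = [sigmoid(t - eta) for t in CUTPOINTS] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Evaluated at the reported mean scores of the three constructs:
probs = intention_probs(3.59, 3.34, 3.75)
```

Because all three coefficients are positive, raising any construct shifts the distribution toward higher intention categories, which is the substantive finding of the paper.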

A study on the vessel traffic safety assessment of Busan Harbor (부산항내 선박통항 안전성 평가에 관한 연구)

  • KIM, Won-Ouk;KIM, Dae-Hee;KIM, Seok-Jae
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.53 no.4
    • /
    • pp.423-429
    • /
    • 2017
  • As part of a plan to invigorate marine tourism, Busan City intends to operate a cruise ship inside the harbor, but the area is a narrow waterway with heavy traffic, so a safety evaluation was requested before actual navigation. In this study, a Ship Handling Simulation (SHS) assessment was conducted, as regulated by the Maritime Traffic Safety Audit Scheme (MTSAS) in compliance with the Marine Safety Law, along with a Maritime Traffic Risk Assessment System based on the Electronic Chart Display and Information System (ECDIS). The proximity, control, and subjective assessments prescribed by the Marine Safety Law were carried out using the SHS. In the proximity assessment, no probability of trespass was found. In the control assessment, the swept path was measured at 11.7 m on port entry and 11.5 m on port departure, both exceeding the 10.4 m beam of the model vessel; this was considered a marginal factor. In the navigator's subjective evaluation, there would be no difficulty in ship handling, provided particular attention is paid to the vessels moored near Busan Bridge and Yeongdo Bridge and to vessels approaching from the blind sea area when entering and departing the port. The Maritime Traffic Risk Assessment System rated the risk as [Cautious] while the vessel passed Busan Bridge and the curved area at 5 knots, changing to [Dangerous] 75 m before Busan Bridge. When the vessel passed Busan Bridge and the curved area at 10 knots and entered the narrow area, the level was [Dangerous], changing to [Very dangerous] 410 m before Busan Bridge. In conclusion, the vessel should maintain a speed of 5 knots to reduce the risk when passing this area.

No-reference Image Quality Assessment With A Gradient-induced Dictionary

  • Li, Leida;Wu, Dong;Wu, Jinjian;Qian, Jiansheng;Chen, Beijing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.288-307
    • /
    • 2016
  • Image distortions are typically characterized by degradations of structures. Dictionaries learned from natural images can capture the underlying structures in images, which are important for image quality assessment (IQA). This paper presents a general-purpose no-reference image quality metric using a GRadient-Induced Dictionary (GRID). A dictionary is first constructed based on gradients of natural images using K-means clustering. Then image features are extracted using the dictionary based on Euclidean-norm coding and max-pooling. A distortion classification model and several distortion-specific quality regression models are trained using the support vector machine (SVM) by combining image features with distortion types and subjective scores, respectively. To evaluate the quality of a test image, the distortion classification model is used to determine the probabilities that the image belongs to different kinds of distortions, while the regression models are used to predict the corresponding distortion-specific quality scores. Finally, an overall quality score is computed as the probability-weighted sum of the distortion-specific quality scores. The proposed metric can evaluate image quality accurately and efficiently using a small dictionary. Its performance is verified on public image quality databases. Experimental results demonstrate that the proposed metric generates quality scores highly consistent with human perception and outperforms state-of-the-art methods.
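The final fusion step the abstract describes, weighting distortion-specific quality scores by the classifier's distortion probabilities, is a simple expectation; the probabilities and scores below are made-up inputs for illustration.

```python
def overall_quality(distortion_probs: list, distortion_scores: list) -> float:
    """Probability-weighted sum of distortion-specific quality predictions."""
    assert abs(sum(distortion_probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    return sum(p * s for p, s in zip(distortion_probs, distortion_scores))

# e.g. the classifier says 70% JPEG, 20% blur, 10% noise (hypothetical values):
score = overall_quality([0.7, 0.2, 0.1], [62.0, 40.0, 55.0])
```

This soft weighting lets a test image that mixes distortion types borrow from several regressors instead of committing to a single hard classification.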

The Analysis of the Effects of Physical Activity on Impaired Fasting Glucose in Adults Over 20 Years of Age

  • Joo-Won Yoon
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.11 no.2
    • /
    • pp.181-196
    • /
    • 2023
  • Purpose: The purpose of this study was to investigate the effects of physical activity on impaired fasting glucose in adults aged 20 years or older. Methods: This study utilized raw data from the 8th National Health and Nutrition Examination Survey (2019~2021). The subjects were 5,344 adults aged 20 years or older who were confirmed to be free of diabetes. The control variables in the study model were health behavior characteristics (subjective health status, smoking, drinking), anthropometric characteristics (body mass index), and personal background characteristics (gender, age, income level, education level, marital status). For the analysis, the degree of physical activity was coded as dummy variables and a probit model was used. Results: Compared with quartile 1 of relative grip strength (grip strength divided by body mass index, kg/m2), the probability of fasting blood glucose falling in the normal range rather than the impaired range was significantly higher in quartile 2 (.05, p<.01), quartile 3 (.04, p<.01), and quartile 4 (.04, p<.01). In addition, among adults aged 20 or older with high levels of aerobic and anaerobic physical activity, fasting blood glucose was more likely to be in the normal range. Conclusion: Based on these results, it is suggested that diabetes should be managed through physical activity at the prediabetic stage, as prevention is as important as treatment. From a practical point of view, muscle strength measures such as grip strength can serve as a reliable indicator for identifying impaired fasting glucose.
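A hedged sketch of the probit specification: the probability of normal fasting glucose is the standard normal CDF of a linear index. The grip-quartile coefficients echo those reported in the abstract; the intercept and activity coefficient are hypothetical.

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Grip-quartile coefficients follow the abstract; the rest are placeholders.
COEF = {"const": -0.2, "grip_q2": 0.05, "grip_q3": 0.04, "grip_q4": 0.04,
        "active": 0.3}

def p_normal_glucose(grip_quartile: int, physically_active: bool) -> float:
    """Probit probability of fasting glucose falling in the normal range."""
    z = COEF["const"]
    if grip_quartile >= 2:                      # quartile 1 is the reference
        z += COEF[f"grip_q{min(grip_quartile, 4)}"]
    if physically_active:
        z += COEF["active"]
    return norm_cdf(z)
```

With any positive coefficients, higher grip-strength quartiles and more physical activity raise the predicted probability of normal fasting glucose, mirroring the study's direction of effect.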

A Comparative Study of Predictive Factors for Hypertension using Logistic Regression Analysis and Decision Tree Analysis

  • SoHyun Kim;SungHyoun Cho
    • Physical Therapy Rehabilitation Science
    • /
    • v.12 no.2
    • /
    • pp.80-91
    • /
    • 2023
  • Objective: The purpose of this study is to identify factors that affect the incidence of hypertension using logistic regression and decision tree analysis, and to build and compare predictive models. Design: Secondary data analysis study. Methods: We analyzed 9,859 subjects from the 2019 annual data of the Korean Health Panel provided by the Korea Institute for Health and Social Affairs and the National Health Insurance Service. Frequency analysis, chi-square tests, binary logistic regression, and decision tree analysis were performed on the data. Results: In the logistic regression analysis, those who were 60 years of age or older (odds ratio, OR=68.801, p<0.001), those who were divorced/widowed/separated (OR=1.377, p<0.001), those who graduated from middle school or below (OR=1, reference), those who did not walk at all (OR=1, reference), those who were obese (OR=5.109, p<0.001), and those with poor subjective health status (OR=2.163, p<0.001) were more likely to develop hypertension. In the decision tree, those over 60 years of age who were overweight or obese and had graduated from middle school or below had the highest probability of developing hypertension, at 83.3%. Logistic regression showed a specificity of 85.3% and sensitivity of 47.9%, while decision tree analysis showed a specificity of 81.9% and sensitivity of 52.9%. In classification accuracy, logistic regression and decision tree analysis achieved 73.6% and 72.6%, respectively. Conclusions: Both logistic regression and decision tree analysis were adequate to explain the predictive model, and both analysis methods can provide useful data for constructing a predictive model for hypertension.
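The sensitivity, specificity, and accuracy figures the study compares come straight from a confusion matrix; the counts below are illustrative, chosen only so the ratios match the logistic model's reported 47.9% sensitivity and 85.3% specificity.

```python
def classification_metrics(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity (recall on positives), specificity (recall on negatives),
    and overall classification accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts reproducing the logistic model's reported rates:
sens, spec, acc = classification_metrics(tp=479, fn=521, tn=853, fp=147)
```

Note that overall accuracy depends on the class balance as well as the two recall rates, which is why the study reports all three figures separately.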

An Analysis of Intuitive Thinking of Elementary Students in Mathematical Problem Solving Process (수학 문제해결 과정에 나타난 초등학생들의 직관적 사고 분석)

  • You, Dae-Hyun;Kang, Wan
    • Education of Primary School Mathematics
    • /
    • v.12 no.1
    • /
    • pp.1-20
    • /
    • 2009
  • The purposes of this study are to analyze elementary school students' intuitive thinking in the process of mathematical problem solving and to analyze the errors of intuitive thinking that arise in that process. According to these purposes, the research questions can be set up as follows: (1) What is the state of illumination of elementary school students' intuitive thinking in the process of mathematical problem solving? (2) What are the origins of errors caused by elementary school students' intuitive thinking in the process of mathematical problem solving? In this study, Bogdan and Biklen's qualitative research method was used. The subjects were 4 elementary school students. The data consisted of an 'Intuitive Thinking Test' and records of observations and interviews. The interview discourses were recorded by sound and video, then transcribed and analyzed in detail. The findings of this study were as follows: First, if elementary school students know an algorithm for a problem, they rely on solving it by the algorithm rather than by intuitive thinking. Second, their ability to solve problems with intuitive models is low; moreover, when they do solve a problem with an intuitive model, its self-evidence is low. Third, in the process of solving a problem, intuitive thinking can complement logical thinking. Last, with the concept of probability and probability problems, they are led into cognitive conflict because of subjective interpretation.


Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since the first introduction on Wall Street in 2008, the market size has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to expected returns calculated from past price data, and corner solutions are often found in which weight is allocated to only a few assets. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect?
Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastics %K, and the price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns make up the P matrix, and the probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted market portfolio and the equal-weighted market portfolio as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018. Our suggested intelligent views model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. The out-of-sample portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolios. The total return of the Black-Litterman portfolio over the 3-year period is 6.4%, the highest value; the maximum drawdown is -20.8%, the lowest value; and the Sharpe ratio, which measures the return-to-risk ratio, is the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
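The Black-Litterman blending step reduces, in the single-asset case, to a precision-weighted average of the implied equilibrium return and the (here, SVM-generated) view; a minimal sketch with hypothetical inputs:

```python
def black_litterman_1d(pi: float, tau_sigma2: float,
                       q: float, omega: float) -> float:
    """Posterior expected return for one asset.
    pi         - implied equilibrium return (from reverse optimization)
    tau_sigma2 - scaled prior variance of the equilibrium return
    q          - view return (e.g. produced by the SVM views model)
    omega      - view uncertainty (lower = more confident view)"""
    prior_precision = 1.0 / tau_sigma2
    view_precision = 1.0 / omega
    return ((prior_precision * pi + view_precision * q)
            / (prior_precision + view_precision))

# A +8% view held with low confidence nudges a 5% equilibrium return only slightly.
posterior = black_litterman_1d(pi=0.05, tau_sigma2=0.01, q=0.08, omega=0.04)
```

The same precision-weighting logic applies in the multi-asset matrix form, where the P and Q matrices described above encode which assets each view touches and with what confidence.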