• Title/Summary/Keyword: probability theory


Design of the Robust CV Control Chart using Location Parameter (위치모수를 이용한 로버스트 CV 관리도의 설계)

  • Chun, Dong-Jin;Chung, Young-Bae
    • Journal of Korean Society of Industrial and Systems Engineering / v.39 no.1 / pp.116-122 / 2016
  • Recently, production cycles in manufacturing have been getting shorter, and different types of product are produced on the same process line. In this case, a control chart based on the coefficient of variation (CV) is applicable to the process. The principle that random variables fall within three standard deviations of the mean applies to control charts that monitor a manufacturing process when the process data follow a normal distribution, and it applies to the CV control chart as well. The estimates ${\bar{x}}$ and $s$ used in the coefficient of variation are computed from all of the data, so the upper control limit, center line, and lower control limit are affected by abnormal values, and the chart's ability to detect assignable causes can suffer. The purpose of this study was to present a control chart that is more robust than the CV control chart for a normal process. To this end, the location parameters ${\bar{x_{\alpha}}}$ and $s_{\alpha}$ were used, and the resulting robust chart was named the Trim-CV control chart. The simulation results are summarized as follows. First, the P values, the probability of falling outside the control limits, were larger for the Trim-CV control chart than for the CV control chart in a normal process. Second, the ARL values (average run length) were smaller for the Trim-CV control chart than for the CV control chart in a normal process. In particular, the difference in performance between the two charts became clearer as the process change grew larger. Therefore, the Trim-CV control chart proposed in this paper would be a more efficient tool than the CV control chart in small-quantity batch production.

A Novel Redundant Data Storage Algorithm Based on Minimum Spanning Tree and Quasi-randomized Matrix

  • Wang, Jun;Yi, Qiong;Chen, Yunfei;Wang, Yue
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.227-247 / 2018
  • For intermittently connected wireless sensor networks deployed in harsh environments, sensor nodes may fail at any time due to internal or external causes. Data collection and recovery should therefore be as fast as possible, so that all the sensory data can be restored by accessing as few surviving nodes as possible. In this paper, a novel redundant data storage algorithm based on a minimum spanning tree and a quasi-randomized matrix, QRNCDS, is proposed. QRNCDS disseminates k source data packets to n sensor nodes in the network (n > k) according to a minimum spanning tree traversal mechanism. Each node stores only one encoded data packet, the XOR of the received source data packets, in accordance with quasi-randomized matrix theory. The algorithm adopts the minimum spanning tree traversal rule to reduce the message complexity of disseminating the source packets. To solve the problem that some source packets cannot be restored when the random matrix is not of full column rank, the semi-randomized network coding method is used in QRNCDS: each source node stores only its own source data packet, and the storage nodes choose whether to receive it. In the decoding phase, Gaussian Elimination and Belief Propagation are combined to improve the probability and efficiency of data decoding. As a result, part of the source data can be recovered even when the semi-random matrix is not of full column rank. The simulation results show that QRNCDS has lower energy consumption, higher data collection efficiency, higher decoding efficiency, smaller data storage redundancy, and larger network fault tolerance.
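
The XOR storage and Gaussian-elimination decoding described in the abstract can be illustrated with a toy GF(2) sketch; the generator matrix and packet values below are invented, and the MST dissemination and Belief Propagation stages are omitted:

```python
def encode(sources, G):
    """Each storage node keeps one packet: the XOR of the source
    packets selected by its row of the binary generator matrix G."""
    stored = []
    for row in G:
        pkt = 0
        for bit, s in zip(row, sources):
            if bit:
                pkt ^= s
        stored.append(pkt)
    return stored

def decode(stored, G):
    """Gaussian elimination over GF(2). Returns a list with None for
    any source packet that cannot be recovered, so part of the data
    is still restored when G is not of full column rank."""
    A = [list(row) for row in G]
    b = list(stored)
    n, k = len(A), len(A[0])
    pivots, row = {}, 0
    for col in range(k):
        piv = next((r for r in range(row, n) if A[r][col]), None)
        if piv is None:
            continue                       # no pivot: column unsolved
        A[row], A[piv] = A[piv], A[row]
        b[row], b[piv] = b[piv], b[row]
        for r in range(n):                 # clear col in all other rows
            if r != row and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[row])]
                b[r] ^= b[row]
        pivots[col] = row
        row += 1
    out = []
    for c in range(k):
        r = pivots.get(c)
        # the pivot row yields source c only if it reduced to a unit row
        out.append(b[r] if r is not None and sum(A[r]) == 1 else None)
    return out
```

With a full-column-rank G every source packet comes back; with a deficient G the decoder returns the recoverable subset and None elsewhere, mirroring the partial-recovery behaviour the abstract claims.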

A Study of Optimal-CSOs by Continuous Rainfall/Runoff Simulation Techniques (연속 강우-유출 모의기법을 이용한 최적 CSOs 산정에 관한 연구)

  • Jo, Deok Jun;Kim, Myoung Su;Lee, Jung Ho;Kim, Joong Hoon
    • Journal of Korean Society on Water Environment / v.22 no.6 / pp.1068-1074 / 2006
  • To protect receiving water quality, a control system for urban drainage that reduces CSOs (combined sewer overflows) is needed. Examples in combined sewer systems include downstream storage facilities that detain runoff during periods of high flow and allow the detained water to be conveyed by an interceptor sewer to a centralized treatment plant during periods of low flow. The design of such facilities as storm-water detention storage is highly dependent on the temporal variability of the available storage capacity as well as the infiltration capacity of the soil and the recovery of depression storage. For continuous long-term analysis of an urban drainage system, this study used an analytical probabilistic model based on derived probability distribution theory. As an alternative to simulation modeling of urban drainage systems for planning- or screening-level analysis of runoff control alternatives, this model offers much more ease and flexibility in computation while still accounting for long-term meteorology. This study presented the rainfall and runoff characteristics of the subject area using the analytical probabilistic model. The runoff characteristics reflected the unique features of the subject area, including the infiltration capacity of the soil and the recovery of depression storage, and were examined through sensitivity analysis. This study presented the average annual CSO volume, the number of CSO events, and the event mean CSO volume for deciding the storage volume.
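
The derived-probability-distribution approach can be illustrated with a common analytical sketch, assuming event rainfall volume is exponentially distributed and runoff begins only after depression storage fills; the symbols and parameter values are illustrative, not taken from the paper:

```python
import math

def runoff_event_stats(zeta, Sd, events_per_year):
    """Derived-distribution sketch: if event rainfall volume v is
    exponential with rate zeta (mean 1/zeta) and depression storage
    Sd must fill before runoff occurs, then runoff r = max(0, v - Sd)
    and, by the memoryless property, r is again exponential with mean
    1/zeta conditional on {v > Sd}.

    Returns:
      p_runoff      -- P(v > Sd) = exp(-zeta * Sd)
      mean_runoff   -- E[max(0, v - Sd)] = exp(-zeta * Sd) / zeta
      annual_events -- expected number of runoff events per year
    """
    p_runoff = math.exp(-zeta * Sd)
    mean_runoff = p_runoff / zeta
    return p_runoff, mean_runoff, p_runoff * events_per_year
```

Multiplying `mean_runoff` by the annual event count gives an average annual overflow volume, which is the kind of closed-form statistic the analytical model delivers without long-term simulation.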

Qualitative Data Analysis using Computers (컴퓨터를 이용한 질적 자료 분석)

  • Yi Myung-Sun
    • Journal of Korean Academy of Fundamentals of Nursing / v.6 no.3 / pp.570-582 / 1999
  • Although computers cannot analyze textual data in the same way as they analyze numerical data, they can nevertheless be of great assistance to qualitative researchers. Thus, the use of computers in analyzing qualitative data has increased since the 1980s. The purpose of this article was to explore the advantages and disadvantages of using computers to analyze textual data and to suggest strategies to prevent the associated problems. In addition, it illustrated the characteristics and functions of software designed to analyze qualitative data, to help researchers choose a program wisely. It also demonstrated the specific functions and procedures of the NUDIST program, which was designed to develop a conceptual framework or grounded theory from unstructured data. The major advantage of using computers in qualitative research is the management of huge amounts of unstructured data. By managing such data, the researcher can keep track of emerging ideas, arguments, and theoretical concepts and can organize these tasks more efficiently than with the traditional 'cut-and-paste' technique. Additional advantages are the abilities to increase the trustworthiness of the research, the transparency of the research process, and the intuitive creativity of the researcher, and to facilitate team and secondary research. On the other hand, the disadvantages of using computers were identified as worries that the machine could conquer human understanding, and the probability of imposing orthodoxy on the analytical process. To overcome these problems, it suggested strategies such as 1) a deep understanding of the philosophical and theoretical background of the qualitative research method, 2) a deep understanding of the data as a whole before using the software, 3) use of the software only after becoming familiar with it, 4) continuous evaluation of the software and feedback on it, and 5) continuous awareness of the limitations of the machine, that is, the computer, in interpretive analysis.


Probabilistic Modeling of Photovoltaic Power Systems with Big Learning Data Sets (대용량 학습 데이터를 갖는 태양광 발전 시스템의 확률론적 모델링)

  • Cho, Hyun Cheol;Jung, Young Jin
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.5 / pp.412-417 / 2013
  • Analytical modeling of photovoltaic power systems has been receiving significant attention in recent years because it is readily applied to prediction of system dynamics and to fault detection and diagnosis in advanced engineering technologies. This paper presents a novel probabilistic modeling approach for such power systems with a big data sequence. First, we express the input/output function of photovoltaic power systems, in which solar irradiation and ambient temperature are regarded as the input variables and electric power as the output variable. Based on this functional relationship, the conditional probability for these three random variables (irradiation, temperature, and electric power) is mathematically defined, and it is estimated from the ratio of the number of sample data to the number of cases associated with the two input variables, which is efficient in particular for a big data sequence from photovoltaic power systems. Lastly, we predict the output values from the probabilistic model of photovoltaic power systems by using expectation theory. Two case studies are carried out to test the reliability of the proposed modeling methodology.

A Study for Investigating Predictors of AIDS and Patients Care Intention Among Nursing Students (간호학생들의 에이즈 환자 간호의도에 영향을 미치는 요인)

  • 이종경
    • Journal of Korean Academy of Nursing / v.31 no.2 / pp.292-303 / 2001
  • The purpose of the study was to find out the level of knowledge, attitude, subjective norm, social interaction, and behavioral intention of nursing students regarding AIDS. It also identified factors that predict the behavioral intention to provide care for patients with AIDS, using the Theory of Reasoned Action. The subjects consisted of 117 nursing students at three universities. Data were collected through a self-report questionnaire with 67 items and analyzed with an SPSS PC+ program. The results were as follows. 1. The mean age of the subjects was 20.98 years. The mean score for HIV/AIDS knowledge was 24.444 out of 32. Korean students were mostly quite knowledgeable about the basic facts and symptoms of AIDS but confused about modes of transmission such as public toilets, about prevention methods, and especially about infection control. 2. This study found that the social interaction, attitudes, and subjective norms of Korean nursing students explained the intention to care for AIDS patients. The students who had a more positive attitude toward caring for AIDS patients and those who perceived more support from their significant others reported a more positive intention to care for AIDS patients. 3. In stepwise multiple regression analysis, 47.58% of the variance in AIDS patient care intention was accounted for by social interaction (33.41%), attitude (9.1%), and subjective norm (5.0%). According to the findings of this study, social interaction is the most significant predictor of intention. Therefore, it can be suggested that an HIV/AIDS prevention program should focus on transmission modes and prevention methods, especially infection control. AIDS education aimed at nursing students should place greater emphasis on correcting these kinds of misconceptions. Nursing interventions for reducing fear of contagion, improving the perception of social interaction, fostering positive attitudes, and increasing the intention to care for AIDS patients should be provided for nursing students. It is also recommended that nursing students be adequately prepared to care for AIDS patients because of the increasing probability that they will encounter them. Therefore, education about HIV/AIDS should be incorporated into the current undergraduate curriculum.


Scalable CC-NUMA System using Repeater Node (리피터 노드를 이용한 Scalable CC-NUMA 시스템)

  • Kyoung, Jin-Mi;Jhang, Seong-Tae
    • Journal of KIISE:Computer Systems and Theory / v.29 no.9 / pp.503-513 / 2002
  • Since a CC-NUMA architecture has to access remote memory, the interconnection network determines the performance of the CC-NUMA system. The bus, which has been a popular interconnection network, has many limits in a large-scale system because of its limited physical scalability and bandwidth. The dual-ring interconnection network, composed of high-speed point-to-point links, resolves these defects of the bus for large-scale systems. However, it also has a problem: the response latency increases rapidly when many nodes are attached to a snooping-based CC-NUMA system with a dual ring. In this paper, we propose a ring architecture with repeater nodes in order to overcome this problem of the dual ring on a snooping-based CC-NUMA system, and we design a repeater node adapted to this architecture. We also analyze the effects of the proposed architecture on system performance and response latency by using a probability-driven simulator.

Credit Card Interest Rate with Imperfect Information (불완전 정보와 신용카드 이자율)

  • Song, Soo-Young
    • The Korean Journal of Financial Management / v.22 no.2 / pp.213-226 / 2005
  • Adverse selection is a heavily scrutinized subject within the financial intermediary industry, and consensus has been reached regarding its effect on the loan interest rate. Despite the similar features of the financial service offered by the credit card, there is still controversy over how adverse selection arises in the credit card market as the interest rate changes. Thus, this paper explores how adverse selection, if any, takes place and affects the credit card interest rate. Information asymmetry regarding the credit card users' type, represented by their default probability, is assumed. Users are assumed to be rational in that they minimize the per-unit-dollar expense of commercial transactions and financing across the two typical payment methods, cash and credit card. Suppliers, i.e., credit card companies, maximize their profit and are better off with more pervasive use of credit cards over cash. We then show that an increasing credit card interest rate is subject to adverse selection, sharing the same tenet as the bank loan interest rate model proposed by Stiglitz and Weiss. Hence the theory predicts that the credit card market also suffers from adverse selection as the interest rate increases.

  • PDF

Cost Distribution Strategies in the Film Industry: the Simplex Method (영화의 유통전략에 대한 연구: 심플렉스 해법을 중심으로)

  • Hwang, Hee-Joong
    • Journal of Distribution Science / v.14 no.10 / pp.147-152 / 2016
  • Purpose - High-quality films are affected both by the production stage and by various variables such as the size of the movie investment and the marketing that changes consumers' perceptions. Consumer preferences should be recognized first to ensure that a movie is successful; if a film is produced without prior investigation and analysis of consumer demand and taste, the probability of success will be low. This study investigates the balance of production costs, marketing costs, and profits using game theory, and suggests an optimization strategy using the simplex method of linear programming. Research design, data, and methodology - Before the release of the movie, initial demand is assumed to be driven largely by marketing costs. In the next phase, demand is assumed to be driven purely by the movie's production cost and quality, which may further determine consumer demand. Thus, it is essential to determine how to distribute pure production costs and other costs (marketing) within a limited movie production budget, and how to distribute them optimally under the assumption that the audience and the production company's input resources are limited. This research simplifies the assumptions for large-scale and relatively small-scale movie investments and examines how the profits of movie distribution participants differ when the costs are invested differently. Results - When first movers or market leaders have to choose between quality and marketing, it is shown that pursuing a strategy of choosing only one is more likely than choosing both. In this situation, market leaders should maximize marketing costs under the premise that their quality will not lag behind the quality of second movers. Additionally, focusing on movie marketing, which produces a quick effect, while ceding creative activity aimed at increasing movie quality is a natural outcome in the movie distribution environment, since a cooperative strategy between market competitors is not feasible. Conclusions - Government film development policy should disregard quality competition between movie production companies and focus on preventing marketing competition. If movie production companies focus on improving production quality, a creative competition will ensue.
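
A minimal tableau implementation of the simplex method the study relies on, applied to a hypothetical quality-vs-marketing budget split; the profit coefficients and budget limits below are invented for illustration, not taken from the paper:

```python
def simplex_max(c, A, b):
    """Tableau simplex for: maximize c.x subject to A.x <= b, x >= 0,
    with all b >= 0 (so the slack-variable basis is feasible)."""
    m, n = len(A), len(c)
    # tableau rows: [A | I | b]; objective row: [-c | 0 | 0]
    T = [list(map(float, A[i]))
         + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(x) for x in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))
    while True:
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                        # no negative reduced cost: optimal
        ratios = [T[i][-1] / T[i][col] if T[i][col] > 1e-9 else float("inf")
                  for i in range(m)]
        row = min(range(m), key=lambda i: ratios[i])
        if ratios[row] == float("inf"):
            raise ValueError("unbounded")
        piv = T[row][col]                # pivot: enter col, leave basis[row]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]
```

For instance, with hypothetical per-unit profits of 3 for quality spend and 2 for marketing spend, a total budget of 100, and caps of 60 on quality and 70 on marketing, the solver spends the quality cap first and fills the remainder with marketing.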

CLUSTERING DNA MICROARRAY DATA BY STOCHASTIC ALGORITHM

  • Shon, Ho-Sun;Kim, Sun-Shin;Wang, Ling;Ryu, Keun-Ho
    • Proceedings of the KSRS Conference / 2007.10a / pp.438-441 / 2007
  • Recently, thanks to molecular biology and engineering technology, DNA microarrays allow researchers to observe thousands of genes and their variation in tissue samples from a living body. With DNA microarrays, it is possible to construct genetic groups that have similar expression patterns and to grasp the progress and variation of genes. This paper performs cluster analysis whose purpose is the discovery of biological subgroups or classes using gene expression information. Hence, the purpose of this paper is to predict a new, unknown class; publicly available leukaemia data are used for the experiment, and the MCL (Markov CLustering) algorithm is applied as the analysis method. The MCL algorithm is based on probability and graph flow theory: it simulates random walks on a graph using Markov matrices to determine the transition probabilities among the nodes of the graph. In detail, the MCL algorithm is applied after obtaining distances using the Euclidean distance; then the inflation and diagonal factors, which are tuning parameters, are tuned; and finally a threshold, the average of each column, is used to distinguish one class from another. Our method improves accuracy through the use of this threshold. Our experimental results show about 70% accuracy on average against the previously known classes. For comparative evaluation, the proposed method was also compared with the SOM (Self-Organizing Map) clustering algorithm, a neural-network method, and with hierarchical clustering; it shows a better result than hierarchical clustering. In further study, it should be examined whether a similar result is obtained when the inflation parameter from our experiment is applied to other gene expression data. We are also trying to devise a systematic method to improve accuracy by regulating the factors mentioned above.
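
The expansion/inflation iteration at the heart of MCL can be sketched on a toy graph as below; this omits the paper's Euclidean-distance preprocessing, diagonal-factor tuning, and column-average threshold, and uses a fixed cutoff instead:

```python
def _matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def _normalize(M):
    """Scale every column to sum to 1 (column-stochastic matrix)."""
    n = len(M)
    sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    return [[M[i][j] / sums[j] for j in range(n)] for i in range(n)]

def mcl(adj, inflation=2.0, iters=30):
    """Minimal MCL: alternate expansion (squaring the matrix, which
    spreads random-walk flow) and inflation (elementwise power plus
    re-normalization, which strengthens intra-cluster flow)."""
    n = len(adj)
    M = [[float(adj[i][j]) + (1.0 if i == j else 0.0)  # add self-loops
          for j in range(n)] for i in range(n)]
    M = _normalize(M)
    for _ in range(iters):
        M = _matmul(M, M)                              # expansion
        M = [[v ** inflation for v in row] for row in M]
        M = _normalize(M)                              # inflation
    # rows with surviving mass are attractors; their support defines clusters
    clusters = set()
    for row in M:
        members = frozenset(j for j, v in enumerate(row) if v > 1e-6)
        if members:
            clusters.add(members)
    return clusters
```

On two triangles joined by a single bridging edge, the bridge flow is weakened by inflation and the iteration settles into one cluster per triangle.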
