• Title/Summary/Keyword: stochastic problem

Search Result 535, Processing Time 0.027 seconds

Density Estimation Technique for Effective Representation of Light In-scattering (빛의 내부산란의 효과적인 표현을 위한 밀도 추정기법)

  • Min, Seung-Ki;Ihm, In-Sung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.16 no.1
    • /
    • pp.9-20
    • /
    • 2010
  • To visualize participating media in 3D space, the incoming radiance is usually calculated by subdividing the ray path into small subintervals and accumulating their respective light energy due to direct illumination, scattering, absorption, and emission. Among these light phenomena, scattering behaves in a very complicated manner in 3D space, often requiring a great deal of simulation effort. Several approximation techniques have been proposed to simulate the light scattering effect efficiently. Volume photon mapping takes a simple approach in which the light scattering phenomenon is represented in a volume photon map through a stochastic simulation, and the stored information is exploited in the rendering stage. While effective, this method has the problem that the number of necessary photons grows very quickly when higher variance reduction is needed. To resolve this problem, we propose a different approach for rendering particle-based volume data in which kernel smoothing, one of several density estimation methods, is used to represent and reconstruct the light in-scattering effect. The effectiveness of the presented technique is demonstrated with several examples of volume data.
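As a sketch of the underlying idea, the following hypothetical Python fragment estimates in-scattered energy density at a point from stored photons with a Gaussian smoothing kernel; the photon representation and the bandwidth `h` are illustrative assumptions, not the paper's actual data structures:

```python
import math

def kernel_density(query, photons, h):
    """Gaussian-kernel estimate of photon energy density at a 3D point.

    query   : (x, y, z) evaluation point along the viewing ray
    photons : list of ((x, y, z), energy) pairs from a stochastic simulation
    h       : smoothing bandwidth (scene-scale dependent, assumed)
    """
    norm = (2.0 * math.pi * h * h) ** 1.5  # 3D Gaussian normalization
    total = 0.0
    for (px, py, pz), energy in photons:
        d2 = (query[0] - px) ** 2 + (query[1] - py) ** 2 + (query[2] - pz) ** 2
        total += energy * math.exp(-d2 / (2.0 * h * h))
    return total / norm
```

A larger bandwidth trades noise for blur, which is exactly the variance-versus-bias trade-off that makes kernel smoothing attractive compared to simply storing more photons.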

An Analysis of Determinants of Medical Cost Inflation using both Deterministic and Stochastic Models (의료비 상승 요인 분석)

  • Kim, Han-Joong;Chun, Ki-Hong
    • Journal of Preventive Medicine and Public Health
    • /
    • v.22 no.4 s.28
    • /
    • pp.542-554
    • /
    • 1989
  • The skyrocketing inflation of medical costs has become a major health problem in most developed countries. Korea, which recently extended National Health Insurance to the entire population, faces the same problem. The proportion of health expenditure to GNP increased from 3% to 4.8% during the last decade, which is remarkable considering the rapid economic growth over that period. After recognizing the importance of medical cost inflation, a few policy analysts began to raise cost containment as a policy agenda. To prepare appropriate alternatives for this agenda, it is necessary to identify the reasons for cost inflation and then to focus on those which are controllable and whose control is socially desirable. This study is designed to articulate the theory of medical cost inflation through literature review, to find the reasons for cost inflation by analyzing aggregated data with a deterministic model, and finally to identify the determinants of changes in both medical demand and service intensity, the major reasons for cost inflation. The reasons for cost inflation are classified into cost-push inflation and demand-pull inflation. The former consists of increases in the price and intensity of services, while the latter consists of consumer-derived demand and supplier-induced demand. We used time-series (1983-1987) and cross-sectional (over regions) health insurance data. The deterministic model reveals that an increase in service intensity is a major cause of inflation for inpatient care, while increased utilization is the primary attribute for physician visits. Multiple regression analysis shows that an increase in hospital beds is the leading explanatory variable for the increase in hospital care. It also reveals that the introduction of a deductible clause, an increase in hospital beds, and the degree of urbanization are statistically significant variables explaining physician visits.
The results are consistent with existing theory. The magnitude of service intensity is influenced by the level of co-payment, the proportion of the elderly, and increases in co-payment. In short, an increase in co-payment reduced utilization but induced more intensive services. We conclude that strict fee regulation or an increase in the level of co-payment cannot be an effective measure for cost containment under a fee-for-service system, because providers can react against the regulation by inducing more services.


A Stochastic Study for the Emergency Treatment of Carbon Monoxide Poisoning in Korea (일산화탄소중독(一酸化炭素中毒)의 진료대책(診療對策) 수립(樹立)을 위한 추계학적(推計學的) 연구(硏究))

  • Kim, Yong-Ik;Yun, Dork-Ro;Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.16 no.1
    • /
    • pp.135-152
    • /
    • 1983
  • Emergency medical service is an important part of the health care delivery system, and the optimal allocation and efficient utilization of its resources are essential, since these conditions are prerequisites to prompt treatment, which in turn is crucial for saving lives and reducing the undesirable sequelae of the event. This study, taking the hyperbaric chamber for carbon monoxide poisoning as an example, develops a stochastic approach for solving the problem of optimal allocation of such emergency medical facilities in Korea. The hyperbaric chamber in Korea is used almost exclusively for the treatment of acute carbon monoxide poisoning, most of which occurs at home, since coal briquettes are used as domestic fuel by 69.6 per cent of the Korean population. The annual incidence rate of comatose and fatal carbon monoxide poisoning is estimated at 45.5 per 10,000 of the coal briquette-using population. It poses a serious public health problem and accounts for a large portion of emergency outpatients, especially in the winter season. The required number of hyperbaric chambers can be calculated by setting the level of the annual queueing rate, defined here as the proportion of queued patients among the total number of patients in a year. The rate is determined by the size of the coal briquette-using population, which generates a certain number of carbon monoxide poisoning patients according to the annual incidence rate, and by the number of hyperbaric chambers per hospital to which the patients are sent, assuming no referral of patients among hospitals. Queueing occurs due to the conflicting events of the 'arrival' of patients and the 'service' of the hyperbaric chambers. We assume that the service time of a hyperbaric chamber is fixed at sixty minutes and that the service discipline is 'first come, first served'.
The arrival pattern of carbon monoxide poisoning is relatively unique because it usually occurs while people are in bed. Since the diurnal variation of carbon monoxide poisoning can hardly be formulated mathematically, the empirical cumulative distribution of the hourly arrival probability of patients was used in a Monte Carlo simulation to calculate the probability of queueing by the number of patients per day, for the cases of one, two, or three hyperbaric chambers per hospital. The incidence of carbon monoxide poisoning also shows strong seasonal variation because of the four distinct seasons in Korea, so the number of patients per day could not be assumed to follow a Poisson distribution. Testing the fit of various distributions of rare events showed that the daily distribution of carbon monoxide poisoning fits the Polya-Eggenberger distribution well. With this model, we can forecast the number of poisonings per day from the size of the coal briquette-using population. By combining the probability of queueing given the number of patients per day with the probability distribution of the number of patients per day in a year, we can estimate the annual numbers of queued patients and of total patients by the number of hyperbaric chambers per hospital and by the size of the coal briquette-using population. Setting the annual queueing rate at 5 per cent, the required number of hyperbaric chambers was calculated for each province and for the whole country at treatment rates of 25, 50, 75, and 100 per cent, where the treatment rate is the proportion of patients treated by hyperbaric chamber among those who should be treated. The findings of the study were as follows. 1. The probability of the number of patients per day follows the Polya-Eggenberger distribution:
$$P(X=\gamma)=\frac{\prod_{k=1}^{\gamma}\left[m+(k-1)\times 10.86\right]}{\gamma!}\times 11.86^{-\left(\frac{m}{10.86}+\gamma\right)}, \quad \gamma=1,2,\ldots,n$$ $$P(X=0)=11.86^{-(m/10.86)}, \quad \gamma=0$$ The hourly arrival pattern of patients turned out to be bimodal: the large peak was observed at 7:00-8:00 a.m. and the small peak at 11:00-12:00 p.m. 2. With only one or two hyperbaric chambers installed per hospital, the annual queueing rate exceeds 5 per cent. With three chambers, the rate reaches 5 per cent when the average number of patients per day is 0.481. 3. Accordingly, a hospital equipped with three hyperbaric chambers can serve populations of 166,485, 83,242, 55,495, and 41,620 when the treatment rate is 25, 50, 75, and 100 per cent, respectively. 4. The required numbers of hyperbaric chambers are estimated at 483, 963, 1,441, and 1,923 for treatment rates of 25, 50, 75, and 100 per cent; the shortages are therefore 312, 791, 1,270, and 1,752, respectively. The author believes that the methodology developed in this study is also applicable to resource allocation problems for other kinds of emergency medical facilities.
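The queueing side of this calculation can be sketched as a small Monte Carlo simulation. The bimodal arrival mixture below is a hypothetical stand-in for the paper's empirical hourly distribution; the fixed 60-minute service and first-come-first-served discipline follow the abstract:

```python
import random

def daily_queue_probability(n_patients, n_chambers, trials=5000, seed=1):
    """Monte Carlo estimate of the probability that at least one patient
    queues on a day with n_patients arrivals and n_chambers chambers.
    Service time is fixed at 60 minutes, first come first served."""
    rng = random.Random(seed)

    def arrival_minute():
        # Assumed bimodal mixture: 70% morning peak (~7:30 a.m.),
        # 30% late-night peak (~11:30 p.m.); weights are illustrative.
        if rng.random() < 0.7:
            return rng.gauss(7.5 * 60, 60) % 1440
        return rng.gauss(23.5 * 60, 60) % 1440

    queued_days = 0
    for _ in range(trials):
        arrivals = sorted(arrival_minute() for _ in range(n_patients))
        free_at = [0.0] * n_chambers  # minute each chamber becomes free
        day_queued = False
        for t in arrivals:
            free_at.sort()
            if free_at[0] > t:            # every chamber busy: patient queues
                day_queued = True
            free_at[0] = max(free_at[0], t) + 60.0  # fixed 60-min treatment
        if day_queued:
            queued_days += 1
    return queued_days / trials
```

Combining these per-day queueing probabilities with the Polya-Eggenberger probabilities of the daily patient count gives the annual queueing rate described above.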


A User Optimal Traffic Assignment Model Reflecting Route Perceived Cost (경로인지비용을 반영한 사용자최적통행배정모형)

  • Lee, Mi-Yeong;Baek, Nam-Cheol;Mun, Byeong-Seop;Gang, Won-Ui
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.117-130
    • /
    • 2005
  • In both deterministic user optimal traffic assignment models (UOTAM) and stochastic UOTAM, travel time, the major criterion for loading traffic over a transportation network, is defined as the sum of link travel time and turn delay at intersections. In this assignment method, drivers' actual route perception processes and choice behaviors, which can be main explanatory factors, are not sufficiently considered, and the resulting traffic loading may therefore be biased. Although there have been efforts in stochastic UOTAM to reflect drivers' route perception cost by assuming a cumulative distribution function of link travel time, they have not borne fundamental fruit: the Probit model rests on the unreasonable assumption of a truncated travel time distribution function, and the Logit model on the independence of inter-link congestion. The critical reason deterministic UOTAM has not been able to reflect route perception cost is that this cost takes a different value for each origin, each destination, and each path connecting them. Finding the optimal route between an O-D pair therefore encounters a route enumeration problem, in which all routes connecting the pair must be compared; this causes computational failure because the number of paths grows enormously as the transportation network becomes larger. The purpose of this study is to propose a method that enables UOTAM to reflect route perception cost without route enumeration between O-D pairs. For this purpose, this study defines a link as the smallest unit of a path. Since each link can then be treated as a path, in the two-link scanning process of the link-label-based optimum path algorithm, route enumeration between an O-D pair reduces to finding optimum paths over all links. The computational burden of this method is no more than that of the link-label-based optimum path algorithm.
Each different perception cost is embedded as a quantitative value generated by comparing the sub-path from the origin to the scanning link with the scanned link.
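The two-link scanning idea can be illustrated with a minimal link-label shortest path search, where a perceived cost is added when moving from one link onto the next; the link and turn-cost encodings below are assumptions for illustration, not the paper's formulation:

```python
import heapq

def link_label_shortest_path(links, turn_cost, origin, dest):
    """Link-label shortest path: labels live on links, not nodes, so a
    per-link-pair perception cost can be added while scanning adjacent
    link pairs, without enumerating whole O-D routes.

    links     : {link_id: (tail_node, head_node, cost)}
    turn_cost : {(from_link, to_link): extra perceived cost} (assumed input)
    """
    out = {}  # out[node] = link ids leaving node
    for lid, (tail, head, cost) in links.items():
        out.setdefault(tail, []).append(lid)
    pq, best = [], {}
    for lid in out.get(origin, []):       # seed labels on links leaving origin
        best[lid] = links[lid][2]
        heapq.heappush(pq, (best[lid], lid))
    while pq:
        c, lid = heapq.heappop(pq)
        if c > best.get(lid, float("inf")):
            continue                      # stale label
        if links[lid][1] == dest:         # labels pop in cost order
            return c
        for nxt in out.get(links[lid][1], []):
            nc = c + links[nxt][2] + turn_cost.get((lid, nxt), 0.0)
            if nc < best.get(nxt, float("inf")):
                best[nxt] = nc
                heapq.heappush(pq, (nc, nxt))
    return float("inf")
```

Because every label is attached to a link rather than a node, the perception cost of entering each link can differ by approach without any route enumeration.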

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN is strong at interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40 x 40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5 x 5 x 6 and 5 x 5 x 9) in the convolution layer. In the pooling layer, a 2 x 2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
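Steps 1-3 (interval slicing, graph drawing, matrix conversion) can be sketched as follows; the 40 x 40 grid and RGB stacking follow the abstract, while the specific rasterization rule is an illustrative assumption:

```python
import numpy as np

def series_to_channel(window, size=40):
    """Rasterize one 5-day indicator window onto a size x size grid
    (one color channel of the fluctuation graph)."""
    w = np.asarray(window, dtype=float)
    lo, hi = w.min(), w.max()
    if hi == lo:
        rows = np.zeros(len(w), dtype=int)        # flat series: baseline
    else:
        rows = np.round((w - lo) / (hi - lo) * (size - 1)).astype(int)
    cols = np.linspace(0, size - 1, num=len(w)).astype(int)
    img = np.zeros((size, size))
    img[size - 1 - rows, cols] = 1.0              # y axis points up
    return img

def to_rgb_tensor(ch_r, ch_g, ch_b):
    """Stack three indicator graphs as one RGB image tensor (40, 40, 3),
    matching the three-matrix color representation in step 3."""
    return np.stack([ch_r, ch_g, ch_b], axis=-1)
```

Each RGB tensor then becomes one training image for the CNN classifier described in the abstract.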

A Dynamic Shortest Path Finding Model using Hierarchical Road Networks (도로 위계 구조를 고려한 동적 최적경로 탐색 기법개발)

  • Kim, Beom-Il;Lee, Seung-Jae
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.6 s.84
    • /
    • pp.91-102
    • /
    • 2005
  • In the process of storing information, people tend to organize individual items into groups rather than independent attributes and store them together. Likewise, for finding the shortest path, this study suggests that a Hierarchical Road Network (HRN) model be used to search for the most desirable route, since the HRN model takes this process into account. Moreover, most drivers select a route from origin to destination according to road hierarchy, which means drivers perceive a difference between the link travel time measured by driving and the theoretical link travel time. One existing solution to this problem predicts link travel times: the link travel time is predicted from link conditions at each point in time, and the predicted link travel time is used to search for the shortest path. The stochastic process model uses historical patterns of travel time conditions on links. The HRN model compared favorably with the conventional shortest path finding model in terms of computation speed. Moreover, the shortest paths found with the HRN model were more similar to the results of a survey of taxi drivers, who have strong knowledge of conditions on the road network and are more likely to select shortest paths according to practical common sense.
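The use of predicted, time-varying link travel times can be illustrated with a time-dependent Dijkstra sketch, assuming a FIFO network; the travel-time callback below is a hypothetical stand-in for the historical-pattern predictor described above:

```python
import heapq

def dynamic_shortest_path(graph, travel_time, origin, dest, depart):
    """Time-dependent Dijkstra: the cost of a link depends on the clock
    time at which it is entered.

    graph          : {node: [successor nodes]}
    travel_time    : callable (u, v, t) -> predicted traversal time of
                     link (u, v) when entered at time t (assumed FIFO)
    Returns total travel time from origin to dest, or inf if unreachable.
    """
    pq = [(depart, origin)]          # (arrival clock time, node)
    best = {origin: depart}
    while pq:
        t, u = heapq.heappop(pq)
        if u == dest:
            return t - depart        # first pop of dest is optimal
        if t > best.get(u, float("inf")):
            continue                 # stale entry
        for v in graph.get(u, []):
            arr = t + travel_time(u, v, t)
            if arr < best.get(v, float("inf")):
                best[v] = arr
                heapq.heappush(pq, (arr, v))
    return float("inf")
```

With a congestion-dependent predictor, the best route can differ by departure time, which is the dynamic behavior the HRN search exploits.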

Design of Partial Discharge Pattern Classifier of Softmax Neural Networks Based on K-means Clustering : Comparative Studies and Analysis of Classifier Architecture (K-means 클러스터링 기반 소프트맥스 신경회로망 부분방전 패턴분류의 설계 : 분류기 구조의 비교연구 및 해석)

  • Jeong, Byeong-Jin;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.1
    • /
    • pp.114-123
    • /
    • 2018
  • This paper concerns the design and learning method of softmax function neural networks based on K-means clustering. The partial discharge data are preliminarily processed through simulation using an Epoxy Mica Coupling sensor and an internal Phase Resolved Partial Discharge Analysis algorithm. The obtained information is processed according to the characteristics of the pattern using a Motor Insulation Monitoring System (MIMS) program. The processed data cover four types of discharge: void, corona, surface, and slot. The partial discharge data, with their high-dimensional input variables, are then reduced by principal component analysis to low-dimensional input variables that preserve the characteristics of the patterns, which improves the processing speed of the pattern classifier. In addition, in extracting the partial discharge data through the MIMS program, the amplitude magnitude is divided into maximum and average values, and the two resulting pattern characteristics are compared and analyzed. In the first half of the proposed partial discharge pattern classifier, the input and hidden layers are formed using the K-means clustering method and the output of the hidden layer is obtained. In the latter part, the cross-entropy error function is used for parameter learning between the hidden layer and the output layer. The final output layer produces normalized probability values between 0 and 1 using the softmax function. The advantages of the softmax function are that it allows multi-class problems to be handled with a stochastic interpretation, and that each output value affects the remaining output values, which accelerates learning. Also, to solve the overfitting problem, L2 regularization is applied.
To prove the superiority of the proposed pattern classifier, we compare and analyze its classification rate with that of conventional radial basis function neural networks.
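The two-stage structure above (K-means for the hidden layer; cross-entropy learning with softmax outputs and L2 regularization for the output layer) can be sketched as follows; the Gaussian hidden activation and all hyperparameters are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain K-means for the first half (input -> hidden layer centers)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def hidden(X, centers, gamma=1.0):
    """Gaussian activations of the hidden layer (assumed form)."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def softmax(Z):
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, k=4, n_classes=2, lr=0.5, epochs=200, l2=1e-3):
    """Second half: cross-entropy learning of the hidden -> output
    weights, with L2 regularization against overfitting."""
    centers = kmeans(X, k)
    H = hidden(X, centers)
    W = np.zeros((k, n_classes))
    Y = np.eye(n_classes)[y]               # one-hot targets
    for _ in range(epochs):
        P = softmax(H @ W)
        W -= lr * (H.T @ (P - Y) / len(X) + l2 * W)
    return centers, W
```

Because the softmax couples all outputs through its normalization, the gradient of the cross-entropy error for one class pushes on all output weights at once, which is the learning-acceleration property mentioned above.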

Energy Efficiency Enhancement of Macro-Femto Cell Tier (매크로-펨토셀의 에너지 효율 향상)

  • Kim, Jeong-Su;Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.47-58
    • /
    • 2018
  • The heterogeneous cellular network (HCN) is most significant as a key technology for future fifth generation (5G) wireless networks. The heterogeneous network considered consists of randomly deployed macrocell base stations (MBSs) overlaid with femtocell base stations (FBSs). Stochastic geometry has been shown to be a very powerful tool to model, analyze, and design networks with random topologies such as wireless ad hoc networks, sensor networks, and multi-tier cellular networks. HCNs can be designed energy-efficiently by deploying various BSs belonging to different networks, which has drawn significant attention as one of the technologies for future 5G wireless networks. In this paper, we propose switching off/on systems that enable the BSs in cellular networks to consume power efficiently by introducing active/sleep modes, which reduces the interference and power consumption of the MBSs and FBSs on an individual basis as well as improving the energy efficiency of the cellular networks. We formulate the minimization of the power consumption for the MBSs and FBSs as well as an optimization problem to maximize the energy efficiency subject to throughput outage constraints, which can be solved using the Karush-Kuhn-Tucker (KKT) conditions according to the femto-tier BS density. We also formulate and compare the coverage probability and the energy efficiency in HCN scenarios with and without coordinated multi-point (CoMP) to avoid coverage holes.
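The stochastic-geometry modeling style referred to above can be illustrated with a minimal Monte Carlo estimate of coverage probability for a single-tier Poisson network with Rayleigh fading, interference-limited operation, and nearest-BS association; all parameter values are illustrative, and the paper's two-tier CoMP analysis is more involved:

```python
import numpy as np

def coverage_probability(lam, threshold_db, alpha=4.0, radius=10.0,
                         trials=2000, seed=0):
    """Monte Carlo coverage probability for a typical user at the origin,
    served by the nearest base station of a Poisson point process of
    density lam (BSs per unit area) on a disk of the given radius, with
    Rayleigh fading, path-loss exponent alpha, and noise neglected."""
    rng = np.random.default_rng(seed)
    T = 10 ** (threshold_db / 10)          # SIR threshold, linear scale
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius ** 2)
        if n == 0:
            continue                       # no BS in range: not covered
        r = radius * np.sqrt(rng.random(n))   # uniform radii in the disk
        h = rng.exponential(1.0, n)           # Rayleigh fading (power)
        p = h * r ** (-alpha)                 # received powers
        i = int(np.argmin(r))                 # serving BS = nearest one
        interference = p.sum() - p[i]
        if interference == 0 or p[i] / interference > T:
            covered += 1
    return covered / trials
```

Extending this sketch to two BS tiers with independent densities and active/sleep states is the kind of analysis the abstract's KKT-based optimization builds on.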

Accuracy Enhancement using Network Based GPS Carrier Phase Differential Positioning (네트워크 기반의 GPS 반송파 상대측위 정확도 향상)

  • Lee, Yong-Wook;Bae, Kyoung-Ho
    • Spatial Information Research
    • /
    • v.15 no.2
    • /
    • pp.111-121
    • /
    • 2007
  • GPS positioning offers 3D positions using code and carrier phase measurements, but precise positioning requires the carrier phase in Real Time Kinematic (RTK) mode. The main problem RTK has to overcome is the need for a reference station (RS) generally no more than about 10 km away on average, which is significantly different from DGPS, where distances to the RS can exceed several hundred kilometers. The accuracy of today's RTK is limited by distance-dependent errors from the orbit, ionosphere, and troposphere as well as station-dependent influences such as multipath and antenna phase center variations. For these reasons, the author proposes network-based GPS carrier phase differential positioning using multiple RSs located about 30 km from the user receiver. An important part of the proposed system is the algorithm and software development, named DAUNet. The main processes are corrections computation, corrections interpolation, and searching for the integer ambiguity. Corrections are computed satellite by satellite and epoch by epoch at each reference station using a functional model and a stochastic model based on a linear combination algorithm, and corrections at the user receiver are interpolated using area correction parameters. As a result, users can obtain cm-level positioning.
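The area-correction-parameter interpolation step can be sketched as a least-squares plane fit over the reference stations; the planar model and local east/north coordinates are illustrative assumptions, not DAUNet's actual formulation:

```python
import numpy as np

def area_correction_parameters(stations, corrections):
    """Fit a correction plane c(E, N) = a0 + a1*E + a2*N by least squares
    over the reference stations (one satellite, one epoch).

    stations    : (n, 2) array of station east/north coordinates [km]
    corrections : (n,) array of the correction observed at each station
    """
    A = np.column_stack([np.ones(len(stations)), stations[:, 0], stations[:, 1]])
    coeff, *_ = np.linalg.lstsq(A, corrections, rcond=None)
    return coeff

def interpolate_correction(coeff, east, north):
    """Evaluate the fitted correction plane at the user's position."""
    return coeff[0] + coeff[1] * east + coeff[2] * north
```

With three reference stations the plane is determined exactly; with more stations the least-squares fit smooths station-dependent noise before the correction is applied at the rover.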


A design of fuzzy pattern matching classifier using genetic algorithms and its applications (유전 알고리즘을 이용한 퍼지 패턴 매칭 분류기의 설계와 응용)

  • Jung, Soon-Won;Park, Gwi-Tae
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.87-95
    • /
    • 1996
  • A new design scheme for the fuzzy pattern matching classifier (FPMC) is proposed. In the conventional design of an FPMC, there is no exact information about the membership functions, whose shape and number critically affect the performance of the classifier. So far, trial-and-error or heuristic methods have been used to find membership functions for the input patterns, but each of them has limits in its application to the various types of pattern recognition problems. In this paper, a new method is proposed, using genetic algorithms (GAs), to find the appropriate shape and number of membership functions for the input patterns that minimize the classification error. Genetic algorithms belong to a class of stochastic algorithms based on biological models of evolution; they have been applied to many function optimization problems and shown to find optimal or near-optimal solutions. Here, GAs search for the appropriate shape and number of membership functions based on a fitness function that is inversely proportional to the classification error. The strings in the GA determine the membership functions, and the recognition results obtained with these membership functions affect the reproduction of the next generation. The proposed design scheme is applied to several patterns, such as tire tread patterns and handwritten alphabetic characters. Experimental results show the usefulness of the proposed scheme.
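A minimal GA of this kind is sketched below for a one-dimensional, two-class toy problem: each individual encodes one Gaussian membership function per class, and fitness is inversely proportional to the classification error; the encoding, operators, and rates are illustrative assumptions:

```python
import math
import random

def classify(x, params):
    """Fuzzy pattern matching: one Gaussian membership function per
    class, parameterized by (center, width); the class with the
    largest membership degree wins."""
    best_cls, best_mu = 0, -1.0
    for cls, (c, w) in enumerate(params):
        mu = math.exp(-((x - c) / w) ** 2)
        if mu > best_mu:
            best_cls, best_mu = cls, mu
    return best_cls

def fitness(params, data):
    """Fitness inversely proportional to the classification error."""
    errors = sum(classify(x, params) != y for x, y in data)
    return 1.0 / (1.0 + errors)

def evolve(data, pop_size=30, gens=40, seed=3):
    """A small GA: truncation selection, uniform crossover over the two
    (center, width) genes, and Gaussian mutation."""
    rng = random.Random(seed)
    def random_individual():
        return [(rng.uniform(0.0, 10.0), rng.uniform(0.1, 5.0)) for _ in range(2)]
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, data), reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(2)]
            if rng.random() < 0.3:          # mutate one gene
                i = rng.randrange(2)
                c, w = child[i]
                child[i] = (c + rng.gauss(0, 0.5), max(0.1, w + rng.gauss(0, 0.2)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, data))
```

The recognition results of each generation's strings drive selection, so the population drifts toward membership functions that minimize the classification error, as described in the abstract.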
