• Title/Summary/Keyword: Statistical control techniques


A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE: Computing Practices and Letters / v.9 no.3 / pp.322-331 / 2003
  • As the complexity of a 3D game grows with the various factors of the game scenario, controlling the interrelations among game objects becomes difficult. A game system therefore needs to coordinate the responses of its game objects and to control the animated behaviors of those objects according to the game scenario. To produce realistic game simulations, the system must include a structure for designing the interactions among game objects. This paper presents a method for designing a dynamic control mechanism for the interactions of game objects within a game scenario. For this method, we propose a game agent system as a framework based on intelligent agents that make decisions using specific rules. The game agent system manages environment data, simulates the game objects, controls interactions among them, and supports a visual authoring interface that can define various interrelations of the game objects. These techniques can handle the autonomy level of game objects and the associated collision avoidance, and they enable coherent decision-making by the game objects when the scene changes. In this paper, rule-based behavior control was designed to guide the simulation of the game objects; the rules are pre-defined by the user through the visual interface for designing their interactions. The Agent State Decision Network, composed of visual elements, passes information and infers the current state of the game objects. All of these methods can monitor and check changes in the motion states of game objects in real time. Finally, we validate the control method with a simple case-study example.
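
A minimal sketch of rule-based state control in the spirit of the Agent State Decision Network described above: each agent picks its next state from an ordered list of user-defined rules, with a proximity rule standing in for collision avoidance. All names, the rule format, and the thresholds are hypothetical illustrations, not the authors' actual system.

```python
# Hypothetical sketch: ordered rule-based state decision for game agents.
from dataclasses import dataclass

@dataclass
class GameAgent:
    name: str
    x: float
    y: float
    state: str = "IDLE"

def distance(a: GameAgent, b: GameAgent) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

# Each rule is (predicate over agent and the other agents, next state);
# the first matching rule wins, mimicking user-defined rules firing in order.
RULES = [
    (lambda a, others: any(distance(a, o) < 1.0 for o in others), "AVOID"),
    (lambda a, others: any(distance(a, o) < 5.0 for o in others), "APPROACH"),
    (lambda a, others: True, "IDLE"),  # default rule
]

def decide_state(agent, others):
    for predicate, next_state in RULES:
        if predicate(agent, others):
            return next_state

agents = [GameAgent("npc1", 0, 0), GameAgent("npc2", 0.5, 0), GameAgent("npc3", 9, 9)]
for a in agents:
    a.state = decide_state(a, [o for o in agents if o is not a])
    print(a.name, a.state)  # npc1 AVOID, npc2 AVOID, npc3 IDLE
```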

Analysis of Output Constancy Checks Using Process Control Techniques in Linear Accelerators (선형가속기의 출력 특성에 대한 공정능력과 공정가능성을 이용한 통계적 분석)

  • Oh, Se An;Yea, Ji Woon;Kim, Sang Won;Lee, Rena;Kim, Sung Kyu
    • Progress in Medical Physics / v.25 no.3 / pp.185-192 / 2014
  • The purpose of this study is to evaluate quality assurance results through a statistical analysis of the output characteristics of the linear accelerators at Yeungnam University Medical Center, using the Shewhart-type chart, the exponentially weighted moving average (EWMA) chart, and the process capability indices $C_p$ and $C_{pk}$. To this end, we used the output values measured monthly by medical physicists on each treatment device (21EX, 21EX-S, and Novalis Tx) from September 2012 to April 2014. The output measurements followed the IAEA TRS-398 guidelines and covered photon beams of 6 MV, 10 MV, and 15 MV and electron beams of 4 MeV, 6 MeV, 9 MeV, 12 MeV, 16 MeV, and 20 MeV. Statistical analysis was performed on the measured output characteristics, which were corrected every month. The EWMA chart used a weighting factor of ${\lambda}=0.10$ and a control-limit width of L=2.703, and the process capability indices $C_p$ and $C_{pk}$ were greater than or equal to 1 for all energies of the linear accelerators (21EX, 21EX-S, and Novalis Tx). Drastic and minor changes in the measured output doses were detected by the Shewhart-type chart and the EWMA chart, respectively. The $C_p$ and $C_{pk}$ values of the treatment devices in our institution were, respectively, 2.384 and 2.136 for 21EX, 1.917 and 1.682 for 21EX-S, and 2.895 and 2.473 for Novalis Tx, showing that Novalis Tx has the most stable and accurate output characteristics.
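
A hedged sketch of the statistical tools named in this abstract: an EWMA control chart with ${\lambda}=0.10$ and L=2.703, plus the capability indices $C_p$ and $C_{pk}$. The target, process sigma, tolerance limits (USL/LSL), and the simulated output data are assumptions for the demo, not the paper's measurements.

```python
# EWMA chart and Cp/Cpk on simulated monthly output ratios.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.000, scale=0.005, size=20)  # simulated output ratios

lam, L = 0.10, 2.703
mu0, sigma = 1.000, 0.005          # assumed target and process sigma

z = np.empty_like(x)
z_prev = mu0
for i, xi in enumerate(x):
    z_prev = lam * xi + (1 - lam) * z_prev   # EWMA recursion
    z[i] = z_prev

idx = np.arange(1, len(x) + 1)
half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * idx)))
ucl, lcl = mu0 + half_width, mu0 - half_width
out_of_control = (z > ucl) | (z < lcl)

# Capability against an assumed +/-2% tolerance band.
usl, lsl = 1.02, 0.98
cp = (usl - lsl) / (6 * x.std(ddof=1))
cpk = min(usl - x.mean(), x.mean() - lsl) / (3 * x.std(ddof=1))
print(f"Cp={cp:.3f}, Cpk={cpk:.3f}, EWMA alarms={out_of_control.sum()}")
```

The Shewhart-type chart reacts to drastic single-point shifts, while the EWMA recursion accumulates small deviations, which is why the abstract attributes drastic and minor change detection to the two charts respectively.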

Enterprise Human Resource Management using Hybrid Recognition Technique (하이브리드 인식 기술을 이용한 전사적 인적자원관리)

  • Han, Jung-Soo;Lee, Jeong-Heon;Kim, Gui-Jung
    • Journal of Digital Convergence / v.10 no.10 / pp.333-338 / 2012
  • Human resource management is undergoing various changes driven by IT. Whereas traditional HRM relied on non-scientific methods such as group management, physical plants, fixed working hours, and personal contacts, current enterprise human resource management (e-HRM) differs greatly: management at the individual level, virtual workspaces (e.g., smart work centers and home offices), flexible and elastic working hours, and scientific analysis and management based on computerized statistical data. Accordingly, to build more efficient and strategic human resource management systems, companies have introduced techniques such as RFID cards and fingerprint time-and-attendance systems. In this paper, a time-and-attendance and access control management system was developed for efficient enterprise human resource management, using multi-camera 2D and 3D face recognition technology. Existing 2D face recognition suffers under varying lighting and pose; our system achieved a recognition rate of more than 90% even under such poor conditions. In addition, because 3D face recognition is computationally expensive, we improved recognition speed through a hybrid approach that runs 2D and 3D recognition in parallel.
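
A schematic sketch of one way such a hybrid pipeline can be arranged: a cheap 2D matcher runs first, and the costlier pose/lighting-robust 3D matcher is consulted only when the 2D score is ambiguous. The galleries, cosine matcher, and threshold are placeholders; the paper's actual recognizers are not published as code.

```python
# Hypothetical hybrid 2D/3D identification sketch with toy feature vectors.
import numpy as np

# One feature vector per enrolled employee; learned embeddings in a real system.
gallery_2d = {"emp01": np.ones(128), "emp02": -np.ones(128)}
gallery_3d = {"emp01": np.ones(64),  "emp02": -np.ones(64)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, gallery):
    scores = {pid: cosine(probe, ref) for pid, ref in gallery.items()}
    pid = max(scores, key=scores.get)
    return pid, scores[pid]

def hybrid_identify(probe_2d, probe_3d, threshold=0.9):
    pid, score = best_match(probe_2d, gallery_2d)   # cheap 2D pass first
    if score >= threshold:
        return pid
    # Ambiguous 2D score (e.g., bad lighting or pose): fall back to 3D.
    pid3, _ = best_match(probe_3d, gallery_3d)
    return pid3

print(hybrid_identify(np.ones(128) * 0.9, np.ones(64)))  # -> emp01
```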

Construction of Basin Scale Climate Change Scenarios by the Transfer Function and Stochastic Weather Generation Models (전이함수모형과 일기 발생모형을 이용한 유역규모 기후변화시나리오의 작성)

  • Kim, Byung-Sik;Seoh, Byung-Ha;Kim, Nam-Won
    • Journal of Korea Water Resources Association / v.36 no.3 s.134 / pp.345-363 / 2003
  • General Circulation Models (GCMs) indicate that increasing concentrations of greenhouse gases will have significant implications for climate change at global and regional scales. However, GCMs are uncertain when analyzing meteorological processes at individual sites, so 'downscaling' techniques are used to bridge the spatial and temporal resolution gaps between what climate modellers can currently provide and what impact assessors require. This paper describes a method for assessing local climate change impacts using a robust statistical downscaling technique. The method facilitates the rapid development of multiple, low-cost, single-site scenarios of daily surface weather variables under current and future regional climate forcing. The construction of climate change scenarios based on spatial regression (transfer function) downscaling and on a local stochastic weather generator is described. Regression downscaling translates the coarse-resolution GCM grid-box predictions of climate change into site-specific values, and these values are then used to perturb the parameters of the stochastic weather generator in order to simulate site-specific daily weather values. In this study, the global climate change scenarios are constructed using the YONU GCM control run and transient experiments.
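
A minimal sketch of the two-step chain the abstract describes, under stated assumptions: (1) a transfer function (here plain least squares) maps coarse GCM grid-box values to a site variable, and (2) the predicted site-scale change perturbs the parameters of a simple stochastic weather generator. All data below are synthetic; the paper's actual transfer functions and generator are richer.

```python
# Transfer-function downscaling + weather-generator perturbation (sketch).
import numpy as np

rng = np.random.default_rng(42)

# Step 1: transfer function fitted on the control run (synthetic example).
gcm_ctrl = rng.normal(15.0, 2.0, 360)                # grid-box monthly temperature
site_obs = 0.8 * gcm_ctrl + 3.0 + rng.normal(0, 0.5, 360)
slope, intercept = np.polyfit(gcm_ctrl, site_obs, 1)

# Step 2: translate the transient (future) GCM prediction to the site scale...
gcm_future_mean = 17.5
site_future_mean = slope * gcm_future_mean + intercept
site_ctrl_mean = slope * gcm_ctrl.mean() + intercept
delta = site_future_mean - site_ctrl_mean            # site-specific change signal

# ...and perturb the weather generator: daily values are drawn around the
# shifted mean, giving a single-site daily scenario under future forcing.
daily_future = rng.normal(site_obs.mean() + delta, site_obs.std(ddof=1), 365)
print(f"site change signal: {delta:+.2f} deg C")
```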

Efficient Robust Design Optimization Using Statistical Moment Based on Multiplicative Decomposition Considering Non-normal Noise Factors (비정규 분포의 잡음인자를 고려한 곱분해기법 기반의 통계 모멘트를 이용한 효율적인 강건 최적설계)

  • Cho, Su-Gil;Lee, Min-Uk;Lim, Woo-Chul;Choi, Jong-Su;Kim, Hyung-Woo;Hong, Sup;Lee, Tae-Hee
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.11 / pp.1305-1310 / 2012
  • The performance of a system can be affected by the variance of noise factors, which arises from uncertainties in material properties and in the environmental factors acting on the system. For robust design optimization of system performance, it is necessary to minimize the effect of the variance of the noise factors, which are impossible to control. However, existing robust design techniques treat the variation of design factors, rather than noise factors, as the important quantity. Furthermore, they typically require an assumption of normality, yet a normal distribution is often unsuitable for estimating the real variations. In this study, a robust design technique is proposed that accounts for the variation of noise factors estimated as non-normal distributions from real experiments. As an engineering example, a tracked vehicle for deep-sea manganese nodule mining is used to demonstrate the feasibility of the proposed method.
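
A sketch of a moment-based robust objective of the kind this abstract motivates. The paper computes the moments via a multiplicative decomposition; as a simple stand-in, this demo estimates them by Monte Carlo under a non-normal (lognormal) noise factor. The performance function, noise distribution, and weight k are all hypothetical.

```python
# Robust objective = mean + k * std of performance under non-normal noise.
import numpy as np

rng = np.random.default_rng(7)
noise = rng.lognormal(mean=0.0, sigma=0.4, size=20_000)  # non-normal noise sample

def performance(d, noise):
    """Toy system response: design variable d, uncontrollable noise factor."""
    return (d - 2.0) ** 2 + 0.5 * noise * d

def robust_objective(d, k=3.0):
    y = performance(d, noise)
    return y.mean() + k * y.std(ddof=1)   # penalize both mean and spread

# Coarse 1-D search over candidate designs; a real study would use an optimizer.
designs = np.linspace(0.0, 4.0, 81)
best = min(designs, key=robust_objective)
print(f"robust optimum near d = {best:.2f}")
```

Reusing one fixed noise sample across all candidate designs (common random numbers) keeps the comparison between designs free of Monte Carlo jitter.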

A BPM Activity-Performer Correspondence Analysis Method (BPM 기반의 업무-수행자 대응분석 기법)

  • Ahn, Hyun;Park, Chungun;Kim, Kwanghoon
    • Journal of Internet Computing and Services / v.14 no.4 / pp.63-72 / 2013
  • Business Process Intelligence (BPI) is one of the emerging technologies in the knowledge discovery and analysis area. BPI covers a series of techniques, from discovering knowledge to analyzing the discovered knowledge, in BPM-supported organizations. By means of BPI technology, we can provide full control, monitoring, prediction, and optimization of process-supported organizational knowledge. In particular, we focus on one kind of organizational knowledge, the BPM activity-performer affiliation networking knowledge, which represents the affiliation relationships between performers and activities in enacting a specific business process model. In this paper we devise a statistical analysis method to be applied to this knowledge, dubbed the activity-performer correspondence analysis method. The devised method consists of a series of pipelined phases, from the generation of a bipartite matrix to the visualization of the analysis result, and through it we can analyze the degree of correspondence between a group of performers and a group of activities involved in a business process model or a package of business process models. In conclusion, we expect the activity-performer correspondence analysis method to improve the effectiveness and efficiency of human resource allotment and the degree of correlation between business activities and performers when planning and designing business process models and packages in BPM-supported organizations.
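
A minimal sketch of the pipeline named in the abstract: build an activity-performer bipartite (affiliation) matrix, then run classical correspondence analysis on it via SVD. The 3x3 count matrix is made up for illustration; the formulas are the standard CA construction, not necessarily the paper's exact variant.

```python
# Bipartite matrix -> correspondence analysis via SVD (sketch).
import numpy as np

# Rows = activities, columns = performers; entry = how often the performer
# is assigned to the activity across process enactments (made-up counts).
N = np.array([[4, 1, 0],
              [1, 3, 2],
              [0, 2, 5]], dtype=float)

P = N / N.sum()                       # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)   # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * sv) / np.sqrt(r)[:, None]     # activity coordinates
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]  # performer coordinates
inertia = sv ** 2 / (sv ** 2).sum()             # variance explained per axis

print("axis inertia:", np.round(inertia, 3))
print("activity coords (2D):\n", np.round(row_coords[:, :2], 3))
print("performer coords (2D):\n", np.round(col_coords[:, :2], 3))
```

Activities and performers that land close together in the resulting coordinates correspond strongly, which is the "degree of correspondence" the method visualizes.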

Application of convolutional autoencoder for spatiotemporal bias-correction of radar precipitation (CAE 알고리즘을 이용한 레이더 강우 보정 평가)

  • Jung, Sungho;Oh, Sungryul;Lee, Daeeop;Le, Xuan Hien;Lee, Giha
    • Journal of Korea Water Resources Association / v.54 no.7 / pp.453-462 / 2021
  • As the frequency of localized heavy rainfall has increased in recent years, the importance of high-resolution radar data has also increased. This study aims to correct the spatial and temporal bias that dual-polarization radar rainfall estimates still exhibit. Many studies have attempted various statistical techniques for correcting the bias of radar rainfall. In this study, bias correction of the S-band dual-polarization radar used in the flood forecasting of the Ministry of Environment (ME) was implemented with a convolutional autoencoder (CAE) algorithm, a type of convolutional neural network (CNN). The CAE model was trained on radar data sets with 10-min temporal resolution for the July 2017 flood event in Cheongju. The results showed that the newly developed CAE model improved the simulations in both time and space by reducing the bias of the raw radar rainfall. The CAE model, which learns the spatial relationships between adjacent grid cells, can therefore be used for real-time updates of grid-based climate data generated by radar and satellites.
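
A hedged PyTorch sketch of a convolutional autoencoder of the kind the study trains on radar rainfall grids. The layer sizes, the 64x64 grid, and the synthetic radar/gauge pairs are assumptions for illustration, not the authors' architecture or data.

```python
# Convolutional autoencoder for grid-to-grid bias correction (sketch).
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # 1x64x64 -> 16x16x16
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # mirror of the encoder
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training pairs: raw radar field as input, gauge-adjusted field as target,
# so the network learns a spatially aware bias correction (synthetic data here).
radar = torch.rand(32, 1, 64, 64)
gauge_adjusted = radar * 1.2 + 0.05 * torch.rand(32, 1, 64, 64)

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(radar), gauge_adjusted)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```

Because the convolutions see each grid cell together with its neighbors, the learned correction captures the spatial relationships the abstract credits for the improvement over cell-by-cell statistical adjustments.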

The Efficacy of Endovascular Treatment for Deep Vein Thrombosis (하지 심부정맥 혈전증에서 중재적 치료의 유용성)

  • Kim, Seon-Hee;Chung, Sung-Woon;Kim, Chang-Won
    • Journal of Chest Surgery / v.43 no.3 / pp.266-272 / 2010
  • Background: Deep vein thrombosis (DVT) is a serious disease that can cause life-threatening pulmonary embolism and chronic venous insufficiency. Anticoagulation is the standard therapy for DVT. However, the results of standard anticoagulation for DVT have been disappointing, so endovascular treatment is now commonly performed. The aim of this study was to evaluate the efficacy of endovascular procedures for treating patients with DVT. Material and Method: We retrospectively evaluated the clinical data of 29 DVT patients who underwent an endovascular procedure between December 2006 and July 2008, and compared their results with those of another 45 patients who were treated with aspirin and heparin only. Result: The mean patient age was 55.4 years in the intervention group and 53.7 years in the control group. DVT occurred more frequently in females. Catheter-directed thrombolysis was performed in 22 patients (75.8%), aspiration thrombectomy in 18 patients (62%), and endovascular stenting in 25 patients (86.2%). Fifteen patients (51.7%) underwent percutaneous insertion of a retrievable IVC filter for the prevention of pulmonary embolism. In the control group, thirty-nine patients (86.7%) were treated with low-molecular-weight heparin, and seven patients (15.6%), in whom warfarin was contraindicated, were treated with aspirin. No bleeding complications occurred during thrombolysis or anticoagulation. We analyzed the statistical data on recurrence of DVT and the incidence of post-thrombotic syndrome (PTS) during the follow-up period. The intervention group had a significantly lower incidence of PTS (p=0.008) but the same rate of DVT recurrence as the control group. In addition, no deaths from DVT occurred in the intervention group. Thus, clinical outcomes were better in the intervention group than in the anticoagulation-only group. Conclusion: Endovascular procedures are an effective alternative to systemic anticoagulation for the treatment of DVT, but more studies are needed to determine the specific indications and to validate their long-term efficacy.

Development on Early Warning System about Technology Leakage of Small and Medium Enterprises (중소기업 기술 유출에 대한 조기경보시스템 개발에 대한 연구)

  • Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.143-159 / 2017
  • Owing to the rapid development of IT in recent years, the leakage not only of personal information but also of the key technologies and information that companies hold has become an important issue. The core technology a company possesses is vital both to its survival and to its continued competitive advantage, and there have recently been many cases of technology infringement. Technology leaks not only cause tremendous financial losses, such as falling stock prices, but also damage corporate reputation and delay corporate development. For SMEs, where core technology constitutes a larger share of the enterprise than in large corporations, preparing against technology leakage is an indispensable factor in the firm's survival. As the necessity and importance of Information Security Management (ISM) emerge, enterprises need to check for and prepare against the threat of technology infringement early. Nevertheless, about 90% of previous studies merely propose policy alternatives; by research method, literature analysis accounts for 76% of prior work while empirical and statistical analysis accounts for a relatively low 16%. For this reason, management and prediction models that prevent technology leakage and fit the characteristics of SMEs need to be studied. In this study, drawing on many previous studies of the factors affecting technology leakage, we classified the candidate factors into technology characteristics (from a technology-value perspective) and organizational characteristics (from a technology-control perspective) before the empirical analysis. A total of 12 related variables were selected across the two factors and used in the analysis. We used three years of data from the "Small and Medium Enterprise Technical Statistics Survey" conducted by the Small and Medium Business Administration. The data cover 30 industries based on the 2-digit KSIC classification, and the number of companies affected by technology leakage was 415 over the 3 years. From these data we drew a random sample of unaffected firms matched 1:1 by KSIC industry and year, yielding matched samples of affected companies (n = 415) and unaffected companies (n = 415) for analysis. We conducted an empirical analysis to identify the factors influencing technology leakage and propose an early warning system built with data mining. Specifically, based on the SME survey by the Small and Medium Business Administration, we classified the factors that affect the technology leakage of SMEs into two groups (technology characteristics and organization characteristics), and we propose a model that signals the possibility of technology infringement using the Support Vector Machine (SVM), one of several data mining techniques, trained on the factors proven through statistical analysis. Unlike previous studies, this study covers cases from various industries over several years, and an artificial intelligence model was developed through it. In addition, since the factors are derived empirically from actual SME technology leakage, the results can suggest to policy makers which companies should be managed from the viewpoint of technology protection. Finally, the early warning model on the possibility of technology leakage proposed in this study is expected to give both enterprises and the government an opportunity to prevent technology leakage in advance.
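
A sketch of the SVM-based early warning classifier the abstract describes, using scikit-learn. The 12 features, the synthetic labels, and the 0.7 flagging threshold are placeholders standing in for the survey's technology/organization variables and the study's actual data.

```python
# SVM early-warning classifier on 1:1 matched firm samples (sketch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 830                                   # 415 affected + 415 matched firms
X = rng.normal(size=(n, 12))              # 12 technology/organization factors
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

# "Early warning": flag firms whose predicted leakage probability is high.
proba = model.predict_proba(X_te)[:, 1]
flagged = (proba > 0.7).sum()
print(f"test accuracy: {model.score(X_te, y_te):.2f}, firms flagged: {flagged}")
```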

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.43-57 / 2012
  • To sustain competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information, so providing accurate information plays a prominent role in management decision making. Historical data can provide feasible estimates through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as personnel, parts, and other facilities, and the production department can build a load map for improving product quality. Obtaining an accurate service forecast is therefore critical to manufacturing companies. Numerous investigations of this problem have generally employed statistical methods such as regression or autoregressive moving average (ARMA) models. However, these methods are only efficient for data that are seasonal or cyclical; when the data are influenced by the special characteristics of a product, they are not feasible. In our research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with an unsupervised artificial neural network based clustering analysis (Self-Organizing Maps, SOM). We believe this is one of the first attempts to apply unsupervised artificial neural network based machine learning techniques to the service forecasting domain. Our proposed approach has several appealing features: (1) we applied CBR and SOM in a new forecasting domain, service demand forecasting, and (2) we combined CBR and SOM to overcome the limitations of traditional statistical forecasting methods and developed a service forecasting tool based on the proposed approach. We conducted an empirical study of a real digital TV manufacturer (Company A) and evaluated the proposed approach and tool using the manufacturer's real sales and service data. In our experiments, we compared the performance of our proposed service forecasting framework with two other methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecast 144 times, randomly sampling the input data each time. To evaluate the accuracy of the forecasting results, we used the Mean Absolute Percentage Error (MAPE) as the primary performance measure and conducted a one-way ANOVA test on the 144 MAPE measurements for the three service forecasting approaches. The F-ratio of MAPE across the three approaches was 67.25 with a p-value of 0.000, meaning the difference between the MAPE of the three approaches is significant at the 0.000 level. Because a significant difference existed among the approaches, we conducted Tukey's HSD post hoc test to determine exactly which means of MAPE differed significantly from which others. In terms of MAPE, Tukey's HSD post hoc test grouped the three approaches into three distinct subsets in the following order: our proposed approach > the traditional CBR-based service forecasting approach > the existing forecasting approach used by Company A. Consequently, our experiments show that the proposed approach outperformed both the traditional CBR-based forecasting model and Company A's existing service forecasting model. The rest of this paper is organized as follows. Section 2 provides research background, including summaries of CBR and SOM. Section 3 presents the hybrid service forecasting framework based on case-based reasoning and Self-Organizing Maps, and the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
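
A sketch of the evaluation protocol in this abstract: compute MAPE for each of the 144 runs per approach, then run a one-way ANOVA and Tukey's HSD post hoc test with SciPy. The three MAPE samples below are simulated with different means; they are not the study's data.

```python
# MAPE, one-way ANOVA, and Tukey's HSD on simulated per-run errors.
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

rng = np.random.default_rng(3)
# 144 MAPE measurements per approach (simulated with different means).
proposed  = rng.normal(8.0, 1.5, 144)    # hybrid CBR + SOM
cbr_only  = rng.normal(10.5, 1.5, 144)   # traditional CBR
company_a = rng.normal(13.0, 1.5, 144)   # existing in-house method

f_stat, p_value = f_oneway(proposed, cbr_only, company_a)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD shows exactly which pairs of approaches differ significantly.
print(tukey_hsd(proposed, cbr_only, company_a))
```

ANOVA only says that at least one mean differs; the Tukey step is what licenses the ordered grouping of the three approaches reported above.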