• Title/Summary/Keyword: statistical distribution module

Search Results: 7

Statistical Analysis on the Web Using PHP3 (PHP3를 이용한 웹상에서의 통계분석)

  • Hwang, Jin-Soo;Uhm, Dae-Ho
    • Journal of the Korean Data and Information Science Society / v.10 no.2 / pp.501-510 / 1999
  • We have seen rapid development of the multimedia industry as computers evolve, and the internet has changed our way of life dramatically. There have been several attempts to teach elementary statistics on the web, but most of them are based on commercial products. The need for statistical data analysis, and for decision making based on such analysis, is growing. In this article we show one way of reaching that goal on the web by using the server-side scripting language PHP3 together with an extra graphical module and a statistical distribution module. We demonstrate some elementary exploratory graphical data analysis and statistical inferences. There is plenty of room for improvement to make it a full-blown statistical analysis tool on the web in the near future. All the programs and databases used in our article are public programs. The main engine, PHP3, is included as an Apache web server module, so it is very light and fast. Processing speed should improve further when PHP4 (Zend) is officially released.
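The "elementary statistical inferences" the abstract mentions boil down to computations like the one below: a sample mean with a large-sample 95% confidence interval, as a web backend might serve it. This is a minimal sketch for illustration only; the original work used PHP3, and the sample values here are invented.

```python
# Minimal sketch of a server-side inference routine: mean with a
# normal-approximation 95% confidence interval. Sample data is invented.
import math
import statistics

def mean_ci(sample, z=1.96):
    """Return (mean, lower, upper) using a normal approximation."""
    n = len(sample)
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return m, m - z * se, m + z * se

m, lo, hi = mean_ci([4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3])
```

For small samples a t-quantile would replace the fixed `z=1.96`; the normal approximation keeps the sketch dependency-free.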

The Development of Fixing Equipment of the Unit Module Using the Probability Distribution of Transporting Load (운반하중의 확률분포를 활용한 유닛모듈 운반용 고정장치 개발)

  • Park, Nam-Cheon;Kim, Seok;Kim, Kyoon-Tai
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.6 / pp.4267-4275 / 2015
  • Prefabricated houses are fabricated at the factory for approximately 60 to 80% of the entire construction process and assembled in the field. During transport and lifting, the internal and external finishes of the unit module are at risk of damage. The purpose of this study is to improve the fixing equipment by analyzing load behavior; the improved fixing equipment would minimize the deformation of internal and external finishes. To develop the improved fixing equipment, the transporting load on the fixing equipment is analyzed using Monte Carlo simulation, and structural performance is verified by nonlinear finite element analysis. Statistical analysis shows that the load distribution of the unit module is similar to an extreme value distribution. Based on the statistical analysis and Monte Carlo simulation, the maximum transporting load is 28.9 kN and the 95% confidence interval of the transporting load is -1.22 kN to 9.5 kN. The nonlinear structural analysis shows that the improved fixing equipment does not fail at the limit load of 35.3 kN and withstands the loads in the 95% confidence interval of the transporting load.
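The Monte Carlo step described above can be sketched as follows: draw transporting loads from an extreme-value (Gumbel) distribution via inverse-CDF sampling and read off an empirical 95% interval and maximum. The location and scale parameters are illustrative assumptions, not the study's fitted values.

```python
# Hedged sketch of Monte Carlo simulation of a transporting load assumed
# to follow a Gumbel (extreme value) distribution. Parameters are invented.
import math
import random

def gumbel_sample(mu, beta, rng):
    # Inverse-CDF sampling: x = mu - beta * ln(-ln(U)), U ~ Uniform(0, 1)
    u = rng.random()
    return mu - beta * math.log(-math.log(u))

rng = random.Random(42)
loads = sorted(gumbel_sample(3.0, 1.5, rng) for _ in range(10_000))
lower = loads[int(0.025 * len(loads))]   # empirical 2.5th percentile
upper = loads[int(0.975 * len(loads))]   # empirical 97.5th percentile
max_load = loads[-1]                     # simulated maximum load
```

The study would instead fit the distribution parameters to measured load data before simulating.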

A Study on the Fabrication of the Sensor Module for the Detection of Resistive Leakage Current (Igr) in Real Time and Its Reliability Evaluation (실시간 Igr 검출을 위한 센서 모듈의 제작 및 신뢰성 평가에 관한 연구)

  • Lee, Byung-Seol;Choi, Chung-Seog
    • Journal of the Korean Society of Safety / v.33 no.1 / pp.28-34 / 2018
  • The purpose of this study is to fabricate a sensor module that detects the resistive leakage current (Igr) occurring in low-voltage electric lines in real time, and to verify its reliability. In the developed sensor module, wires are inserted into the zero current transformer (ZCT) and current transformer (CT) in advance and then the branch line is connected to the circuit breaker. Measurement of the resistance of the distribution panel equipped with the developed sensor module shows 0.151 mΩ between the R and R phases, 0.169 mΩ between the S and S phases, and 0.178 mΩ between the T and T phases. The insulation resistance measured at AC 500 V and 1,000 V is 0.08 mΩ between the R, S, T and N phases, and the insulation resistance measured at DC 500 V is 83.3 GΩ between the R, S, T phases and the G terminal. In addition, the applied withstanding voltage is AC 220 V/380 V/440 V, and the characteristics between all phases were found to be good. This study measured the standby power by installing the developed sensor module at the rear of the MCCB and switching the circuit breakers on sequentially: the standby power is 1.350 W with one circuit breaker on, 1.690 W with 2 on, and 4.371 W with 10 on. This study also verified the reliability of the standby power of the distribution panel equipped with the developed sensor module using the Minitab program. Since the analysis shows a statistical mean of 1.34627 within the reliable range of the normal distribution, a standard deviation of 0.001874, an AD statistic of 0.554, and a P value of 0.140, the distribution panel equipped with the developed sensor module is found to have high reliability.
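The Minitab figures quoted above (mean, standard deviation, AD statistic) come from an Anderson-Darling normality test. A stdlib sketch of that computation is below; the standby-power readings are invented for illustration, and only the statistic's standard formula is taken as given.

```python
# Sketch of the Anderson-Darling normality statistic A^2 with estimated
# mean and standard deviation. The sample values below are invented.
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def anderson_darling(sample):
    xs = sorted(sample)
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    s = 0.0
    for i, x in enumerate(xs, start=1):
        # A^2 = -n - (1/n) * sum (2i-1)[ln F(x_i) + ln(1 - F(x_{n+1-i}))]
        s += (2 * i - 1) * (math.log(norm_cdf(x, mu, sigma))
                            + math.log(1.0 - norm_cdf(xs[n - i], mu, sigma)))
    return -n - s / n

a2 = anderson_darling([1.344, 1.347, 1.345, 1.348, 1.346, 1.349, 1.344, 1.347])
```

A small A² (with a P value above 0.05, as in the abstract's 0.140) is consistent with normally distributed measurements.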

Development of Web-based Off-site Consequence Analysis Program and its Application for ILRT Extension (격납건물종합누설률시험 주기연장을 위한 웹기반 소외결말분석 프로그램 개발 및 적용)

  • Na, Jang-Hwan;Hwang, Seok-Won;Oh, Ji-Yong
    • Journal of the Korean Society of Safety / v.27 no.5 / pp.219-223 / 2012
  • For off-site consequence analysis at nuclear power plants, the MELCOR Accident Consequence Code System (MACCS) II code is widely used as a software tool. In this study, the algorithm of a web-based off-site consequence analysis program (OSCAP) using the MACCS II code was developed for Integrated Leak Rate Test (ILRT) interval extension and Level 3 probabilistic safety assessment (PSA), and verification and validation (V&V) of the program was performed. The main input data for the MACCS II code are meteorological, population distribution and source term information. However, generating these input data for an off-site consequence analysis with the MACCS II code requires a lot of time and effort. For example, meteorological data are collected from each nuclear power site in real time, but the formats of the collected raw data differ from site to site. To reduce the effort and time needed for risk assessments, the web-based OSCAP has an automatic processing module which converts the raw data collected from each site into the input data format of the MACCS II code. The program also automatically converts the latest population data from Statistics Korea, the national statistical office, into the population distribution input format of the MACCS II code. For the source term data, the program includes the release fraction of each source term category resulting from Modular Accident Analysis Program (MAAP) code analysis and the core inventory data from ORIGEN. These analysis results for each plant in Korea are stored in a database module of the web-based OSCAP, so the user can select the default source term data of each plant without handling source term input data.
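The format-conversion module described above amounts to mapping each site's column layout onto one common schema. The sketch below shows that idea only; both the raw layout and the common field names are invented placeholders, since the abstract does not specify the actual MACCS II input format.

```python
# Hedged sketch of per-site raw-data normalization. Column names and the
# target schema are hypothetical; the real MACCS II layout is not shown.
import csv
import io

def convert_site_records(raw_csv, column_map):
    """Map site-specific column names onto common field names."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [{common: row[site] for common, site in column_map.items()}
            for row in reader]

raw = "WD,WS,STAB\n270,3.4,D\n180,1.2,F\n"
records = convert_site_records(
    raw, {"wind_dir": "WD", "wind_speed": "WS", "stability": "STAB"})
```

Each site would supply its own `column_map`, so adding a new site means adding a mapping rather than new parsing code.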

Stochastic finite element based reliability analysis of steel fiber reinforced concrete (SFRC) corbels

  • Gulsan, Mehmet Eren;Cevik, Abdulkadir;Kurtoglu, Ahmet Emin
    • Computers and Concrete / v.15 no.2 / pp.279-304 / 2015
  • In this study, reliability analyses of steel fiber reinforced concrete (SFRC) corbels based on stochastic finite elements were performed for the first time in the literature. Prior to the stochastic finite element analysis, an experimental database of 84 SFRC corbels was gathered from the literature. These SFRC corbels were modeled with a special finite element program. The results of the experimental studies and the finite element analysis were compared and found to be very close to each other. Furthermore, the experimental crack patterns of the corbels were compared with the finite element crack patterns and observed to be quite similar. After verification of the finite element models, stochastic finite element analyses were carried out with a specialized finite element module. As a result, appropriate probability distribution functions (PDFs) were proposed. Finally, the coefficient of variation, bias and strength reduction (resistance) factors were proposed for SFRC corbels as a consequence of the stochastic reliability analysis.
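Two of the quantities the abstract reports, bias and coefficient of variation, reduce to simple statistics of the test-to-prediction strength ratio. A minimal sketch, with invented ratios rather than the corbel database:

```python
# Sketch of bias (mean test/prediction ratio) and coefficient of
# variation (sd/mean) from a set of strength ratios. Data is invented.
import math

def bias_and_cov(ratios):
    n = len(ratios)
    mean = sum(ratios) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (n - 1))
    return mean, sd / mean   # bias, coefficient of variation

bias, cov = bias_and_cov([1.05, 0.98, 1.10, 0.94, 1.02, 1.07, 0.99, 1.01])
```

In a reliability calibration, these two numbers feed directly into the derivation of the strength reduction (resistance) factor.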

Evaluation of the Standardized Curriculum Module and Integrated Program for Social-Environmental Education (사회환경 교육과정의 표준화 모형 및 통합 프로그램의 평가)

  • Lee, Sook-Im;Kang, Myoung-Hee;Nam, Sang-Jun;Park, Suk-Soon;Sung, Hyo-Hyun;Choi, Don-Hyung;Hur, Myung
    • Hwankyungkyoyuk / v.14 no.2 / pp.76-94 / 2001
  • Promoting positive values, attitudes, participation and personal action on the basis of acquired knowledge and skills is emphasized in environmental education. To fulfill this purpose, it is necessary not only to use various practical educational resources, but also to develop information systems, with multimedia and the internet, which are effective for learning. This research attempts to assess the consistency of the planning, organization and operation of the integrated program for social environmental education which was developed for the necessities mentioned above. We surveyed the accuracy of contents, usefulness, convenience and easiness of the integrated program for social environmental education. We also used a questionnaire to clarify the values and attitudes of respondents after they received environmental education. Descriptive statistical methods were then used to analyze the results of these surveys. Finally, we conducted a chi-square test to verify the relationship between a learner's experience of using computers and his or her concern about environmental issues. The results of this program, developed by the research team, can be assessed by the following five basic elements: usefulness, practicality, appropriateness, efficiency and effectiveness. More than 90% of respondents said that this program is convenient and easy to learn. Also, more than 85% of all respondents stated that after learning with this program they recognized more clearly what the main contents of environmental education are. In addition, we got a positive response from 93% of respondents that they could understand environmental problems.
On the other hand, the values and attitudes of respondents did not improve much after the environmental education compared with the remarkable change in their recognition and understanding of environmental issues; only 34% of respondents said that they changed their lifestyle to improve the environment after learning with this program. But it is clear that they understood the environmental policy much better after being educated. Developed using an information system, this integrated program for social environmental education may get different results according to a respondent's experience with computers. Accordingly, the more opportunity a respondent had had to use a computer over a long period of time, the more positively he or she evaluated the convenience and easiness of this program. However, there was no clear relationship between the frequency of computer use and one's understanding of environmental issues. Furthermore, people with constant concern about environmental problems showed more positive attitudes toward environmental education. This integrated program for social environmental education, characterized by an integrated, specialized and efficient educational system, can also be used as a curriculum or teaching material for adult environmental education; in particular, it would be appropriate for teaching learners at all levels, with different personal characteristics, through virtual education based on an information system.
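The abstract's final check is a chi-square test of independence between computer experience and environmental concern. A stdlib sketch of the statistic over an invented 2x2 contingency table (the counts are illustrative only, not the survey's):

```python
# Chi-square test of independence on a contingency table of invented
# counts: rows = computer experience (low/high), cols = concern (low/high).
def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

stat = chi_square([[30, 20], [15, 35]])
# df = (2-1)*(2-1) = 1; compare against the 5% critical value 3.841
```

A statistic above the critical value would reject independence; the study reports no clear relationship for frequency of use, i.e. the opposite outcome for that pairing.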

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.109-122 / 2016
  • Recently, big data analysis has developed into a field of interest for individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving by analyzing openly available or directly collected data. In Korea, various companies and individuals are attempting big data analysis, but it is difficult even at the initial stage because of limited big data disclosure and collection difficulties. Nowadays, system improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly services for opening public data such as the Korean Government 3.0 portal (data.go.kr). In addition to the government's efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the small amount of shared data. Moreover, big traffic problems can occur because the entire dataset must be downloaded and examined to grasp the attributes of and basic information about the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper to solve the problem of sharing big data; it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by providing information that conveys the properties and characteristics of big data when a data user searches for it. In addition, by sharing the summary data or sample data generated through the pre-analysis, the security problem that may occur when the original data are disclosed can be solved, enabling big data sharing between the data provider and the data user.
Second, it is necessary to quickly generate appropriate preprocessing results according to the level of disclosure or the network status of the raw data, and to provide the results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing the data requested by the user, it preprocesses the data to a size transferable on the current network before transmitting it, so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis. This method is expected to produce low traffic volume compared with the conventional method of sharing only raw data across many systems. In this paper, we describe how to solve the problems that occur when big data are released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, with a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent is the agent needed by the data provider: it performs pre-analysis of big data to generate a Data Descriptor with information on the Sample Data, Summary Data, and Raw Data; it performs fast and efficient big data preprocessing through distributed big data processing; and it continuously monitors network traffic. The Client Agent is the agent placed on the data user's side. Through the Data Descriptor, which is the result of the pre-analysis, the user can search the big data quickly and request the desired data from the server to download it according to the level of disclosure. The Server Agent and the Client Agent are separated so that data published by the provider can be used by the user.
In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design method of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data. By analyzing the newly processed data through the Server Agent, the data user changes roles and becomes a data provider. The data provider can also obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis on the sample data. In this way, raw data are processed and the processed big data are utilized by users, forming a natural sharing environment. The roles of data provider and data user are not distinguished, providing an ideal sharing service in which everyone can be both a provider and a user. The client-server model solves the problem of sharing big data, offers a free sharing environment for secure big data disclosure, and provides an ideal shared service for easily finding big data.
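The Data Descriptor produced by the pre-analysis step can be sketched as a small structure holding sample rows and summary statistics published in place of the raw data. The field names below are assumptions for illustration; the paper's actual descriptor schema and its Spark-based implementation are not reproduced here.

```python
# Hedged sketch of pre-analysis: publish a descriptor (sample + summary)
# instead of the raw dataset. Field names are hypothetical.
import statistics

def build_descriptor(rows, sample_size=3):
    values = [r["value"] for r in rows]
    return {
        "row_count": len(rows),
        "sample_data": rows[:sample_size],          # small preview only
        "summary_data": {
            "min": min(values),
            "max": max(values),
            "mean": statistics.fmean(values),
        },
    }

rows = [{"id": i, "value": v} for i, v in enumerate([3, 7, 5, 9, 1])]
desc = build_descriptor(rows)
```

A data user can inspect `desc` to judge whether the dataset is worth requesting, which is how the descriptor avoids full-dataset downloads and the associated big traffic.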