• Title/Summary/Keyword: Multiple stage design


Multivariate design estimations under copulas constructions. Stage-1: Parametrical density constructions for defining flood marginals for the Kelantan River basin, Malaysia

  • Latif, Shahid;Mustafa, Firuza
    • Ocean Systems Engineering
    • /
    • v.9 no.3
    • /
    • pp.287-328
    • /
    • 2019
  • Comprehensive flood risk assessment via frequency analysis often demands multivariate designs under different notions of return period. A flood is a tri-variate random event whose mutually correlated vectors, i.e., flood peak, volume and duration, make the univariate return period unreliable and call for a joint dependency construction. Selecting the most parsimonious probability functions to describe the univariate flood marginals is a mandatory pre-processing step before the joint dependency is established, especially under the copula methodology, which allows the practitioner to model the univariate marginals separately from their joint structure. Parametric density approximation hypothesizes that the random samples follow some specific, predefined probability density function, and different choices usually yield different estimates, especially in the tails of the distribution. The upper tail is of particular interest in flood modelling, and since no evidence favours any fixed distribution, candidates are usually screened by trial and error using goodness-of-fit measures. Model performance evaluation and selection of the best-fitted distribution also demand careful investigation, comparing the relative sample-reproducing capability of each candidate; otherwise, inconsistencies may introduce uncertainty. In addition, different fitness statistics have different strengths and weaknesses and differ in how well they reveal gaps and discrepancies among fitted distributions. In this study, the marginal distributions of the flood variables are selected by fitting an extensive set of parametric functions to event-based (block annual maxima) samples drawn from 50 years of continuously recorded streamflow for the Kelantan River basin at Guillemard Bridge, Malaysia. Model fitness is judged by the degree of agreement between empirical and theoretical cumulative probabilities, and both analytical and graphical inspections are undertaken to provide decisive evidence in favour of the best-fitted probability density.
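As a rough illustration of the distribution-screening step this abstract describes, the sketch below fits a few candidate parametric models to a hypothetical annual-maximum flood series and ranks them by a simple goodness-of-fit statistic; the candidate set, the synthetic data, and the use of the Kolmogorov-Smirnov statistic are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: screen candidate marginal distributions for an annual-maximum flood series.
# The data and the candidate set are hypothetical; the paper's own candidates and
# fitness criteria may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
peaks = stats.gumbel_r.rvs(loc=3000, scale=800, size=50, random_state=rng)  # stand-in AMS (m^3/s)

candidates = {
    "gumbel": stats.gumbel_r,
    "gev": stats.genextreme,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(peaks)                          # maximum-likelihood fit
    ks = stats.kstest(peaks, dist.cdf, args=params)   # empirical vs fitted CDF agreement
    results.append((ks.statistic, name))

for ks_stat, name in sorted(results):
    print(f"{name:>10s}  KS = {ks_stat:.3f}")          # smaller = closer agreement
```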

Prediction Model for Specific Cutting Energy of Pick Cutters Based on Gene Expression Programming and Particle Swarm Optimization (유전자 프로그래밍과 개체군집최적화를 이용한 픽 커터의 절삭비에너지 예측모델)

  • Hojjati, Shahabedin;Jeong, Hoyoung;Jeon, Seokwon
    • Tunnel and Underground Space
    • /
    • v.28 no.6
    • /
    • pp.651-669
    • /
    • 2018
  • This study suggests a prediction model to estimate the specific energy of a pick cutter using gene expression programming (GEP) and particle swarm optimization (PSO). Estimating the performance of mechanical excavators is of crucial importance in the early design stage of tunnelling projects, and the specific energy (SE) based approach serves as a standard performance prediction procedure applicable to all excavation machines. The purpose of this research is to investigate the relationship of UCS, BTS, penetration depth, and cut spacing with SE. A total of 46 full-scale linear cutting test results using pick cutters with different depths of cut and cut spacings on various rock types were collected from previous studies for the analysis. The mean squared error (MSE) of the conventional multiple linear regression (MLR) method is more than twice the MSE produced by the GEP-PSO algorithm, and the $R^2$ value of the GEP-PSO algorithm is about 0.13 higher than that of the MLR.
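To make the comparison metrics concrete, here is a minimal multiple linear regression baseline evaluated with MSE and $R^2$, the two criteria quoted in the abstract; the feature names and synthetic data are placeholders, and the GEP-PSO model itself (an evolved symbolic expression with PSO-tuned constants) is not reproduced here.

```python
# Sketch: MLR baseline for specific energy (SE) prediction, scored with MSE and R^2.
# Inputs mirror the abstract (UCS, BTS, depth of cut, cut spacing); all values are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
n = 46
X = np.column_stack([
    rng.uniform(30, 200, n),   # UCS (MPa)
    rng.uniform(2, 15, n),     # BTS (MPa)
    rng.uniform(3, 12, n),     # penetration depth (mm)
    rng.uniform(10, 90, n),    # cut spacing (mm)
])
se = 0.05 * X[:, 0] + 0.4 * X[:, 1] - 0.3 * X[:, 2] + 0.02 * X[:, 3] + rng.normal(0, 1.5, n)

mlr = LinearRegression().fit(X, se)
pred = mlr.predict(X)
print("MSE:", mean_squared_error(se, pred))
print("R^2:", r2_score(se, pred))
# A GEP-PSO model would replace LinearRegression with an evolved symbolic expression
# whose numeric constants are tuned by particle swarm optimization.
```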

Evaluation of Road Safety Audit on Existing Freeway by Empirical Bayes Method (경험적 베이즈 방법에 의한 공용중인 고속도로 교통안전진단사업의 효과평가)

  • Mun, Sung-Ra
    • International Journal of Highway Engineering
    • /
    • v.14 no.2
    • /
    • pp.117-129
    • /
    • 2012
  • A road safety audit is a preventive safety enhancement strategy: at the road planning and design stage it removes potential traffic accident factors in advance, and at the operation stage after construction it evaluates whether the road geometric structure and safety facilities are adequate to prevent traffic accidents. Since the strategy was introduced to Korea in the early 2000s, various projects have been carried out and it was recently legislated, so an evaluation of past projects is now needed to decide whether they should be continued. This study therefore evaluates a road safety audit performed on an existing freeway. The spatial extent is the Yong-dong line, on which safety treatments were executed in 2005 and 2006, and the temporal range covers the two years before and after 2005 and 2006. The empirical Bayes method for observational before-and-after studies is applied in the analysis. The results show an improvement on most treated sections, while the effect is negligible or absent on some sections. Comparing the details of the treatments applied to each section, sections that received multiple or varied treatments show good effects, whereas the sections with no apparent effect received only single or minor treatments. Based on these results, a more detailed analysis can be performed and countermeasures can be designed for the sections where no effect appears, and the findings can serve as a reference for the future planning and direction of road safety audits on existing freeways.
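For readers unfamiliar with the empirical Bayes before-and-after method named here, the sketch below shows the textbook calculation in simplified form: the expected crash count at a treated section is a weighted blend of a safety-performance-function (SPF) prediction and the observed count, and the treatment effect is the ratio of observed to expected crashes in the after period. The SPF values, overdispersion parameter, and counts are hypothetical, not the study's data, and the variance correction of the full method is omitted.

```python
# Sketch: simplified empirical Bayes before-after evaluation for one treated freeway section.
# All numbers are hypothetical placeholders.

def eb_expected(spf_pred_before: float, observed_before: float, overdispersion: float) -> float:
    """EB estimate of expected crashes in the before period."""
    w = 1.0 / (1.0 + overdispersion * spf_pred_before)   # weight on the SPF prediction
    return w * spf_pred_before + (1.0 - w) * observed_before

# Hypothetical section data
spf_before, obs_before = 12.0, 18     # SPF-predicted vs observed crashes, before period
spf_after, obs_after = 11.0, 9        # same for the after period (treatment not in the SPF)
k = 0.2                               # negative-binomial overdispersion parameter

expected_before = eb_expected(spf_before, obs_before, k)
# Scale the before-period EB estimate to the after period by the ratio of SPF predictions.
expected_after = expected_before * (spf_after / spf_before)

theta = obs_after / expected_after    # index of effectiveness (<1 means crashes were reduced)
print(f"expected after = {expected_after:.1f}, observed after = {obs_after}, theta = {theta:.2f}")
```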

A Study on the Construction Cost Index for Calculating Conceptual Estimation : 1970-1999 (개략공사비 산출을 위한 공사비 지수 연구 : 1970-1999)

  • Nam, Song Hyun;Park, Hyung Keun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.5
    • /
    • pp.527-534
    • /
    • 2020
  • A significant factor in construction work is cost. At the early and advanced design stages, costs should be estimated realistically according to unit price calculations; based on these estimates, the economic feasibility of the work is assessed and the decision on whether to proceed is made. The Korea Institute of Civil Engineering and Building Technology has calculated a construction cost index by an indirect method, reprocessing the producer price index and construction market labor costs so that price changes in Korean construction costs can be adjusted easily, and has announced the index since 2004. The published index, however, only reaches back to January 2000, which limits the ability to correct past construction cost data to present values. In this study, variables for computing rough construction costs from past cost data were derived from surveys of the producer price index and the construction market labor force that make up the construction cost index. After significant independent variables were selected through correlation analysis, the construction cost index from 1970 to 1999 was calculated and presented through multiple regression analysis. The study is therefore significant in that it proposes a method of calculating rough construction costs that makes use of construction cost data predating 2000.
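As a simplified illustration of the variable-selection-plus-regression procedure described above, the sketch below screens candidate explanatory series by their correlation with a known cost index and then backcasts the index for earlier years with an ordinary least-squares fit; the series names and numbers are invented, and the study's actual variable set and regression form are not reproduced.

```python
# Sketch: backcast a construction cost index from producer-price and labor-cost series.
# All series are synthetic; only the workflow (correlation screen -> regression -> backcast) is shown.
import numpy as np

rng = np.random.default_rng(2)
years_known = np.arange(2000, 2021)            # period with a published index
ppi = 80 + 2.0 * (years_known - 2000) + rng.normal(0, 1, years_known.size)    # producer price index
labor = 90 + 2.5 * (years_known - 2000) + rng.normal(0, 1, years_known.size)  # labor cost series
noise = rng.normal(0, 1, years_known.size)                                    # irrelevant candidate
index = 0.6 * ppi + 0.4 * labor + rng.normal(0, 0.5, years_known.size)

# 1) correlation screen: keep candidate variables strongly correlated with the index
candidates = {"ppi": ppi, "labor": labor, "noise": noise}
selected = [k for k, v in candidates.items() if abs(np.corrcoef(v, index)[0, 1]) > 0.5]

# 2) multiple regression on the selected variables
X = np.column_stack([np.ones(years_known.size)] + [candidates[k] for k in selected])
coef, *_ = np.linalg.lstsq(X, index, rcond=None)

# 3) backcast 1970-1999 using (hypothetical) historical values of the same variables
years_old = np.arange(1970, 2000)
old = {"ppi": 80 + 2.0 * (years_old - 2000), "labor": 90 + 2.5 * (years_old - 2000),
       "noise": np.zeros(years_old.size)}
X_old = np.column_stack([np.ones(years_old.size)] + [old[k] for k in selected])
print("selected variables:", selected)
print("backcast 1970-1999 index:", np.round(X_old @ coef, 1))
```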

Design and Implementation of an Intelligent Medical Expert System for TMA(Tissue Mineral Analysis) (TMA 분석을 위한 지능적 의학 전문가 시스템의 설계 및 구현)

  • 조영임;한근식
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.2
    • /
    • pp.137-152
    • /
    • 2004
  • Assessment of 30 nutritional minerals and 8 toxic elements in hair is very important not only for determining adequacy, deficiency and imbalance, but also for assessing their relative relationships in the body. A test that serves this purpose exceedingly well has been developed, known as tissue mineral analysis (TMA). TMA is a popular hair mineral analysis method used by health care professionals in medical centers in over 46 countries. However, there are some problems. First, there is no database suitable for analyzing Koreans. Second, because the TMA results from TEI-USA consist of English documents and graphic files that cannot be opened, their usability is very low. Third, the domestic databases related to TMA are of low quality, so hair samples are sent to TEI-USA for analysis and medical services, causing a severe outflow of foreign currency. Finally, because TMA results are based on American health and mineral standards, they can be misleading with respect to Korean mineral standards. The purpose of this research is to develop the first Intelligent Medical Expert System (IMES) for TMA in Korea, which resolves the problems mentioned above. IMES analyzes tissue mineral data with a multiple-stage decision tree classifier, and it is built on multiple fuzzy rule bases so that complex data from the Korean database can be analyzed by fuzzy inference. In a pilot test, the system increased business efficiency and business satisfaction by 86% and 92%, respectively.
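As a toy illustration of the multiple-stage decision logic the abstract mentions, the sketch below first classifies each mineral level against a reference range and then, in a second stage, flags ratio imbalances; the mineral names, reference ranges, and ratio rule are hypothetical placeholders, not the system's actual Korean reference database or fuzzy rule base.

```python
# Sketch: two-stage screening of hair-mineral results.
# Stage 1 classifies each element against a reference range; stage 2 checks element ratios.
# Reference values and the Ca/Mg ratio band are illustrative only.

REFERENCE_RANGES = {          # mg per 100 g hair (hypothetical)
    "Ca": (20.0, 120.0),
    "Mg": (3.0, 10.0),
    "Zn": (13.0, 20.0),
}

def stage1_classify(levels: dict) -> dict:
    """Classify each measured element as 'deficient', 'normal', or 'excess'."""
    out = {}
    for element, value in levels.items():
        low, high = REFERENCE_RANGES[element]
        out[element] = "deficient" if value < low else "excess" if value > high else "normal"
    return out

def stage2_ratios(levels: dict) -> list:
    """Flag ratio imbalances that single-element classes would miss."""
    findings = []
    ca_mg = levels["Ca"] / levels["Mg"]
    if not 3.0 <= ca_mg <= 11.0:          # hypothetical acceptable band
        findings.append(f"Ca/Mg ratio {ca_mg:.1f} outside 3-11")
    return findings

sample = {"Ca": 150.0, "Mg": 4.0, "Zn": 15.0}
print(stage1_classify(sample))
print(stage2_ratios(sample))
```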

Design Information Management System Core Development Using Industry Foundation Classes (IFC를 이용한 설계정보관리시스템 핵심부 구축)

  • Lee Keun-hyung;Chin Sang-yoon;Kim Jae-jun
    • Korean Journal of Construction Engineering and Management
    • /
    • v.1 no.2 s.2
    • /
    • pp.98-107
    • /
    • 2000
  • Increased use of computers in AEC (Architecture, Engineering and Construction) has expanded the amount of information produced by CAD (Computer Aided Design), PMIS (Project Management Information System), structural analysis programs, and scheduling programs, and has made that information more complex. The productivity of the AEC industry depends largely on good management and efficient reuse of this information, and this trend has prompted much research and development on ITC (Information Technology in Construction) and CIC (Computer Integrated Construction). In particular, many researchers have studied IFC (Industry Foundation Classes), developed by the IAI (International Alliance for Interoperability) for product-based information sharing. In spite of some valuable outputs, however, these studies are still at a preliminary stage and deal mainly with conceptual ideas and trial implementations; the process of developing the IFC application at the core of a design information management system, and a plan for applying it, have yet to be presented. The purpose of this paper is therefore to determine the technologies needed for a design information management system using IFC, and to present the key roles, the development process, and the application plan of the IFC application. The system integrates architectural and structural information into the product model and groups the product items at multiple levels and from multiple aspects. For the process model, two activities, 'Product Modeling' and 'Application Development', were defined at the initial level; the Application Development activity was then decomposed into five activities, 'IFC Schema Compile', 'Class Compile', 'Make Project Database Schema', 'Development of Product Frameworker', and 'Make Project Database', which are carried out with a C++ compiler, CAD, ObjectStore, ST-Developer, and ST-ObjectStore. Finally, an application process with six stages is proposed: '3D Modeling', 'Creation of Product Information', 'Creation and Update of Database', 'Reformation of Model's Structure with Multiple Hierarchies', 'Integration of Drawings and Specifications', and 'Creation of Quantity Information'. The IFCs, including other classes to be updated and newly developed for the construction, civil/structural, and facility management domains, will be used by experts through internet distribution technologies including CORBA and DCOM.
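The development environment described above (ST-Developer, ObjectStore, C++) dates from 2000; purely as a modern, hedged analogue of the "group product items by level and aspect" idea, the sketch below uses the open-source IfcOpenShell library to open an IFC model and group building elements by entity type. The file name is a placeholder, and this is not the system the authors built.

```python
# Sketch: group IFC product items by entity type using IfcOpenShell (a modern analogue,
# not the ST-Developer/ObjectStore toolchain used in the paper).
from collections import defaultdict

import ifcopenshell  # pip install ifcopenshell

model = ifcopenshell.open("sample_building.ifc")     # placeholder path

groups = defaultdict(list)
for element in model.by_type("IfcBuildingElement"):  # walls, slabs, beams, columns, ...
    groups[element.is_a()].append(element.GlobalId)

for entity_type, guids in sorted(groups.items()):
    print(f"{entity_type}: {len(guids)} instances")
```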


Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which involve two distinct approaches. The first approach uses application specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, a quantitative technique used by designers of reduced instruction set computers (RISC) is applied to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS), storing and executing 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format IF A and B and C and D THEN DO E, and THEN DO F; with this format the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN DO E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment; high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality, since many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, and such instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so tailoring an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference on a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required to perform a single inference; and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes, an ASIC approach being extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

Table I. Inference time with 51 rules
                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6,000 inferences   125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 us
  FLIPS              48                     122                         156,250
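To make the rule format and the max-min inference the chips implement easier to follow, here is a small software sketch of Mamdani-style inference for rules of the form IF A and B THEN E, with min for the AND/implication, max for rule aggregation, and centroid defuzzification. It mirrors, in software and with made-up membership functions, the on-chip table-lookup fuzzification and centroid defuzzifier described above; it is not the chip's actual datapath.

```python
# Sketch: Mamdani max-min inference for two rules of the form "IF A and B THEN E".
# Membership functions are arbitrary triangles over a 64-point universe, echoing the
# 64-element fuzzy set arrays mentioned for the chip; they are not the chip's tables.
import numpy as np

universe = np.linspace(0.0, 1.0, 64)

def tri(u, a, b, c):
    """Triangular membership function on universe u with peak at b."""
    return np.clip(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0, 1.0)

# Antecedent grades are scalars (degree to which the crisp inputs match A and B);
# consequents are fuzzy sets over the output universe.
rules = [
    # (grade_A, grade_B, consequent fuzzy set E)
    (0.7, 0.4, tri(universe, 0.0, 0.2, 0.5)),
    (0.3, 0.9, tri(universe, 0.4, 0.7, 1.0)),
]

clipped = [np.minimum(min(ga, gb), cons) for ga, gb, cons in rules]  # min = AND + implication
aggregate = np.maximum.reduce(clipped)                               # max = rule aggregation

centroid = np.sum(universe * aggregate) / np.sum(aggregate)          # centroid defuzzification
print(f"crisp output: {centroid:.3f}")
```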


A Product Model Centered Integration Methodology for Design and Construction Information (프로덕트 모델 중심의 설계, 시공 정보 통합 방법론)

  • Lee Keun-Hyoung;Kim Jae-Jun
    • Proceedings of the Korean Institute Of Construction Engineering and Management
    • /
    • autumn
    • /
    • pp.99-106
    • /
    • 2002
  • Early research on the integration of design and construction information focused on conceptual data models. The development and widespread use of commercial database management systems then led many researchers to design database schemas that clarify the relationships between non-graphic data items. Although these studies became the foundation for subsequent research, they did not utilize the graphic data available from CAD systems, which were already widely used. The 4D CAD concept suggests a way of integrating graphic data with schedule data; although it opened a new possibility for integration, it remains limited by its data dependency on a specific application. This research suggests a new approach to integrating design and construction information, the 'Product Model Centered Integration Methodology'. The methodology is established through a preliminary review of existing 4D CAD based approaches and through the development and application of the new methodology itself. A 'Design Component' can be converted into digital format by an object-based CAD system, and 'Unified Object-based Graphic Modeling' shows how to model a graphic product model with the CAD system. Because the possibility of reusing design information in later stages depends on how the CAD model is created, modeling guidelines and specifications are suggested. A prototype system for integration management and exchange is then presented, using a 'Product Frameworker' and a 'Product Database' that also supports multiple viewpoints. The 'Product Data Model' is designed, and the main data workflows are represented with the 'Activity Diagram', one of the UML diagrams; these can be used for writing program code and developing a prototype that automatically creates activity items in an actual schedule management system. Through validation, the 'Product Model Centered Integration Methodology' is suggested as a new approach for the integration of design and construction information.
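A minimal data-structure sketch of the idea that a product model can drive the automatic creation of schedule activity items, as the abstract describes: design components carry a work-type attribute, and activities are generated by grouping components under a chosen viewpoint. The class names, attributes, and grouping rule are illustrative assumptions, not the paper's Product Data Model.

```python
# Sketch: generate schedule activity items from an object-based product model.
# The grouping rule (work type per storey) is one hypothetical "viewpoint".
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DesignComponent:
    guid: str
    name: str
    storey: str
    work_type: str        # e.g. "concrete pour", "steel erection"

components = [
    DesignComponent("c1", "Column C1", "L1", "concrete pour"),
    DesignComponent("c2", "Slab S1", "L1", "concrete pour"),
    DesignComponent("b1", "Beam B7", "L2", "steel erection"),
]

def create_activities(comps):
    """One activity item per (storey, work type) group, linked back to its components."""
    groups = defaultdict(list)
    for c in comps:
        groups[(c.storey, c.work_type)].append(c.guid)
    return [{"activity": f"{wt} - {st}", "linked_components": g}
            for (st, wt), g in sorted(groups.items())]

for item in create_activities(components):
    print(item)
```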


Design and Full Size Flexural Test of Spliced I-type Prestressed Concrete Bridge Girders Having Holes in the Web (분절형 복부 중공 프리스트레스트 콘크리트 교량 거더의 설계 및 실물크기 휨 실험 분석)

  • Han, Man Yop;Choi, Sokhwan;Jeon, Yong-Sik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.31 no.3A
    • /
    • pp.235-249
    • /
    • 2011
  • A new form of I-type PSC bridge girder, which has holes in the web, is proposed in this paper. Three concepts are combined in the design. First, the girder is precast at a manufacturing plant as divided segments and assembled at the construction site by post-tensioning, which dramatically reduces the construction period at the site; the quality and curing of the concrete can be controlled at the factory, and the spliced girder segments can be moved to the site without transportation problems. Second, numerous holes are made in the web of the girder. This reduces the self-weight of the girder, but a more important point is that about half of the total anchorages can be moved from the girder ends into the individual holes, so the negative moment developed at the girder ends is reduced; and since the longitudinal compressive stresses at the ends are also reduced, a thick end diaphragm is not necessary. Third, the prestressing force is introduced into the member in multiple stages. This multi-stage prestressing concept overcomes the limit on prestressing force imposed by the allowable stresses at each loading stage and maximizes the applicable prestressing force, making the girder longer and shallower. Two 50 m long full-scale girders were fabricated and tested: one was a non-spliced, monolithic girder made as a single piece from the beginning, and the other was assembled by post-tensioning from five segments. The monolithic and spliced girders showed similar load-deflection relationships and crack patterns, and both satisfied the girder design specifications for flexural strength, deflection, and the live load deflection control limit. Both spliced and monolithic holed-web post-tensioned girders can be used to achieve span lengths of more than 50 m with a girder height of 2 m.
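The multi-stage prestressing idea in this abstract amounts to checking extreme-fiber stresses against allowable values at every stage as the prestress and the coexisting moment grow. The sketch below performs this textbook check for a generic section with placeholder properties and stage data; it is not the authors' design calculation.

```python
# Sketch: extreme-fiber stress check at successive prestressing stages (compression positive).
# Section properties, stage data, and allowable stresses are placeholder values.

A = 1.2               # cross-sectional area, m^2
S_top = 0.55          # section modulus to top fiber, m^3
S_bot = 0.60          # section modulus to bottom fiber, m^3
ALLOW_COMP = 21.0e3   # allowable compression, kPa
ALLOW_TENS = -1.5e3   # allowable tension (negative = tension), kPa

# (stage name, cumulative prestress P [kN], eccentricity e [m], coexisting moment M [kN*m])
stages = [
    ("transfer (stage 1 tendons)",    8000.0,  0.75,  3000.0),
    ("deck placed (stage 2 tendons)", 14000.0, 0.70,  9000.0),
    ("service (all tendons)",         18000.0, 0.68, 14000.0),
]

for name, P, e, M in stages:
    sigma_bot = P / A + P * e / S_bot - M / S_bot    # bottom fiber stress, kPa
    sigma_top = P / A - P * e / S_top + M / S_top    # top fiber stress, kPa
    ok = all(ALLOW_TENS <= s <= ALLOW_COMP for s in (sigma_bot, sigma_top))
    print(f"{name:32s} bottom {sigma_bot:8.0f} kPa, top {sigma_top:8.0f} kPa, ok={ok}")
```

Checking each stage separately is what lets a staged tendon layout carry more total prestress than a single jacking operation, since no intermediate state is allowed to violate the allowable stresses.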

Water Digital Twin for High-tech Electronics Industrial Wastewater Treatment System (II): e-ASM Calibration, Effluent Prediction, Process selection, and Design (첨단 전자산업 폐수처리시설의 Water Digital Twin(II): e-ASM 모델 보정, 수질 예측, 공정 선택과 설계)

  • Heo, SungKu;Jeong, Chanhyeok;Lee, Nahui;Shim, Yerim;Woo, TaeYong;Kim, JeongIn;Yoo, ChangKyoo
    • Clean Technology
    • /
    • v.28 no.1
    • /
    • pp.79-93
    • /
    • 2022
  • In this study, an electronics industrial wastewater activated sludge model (e-ASM) to be used as a Water Digital Twin was calibrated against measurements from lab-scale and pilot-scale reactors treating real high-tech electronics industrial wastewater, and was examined for its treatment performance, effluent quality prediction, and optimal process selection. For specialized modeling of a high-tech electronics industrial wastewater treatment system, the kinetic parameters of the e-ASM were identified by a sensitivity analysis and calibrated by the multiple response surface method (MRS). The calibrated e-ASM showed a compatibility of more than 90% with the experimental data from the lab-scale and pilot-scale processes. Four electronics industrial wastewater treatment processes, MLE, A2/O, 4-stage MLE-MBR, and Bardenpho-MBR, were implemented in the proposed Water Digital Twin to compare their removal efficiencies under various electronics industrial wastewater characteristics. Bardenpho-MBR stably removed more than 90% of the chemical oxygen demand (COD) and showed the highest nitrogen removal efficiency, and a high influent TMAH concentration of 1,800 mg L-1 could be removed by 98% when the HRT of the Bardenpho-MBR process was more than 3 days. Hence, the e-ASM in this study is expected to serve as a Water Digital Twin platform with high compatibility in a variety of situations, including plant optimization, Water AI, and the selection of best available technology (BAT) for a sustainable high-tech electronics industry.
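As a schematic of the calibration step described above, the sketch below tunes two kinetic parameters of a stand-in effluent model so that simulated effluent COD matches measurements, using ordinary least-squares minimization; the model function, parameters, and data are invented, and the study's actual e-ASM equations and multiple-response-surface (MRS) calibration are not reproduced.

```python
# Sketch: calibrate kinetic parameters of a simplified effluent model to measured data.
# The 'effluent_cod' model and all numbers are placeholders for the real e-ASM.
import numpy as np
from scipy.optimize import minimize

hrt_days = np.array([1.0, 2.0, 3.0, 4.0, 5.0])             # hydraulic retention time, d
measured_cod = np.array([210.0, 120.0, 75.0, 55.0, 45.0])   # effluent COD, mg/L (hypothetical)

def effluent_cod(params, hrt, influent_cod=400.0):
    """Toy first-order-removal model standing in for the e-ASM effluent prediction."""
    k, residual = params    # removal-rate constant [1/d], non-degradable fraction [mg/L]
    return residual + (influent_cod - residual) * np.exp(-k * hrt)

def objective(params):
    return np.sum((effluent_cod(params, hrt_days) - measured_cod) ** 2)

result = minimize(objective, x0=[0.5, 30.0], bounds=[(0.01, 5.0), (0.0, 100.0)])
k_fit, residual_fit = result.x
print(f"fitted k = {k_fit:.2f} 1/d, residual COD = {residual_fit:.1f} mg/L")
print("predicted effluent COD:", np.round(effluent_cod(result.x, hrt_days), 1))
```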