Title/Summary/Keyword: Mathematical Models


The Application of Fuzzy Logic to Assess the Performance of Participants and Components of Building Information Modeling

  • Wang, Bohan;Yang, Jin;Tan, Adrian;Tan, Fabian Hadipriono;Parke, Michael
    • Journal of Construction Engineering and Project Management / v.8 no.4 / pp.1-24 / 2018
  • In the last decade, Building Information Modeling (BIM) has been applied alongside traditional computer-aided design in an increasing number of architecture, engineering, and construction projects and applications. Its use together with construction management can be a valuable tool in moving these activities and projects forward more efficiently and in a more time-effective manner. The traditional stakeholders, i.e., the Owner, the A/E, and the Contractor, are involved in this BIM system, which is used in almost every activity of construction projects, such as design, cost estimating, and scheduling. This article extracts the major features of BIM application from the perspective of the participating BIM components and the different phases, and applies to them a logic-based analysis using a fuzzy performance tree, quantifying linguistic phrases to judge the effectiveness of the BIM techniques employed. That is, these fuzzy performance trees with fuzzy logic concepts can properly translate linguistic ratings into numeric expressions, and are thus employed to evaluate the influence of BIM applications as a mathematical process. Rotational fuzzy models are used to represent the membership functions of the performance values and their corresponding weights. Illustrations of the use of this fuzzy BIM performance tree are presented in the study for uninitiated users. The result of these processes is an evaluation of BIM project performance as highly positive. The quantification of the performance ratings for the individual factors is a significant contributor to this assessment, capable of parsing vernacular language into numerical data for more accurate and precise use in performance analysis. It is hoped that fuzzy performance trees and fuzzy set analysis can serve as tools for quality and risk analysis of other construction techniques in the future.
Baldwin's rotational models are used to represent the membership functions of the fuzzy sets. Three scenarios are presented, using fuzzy MEAN, AND, and OR gates from the lowest to the intermediate levels of the tree, and a fuzzy SUM gate to relate the intermediate level to the top component of the tree, i.e., the final performance of the BIM application. The use of fuzzy MEAN gates for the lower levels and fuzzy SUM gates to reach the top level yields the most realistic and accurate results. The methodology (the fuzzy performance tree) described in this paper is appropriate for today's construction industry, where limited objective data are available and heavy reliance is placed on experts' subjective judgment.
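
The gate logic described in the abstract can be sketched numerically. The snippet below is an illustrative sketch only, not the paper's implementation: it uses crisp values in [0, 1] in place of Baldwin's rotational fuzzy sets, reads the SUM gate as a probabilistic sum (the paper may define it differently), and all ratings and weights are hypothetical.

```python
# Illustrative fuzzy performance-tree gates on crisp ratings in [0, 1].

def fuzzy_and(values):
    # Pessimistic aggregation: the weakest child dominates.
    return min(values)

def fuzzy_or(values):
    # Optimistic aggregation: the strongest child dominates.
    return max(values)

def fuzzy_mean(values, weights):
    # Weighted average of child performance ratings.
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def fuzzy_sum(values):
    # Probabilistic sum, capped at 1.0.
    total = 0.0
    for v in values:
        total = total + v - total * v
    return min(total, 1.0)

# Hypothetical lower-level ratings for three BIM participants.
owner, ae, contractor = 0.8, 0.7, 0.9
intermediate = fuzzy_mean([owner, ae, contractor], [1, 1, 1])
top = fuzzy_sum([intermediate, 0.6])   # top-level BIM performance
```

In a real application the leaf ratings would come from defuzzified linguistic grades ("good", "poor", ...) rather than hand-picked numbers.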

Propagation Analysis of Dam Break Wave using Approximate Riemann solver (Riemann 해법을 이용한 댐 붕괴파의 전파 해석)

  • Kim, Byung Hyun;Han, Kun Yeon;Ahn, Ki Hong
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.5B / pp.429-439 / 2009
  • When a catastrophic extreme flood occurs due to dam break, the response time for flood warning is much shorter than for natural floods. Numerical models can be powerful tools for predicting flood wave propagation and for providing information about the flooded area, wave-front arrival time, water depth, and so on. However, flood wave propagation due to dam break is difficult to characterize mathematically, since the flood wave includes discontinuous flow and dry-bed propagation. Nevertheless, many numerical models using the finite volume method have recently been developed to simulate flood inundation due to dam break. Because finite volume methods are based on the integral form of the conservation equations, a finite volume model can easily capture discontinuous flows and shock waves. In this study, a numerical model was developed that applies approximate Riemann solvers and the finite volume method to the conservative form of the two-dimensional shallow water equations. The MUSCL scheme with the surface gradient method is used to reconstruct the conservation variables in the continuity and momentum equations within a predictor-corrector procedure, and the scheme is second-order accurate in both space and time. The developed finite volume model is applied to 2D partial dam-break flows and dam-break flows over a triangular bump, and is validated by comparing the numerical solutions with laboratory measurements and other researchers' data.
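
As a toy illustration of the finite-volume/approximate-Riemann approach, here is a first-order 1D dam-break sketch using an HLL flux. It is deliberately simpler than the paper's model (1D rather than 2D, first-order rather than second-order MUSCL, fixed boundary cells), and the channel length, depths, and end time are arbitrary.

```python
import math

def hll_flux(hL, uL, hR, uR, g=9.81):
    # HLL approximate Riemann flux for the 1D shallow water equations,
    # with conserved variables (h, h*u).
    FL = (hL * uL, hL * uL**2 + 0.5 * g * hL**2)
    FR = (hR * uR, hR * uR**2 + 0.5 * g * hR**2)
    cL, cR = math.sqrt(g * hL), math.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)   # left wave-speed estimate
    sR = max(uL + cL, uR + cR)   # right wave-speed estimate
    if sL >= 0:
        return FL
    if sR <= 0:
        return FR
    UL, UR = (hL, hL * uL), (hR, hR * uR)
    return tuple((sR * FL[i] - sL * FR[i] + sL * sR * (UR[i] - UL[i]))
                 / (sR - sL) for i in range(2))

def dam_break(n=100, t_end=0.1, hL=1.0, hR=0.1, g=9.81):
    # First-order Godunov update on a unit-length channel; the dam sits
    # at the midpoint and the water is initially at rest.
    dx = 1.0 / n
    h = [hL if (i + 0.5) * dx < 0.5 else hR for i in range(n)]
    q = [0.0] * n                       # discharge h*u
    t = 0.0
    while t < t_end:
        c = max(abs(q[i] / h[i]) + math.sqrt(g * h[i]) for i in range(n))
        dt = min(0.9 * dx / c, t_end - t)   # CFL-limited time step
        fluxes = [hll_flux(h[i], q[i] / h[i], h[i + 1], q[i + 1] / h[i + 1], g)
                  for i in range(n - 1)]
        for i in range(1, n - 1):
            h[i] -= dt / dx * (fluxes[i][0] - fluxes[i - 1][0])
            q[i] -= dt / dx * (fluxes[i][1] - fluxes[i - 1][1])
        t += dt
    return h

depths = dam_break()
```

Even this minimal version shows the method's key property: the shock and the rarefaction emerge from the flux formula alone, with no special shock-tracking logic.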

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its important functions are automatic differentiation and GPU utilization. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. This criterion is based simply on the lengths of the codes; the learning curve and the ease of coding are not the main concern.
According to these criteria, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, one can implement and test any new deep learning model or any new search method one can think of. Regarding execution speed, there was no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not kept identical: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. And for someone learning deep learning models, the availability of sufficient examples and references also matters.
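
The chain-rule mechanism the abstract describes can be sketched in a few lines. This is a naive reverse-mode toy, not how Theano, Tensorflow, or CNTK actually implement it: it stores a local partial derivative on each graph edge and propagates gradients recursively along the edges (which revisits shared nodes once per path, fine for small graphs).

```python
# Tiny reverse-mode automatic differentiation on a computational graph.

class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent node, local partial derivative) pairs
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(this node), then push it up each edge
        # multiplied by that edge's local partial (the chain rule).
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, y) = x * y + x  =>  df/dx = y + 1, df/dy = x
x, y = Node(3.0), Node(4.0)
f = x * y + x
f.backward()
```

Real frameworks do the same bookkeeping, but over tensors, with a topologically ordered single pass and GPU kernels for the local derivatives.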

Efficient Peer-to-Peer Lookup in Multi-hop Wireless Networks

  • Shin, Min-Ho;Arbaugh, William A.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.3 no.1 / pp.5-25 / 2009
  • In recent years the popularity of multi-hop wireless networks has been growing. Their flexible topology and abundant routing paths enable many types of applications. However, the lack of a centralized controller often makes it difficult to design reliable services in multi-hop wireless networks. While packet routing has been the center of attention for decades, recent research focuses on data discovery, such as file sharing, in multi-hop wireless networks. Although there are many peer-to-peer lookup (P2P-lookup) schemes for wired networks, they have inherent limitations in multi-hop wireless networks. First, a wired P2P-lookup builds its search structure on the overlay network and disregards the underlying topology. Second, its performance guarantees often rely on specific topology models, such as random graphs, that do not apply to multi-hop wireless networks. Past studies on wireless P2P-lookup either combined existing solutions with known routing algorithms or proposed tree-based routing, which is prone to traffic congestion. In this paper, we present two wireless P2P-lookup schemes that strictly build a topology-dependent structure. We first propose Ring Interval Graph Search (RIGS), which constructs a DHT using only direct connections between nodes. We then propose ValleyWalk, a loosely structured scheme that requires only simple local hints for query routing. Packet-level simulations showed that RIGS can find the target with near-shortest search length, and that ValleyWalk can do so when there is at least 5% object replication. We also provide an analytic bound on the search length of ValleyWalk.
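
For intuition about why a wired overlay lookup "disregards the underlying topology", here is a generic successor-walk on a Chord-style identifier ring. This is not RIGS or ValleyWalk: each overlay hop below may translate into many physical wireless hops, which is exactly the mismatch the paper addresses. Node IDs, the key, and the 8-bit identifier space are hypothetical.

```python
def successor_lookup(nodes, start, key, space=256):
    # Each node knows only its clockwise successor on the identifier
    # ring; the lookup walks the ring until it reaches the node
    # responsible for the key (the key's clockwise successor).
    ring = sorted(nodes)
    succ = {ring[i]: ring[(i + 1) % len(ring)] for i in range(len(ring))}
    responsible = min(ring, key=lambda n: (n - key) % space)
    path = [start]
    while path[-1] != responsible:
        path.append(succ[path[-1]])
    return path

# Lookup for key 60 starting at node 10 in an 8-bit identifier space.
path = successor_lookup([10, 50, 120, 200], start=10, key=60)
```

A real Chord adds finger tables to cut the walk from O(N) to O(log N) overlay hops, but every one of those hops still lands on an ID-based neighbor chosen with no regard to physical proximity.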

The Role of PK/PD Modeling and Simulation in Model-based New Drug Development (모델 기반학적 신약개발에서 약동/약력학 모델링 및 시뮬레이션의 역할)

  • Yun, Hwi-Yeol;Baek, In-Hwan;Seo, Jeong-Won;Bae, Kyung-Jin;Lee, Mann-Hyung;Kang, Won-Ku;Kwon, Kwang-Il
    • Korean Journal of Clinical Pharmacy / v.18 no.2 / pp.84-96 / 2008
  • Recently, pharmacokinetic (PK)/pharmacodynamic (PD) modeling has emerged as a critical-path tool in new drug development for optimizing drug efficacy and safety. PK/PD modeling is a mathematical approach to describing the relationship between PK and PD. In new drug development, this approach can estimate inaccessible PK and PD parameters, evaluate competing hypotheses, and predict the response under new conditions. Additionally, PK/PD modeling provides information about systemic conditions that aids understanding of the pharmacology and biology. These advantages of PK/PD model development provide early decision-making information in the new drug development process and improve the ability to predict the success of clinical trials. The purpose of this review article is to summarize the PK/PD modeling process and to provide theoretical and practical information about widely used PK/PD models. The review also provides model schemes and the differential equations used in developing PK/PD models.
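
As a minimal sketch of the kind of model such a review covers, the snippet below links a one-compartment IV-bolus PK model to a direct Emax PD model. All parameter values (dose, volume of distribution, elimination rate, Emax, EC50) are hypothetical and chosen only for illustration.

```python
import math

def concentration(dose, volume, ke, t):
    # One-compartment IV bolus: C(t) = (Dose / V) * exp(-ke * t)
    return dose / volume * math.exp(-ke * t)

def effect(conc, emax, ec50):
    # Direct Emax model: E = Emax * C / (EC50 + C)
    return emax * conc / (ec50 + conc)

# Hypothetical values: 100 mg dose, V = 10 L, ke = 0.1 /h,
# Emax = 100 units, EC50 = 10 mg/L.
C0 = concentration(dose=100.0, volume=10.0, ke=0.1, t=0.0)   # 10 mg/L
E0 = effect(C0, emax=100.0, ec50=10.0)                       # half-maximal
```

With EC50 set equal to the initial concentration, the initial effect is exactly half of Emax, which is the defining property of the EC50 parameter.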


Growth Characteristics of Enterobacter sakazakii Used to Develop a Predictive Model

  • Seo, Kyo-Young;Heo, Sun-Kyung;Bae, Dong-Ho;Oh, Deog-Hwan;Ha, Sang-Do
    • Food Science and Biotechnology / v.17 no.3 / pp.642-650 / 2008
  • A mathematical model was developed for predicting the growth rate of Enterobacter sakazakii in tryptic soy broth medium as a function of the combined effects of temperature (5, 10, 20, 30, and 40°C), pH (4, 5, 6, 7, 8, 9, and 10), and NaCl concentration (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10%). For all experimental variables, the primary models showed a good fit (R² = 0.8965 to 0.9994) to a modified Gompertz equation used to obtain growth rates. The secondary model was: ln(specific growth rate) = -0.38116 + 0.01281·Temp + 0.07993·pH + 0.00618·NaCl - 0.00018·Temp² - 0.00551·pH² - 0.00093·NaCl² + 0.00013·Temp·pH - 0.00038·Temp·NaCl - 0.00023·pH·NaCl. This model is considered appropriate for predicting growth rates on the basis of a correlation coefficient (r) of 0.9579, a coefficient of determination (R²) of 0.91, a mean square error of 0.026, a bias factor of 1.03, and an accuracy factor of 1.13. Our secondary model provided reliable predictions of growth rates for E. sakazakii in broth under the combined effects of temperature, NaCl concentration, and pH.
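
The secondary model can be evaluated directly as transcribed from the abstract. The inputs below (30°C, pH 7, 1% NaCl) are an arbitrary point inside the modeled ranges, not a case reported in the paper.

```python
import math

def ln_specific_growth_rate(temp, ph, nacl):
    # Secondary polynomial model from the abstract:
    # temp in deg C, pH unitless, NaCl in % (w/v).
    return (-0.38116
            + 0.01281 * temp + 0.07993 * ph + 0.00618 * nacl
            - 0.00018 * temp ** 2 - 0.00551 * ph ** 2 - 0.00093 * nacl ** 2
            + 0.00013 * temp * ph - 0.00038 * temp * nacl
            - 0.00023 * ph * nacl)

# Exponentiate to recover the specific growth rate itself.
rate = math.exp(ln_specific_growth_rate(30, 7, 1))
```

Note the model predicts the natural log of the growth rate, so the fitted value must be exponentiated before use.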

Development of Instructional Models for Problem Solving in Quadratic Functions and Ellipses (이차함수와 타원의 문제해결 지도를 위한 멀티미디어 학습자료 개발)

  • 김인수;고상숙;박승재;김영진
    • Journal of Educational Research in Mathematics / v.8 no.1 / pp.59-71 / 1998
  • Recently, most classrooms in Korea have been fully equipped with multimedia environments, such as a powerful Pentium PC and a 43-inch large-screen TV, through the third renovation of classroom environments. However, there is not much software teachers can use directly in their teaching. Even existing software such as GSP and Mathematica turns out not to fit well with the large number of students in classrooms, and it is all written in English. This study analyzes the characteristics of the problem-solving process and develops a computer program that integrates the instruction of problem solving into a regular mathematics program in the areas of quadratic functions and ellipses. Problem solving in this study included two sessions: 1) learning of basic facts, concepts, and principles; and 2) problem solving in problem contexts. In the former, the program was constructed based on the definitions of concepts so that students can explore, conjecture, and discover mathematical ideas such as basic facts, concepts, and principles. In the latter, Polya's four phases of the problem-solving process guided the design of the program. In understanding a problem, the program enhanced students' understanding with multiple, dynamic representations of the problem using visualization. The strategies used in making a plan were collecting data, using pictures, inductive and deductive reasoning, and creative reasoning to develop abstract thinking. In carrying out the plan, students can solve the problem according to the strategies they formed in the previous phase. In looking back, the program is very useful for giving students an opportunity to reflect on the problem-solving process, generalize their solution, and create a new in-depth problem. The program matched well with the dynamic, oscillating nature of Polya's problem-solving process. Moreover, students can strengthen their motivation to solve a problem with dynamic, multiple representations of the problem and become powerful, confident problem solvers within an interactive computer environment. As a follow-up study, research on the effect of the program in classrooms is recommended.


A Study on the Estimation Model of the Proper Cargo Handling Capacity based on Simulation in Port - Port Cargo Exclusive Pier Example - (항만에서 시뮬레이션 기반 적정하역능력 산정 모델에 관한 연구 - 항만 화물 전용부두 중심으로 -)

  • Park, Sang-Kook;Park, Nam-Kyu
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.10 / pp.2454-2460 / 2013
  • Until now, the port cargo handling capacity for general cargo has been computed using simple formulae based on mathematical models. However, such simple calculations cannot reflect reality. Thus, a simulation method is applied in this paper to overcome the limitations of the calculation methods used in past studies. The process from arrival to departure of a ship, reflecting the operating rules of the berth, was modeled to estimate the optimum handling capacity, using the loading and unloading operations of a dedicated wharf as an example, and the simulation was performed by developing a prototype. The actual processing capability of Mukho port was compared with the capability estimated by the simulation method, and the optimum capability could be computed by repeatedly simulating the input-variable conditions of the simulation prototype.
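
The arrival-to-departure berth process can be sketched as a minimal discrete-event loop. This is a toy single-berth queue with hypothetical exponential inter-arrival times and a fixed service time, far simpler than the paper's Mukho-port prototype, but it shows why simulation captures queueing effects that a simple capacity formula misses.

```python
import random

def simulate(n_ships=1000, mean_interarrival=5.0, service_time=4.0, seed=1):
    # Single berth: each ship arrives, waits if the berth is occupied,
    # then is serviced in arrival order. Times are in hours (hypothetical).
    random.seed(seed)
    t = 0.0                 # current arrival clock
    berth_free_at = 0.0     # when the berth next becomes idle
    total_wait = 0.0
    for _ in range(n_ships):
        t += random.expovariate(1.0 / mean_interarrival)  # next arrival
        start = max(t, berth_free_at)       # wait if berth is busy
        total_wait += start - t
        berth_free_at = start + service_time
    avg_wait = total_wait / n_ships
    utilization = n_ships * service_time / berth_free_at
    return avg_wait, utilization

avg_wait, utilization = simulate()
```

Even at moderate utilization the average wait is nonzero, a queueing effect that a deterministic throughput formula (ships per hour times service time) would report as zero.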

Mix Design of Lightweight Aggregate Concrete and Determination of Targeted Dry Density of Concrete (경량골재 콘크리트의 배합설계 및 목표 콘크리트 기건밀도의 결정)

  • Yang, Keun-Hyeok
    • Journal of the Korea Institute of Building Construction / v.13 no.5 / pp.491-497 / 2013
  • The objective of the present study is to establish a straightforward mixture-proportioning procedure for structural lightweight aggregate concrete (LWAC) and to evaluate the selection range of the targeted dry density of concrete against the designed concrete compressive strength. In developing this procedure, mathematical models were formulated based on a nonlinear regression analysis of 347 data sets and on two boundary conditions: the absolute volume and the dry density of concrete. The proposed procedure demonstrated that the water-to-cement ratio and dry density of concrete appropriate for achieving the designed strength decrease as the volumetric ratio of coarse aggregates increases. This trend was more significant in all-LWAC than in sand-LWAC. Overall, the selectable dry density of LWAC lies within a certain range according to the designed strength, and this range can be obtained using the proposed procedure.

EDISON Platform to Supporting Education and Integration Research in Computational Science (계산과학 분야의 교육 및 융합연구 지원을 위한 EDISON 플랫폼)

  • Jin, Du-Seok;Jung, Young-Jin;Jung, Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.1 / pp.176-182 / 2012
  • Recently, a new theoretical and methodological approach, computational science, has become increasingly popular for analyzing and solving scientific problems in various scientific disciplines and applied research. Computational science is a field of study concerned with constructing mathematical models and quantitative analysis techniques and using large computing resources to solve problems that are difficult to approach through physical experiments. In this paper, we present the R&D of the EDISON open integration platform, which allows anyone, such as professors, researchers, industry practitioners, and students, to upload, use, and share advanced research results such as simulation software, based on the cyberinfrastructure of supercomputers and networks. The EDISON platform, which consists of three tiers (the EDISON application framework, EDISON middleware, and EDISON infrastructure resources), provides a web portal and user services for education and research in five areas (CFD, chemistry, physics, structural dynamics, and computational design).