• Title/Summary/Keyword: computational accuracy


Spatiotemporal chlorine residual prediction in water distribution networks using a hierarchical water quality simulation technique (계층적 수질모의기법을 이용한 상수관망시스템의 시공간 잔류염소농도 예측)

  • Jeong, Gimoon;Kang, Doosun;Hwang, Taemun
    • Journal of Korea Water Resources Association / v.54 no.9 / pp.643-656 / 2021
  • Water supply management technology has advanced considerably in recent years, and computer simulation models play a critical role in estimating hydraulics and water quality in water distribution networks (WDNs). However, simulating complex, large water networks is computationally intensive, especially for water quality simulations, which require a short simulation time step and a long simulation period. Thus, it is often prohibitive to analyze water quality in real-scale networks. In this study, to improve the computational efficiency of water quality simulations in complex water networks, a hierarchical water-quality-simulation technique was proposed. The water network is hierarchically divided into two sub-networks to improve computing efficiency while preserving water quality simulation accuracy. The proposed approach was applied to a large-scale real-life water network currently operating in South Korea, and demonstrated the spatiotemporal distribution of chlorine concentration under diverse chlorine injection scenarios.
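Not part of the paper's method, but the water-quality side of such simulations typically reduces to first-order bulk decay of chlorine along each pipe. A minimal sketch, with illustrative function names and decay coefficient:

```python
import math

def chlorine_residual(c0_mg_l, k_per_hr, travel_time_hr):
    """First-order bulk decay of chlorine along a pipe:
    C(t) = C0 * exp(-k * t)."""
    return c0_mg_l * math.exp(-k_per_hr * travel_time_hr)

def residual_along_path(c0, k, travel_times):
    """Chain decay through a path of pipes: the outlet concentration
    of one pipe becomes the inlet concentration of the next."""
    c = c0
    for t in travel_times:
        c = chlorine_residual(c, k, t)
    return c

# 1.0 mg/L injected, k = 0.1 /hr, two pipes with 2 h and 3 h travel time
c = residual_along_path(1.0, 0.1, [2.0, 3.0])
print(round(c, 4))
```

For first-order decay the path result depends only on total travel time, which is exactly what a hierarchical decomposition can exploit when aggregating sub-networks.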

Functional Prediction of Hypothetical Proteins from Shigella flexneri and Validation of the Predicted Models by Using ROC Curve Analysis

  • Gazi, Md. Amran;Mahmud, Sultan;Fahim, Shah Mohammad;Kibria, Mohammad Golam;Palit, Parag;Islam, Md. Rezaul;Rashid, Humaira;Das, Subhasish;Mahfuz, Mustafa;Ahmeed, Tahmeed
    • Genomics & Informatics / v.16 no.4 / pp.26.1-26.12 / 2018
  • Shigella spp. constitute some of the key pathogens responsible for the global burden of diarrhoeal disease. With over 164 million reported cases per annum, shigellosis accounts for 1.1 million deaths each year. The majority of these cases occur among children in developing nations, and the emergence of multidrug-resistant Shigella strains in clinical isolates demands the development of new and better drugs against this pathogen. The genome of Shigella flexneri was extensively analyzed and found to encode 4,362 proteins, among which the functions of 674 proteins, termed hypothetical proteins (HPs), had not been previously elucidated. The amino acid sequences of all 674 HPs were studied, and the functions of 39 HPs were assigned with a high level of confidence. Here we utilized a combination of the latest versions of databases to assign precise functions to HPs for which no experimental information is available. These HPs were found to belong to various classes of proteins, such as enzymes, binding proteins, signal transducers, lipoproteins, transporters, and virulence proteins. The performance of the various computational tools was evaluated using receiver operating characteristic (ROC) curve analysis, and a high average accuracy of 93.6% was obtained. Our comprehensive analysis will help to gain greater understanding for the development of novel potential therapeutic interventions to defeat Shigella infection.
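As a rough illustration of the validation step, the area under the ROC curve can be computed directly from its rank interpretation, with no library dependency. A minimal sketch with invented scores and labels:

```python
def roc_auc(scores, labels):
    """ROC AUC computed as the probability that a randomly chosen
    positive example scores higher than a randomly chosen negative
    one (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.2]   # tool confidence per prediction
labels = [1, 1, 0, 1]           # 1 = prediction later confirmed
auc = roc_auc(scores, labels)
print(auc)
```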

Acceleration of the SBR Technique using Grouping of Rays (광선 그룹화를 이용한 SBR 가속기법)

  • Lee, Jae-In;Yun, Dal-Jae;Yang, Seong-Jun;Yang, Woo-Yong;Bae, Jun-Woo;Kim, Si-Ho;Myung, Noh-Hoon
    • Journal of the Korea Institute of Military Science and Technology / v.21 no.6 / pp.752-759 / 2018
  • The SBR technique is one of the asymptotic high-frequency methods, in which a dense grid of rays is launched and traced to analyze the scattering properties of a target. In this paper, we propose an accelerated SBR technique that groups a central ray with its 8 surrounding rays. First, the launched rays are partitioned into groups, each consisting of a central ray and 8 surrounding rays. After the central ray of each group is traced, the 8 surrounding rays are rapidly traced by reusing the ray-tracing information of the central ray. Simulation results of scattering analysis for CAD models verify that the proposed method reduces computational time without decreasing accuracy.
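The grouping step can be sketched as a simple partition of the launch grid into 3x3 blocks; this is only an illustration of the bookkeeping, not the authors' ray tracer:

```python
def group_rays(nx, ny):
    """Partition an nx-by-ny grid of launched rays into 3x3 groups:
    each group has one central ray and up to 8 surrounding rays
    whose traces can reuse the central ray's hit information."""
    groups = []
    for cx in range(1, nx, 3):
        for cy in range(1, ny, 3):
            center = (cx, cy)
            surround = [(cx + dx, cy + dy)
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0)
                        and 0 <= cx + dx < nx and 0 <= cy + dy < ny]
            groups.append((center, surround))
    return groups

groups = group_rays(6, 6)
print(len(groups))        # 4 groups for a 6x6 launch grid
print(len(groups[0][1]))  # 8 surrounding rays in the first group
```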

Coupling non-matching finite element discretizations in small-deformation inelasticity: Numerical integration of interface variables

  • Amaireh, Layla K.;Haikal, Ghadir
    • Coupled systems mechanics / v.8 no.1 / pp.71-93 / 2019
  • Finite element simulations of solid mechanics problems often involve the use of Non-Conforming Meshes (NCM) to increase accuracy in capturing nonlinear behavior, including damage and plasticity, in part of a solid domain without an undue increase in computational costs. In the presence of material nonlinearity and plasticity, higher-order variables are often needed to capture nonlinear behavior and material history on non-conforming interfaces. The most popular formulations for coupling non-conforming meshes are dual methods that involve the interpolation of a traction field on the interface. These methods are subject to the Ladyzhenskaya-Babuska-Brezzi (LBB) stability condition, and are therefore limited in their implementation with the higher-order elements needed to capture nonlinear material behavior. Alternatively, the enriched discontinuous Galerkin approach (EDGA) (Haikal and Hjelmstad 2010) is a primal method that provides higher-order kinematic fields on the interface, and in which interface tractions are computed from local finite element estimates, therefore facilitating its implementation with nonlinear material models. The inclusion of higher-order interface variables, however, presents the issue of preserving material history at integration points when an increase in integration order is needed. In this study, the enriched discontinuous Galerkin approach (EDGA) is extended to the case of small-deformation plasticity. An interface-driven Gauss-Kronrod integration rule is proposed to enable adaptive enrichment on the interface while preserving history-dependent material data at existing integration points. The method is implemented using classical J2 plasticity theory as well as the pressure-dependent Drucker-Prager material model. We show that an efficient treatment of interface variables can improve algorithmic performance and provide a consistent approach for coupling non-conforming meshes in inelasticity.
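The paper's EDGA formulation is not reproduced here, but the classical return-mapping (radial return) update underlying J2 plasticity models can be sketched in one dimension; E and sigma_y below are illustrative values, not the paper's:

```python
def radial_return_1d(strain, plastic_strain, E, sigma_y):
    """One step of the classical return mapping for 1D J2 perfect
    plasticity: compute an elastic trial stress, then project back
    onto the yield surface if the trial state is inadmissible."""
    trial = E * (strain - plastic_strain)
    f = abs(trial) - sigma_y        # yield function at the trial state
    if f <= 0.0:
        return trial, plastic_strain            # purely elastic step
    dgamma = f / E                              # plastic multiplier
    sign = 1.0 if trial > 0 else -1.0
    stress = trial - E * dgamma * sign          # lands on the yield surface
    return stress, plastic_strain + dgamma * sign

# E = 200 GPa (in MPa), sigma_y = 250 MPa, 1% total strain
stress, ep = radial_return_1d(0.01, 0.0, E=200e3, sigma_y=250.0)
print(stress)   # stress is capped at the yield stress
```

The state update pattern above, where plastic strain carries the material history between steps, is exactly what must be preserved at integration points when the integration order changes.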

A new method to predict the critical incidence angle for buildings under near-fault motions

  • Sebastiani, Paolo E.;Liberatore, Laura;Lucchini, Andrea;Mollaioli, Fabrizio
    • Structural Engineering and Mechanics / v.68 no.5 / pp.575-589 / 2018
  • It is well known that the incidence angle of seismic excitation influences the structural response of buildings, and this effect can be more significant in the case of near-fault signals. However, current seismic codes do not include detailed requirements regarding the direction of application of the seismic action, and they have only recently introduced specific provisions for near-fault earthquakes. Thus, engineers have the task of evaluating all the relevant directions, or the most critical conditions, case by case in order to avoid underestimating structural demand. To facilitate the identification of the most critical incidence angle, this paper presents a procedure that uses a two-degree-of-freedom model to represent a building. The proposed procedure avoids the extensive computational effort of multiple dynamic analyses with varying angles of incidence of ground motion excitation, which is required if a spatial multi-degree-of-freedom model is used. The procedure is validated through the analysis of two case studies, an eight-storey and a six-storey reinforced concrete frame building, selected as representative of existing structures located in Italy. A set of 124 near-fault ground motion records, oriented along 8 incidence angles varying from 0 to 180 degrees in increments of 22.5 degrees, is used to excite the structures. Comparisons between the results obtained with detailed models of the two structures and with the proposed procedure show the accuracy of the latter in predicting the most critical angle of seismic incidence.
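The core of such a sweep, rotating the two horizontal components to an incidence angle and maximizing a demand measure, can be sketched as follows; the signals and the peak-value demand are illustrative stand-ins for the actual structural response analysis:

```python
import math

def rotated_component(ax, ay, theta_deg):
    """Ground-motion component along direction theta:
    a_theta(t) = ax(t)*cos(theta) + ay(t)*sin(theta)."""
    th = math.radians(theta_deg)
    return [x * math.cos(th) + y * math.sin(th) for x, y in zip(ax, ay)]

def critical_angle(ax, ay, angles,
                   demand=lambda a: max(abs(v) for v in a)):
    """Sweep the incidence angles and return the one that maximizes
    the chosen demand measure (here simply the peak of the rotated
    component)."""
    return max(angles, key=lambda t: demand(rotated_component(ax, ay, t)))

ax = [0.3, -0.1, 0.2]                  # toy horizontal component 1 (g)
ay = [0.1, 0.4, -0.2]                  # toy horizontal component 2 (g)
angles = [i * 22.5 for i in range(9)]  # 0 to 180 deg, step 22.5
theta = critical_angle(ax, ay, angles)
print(theta)
```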

Domain decomposition technique to simulate crack in nonlinear analysis of initially imperfect laminates

  • Ghannadpour, S. Amir M.;Karimi, Mona
    • Structural Engineering and Mechanics / v.68 no.5 / pp.603-619 / 2018
  • In this research, an effective computational technique is developed for nonlinear and post-buckling analyses of cracked imperfect composite plates. The laminated plates are assumed to be moderately thick, so the analysis can be carried out based on first-order shear deformation theory. Geometric nonlinearity is introduced through von Karman assumptions for the strain-displacement equations. The Ritz technique is applied using Legendre polynomials for the primary variable approximations. The crack is modeled by partitioning the entire domain of the plate into several sub-plates, so a plate decomposition technique is implemented. The penalty technique is used to impose interface continuity between the sub-plates. Different out-of-plane essential boundary conditions, such as clamped, simply supported, or free conditions, are considered by defining the relevant displacement functions. For the in-plane boundary conditions, lateral expansion of the unloaded edges is completely free, while the loaded edges are assumed to move straight but are restricted from moving laterally. With the formulation presented here, the plates can be subjected to biaxial compressive loads, so a sensitivity analysis is performed with respect to the applied load direction, parallel or perpendicular to the crack axis. The potential energy integrals are computed numerically using Gauss-Lobatto quadrature formulas to achieve adequate accuracy. The resulting nonlinear system of equations is then solved by the Newton-Raphson method. Finally, results are presented showing the influence of crack length, crack location, load direction, boundary conditions, and initial imperfection on the nonlinear and post-buckling behavior of laminates.
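The final solution step can be illustrated with a scalar Newton-Raphson iteration; the paper solves a full nonlinear system of equations, so this one-variable version with an invented cubic residual is only a stand-in for the idea:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{k+1} = x_k - f(x_k) / f'(x_k),
    stopping when the residual falls below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# Toy residual with a known root at x = 2: f(x) = x^3 + x - 10
root = newton_raphson(lambda x: x**3 + x - 10.0,
                      lambda x: 3.0 * x**2 + 1.0,
                      x0=1.0)
print(round(root, 6))
```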

Regular Wave Generation Using Three Different Numerical Models under Perfect Reflection Condition and Validation with Experimental Data (세 가지 수치모델을 이용한 완전반사 조건에서의 규칙파 조파 및 수리실험 검증)

  • Oh, Sang-Ho;Ahn, Sukjin
    • Journal of Korean Society of Coastal and Ocean Engineers / v.31 no.4 / pp.199-208 / 2019
  • Regular waves were generated in a wave flume under a perfect reflection condition to evaluate the performance of three CFD models: CADMAS-SURF, olaFlow, and KIOSTFOAM. Experiments and numerical simulations were carried out for three conditions: non-breaking waves, breaking of standing waves, and breaking of incident waves. Among the three CFD models, KIOSTFOAM reproduced the experimental results best. Although run time was reduced by using CADMAS-SURF, its computational accuracy was worse than that of KIOSTFOAM. olaFlow was the fastest model, but its active wave absorption at the wave-generation boundary was not satisfactory, and it excessively dissipated wave energy when wave breaking occurred.
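A building block common to all such regular-wave setups is the linear dispersion relation, which links the imposed wave period to the wavelength in a flume of given depth. A minimal fixed-point solver, not taken from any of the three codes (the period and depth are illustrative):

```python
import math

def wavenumber(T, h, g=9.81, tol=1e-12, max_iter=200):
    """Solve the linear dispersion relation
        omega^2 = g * k * tanh(k * h)
    for the wavenumber k by fixed-point iteration, starting from
    the deep-water guess k = omega^2 / g."""
    omega = 2.0 * math.pi / T
    k = omega * omega / g
    for _ in range(max_iter):
        k_new = omega * omega / (g * math.tanh(k * h))
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k

k = wavenumber(T=6.0, h=10.0)   # 6 s waves in 10 m of water
L = 2.0 * math.pi / k
print(round(L, 2))              # wavelength in metres
```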

Feature-selection algorithm based on genetic algorithms using unstructured data for attack mail identification (공격 메일 식별을 위한 비정형 데이터를 사용한 유전자 알고리즘 기반의 특징선택 알고리즘)

  • Hong, Sung-Sam;Kim, Dong-Wook;Han, Myung-Mook
    • Journal of Internet Computing and Services / v.20 no.1 / pp.1-10 / 2019
  • Since big-data text mining extracts many features from large volumes of data, clustering and classification can suffer from high computational complexity and low reliability of the analysis results. In particular, the term-document matrix obtained through text mining represents term-document features but is a sparse matrix. We designed an advanced genetic algorithm (GA) to select features in text mining for a detection model. Term frequency-inverse document frequency (TF-IDF) is used to reflect the document-term relationships in feature extraction, and a predetermined number of features is selected through an iterative process. In addition, we used a sparsity score to improve the performance of the detection model: when a spam-mail data set is highly sparse, the detection model performs poorly and it is difficult to find an optimal model. We therefore search for a low-sparsity model that also has a high TF-IDF score by using s(F) in the numerator of the fitness function. We verified the approach by applying the proposed algorithm to text classification. As a result, our algorithm shows higher performance (speed and accuracy) in attack-mail classification.
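The TF-IDF weighting used in the feature-extraction step can be sketched in a few lines; the GA selection and the sparsity scoring are not reproduced here, and the toy documents are invented:

```python
import math

def tf_idf(docs):
    """Term frequency-inverse document frequency for a list of
    tokenized documents: tf-idf(t, d) = tf(t, d) * log(N / df(t)),
    with tf normalized by document length."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            w[term] = tf * math.log(n / df[term])
        weights.append(w)
    return weights

docs = [["free", "win", "money"],      # spam-like toy document
        ["meeting", "agenda"],         # ham-like toy document
        ["win", "prize"]]
w = tf_idf(docs)
print(round(w[0]["win"], 4))
```

Terms that appear in every document get weight zero (log of 1), which is exactly why TF-IDF helps discard uninformative features before the GA search begins.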

Plan-Class Specific Reference Quality Assurance for Volumetric Modulated Arc Therapy

  • Rahman, Mohammad Mahfujur;Kim, Chan Hyeong;Kim, Seonghoon
    • Journal of Radiation Protection and Research / v.44 no.1 / pp.32-42 / 2019
  • Background: There have been many efforts to develop proper and realistic machine quality assurance (QA) reflecting real volumetric modulated arc therapy (VMAT) plans. In this work we propose and test a special VMAT plan, the plan-class specific reference (pcsr) QA plan, as a machine QA that may supplement the weak points of present machine QA and make it more realistic for VMAT treatment. Materials and Methods: We divided the human body into 5 treatment sites: brain, head and neck, chest, abdomen, and pelvis. One plan for each treatment site was selected from real VMAT cases, and its contours were mapped into a computational human phantom, where the same plan as the real VMAT plan was created and designated the pcsr QA plan. We delivered this pcsr QA plan on a daily basis over the full research period and tracked how much MLC movement and dosimetric error occurred in regular delivery. Several real patients under treatment were also tracked to test the usefulness of pcsr QA through comparisons. We used the dynalog file viewer (DFV) and Dynalog files to analyze the position and speed of individual MLC leaves. The gamma pass rate from portal dosimetry for different gamma criteria was analyzed to evaluate dosimetric accuracy. Results and Discussion: The maxRMS of MLC position error for all plans was within the tolerance limit of < 0.35 cm, and the positional variation of maxPEs for both pcsr and real plans was very stable over the research session. Daily variations of the maxRMS of MLC speed error and the gamma pass rate for real VMAT plans were comparable to those of their pcsr plans, with acceptable fluctuation. Conclusion: We believe that the newly proposed pcsr QA would be useful and helpful in predicting the mid-term quality of real VMAT treatment delivery.
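The gamma pass rate mentioned above combines a dose-difference tolerance and a distance-to-agreement tolerance. A minimal 1D sketch of the gamma index (the dose profiles and tolerances are illustrative, not the paper's data):

```python
def gamma_index_1d(ref, eval_dose, dx, dose_tol, dist_tol):
    """1D gamma index: for each reference point, the minimum over
    all evaluated points of
        sqrt((dDose/dose_tol)^2 + (dDist/dist_tol)^2).
    A reference point passes if its gamma is <= 1."""
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_ev in enumerate(eval_dose):
            dd = (d_ev - d_ref) / dose_tol
            dr = (j - i) * dx / dist_tol
            best = min(best, (dd * dd + dr * dr) ** 0.5)
        gammas.append(best)
    return gammas

ref = [1.00, 0.95, 0.80]     # toy reference dose profile
ev = [1.01, 0.96, 0.82]      # toy measured dose profile
# 3%/3 mm criterion, 1 mm grid spacing
g = gamma_index_1d(ref, ev, dx=1.0, dose_tol=0.03, dist_tol=3.0)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)
print(pass_rate)
```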

Improvement of Power Consumption of Canny Edge Detection Using Reduction in Number of Calculations at Square Root (제곱근 연산 횟수 감소를 이용한 Canny Edge 검출에서의 전력 소모개선)

  • Hong, Seokhee;Lee, Juseong;An, Ho-Myoung;Koo, Jihun;Kim, Byuncheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.6 / pp.568-574 / 2020
  • In this paper, we propose a method to reduce the square root computation, which has high computational complexity, in the Canny edge detection algorithm. The proposed method reduces the number of gradient-magnitude calculations by exploiting pixel continuity: a specific pattern of "hole" pixels is skipped, and their magnitudes are inferred from neighboring pixels instead of computing a square root at every pixel. Using various test images and varying the number of hole pixels, we measured a match rate of about 97% for one hole, and 94%, 90%, and 88% as the number of holes increased, with computation-time reductions of about 0.2 ms for one hole, and 0.398 ms, 0.6 ms, and 0.8 ms as the number of holes increased. With this method, we expect to implement a low-power embedded vision system with high accuracy and a reduced number of operations using two-hole pixels.
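A sketch of the idea, assuming hole pixels take the average of their horizontal neighbours instead of an exact square root; the hole pattern and data below are illustrative, not the paper's exact scheme:

```python
import math

def magnitudes_with_holes(gx, gy, hole_every=2):
    """Gradient magnitude sqrt(gx^2 + gy^2) computed exactly only at
    non-hole pixels; interior 'hole' pixels reuse the average of
    their two neighbours, exploiting the continuity of natural
    images to avoid one square root per hole."""
    n = len(gx)
    mag = [0.0] * n
    for i in range(n):
        # exact magnitude at non-hole pixels and at the boundaries
        if i % hole_every != 0 or i == 0 or i == n - 1:
            mag[i] = math.hypot(gx[i], gy[i])
    for i in range(1, n - 1):
        # cheap interpolation at the interior hole pixels
        if i % hole_every == 0:
            mag[i] = 0.5 * (mag[i - 1] + mag[i + 1])
    return mag

# Smooth toy gradients: the interpolated holes match the exact values
mag = magnitudes_with_holes([3.0] * 5, [4.0] * 5)
print(mag)
```

On smooth regions the interpolation is exact, which is consistent with the high match rates reported above; accuracy drops only where the gradient changes abruptly between neighbouring pixels.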