• Title/Summary/Keyword: Computation reduction


Probabilistic stability analysis of rock slopes with cracks

  • Zhu, J.Q.; Yang, X.L.
    • Geomechanics and Engineering, v.16 no.6, pp.655-667, 2018
  • To evaluate the stability of a rock slope with one pre-existing vertical crack, this paper performs a corresponding probabilistic stability analysis. The existence of cracks is generally ignored in traditional deterministic stability analysis, yet they are widely found in both cohesive soil and rock slopes. The influence of one pre-existing vertical crack on a rock slope is considered in this study. The safety factor, which is usually adopted to quantify the stability of slopes, is derived through deterministic computation based on the strength reduction technique. The generalized Hoek-Brown (HB) failure criterion is adopted to characterize the failure of rock masses. Considering the high nonlinearity of the limit state function when the nonlinear HB criterion is used, multivariate adaptive regression splines (MARS) are used to accurately approximate the implicit limit state function of a rock slope. MARS is then integrated with Monte Carlo simulation to implement the reliability analysis, and the influences of distribution types, level of uncertainty, and constants on the probability density functions and failure probability are discussed. It is found that the distribution types of the random variables have little influence on the reliability results. The reliability results are affected by a combination of the uncertainty level and the constants. Finally, a reliability-based design figure is provided to evaluate the safety factor of a slope required for a target failure probability.
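
The MARS-plus-Monte-Carlo workflow described above is easy to sketch: fit a cheap surrogate to a handful of expensive safety-factor evaluations, then sample it heavily to estimate P(FS < 1). Below is a minimal sketch, with a least-squares polynomial standing in for MARS and a hypothetical closed-form safety factor; every constant is a placeholder, not a value from the paper.

```python
# Minimal sketch of surrogate-based Monte Carlo reliability analysis.
# safety_factor() is a hypothetical closed-form stand-in for the
# strength-reduction FS computation, and a least-squares polynomial
# stands in for the MARS surrogate; all constants are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def safety_factor(c, phi):
    # Hypothetical FS as a function of cohesion c and friction angle phi.
    return 0.02 * c + 0.03 * phi

# Fit the surrogate on a small design of experiments.
X = rng.uniform([10.0, 20.0], [40.0, 50.0], size=(200, 2))
y = safety_factor(X[:, 0], X[:, 1])
coeffs = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]

# Monte Carlo simulation on the cheap surrogate: P(FS < 1).
N = 1_000_000
samples = rng.normal([25.0, 35.0], [4.0, 5.0], size=(N, 2))
fs_hat = np.c_[np.ones(N), samples] @ coeffs
print(f"estimated failure probability: {np.mean(fs_hat < 1.0):.2e}")
```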

Goal-oriented multi-collision source algorithm for discrete ordinates transport calculation

  • Wang, Xinyu; Zhang, Bin; Chen, Yixue
    • Nuclear Engineering and Technology, v.54 no.7, pp.2625-2634, 2022
  • Discretization errors are an extremely challenging problem in discrete ordinates calculations for radiation transport problems with void regions. In previous work, we presented a multi-collision source (MCS) method to overcome discretization errors, but its efficiency needs to be improved. This paper proposes a goal-oriented algorithm for the MCS method that adaptively determines the partitioning of the geometry and dynamically changes the angular quadrature in the remaining iterations. An importance factor based on an adjoint transport calculation provides the response function used to obtain a problem-dependent, goal-oriented spatial decomposition. The difference in the scalar fluxes between a high-order quadrature set and a lower one provides the error estimate that drives the dynamic quadrature. The goal-oriented algorithm allows optimization by using ray-tracing technology or high-order quadrature sets in the first few iterations and arranging the integration order of the remaining iterations from high to low. The algorithm has been implemented in the 3D transport code ARES and was tested on the Kobayashi benchmarks. The numerical results show a reduction in computation time on these problems for the same desired level of accuracy compared to the standard ARES code, and clear advantages over the traditional MCS method in solving radiation transport problems with reflective boundary conditions.
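
The driving idea, comparing scalar fluxes from a high-order quadrature against lower ones to decide how far the order can be dropped, can be sketched independently of any transport code. The sweep below is a toy stand-in, not ARES; only the order-selection logic mirrors the description above.

```python
# Toy sketch of dynamic quadrature selection: estimate the angular
# discretization error as the flux difference between a high-order
# quadrature and a lower one, then keep the cheapest acceptable order.
# sweep() is a hypothetical stand-in, not a real transport sweep.
import numpy as np

def sweep(order, source):
    # Dummy "transport sweep": the error shrinks as 1/order**2.
    return source * (1.0 + 1.0 / order**2)

def pick_order(orders, source, tol):
    # Use the highest order as the reference, then walk the orders
    # from low to high and keep the first one within tolerance.
    ref = sweep(max(orders), source)
    for n in sorted(orders):
        if np.max(np.abs(sweep(n, source) - ref) / ref) < tol:
            return n
    return max(orders)

source = np.ones(64)
print(pick_order([4, 8, 16, 32], source, tol=5e-3))  # -> 16
```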

An Efficient Service Function Chains Orchestration Algorithm for Mobile Edge Computing

  • Wang, Xiulei; Xu, Bo; Jin, Fenglin
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.12, pp.4364-4384, 2021
  • The dynamic network state and the mobility of terminals make service function chain (SFC) orchestration mechanisms based on static and deterministic assumptions difficult to apply in SDN/NFV mobile edge computing networks. Designing a dynamic, online SFC orchestration mechanism can greatly improve the execution efficiency of compute-intensive and resource-hungry applications in mobile edge computing networks. In order to increase the overall profit of the service provider and reduce the resource cost, the system running time is divided into a sequence of time slots and a dynamic orchestration scheme based on an improved column generation algorithm is proposed for each slot. First, the SFC dynamic orchestration problem is formulated as an integer linear programming (ILP) model based on a layered graph. Then, to reduce the computation cost, a column generation model is used to simplify the ILP model. Finally, a two-stage heuristic algorithm based on a greedy strategy is proposed. Four metrics are defined, and the performance of the proposed algorithm is evaluated through simulation. The results show that our proposal provides more than a 30% reduction in run time and about a 12% improvement in service deployment success ratio compared to the Viterbi-algorithm-based mechanism.
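
The greedy stage of such a heuristic can be sketched compactly: map each VNF of a chain onto the cheapest edge node with enough remaining capacity, rejecting the request if none exists. Node names, capacities, and costs below are hypothetical, and the sketch omits the ILP and column generation stages entirely.

```python
# Minimal sketch of a greedy SFC placement pass (a stand-in for the
# paper's two-stage heuristic; all nodes and numbers are hypothetical).
def place_chain(chain, nodes):
    """Greedily map each VNF in the chain to the cheapest feasible node."""
    placement = {}
    for vnf, demand in chain:
        feasible = [n for n in nodes if nodes[n]["cap"] >= demand]
        if not feasible:
            return None  # reject the request in this time slot
        best = min(feasible, key=lambda n: nodes[n]["cost"])
        nodes[best]["cap"] -= demand
        placement[vnf] = best
    return placement

edge_nodes = {
    "edge1": {"cap": 8, "cost": 1.0},
    "edge2": {"cap": 4, "cost": 0.5},
}
print(place_chain([("fw", 2), ("nat", 3), ("ids", 4)], edge_nodes))
```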

Numerical data-driven machine learning model to predict the strength reduction of fire damaged RC columns

  • HyunKyoung Kim; Hyo-Gyoung Kwak; Ju-Young Hwang
    • Computers and Concrete, v.32 no.6, pp.625-637, 2023
  • This paper introduces the application of ML approaches to determining the resisting capacity of fire-damaged RC columns, on the basis of analysis-data-driven ML modeling. Considering the characteristics of the structural behavior of fire-damaged RC columns, five representative approaches are adopted and applied: kernel SVM, ANN, RF, XGB, and LGBM. Partial monotonic constraints are additionally adopted in the modeling to ensure that the predicted resisting capacity of an RC column decreases monotonically with fire exposure time. Further measures are also taken to mitigate the heterogeneous composition of the training data. Determining the resisting capacity of a fire-damaged RC column conventionally requires many complex solution procedures, from heat transfer analysis to rigorous nonlinear analyses and their repetition over time; since ML approaches significantly reduce this computation time, they can be used more effectively in large complex structures with many RC members. Because very little experimental data is available, the training data are determined analytically from a heat transfer analysis and a subsequent nonlinear finite element (FE) analysis, whose accuracy was previously verified through a correlation study between the numerical results and experimental data. The results show that the resisting capacity of fire-damaged RC columns can be predicted effectively by ML approaches.
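
The partial monotonic constraint mentioned above maps naturally onto the monotone_constraints option offered by gradient-boosting libraries such as LightGBM and XGBoost. A minimal sketch with LightGBM and synthetic data follows; all feature names and values are placeholders, not the paper's dataset.

```python
# Minimal sketch of a partial monotonic constraint in gradient boosting,
# assuming LightGBM is installed; features, data, and constants are
# synthetic placeholders, not the paper's analysis data.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n = 500
exposure = rng.uniform(0, 180, n)    # fire exposure time [min]
width = rng.uniform(300, 600, n)     # column width [mm]
capacity = 2000 + 2.5 * width - 6.0 * exposure + rng.normal(0, 50, n)

model = lgb.LGBMRegressor(
    n_estimators=300,
    # -1: predicted capacity must not increase with exposure time;
    #  0: the width feature is left unconstrained.
    monotone_constraints=[-1, 0],
)
model.fit(np.c_[exposure, width], capacity)
# The prediction at 120 min should not exceed the one at 30 min.
print(model.predict([[30.0, 450.0], [120.0, 450.0]]))
```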

A Tree-structured XPath Query Reduction Scheme for Enhancing XML Query Processing Performance

  • Lee, Min-Soo; Kim, Yun-Mi; Song, Soo-Kyung
    • The KIPS Transactions: Part D, v.14D no.6, pp.585-596, 2007
  • XML data generally has a hierarchical tree structure, which is reflected in the mechanisms used to store and retrieve XML data. Therefore, when storing XML data in a database, the hierarchical relationships among the XML elements are taken into consideration during the restructuring and storing of the data. Also, in order to support user search queries, a mechanism is needed to compute the hierarchical relationships between the element structures specified by the query. The structural join operation is one solution to this problem and is an efficient method for computing hierarchical relationships in a database based on the node numbering scheme. However, processing a tree-structured XML query that contains complex nested hierarchical relationships still requires multiple structural joins, which in turn leads to high query execution cost. Therefore, in this paper we provide a preprocessing mechanism that effectively reduces the cost of multiple nested structural joins by applying the concept of equivalence classes, and we suggest a query path reduction algorithm to shorten path queries expressed as regular expressions. The mechanism is especially devised to reduce path queries containing branch nodes. The experimental results show that the proposed algorithm can reduce the time required for processing path queries to 1/3 of the original execution time.
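
A toy illustration of the equivalence-class idea: when consecutive steps of a path query fall into the same structural equivalence class, an intermediate step adds no selectivity and one structural join can be saved. The class map below is a hypothetical placeholder, and the real algorithm also handles branch nodes, which this sketch does not.

```python
# Toy sketch of path-query reduction via equivalence classes; the
# class assignments are hypothetical, not derived from real XML data.
def reduce_path(steps, eq_class):
    reduced = [steps[0]]
    for step in steps[1:]:
        if eq_class[step] == eq_class[reduced[-1]]:
            reduced[-1] = step  # same class: collapse into one step
        else:
            reduced.append(step)
    return reduced

eq_class = {"book": 1, "chapter": 1, "section": 2, "title": 3}
print(reduce_path(["book", "chapter", "section", "title"], eq_class))
# ['chapter', 'section', 'title'] -> one structural join saved
```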

Improved Trajectory Calculation on the Semi-Lagrangian Advection Computation

  • Park, Su-Wan; Baek, Nak-Hoon; Ryu, Kwan-Woo
    • The KIPS Transactions: Part A, v.16A no.6, pp.419-426, 2009
  • To realistically simulate fluids, the Navier-Stokes equations are generally used. When solving the Navier-Stokes equations in an Eulerian framework, the non-linear advection terms require heavy computation, so semi-Lagrangian methods are used as an approximate way of solving them. In semi-Lagrangian methods, the locations of advection sources are traced and the physical values at the traced locations are interpolated. Stam's method is relatively prone to numerical losses, and thus there have been efforts to correct these numerical errors. Most of them have focused on the numerical interpolation process, in some cases simultaneously using particle-based methods. In this paper, we propose a new approach that reduces the numerical losses by improving the tracing method during the advection calculation, without any modification of the Eulerian framework itself. In our method, we trace the grid points whose velocities will move them to the current target position, in contrast to previous approaches, which use the velocities at the current target positions. Intuitively, we adopt the simple physical observation that the physical quantities at a specific position will be moved to a new location by the current velocity. Our method shows a reasonable reduction in numerical losses during smoke simulations, and finally achieves real-time processing with enhanced realism.
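
A one-dimensional sketch of the contrast between the two tracing strategies, assuming toy velocity and quantity fields: the standard Stam backtrace samples the velocity at the target, while the paper's idea is to find the grid point whose own velocity carries its quantity onto the target.

```python
# 1D toy comparison of the two tracing strategies; the velocity and
# advected fields are synthetic and the forward match is brute force.
import numpy as np

n, dt, dx = 64, 0.1, 1.0
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x / (n * dx))    # toy velocity field
q = np.exp(-((x - 20.0) ** 2) / 25.0)   # toy advected quantity

# Standard Stam-style backtrace: source = target - dt * u(target).
q_back = np.interp(x - dt * u, x, q)

# Forward-looking variant (the paper's spirit): for each target, find
# the grid point whose own velocity moves it onto that target, and
# take the quantity from there.
dest = x + dt * u                        # where each grid point lands
nearest = np.abs(dest[None, :] - x[:, None]).argmin(axis=1)
q_fwd = q[nearest]

print("max difference between traces:", np.max(np.abs(q_back - q_fwd)))
```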

An effective transform hardware design for real-time HEVC encoder

  • Jo, Heung-seon; Kumi, Fred Adu; Ryoo, Kwang-ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2015.10a, pp.416-419, 2015
  • In this paper, we propose an efficient transform hardware design for a real-time HEVC (High Efficiency Video Coding) encoder. An HEVC encoder determines the transform mode (4×4, 8×8, 16×16, 32×32) by comparing RDCost. RDCost requires a significant amount of computation and time because it is determined by the bit-rate and distortion, which are computed via transform, quantization, dequantization, and inverse transform. This paper therefore proposes a new method for transform mode determination using the sum of the transform coefficients. The proposed hardware architecture is implemented with only multiplexers, recursive adders/subtracters, and shifters to reduce the computation. Compared with HM 10.0, the proposed method for transform mode determination results in an increase of 0.096 in BD-PSNR, an increase of 0.057 in BD-Bitrate, and a decrease of 9.3% in encoding time. The proposed hardware is implemented with 256K logic gates in a TSMC 130 nm process. Its maximum operating frequency is 200 MHz. At 140 MHz, the proposed hardware can support 4K Ultra HD video encoding at 60 fps in real time.
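
The low-cost mode decision can be sketched by ranking candidate transform sizes by the sum of absolute transform coefficients, skipping the full transform/quantize/dequantize/inverse RD chain. The floating-point DCT below is a stand-in for the HEVC integer transforms, and the decision rule is a simplification of the paper's.

```python
# Toy sketch of mode decision by summed transform coefficients; the
# floating-point DCT stands in for the HEVC integer transforms.
import numpy as np
from scipy.fft import dct

def coeff_cost(block):
    # 2D DCT, then the sum of absolute coefficients as a cheap cost.
    c = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
    return np.sum(np.abs(c))

rng = np.random.default_rng(0)
ctu = rng.integers(0, 256, (32, 32)).astype(float)

costs = {}
for size in (4, 8, 16, 32):
    nb = 32 // size
    blocks = ctu.reshape(nb, size, nb, size)
    costs[size] = sum(coeff_cost(blocks[i, :, j, :])
                      for i in range(nb) for j in range(nb))
print("selected transform size:", min(costs, key=costs.get))
```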

A Feature Selection for the Recognition of Handwritten Characters based on Two-Dimensional Wavelet Packet

  • Kim, Min-Soo; Back, Jang-Sun; Lee, Guee-Sang; Kim, Soo-Hyung
    • Journal of KIISE: Software and Applications, v.29 no.8, pp.521-528, 2002
  • We propose a new approach to feature selection for the classification of handwritten characters using two-dimensional (2D) wavelet packet bases. To extract the key features of image data, principal component analysis (PCA) has been the most frequently used method for dimension reduction. However, since PCA relies on an eigenvalue system, it is not only sensitive to outliers and perturbations but also tends to select only global features. Since the important features of image data are often characterized by local information such as edges and spikes, PCA does not provide good solutions to such problems. Solving an eigenvalue system is also computationally expensive. In this paper, the original data is transformed with 2D wavelet packet bases, the best discriminant basis is searched for, and the relevant features are selected from it. In contrast to PCA solutions, the fast selection of detailed features as well as global features is possible by virtue of the good properties of wavelets. Experimental results comparing the recognition rates of PCA and our approach show the performance of the proposed method.
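
A minimal sketch of extracting features from a 2D wavelet packet decomposition, assuming PyWavelets is available. For brevity the best-basis search is reduced to keeping the highest-energy subbands, whereas the paper selects the basis by class-discriminant power across training data.

```python
# Sketch of 2D wavelet packet feature extraction with PyWavelets; the
# image is random stand-in data and "best basis" is reduced to the
# highest-energy subbands for brevity.
import numpy as np
import pywt

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))   # stand-in for a character image

wp = pywt.WaveletPacket2D(data=img, wavelet="db2", maxlevel=2)
nodes = wp.get_level(2)           # all subbands at depth 2

# Score each subband by energy and keep the top four as features; the
# paper instead scores subbands by class-discriminant power.
scored = sorted(nodes, key=lambda nd: np.sum(nd.data ** 2), reverse=True)
features = np.concatenate([nd.data.ravel() for nd in scored[:4]])
print([nd.path for nd in scored[:4]], features.shape)
```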

Iterative Generalized Hough Transform using Multiresolution Search

  • ;W. Nick Street
    • Journal of KIISE: Software and Applications, v.30 no.10, pp.973-982, 2003
  • This paper presents an efficient method for automatically detecting objects in a given image. The generalized Hough transform (GHT) is a robust template matching algorithm for automatic object detection; many different templates are applied by the GHT in order to find objects of various shapes and sizes. Every boundary detected by the GHT can be used as an initial outline for more precise contour-finding techniques. The main weakness of the GHT is its excessive time and memory requirements. To overcome this drawback, the proposed algorithm uses a multiresolution search, scaling down the original image to half-sized and quarter-sized images. Using the information from the first iterative GHT on the quarter-sized image, the range of nucleus sizes is determined to limit the parameter space of the half-sized image. After the second iterative GHT on the half-sized image, nuclei are detected by a fine search and segmented with edge information, which helps determine the exact boundary. The experimental results show that this method reduces computation time and memory usage without loss of accuracy.
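
The coarse-to-fine control flow is independent of the GHT internals and can be sketched as follows; detect() is a hypothetical stand-in returning a dummy response, so only the range-narrowing logic reflects the method described above.

```python
# Sketch of coarse-to-fine parameter narrowing across resolutions;
# detect() is a hypothetical stand-in for an iterative GHT pass.
import numpy as np

def detect(image, radius_range):
    # Dummy GHT: pretend the best circle sits at the image center with
    # a radius in the middle of the allowed range.
    r = sum(radius_range) / 2
    return image.shape[0] // 2, image.shape[1] // 2, r

image = np.zeros((256, 256))
quarter, half = image[::4, ::4], image[::2, ::2]

_, _, r_q = detect(quarter, (2, 16))                     # coarse pass
_, _, r_h = detect(half, (2 * r_q - 2, 2 * r_q + 2))     # narrowed range
row, col, r = detect(image, (2 * r_h - 2, 2 * r_h + 2))  # fine pass
print(row, col, r)
```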

VLSI Design for Folded Wavelet Transform Processor using Multiple Constant Multiplication

  • Kim, Ji-Won; Son, Chang-Hoon; Kim, Song-Ju; Lee, Bae-Ho; Kim, Young-Min
    • Journal of Korea Multimedia Society, v.15 no.1, pp.81-86, 2012
  • This paper presents a VLSI design for a lifting-based discrete wavelet transform (DWT) 9/7 filter using a multiplierless multiple constant multiplication (MCM) architecture. The proposed design is based on the lifting scheme, using pattern search for a folded architecture. Shift-add operations are adopted to optimize the multiplication process. The conventional serial operations of the lifting data flow can be optimized into parallel ones by employing parallelization and pipelining techniques. This optimized design has a simple hardware architecture and requires less computation without performance degradation. Furthermore, hardware utilization reaches 100%, and the number of registers required is significantly reduced. To compare our work with previous methods, we implemented the architecture in Verilog HDL. We also ran simulations based on logic synthesis using 0.18 μm CMOS standard cells. The proposed architecture shows hardware reductions of up to 60.1% and 44.1%, respectively, at a 200 MHz clock compared to previous works. These implementation results indicate that the proposed design is efficient in hardware cost, area, and power consumption.
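
The MCM idea reduces each constant multiplication to a few shifts and adds. As a sketch, the first 9/7 lifting constant alpha ≈ -1.586134 can be approximated by five power-of-two terms; the fixed-point format below is an arbitrary choice for illustration, not the paper's word length.

```python
# Minimal sketch of multiplierless constant multiplication (MCM): the
# first lifting coefficient of the 9/7 filter is approximated by a
# short sum of power-of-two terms, so each product becomes shifts and
# adds; the 12-bit fixed-point format is an arbitrary illustration.
ALPHA = -1.586134342  # first 9/7 lifting constant

def mul_alpha(x: int) -> int:
    # -(x + x>>1 + x>>4 + x>>6 + x>>7) ~= ALPHA * x, since
    # -(1 + 2**-1 + 2**-4 + 2**-6 + 2**-7) = -1.5859375.
    return -(x + (x >> 1) + (x >> 4) + (x >> 6) + (x >> 7))

x = 1 << 12  # the sample 1.0 in fixed point (12 fractional bits)
print(mul_alpha(x) / (1 << 12), ALPHA)  # -1.5859375 vs -1.586134342
```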