• Title/Summary/Keyword: Partitioning method


Domain decomposition technique to simulate crack in nonlinear analysis of initially imperfect laminates

  • Ghannadpour, S. Amir M.; Karimi, Mona
    • Structural Engineering and Mechanics / v.68 no.5 / pp.603-619 / 2018
  • In this research, an effective computational technique is developed for nonlinear and post-buckling analyses of cracked imperfect composite plates. The laminated plates are assumed to be moderately thick, so the analysis can be carried out based on first-order shear deformation theory. Geometric nonlinearity is introduced through the von Kármán assumptions for the strain-displacement equations. The Ritz technique is applied using Legendre polynomials for the primary variable approximations. The crack is modeled by partitioning the entire domain of the plate into several sub-plates, so a plate decomposition technique is implemented in this research. The penalty technique is used to impose interface continuity between the sub-plates. Different out-of-plane essential boundary conditions, such as clamped, simply supported, or free edges, are considered by defining the relevant displacement functions. For the in-plane boundary conditions, lateral expansion of the unloaded edges is completely free, while the loaded edges are assumed to remain straight but are restricted from moving laterally. With the formulation presented here, the plates can be subjected to biaxial compressive loads; therefore, a sensitivity analysis is performed with respect to the applied load direction, parallel or perpendicular to the crack axis. The potential energy integrals are computed numerically using Gauss-Lobatto quadrature formulas to obtain adequate accuracy. The resulting nonlinear system of equations is then solved by the Newton-Raphson method. Finally, results are presented to show the influence of crack length, crack location, load direction, boundary conditions, and different values of initial imperfection on the nonlinear and post-buckling behavior of the laminates.
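The solution pipeline described above ends with Newton-Raphson iteration on the nonlinear algebraic system produced by the Ritz discretization. Below is a minimal, generic sketch of that final step only; the residual and Jacobian are placeholder two-variable examples, not the paper's actual Ritz equations.

```python
import numpy as np

def newton_raphson(residual, jacobian, q0, tol=1e-10, max_iter=50):
    """Solve residual(q) = 0 for the generalized displacement vector q."""
    q = q0.copy()
    for _ in range(max_iter):
        r = residual(q)
        if np.linalg.norm(r) < tol:
            break
        # Solve the linearized system J(q) * dq = -r(q) and update.
        dq = np.linalg.solve(jacobian(q), -r)
        q = q + dq
    return q

# Placeholder 2-DOF nonlinear system standing in for the Ritz equations.
residual = lambda q: np.array([q[0]**3 + q[1] - 1.0,
                               q[1]**3 - q[0] + 1.0])
jacobian = lambda q: np.array([[3 * q[0]**2, 1.0],
                               [-1.0, 3 * q[1]**2]])

print(newton_raphson(residual, jacobian, np.array([0.5, -0.5])))
```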

Quad Tree Based 2D Smoke Super-resolution with CNN (CNN을 이용한 Quad Tree 기반 2D Smoke Super-resolution)

  • Hong, Byeongsun; Park, Jihyeok; Choi, Myungjin; Kim, Changhun
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.105-113 / 2019
  • Physically based fluid simulation takes a long time at high resolution. To address this problem, several studies compensate for the limitations of low-resolution fluid simulation by using deep learning. Among them, super-resolution, which converts low-resolution simulation data to high resolution, is an active line of work. However, traditional techniques process the entire space, including regions with no density data, so they are inefficient in terms of overall simulation speed and cannot fit in GPU memory as the input resolution increases. In this paper, we propose a new method that divides and classifies 2D smoke simulation data in space using the quad tree, one of the spatial partitioning methods, and performs super-resolution only on the required space. This technique accelerates the simulation by computing only the necessary space, and because it processes the divided input data, it also resolves the GPU memory problem.
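A minimal sketch of the spatial partitioning idea described above: a quad tree recursively subdivides the 2D density field, and only leaves that actually contain smoke would be passed on to the super-resolution network. The CNN itself is not reproduced; the occupancy threshold and leaf size below are arbitrary assumptions for illustration.

```python
import numpy as np

def build_quadtree(density, x, y, size, min_size=8, eps=1e-6):
    """Recursively partition a 2D density field; return the leaf tiles that
    contain smoke (only these would be sent to the SR network)."""
    tile = density[y:y + size, x:x + size]
    if tile.max() <= eps:
        return []                      # empty region: skip entirely
    if size <= min_size:
        return [(x, y, size)]          # occupied leaf: upscale this tile
    half = size // 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += build_quadtree(density, x + dx, y + dy, half,
                                     min_size, eps)
    return leaves

# Toy 64x64 smoke field with one small dense blob.
field = np.zeros((64, 64))
field[20:30, 35:45] = 1.0
tiles = build_quadtree(field, 0, 0, 64)
print(f"{len(tiles)} occupied tiles out of {(64 // 8) ** 2}")
```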

Multiview-based Spectral Weighted and Low-Rank for Row-sparsity Hyperspectral Unmixing

  • Zhang, Shuaiyang; Hua, Wenshen; Liu, Jie; Li, Gang; Wang, Qianghui
    • Current Optics and Photonics / v.5 no.4 / pp.431-443 / 2021
  • Sparse unmixing has been proven to be an effective method for hyperspectral unmixing. Hyperspectral images contain rich spectral and spatial information, and making full use of spectral information, spatial information, and enhanced sparsity constraints is the main research direction for improving the accuracy of sparse unmixing. However, many algorithms focus on only one or two of these factors, because it is difficult to construct an unmixing model that considers all three. To address this issue, a novel algorithm called multiview-based spectral weighted and low-rank row-sparsity unmixing is proposed. A multiview data set is generated through spectral partitioning, and spectral weighting is then imposed on it to exploit the abundant spectral information. The row-sparsity approach, which controls sparsity through the l2,0 norm, outperforms the single-sparsity approach in many scenarios. Many algorithms use convex relaxation to handle the l2,0 norm and avoid the NP-hard problem, but this reduces sparsity and unmixing accuracy. In this paper, a row-hard-threshold function is introduced to solve the l2,0 norm directly, which guarantees the sparsity of the results. The high spatial correlation of hyperspectral images is associated with low column rank; therefore, a low-rank constraint is adopted to utilize spatial information. Experiments with simulated and real data show that the proposed algorithm obtains better unmixing results.
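The row-hard-threshold idea mentioned above can be illustrated in isolation: to satisfy an l2,0 constraint on the abundance matrix, keep only the k rows (library endmembers) with the largest l2 norms and zero out the rest. This is a generic sketch, not the paper's full unmixing solver; k and the example matrix are arbitrary.

```python
import numpy as np

def row_hard_threshold(X, k):
    """Keep the k rows of X with the largest l2 norms and zero the others,
    so the result satisfies ||X||_{2,0} <= k (row sparsity)."""
    row_norms = np.linalg.norm(X, axis=1)
    keep = np.argsort(row_norms)[-k:]          # indices of the k largest rows
    Xs = np.zeros_like(X)
    Xs[keep] = X[keep]
    return Xs

# Abundance-like matrix: rows = library endmembers, columns = pixels.
rng = np.random.default_rng(0)
A = rng.random((10, 5)) * (rng.random((10, 1)) > 0.6)  # only a few active rows
A_sparse = row_hard_threshold(A, k=3)
print(np.count_nonzero(np.linalg.norm(A_sparse, axis=1)))  # at most 3 rows survive
```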

A Bi-objective Game-based Task Scheduling Method in Cloud Computing Environment

  • Guo, Wanwan; Zhao, Mengkai; Cui, Zhihua; Xie, Liping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.11 / pp.3565-3583 / 2022
  • The task scheduling problem has received a lot of attention in recent years as a crucial research area in the cloud environment. However, because service providers and users consider different objectives, it is a major challenge to resolve their conflicting interests while still allowing both sides to pursue their respective objectives. Therefore, the task scheduling problem is first formulated as a bi-objective game, and then a task scheduling model based on the bi-objective game (TSBOG) is constructed. In this model, the energy consumption and resource utilization of concern to the service provider and the cost and task completion rate of concern to the user are calculated simultaneously. Furthermore, a many-objective evolutionary algorithm based on a partitioned collaborative selection strategy (MaOEA-PCS) is developed to solve the TSBOG. MaOEA-PCS balances population convergence and diversity by partitioning the objective space and selecting the best-converging individuals from each region into the next generation. To balance the players' multiple objectives, a crossover and mutation operator based on dynamic games is proposed and applied to MaOEA-PCS as the players' strategy update mechanism. Finally, a series of experiments demonstrates not only the effectiveness of the model compared to an ordinary many-objective model, but also the performance of MaOEA-PCS and the validity of the dynamic-game operator (DGame).
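A hedged sketch of the selection idea described for MaOEA-PCS: partition the normalized objective space into regions (here by nearest reference vector, an assumption) and keep the best-converging individual from each region (here the smallest objective sum, also an assumption). The paper's actual operators and the dynamic-game update are not reproduced.

```python
import numpy as np

def partitioned_selection(objectives, ref_vectors):
    """Select one individual per objective-space partition.

    objectives  : (n_individuals, n_objectives) array, minimization assumed
    ref_vectors : (n_regions, n_objectives) unit vectors defining regions
    """
    # Normalize objectives to [0, 1] per dimension.
    f = objectives - objectives.min(axis=0)
    f = f / (f.max(axis=0) + 1e-12)
    # Assign each individual to the region of its nearest reference vector
    # (largest cosine similarity).
    norms = np.linalg.norm(f, axis=1, keepdims=True) + 1e-12
    region = ((f / norms) @ ref_vectors.T).argmax(axis=1)
    # Within each region, keep the best-converging individual, measured
    # here (as a simplification) by the sum of normalized objectives.
    selected = []
    for r in np.unique(region):
        members = np.where(region == r)[0]
        selected.append(members[f[members].sum(axis=1).argmin()])
    return np.array(selected)

# Toy example: 20 individuals, 4 objectives, 6 random reference vectors.
rng = np.random.default_rng(1)
pop = rng.random((20, 4))
refs = rng.random((6, 4))
refs /= np.linalg.norm(refs, axis=1, keepdims=True)
print(partitioned_selection(pop, refs))
```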

A Representative Pattern Generation Algorithm Based on Evaluation And Selection (평가와 선택기법에 기반한 대표패턴 생성 알고리즘)

  • Yih, Hyeong-Il
    • Journal of the Korea Society of Computer and Information / v.14 no.3 / pp.139-147 / 2009
  • Memory-based reasoning stores training patterns, or representative patterns derived from them, in memory and classifies a test pattern by computing its distance to the stored patterns. Because it either stores all training patterns in memory or replaces them with representative patterns, it requires far more memory than other machine-learning techniques, and the time required for classification grows as the number of stored training patterns increases. In this paper, we propose the EAS (Evaluation And Selection) algorithm to minimize memory usage and improve classification performance. After partitioning the training space, each partitioned space is evaluated with the MDL and PM methods. The partitioned space with the best evaluation result is turned into a representative pattern; the remaining partitioned spaces are partitioned again and the evaluation is repeated. We verify the performance of the proposed algorithm using benchmark data sets from the UCI Machine Learning Repository.
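A simplified sketch of the partition-evaluate-select loop described above. Plain class purity stands in for the paper's MDL and PM evaluation measures, median splits stand in for its partitioning rule, and the representative pattern is taken as the partition centroid with its majority label; all three are assumptions made only for illustration.

```python
import numpy as np

def eas_sketch(X, y, n_rounds=4):
    """Partition the training space, evaluate every partition, turn the best
    one into a representative pattern, and re-partition the remainder."""
    def purity(idx):
        # Stand-in for the MDL / PM evaluation: majority-class fraction.
        _, counts = np.unique(y[idx], return_counts=True)
        return counts.max() / counts.sum()

    def bisect(idx):
        # Split a region on the median of its widest feature.
        feat = np.ptp(X[idx], axis=0).argmax()
        thr = np.median(X[idx][:, feat])
        return idx[X[idx][:, feat] <= thr], idx[X[idx][:, feat] > thr]

    representatives, regions = [], [np.arange(len(X))]
    for _ in range(n_rounds):
        parts = [p for r in regions for p in bisect(r) if len(p) > 0]
        if not parts:
            break
        best = max(parts, key=purity)
        labels, counts = np.unique(y[best], return_counts=True)
        representatives.append((X[best].mean(axis=0), labels[counts.argmax()]))
        regions = [p for p in parts if p is not best]
    return representatives

# Toy data: two Gaussian blobs with different labels.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(len(eas_sketch(X, y)), "representative patterns")
```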

Soft computing based mathematical models for improved prediction of rock brittleness index

  • Abiodun I. Lawal; Minju Kim; Sangki Kwon
    • Geomechanics and Engineering / v.33 no.3 / pp.279-289 / 2023
  • The brittleness index (BI) is an important property of rocks because it is a good index for predicting rockburst. Due to its importance, several empirical and soft computing (SC) models have been proposed in the literature based on punch penetration test (PPT) results. These models are important because there is no clear-cut experimental means of measuring BI aside from the PPT, which is costly and time-consuming to perform. This study used multivariate adaptive regression splines (MARS), M5P, and a white-box ANN to predict the BI of rocks from the data available in the literature, for improved BI prediction. The rock density, uniaxial compressive strength (σc), and tensile strength (σt) were used as the input parameters of the models, while the BI was the target output. The models were implemented in MATLAB. The results of the proposed models were compared with those of existing multilinear regression, linear and nonlinear particle swarm optimization (PSO), and genetic algorithm (GA) based models using similar datasets. The coefficient of determination (R2), adjusted R2 (Adj R2), root-mean-square error (RMSE), and mean absolute percentage error (MAPE) were the indices used for the comparison. The comparison revealed that the proposed ANN and MARS models performed better than the other models, with R2 and Adj R2 values above 0.9 and the lowest error values, while M5P performed similarly to the existing models. The weight partitioning method was also used to examine the percentage contribution of the model predictors to the predicted BI, and tensile strength was found to have the highest influence on the predicted BI.
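Weight partitioning for ANN input importance is commonly implemented as Garson's algorithm; a generic sketch for a single-hidden-layer, single-output network is shown below. The paper's trained weights and network size are not available here, so random weights and hypothetical input names stand in.

```python
import numpy as np

def garson_importance(W_ih, W_ho):
    """Garson's weight-partitioning algorithm for a one-hidden-layer ANN.

    W_ih : (n_inputs, n_hidden) input-to-hidden weights
    W_ho : (n_hidden,) hidden-to-output weights (single output)
    Returns the percentage contribution of each input to the output.
    """
    c = np.abs(W_ih) * np.abs(W_ho)          # absolute contribution per connection
    r = c / c.sum(axis=0, keepdims=True)     # share of each input at each hidden node
    s = r.sum(axis=1)                        # total share per input
    return 100.0 * s / s.sum()

# Hypothetical 3-input network (density, sigma_c, sigma_t) with 5 hidden nodes.
rng = np.random.default_rng(3)
W_ih = rng.normal(size=(3, 5))
W_ho = rng.normal(size=5)
for name, pct in zip(["density", "sigma_c", "sigma_t"],
                     garson_importance(W_ih, W_ho)):
    print(f"{name}: {pct:.1f}%")
```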

Hybrid Movie Recommendation System Using Clustering Technique (클러스터링 기법을 이용한 하이브리드 영화 추천 시스템)

  • Sophort Siet; Sony Peng; Yixuan Yang; Sadriddinov Ilkhomjon; DaeYoung Kim; Doo-Soon Park
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.357-359 / 2023
  • This paper proposes a hybrid recommendation system (RS) model that overcomes the limitations of traditional approaches, such as data sparsity, cold start, and scalability, by combining collaborative filtering and context-aware techniques. The objective of this model is to enhance the accuracy of recommendations and provide personalized suggestions by leveraging the strengths of collaborative filtering and incorporating user context features to capture preferences and behavior more effectively. The approach combines contextual attributes with the original user-item rating matrix of CF-based algorithms. Furthermore, we integrate k-means++ clustering to group users with similar preferences and finally recommend items that have been highly rated by other users in the same cluster. Partitioning the rating matrix into clusters based on contextual information offers several advantages. First, it avoids computations over the entire data set, reducing runtime and improving scalability. Second, the partitioned clusters hold similar ratings, which have a greater influence on each other, leading to more accurate recommendations and providing flexibility in the clustering process. Keywords: Context-aware Recommendation, Collaborative Filtering, K-means++ Clustering.
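A minimal sketch of the clustering stage described above, using scikit-learn's k-means++ seeding on a user feature matrix (synthetic ratings concatenated with a synthetic context feature) and recommending the items rated highest inside the target user's cluster. The feature construction and scoring are simplified assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_users, n_items = 200, 50

# Synthetic user-item rating matrix (0 = unrated) and a context feature.
ratings = rng.integers(0, 6, size=(n_users, n_items)) * (rng.random((n_users, n_items)) < 0.2)
context = rng.integers(0, 3, size=(n_users, 1))   # e.g. a time-of-day bucket

# Cluster users on ratings + context with k-means++ initialization.
features = np.hstack([ratings, context])
labels = KMeans(n_clusters=5, init="k-means++", n_init=10,
                random_state=0).fit_predict(features)

def recommend(user, top_n=5):
    """Recommend the items rated highest by the user's cluster that the
    user has not rated yet."""
    peers = ratings[labels == labels[user]]
    scores = np.nanmean(np.where(peers > 0, peers, np.nan), axis=0)
    scores = np.nan_to_num(scores, nan=-np.inf)   # items unrated by the whole cluster
    scores[ratings[user] > 0] = -np.inf           # drop items the user already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user=0))
```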

A Case Study on the Fractional Sense and Fraction Operation Ability of Elementary Gifted Class Students (초등 영재학급 학생의 분수 감각과 분수 조작 능력 사례연구)

  • Hae Gyu, Kim; Hosoo Lee; Keunbae Choi
    • East Asian mathematical journal / v.40 no.2 / pp.183-207 / 2024
  • This study is a case study of the fraction sense and fraction operation ability of 107 students in elementary gifted classes. To examine fraction sense, the first question asked students to compare the sizes of the fractions 2/3 and 4/5; the students showed a variety of strategies, but the usage rate of strategies other than reduction to a common denominator did not exceed 50%. The second question, which can be solved by using the first, asks students to select four of the six numbers 1, 3, 4, 5, 6, and 7 to create two fractions whose sum does not exceed 1. The percentage of correct answers to this question was about 27% (29 out of 107). Only 5 of the 29 students found answers by using the first question; the rest found answers through trial and error over various calculations. This shows that arranging the items from a deductive perspective has no significant effect on elementary school students. On the question assessing fraction operation ability (drawing a bar of size 4/3 using a given bar of size 3/8), the percentage of correct answers was also about 27%, and 30.7% (23 out of 75) of the students who answered incorrectly showed an insufficient splitting operation. In addition, it was shown that the students' operations of partitioning and iterating, which form number sense and fraction concepts, have no significant impact.

Study on Evaluation Method of Task-Specific Adaptive Differential Privacy Mechanism in Federated Learning Environment (연합 학습 환경에서의 Task-Specific Adaptive Differential Privacy 메커니즘 평가 방안 연구)

  • Assem Utaliyeva; Yoon-Ho Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.1 / pp.143-156 / 2024
  • Federated Learning (FL) has emerged as a potent methodology for decentralized model training across multiple collaborators, eliminating the need for data sharing. Although FL is lauded for its capacity to preserve data privacy, it is not impervious to various types of privacy attacks. Differential Privacy (DP), recognized as the gold standard among privacy-preservation techniques, is widely employed to counteract these vulnerabilities. This paper makes a specific contribution by applying an existing task-specific adaptive DP mechanism to the FL environment. Our comprehensive analysis evaluates the impact of this mechanism on the performance of a shared global model, with particular attention to varying data distribution and partitioning schemes. This study deepens the understanding of the complex interplay between privacy and utility in FL, providing a validated methodology for securing data without compromising performance.
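For context on how DP is typically injected in FL, here is a generic sketch of clipping each client's model update and adding Gaussian noise before server aggregation. The task-specific adaptive mechanism evaluated in the paper is not reproduced; the clip norm and noise multiplier below are arbitrary illustrative values.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each client's update to clip_norm (l2) and add Gaussian noise
    calibrated to the clip norm before averaging (generic DP-FL pattern)."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    stacked = np.stack(clipped)
    # Noise is added once to the clipped sum, then the result is averaged.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=stacked.shape[1:])
    return (stacked.sum(axis=0) + noise) / len(client_updates)

# Toy example: 10 clients, 100-dimensional model updates.
rng = np.random.default_rng(5)
updates = [rng.normal(size=100) for _ in range(10)]
print(np.linalg.norm(dp_aggregate(updates, rng=rng)))
```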

Efficient Policy for ECC Parity Storing of NAND Flash Memory (낸드플래시 메모리의 효율적인 ECC 패리티 저장 방법)

  • Kim, Seokman; Oh, Minseok; Cho, Kyoungrok
    • The Journal of the Korea Contents Association / v.16 no.10 / pp.477-482 / 2016
  • This paper presents a new method of storing ECC (error-correcting code) parity in SSDs (solid-state drives) and a suitable controller structure. In conventional use of NAND flash memory, a page is partitioned into a data area and a spare area, and the ECC parity is stored in the spare area. This method incurs area and timing overhead because the page memory is accessed discontinuously. This paper proposes a new parity-storing policy that reduces the overhead and the read/write timing by using the whole page area continuously, without partitioning. We analyzed the overhead and the read/write timing. As a result, the proposed parity-storing policy has 13.6% less read access time than the conventional policy with a 16 KB page size. For a 4 GB video file transfer, it takes about a minute less than the conventional policy. Because the read operation is a key function in SSDs, this will enhance system performance.
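To make the layout difference concrete, here is a small, purely illustrative calculator comparing a conventional policy (each codeword's parity placed in the page's spare area) with a continuous policy (data and parity stored back to back across the whole page). The page, sector, and parity sizes are hypothetical assumptions, not values taken from the paper.

```python
# Hypothetical geometry: 16 KiB data area + 1 KiB spare per page,
# 2 KiB sectors, 128 B of ECC parity per sector (illustrative only).
PAGE_DATA, PAGE_SPARE = 16 * 1024, 1024
SECTOR, PARITY = 2 * 1024, 128
SECTORS_PER_PAGE = PAGE_DATA // SECTOR

def conventional_offsets():
    """Parity for sector i lives in the spare area, after all the data."""
    return [(i * SECTOR, PAGE_DATA + i * PARITY) for i in range(SECTORS_PER_PAGE)]

def continuous_offsets():
    """Data and parity are stored back to back across the whole page area."""
    unit = SECTOR + PARITY
    return [(i * unit, i * unit + SECTOR) for i in range(SECTORS_PER_PAGE)]

for i, ((d1, p1), (d2, p2)) in enumerate(zip(conventional_offsets(),
                                             continuous_offsets())):
    print(f"sector {i}: conventional data@{d1:6d} parity@{p1:6d} | "
          f"continuous data@{d2:6d} parity@{p2:6d}")
```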