• Title/Summary/Keyword: dynamic partitioning

Search Results: 100

Dynamic Linking System Using Related Web Documents Classification and Users' Browsing Patterns (연관 웹 문서 분류와 사용자 브라우징 패턴을 이용한 동적 링킹 시스템)

  • Park, Young-Kyu;Kim, Jin-Su;Kim, Tae-Yong;Lee, Jung-Hyun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2000.10a
    • /
    • pp.305-308
    • /
    • 2000
  • Static hypertext linking based on the subjective judgment of a website designer has the drawback of presenting the same links to every user. To address this problem, several dynamic linking systems have been proposed that provide each user with dynamic links to web documents suited to his or her browsing patterns. However, most dynamic linking systems generate dynamic links using only the stored pattern most similar to the user's current browsing pattern, and therefore frequently present links to unrelated web documents. In this paper, we analyze users' browsing patterns from the web server log files using web mining, an application of data mining, and classify related web documents using the Association Rule Hypergraph Partitioning (ARHP) algorithm, which is well suited to multidimensional data sets. A primary link set to recommend to the user is generated from the browsing pattern information, and a secondary link set is generated from the related-document information. By dynamically recommending only the links contained in both sets, the proposed dynamic linking system lets users browse a website more conveniently and accurately (a minimal sketch of this intersection step follows this entry).

  • PDF
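
The core recommendation step described above, intersecting a link set derived from the user's browsing pattern with a link set derived from document associations, can be illustrated with a short sketch. The following Python is a hypothetical illustration only; the function and data names are not from the paper:

```python
# Hypothetical sketch of the recommendation step described above: take the links
# suggested by the user's browsing pattern, the links suggested by document
# association (e.g. ARHP-style clusters), and recommend only their intersection.

def recommend_links(pattern_links, cluster_of, current_page):
    """pattern_links: pages suggested by the closest browsing pattern.
    cluster_of: page -> set of associated pages (from association-rule clustering)."""
    associated = cluster_of.get(current_page, set())     # secondary link set
    return sorted(set(pattern_links) & associated)       # primary ∩ secondary

clusters = {"/products": {"/products/specs", "/support", "/order"}}
pattern = ["/order", "/news", "/products/specs"]
print(recommend_links(pattern, clusters, "/products"))
# ['/order', '/products/specs'] -> only related pages are recommended
```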

A Web Server Load Balancing Mechanism for Supporting Service Level Agreement (SLA를 지원하는 웹 서버 부하 분산 기법)

  • Go Hyeon-Joo;Park Kie-Jin;Park Mi-Sun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.8
    • /
    • pp.505-513
    • /
    • 2006
  • To satisfy the SLA (Service Level Agreement) contract between a client and a service provider, client requests are classified and processed by priority. In providing differentiated service, a request from a low-priority client can be treated as less important. In this paper, we study static and dynamic partitioning mechanisms for web servers, together with an admission control policy for multiclass request distribution. Through simulation experiments, we analyze web server throughput and response time with respect to the SLA.
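
As a rough illustration of the kind of dynamic partitioning and admission control the abstract refers to (the paper itself gives no implementation), the sketch below splits a server pool across priority classes in proportion to their arrival rates and rejects requests once a class's partition is saturated. The class names, the proportional-share policy, and the queue threshold are all assumptions:

```python
# Hypothetical sketch: split a web server pool across priority classes in
# proportion to their arrival rates, and reject requests once a class's
# partition is saturated. Class names, policy, and thresholds are invented.

PRIORITY = ["gold", "silver", "bronze"]        # assumed classes, highest first

def repartition(total_servers, arrival_rates, min_share=1):
    """Assign servers to each class in proportion to its arrival rate."""
    total_rate = sum(arrival_rates.values()) or 1.0
    shares = {c: max(min_share, round(total_servers * r / total_rate))
              for c, r in arrival_rates.items()}
    # Trim any rounding overshoot, starting from the lowest-priority class.
    for c in reversed(PRIORITY):
        while sum(shares.values()) > total_servers and shares[c] > min_share:
            shares[c] -= 1
    return shares

def admit(request_class, queue_length, shares, max_queue_per_server=10):
    """Simple admission control: reject when the class partition is saturated."""
    return queue_length < shares[request_class] * max_queue_per_server

shares = repartition(16, {"gold": 40.0, "silver": 25.0, "bronze": 10.0})
print(shares)                       # {'gold': 9, 'silver': 5, 'bronze': 2}
print(admit("bronze", 25, shares))  # False: the bronze partition is saturated
```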

Parallel Finite Element Simulation of the Incompressible Navier-stokes Equations (병렬 유한요소 해석기법을 이용한 유동장 해석)

  • Choi H. G.;Kim B. J.;Kang S. W.;Yoo J. Y.
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference
    • /
    • 2002.05a
    • /
    • pp.8-15
    • /
    • 2002
  • For the large-scale computation of turbulent flows around an arbitrarily shaped body, a parallel LES (large eddy simulation) code has recently been developed in which a domain decomposition method is adopted. The METIS and MPI (Message Passing Interface) libraries are used for domain partitioning and for data communication between processors, respectively. For unsteady computation of the incompressible Navier-Stokes equations, a 4-step splitting finite element algorithm [1] is adopted, and either the Smagorinsky or the dynamic LES model can be chosen for the modeling of small eddies in turbulent flows. For the validation and performance estimation of the parallel code, a three-dimensional laminar flow generated by natural convection inside a cube has been solved. Then, we have solved the turbulent flow around the MIRA (Motor Industry Research Association) model at $Re = 2.6\times10^6$, based on the model height and inlet free-stream velocity, using 32 processors on an IBM SMP cluster, and compared the results with the existing experiment. (A small sketch of the domain-partitioning bookkeeping involved follows this entry.)

  • PDF
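
The abstract mentions METIS-based domain partitioning with MPI communication between subdomains. As a hypothetical illustration of the bookkeeping such a decomposition requires (not the authors' code, and without the actual METIS or MPI calls), the sketch below finds the interface nodes that neighboring subdomains would have to exchange:

```python
# Hypothetical sketch of domain-decomposition bookkeeping: given a partition of
# mesh elements over processors, find the interface nodes that each pair of
# subdomains must exchange. The element/node numbering is invented.

from collections import defaultdict

def interface_nodes(element_nodes, element_part):
    """Map (part_a, part_b) -> set of node ids shared across the cut."""
    node_parts = defaultdict(set)
    for elem, nodes in element_nodes.items():
        for n in nodes:
            node_parts[n].add(element_part[elem])
    shared = defaultdict(set)
    for n, parts in node_parts.items():
        if len(parts) > 1:
            for a in parts:
                for b in parts:
                    if a < b:
                        shared[(a, b)].add(n)
    return shared

# Toy 4-element strip split between two subdomains (partition 0 and 1).
elements = {0: (0, 1, 5, 4), 1: (1, 2, 6, 5), 2: (2, 3, 7, 6), 3: (3, 8, 9, 7)}
partition = {0: 0, 1: 0, 2: 1, 3: 1}
print(dict(interface_nodes(elements, partition)))  # {(0, 1): {2, 6}}
```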

Dynamic Analysis of Constrained Mechanical System Moving on a Flexible Beam Structure(II) : Application (유연한 보 구조물 위를 이동하는 구속 기계계의 동력학 해석(II) : 응용)

  • Park, Chan-Jong;Park, Tae-Won
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.11
    • /
    • pp.176-184
    • /
    • 2000
  • Recently, it has become a very important issue to consider mechanical systems, such as high-speed vehicles and railway trains, moving on a flexible beam structure. Using the general approach proposed in the first part of this paper, it is possible to predict the planar motion of a constrained mechanical system and elastic structure with various kinds of foundation supporting conditions. The combined differential-algebraic equations of motion, derived from both multibody dynamics theory and the Finite Element Method, can be analyzed numerically using the generalized coordinate partitioning algorithm (the standard form of these equations is sketched after this entry). To verify the validity of this approach, results for a simply supported elastic beam subjected to a moving load are compared with the exact solution from a reference. Finally, a parameter study is conducted for a moving vehicle model on a simply supported 3-span bridge.

  • PDF
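
For context, the combined differential-algebraic equations of motion mentioned above have the conventional multibody form (the notation here is the standard one, not necessarily the authors'):

$$M(q)\,\ddot{q} + \Phi_q^{\mathsf{T}}\lambda = Q(q,\dot{q},t), \qquad \Phi(q,t) = 0,$$

where $M$ is the mass matrix, $\Phi$ the constraint vector, and $\lambda$ the Lagrange multipliers. Generalized coordinate partitioning splits $q$ into independent coordinates $u$ and dependent coordinates $v$ such that $\Phi_v$ is nonsingular; $u$ is integrated in time, while $v$ is recovered from $\Phi(q,t)=0$ and $\dot{v} = -\Phi_v^{-1}(\Phi_u\,\dot{u} + \Phi_t)$.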

Goal-oriented multi-collision source algorithm for discrete ordinates transport calculation

  • Wang, Xinyu;Zhang, Bin;Chen, Yixue
    • Nuclear Engineering and Technology
    • /
    • v.54 no.7
    • /
    • pp.2625-2634
    • /
    • 2022
  • Discretization errors are an extremely challenging problem in discrete ordinates calculations for radiation transport problems with void regions. In previous work, we presented a multi-collision source (MCS) method to overcome discretization errors, but its efficiency needs to be improved. This paper proposes a goal-oriented algorithm for the MCS method that adaptively determines the partitioning of the geometry and dynamically changes the angular quadrature in the remaining iterations. An importance factor, obtained from an adjoint transport calculation driven by the response function, yields a problem-dependent, goal-oriented spatial decomposition. The difference in the scalar fluxes between a high-order quadrature set and a lower one provides the error estimate that drives the dynamic quadrature. The goal-oriented algorithm allows optimization by using ray-tracing technology or high-order quadrature sets in the first few iterations and arranging the integration order of the remaining iterations from high to low. The algorithm has been implemented in the 3D transport code ARES and was tested on the Kobayashi benchmarks. The numerical results show a reduction in computation time on these problems for the same desired level of accuracy compared to the standard ARES code, and clear advantages over the traditional MCS method in solving radiation transport problems with reflective boundary conditions.
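
The dynamic-quadrature idea, lowering the angular quadrature order for later iterations when a low-order solution already reproduces the high-order scalar flux, can be sketched in a few lines. This is a hypothetical illustration; the solver call and tolerance are placeholders, not part of ARES:

```python
# Hypothetical sketch of error-driven quadrature selection: if a lower-order
# angular quadrature reproduces the high-order scalar flux within a tolerance,
# the cheaper order is used for the remaining iterations. solve() is a stub.

def choose_quadrature(solve, high_order, low_order, source, tol=1e-3):
    """Return the quadrature order to use for the remaining iterations."""
    flux_hi = solve(high_order, source)
    flux_lo = solve(low_order, source)
    err = max(abs(a - b) for a, b in zip(flux_hi, flux_lo))
    return low_order if err < tol else high_order

# Toy stand-in solver whose angular error falls off with the quadrature order.
def toy_solve(order, source):
    return [s * (1.0 + 1.0 / order**4) for s in source]

print(choose_quadrature(toy_solve, 16, 8, source=[1.0, 2.0]))  # 8: low order suffices
```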

Fast Coding Unit Decision Algorithm Based on Region of Interest by Motion Vector in HEVC (움직임 벡터에 의한 관심영역 기반의 HEVC 고속 부호화 유닛 결정 방법)

  • Hwang, In Seo;Sunwoo, Myung Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.11
    • /
    • pp.41-47
    • /
    • 2016
  • High Efficiency Video Coding (HEVC) employs the coding tree unit (CTU) to improve coding efficiency. A CTU consists of coding units (CUs), prediction units (PUs), and transform units (TUs). To obtain the best combination of CUs, PUs, and TUs, all possible block partitions have to be evaluated at each depth level. To reduce the complexity of the block partitioning process, this paper proposes a PU mode skip algorithm with region of interest (RoI) selection using motion vectors. In addition, this paper presents a CU depth-level skip algorithm that uses the co-located block information of previously encoded frames. First, the RoI selection algorithm distinguishes between dynamic CTUs and static CTUs, and asymmetric motion partitioning (AMP) blocks are then skipped in the static CTUs. Second, the depth-level skip algorithm predicts the most probable target depth level from the average depth within a CTU. The experimental results show that the proposed fast CU decision algorithm reduces the total encoding time by up to 44.8% compared to the HEVC test model (HM) 14.0 reference software encoder, with only a 2.5% Bjontegaard delta bit rate (BDBR) loss.
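
As a hypothetical sketch of the two decisions described above, classifying a CTU as dynamic or static from its motion vectors (skipping AMP modes for static CTUs) and predicting a target depth from the co-located CTU, one might write something like the following; the thresholds and data structures are invented, not taken from the paper:

```python
# Hypothetical sketch: mark a CTU as "dynamic" when its motion vectors are large
# (region of interest); otherwise skip asymmetric motion partitioning (AMP).
# Predict a target depth from the co-located CTU of the previous frame.

def is_dynamic_ctu(motion_vectors, mv_threshold=4):
    """RoI test: any MV magnitude above the threshold marks the CTU as dynamic."""
    return any(abs(mvx) + abs(mvy) > mv_threshold for mvx, mvy in motion_vectors)

def candidate_pu_modes(dynamic):
    base = ["2Nx2N", "2NxN", "Nx2N"]
    amp = ["2NxnU", "2NxnD", "nLx2N", "nRx2N"]
    return base + amp if dynamic else base      # static CTU: AMP modes skipped

def target_depth(colocated_depths):
    """Depth-level skip: search around the average depth of the co-located CTU."""
    return round(sum(colocated_depths) / len(colocated_depths))

mvs = [(0, 1), (2, 0), (6, 3)]
print(is_dynamic_ctu(mvs))          # True -> keep AMP modes for this CTU
print(candidate_pu_modes(False))    # ['2Nx2N', '2NxN', 'Nx2N']
print(target_depth([1, 2, 2, 3]))   # 2
```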

Source and Sink Limitations to Soybean Yield (콩의 동화기관과 수용기관의 능력평가)

  • Suk Ha, Lee;Yeul Gue, Seung;Seok Dong, Kim
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.40 no.2
    • /
    • pp.255-259
    • /
    • 1995
  • Improvement in potential crop yield could be achieved through the improvement of either source potential or sink capacity, but preferably both simultaneously. A field experiment was performed to evaluate genotypic differences in the partitioning of dry matter into each plant part in response to photosynthetic manipulation, as well as to assess whether soybean yield is source- or sink-limited. Four soybean genotypes, 'Baekunkong', 'Suwon 168', and two local soybeans with black seed coats (hereafter referred to as the black soybeans 'Kangleungjarae' and 'Keumleungjarae'), were grown in four different environments in which one or two layers of shading net during grain filling and two different planting densities (55,000 and 110,000 plants $ha^{-1}$) were applied to manipulate photosynthesis. Significant effects of genotype (G), photosynthetic manipulation (P), and the G${\times}$P interaction were shown for top and grain dry weight. The ratio of grain to top dry weight was lowest in soybean plants grown at 110,000 plants $ha^{-1}$ and covered with two layers of shading net during grain filling. Evaluation of the dynamic changes in shoot harvest index in response to the photosynthetic manipulation treatments revealed that sink was more limiting in the local black soybeans than in Suwon 168 and Baekunkong, indicating that the availability of photosynthate during grain filling did not limit grain yield in the local black soybeans when compared to Baekunkong and Suwon 168.

  • PDF

Dynamic of heat production partitioning in rooster by indirect calorimetry

  • Rony Lizana, Riveros;Rosiane, de Sousa Camargos;Marcos, Macari;Matheus, de Paula Reis;Bruno Balbino, Leme;Nilva Kazue, Sakomura
    • Animal Bioscience
    • /
    • v.36 no.1
    • /
    • pp.75-83
    • /
    • 2023
  • Objective: The objective of this study was to describe a methodological procedure to quantify the partitioning of heat production (HP) into basal metabolism or fasting heat production (FHP), heat production due to physical activity (HPA), and the thermic effect of feeding (TEF) in roosters. Methods: Eighteen 54-wk-old Hy-Line Brown roosters (2.916±0.15 kg) were allocated in an open-circuit respirometry chamber for measurements of O2 consumption (VO2), CO2 production (VCO2), and physical activity (PA) under comfortable environmental conditions, following the protocol: adaptation (3 d), ad libitum feeding (1 d), and fasting (1 d). The Brouwer equation was used to calculate HP from VO2 and VCO2. The plateau FHP (parameter L) was estimated through the broken-line model HP = U × (R - t) × I + L, with I = 1 if t < R and I = 0 otherwise, where the break point R is the time t separating the short from the long fasting period and U is the rate of decrease after feed withdrawal. The HP components were characterized over three events: ad libitum feeding, the short fasting period, and the long fasting period. A linear regression between physical activity (PA) and HP was fitted to determine the HPA and to estimate the standardized FHP (st-FHP) as the intercept at PA = 0. Results: The plateau FHP was reached 11.7 h after feed withdrawal, with a mean value of 386 kJ/kg$^{0.75}$/d, differing by 32 kJ from the st-FHP (354 kJ/kg$^{0.75}$/d). The slope of HP per unit of PA was 4.52 kJ/mV. The total HP in roosters, partitioned into st-FHP, thermic effect of feeding (TEF), and HPA, was 56.6%, 25.7%, and 17.7%, respectively. Conclusion: The FHP represents the largest fraction of energy expenditure in roosters, followed by the TEF. Furthermore, PA increased the variation of the HP measurements.
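
The broken-line model quoted in the abstract can be fitted directly. The sketch below assumes SciPy and NumPy are available and uses invented example data; only the model form HP = U × (R - t) × I + L and the parameter names come from the abstract:

```python
# Sketch of fitting the broken-line model from the abstract,
# HP(t) = U*(R - t)*I + L with I = 1 for t < R and I = 0 otherwise,
# to invented example data. Assumes SciPy/NumPy are available.

import numpy as np
from scipy.optimize import curve_fit

def broken_line(t, U, R, L):
    return U * (R - t) * (t < R) + L

rng = np.random.default_rng(0)
t = np.arange(0.0, 24.0, 0.5)                      # hours after feed withdrawal
hp = broken_line(t, U=8.0, R=11.7, L=386.0) + rng.normal(0.0, 5.0, t.size)

(U, R, L), _ = curve_fit(broken_line, t, hp, p0=[5.0, 10.0, 400.0])
print(f"U={U:.1f}, breakpoint R={R:.1f} h, plateau FHP L={L:.0f} kJ/kg^0.75/d")
```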

The Contact and Parallel Analysis of Smoothed Particle Hydrodynamics (SPH) Using Polyhedral Domain Decomposition (다면체영역분할을 이용한 SPH의 충돌 및 병렬해석)

  • Moonho Tak
    • Journal of the Korean GEO-environmental Society
    • /
    • v.25 no.4
    • /
    • pp.21-28
    • /
    • 2024
  • In this study, a polyhedral domain decomposition method for Smoothed Particle Hydrodynamics (SPH) analysis is introduced. SPH, one of the meshless methods, is a numerical analysis method for fluid flow simulation. It can be useful for analyzing fluidic soil or fluid-structure interaction problems. SPH is a particle-based method in which an increased particle count generally improves accuracy but diminishes numerical efficiency. To enhance numerical efficiency, parallel processing algorithms are commonly employed together with a Cartesian-coordinate-based domain decomposition method. However, for parallel analysis of complex geometric shapes or fluidic problems under dynamic boundary conditions, the Cartesian-coordinate-based domain decomposition method may not be suitable. The introduced polyhedral domain decomposition technique offers advantages in enhancing parallel efficiency in such problems, since it allows partitioning into various forms of 3D polyhedral elements that better fit the problem. The physical properties of SPH particles are calculated using information from neighboring particles within the smoothing length. Methods are presented for sharing the information of particles that become physically separated by the partitioning and for sharing information at cross-points, where parallel efficiency might otherwise diminish. In the numerical examples, the proposed method's parallel efficiency approached 95% for up to 12 cores; as the number of cores increases further, parallel efficiency decreases due to the increased information sharing among cores.
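
One concrete piece of the scheme described above is deciding which particles must be shared between subdomains: any particle within one smoothing length of a neighboring subdomain. The sketch below is a simplified, hypothetical illustration using box-shaped subdomains (the paper uses polyhedra) and invented data:

```python
# Hypothetical sketch: particles within one smoothing length h of another
# subdomain must be shared with it as ghost/halo particles. Subdomains are
# axis-aligned boxes here for simplicity; the paper partitions into polyhedra.

def halo_particles(positions, owner, boxes, h):
    """Return {(src, dst): [particle ids src must send to dst]}."""
    halos = {}
    for pid, (x, y, z) in enumerate(positions):
        src = owner[pid]
        for dst, (lo, hi) in boxes.items():
            if dst == src:
                continue
            # distance from the particle to the dst box, axis by axis
            dx = max(lo[0] - x, 0.0, x - hi[0])
            dy = max(lo[1] - y, 0.0, y - hi[1])
            dz = max(lo[2] - z, 0.0, z - hi[2])
            if (dx * dx + dy * dy + dz * dz) ** 0.5 <= h:
                halos.setdefault((src, dst), []).append(pid)
    return halos

boxes = {0: ((0, 0, 0), (1, 1, 1)), 1: ((1, 0, 0), (2, 1, 1))}
pos = [(0.95, 0.5, 0.5), (0.2, 0.5, 0.5), (1.05, 0.5, 0.5)]
own = {0: 0, 1: 0, 2: 1}
print(halo_particles(pos, own, boxes, h=0.1))  # {(0, 1): [0], (1, 0): [2]}
```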

Dynamic Block Reassignment for Load Balancing of Block Centric Graph Processing Systems (블록 중심 그래프 처리 시스템의 부하 분산을 위한 동적 블록 재배치 기법)

  • Kim, Yewon;Bae, Minho;Oh, Sangyoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.5
    • /
    • pp.177-188
    • /
    • 2018
  • The scale of graph data has increased rapidly because of the growth of mobile Internet applications and the proliferation of social network services. This brings about the need for efficient distributed and parallel graph processing, since such large-scale graphs easily exceed the capacity of a single machine. Currently, there are two popular parallel graph processing approaches: vertex-centric graph processing and block-centric graph processing. While the vertex-centric approach can easily be applied to a parallel processing system, the block-centric approach was proposed to compensate for the drawbacks of the vertex-centric approach. In these systems, the initial quality of the graph partition significantly affects overall performance. However, it is very difficult to partition the graph optimally in the initial phase, so several dynamic load balancing techniques have been studied that perform progressive partitioning during graph processing. In this paper, we present a load balancing algorithm for the block-centric graph processing approach, whereas most dynamic load balancing techniques focus on vertex-centric systems. Our algorithm improves graph partition quality by dynamically reassigning blocks at runtime, and suggests a block split strategy for escaping local optima.
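
A minimal sketch of the dynamic block reassignment idea, moving blocks from the most loaded worker to the least loaded one after each superstep, is given below. The load model (sum of block sizes), the names, and the thresholds are assumptions, not the paper's algorithm:

```python
# Hypothetical sketch of dynamic block reassignment: after a superstep, move
# blocks from the most loaded worker to the least loaded one until the load
# gap is small. Load = sum of block sizes; all names are invented.

def rebalance(block_load, assignment, tolerance=0.1, max_moves=100):
    """assignment maps block id -> worker id; mutated in place and returned."""
    def worker_loads():
        loads = {}
        for b, w in assignment.items():
            loads[w] = loads.get(w, 0) + block_load[b]
        return loads

    for _ in range(max_moves):
        loads = worker_loads()
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        gap = loads[hi] - loads[lo]
        if gap <= tolerance * loads[hi]:
            break
        # Move the smallest block on the overloaded worker that narrows the gap.
        movable = [b for b, w in assignment.items() if w == hi and block_load[b] < gap]
        if not movable:
            break
        assignment[min(movable, key=block_load.get)] = lo
    return assignment

blocks = {"b0": 40, "b1": 35, "b2": 10, "b3": 10, "b4": 5}
assign = {"b0": 0, "b1": 0, "b2": 0, "b3": 1, "b4": 1}
print(rebalance(blocks, assign))   # e.g. {'b0': 0, 'b1': 1, 'b2': 1, 'b3': 1, 'b4': 0}
```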