• Title/Summary/Keyword: Partitioning methods


Small-cell Resource Partitioning Allocation for Machine-Type Communications in 5G HetNets (5G 이기종 네트워크 환경에서 머신타입통신을 위한 스몰셀 자원 분리 할당 방법)

  • Ilhak Ban;Se-Jin Kim
    • Journal of Internet Computing and Services
    • /
    • v.24 no.5
    • /
    • pp.1-7
    • /
    • 2023
  • This paper proposes a small-cell resource partitioning allocation method to mitigate interference to machine-type communication devices (MTCDs) and improve performance in 5G heterogeneous networks (HetNets), in which a macro base station (MBS) and many small-cell base stations (SBSs) are overlaid. In 5G HetNets, the data traffic generated by various types of MTCDs increases the load on the MBS. To reduce the MBS load, a cell range expansion (CRE) method is applied, in which a bias value is added to the received signal strength from the SBS, and MTCDs satisfying the condition are connected to the SBS. Connecting more MTCDs to the SBS through CRE reduces the load on the MBS, but the performance of those MTCDs degrades due to interference, so a method to solve this problem is needed. The proposed small-cell resource partitioning allocation method allocates resources with less interference from the MBS to mitigate interference for MTCDs newly added to the SBS through CRE, and improves overall MTCD performance by separating resources according to the performance of the existing MTCDs in the SBS. Simulation results show that the proposed method improves the capacity of MTCDs connected to the MBS and the SBS by 21% and 126%, respectively, compared to existing resource allocation methods.
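The CRE association rule the abstract describes can be sketched in a few lines. This is a toy illustration, not the paper's algorithm: the power values (dBm) and the 6 dB bias are made-up numbers, and the function name `associate` is ours.

```python
# Toy sketch of biased cell association (CRE): an MTCD attaches to the SBS
# when the SBS received power plus the bias exceeds the MBS received power.
# Power values (dBm) and the bias (dB) are illustrative, not from the paper.

def associate(p_mbs_dbm, p_sbs_dbm, bias_db):
    """Return 'SBS' if the biased SBS signal wins, else 'MBS'."""
    return "SBS" if p_sbs_dbm + bias_db > p_mbs_dbm else "MBS"

# Without bias the MTCD stays on the macro cell; a 6 dB CRE bias
# pushes it onto the small cell, offloading the MBS.
print(associate(-80.0, -84.0, 0.0))   # MBS
print(associate(-80.0, -84.0, 6.0))   # SBS
```

The resource-partitioning step of the paper then gives these CRE-added MTCDs resources on which the MBS interferes less.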

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed in a mode. In this paper, we address a HW/SW partitioning problem: the HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimal total system cost for the allocation/mapping of processing resources to functional modules in tasks, together with a schedule that satisfies the timing constraints. Success in solving the problem depends largely on how fully the potential parallelism among module executions is exploited. However, due to the inherently large search space of this parallelism, and in order to keep schedulability analysis tractable, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise-refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications.
From experiments with a set of real-life applications, it is shown that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, compared with the conventional method.
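The cost-versus-deadline trade-off at the heart of HW/SW partitioning can be illustrated with a deliberately simplified greedy sketch (the paper's actual technique is a stepwise refinement that also schedules parallel modules). Here modules run serially, each starts in software, and we move the module with the best time-saved-per-cost ratio to hardware until the deadline is met; the module data are invented.

```python
# Simplified cost-driven HW/SW partitioning under a timing constraint.
# Serial execution model and module data are illustrative assumptions;
# the paper's method additionally exploits parallelism among modules.

def partition(modules, deadline):
    """modules: list of (name, sw_time, hw_time, hw_cost)."""
    in_hw = set()
    total = sum(m[1] for m in modules)           # all-software schedule
    while total > deadline:
        # greedily pick the best time-saved-per-unit-cost candidate
        best = max((m for m in modules if m[0] not in in_hw),
                   key=lambda m: (m[1] - m[2]) / m[3])
        in_hw.add(best[0])
        total -= best[1] - best[2]
    return in_hw, total

mods = [("fft", 10, 2, 4), ("filt", 6, 3, 1), ("ctrl", 4, 3, 5)]
print(partition(mods, 12))  # moves 'filt', then 'fft', to hardware
```

Running this moves `filt` first (ratio 3 per cost unit), then `fft`, reaching a schedule length of 9 against the deadline of 12.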

On Using Near-surface Remote Sensing Observation for Evaluation Gross Primary Productivity and Net Ecosystem CO2 Partitioning (근거리 원격탐사 기법을 이용한 총일차생산량 추정 및 순생태계 CO2 교환량 배분의 정확도 평가에 관하여)

  • Park, Juhan;Kang, Minseok;Cho, Sungsik;Sohn, Seungwon;Kim, Jongho;Kim, Su-Jin;Lim, Jong-Hwan;Kang, Mingu;Shim, Kyo-Moon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.251-267
    • /
    • 2021
  • Remotely sensed vegetation indices (VIs) are empirically related to gross primary productivity (GPP) across various spatio-temporal scales. The uncertainty in the GPP-VI relationship increases with temporal resolution. Uncertainty also exists in the eddy covariance (EC)-based estimation of GPP, arising from the partitioning of the measured net ecosystem CO2 exchange (NEE) into GPP and ecosystem respiration (RE). For two forest and two agricultural sites, we correlated the EC-derived GPP at various time scales with three near-surface remotely sensed VIs: (1) normalized difference vegetation index (NDVI), (2) enhanced vegetation index (EVI), and (3) near-infrared reflectance from vegetation (NIRv), along with NIRvP (i.e., NIRv multiplied by photosynthetically active radiation, PAR). Among the compared VIs, NIRvP showed the highest correlation with half-hourly and monthly GPP at all sites. NIRvP was then used to test the reliability of GPP derived by two NEE partitioning methods: (1) the original KoFlux method (GPPOri) and (2) a machine-learning-based method (GPPANN). GPPANN showed higher correlation with NIRvP at the half-hourly time scale, but there was no difference at the daily time scale. The NIRvP-GPP correlation was lower under clear-sky conditions due to the co-limitation of GPP by other environmental factors such as air temperature, vapor pressure deficit, and soil moisture. Under cloudy conditions, however, when photosynthesis is limited mainly by radiation, NIRvP was more promising for testing the credibility of NEE partitioning methods. Despite the need for further analyses, the results suggest that NIRvP can be used as a proxy for GPP at high temporal resolution. However, for VI-based GPP estimation at high temporal resolution to be meaningful, complex-systems-based analysis methods (related to systems thinking and self-organization, going beyond the empirical VI-GPP relationship) should be developed.
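The two quantities the abstract relates can be written out concretely: NIRv is NDVI multiplied by NIR reflectance, NIRvP scales NIRv by PAR, and flux partitioning uses the identity NEE = RE − GPP (negative NEE = net uptake). The reflectance, PAR, and flux numbers below are illustrative, not from the study.

```python
# Sketch of the index definitions and the flux-partitioning identity.
# NIRv = NDVI * NIR reflectance; NIRvP = NIRv * PAR; GPP = RE - NEE.
# All numeric inputs are illustrative.

def nirv(nir, red):
    ndvi = (nir - red) / (nir + red)
    return ndvi * nir

def gpp_from_nee(nee, re):
    # Partitioning convention: NEE = RE - GPP  =>  GPP = RE - NEE
    return re - nee

nirvp = nirv(0.45, 0.05) * 1500.0        # PAR = 1500 umol m-2 s-1
print(round(nirvp, 1))                    # 540.0
print(gpp_from_nee(-15.0, 5.0))           # 20.0 (strong daytime uptake)
```

The paper's test is essentially how well a series of such NIRvP values tracks the GPP series produced by each partitioning method.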

Semantic Segmentation using Convolutional Neural Network with Conditional Random Field (조건부 랜덤 필드와 컨볼루션 신경망을 이용한 의미론적인 객체 분할 방법)

  • Lim, Su-Chang;Kim, Do-Yeon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.12 no.3
    • /
    • pp.451-456
    • /
    • 2017
  • Semantic segmentation, one of the most basic yet complicated problems in computer vision, classifies each pixel of an image into a specific object class and assigns it a label. MRF and CRF models have long been studied as effective methods for improving the accuracy of pixel-level labeling. In this paper, we propose a semantic segmentation method that combines a CNN, a kind of deep learning model recently in the spotlight, with a CRF, a probabilistic graphical model. The Pascal VOC 2012 image database was used for training and performance verification, and testing was performed on arbitrary images not used for training. The results show better segmentation performance than the existing semantic segmentation algorithm.
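The CRF refinement step can be illustrated with a toy inference scheme: iterated conditional modes (ICM) over per-pixel CNN scores on a 1-D "image", trading unary confidence against a Potts smoothness penalty with neighbours. This is a generic CRF sketch with invented scores, not the paper's model (which operates on 2-D images with a trained CNN).

```python
# Toy CRF-style refinement of per-pixel classifier scores via ICM.
# unary[i][k] = score of label k at pixel i (higher is better); beta
# penalizes label disagreement with neighbours. Scores are made up.

def icm(unary, beta=1.0, iters=5):
    labels = [max(range(len(u)), key=u.__getitem__) for u in unary]
    for _ in range(iters):
        for i, u in enumerate(unary):
            def energy(k):
                pen = sum(k != labels[j] for j in (i - 1, i + 1)
                          if 0 <= j < len(unary))
                return -u[k] + beta * pen
            labels[i] = min(range(len(u)), key=energy)
    return labels

# The noisy middle pixel (weakly label 1) is smoothed toward its neighbours.
scores = [[4.0, 0.0], [4.0, 0.0], [1.0, 1.5], [4.0, 0.0]]
print(icm(scores))  # [0, 0, 0, 0]
```

The same energy trade-off, with stronger inference (e.g. mean-field), is what cleans up ragged CNN label maps in CNN+CRF pipelines.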

Background Removal from XRF Spectrum using the Interval Partitioning and Classifying (구간 분할과 영역 분류를 이용한 XRF 스펙트럼의 백그라운드 제거)

  • Yang, Sanghoon;Lee, Jaehwan;Yoon, Sook;Park, Dong Sun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.9
    • /
    • pp.164-171
    • /
    • 2013
  • XRF spectrum data of a material include many background signals that are unrelated to its components. Since an XRF analyzer determines the components and concentrations of an analyte using the locations and magnitudes of Gaussian-shaped peaks extracted from a spectrum, the background signals need to be removed completely from the spectrum for accurate analysis. Morphology-based, SNIP-based, and thresholding-based methods have been used to remove background signals. In this paper, a background removal method is proposed as an improved version of an interval-thresholding-based method. The proposed method consists of interval partitioning, interval classification, and background estimation. Experimental results show that the proposed method removes background from the spectrum better than the existing morphology-based and SNIP-based methods.
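A minimal sketch of interval-based background estimation, under simplifying assumptions: the spectrum is split into fixed-width intervals, each interval's minimum serves as a background node, nodes are linearly interpolated, and the result is subtracted. The paper's interval-classification step (distinguishing peak intervals from background intervals) is omitted, and the spectrum is invented.

```python
# Rough interval-partitioning background removal sketch (classification
# step omitted): per-interval minima as background nodes, linear
# interpolation between nodes, then subtraction clipped at zero.

def remove_background(spectrum, width):
    nodes = []
    for start in range(0, len(spectrum), width):
        seg = spectrum[start:start + width]
        nodes.append((start + seg.index(min(seg)), min(seg)))
    bg = []
    for i in range(len(spectrum)):
        left = max((n for n in nodes if n[0] <= i), default=nodes[0])
        right = min((n for n in nodes if n[0] >= i), default=nodes[-1])
        if left[0] == right[0]:
            bg.append(left[1])
        else:
            t = (i - left[0]) / (right[0] - left[0])
            bg.append(left[1] + t * (right[1] - left[1]))
    return [max(0.0, s - b) for s, b in zip(spectrum, bg)]

spec = [5.0, 5.0, 9.0, 14.0, 9.0, 5.0, 5.0, 5.0]  # one peak on a flat floor
print(remove_background(spec, 4))  # [0.0, 0.0, 4.0, 9.0, 4.0, 0.0, 0.0, 0.0]
```

The Gaussian peak at channel 3 survives intact while the flat background of 5 is removed, which is the property the downstream peak-fitting step needs.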

Deep Learning based HEVC Double Compression Detection (딥러닝 기술 기반 HEVC로 압축된 영상의 이중 압축 검출 기술)

  • Uddin, Kutub;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.1134-1142
    • /
    • 2019
  • Detection of double compression is one of the most efficient ways of verifying the validity of videos. Many methods have been introduced to detect HEVC double compression with different coding parameters. However, HEVC double compression detection under the same coding environment is still a challenging task in video forensics. In this paper, we introduce a novel method based on the frame partitioning information in intra prediction mode for detecting double compression under the same coding environment. We propose to extract a statistical feature and a Deep Convolutional Neural Network (DCNN) feature from the difference of partitioning pictures, including Coding Unit (CU) and Transform Unit (TU) information. Finally, a softmax layer is integrated to classify videos into single and double compression by combining the statistical and DCNN features. Experimental results show the effectiveness of the statistical and DCNN features, with an average accuracy of 87.5% on the WVGA dataset and 84.1% on the HD dataset.
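The idea of featurizing a partitioning difference can be sketched with a toy statistic: compare CU-depth maps from two encodes and summarize their disagreement. This is only an illustration of the feature concept; the paper's actual statistical and DCNN features, and the extraction of real HEVC CU/TU maps, are beyond this sketch, and the depth maps below are invented.

```python
# Toy partitioning-difference feature: mean absolute difference between
# two CU-depth maps. A recompressed video tends to reproduce most of its
# partitioning, so small but non-zero disagreement is the signal of
# interest. Maps are illustrative.

def depth_diff_feature(map1, map2):
    diffs = [abs(a - b) for r1, r2 in zip(map1, map2)
             for a, b in zip(r1, r2)]
    return sum(diffs) / len(diffs)

first  = [[2, 2], [1, 3]]   # CU depths from the first encode
second = [[2, 1], [1, 3]]   # depths after re-encoding the decoded video
print(depth_diff_feature(first, second))  # 0.25
```

A classifier (in the paper, a softmax over combined statistical and DCNN features) then separates single- from double-compressed videos based on such statistics.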

Rate-Distortion Optimized Zerotree Image Coding using Wavelet Transform (웨이브렛 변환을 이용한 비트율-왜곡 최적화 제로트리 영상 부호화)

  • 이병기;호요성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.101-109
    • /
    • 2004
  • In this paper, we propose an efficient algorithm for wavelet-based still image coding that utilizes rate-distortion (R-D) theory. Since conventional tree-structured image coding schemes do not consider rate-distortion theory properly, they show reduced coding performance. We apply a rate-distortion optimized embedding (RDE) operation to the set partitioning in hierarchical trees (SPIHT) algorithm, using the rate-distortion slope as the criterion for the coding order of wavelet coefficients in the SPIHT lists. We also describe modified set partitioning and rate-distortion optimized list scan methods. Experimental results demonstrate that the proposed method outperforms both the SPIHT algorithm and the rate-distortion optimized embedding algorithm in PSNR (peak signal-to-noise ratio) performance.
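The ordering criterion the abstract describes is simple to state in code: send coding candidates in decreasing order of R-D slope, i.e. distortion reduction per bit spent, rather than in a fixed scan order. The candidate numbers below are illustrative, not SPIHT internals.

```python
# Sketch of rate-distortion ordered embedding: candidates are coded in
# decreasing order of R-D slope (distortion drop per bit). Numbers are
# illustrative; in RDE/SPIHT the candidates are wavelet coefficient tests.

def rd_order(candidates):
    """candidates: list of (name, delta_distortion, bits)."""
    return sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)

cands = [("c1", 100.0, 8), ("c2", 90.0, 4), ("c3", 30.0, 1)]
print([c[0] for c in rd_order(cands)])  # ['c3', 'c2', 'c1']
```

Note that `c1` removes the most distortion in absolute terms but is coded last: its slope (12.5 per bit) is the worst, which is exactly why slope ordering beats magnitude ordering at intermediate bit budgets.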

A New Low-Skew Clock Network Design Method (새로운 낮은 스큐의 클락 분배망 설계 방법)

  • 이성철;신현철
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.5
    • /
    • pp.43-50
    • /
    • 2004
  • Clock skew is one of the major constraints on the high-speed operation of synchronous integrated circuits. In this paper, we propose a hierarchical-partitioning-based clock network design algorithm called Advanced Clock Tree Generation (ACTG). In particular, new effective partitioning and refinement techniques have been developed in which the capacitance and the edge length to each sink are considered from the early stages of clock design. The hierarchical structures obtained by partitioning and refinement are utilized for balanced clock routing. We use zero-skew routing, in which the Elmore delay model is used to estimate delay, and we propose an overlap-avoidance routing algorithm for clock tree generation. Experimental results show significant improvement over conventional methods.
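The Elmore delay used for zero-skew routing has a compact definition: the delay to a sink is the sum, over each edge on the root-to-sink path, of that edge's resistance times the total capacitance downstream of it. The three-sink RC tree below is an invented example, not from the paper.

```python
# Elmore delay on an RC tree: for each edge on the root-to-sink path,
# add (edge resistance) * (capacitance of the subtree below the edge).
# The tree and R/C values are illustrative.

def elmore(tree, node):
    """tree: {name: (parent, R_to_parent, C_at_node)}; delay root->node."""
    def subtree_cap(n):
        return tree[n][2] + sum(subtree_cap(c) for c in tree
                                if tree[c][0] == n)
    delay = 0.0
    while tree[node][0] is not None:
        parent, r, _ = tree[node]
        delay += r * subtree_cap(node)
        node = parent
    return delay

# root -- a -- b, with a second sink c also hanging off a
rc = {"root": (None, 0.0, 0.0),
      "a": ("root", 1.0, 1.0),
      "b": ("a", 2.0, 1.0),
      "c": ("a", 2.0, 1.0)}
print(elmore(rc, "b"))  # 1*3 + 2*1 = 5.0
```

Because `b` and `c` are symmetric, both sinks see a delay of 5.0: zero skew. Zero-skew routing algorithms choose tap points so that this equality holds for every sink pair even when the loads differ.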

A Memory-based Learning using Repetitive Fixed Partitioning Averaging (반복적 고정분할 평균기법을 이용한 메모리기반 학습기법)

  • Yih, Hyeong-Il
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.11
    • /
    • pp.1516-1522
    • /
    • 2007
  • We previously proposed the FPA (Fixed Partition Averaging) method to improve the storage requirement and classification rate of Memory-Based Reasoning. The algorithm performed reasonably well in many areas, but it incurred memory overhead and lengthy computation in regions containing multiple classes. We propose a Repetitive FPA algorithm that repetitively partitions the pattern space in such multi-class regions. The proposed method exhibits performance comparable to k-NN with far fewer stored patterns, and better results than the EACH system, which implements the NGE theory.
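The base FPA idea can be sketched in one dimension: split the feature space into equal-width cells, average the training patterns of each class within each cell, and classify a query by the nearest stored representative. The repetitive re-partitioning of multi-class cells, which is the paper's contribution, is not shown, and the training data are invented.

```python
# Toy 1-D Fixed Partition Averaging: equal-width cells, one averaged
# representative per (cell, class), nearest-representative classification.
# Data and cell count are illustrative; the repetitive refinement of
# multi-class cells is omitted.

def fpa_train(samples, lo, hi, n_cells):
    width = (hi - lo) / n_cells
    buckets = {}                                  # (cell, label) -> values
    for x, label in samples:
        cell = min(int((x - lo) / width), n_cells - 1)
        buckets.setdefault((cell, label), []).append(x)
    return [(sum(v) / len(v), lab) for (_, lab), v in buckets.items()]

def fpa_classify(reps, x):
    return min(reps, key=lambda r: abs(r[0] - x))[1]

data = [(0.1, "A"), (0.2, "A"), (0.9, "B"), (1.0, "B")]
reps = fpa_train(data, 0.0, 1.0, 2)   # one average per class per cell
print(fpa_classify(reps, 0.15))  # A
print(fpa_classify(reps, 0.95))  # B
```

Four training patterns collapse to two stored representatives here, which is the storage saving FPA trades against k-NN's exact memorization.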


Development of Three-Dimensional Layered Finite Element for Thermo-Mechanical Analysis (열 및 응력 해석용 3차원 적층 유한요소의 개발)

  • Jo, Seong-Su;Ha, Seong-Gyu
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.25 no.11
    • /
    • pp.1785-1795
    • /
    • 2001
  • A multi-layered brick element for the finite element method is developed for analyzing three-dimensionally layered composite structures subjected to both thermal and mechanical boundary conditions. The element has eight nodes, with one degree of freedom for the temperature and three for the displacements at each node, and can contain an arbitrary number of layers with different material properties within the element, whereas a conventional element must contain a single material. Thus the total number of nodes and elements needed to analyze multi-layered composite structures can be reduced tremendously. In solving the global equation, a partitioning technique is used to obtain the temperature and the displacements caused by both the mechanical boundary conditions and the temperature distribution. Results obtained using the developed element are compared with the commercial package ANSYS and with conventional finite element methods, and they are in good agreement. It is also shown that the number of nodes and elements can be reduced tremendously using the proposed element without losing numerical accuracy.
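The partitioned solve of a one-way coupled thermo-mechanical system has a standard block structure: with [[K_TT, 0], [K_uT, K_uu]] [T, u] = [q, f], the temperature block is solved first and its result enters the mechanical block as a thermal load. The scalar sketch below, with invented stiffness values, shows the two-step ordering; a real FEM code would solve matrix systems in the same sequence.

```python
# Scalar sketch of a partitioned thermo-mechanical solve:
#   K_TT * T = q               (thermal sub-problem, independent of u)
#   K_uu * u = f - K_uT * T    (mechanical sub-problem with thermal load)
# Stiffness and load values are illustrative.

def partitioned_solve(k_tt, q, k_ut, k_uu, f):
    t = q / k_tt                  # step 1: temperature
    u = (f - k_ut * t) / k_uu     # step 2: displacement under thermal load
    return t, u

t, u = partitioned_solve(k_tt=2.0, q=10.0, k_ut=1.5, k_uu=4.0, f=11.5)
print(t, u)  # 5.0 1.0
```

The zero block in the upper-right is what makes this sequential solve exact: the temperature field does not depend on the displacements, only vice versa.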