• Title/Abstract/Keyword: scalable computing

Search results: 189 items (processing time: 0.025 seconds)

Scalable Multi-view Video Coding based on HEVC

  • Lim, Woong;Nam, Junghak;Sim, Donggyu
    • IEIE Transactions on Smart Processing and Computing / Vol. 4, No. 6 / pp.434-442 / 2015
  • In this paper, we propose an integrated spatial and view scalable video codec based on High Efficiency Video Coding (HEVC). The proposed video codec is developed based on the similarities and the unique features of the scalable extension and the 3D multi-view extension of HEVC. To improve compression efficiency with the proposed scalable multi-view video codec, inter-layer and inter-view predictions are jointly employed using high-level syntax elements that are defined to identify view and layer information. For the inter-view and inter-layer predictions, a decoded picture buffer (DPB) management algorithm is also proposed. The inter-view and inter-layer motion predictions are integrated into a consolidated prediction by harmonizing them with the temporal motion prediction of HEVC. We found that the proposed scalable multi-view codec achieves bitrate reductions of 36.1%, 31.6%, and 15.8% over the ×2 parallel scalable codec, the ×1.5 parallel scalable codec, and the parallel multi-view codec, respectively.
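A minimal Python sketch of the consolidated reference selection described above is given below. It is a toy model of the selection logic only, not the normative HEVC DPB process from the paper; the class names and the fields `poc`, `layer_id`, and `view_id` are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DecodedPicture:
    poc: int        # picture order count (temporal position)
    layer_id: int   # spatial/quality layer
    view_id: int    # camera view

@dataclass
class DecodedPictureBuffer:
    pictures: list = field(default_factory=list)

    def reference_list(self, poc, layer_id, view_id):
        """Build one consolidated reference list for the current picture:
        temporal references from the same layer and view, plus co-located
        pictures from lower layers (inter-layer) and other views (inter-view)."""
        temporal = [p for p in self.pictures
                    if p.layer_id == layer_id and p.view_id == view_id and p.poc != poc]
        inter_layer = [p for p in self.pictures
                       if p.poc == poc and p.view_id == view_id and p.layer_id < layer_id]
        inter_view = [p for p in self.pictures
                      if p.poc == poc and p.layer_id == layer_id and p.view_id != view_id]
        return temporal + inter_layer + inter_view

dpb = DecodedPictureBuffer([DecodedPicture(0, 1, 0), DecodedPicture(8, 1, 0),
                            DecodedPicture(16, 0, 0), DecodedPicture(16, 1, 1)])
print(dpb.reference_list(poc=16, layer_id=1, view_id=0))   # temporal, inter-layer, inter-view refs
```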

분할형 CSA를 이용한 Montgomery 곱셈기 (The Montgomery Multiplier Using Scalable Carry Save Adder)

  • 하재철;문상재
    • 정보보호학회논문지 / Vol. 10, No. 3 / pp.77-83 / 2000
  • This paper presents a new modular multiplier for Montgomery multiplication that uses an iterative small carry-save adder. The proposed multiplier is more flexible and better suited to long-bit multiplication because it can be scaled according to the available design area and the required computing time. We describe the word-based Montgomery algorithm and the design architecture of the multiplier. Our analysis and simulation show that the proposed multiplier provides area/time tradeoffs in area-limited designs such as IC cards.
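To make the word-based formulation concrete, here is a minimal word-serial Montgomery multiplication sketch in Python (a textbook formulation, not the paper's split carry-save-adder architecture; a hardware design would replace the word loop with a CSA datapath). It assumes an odd modulus, reduced operands, and Python 3.8+ for the modular inverse.

```python
def montgomery_multiply(a, b, n, num_words, w=16):
    """Word-serial Montgomery multiplication: returns a*b*2^(-w*num_words) mod n.
    Assumes n odd and 0 <= a, b < n < 2^(w*num_words)."""
    base = 1 << w
    n_inv = (-pow(n, -1, base)) % base       # -n^{-1} mod 2^w (Montgomery constant)
    t = 0
    for i in range(num_words):
        a_i = (a >> (w * i)) & (base - 1)    # i-th w-bit word of a, least significant first
        t += a_i * b
        m = (t * n_inv) & (base - 1)         # chosen so that t + m*n is divisible by 2^w
        t = (t + m * n) >> w
    return t - n if t >= n else t            # single final conditional subtraction

# toy check: map into the Montgomery domain, multiply, and map back
n, w, s = 0xF123456789ABCDEF, 16, 4          # odd 64-bit modulus split into four 16-bit words
R = 1 << (w * s)
a, b = 0x1234, 0xABCDEF
abar, bbar = (a * R) % n, (b * R) % n        # Montgomery representations
result = montgomery_multiply(montgomery_multiply(abar, bbar, n, s, w), 1, n, s, w)
assert result == (a * b) % n
```

In hardware, a smaller `w` would mean a smaller adder but more iterations, which is the kind of area/time tradeoff the abstract refers to.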

Scalable Approach to Failure Analysis of High-Performance Computing Systems

  • Shawky, Doaa
    • ETRI Journal / Vol. 36, No. 6 / pp.1023-1031 / 2014
  • Failure analysis is necessary to clarify the root cause of a failure, predict the next time a failure may occur, and improve the performance and reliability of a system. However, it is not an easy task to analyze and interpret failure data, especially for complex systems. Usually, these data are represented using many attributes, and sometimes they are inconsistent and ambiguous. In this paper, we present a scalable approach for the analysis and interpretation of failure data of high-performance computing systems. The approach employs rough sets theory (RST) for this task. The application of RST to a large publicly available set of failure data highlights the main attributes responsible for the root cause of a failure. In addition, it is used to analyze other failure characteristics, such as time between failures, repair times, workload running on a failed node, and failure category. Experimental results show the scalability of the presented approach and its ability to reveal dependencies among different failure characteristics.
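To make the rough set machinery concrete, the sketch below computes indiscernibility classes and a lower approximation over a few toy failure records. The attribute names (`node`, `workload`, `root_cause`) and values are invented for illustration and are not taken from the paper's data set.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices into indiscernibility classes over the given attributes."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def lower_approximation(rows, attrs, target):
    """Rows whose indiscernibility class lies entirely inside the target set,
    i.e. rows that the chosen attributes classify without ambiguity."""
    return {i for block in partition(rows, attrs)
            if set(block) <= target
            for i in block}

# toy failure records: condition attributes plus a decision attribute (root_cause)
rows = [
    {"node": "io",  "workload": "high", "root_cause": "hardware"},
    {"node": "io",  "workload": "high", "root_cause": "software"},
    {"node": "cpu", "workload": "low",  "root_cause": "hardware"},
]
hardware = {i for i, r in enumerate(rows) if r["root_cause"] == "hardware"}
print(lower_approximation(rows, ["node", "workload"], hardware))   # {2}: rows 0 and 1 are indiscernible
```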

확장형 컴퓨팅 구조를 위한 범용 컴파일러 설계 (Designing a Generic Compiler for Scalable Computing Fabric)

  • 타로파 에마뉴엘;이원종;바슨 스리니;한탁돈
    • 한국정보과학회 (KIISE) / Proceedings of the 2006 Korea Computer Congress, Vol. 33, No. 1 (B) / pp.394-396 / 2006
  • Scalable computing fabric is a blossoming area of processor design. As processors built on scalable computing fabric have evolved from simple programmable units to processors that support change-of-flow instructions and function calls, there is increasing interest in developing compiler technology that allows us not only to harness the full power of the hardware but also to target multiple architectures. In this paper, we present the front-end of a generic compiler that is able to accept various source languages and transform them into a common intermediate representation.
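The following minimal sketch suggests what a common intermediate representation fed by language-specific front-ends can look like: source text is tokenized and lowered into three-address IR instructions. It is a toy illustration of the concept only (left-to-right folding, no operator precedence), not the front-end described in the paper.

```python
import re
from dataclasses import dataclass

@dataclass
class IRInstr:
    """One three-address instruction of the common intermediate representation."""
    op: str
    dst: str
    args: tuple

def lower_expression(src):
    """Lower a flat arithmetic expression (e.g. "a + b - c") into IR instructions.
    Any front-end, whatever its source language, targets this same instruction set."""
    tokens = re.findall(r"[A-Za-z_]\w*|[+\-*/]", src)
    ops = {"+": "add", "-": "sub", "*": "mul", "/": "div"}
    instrs, acc, tmp = [], tokens[0], 0
    for op, operand in zip(tokens[1::2], tokens[2::2]):
        dst = f"t{tmp}"
        tmp += 1
        instrs.append(IRInstr(ops[op], dst, (acc, operand)))
        acc = dst
    return instrs

for instr in lower_expression("a + b - c"):
    print(instr)   # add t0 <- (a, b); sub t1 <- (t0, c)
```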


Scalable Prediction Models for Airbnb Listing in Spark Big Data Cluster using GPU-accelerated RAPIDS

  • Muralidharan, Samyuktha;Yadav, Savita;Huh, Jungwoo;Lee, Sanghoon;Woo, Jongwook
    • Journal of Information and Communication Convergence Engineering / Vol. 20, No. 2 / pp.96-102 / 2022
  • We aim to build predictive models for Airbnb prices using GPU-accelerated RAPIDS in a big data cluster. The Airbnb Listings datasets are used for the predictive analysis. Several machine-learning algorithms were adopted to build models that predict the price of Airbnb listings. We compare the results of traditional and big data approaches to machine learning for price prediction and discuss the performance of the models. We built big data models using a Databricks Spark cluster, a distributed parallel computing system. Furthermore, we implemented models that use multiple GPUs through RAPIDS in the Spark cluster. This model was developed using the XGBoost algorithm, whereas the other models were developed using traditional central processing unit (CPU)-based algorithms. This study compared all models in terms of accuracy metrics and computing time. We observed that the XGBoost model with RAPIDS using GPUs performed best in terms of both accuracy and computing time.
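A minimal single-node sketch of GPU-accelerated XGBoost regression on listing data is given below. The file name `airbnb_listings.csv`, the `price` target column, and the hyperparameters are placeholders, and `device="cuda"` assumes XGBoost 2.x or newer (older releases use `tree_method="gpu_hist"` instead); the Databricks Spark/RAPIDS cluster setup from the paper is not reproduced here.

```python
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# hypothetical cleaned listings file with a numeric "price" target column
df = pd.read_csv("airbnb_listings.csv")
X = df.drop(columns=["price"]).select_dtypes("number")
y = df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = xgb.XGBRegressor(
    n_estimators=300, max_depth=8, tree_method="hist",
    device="cuda",   # train the histogram-based booster on the GPU (XGBoost >= 2.0)
)
model.fit(X_train, y_train)
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"Test RMSE: {rmse:.2f}")
```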

Inter-layer Texture and Syntax Prediction for Scalable Video Coding

  • Lim, Woong;Choi, Hyomin;Nam, Junghak;Sim, Donggyu
    • IEIE Transactions on Smart Processing and Computing / Vol. 4, No. 6 / pp.422-433 / 2015
  • In this paper, we demonstrate inter-layer prediction tools for scalable video coders. The proposed scalable coder is designed to support not only spatial, quality, and temporal scalability but also view scalability. In addition, we propose quad-tree inter-layer prediction tools to improve coding efficiency at the enhancement layers. The proposed inter-layer prediction tools generate the texture prediction signal by exploiting texture, syntax, and residual information from a reference layer. Furthermore, the tools can be used with inter and intra prediction blocks within a large coding unit. The proposed framework preserves the rate-distortion performance of the base layer because it does not impose any restriction such as constrained intra prediction. According to our experiments, the framework supports spatial scalability with about 18.6%, 18.5%, and 25.2% bit overhead compared to single-layer coding. The proposed inter-layer prediction tool in the multi-loop decoding design achieves coding gains of 14.0%, 5.1%, and 12.1% in BD-rate at the enhancement layer, compared to single-layer HEVC, for the all-intra, low-delay, and random-access cases, respectively. For the single-loop decoding design, the proposed quad-tree inter-layer prediction achieves bit savings of 14.0%, 3.7%, and 9.8%.
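The core of inter-layer texture prediction can be sketched in a few lines of NumPy: upsample the co-located base-layer block and code only the residual against the enhancement-layer block. The nearest-neighbour upsampling, the fixed 2x ratio, and the block sizes below are simplifying assumptions, not the normative interpolation filters of the proposed codec.

```python
import numpy as np

def inter_layer_texture_prediction(base_block, enh_block):
    """Upsample a co-located base-layer block by 2x and use it as the texture
    prediction for the enhancement-layer block; only the residual would then
    be transformed and entropy coded."""
    upsampled = np.kron(base_block, np.ones((2, 2), dtype=base_block.dtype))
    residual = enh_block.astype(np.int16) - upsampled.astype(np.int16)
    return upsampled, residual

base = np.random.randint(0, 256, (8, 8), dtype=np.uint8)    # 8x8 base-layer block
enh = np.kron(base, np.ones((2, 2), dtype=np.uint8))        # synthetic 16x16 enhancement block
pred, res = inter_layer_texture_prediction(base, enh)
print(res.max())   # 0 here: the synthetic enhancement block matches its prediction exactly
```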

High Performance Computing 환경을 위한 고성능, 무정지 파일시스템 구현 (The development of the high effective and stoppageless file system for high performance computing)

  • 박영배;최승환;이상호;김경수;공용준
    • 한국콘텐츠학회 (Korea Contents Association) / Proceedings of the 2004 Fall Conference / pp.395-401 / 2004
  • In today's network-centric enterprise computing environment, it is becoming essential to transmit data reliably at very high rates. Client/server-based file systems such as NFS (Network File System) and AFS (Andrew File System) have met many demands so far, but they can no longer satisfy the requirements of today's scalable high-performance computing environments. Beyond performance, redundancy of the data-sharing service has also become a serious problem. With NFS, locking and caching issues can force the file system to be restarted and cause problems when it is used simply with IP takeover for high-availability (H/A) service. AFS provides file-sharing redundancy, but only when storage and equipment that support redundancy are available. Lustre is an open-source cluster file system developed to meet both demands. Lustre consists of three types of subsystems: the MDS (Meta-Data Server), which offers metadata services; OSTs (Object Storage Targets), which provide file I/O; and Lustre clients, which interact with the MDS and OSTs. These subsystems exchange messages to deliver a scalable, high-performance file system service. In this paper, we compare the transfer speed of gigabyte-scale files between Lustre and NFS as the number of concurrent users varies, and we also demonstrate the high availability of the file system by removing one or more OSTs during operation.
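A concurrent-writer throughput comparison of the kind described above can be sketched as follows. The mount points `/mnt/lustre` and `/mnt/nfs`, the file size, and the client count are hypothetical, and a real measurement would also have to defeat client-side caching and use multiple client hosts rather than threads.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def write_file(path, size_mb=1024, chunk_mb=4):
    """Write size_mb megabytes to path in chunk_mb chunks; return elapsed seconds."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                 # make sure the data actually reaches the servers
    return time.perf_counter() - start

def benchmark(mount_point, clients=8, size_mb=1024):
    """Aggregate MB/s for `clients` concurrent writers under one mount point."""
    with ThreadPoolExecutor(max_workers=clients) as pool:
        times = list(pool.map(lambda i: write_file(f"{mount_point}/bench_{i}.dat", size_mb),
                              range(clients)))
    return clients * size_mb / max(times)

for mount in ("/mnt/lustre", "/mnt/nfs"):    # hypothetical mount points under test
    print(mount, round(benchmark(mount), 1), "MB/s")
```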


모바일 컴퓨팅 환경에서 확장 가능한 ID 연동 시스템 설계 및 구현 (Design and Implementation of Scalable ID Federation System in Mobile Computing Environments)

  • 유인태;김배현;문영준;조영섭;진승헌
    • 인터넷정보학회논문지 / Vol. 6, No. 5 / pp.155-166 / 2005
  • In current network environments, users hold a separate, independent ID (identity) for each of the many servers they access on the Internet, which forces them to manage a large number of IDs and passwords. ID management systems are used to address this problem, but in the coming ubiquitous computing environment, where countless computers on wired and wireless networks are organically interconnected, managing user IDs and passwords becomes even more complex, and existing ID management systems based on a single circle of trust (COT) are not sufficient to resolve this difficulty. To solve this problem, this paper derives an ID federation model that extends ID federation across multiple circles of trust from wired computing environments to mobile computing environments, and implements the corresponding system. Through ID scalability experiments, we verified that the proposed ID federation model enables ID federation between systems in different circles of trust.
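The pairwise-pseudonym idea behind linking accounts across circles of trust can be illustrated with the toy sketch below: two providers link their local accounts through an opaque handle instead of sharing the real user ID. The class and provider names are invented for illustration; this is a conceptual model, not the protocol or implementation described in the paper.

```python
import secrets

class Provider:
    """Toy identity/service provider inside one circle of trust."""
    def __init__(self, name):
        self.name = name
        self.links = {}                          # (peer name, pseudonym) -> local user id

    def link(self, local_id, peer_name, pseudonym):
        self.links[(peer_name, pseudonym)] = local_id

    def resolve(self, peer_name, pseudonym):
        return self.links.get((peer_name, pseudonym))

def federate(provider_a, user_a, provider_b, user_b):
    """Create a pairwise pseudonym known only to the two providers."""
    pseudonym = secrets.token_urlsafe(16)        # opaque handle, carries no personal data
    provider_a.link(user_a, provider_b.name, pseudonym)
    provider_b.link(user_b, provider_a.name, pseudonym)
    return pseudonym

idp = Provider("mobile-operator")                # e.g. a provider in a mobile circle of trust
sp = Provider("portal")                          # a provider in another circle of trust
handle = federate(idp, "alice@operator", sp, "alice77")
print(sp.resolve(idp.name, handle))              # "alice77": single sign-on account lookup
```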


An Optimized CLBP Descriptor Based on a Scalable Block Size for Texture Classification

  • Li, Jianjun;Fan, Susu;Wang, Zhihui;Li, Haojie;Chang, Chin-Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 1 / pp.288-301 / 2017
  • In this paper, we propose an optimized algorithm for texture classification that computes a completed modeling of the local binary pattern (CLBP) over a scalable block size in an image, instead of the traditional LBP. First, we show that the CLBP descriptor is a better representative than LBP because it extracts more information from an image. Second, CLBP features computed over a scalable block size have an adaptive capability to represent both the gross and the detailed features of an image, which makes them suitable for image texture classification. This paper implements a machine-learning scheme by applying the CLBP features of scalable size to a Support Vector Machine (SVM) classifier. The proposed scheme has been evaluated on the Outex and CUReT databases, and the evaluation results show that the proposed approach achieves an improved recognition rate compared to previous research results.
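As a rough illustration of block-wise LBP-family features feeding an SVM, the sketch below uses scikit-image's uniform LBP; the sign component CLBP_S of the full CLBP descriptor coincides with the conventional LBP, while the magnitude and centre components from the paper are omitted here. Loading the Outex/CUReT images into `X_train`/`y_train` is left as a placeholder.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def block_lbp_histogram(image, radius=2, block=32):
    """Concatenate uniform-LBP histograms over non-overlapping blocks of a chosen size;
    varying `block` plays the role of the scalable block size."""
    points = 8 * radius
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    bins = points + 2                                     # number of uniform pattern labels
    feats = []
    for y in range(0, image.shape[0] - block + 1, block):
        for x in range(0, image.shape[1] - block + 1, block):
            hist, _ = np.histogram(lbp[y:y + block, x:x + block],
                                   bins=bins, range=(0, bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# X_train / y_train: grayscale texture images and labels, e.g. loaded from Outex or CUReT
# clf = SVC(kernel="rbf").fit([block_lbp_histogram(img) for img in X_train], y_train)
```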