• Title/Summary/Keyword: scalable data


Method of scalable video application in the advanced T-DMB (지상파 DMB 고도화 망에서의 스케일러블 비디오 부호화 기술)

  • Jun, Dong-San; Kwak, Sang-Min; Lim, Hyung-Soo; Choi, Hae-Chul; Kim, Jae-Gon; Lim, Jong-Soo; Hong, Jin-Woo
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.1 / pp.1-9 / 2007
  • Digital Multimedia Broadcasting (DMB) is a next-generation broadcasting service that gives mobile users access to various digital multimedia content, i.e., audio, video, and data. However, due to the bandwidth limitation, the spatial resolution is limited to CIF (Common Intermediate Format). The Advanced Terrestrial DMB (AT-DMB) secures additional bandwidth by adopting hierarchical modulation transmission technology and, together with scalable video coding (SVC), provides a high data rate and high quality for mobile multimedia broadcasting services. This paper proposes a scalable video coding technology for AT-DMB that enables high-quality mobile multimedia broadcasting services exceeding the quality and content capability of the current DMB service.
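
    The hierarchical-modulation idea above can be sketched in a few lines: the SVC base layer rides on the high-priority bits that select the constellation quadrant, while the enhancement layer rides on the low-priority bits that refine the point within the quadrant. This is a minimal illustrative mapping, not the AT-DMB transmission specification; the alpha priority ratio and bit layout are assumptions.

    ```python
    import numpy as np

    def hierarchical_16qam(base_bits, enh_bits, alpha=2.0):
        """Map SVC layers onto a hierarchical 16-QAM constellation (illustrative).

        base_bits, enh_bits: 0/1 arrays, two bits per symbol each.
        alpha: priority ratio; a larger alpha spreads the four quadrant
               clusters apart, protecting the base layer at the cost of
               enhancement-layer noise margin.
        """
        b = base_bits.reshape(-1, 2)   # high-priority bits -> quadrant
        e = enh_bits.reshape(-1, 2)    # low-priority bits  -> offset in quadrant
        # quadrant centers at (+-alpha, +-alpha), offsets at (+-1, +-1)
        i = alpha * (1 - 2 * b[:, 0]) + (1 - 2 * e[:, 0])
        q = alpha * (1 - 2 * b[:, 1]) + (1 - 2 * e[:, 1])
        return i + 1j * q

    symbols = hierarchical_16qam(np.random.randint(0, 2, 8),
                                 np.random.randint(0, 2, 8))
    print(symbols)
    ```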

Distributed Database Design using Evolutionary Algorithms

  • Tosun, Umut
    • Journal of Communications and Networks / v.16 no.4 / pp.430-435 / 2014
  • The performance of a distributed database system depends particularly on the site allocation of the fragments. Queries access different fragments across the sites, and each query has an originating site. A data allocation algorithm should distribute the fragments so as to minimize the transfer and settlement costs of executing the query plans; the primary cost is that of data transmission across the network. The data allocation problem in a distributed database is NP-complete, and scalable evolutionary algorithms have been developed to minimize the execution costs of the query plans. In this paper, quadratic assignment problem heuristics are designed and implemented for the data allocation problem, and the proposed algorithms find near-optimal solutions. In addition to fast ant colony, robust tabu search, and genetic algorithm solutions to this problem, we propose a fast and scalable hybrid genetic multi-start tabu search algorithm that outperforms the other well-known heuristics in both execution time and solution quality.
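
    A toy sketch of the allocation cost model and a plain single-move tabu search follows. This is not the paper's hybrid genetic multi-start variant, and all matrices below are invented for illustration.

    ```python
    import random

    def allocation_cost(assign, access, origin, comm, freq):
        """Total transmission cost: each query q, run from its originating
        site, pulls access[q][f] units of fragment f from the site that
        fragment is assigned to, weighted by the query frequency."""
        return sum(
            freq[q] * access[q][f] * comm[origin[q]][assign[f]]
            for q in range(len(access))
            for f in range(len(assign))
        )

    def tabu_search(n_frags, n_sites, cost_fn, iters=200, tenure=7):
        """Single-move tabu search: move one fragment to another site,
        then forbid moving it back for `tenure` iterations."""
        assign = [random.randrange(n_sites) for _ in range(n_frags)]
        best, best_cost = assign[:], cost_fn(assign)
        tabu = {}
        for it in range(iters):
            candidates = []
            for f in range(n_frags):
                for s in range(n_sites):
                    if s == assign[f] or tabu.get((f, s), -1) > it:
                        continue
                    trial = assign[:]
                    trial[f] = s
                    candidates.append((cost_fn(trial), f, s))
            if not candidates:
                break
            c, f, s = min(candidates)
            tabu[(f, assign[f])] = it + tenure   # forbid reversing the move
            assign[f] = s
            if c < best_cost:
                best, best_cost = assign[:], c
        return best, best_cost

    # toy instance: 4 queries, 5 fragments, 3 sites (all data invented)
    random.seed(0)
    access = [[random.randint(0, 3) for _ in range(5)] for _ in range(4)]
    origin = [0, 1, 2, 1]
    comm = [[0, 2, 5], [2, 0, 3], [5, 3, 0]]
    freq = [10, 5, 8, 2]
    print(tabu_search(5, 3, lambda a: allocation_cost(a, access, origin, comm, freq)))
    ```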

An Information Security Scheme Based on Video Watermarking and Encryption for H.264 Scalable Extension (H.264 Scalable Extension을 위한 비디오 워터마킹 및 암호화 기반의 정보보호 기법)

  • Kim, Won-Jei; Seung, Teak-Young; Lee, Suk-Hwan; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.15 no.3 / pp.299-311 / 2012
  • Recently, H.264 SE (Scalable Extension) has become a standard for next-generation multimedia services: one-source, multi-user service across heterogeneous networks and terminal equipment. Existing DRM schemes, however, do not fit the H.264 SE system: the system varies the amount of transmitted multimedia data according to the network environment and terminal performance, whereas existing DRM schemes handle a fixed amount of multimedia data regardless of these conditions. In this paper, an information security scheme combining video watermarking and encryption is presented for H.264 SE. The number of watermarks and their embedding positions are calculated from the number of enhancement-layer frames, which are created according to the state of the network and terminal equipment. To minimize the delay introduced by watermarking and encryption, the video data are watermarked and encrypted within the H.264 SE compression process. Experimental results confirm that the proposed scheme is robust against video compression, general signal processing, and geometric processing.
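
    One piece of the scheme as described can be sketched: scaling the watermark payload with the number of enhancement-layer frames and deriving keyed pseudo-random embedding positions. The function name, `bits_per_frame`, and the flat coefficient-index model are illustrative assumptions; the paper embeds inside the H.264 SE compression loop.

    ```python
    import hashlib
    import random

    def watermark_plan(n_enh_frames, n_coeffs, secret_key, bits_per_frame=16):
        """Derive the watermark payload size from the enhancement-layer
        frame count, and keyed embedding positions from a secret key
        (illustrative sketch, not the paper's embedding procedure)."""
        n_bits = n_enh_frames * bits_per_frame
        seed = hashlib.sha256(secret_key.encode()).digest()
        rng = random.Random(seed)                 # keyed, reproducible PRNG
        positions = rng.sample(range(n_coeffs), n_bits)  # distinct indices
        return n_bits, positions

    bits, pos = watermark_plan(n_enh_frames=3, n_coeffs=4096, secret_key="demo")
    print(bits, pos[:8])
    ```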

Scalable Prediction Models for Airbnb Listing in Spark Big Data Cluster using GPU-accelerated RAPIDS

  • Muralidharan, Samyuktha; Yadav, Savita; Huh, Jungwoo; Lee, Sanghoon; Woo, Jongwook
    • Journal of Information and Communication Convergence Engineering / v.20 no.2 / pp.96-102 / 2022
  • We aim to build predictive models for Airbnb prices using GPU-accelerated RAPIDS in a big data cluster, with the Airbnb Listings datasets used for the predictive analysis. Several machine-learning algorithms are adopted to build models that predict the price of Airbnb listings. We compare the results of traditional and big data approaches to machine learning for price prediction and discuss the performance of the models. The big data models were built on a Databricks Spark cluster, a distributed parallel computing system, and multi-GPU models were implemented with RAPIDS on the Spark cluster. The RAPIDS model was developed using the XGBoost algorithm, whereas the other models used traditional central processing unit (CPU)-based algorithms. All models were compared in terms of accuracy metrics and computing time, and we observed that the XGBoost model with RAPIDS on GPUs achieved the highest accuracy and the shortest computing time.
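
    A minimal single-node sketch of a GPU-trained XGBoost price regressor is below. The file name `listings.csv` and its columns are placeholders for the Airbnb Listings data; `device="cuda"` assumes a recent XGBoost release; and the paper's actual setup runs RAPIDS on a Databricks Spark cluster rather than this standalone form.

    ```python
    import pandas as pd
    import xgboost as xgb
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # placeholder dataset: numeric features only, target column "price"
    df = pd.read_csv("listings.csv")
    X = df.drop(columns=["price"]).select_dtypes("number")
    y = df["price"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=42)

    # tree_method="hist" with device="cuda" trains on the GPU in
    # recent XGBoost releases (>= 2.0)
    model = xgb.XGBRegressor(tree_method="hist", device="cuda",
                             n_estimators=300, max_depth=8)
    model.fit(X_tr, y_tr)
    print("R2:", r2_score(y_te, model.predict(X_te)))
    ```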

A Scalable OWL Horst Lite Ontology Reasoning Approach based on Distributed Cluster Memories (분산 클러스터 메모리 기반 대용량 OWL Horst Lite 온톨로지 추론 기법)

  • Kim, Je-Min; Park, Young-Tack
    • Journal of KIISE / v.42 no.3 / pp.307-319 / 2015
  • Current ontology studies use the Hadoop distributed storage framework to perform MapReduce-based reasoning over scalable ontologies. In this paper, however, we propose a novel approach to scalable Web Ontology Language (OWL) Horst Lite ontology reasoning based on distributed cluster memories. Rule-based reasoning, which is frequently used for scalable ontologies, iteratively executes triple-format ontology rules until no new data are inferred. Therefore, when scalable ontology reasoning is performed from computer hard drives, the reasoner suffers from performance limitations. To overcome this drawback, we propose an approach that loads the ontologies into distributed cluster memories using Spark, a memory-based distributed computing framework, and executes the reasoning there. To implement an appropriate OWL Horst Lite reasoning system on Spark, our method divides the scalable ontologies into blocks, loads each block into the cluster nodes, and then handles the data in the distributed memories. We used the Lehigh University Benchmark (LUBM), a standard workload for evaluating ontology inference and search speed, applied at the LUBM8000 scale (1.1 billion triples, 155 gigabytes). Compared with WebPIE, a representative MapReduce-based scalable ontology reasoner, the proposed approach improved throughput by 320% (62k triples/s versus WebPIE's 19k triples/s).
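
    The iterate-until-fixpoint pattern described above can be sketched with Spark RDDs held in memory. Below, a single OWL-Horst-style rule (transitivity of subClassOf) is applied until no new triples appear; the toy data, the block partitioning, and the full Horst Lite rule set of the paper are all simplified away.

    ```python
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "horst-lite-sketch")

    # toy triples; the paper loads ontology blocks into cluster memory
    triples = sc.parallelize([
        ("A", "subClassOf", "B"),
        ("B", "subClassOf", "C"),
        ("C", "subClassOf", "D"),
    ]).cache()

    def subclass_pairs(t):
        """(x, subClassOf, y) triples as (x, y) pairs."""
        return t.filter(lambda x: x[1] == "subClassOf") \
                .map(lambda x: (x[0], x[2]))

    # rule: (x subClassOf y), (y subClassOf z) => (x subClassOf z),
    # iterated until the inferred data no longer grows
    closed = triples
    while True:
        pairs = subclass_pairs(closed)
        by_obj = pairs.map(lambda p: (p[1], p[0]))      # key by superclass
        derived = by_obj.join(pairs) \
                        .map(lambda kv: (kv[1][0], "subClassOf", kv[1][1])) \
                        .distinct()
        new = derived.subtract(closed)
        if new.isEmpty():
            break
        closed = closed.union(new).distinct().cache()

    print(sorted(closed.collect()))
    sc.stop()
    ```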

An Approach of Scalable SHIF Ontology Reasoning using Spark Framework (Spark 프레임워크를 적용한 대용량 SHIF 온톨로지 추론 기법)

  • Kim, Je-Min; Park, Young-Tack
    • Journal of KIISE / v.42 no.10 / pp.1195-1206 / 2015
  • For the management of knowledge systems, systems that automatically infer and manage scalable knowledge are required. Most such systems use ontologies to exchange knowledge between machines and to infer new knowledge, so approaches that infer new knowledge over scalable ontologies are needed. In this paper, we propose an approach that performs rule-based reasoning for scalable SHIF ontologies in the Spark framework, which performs MapReduce-like operations over the distributed memory of a cluster. For efficient reasoning in distributed memory, we focus on three areas. First, we define a data structure that splits scalable ontology triples into small sets for each reasoning rule and loads these triple sets into distributed memory. Second, a rule execution order and iteration conditions are defined based on the dependencies and correlations among the SHIF rules. Finally, we describe the operations used to execute the rules, which are based on the reasoning algorithms. To evaluate the suggested methods, we compare against WebPIE, a representative cluster-based ontology reasoner, using the LUBM dataset, a standard benchmark for evaluating ontology inference and search speed. The proposed approach improves throughput by 28,400% (157k triples/sec versus WebPIE's 553 triples/sec) on LUBM.
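
    The second area, deriving a rule execution order from inter-rule dependencies, can be sketched as a topological sort over a rule dependency graph. The rule names and dependencies below are hypothetical; cyclically dependent rules would be grouped and iterated together, which is omitted here.

    ```python
    from collections import defaultdict, deque

    # hypothetical dependencies: rule -> rules whose output it consumes
    deps = {
        "R_subclass_trans": set(),
        "R_subprop_trans": set(),
        "R_type_inherit": {"R_subclass_trans"},
        "R_prop_inherit": {"R_subprop_trans"},
    }

    def execution_order(deps):
        """Kahn's algorithm: schedule producer rules before consumer
        rules, so each rule sees the triples it depends on."""
        indeg = {r: len(d) for r, d in deps.items()}
        consumers = defaultdict(list)
        for r, ds in deps.items():
            for d in ds:
                consumers[d].append(r)
        queue = deque(r for r, n in indeg.items() if n == 0)
        order = []
        while queue:
            r = queue.popleft()
            order.append(r)
            for c in consumers[r]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    queue.append(c)
        return order

    print(execution_order(deps))
    ```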

Gate Length Dependence of Intrinsic Equivalent Circuit Parameters for RF CMOS Devices (RF CMOS 소자 내부 등가회로 파라미터의 게이트길이에 대한 종속성)

  • Choi, Mun-Sung; Lee, Yong-Taek; Lee, Seong-Hearn
    • Proceedings of the IEEK Conference / 2004.06b / pp.505-508 / 2004
  • Gate-length-dependent data for intrinsic MOSFET equivalent-circuit parameters are extracted using a direct extraction technique based on simple 2-port parameter equations. The extracted data scale reasonably well with gate length. They are verified to be accurate by the good agreement between modeled and measured S-parameters up to 30 GHz, and they will be helpful for constructing a scalable RF MOSFET model.
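
    For reference, the textbook quasi-static form of such a direct extraction from de-embedded Y-parameters looks like the sketch below; the paper's exact 2-port equations may differ, and the sample values are invented.

    ```python
    import numpy as np

    def intrinsic_from_y(y11, y12, y21, y22, freq_hz):
        """Quasi-static extraction of intrinsic MOSFET elements from
        de-embedded Y-parameters (a common direct-extraction form)."""
        w = 2 * np.pi * freq_hz
        cgd = -np.imag(y12) / w                 # gate-drain capacitance
        cgs = (np.imag(y11) + np.imag(y12)) / w # gate-source capacitance
        cds = (np.imag(y22) + np.imag(y12)) / w # drain-source capacitance
        gm = np.abs(y21 - y12)                  # transconductance magnitude
        gds = np.real(y22 + y12)                # output conductance
        return dict(cgs=cgs, cgd=cgd, cds=cds, gm=gm, gds=gds)

    # made-up Y-parameters at 10 GHz, purely for illustration
    p = intrinsic_from_y(0.2e-3 + 1.9e-3j, -0.1e-3 - 0.4e-3j,
                         25e-3 - 2.5e-3j, 0.9e-3 + 1.1e-3j, 10e9)
    print({k: float(v) for k, v in p.items()})
    ```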


Scalable Approach to Failure Analysis of High-Performance Computing Systems

  • Shawky, Doaa
    • ETRI Journal / v.36 no.6 / pp.1023-1031 / 2014
  • Failure analysis is necessary to clarify the root cause of a failure, predict the next time a failure may occur, and improve the performance and reliability of a system. However, it is not an easy task to analyze and interpret failure data, especially for complex systems. Usually, these data are represented using many attributes, and sometimes they are inconsistent and ambiguous. In this paper, we present a scalable approach for the analysis and interpretation of failure data of high-performance computing systems. The approach employs rough sets theory (RST) for this task. The application of RST to a large publicly available set of failure data highlights the main attributes responsible for the root cause of a failure. In addition, it is used to analyze other failure characteristics, such as time between failures, repair times, workload running on a failed node, and failure category. Experimental results show the scalability of the presented approach and its ability to reveal dependencies among different failure characteristics.
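
    The RST machinery involved can be sketched via the dependency degree gamma(C, D): the fraction of records whose condition-attribute equivalence class maps to a single decision value. The failure records and attribute names below are invented for illustration.

    ```python
    from collections import defaultdict

    def dependency_degree(records, condition_attrs, decision_attr):
        """gamma(C, D): size of the positive region over all records,
        i.e. the fraction of records whose condition-attribute
        equivalence class is consistent with one decision value."""
        classes = defaultdict(set)
        for rec in records:
            key = tuple(rec[a] for a in condition_attrs)
            classes[key].add(rec[decision_attr])
        consistent = {k for k, ds in classes.items() if len(ds) == 1}
        positive = sum(
            1 for rec in records
            if tuple(rec[a] for a in condition_attrs) in consistent
        )
        return positive / len(records)

    # made-up failure log: do node and workload determine the category?
    records = [
        {"node": "io",  "workload": "high", "repair_h": 2, "category": "hw"},
        {"node": "io",  "workload": "low",  "repair_h": 1, "category": "sw"},
        {"node": "cpu", "workload": "high", "repair_h": 3, "category": "hw"},
        {"node": "cpu", "workload": "high", "repair_h": 2, "category": "sw"},
    ]
    print(dependency_degree(records, ["node", "workload"], "category"))  # 0.5
    ```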

Scalable Video Coding with Low Complex Wavelet Transform (공간 웨이블릿 변환의 복잡도를 줄인 스케일러블 비디오 코딩에 관한 연구)

  • Park, Seong-Ho; Kim, Won-Ha; Jeong, Se-Yoon
    • Proceedings of the KIEE Conference / 2004.11c / pp.298-300 / 2004
  • In the decoding process of interframe wavelet coding, the inverse wavelet transform demands considerable computation. However, the decoder may need to run on devices as varied as PDAs, notebooks, PCs, and set-top boxes, so its complexity should be adapted to each processor's computational power; a decoder designed accordingly can provide optimal services on all such devices. It is therefore natural that complexity scalability and a low-complexity codec are listed among the requirements for scalable video coding. In this contribution, we develop a method of controlling and lowering the complexity of the spatial wavelet transform while sustaining almost the same coding efficiency as the conventional spatial wavelet transform. In addition, the proposed method may alleviate the ringing effect for certain video data.
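
    As an example of the kind of low-complexity spatial transform targeted here, the Le Gall 5/3 wavelet computed by lifting needs only two cheap passes per level. This is a standard transform shown for illustration, not the authors' proposed method.

    ```python
    def lift53_forward(x):
        """One level of the Le Gall 5/3 wavelet via lifting: a predict
        pass and an update pass, with symmetric handling at the edges
        approximated by index clamping."""
        n = len(x)
        assert n % 2 == 0
        even, odd = x[0::2], x[1::2]
        half = n // 2
        # predict: detail = odd sample minus average of even neighbours
        d = [odd[i] - (even[i] + even[min(i + 1, half - 1)]) / 2
             for i in range(half)]
        # update: approximation = even sample plus quarter of details
        a = [even[i] + (d[max(i - 1, 0)] + d[i]) / 4 for i in range(half)]
        return a, d

    approx, detail = lift53_forward([3, 5, 4, 4, 8, 9, 7, 2])
    print(approx, detail)
    ```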


High-level framework for scalable 3D video coding based on HEVC (HEVC 기반 삼차원 영상의 스케일러블 전송을 위한 확장 시스템)

  • Choi, Byeongdoo; Cho, Yongjin; Park, Min Woo; Lee, Jin Young; Wey, Hocheon; Kim, Chanyul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2013.06a / pp.182-184 / 2013
  • An HEVC-based scalable 3D video coding system is proposed. The proposed system supports scalable transmission of multiview video data with depth maps. The key technologies in this system are reference picture management, reference picture list construction, and cross-layer dependency signaling. All of the proposed technologies are used in the development of a video coding system for UHD stereo displays and glasses-free 3D displays.
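
    A toy sketch of the reference-list construction idea: nearest temporal references first, then an inter-layer (base-view) picture when cross-layer dependency signaling enables it. The field names and ordering rule are simplified assumptions; the actual HEVC extension process is considerably more involved.

    ```python
    def build_ref_list(temporal_refs, inter_layer_ref, max_refs=4):
        """Order temporal references by POC distance, then append the
        inter-layer picture if dependency signaling provides one."""
        refs = sorted(temporal_refs, key=lambda p: abs(p["poc_delta"]))
        if inter_layer_ref is not None:
            refs.append(inter_layer_ref)
        return refs[:max_refs]

    temporal = [{"id": "T-1", "poc_delta": -1},
                {"id": "T-2", "poc_delta": -2},
                {"id": "T+1", "poc_delta": 1}]
    base_view = {"id": "BaseView", "poc_delta": 0}
    print([r["id"] for r in build_ref_list(temporal, base_view)])
    ```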
