• Title/Summary/Keyword: computation scalability


A Reliable Group Key Management Scheme for Broadcast Encryption

  • Hur, Junbeom;Lee, Younho
    • Journal of Communications and Networks
    • /
    • v.18 no.2
    • /
    • pp.246-260
    • /
    • 2016
  • A major challenge in achieving scalable access control for a large number of subscribers in a public broadcast is to distribute key update messages reliably to all stateless receivers. However, in a public broadcast, the rekeying messages can be dropped or compromised during transmission over an insecure broadcast channel, or transmitted to receivers while they are off-line. In this study, we propose a novel group key management scheme. It features a mechanism that allows legitimate receivers to recover the current group key, even if they lose key update messages for long-term sessions, using short hint messages and member computation. Performance analysis shows that the proposed scheme has the advantages of scalability and efficient rekeying compared to previous reliable group key distribution schemes. The proposed key management scheme targets a conditional access system in a media broadcast in which there is no feedback channel from receivers to the broadcasting station.

k-NN Join Based on LSH in Big Data Environment

  • Ji, Jiaqi;Chung, Yeongjee
    • Journal of information and communication convergence engineering
    • /
    • v.16 no.2
    • /
    • pp.99-105
    • /
    • 2018
  • k-Nearest neighbor join (k-NN Join) is a computationally intensive algorithm designed to find the k nearest neighbors in a dataset S for every object in another dataset R. Most related studies on k-NN Join are based on single-computer operations. As the data dimensionality and data volume increase, running the k-NN Join algorithm on a single computer cannot generate results quickly. To solve this scalability problem, we introduce the locality-sensitive hashing (LSH) k-NN Join algorithm implemented in Spark, an approach for high-dimensional big data. LSH is used to map similar data onto the same bucket, which reduces the data search scope. To achieve a parallel implementation of the algorithm on multiple computers, the Spark framework is used to accelerate the computation of distances between objects in a cluster. Results show that our proposed approach is fast and accurate for high-dimensional big data.
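
    A minimal PySpark-style sketch of the bucket-then-join idea summarized above, assuming a random-projection LSH family; the toy data, bucket function, and variable names are illustrative rather than the authors' implementation, and a single hash table yields only approximate neighbors.

        # Sketch: hash R and S into LSH buckets, then search neighbors bucket-by-bucket.
        import numpy as np
        from pyspark import SparkContext

        sc = SparkContext(appName="lsh-knn-join-sketch")

        d, n_planes, k = 16, 8, 3
        planes = np.random.randn(n_planes, d)            # shared random hyperplanes

        def bucket(p):
            # Sign pattern of the projections -> one LSH bucket id
            return tuple((planes @ p > 0).astype(int))

        R = sc.parallelize([np.random.randn(d) for _ in range(1000)])
        S = sc.parallelize([np.random.randn(d) for _ in range(5000)])

        R_b = R.map(lambda p: (bucket(p), p))
        S_b = S.map(lambda p: (bucket(p), p))

        def knn_in_bucket(pair):
            _, (r_pts, s_pts) = pair
            s_pts = list(s_pts)
            out = []
            for r in r_pts:
                nearest = sorted(s_pts, key=lambda s: float(np.linalg.norm(r - s)))[:k]
                out.append((r.tolist(), [s.tolist() for s in nearest]))
            return out

        # Distances are computed in parallel across the cluster, one bucket at a time.
        result = R_b.cogroup(S_b).flatMap(knn_in_bucket).collect()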

Cooperative Coevolution Differential Evolution Based on Spark for Large-Scale Optimization Problems

  • Tan, Xujie;Lee, Hyun-Ae;Shin, Seong-Yoon
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.155-160
    • /
    • 2021
  • Differential evolution is an efficient algorithm for solving continuous optimization problems. However, its performance deteriorates rapidly, and the runtime increases exponentially, when differential evolution is applied to large-scale optimization problems. Hence, a novel cooperative coevolution differential evolution based on Spark (known as SparkDECC) is proposed. The divide-and-conquer strategy is used in SparkDECC. First, the large-scale problem is decomposed into several low-dimensional subproblems using the random grouping strategy. Subsequently, each subproblem can be addressed in a parallel manner by exploiting the parallel computation capability of the resilient distributed dataset (RDD) model in Spark. Finally, the optimal solution of the entire problem is obtained using the cooperation mechanism. The experimental results on 13 high-dimensional benchmark functions show that the new algorithm performs well in terms of speedup and scalability. The effectiveness and applicability of the proposed algorithm are verified.
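
    A rough serial sketch of the random-grouping, cooperative-coevolution loop described above; in SparkDECC the subproblems would be distributed as RDD partitions, and SciPy's stock differential evolution stands in for the paper's DE variant. The sphere objective and all parameter values are placeholders.

        # Sketch: decompose, optimize each subcomponent in the shared context, cooperate.
        import numpy as np
        from scipy.optimize import differential_evolution

        D, n_groups = 100, 5
        f = lambda x: float(np.sum(x ** 2))              # toy objective (sphere function)

        context = np.random.uniform(-5, 5, D)            # shared context vector
        groups = np.array_split(np.random.permutation(D), n_groups)   # random grouping

        for g in groups:                                 # in SparkDECC: one task per group
            def sub_obj(xg, g=g):
                trial = context.copy()
                trial[g] = xg                            # evaluate subcomponent in full context
                return f(trial)
            res = differential_evolution(sub_obj, bounds=[(-5, 5)] * len(g), maxiter=50)
            context[g] = res.x                           # cooperation: write the best part back

        print("best value found:", f(context))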

Efficient Threshold Schnorr's Signature Scheme (Schnorr 전자서명을 이용한 효율적인 Threshold 서명 기법)

  • 양대헌;권태경
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.14 no.2
    • /
    • pp.69-74
    • /
    • 2004
  • Threshold digital signatures are very useful for networks that have no infrastructure, such as ad hoc networks. To date, research on threshold digital signatures has mainly focused on RSA and DSA. Although Schnorr's digital signature scheme is very efficient in terms of both computation and communication, its structure, which relies on an interactive proof, makes conversion to a threshold version difficult. This paper proposes an efficient threshold signature scheme based on Schnorr's signature. It has the desirable property of scalability and reduces runtime costs through precomputation.
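
    For context, a toy sketch of ordinary single-signer Schnorr signing showing why precomputation reduces the online cost: the commitment can be prepared before the message arrives. The threshold construction in the paper, which shares the secret key among signers and combines partial signatures, is not reproduced here, and the tiny group parameters are demonstration-only.

        import hashlib, secrets

        p, q, g = 23, 11, 2                  # demonstration-only group: g has order q mod p

        x = secrets.randbelow(q - 1) + 1     # secret key
        y = pow(g, x, p)                     # public key

        def H(r, m):
            return int.from_bytes(hashlib.sha256(f"{r}|{m}".encode()).digest(), "big") % q

        # Offline phase (precomputation): nonce and commitment, independent of the message.
        k = secrets.randbelow(q - 1) + 1
        r = pow(g, k, p)

        # Online phase: one hash plus one modular multiply-add per signature.
        m = "hello"
        e = H(r, m)
        s = (k + x * e) % q

        # Verification: g^s * y^(-e) must reproduce the commitment r.
        r_check = (pow(g, s, p) * pow(y, (q - e) % q, p)) % p
        assert H(r_check, m) == e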

Concurrent blockchain architecture with small node network (소규모 노드로 구성된 고속 병렬 블록체인 아키텍처)

  • Joi, YongJoon;Shin, DongMyung
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.2
    • /
    • pp.19-29
    • /
    • 2021
  • Blockchain technology satisfies the trust requirement and is now entering a new stage focused on performance. However, current blockchain technology has significant disadvantages in scalability and latency because of its architecture. Therefore, to adopt blockchain technology in real industry, we must overcome the performance issue by redesigning the blockchain architecture. This paper introduces several element technologies and a novel blockchain architecture, TPAC, which preserves blockchain's technical advantages while delivering more stable and faster transaction processing and lower latency.

ADMM for least square problems with pairwise-difference penalties for coefficient grouping

  • Park, Soohee;Shin, Seung Jun
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.4
    • /
    • pp.441-451
    • /
    • 2022
  • In the era of big data, scalability is a crucial issue in learning models. Among many others, the alternating direction method of multipliers (ADMM; Boyd et al., 2011) has gained great popularity for solving large-scale problems efficiently. In this article, we propose applying the ADMM algorithm to solve the least square problem penalized by the pairwise-difference penalty, which is frequently used to identify group structures among coefficients. The ADMM algorithm enables us to solve the high-dimensional problem efficiently in a unified fashion and thus allows us to employ several different types of penalty functions, such as the LASSO, Elastic Net, SCAD, and MCP, for the penalized problem. Additionally, the ADMM algorithm naturally extends to distributed computation and real-time updates, both desirable when dealing with large amounts of data.
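
    To make the formulation concrete, a small NumPy sketch of the scaled-form ADMM updates for the L1 (LASSO-type) pairwise-difference penalty; the toy data, λ, and ρ are made up, and the SCAD/MCP variants discussed in the paper would only change the η-update (the proximal step).

        # minimize (1/2)||y - X b||^2 + lam * sum_{j<k} |b_j - b_k|
        # rewritten with D b = eta, where each row of D encodes one difference b_j - b_k.
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)
        n, p, lam, rho = 100, 5, 1.0, 1.0
        X = rng.standard_normal((n, p))
        beta_true = np.array([2.0, 2.0, 0.0, 0.0, -1.0])   # grouped coefficients
        y = X @ beta_true + 0.1 * rng.standard_normal(n)

        pairs = list(combinations(range(p), 2))
        D = np.zeros((len(pairs), p))
        for row, (j, k) in enumerate(pairs):
            D[row, j], D[row, k] = 1.0, -1.0

        soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        beta, eta, u = np.zeros(p), np.zeros(len(pairs)), np.zeros(len(pairs))
        A = np.linalg.inv(X.T @ X + rho * D.T @ D)         # cached ridge-type solve

        for _ in range(500):
            beta = A @ (X.T @ y + rho * D.T @ (eta - u))   # beta-update
            eta = soft(D @ beta + u, lam / rho)            # eta-update (soft-thresholding)
            u = u + D @ beta - eta                         # scaled dual update

        print(np.round(beta, 2))                           # coefficients in a group are pulled together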

Deep learning-based scalable and robust channel estimator for wireless cellular networks

  • Anseok Lee;Yongjin Kwon;Hanjun Park;Heesoo Lee
    • ETRI Journal
    • /
    • v.44 no.6
    • /
    • pp.915-924
    • /
    • 2022
  • In this paper, we present a two-stage scalable channel estimator (TSCE), a deep learning (DL)-based scalable and robust channel estimator for wireless cellular networks, which is made up of two DL networks to efficiently support different resource allocation sizes and reference signal configurations. Both networks use the transformer, a cutting-edge neural network architecture, as a backbone for accurate estimation. For computation-efficient global feature extraction, we propose using window and window-averaging-based self-attention. Our results show that TSCE learns wireless propagation channels correctly and outperforms both traditional estimators and baseline DL-based estimators. Additionally, scalability and robustness evaluations are performed, revealing that TSCE is more robust in various environments than the baseline DL-based estimators.
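
    A rough NumPy sketch of plain non-overlapping window self-attention, the kind of locality restriction that makes the attention cost grow with the window length instead of the full sequence length; the paper's window-averaging variant, learned projections, and channel-estimation heads are not reproduced here.

        import numpy as np

        def softmax(z):
            z = z - z.max(axis=-1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=-1, keepdims=True)

        def window_self_attention(x, w):
            # x: (L, d) token sequence, w: window length dividing L; single head, no projections
            L, d = x.shape
            out = np.empty_like(x)
            for s in range(0, L, w):
                q = k = v = x[s:s + w]
                att = softmax(q @ k.T / np.sqrt(d))   # attention restricted to one window
                out[s:s + w] = att @ v
            return out

        x = np.random.randn(48, 8)                    # e.g., 48 resource elements, 8 features
        y = window_self_attention(x, w=12)            # cost O(L*w*d) instead of O(L^2*d)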

A Scalable Data Integrity Mechanism Based on Provable Data Possession and JARs

  • Zafar, Faheem;Khan, Abid;Ahmed, Mansoor;Khan, Majid Iqbal;Jabeen, Farhana;Hamid, Zara;Ahmed, Naveed;Bashir, Faisal
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.6
    • /
    • pp.2851-2873
    • /
    • 2016
  • Cloud storage as a service provides high scalability and availability according to user needs, without a large investment in infrastructure. However, data security risks, such as the confidentiality, privacy, and integrity of the outsourced data, are associated with the cloud-computing model. Over the years, techniques such as remote data checking (RDC), data integrity protection (DIP), provable data possession (PDP), proof of storage (POS), and proof of retrievability (POR) have been devised to frequently and securely check the integrity of outsourced data. In this paper, we improve the efficiency of the PDP scheme in terms of computation, storage, and communication cost for large data archives. By utilizing the capabilities of JAR and ZIP technology, the cost of searching the metadata in the proof generation process is reduced from O(n) to O(1). Moreover, direct access to the metadata reduces disk I/O cost, resulting in 50 to 60 times faster proof generation for large datasets. Furthermore, our proposed scheme achieves a 50% reduction in the storage size of the data and the respective metadata, providing storage and communication efficiency.
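
    The O(n)-to-O(1) metadata search comes from the fact that a ZIP/JAR archive exposes a central directory, so the tag of a challenged block can be read by name rather than by scanning all tags. A toy Python illustration with made-up entry names, not the paper's packaging format:

        import io, zipfile

        # Build a toy archive with one metadata entry ("tag") per data block.
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as z:
            for i in range(1000):
                z.writestr(f"tags/block_{i}.tag", f"tag-for-block-{i}")

        # Proof generation for a challenged block: direct access by name, no linear scan.
        with zipfile.ZipFile(buf, "r") as z:
            challenged = 421
            tag = z.read(f"tags/block_{challenged}.tag")   # central-directory lookup
            print(tag.decode())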

High-Speed Reed-Solomon Decoder Using New Degree Computationless Modified Euclid's Algorithm (새로운 DCME 알고리즘을 사용한 고속 Reed-Solomon 복호기)

  • 백재현;선우명훈
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.6
    • /
    • pp.459-468
    • /
    • 2003
  • This paper proposes a novel low-cost and high-speed Reed-Solomon (RS) decoder based on a new degree computationless modified Euclid's (DCME) algorithm. This architecture has quite low hardware complexity compared with conventional modified Euclid's (ME) architectures, since it completely removes the degree computation and comparison circuits. The architecture, employing a systolic array, requires only a latency of 2t clock cycles to solve the key equation, without initial latency. In addition, the DCME architecture using 3t+2 basic cells has regularity and scalability since it uses only one type of processing element. The RS decoder has been synthesized using the 0.25 μm Faraday CMOS standard cell library; it operates at 200 MHz and supports a data rate of up to 1.6 Gbps. For the (255, 239, 8) RS code, the gate counts of the DCME architecture and the whole RS decoder excluding FIFO memory are only 21,760 and 42,213, respectively. The proposed RS decoder reduces the total gate count by at least 23% and the total latency by at least 10% compared with conventional ME architectures.
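
    For reference, the key equation that the (DC)ME block solves, written in the usual textbook notation with syndrome polynomial S(x), error-locator polynomial σ(x), and error-evaluator polynomial ω(x) (indexing conventions for S(x) vary between papers):

        \sigma(x)\, S(x) \equiv \omega(x) \pmod{x^{2t}}, \qquad \deg \omega(x) < \deg \sigma(x) \le t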

Analysis of Execution Behavior for Multiprocess-based Web Robots (다중 프로세스 기반 웹 로봇의 수행동작 분석)

  • Kim Hie-Cheol;Lee Yong-Doo
    • Journal of Digital Contents Society
    • /
    • v.2 no.1
    • /
    • pp.9-19
    • /
    • 2001
  • A Web robot is an important Internet software technology used in a variety of Internet applications, including search engines. As the Internet continues to grow, implementations of high-performance Web robots are urgently demanded. For this, research specifically geared toward the performance scalability of Web robots is required. However, because research has focused mostly on issues related to commercial implementations, scientific studies of performance scalability are still lacking. In this research, we choose a Web robot model implemented with fork-join-based multiple processes. With respect to this model, we evaluate the effect that the timeout values set on requests from Web robots to Web servers have on collection efficiency. We also analyze the behavior of Web robots by comparing the execution time of URL extraction against that of uniqueness checking for the extracted URLs, and by comparing the computation time against the network time. Based on the analysis results, we suggest directions for the design of high-performance Web robots.
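
    A toy sketch of one fork-join crawl round with a per-request timeout, the two aspects analyzed above (timeout versus collection efficiency, and network time versus the computation spent on link extraction and uniqueness checking); the seed URL, pool size, timeout, and naive href regex are placeholders.

        from multiprocessing import Pool
        from urllib.request import urlopen
        from urllib.parse import urljoin
        import re, time

        TIMEOUT = 5.0                                    # seconds; the value the paper varies

        def fetch_and_extract(url):
            t0 = time.monotonic()
            try:
                html = urlopen(url, timeout=TIMEOUT).read().decode("utf-8", "replace")
            except Exception:
                return url, [], time.monotonic() - t0    # dropped page lowers collection efficiency
            net_time = time.monotonic() - t0
            links = [urljoin(url, h) for h in re.findall(r'href="([^"]+)"', html)]
            return url, links, net_time

        if __name__ == "__main__":
            frontier = ["https://example.com/"]          # placeholder seed
            seen = set(frontier)
            with Pool(processes=8) as pool:              # fork: N worker processes
                results = pool.map(fetch_and_extract, frontier)   # join: wait for all workers
            for url, links, net_time in results:
                for link in links:
                    if link not in seen:                 # uniqueness check (computation time)
                        seen.add(link)
                        frontier.append(link)            # the next round would re-fork on these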
