• Title/Summary/Keyword: time-memory trade-off

Memory-Efficient Time-Memory Trade-Off Cryptanalysis (메모리 효율적인 TMTO 암호 해독 방법)

  • Kim, Young-Sik;Lim, Dae-Woon
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.1C / pp.28-36 / 2009
  • Time-memory trade-off (TMTO) cryptanalysis, proposed by Hellman, can be applied to various crypto-systems such as block ciphers, stream ciphers, and hash functions. In this paper, we propose a novel method to reduce the memory required to store TMTO tables. The starting points in a TMTO table can be replaced by the indices of n-bit samples taken from a sequence in a family of pseudo-random sequences with good cross-correlation, which reduces the memory needed for the starting points. With this method, the memory size can be reduced to about one tenth at the cost of a slight increase in operation time during the online phase. Because memory is considered a more expensive resource than time, TMTO cryptanalysis becomes more feasible against many real crypto-systems.
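
For readers unfamiliar with Hellman tables, the following is a minimal Python sketch of the idea described above: chain starting points are regenerated on demand from a short index instead of being stored, so each table row keeps only the index and the end point. The toy one-way function `f` and the `sample_from_sequence` generator are illustrative stand-ins, not the paper's actual sequence-family construction.

```python
# Toy Hellman-style TMTO table in which starting points are not stored:
# each chain's starting point is reproduced on demand from a short index,
# so the table keeps only (end point -> index).
import hashlib

N_BITS = 32                      # toy search-space size (2^32)
MASK = (1 << N_BITS) - 1

def f(x: int) -> int:
    """Toy one-way step function (truncated hash)."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(4, "big")).digest()[:4], "big") & MASK

def sample_from_sequence(index: int) -> int:
    """Reproduce the n-bit starting point for a given short index.
    Here it is just a keyed hash; the paper instead uses samples of a
    pseudo-random sequence family with good cross-correlation."""
    return int.from_bytes(hashlib.sha256(b"seq" + index.to_bytes(4, "big")).digest()[:4], "big") & MASK

def build_table(num_chains: int, chain_len: int) -> dict[int, int]:
    """Map chain end point -> starting-point *index* (not the point itself)."""
    table = {}
    for idx in range(num_chains):
        x = sample_from_sequence(idx)
        for _ in range(chain_len):
            x = f(x)
        table[x] = idx           # colliding end points simply overwrite; fine for a sketch
    return table

def lookup(table: dict[int, int], y: int, chain_len: int) -> int | None:
    """Online phase: walk y forward; on an end-point hit, regenerate the
    starting point from its index and replay the chain to recover a preimage."""
    z = y
    for step in range(chain_len):
        if z in table:
            x = sample_from_sequence(table[z])           # regenerate starting point
            for _ in range(chain_len - step - 1):
                x = f(x)
            if f(x) == y:                                # reject false alarms
                return x
        z = f(z)
    return None

table = build_table(num_chains=1 << 12, chain_len=64)
x_true = sample_from_sequence(7)
for _ in range(10):
    x_true = f(x_true)                                   # a point known to lie on chain 7
y = f(x_true)
print(lookup(table, y, chain_len=64) == x_true)          # True unless chains merged
```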

Efficient Hardware Implementation of Real-time Rectification using Adaptively Compressed LUT

  • Kim, Jong-hak;Kim, Jae-gon;Oh, Jung-kyun;Kang, Seong-muk;Cho, Jun-Dong
    • JSTS:Journal of Semiconductor Technology and Science / v.16 no.1 / pp.44-57 / 2016
  • Rectification is used as a preprocessing step to reduce the computational complexity of disparity estimation. However, rectification itself also requires complex computation. To minimize this cost, rectification using a lookup table (R-LUT) has been introduced, but because the R-LUT consumes a large amount of memory, rectification with a compressed LUT (R-CLUT) was proposed in turn. The more the memory consumption is reduced, however, the greater the decoding overhead, so an acceptable trade-off must be struck between the size of the LUT and the decoding overhead. In this paper, we present such a trade-off by adaptively combining simple coding methods: differential coding, modified run-length encoding (MRLE), and Huffman coding. Differential coding transforms the coordinate data into differential form to further improve coding efficiency, combined with Huffman coding for better stability and MRLE for better performance. Our experimental results verify that the proposed coding scheme yields high performance while maintaining robustness. Our method achieved an average inverse compression ratio roughly 1% to 16% lower than that of existing methods, and maintained low latency with tolerable hardware overhead for real-time implementation.
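
The coding pipeline above is easier to see with a small example. The sketch below applies only the differential and run-length steps to a made-up LUT row; the paper's adaptive combination with Huffman coding and its MRLE variant are not reproduced here.

```python
# Differential + run-length coding of a rectification LUT row: neighbouring
# entries map to nearly the same source coordinate, so their differences are
# tiny and highly repetitive.  Illustrative simplification, not the paper's codec.

def differential_encode(row):
    """Store the first coordinate, then successive differences."""
    deltas = [row[0]]
    for prev, cur in zip(row, row[1:]):
        deltas.append(cur - prev)
    return deltas

def run_length_encode(values):
    """(value, run) pairs; long runs of identical deltas collapse to one pair."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def decode(runs):
    deltas = [v for v, n in runs for _ in range(n)]
    row, acc = [], 0
    for d in deltas:
        acc += d
        row.append(acc)
    return row

# A made-up LUT row: source x-coordinates drift slowly across the image.
lut_row = [int(100 + 0.25 * i) for i in range(64)]
runs = run_length_encode(differential_encode(lut_row))
assert decode(runs) == lut_row                     # lossless round trip
print(len(lut_row), "entries ->", len(runs), "runs")
```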

CReMeS: A CORBA COmpliant Reflective Memory based Real-time Communication Service

  • Chung, Sun-Tae
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.10B / pp.1675-1689 / 2000
  • We present CReMeS, a CORBA-compliant design and implementation of a new real-time communication service. It provides efficient, predictable, and scalable communication between information producers and consumers. The CReMeS architecture is based on MidART's Real-Time Channel-based Reflective Memory (RT-CRM) abstraction. This architecture supports the separation of QoS specification between the producer and consumer of data and employs a user-level scheduling scheme for communicating real-time tasks; these features help achieve end-to-end predictability and allow the service to scale. The CReMeS architecture provides a CORBA interface to applications and requires no changes to the ORB layer or the language mapping layer, so it can run on non-real-time off-the-shelf ORBs and give applications on these ORBs a scalable, end-to-end predictable asynchronous communication facility. In addition, an application designer can select whether to use an out-of-band channel or the ORB GIOP/IIOP for data communication, which permits a trade-off among performance, predictability, and reliability. Experimental results demonstrate that our architecture achieves better performance and predictability than a real-time implementation of the CORBA Event Service when the out-of-band channel is employed for data communication, and delivers better predictability with comparable performance when the ORB GIOP/IIOP is used.

A Real-time Dynamic Storage Allocation Algorithm Supporting Various Allocation Policies (다양한 할당 정책을 지원하는 실시간 동적 메모리 할당 알고리즘)

  • 정성무
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.10B / pp.1648-1664 / 2000
  • This paper proposes QSHF (quick-segregated-half-fit), a real-time dynamic storage allocation algorithm that combines several memory allocation policies: a quick-fit policy that manages a free block list per word size for small requests, a good (segregated)-fit policy that manages a free list per size range for medium requests, and a half-fit policy that manages a free list per power-of-2 size for large requests. The proposed algorithm has time complexity O(1), which makes the worst-case execution time (WCET) easy to estimate. The paper also presents two algorithms: one that finds the proper free list for a requested size in predictable time, and one that, if the found list is empty, locates the next available non-empty free list in fixed time. To confirm the efficiency of the proposed algorithm, we simulated the memory utilization of each allocation policy. The simulation results showed that each policy guarantees a constant WCET regardless of memory size, but that there is a trade-off between memory utilization and list management overhead.
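
The list organization described above can be sketched compactly: a size-to-list mapping for the three policies plus a bitmap trick for finding the next non-empty list in constant time. The thresholds and list counts below are illustrative assumptions, not the paper's exact QSHF parameters.

```python
# Free-list indexing in a quick/segregated/half-fit style allocator:
# small sizes get one exact list per word size, medium sizes get range lists,
# large sizes get one list per power of two.  A bitmap of non-empty lists
# makes "find the next usable list" a constant-time bit operation.

WORD = 4
SMALL_MAX = 64          # exact list per word size up to here (assumed)
MEDIUM_MAX = 1024       # 64-byte range lists up to here (assumed)
N_SMALL = SMALL_MAX // WORD                    # 16 exact-fit lists
N_MEDIUM = (MEDIUM_MAX - SMALL_MAX) // 64      # 15 range lists
N_LARGE = 16                                   # power-of-two lists above that

def list_index(size: int) -> int:
    """Map a request size to its free-list index in O(1)."""
    if size <= SMALL_MAX:                       # quick fit: exact word sizes
        return (size + WORD - 1) // WORD - 1
    if size <= MEDIUM_MAX:                      # segregated fit: 64-byte ranges
        return N_SMALL + (size - SMALL_MAX - 1) // 64
    return N_SMALL + N_MEDIUM + min(            # half fit: power-of-two classes
        (size - 1).bit_length() - MEDIUM_MAX.bit_length(), N_LARGE - 1)

def next_nonempty(bitmap: int, start: int) -> int:
    """Lowest set bit at or above `start`, found with one bit trick."""
    masked = bitmap >> start
    return -1 if masked == 0 else start + (masked & -masked).bit_length() - 1

# toy bookkeeping: mark lists 3 and 20 as non-empty
bitmap = (1 << 3) | (1 << 20)
idx = list_index(200)                    # a 200-byte request -> a medium range list
print(idx, next_nonempty(bitmap, idx))   # list 18 is empty, so fall through to list 20
```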

Implementation of the MPEG-1 Layer II Decoder Using the TMS320C64x DSP Processor (TMS320C64x 기반 MPEG-1 LayerII Decoder의 DSP 구현)

  • Cho, Choong-Sang;Lee, Young-Han;Oh, Yoo-Rhee;Kim, Hong-Kook
    • Proceedings of the IEEK Conference / 2006.06a / pp.257-258 / 2006
  • In this paper, we address several issues in the real-time implementation of an MPEG-1 Layer II decoder on a fixed-point digital signal processor (DSP), the TMS320C6416. There is a trade-off between processing speed and program/data memory size in an optimal implementation. For speed optimization, we first convert the floating-point operations into fixed-point ones with little degradation in audio quality, and then force the look-up tables used for inverse quantization into the internal memory of the DSP. Window functions and filter coefficients in the decoder are also precalculated and stored as constants, which makes the decoder faster even though a larger memory size is required. Real-time experiments show that the fixed-point implementation allows the decoder, at a sampling rate of 48 kHz, to run about three times faster than real time on a TMS320C6416 clocked at 600 MHz.
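
The core of the speed optimization, converting floating-point tables and arithmetic to fixed point, looks roughly like the following Python sketch using Q15 scaling. The coefficients are illustrative values; the actual decoder performs this in C on the TMS320C6416 with its own tables.

```python
# Float-to-fixed-point conversion: coefficients are scaled to Q15 integers
# once (and could be stored as constants), and the inner loop then uses only
# integer multiply-accumulate, which a fixed-point DSP executes far faster
# than emulated floating point.

Q = 15
SCALE = 1 << Q

def to_q15(x: float) -> int:
    """Quantize a value in [-1, 1) to a signed Q15 integer."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def q15_mac(samples_q15, coeffs_q15) -> int:
    """Fixed-point dot product; a Q15 x Q15 product is Q30, so shift
    right by Q to return to Q15."""
    acc = 0
    for s, c in zip(samples_q15, coeffs_q15):
        acc += (s * c) >> Q
    return acc

coeffs = [0.5, -0.25, 0.125, 0.0625]          # illustrative window/filter taps
samples = [0.9, -0.3, 0.7, 0.1]
coeffs_q15 = [to_q15(c) for c in coeffs]
samples_q15 = [to_q15(s) for s in samples]

float_result = sum(s * c for s, c in zip(samples, coeffs))
fixed_result = q15_mac(samples_q15, coeffs_q15) / SCALE
print(f"float={float_result:.6f}  fixed={fixed_result:.6f}")  # small quantization error
```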

Numerical Ballistic Modeling in Game Engines

  • YoungBo Go;YunJeong Kang
    • International journal of advanced smart convergence / v.12 no.2 / pp.117-126 / 2023
  • To improve the overall performance and realism of a game, it is important to calculate the trajectory of a projectile accurately and quickly. One way to increase realism is to use a ballistic model that takes factors such as air resistance, air density, and wind into account when calculating a projectile's trajectory. However, the more of these factors are considered, the more time-consuming and expensive the computation becomes, creating a trade-off between realism and overall performance. We therefore present a way to balance ballistic model accuracy against computation time. We perform ballistic calculations using numerical methods such as Euler, Velocity Verlet, RK2, RK4, and Akima interpolation, and measure and compare the computation time, memory usage (RSS, resident set size), and accuracy of each method. The results show developers how to implement more accurate and efficient ballistic models and help them choose the right numerical method for their applications.
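
As a rough illustration of the accuracy-versus-cost comparison, the sketch below integrates a projectile with quadratic drag using two of the methods named in the abstract (Euler and classic RK4) and times them. The drag constant and launch parameters are assumed values, not those used in the paper.

```python
# Euler vs. classic RK4 on a projectile with quadratic air drag.
import math, time

G = 9.81          # gravity, m/s^2
K = 0.02          # drag coefficient / mass, 1/m (assumed value)

def accel(vx, vy):
    """Quadratic drag opposing velocity, plus gravity."""
    v = math.hypot(vx, vy)
    return -K * v * vx, -K * v * vy - G

def euler_step(x, y, vx, vy, dt):
    ax, ay = accel(vx, vy)
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

def rk4_step(x, y, vx, vy, dt):
    def deriv(state):
        _, _, vx_, vy_ = state
        ax, ay = accel(vx_, vy_)
        return (vx_, vy_, ax, ay)
    s = (x, y, vx, vy)
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def fly(step, dt):
    state = (0.0, 0.0, 60.0, 60.0)        # launch at 60 m/s per axis (45 degrees)
    while state[1] >= 0.0:
        state = step(*state, dt)
    return state[0]                        # horizontal range at impact

for name, step in (("Euler", euler_step), ("RK4", rk4_step)):
    t0 = time.perf_counter()
    rng = fly(step, dt=0.01)
    print(f"{name:5s} range ~ {rng:7.2f} m   time {time.perf_counter() - t0:.4f} s")
```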

Low power-high performance embedded SRAM circuit techniques with enhanced array ground potential (어레이 접지전압 조정에 의한 저전력, 고성능 내장형 SRAM 회로 기술)

  • 정경아;손일헌
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.2 / pp.36-47 / 1998
  • Low-power circuit techniques have been developed to realize the highest possible performance of an embedded SRAM at a 1 V power supply in a 0.5 μm single-threshold CMOS technology, in which the imbalance between NMOS and PMOS threshold voltages is utilized to optimize the low-power CMOS IC design. To achieve the best trade-off between transistor drivability and subthreshold current increase, the ground potential of the memory array is raised to suppress the subthreshold current. The resulting reduction in cell stability and increase in bit-line delay caused by the raised array ground potential are shown to be controllable within the allowable range by careful circuit design. A 160 MHz, 128 kb embedded SRAM with a 3.4 ns access time is demonstrated, consuming 14.8 mW in active mode and 21.4 μW in standby mode at a 1 V power supply.
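
A back-of-envelope calculation shows why raising the array ground suppresses subthreshold leakage so strongly; the swing and bias values below are assumptions for illustration, not figures taken from the paper.

```latex
% Subthreshold current depends exponentially on the gate-source voltage:
\[
  I_{\mathrm{sub}} \propto 10^{\,(V_{GS}-V_{TH})/S},
\]
% where S is the subthreshold swing (roughly 80-100 mV/decade at room
% temperature).  Raising the array ground, i.e. the cell transistor's source,
% by $\Delta V$ while the gate stays at 0 V makes $V_{GS}=-\Delta V$ (and the
% body effect raises $V_{TH}$ further), so, ignoring the body effect,
\[
  \frac{I_{\mathrm{sub}}(\Delta V)}{I_{\mathrm{sub}}(0)} \approx 10^{-\Delta V/S}
  \quad\Longrightarrow\quad
  \Delta V = 0.1\,\mathrm{V},\ S = 100\,\mathrm{mV/dec}
  \ \Rightarrow\ \text{about a } 10\times \text{ leakage reduction.}
\]
```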

The Compression of Normal Vectors to Prevent Visual Distortion in Shading 3D Mesh Models (3D 메쉬 모델의 쉐이딩 시 시각적 왜곡을 방지하는 법선 벡터 압축에 관한 연구)

  • Mun, Hyun-Sik;Jeong, Chae-Bong;Kim, Jay-Jung
    • Korean Journal of Computational Design and Engineering / v.13 no.1 / pp.1-7 / 2008
  • Data compression is an increasingly important issue for reducing data storage space as well as transmission time in network environments. In 3D geometric models, the normal vectors of faces or meshes take up a major portion of the data, so compressing these vectors, which involves a trade-off between image distortion and compression ratio, plays a key role in reducing the size of the models. It is therefore important both to raise the compression ratio of the normal vectors and to minimize the visual distortion of the shaded model after compression. Recent work shows that normal vector compression is useful for raising the compression ratio and improving memory efficiency, but studies of the shading distortion caused by compressing normal vectors are relatively rare. In this paper, we propose a new normal vector compression method that clusters the normal vectors, assigns a Representative Normal Vector (RNV) to each cluster, and uses the angular deviation from the actual normal vector. Using this method, we develop a Visually Undistinguishable Lossy Compression (VULC) algorithm, in which the shading distortion caused by the angular deviation of the normal vectors cannot be identified visually. Applied to complicated shape models, the algorithm proved effective.
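
A minimal sketch of the clustering idea follows: normals are grouped on a coarse spherical grid, each cluster gets a representative normal, and the worst-case angular deviation is measured. The grid resolution and the averaging rule are illustrative choices, not the paper's VULC parameterization.

```python
# Cluster unit normals on a coarse (theta, phi) grid, use each cell's average
# as the representative normal (RNV), and report the worst angular deviation.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / n for c in v)

def quantize(normal, bins=32):
    """Cluster key: bin the spherical angles of a unit normal."""
    x, y, z = normal
    theta = math.acos(max(-1.0, min(1.0, z)))      # polar angle
    phi = math.atan2(y, x) % (2 * math.pi)         # azimuth
    return (int(theta / math.pi * bins), int(phi / (2 * math.pi) * bins))

def build_clusters(normals, bins=32):
    """Group normals by grid cell and return {cell: representative normal}."""
    cells = {}
    for nrm in normals:
        cells.setdefault(quantize(nrm, bins), []).append(nrm)
    return {cell: normalize([sum(c) for c in zip(*members)])
            for cell, members in cells.items()}

def max_angular_deviation(normals, reps, bins=32):
    """Largest angle (degrees) between a normal and its representative."""
    worst = 0.0
    for nrm in normals:
        rep = reps[quantize(nrm, bins)]
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(nrm, rep))))
        worst = max(worst, math.degrees(math.acos(dot)))
    return worst

normals = [normalize((math.sin(i), math.cos(i), 0.3 * math.sin(2 * i)))
           for i in range(500)]
reps = build_clusters(normals)
print(len(normals), "normals ->", len(reps), "representatives,",
      f"max deviation {max_angular_deviation(normals, reps):.2f} deg")
```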

File-System-Level SSD Caching for Improving Application Launch Time (응용프로그램의 기동시간 단축을 위한 파일 시스템 수준의 SSD 캐싱 기법)

  • Han, Changhee;Ryu, Junhee;Lee, Dongeun;Kang, Kyungtae;Shin, Heonshik
    • Journal of KIISE / v.42 no.6 / pp.691-698 / 2015
  • Application launch time is an important performance metric for user experience in desktop and laptop environments, and it depends mostly on the performance of secondary storage. Launch times can be reduced by using a solid-state drive (SSD) instead of a hard disk drive (HDD); considering the cost-performance trade-off, however, using SSDs as caches for slow HDDs is a practical alternative for reducing application launch times. We propose a new SSD caching scheme that migrates data blocks from HDDs to SSDs. Our scheme operates entirely at the file system level and does not require the extra layer for mapping SSD-cached data that most other schemes need; in particular, it avoids the mapping overheads that place a significant burden on main memory, CPU, and SSD space for the mapping table. Experimental results with 8 popular applications show that our scheme yields a 56% performance gain in application launch when data blocks are migrated along with their metadata.
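
As a purely conceptual illustration of launch-time caching, the sketch below records which files an application touches during a launch and mirrors them to an SSD-backed directory. It works in user space at file granularity with assumed paths; the paper's scheme instead migrates blocks inside the file system itself, without any extra mapping layer.

```python
# User-space, file-granularity stand-in for launch-time SSD caching:
# record launch files once, mirror them to fast storage, prefer the mirror.
import os, shutil

SSD_CACHE = "/mnt/ssd_cache"          # assumed SSD mount point
LAUNCH_SET = "launch_files.txt"       # files observed during a traced cold launch

def record_launch_files(paths):
    """Persist the set of files read during an application launch."""
    with open(LAUNCH_SET, "w") as f:
        f.write("\n".join(paths))

def warm_cache():
    """Copy recorded launch files to the SSD so later launches hit fast storage."""
    with open(LAUNCH_SET) as f:
        for path in filter(None, (line.strip() for line in f)):
            dst = os.path.join(SSD_CACHE, path.lstrip("/"))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            if not os.path.exists(dst) or os.path.getmtime(dst) < os.path.getmtime(path):
                shutil.copy2(path, dst)     # refresh stale copies

def cached_open(path, mode="rb"):
    """Prefer the SSD copy when it exists and is up to date."""
    dst = os.path.join(SSD_CACHE, path.lstrip("/"))
    if os.path.exists(dst) and os.path.getmtime(dst) >= os.path.getmtime(path):
        return open(dst, mode)
    return open(path, mode)
```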

Analysis of Random Variations and Variation-Robust Advanced Device Structures

  • Nam, Hyohyun;Lee, Gyo Sub;Lee, Hyunjae;Park, In Jun;Shin, Changhwan
    • JSTS:Journal of Semiconductor Technology and Science / v.14 no.1 / pp.8-22 / 2014
  • Over the past few decades, CMOS logic technologies and devices have been developed successfully through the steady miniaturization of the feature size. At sub-30-nm CMOS technology nodes, one of the main hurdles to continued scaling is parametric failure caused by random variations such as line edge roughness (LER), random dopant fluctuation (RDF), and work-function variation (WFV). The characteristics of each random variation source and its effect on advanced device structures such as multigate and ultra-thin-body devices (versus the conventional planar bulk MOSFET) are discussed in detail. Methods for suppressing LER-, RDF-, and WFV-induced threshold voltage (VTH) variations are then suggested, both in advanced CMOS logic technologies, including the double-patterning and double-etching (2P2E) technique, and in advanced device structures, including the fully depleted silicon-on-insulator (FD-SOI) MOSFET and the FinFET/tri-gate MOSFET, at sub-30-nm nodes. Finally, the segmented-channel MOSFET (SegFET) and the junctionless transistor (JLT), which can suppress these random variations, are introduced, together with a SegFET-/JLT-based static random access memory (SRAM) cell that enhances the read and write margins at the same time, even though there is generally a trade-off between the read and write margins.
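
To give a sense of why random dopant fluctuation dominates at these dimensions, it helps to recall the Pelgrom-type area scaling of threshold-voltage mismatch, to which RDF is a major contributor. The coefficient in the numeric example is an assumed typical value, not one reported in the paper.

```latex
% Pelgrom's rule: V_TH mismatch between a matched pair of transistors grows
% as the gate area shrinks,
\[
  \sigma_{\Delta V_{TH}} = \frac{A_{VT}}{\sqrt{W \cdot L}} .
\]
% With an assumed A_VT of 1.5 mV*um and W = L = 30 nm (0.03 um),
% sigma ~ 1.5 mV*um / 0.03 um = 50 mV of threshold-voltage spread,
% which is why variation-robust structures such as FD-SOI and FinFET
% matter at sub-30-nm nodes.
```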