• Title/Summary/Keyword: memory access time


MR-Tree: A Mapping-based R-Tree for Efficient Spatial Searching (Mr-Tree: 효율적인 공간 검색을 위한 매핑 기반 R-Tree)

  • Kang, Hong-Koo;Shin, In-Su;Kim, Joung-Joon;Han, Ki-Joon
    • Spatial Information Research
    • /
    • v.18 no.4
    • /
    • pp.109-120
    • /
    • 2010
  • Recently, due to the rapid increase of spatial data collected from various geosensors in u-GIS environments, the importance of spatial indexes for efficient search of large spatial data is gradually rising. In particular, research based on the R-Tree to improve the search performance of spatial data has been actively performed. These previous studies focus on reducing overlaps between nodes or the height of the R-Tree. However, they cannot efficiently solve the unnecessary node access problem that occurs during tree traversal. In this paper, we propose the MR-Tree (Mapping-based R-Tree) to solve this problem and to support efficient search of large spatial data. The MR-Tree can improve search performance by using a mapping tree to access the leaf nodes of the R-Tree directly, without tree traversal. The mapping tree is composed of the MBRs and pointers of the R-Tree leaf nodes associated with each partition, which is created by repeatedly splitting the data area along its dimensions. Notably, the MR-Tree can easily be adopted by various R-Tree variants without modifying the R-Tree structure. In addition, because the mapping tree is constructed in main memory, search time can be greatly reduced. Finally, we demonstrated the superior performance of the MR-Tree through experiments.
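  • A minimal C++ sketch of the mapping idea described above: a main-memory structure maps partitions of the data area directly to R-Tree leaf nodes, so a point query jumps to candidate leaves without traversing the tree. As simplifying assumptions (not from the paper), a uniform grid stands in for the recursively split mapping tree, and LeafNode is a stand-in for a real R-Tree leaf.

```cpp
#include <cstdio>
#include <vector>

struct MBR { double xmin, ymin, xmax, ymax; };
struct LeafNode { MBR mbr; /* entries omitted */ int id; };

class MappingGrid {
public:
    MappingGrid(MBR area, int cells) : area_(area), n_(cells), grid_(cells * cells) {}

    // Register a leaf node in every grid cell its MBR overlaps.
    void addLeaf(const LeafNode* leaf) {
        for (int gy = cellY(leaf->mbr.ymin); gy <= cellY(leaf->mbr.ymax); ++gy)
            for (int gx = cellX(leaf->mbr.xmin); gx <= cellX(leaf->mbr.xmax); ++gx)
                grid_[gy * n_ + gx].push_back(leaf);
    }

    // Point query: jump straight to the cell and test only the leaves mapped there.
    std::vector<const LeafNode*> query(double x, double y) const {
        std::vector<const LeafNode*> hits;
        for (const LeafNode* leaf : grid_[cellY(y) * n_ + cellX(x)])
            if (x >= leaf->mbr.xmin && x <= leaf->mbr.xmax &&
                y >= leaf->mbr.ymin && y <= leaf->mbr.ymax)
                hits.push_back(leaf);
        return hits;
    }

private:
    int clampCell(double v, double lo, double hi) const {
        int c = static_cast<int>((v - lo) / (hi - lo) * n_);
        return c < 0 ? 0 : (c >= n_ ? n_ - 1 : c);
    }
    int cellX(double x) const { return clampCell(x, area_.xmin, area_.xmax); }
    int cellY(double y) const { return clampCell(y, area_.ymin, area_.ymax); }

    MBR area_;
    int n_;
    std::vector<std::vector<const LeafNode*>> grid_;
};

int main() {
    MappingGrid map({0, 0, 100, 100}, 8);
    LeafNode a{{10, 10, 30, 30}, 1}, b{{60, 60, 90, 90}, 2};
    map.addLeaf(&a);
    map.addLeaf(&b);
    for (const LeafNode* leaf : map.query(20, 20))
        std::printf("point (20,20) falls in leaf %d\n", leaf->id);
    return 0;
}
```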

Efficient DRAM Buffer Access Scheduling Techniques for SSD Storage System (SSD 스토리지 시스템을 위한 효율적인 DRAM 버퍼 액세스 스케줄링 기법)

  • Park, Jun-Su;Hwang, Yong-Joong;Han, Tae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.7
    • /
    • pp.48-56
    • /
    • 2011
  • Recently, the SSD (Solid State Disk), a new storage device based on NAND flash memory, has been gradually replacing the HDD (Hard Disk Drive) in mobile devices, and a variety of research efforts are under way to find cost-effective ways to improve its performance. As the number of NAND flash channels is increased to enhance bandwidth through parallel processing, the DRAM buffer, which acts as a buffer cache between the host (PC) and the NAND flash, becomes the bottleneck. To resolve this problem, this paper proposes an efficient, low-cost scheme to increase SSD performance by improving DRAM buffer bandwidth through scheduling techniques that utilize DRAM multi-banks. When both the host and the NAND flash multi-channels request access to the DRAM buffer concurrently, the proposed technique checks their destinations and then schedules the requests appropriately, considering the properties of DRAM. It significantly reduces the overheads of bank active time and row latency and thus optimizes DRAM buffer bandwidth utilization. The results reveal that the proposed technique improves SSD performance by 47.4% for read and 47.7% for write operations compared to conventional methods, with negligible changes and increases in hardware.
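  • A simplified sketch of the multi-bank scheduling idea: when the host and several NAND flash channels contend for the DRAM buffer, requests headed to different banks can be reordered so that row activations overlap and row hits are served first. The timing constants, request format, and open-row policy below are illustrative assumptions, not the paper's actual parameters.

```cpp
#include <cstdio>
#include <deque>
#include <vector>

struct Request { int source; int bank; int row; };   // source 0 = host, 1..n = flash channels
struct Bank { int openRow = -1; };

int main() {
    const int tRCD = 3, tCAS = 3;                     // assumed cycle costs
    std::vector<Bank> banks(4);
    std::deque<Request> queue = {
        {0, 0, 10}, {1, 1, 20}, {0, 0, 10}, {2, 2, 5}, {1, 1, 21}
    };

    long cycles = 0;
    while (!queue.empty()) {
        // Prefer a row-hit request (open-row policy); otherwise take the oldest one.
        size_t pick = 0;
        for (size_t i = 0; i < queue.size(); ++i)
            if (banks[queue[i].bank].openRow == queue[i].row) { pick = i; break; }

        Request r = queue[pick];
        queue.erase(queue.begin() + pick);

        Bank& b = banks[r.bank];
        if (b.openRow != r.row) { cycles += tRCD; b.openRow = r.row; }  // row activate
        cycles += tCAS;                                                 // column access
        std::printf("served source %d bank %d row %d (total %ld cycles)\n",
                    r.source, r.bank, r.row, cycles);
    }
    return 0;
}
```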

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.293-299
    • /
    • 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvatures, and other potential causes of gas explosions. There are two major data access patterns apparent when an analyzer accesses the pipeline signal data. The first is a sequential pattern, where an analyzer reads the sensor data only once in a sequential fashion. The second is the repetitive pattern, where an analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, but not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, by considering pipeline sensor data as multiple time-series data and by efficiently caching the time-series data in T-Cache. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose a new concept, the signal cache line, as a caching unit, which is a set of time-series signal data for a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that does not use any caching, indicating that the caching overhead in T-Cache is negligible.
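  • A small sketch of the signal-cache-line idea: sensor signals are cached client-side in fixed-distance chunks so repeated reads of the same range hit main memory instead of the server. The chunk size, the LRU replacement policy, and the fetchFromServer stub are assumptions for illustration, not T-Cache's actual design.

```cpp
#include <cstdio>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

using SignalLine = std::vector<double>;          // samples covering one distance chunk

class SignalCache {
public:
    explicit SignalCache(size_t capacity) : capacity_(capacity) {}

    // Return the cache line covering chunk index `chunk`, fetching it on a miss.
    const SignalLine& get(long chunk) {
        auto it = lines_.find(chunk);
        if (it != lines_.end()) {                // hit: move to MRU position
            lru_.splice(lru_.begin(), lru_, it->second.second);
            return it->second.first;
        }
        if (lines_.size() == capacity_) {        // evict the least recently used line
            lines_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(chunk);
        auto res = lines_.emplace(chunk, std::make_pair(fetchFromServer(chunk), lru_.begin()));
        return res.first->second.first;
    }

private:
    // Stub standing in for the network/disk read the real system performs.
    SignalLine fetchFromServer(long chunk) {
        std::printf("miss: fetching chunk %ld from server\n", chunk);
        return SignalLine(1024, static_cast<double>(chunk));
    }

    size_t capacity_;
    std::list<long> lru_;
    std::unordered_map<long, std::pair<SignalLine, std::list<long>::iterator>> lines_;
};

int main() {
    SignalCache cache(2);
    cache.get(0);          // miss
    cache.get(0);          // repetitive pattern: served from memory
    cache.get(1);          // miss
    cache.get(2);          // miss, evicts chunk 0
    return 0;
}
```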

Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video (360도 영상에서 다중 객체 추적 결과에 대한 뷰포트 추출 가속화)

  • Heesu Park;Seok Ho Baek;Seokwon Lee;Myeong-jin Lee
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.3
    • /
    • pp.306-313
    • /
    • 2023
  • Realistic and graphics-based virtual reality content is built on 360-degree videos, and viewport extraction driven by the viewer's intention or by an automatic recommendation function is essential. This paper designs a viewport extraction system based on multiple object tracking in 360-degree videos and proposes the parallel computing structure needed for extracting multiple viewports. The viewport extraction process in 360-degree videos is parallelized with pixel-wise threads that transform ERP coordinates to 3D spherical surface coordinates and then transform the 3D spherical surface coordinates to 2D coordinates within the viewport. The proposed structure was evaluated on the computation time for up to 30 simultaneous viewport extractions in aerial 360-degree video sequences, and it achieved up to 5240 times acceleration compared to a CPU-based computation whose time is proportional to the number of viewports. When high-speed I/O or memory buffers are used to reduce ERP frame I/O time, viewport extraction can be further accelerated by 7.82 times. The proposed parallelized viewport extraction structure can be applied to simultaneous multi-access services for 360-degree videos or virtual reality content, and to video summarization services for individual users.
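  • A sketch of the per-pixel coordinate mapping behind viewport extraction from an ERP (equirectangular) frame: each viewport pixel is turned into a ray on the unit sphere and then into ERP coordinates to be sampled. In the paper this work is parallelized as one thread per pixel; the single-threaded loop, viewport size, field of view, and orientation below are assumptions for illustration only.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

const double kPi = 3.141592653589793;

// Rotate a forward-looking ray by the viewport's yaw/pitch (radians).
Vec3 rotate(Vec3 v, double yaw, double pitch) {
    Vec3 p{v.x, v.y * std::cos(pitch) - v.z * std::sin(pitch),
                v.y * std::sin(pitch) + v.z * std::cos(pitch)};
    return {p.x * std::cos(yaw) + p.z * std::sin(yaw), p.y,
            -p.x * std::sin(yaw) + p.z * std::cos(yaw)};
}

int main() {
    const int vpW = 4, vpH = 4;                 // tiny viewport for demonstration
    const int erpW = 3840, erpH = 1920;         // assumed ERP frame size
    const double fov = kPi / 2;                 // 90-degree field of view
    const double yaw = 0.3, pitch = 0.1;        // viewport center (e.g., a tracked object)
    const double focal = (vpW / 2.0) / std::tan(fov / 2.0);

    for (int v = 0; v < vpH; ++v) {
        for (int u = 0; u < vpW; ++u) {
            // Viewport pixel -> ray in camera space -> rotated ray on the unit sphere.
            Vec3 ray{u - vpW / 2.0 + 0.5, v - vpH / 2.0 + 0.5, focal};
            double len = std::sqrt(ray.x * ray.x + ray.y * ray.y + ray.z * ray.z);
            Vec3 d = rotate({ray.x / len, ray.y / len, ray.z / len}, yaw, pitch);

            // Sphere direction -> longitude/latitude -> ERP pixel coordinates.
            double lon = std::atan2(d.x, d.z);
            double lat = std::asin(d.y);
            double ex = (lon / (2 * kPi) + 0.5) * erpW;
            double ey = (lat / kPi + 0.5) * erpH;
            std::printf("viewport(%d,%d) <- ERP(%.1f, %.1f)\n", u, v, ex, ey);
        }
    }
    return 0;
}
```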

A Study on the Meaning of Zelkova serrata as a Medium of Place Memory - Focused on the Natives of the Village and the Migrant of Keangnam Apartment in Dogok-dong - (장소기억의 매개로서 느티나무의 의미 고찰 - 역말 원주민과 도곡동 경남아파트 이주민을 중심으로 -)

  • Hamm, Yeon-Su;Sung, Jong-Sang
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.39 no.3
    • /
    • pp.42-55
    • /
    • 2021
  • This study investigated the memories of the natives and the migrants who have been living with the 760-year-old Zelkova serrata located in the Keangnam Apartment Complex in Dogok 1-dong. Place memory is a concept that has received new attention since the 1980s, and it is also used as a research methodology for studying and recording the multi-layered memories left in a place, based on the feelings and traces of vivid memories. The urban development of Gangnam, which began in the 1970s, quickly transformed rural villages into apartment complexes. The natives of Yeokmal were scattered throughout the country, and new migrants moved in. In the process, the Zelkova serrata was managed in different ways at different times, and residents also established relationships with it in different ways. The natives used to rest under the tree or hang swings from it at Dano, and they recognized it as the place that received the god during the village ritual. In other words, they shared the entire process of life and death with the tree, and it was given various roles depending on the lives of the residents. It was experienced directly and in detail, and it is a place where the collective memories of residents are embedded. On the other hand, with the construction of the Keangnam Apartment, the management of the zelkova tree became stricter, making it inaccessible to the migrants. The migrants have come to enjoy the Zelkova serrata visually, and the annual Yeokmal Traditional Festival creates shared memories in the city. In addition, many people personified the tree and drew emotional comfort from it. The naturalness of the old, large tree was also highlighted against the urban background, and the new symbolism of "uniqueness and specialness" was formed, leading to pride and attachment. Through the two groups' memories of the zelkova tree, we were able to examine the memories attached to the tree, its value, and the management of protected trees in the city.

Thermally Stimulated Current Analysis of (Ba, Sr)TiO₃ Capacitor ((Ba, Sr)TiO₃ 커패시터의 Thermally Stimulated Current 분석)

  • Kim, Yong-Ju;Cha, Seon-Yong;Lee, Hui-Cheol;Lee, Gi-Seon;Seo, Gwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.38 no.5
    • /
    • pp.329-337
    • /
    • 2001
  • It has been known that the leakage current in the low-field region consists of the dielectric relaxation current and the intrinsic leakage current, which cause charge loss in dynamic random access memory (DRAM) storage capacitors using (Ba,Sr)TiO₃ (BST) thin films. In particular, the dielectric relaxation current should be considered seriously, since its magnitude is much larger than that of the intrinsic leakage current at the giga-bit DRAM operation voltage (~1 V). In this study, thermally stimulated current (TSC) measurement was first applied to investigate the activation energy of traps and to evaluate the relative density of traps according to process changes. By comparing TSC with the earlier I-V and I-t measurement and analysis methods, we identify the origin of the dielectric relaxation current and investigate the reliability of the TSC measurement. First, the polarization conditions, such as electric field, time, temperature, and heating rate, were investigated for reliable TSC measurement. From the TSC measurement, the energy levels of the traps in the BST thin film were evaluated to be 0.20(±0.01) eV and 0.45(±0.02) eV. Based on the TSC measurement results before and after the rapid thermal annealing (RTA) process, oxygen vacancies are concluded to be the origin of the traps. The TSC characteristics with thermal annealing in the MIM BST capacitor have shown the same trends as the current-voltage (I-V) and current-time (I-t) characteristics. This means that TSC measurement is one of the effective methods for characterizing the traps in the BST thin film.


Data Cache System based on the Selective Bank Algorithm for Embedded System (내장형 시스템을 위한 선택적 뱅크 알고리즘을 이용한 데이터 캐쉬 시스템)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • The KIPS Transactions:PartA
    • /
    • v.16A no.2
    • /
    • pp.69-78
    • /
    • 2009
  • One of the most effective ways to improve cache performance is to exploit both the temporal and spatial locality exhibited by a program's execution characteristics. In this paper, we present a high-performance, low-power cache structure with a bank selection mechanism that enhances the exploitation of spatial and temporal locality. The proposed cache system consists of two parts: a main direct-mapped cache with a small block size and a fully associative buffer with a large block size that is a multiple of the small block size. In particular, the main direct-mapped cache is constructed as two banks for low power consumption and stores the small blocks selected from the fully associative buffer by the proposed bank selection algorithm. By using the bank selection algorithm and three state bits, we selectively extend the lifetime of small blocks with high temporal locality by storing them in the main direct-mapped cache. This approach effectively reduces conflict misses and cache pollution at the same time. According to the simulation results, the average miss ratio, compared with the Victim and STAS caches of the same size, is improved by about 23% and 32% for MiBench applications, respectively. The average memory access time is reduced by about 14% and 18% compared with the Victim and STAS caches, respectively. It is also shown that the energy consumption of the proposed cache is around 10% lower than that of the other cache systems we examine.
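  • A highly simplified sketch of the two-level structure described above: a direct-mapped main cache split into two banks plus a fully associative buffer holding blocks, where a reused block is promoted from the buffer into a bank chosen by a selection rule. The single selection bit per set, the sizes, and the FIFO buffer are assumptions; the paper uses three state bits and its own bank selection algorithm.

```cpp
#include <cstdint>
#include <cstdio>
#include <deque>

constexpr int kSets = 64;                  // sets per bank (assumed)
constexpr uint64_t kSmallBlock = 16;       // small block size in bytes (assumed)

struct Line { bool valid = false; uint64_t tag = 0; };

Line bank[2][kSets];                       // two-banked direct-mapped main cache
bool lastBank[kSets];                      // 1-bit selection state per set (assumed)
std::deque<uint64_t> buffer;               // fully associative buffer of block addresses

bool access(uint64_t addr) {
    uint64_t blk = addr / kSmallBlock;
    int set = static_cast<int>(blk % kSets);
    uint64_t tag = blk / kSets;

    for (int b = 0; b < 2; ++b)            // hit in either bank of the main cache?
        if (bank[b][set].valid && bank[b][set].tag == tag) return true;

    for (uint64_t t : buffer)              // hit in the fully associative buffer?
        if (t == blk) {
            int b = lastBank[set] = !lastBank[set];   // select the other bank this time
            bank[b][set] = {true, tag};               // promote the reused small block
            return true;
        }

    buffer.push_front(blk);                // miss: fetch the block into the buffer
    if (buffer.size() > 8) buffer.pop_back();
    return false;
}

int main() {
    std::printf("first access hit=%d\n", access(0x1000));   // miss
    std::printf("second access hit=%d\n", access(0x1000));  // buffer hit, promoted to a bank
    std::printf("third access hit=%d\n", access(0x1000));   // main cache hit
    return 0;
}
```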

GIS Optimization for Bigdata Analysis and AI Applying (Bigdata 분석과 인공지능 적용한 GIS 최적화 연구)

  • Kwak, Eun-young;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.171-173
    • /
    • 2022
  • Fourth industrial revolution technologies are making people's lives more efficient. GIS-based Internet services, such as traffic and travel-time information, help people reach their destinations more quickly. The National Geographic Information Service (NGIS) and each local government are building basic data to investigate SOC accessibility and analyze optimal locations. To find the shortest distance, the accessibility from the starting point to the destination is analyzed. Using a road network map, the shortest distance and optimal accessibility between the starting point and the destination are calculated with Dijkstra's algorithm. Analyzing routes from multiple starting points to multiple destinations required more than three steps of manual analysis to determine the optimal location, within about 0.1% error. The many-to-many (M×N) calculation took more processing time and required a computer with at least 32 GB of memory. If a more versatile optimal proximity analysis service is provided for desired locations, it becomes possible to efficiently analyze locations with poor access to businesses and living facilities and to support facility site selection for the public.
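  • A minimal sketch of the shortest-path step mentioned above: Dijkstra's algorithm over a road-network graph from one starting point. The tiny graph and edge weights (travel costs) are illustrative assumptions; a many-to-many (M×N) analysis would repeat this from each starting point.

```cpp
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

int main() {
    // Adjacency list: node -> list of (neighbor, cost).
    std::vector<std::vector<std::pair<int, double>>> road = {
        {{1, 2.0}, {2, 5.0}},                 // node 0
        {{0, 2.0}, {2, 1.0}, {3, 4.0}},       // node 1
        {{0, 5.0}, {1, 1.0}, {3, 1.5}},       // node 2
        {{1, 4.0}, {2, 1.5}}                  // node 3
    };

    const int start = 0;
    std::vector<double> dist(road.size(), 1e18);
    dist[start] = 0.0;

    using Item = std::pair<double, int>;      // (distance so far, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    pq.push({0.0, start});

    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;            // stale queue entry
        for (auto [v, w] : road[u])
            if (dist[u] + w < dist[v]) {      // relax edge u -> v
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }

    for (size_t v = 0; v < dist.size(); ++v)
        std::printf("shortest cost from %d to %zu = %.1f\n", start, v, dist[v]);
    return 0;
}
```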


A Dynamic Prefetch Filtering Schemes to Enhance Usefulness Of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.123-136
    • /
    • 2006
  • Prefetching is an effective way to reduce the latency caused by memory accesses. However, excessively aggressive prefetching not only leads to cache pollution that cancels out the benefits of prefetching, but also increases bus traffic, leading to overall performance degradation. In this thesis, a prefetch filtering scheme is proposed that dynamically decides whether to issue a prefetch by consulting a filtering table, in order to reduce the cache pollution caused by unnecessary prefetches. First, a prefetch hashing table 1-bit state filtering scheme (PHT1bSC) is presented to analyze the limitation of the conventional scheme; like the conventional scheme, it uses N:1 mapping, but each entry holds a 1-bit value representing two states. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT), the main idea of this paper, is then proposed and exhibits the most accurate filtering performance. Its table length is the same as that of the PHT1bSC scheme, each entry has the same fields as in the CBAT scheme, and recently referenced data block addresses are mapped 1:1 to entries of the filter table. Commonly used prefetch schemes were simulated on general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme shows an improvement of up to 22%, and the cache miss ratio is decreased by 7.9% owing to the enhanced filtering accuracy compared with the conventional PHT2bSC. The MADT of the proposed PBALT scheme is decreased by 6.1% compared with conventional schemes, reducing the total execution time.
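  • An illustrative sketch of table-based prefetch filtering: before a prefetch is issued, a small filter table is consulted that remembers whether an earlier prefetch of the same block address turned out to be useful, and wasted prefetches are suppressed. The table organization and policy here are simplified assumptions, not the exact PHT1bSC/CBAT/PBALT schemes compared in the paper.

```cpp
#include <cstdint>
#include <cstdio>

constexpr int kEntries = 256;

struct FilterEntry { bool valid = false; uint64_t block = 0; bool wasUseful = true; };
FilterEntry table[kEntries];

// Decide whether to issue a prefetch for `block`.
bool allowPrefetch(uint64_t block) {
    FilterEntry& e = table[block % kEntries];
    if (e.valid && e.block == block && !e.wasUseful)
        return false;                       // the last prefetch of this block was wasted
    e = {true, block, false};               // track the new prefetch, pessimistically
    return true;
}

// Called when a demand access hits a prefetched block: mark it useful.
void markUseful(uint64_t block) {
    FilterEntry& e = table[block % kEntries];
    if (e.valid && e.block == block) e.wasUseful = true;
}

int main() {
    std::printf("prefetch block 42 allowed: %d\n", allowPrefetch(42));  // 1
    // Block 42 is never demanded, so its entry stays "not useful".
    std::printf("prefetch block 42 allowed: %d\n", allowPrefetch(42));  // 0 (filtered out)
    std::printf("prefetch block 7 allowed: %d\n", allowPrefetch(7));    // 1
    markUseful(7);                                                       // demand hit
    std::printf("prefetch block 7 allowed: %d\n", allowPrefetch(7));    // 1
    return 0;
}
```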

Hardware-Based High Performance XML Parsing Technique Using an FPGA (FPGA를 이용한 하드웨어 기반 고성능 XML 파싱 기법)

  • Lee, Kyu-hee;Seo, Byeong-seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.12
    • /
    • pp.2469-2475
    • /
    • 2015
  • Structured XML has been widely used to present services in various Web services. XML is also used for digital documents and digital signatures, and for the representation of multimedia files in email systems. An XML document must first be parsed to access its elements, and parsing is the most compute-intensive task in the use of XML documents. Most of the previous work has focused on hardware-based XML parsers in order to improve parsing performance, while little work has studied the parsing technique itself. We present a high-performance parsing technique that can be used by all XML parsers and design a hardware-based XML parser using an FPGA. The proposed parsing technique uses element analyzers instead of a state machine and performs multibyte-based element matching. As a result, our parsing technique can reduce the number of clock cycles per byte (CPB) and does not require any preprocessing, such as loading XML data into memory. Compared to other parsers, our parser achieves a 1.33 to 1.82 times improvement in system performance. Therefore, the proposed parsing technique can process XML documents in real time and is suitable for application to all XML parsers.
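  • A software analogue of the multibyte element matching idea: instead of advancing a character-by-character state machine, candidate element names are compared against the input several bytes at a time, one matcher per name. The fixed name list and 8-byte comparison width are assumptions for illustration; the paper implements its element analyzers in FPGA hardware.

```cpp
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

// Compare `name` against the input at `pos`, using 8-byte-wide comparisons first.
bool matchElement(const std::string& xml, size_t pos, const char* name) {
    size_t len = std::strlen(name);
    if (pos + len > xml.size()) return false;
    size_t i = 0;
    for (; i + 8 <= len; i += 8)
        if (std::memcmp(xml.data() + pos + i, name + i, 8) != 0) return false;
    return std::memcmp(xml.data() + pos + i, name + i, len - i) == 0;
}

int main() {
    const std::vector<const char*> elements = {"transaction", "signature", "item"};
    const std::string xml = "<order><item>book</item><signature>abc</signature></order>";

    for (size_t pos = 0; pos < xml.size(); ++pos) {
        if (xml[pos] != '<' || xml[pos + 1] == '/') continue;    // start tags only
        for (const char* name : elements)                         // one "analyzer" per name
            if (matchElement(xml, pos + 1, name))
                std::printf("matched element <%s> at offset %zu\n", name, pos);
    }
    return 0;
}
```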