• Title/Summary/Keyword: memory efficiency

Memory-based Pattern Completion in Database Semantics

  • Hausser Roland
    • Language and Information
    • /
    • v.9 no.1
    • /
    • pp.69-92
    • /
    • 2005
  • Pattern recognition in cognitive agents is based on (i) the uninterpreted input data (e.g. parameter values) provided by the agent's hardware devices and (ii) the interpreted patterns (e.g. templates) provided by the agent's memory. Computationally, the task consists in finding the memory data corresponding best to the input data, for any given input. Once the best-fitting memory data have been found, the input is recognized by applying to it the interpretation that happens to be stored with the memorized pattern. This paper presents a fast-converging procedure which starts from a few initially recognized items and then analyzes the remainder of the input by systematically checking for items shown by memory to have been related to the initial items in previous encounters. In this way, known patterns are tried first, and only when they have been exhausted is an elementary exploration of the input commenced. Efficiency is improved further by choosing the candidate to be tested next according to frequency.
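
A rough sketch of this memory-first completion strategy (illustrative only; the item names, the association store, and the frequency counts are assumptions, not the paper's actual Database Semantics structures):

```python
from collections import Counter

# Illustrative memory: for each known item, a Counter of items that co-occurred
# with it in previous encounters (frequency = number of past co-occurrences).
ASSOCIATIONS = {
    "stem":  Counter({"leaf": 12, "petal": 7}),
    "leaf":  Counter({"stem": 12, "vein": 5}),
    "petal": Counter({"stem": 7}),
}

def complete_pattern(input_items, seed_items):
    """Recognize input items by expanding from a few initially recognized seeds,
    trying memory-related candidates first, ordered by past frequency."""
    recognized = set(seed_items)
    unexplained = set(input_items) - recognized
    frontier = list(seed_items)
    while frontier and unexplained:
        item = frontier.pop(0)
        # Try candidates known (from memory) to be related to this item,
        # most frequent first.
        for candidate, _freq in ASSOCIATIONS.get(item, Counter()).most_common():
            if candidate in unexplained:
                recognized.add(candidate)
                unexplained.discard(candidate)
                frontier.append(candidate)
    # Only when known associations are exhausted, fall back to elementary
    # (item-by-item) exploration of whatever remains.
    for item in sorted(unexplained):
        recognized.add(item)
    return recognized

print(complete_pattern({"stem", "leaf", "petal", "vein"}, {"stem"}))
```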

Locally weighted linear regression prefetching method for hybrid memory system (하이브리드 메모리 시스템의 지역 가중 선형회귀 프리페치 방법)

  • Tang, Qian;Kim, Jeong-Geun;Kim, Shin-Dug
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.11a
    • /
    • pp.12-15
    • /
    • 2020
  • Data access characteristics can directly affect the efficiency of system execution. This research aims to design an accurate predictor by using historical memory access information, so that highly accessed data can be migrated from low-speed storage (SSD/HDD) to high-speed memory (Memory/CPU Cache) in advance, thereby reducing data access latency and further improving overall performance. For this goal, we design a locally weighted linear regression prefetch scheme to cope with irregular access patterns in large graph processing applications for a DRAM-PCM hybrid memory structure. By analyzing the test results, the appropriate structural parameters can be selected, which greatly improves the cache prefetching performance, resulting in overall performance improvement.
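
A minimal NumPy sketch of locally weighted linear regression used as an address predictor (illustrative; the Gaussian kernel, bandwidth tau, and the toy access history are assumptions, not the paper's prefetcher parameters):

```python
import numpy as np

def lwlr_predict(xs, ys, x_query, tau=2.0):
    """Locally weighted linear regression: fit a line around x_query,
    weighting nearby history points more heavily, and return the prediction."""
    X = np.column_stack([np.ones_like(xs), xs])        # design matrix [1, x]
    w = np.exp(-((xs - x_query) ** 2) / (2 * tau**2))   # Gaussian locality weights
    W = np.diag(w)
    theta = np.linalg.pinv(X.T @ W @ X) @ X.T @ W @ ys  # weighted least squares
    return np.array([1.0, x_query]) @ theta

# Toy access history: logical time -> accessed (cache-line) address.
times = np.arange(8, dtype=float)
addrs = np.array([100, 104, 108, 116, 120, 124, 132, 136], dtype=float)

# Predict the address likely to be accessed at the next time step and
# issue a prefetch for it (here we just print the predicted line).
next_addr = lwlr_predict(times, addrs, x_query=8.0)
print("prefetch candidate address:", round(next_addr))
```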

Comparison of Fall Detection Systems Based on YOLOPose and Long Short-Term Memory

  • Seung Su Jeong;Nam Ho Kim;Yun Seop Yu
    • Journal of information and communication convergence engineering
    • /
    • v.22 no.2
    • /
    • pp.139-144
    • /
    • 2024
  • In this study, four types of fall detection systems - designed with YOLOPose, principal component analysis (PCA), convolutional neural network (CNN), and long short-term memory (LSTM) architectures - were developed and compared in the detection of everyday falls. The experimental dataset encompassed seven types of activities: walking, lying, jumping, jumping in activities of daily living, falling backward, falling forward, and falling sideways. Keypoints extracted from YOLOPose were entered into the following architectures: RAW-LSTM, PCA-LSTM, RAW-PCA-LSTM, and PCA-CNN-LSTM. For the PCA-based architectures, the reduced input size resulting from dimensionality reduction enhanced operational efficiency in terms of computational time and memory, at the cost of decreased accuracy. In contrast, the addition of a CNN resulted in higher complexity and lower accuracy. The RAW-LSTM architecture, which included neither PCA nor a CNN, had the fewest parameters, which resulted in the best computational time and memory usage while also achieving the highest accuracy.
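
A minimal sketch of the RAW-LSTM versus PCA-LSTM idea using Keras and scikit-learn (the window length, keypoint dimensionality, layer sizes, and random stand-in data are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.decomposition import PCA
import tensorflow as tf

# Assumed shapes: 30-frame windows of 17 keypoints with (x, y) = 34 features.
FRAMES, RAW_DIM, PCA_DIM = 30, 34, 8

def build_lstm(input_dim):
    """LSTM classifier: keypoint sequence -> LSTM -> one of 7 activity classes."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(FRAMES, input_dim)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(7, activation="softmax"),
    ])

# Toy data standing in for keypoint sequences extracted by YOLOPose.
x = np.random.rand(16, FRAMES, RAW_DIM).astype("float32")

raw_lstm = build_lstm(RAW_DIM)                       # RAW-LSTM: no reduction

# PCA-LSTM: reduce each frame's 34 features to PCA_DIM before the LSTM,
# shrinking the input size (less computation and memory per step).
pca = PCA(n_components=PCA_DIM).fit(x.reshape(-1, RAW_DIM))
x_reduced = pca.transform(x.reshape(-1, RAW_DIM)).reshape(16, FRAMES, PCA_DIM)
pca_lstm = build_lstm(PCA_DIM)

print(raw_lstm.count_params(), pca_lstm.count_params())  # compare model sizes
```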

An Efficient Memory Allocation Scheme for Space Constrained Sensor Operating Systems (공간 제약적인 센서 운영체제를 위한 효율적인 메모리 할당 기법)

  • Yi Sang-Ho;Min Hong;Heo Jun-Youg;Cho Yoo-Kun;Hong Ji-Man
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.9
    • /
    • pp.626-633
    • /
    • 2006
  • Wireless sensor networks are sensing, computing, and communication infrastructures that allow us to monitor, instrument, observe, and respond to phenomena in harsh environments. Sensor operating systems that run on tiny sensor nodes are the key to the performance of the distributed computing environment for wireless sensor networks. Therefore, sensor operating systems should operate efficiently in terms of energy consumption and resource management. In this paper, we present an efficient memory allocation scheme to improve the time and space efficiency of memory management in sensor operating systems. Our experimental results show that the proposed scheme performs efficiently in both time and space compared with existing memory allocation mechanisms.
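
The abstract does not detail the scheme itself; as a point of reference only, a minimal fixed-size block allocator of the kind commonly used as a baseline in space-constrained systems might look like this (block count and size are arbitrary, and this is not the paper's scheme):

```python
# A tiny static pool split into equal blocks, tracked by a free flag per block.
POOL_BLOCKS, BLOCK_SIZE = 16, 32                 # assumed: 16 blocks x 32 bytes
pool = bytearray(POOL_BLOCKS * BLOCK_SIZE)
free = [True] * POOL_BLOCKS

def alloc():
    """Return the index of a free block, or None if the pool is exhausted."""
    for i, is_free in enumerate(free):
        if is_free:
            free[i] = False
            return i
    return None

def dealloc(i):
    """Mark a block as free again (equal-size blocks need no coalescing)."""
    free[i] = True

b = alloc()
print("allocated block", b, "at offset", b * BLOCK_SIZE)
dealloc(b)
```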

Quantitative comparison and analysis of next generation mobile memory technologies (차세대 모바일 메모리 기술의 정량적 비교 및 분석)

  • Yoon, Changho;Moon, Byungin;Kong, Joonho
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.13 no.4
    • /
    • pp.40-51
    • /
    • 2017
  • Recently, as mobile workloads are becoming more data-intensive, high data bandwidth is required of mobile memory, which also consumes a non-negligible share of system energy. A variety of research efforts and technologies are under development to improve and optimize mobile memory technologies. However, a comprehensive study of the latest mobile memory technologies (LPDDR or Wide I/O) has not yet been performed. To construct high-performance and energy-efficient mobile memory systems, quantitative and detailed analysis of these technologies is crucial. In this paper, we simulate a computer system that adopts mobile DRAM technologies (Wide I/O and LPDDR3). Based on our detailed and comprehensive results, we analyze important factors that affect the performance and energy efficiency of mobile DRAM technologies and show which parts can be improved to construct better systems.
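
As a purely illustrative calculation of the kind of comparison involved (the interface widths, transfer rates, and power figures below are placeholders, not the paper's simulated values):

```python
def bandwidth_gbps(bus_width_bits, transfer_rate_mtps):
    """Peak bandwidth in GB/s from bus width and transfer rate (MT/s)."""
    return bus_width_bits * transfer_rate_mtps * 1e6 / 8 / 1e9

def energy_per_bit_pj(power_mw, bw_gbps):
    """Energy efficiency in pJ/bit when transferring at peak bandwidth."""
    return (power_mw * 1e-3) / (bw_gbps * 1e9 * 8) * 1e12

# Hypothetical LPDDR3-like config: 32-bit channel at 1600 MT/s, 300 mW.
# Hypothetical Wide I/O-like config: 512-bit interface at 200 MT/s, 250 mW.
for name, width, rate, power in [("LPDDR3-like", 32, 1600, 300),
                                 ("Wide I/O-like", 512, 200, 250)]:
    bw = bandwidth_gbps(width, rate)
    print(f"{name}: {bw:.1f} GB/s, {energy_per_bit_pj(power, bw):.2f} pJ/bit")
```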

A Locality-Aware Write Filter Cache for Energy Reduction of STTRAM-Based L1 Data Cache

  • Kong, Joonho
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.16 no.1
    • /
    • pp.80-90
    • /
    • 2016
  • Thanks to superior leakage energy efficiency compared to SRAM cells, STTRAM cells are considered a promising alternative for a memory element in on-chip caches. However, the main disadvantage of STTRAM cells is their high write energy and latency. In this paper, we propose a low-cost write filter (WF) cache which resides between the load/store queue and the STTRAM-based L1 data cache. To maximize the efficiency of the WF cache, the line allocation and access policies are optimized for reducing the energy consumption of the STTRAM-based L1 data cache. By efficiently filtering the write operations to the STTRAM-based L1 data cache, our proposed WF cache reduces its energy consumption by up to 43.0% compared to the case without the WF cache. In addition, thanks to the fast hit latency of the WF cache, performance is slightly improved, by 0.2%.
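
A toy model of the write-filtering idea (the filter size, LRU policy, and access trace are assumptions; the paper's optimized line allocation and access policies are not reproduced here):

```python
from collections import OrderedDict

# Simplified model: a tiny fully-associative write filter in front of an
# STTRAM L1 data cache. Stores from the load/store queue go to the filter;
# the expensive STTRAM write happens only when a dirty filter line is evicted.
WF_LINES = 4
wf = OrderedDict()          # line address -> data, kept in LRU order
sttram_writes = 0

def store(line_addr, data):
    global sttram_writes
    if line_addr in wf:
        wf.move_to_end(line_addr)                # hit: absorb the write
    elif len(wf) >= WF_LINES:
        victim, vdata = wf.popitem(last=False)   # evict the LRU line...
        sttram_writes += 1                       # ...and write it to STTRAM L1
    wf[line_addr] = data

# Repeated stores to a few hot lines are filtered; only evictions reach STTRAM.
for addr in [0, 1, 0, 1, 2, 0, 3, 0, 1, 4]:
    store(addr, b"x")
print("stores issued:", 10, "STTRAM writes:", sttram_writes)
```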

Performance Optimization of Parallel Algorithms

  • Hudik, Martin;Hodon, Michal
    • Journal of Communications and Networks
    • /
    • v.16 no.4
    • /
    • pp.436-446
    • /
    • 2014
  • The high intensity of research and modeling in the fields of mathematics, physics, biology, and chemistry requires new computing resources. Because of the large computational complexity of such tasks, computing time is long and costly. The most efficient way to reduce it is to adopt parallel principles. The purpose of this paper is to present the issue of parallel computing with emphasis on the analysis of parallel systems and the impact of communication delays on their efficiency and on overall execution time. The paper focuses on finite algorithms for solving systems of linear equations, namely matrix manipulation (Gauss elimination method, GEM). Algorithms are designed for architectures with shared memory (open multiprocessing, openMP), distributed memory (message passing interface, MPI), and for their combination (MPI + openMP). The properties of the algorithms were determined analytically and verified experimentally. Conclusions are drawn for theory and practice.
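
A sequential NumPy sketch of GEM; the inner row-update loop is the part that the paper's openMP and MPI versions distribute across threads or processes (the example system is arbitrary):

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))          # partial pivoting
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # The row updates below are independent of each other, so they can be
        # parallelized (#pragma omp parallel for in openMP, row blocks per
        # rank in MPI); this sketch runs them sequentially.
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_eliminate(A, b))   # expected: [ 2.  3. -1.]
```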

A Built-In Redundancy Analysis with a Minimized Binary Search Tree

  • Cho, Hyung-Jun;Kang, Woo-Heon;Kang, Sung-Ho
    • ETRI Journal
    • /
    • v.32 no.4
    • /
    • pp.638-641
    • /
    • 2010
  • With the growth of memory capacity and density, memory testing and repair with the goal of yield improvement have become more important. Therefore, the development of highly efficient redundancy analysis algorithms is essential to improve the yield rate. In this letter, we propose an improved built-in redundancy analysis (BIRA) algorithm with a minimized binary search tree built by simple calculations. The tree is constructed from the most probable branch until a solution is found. This greatly reduces the search space for a solution. The proposed BIRA algorithm achieves 100% repair efficiency and fast redundancy analysis.
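
A simplified illustration of redundancy analysis as a binary search over row/column repairs (the branch-probability ordering and tree minimization of the proposed BIRA are omitted; the fault map and spare budget are made up for the example):

```python
# Each faulty cell is repaired either by a spare row or a spare column; the
# binary tree of these choices is explored until an assignment fits the budget.
def repairable(faults, spare_rows, spare_cols,
               fixed_rows=frozenset(), fixed_cols=frozenset()):
    uncovered = [(r, c) for r, c in faults
                 if r not in fixed_rows and c not in fixed_cols]
    if not uncovered:
        return True                              # every fault is covered
    r, c = uncovered[0]
    # Branch 1: spend a spare row on this fault's row.
    if spare_rows and repairable(faults, spare_rows - 1, spare_cols,
                                 fixed_rows | {r}, fixed_cols):
        return True
    # Branch 2: spend a spare column on this fault's column.
    if spare_cols and repairable(faults, spare_rows, spare_cols - 1,
                                 fixed_rows, fixed_cols | {c}):
        return True
    return False

faults = [(0, 1), (0, 3), (2, 1), (4, 4)]
print(repairable(faults, spare_rows=1, spare_cols=2))  # True: row 0 + cols 1, 4
```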

Fixed Size Memory Pool Management Method for Mobile Game Servers (모바일 게임 서버를 위한 고정크기 메모리 풀 관리 방법)

  • Park, Seyoung;Choi, Jongsun;Choi, Jaeyoung;Kim, Eunhoe
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.4 no.9
    • /
    • pp.327-336
    • /
    • 2015
  • Mobile game servers usually perform frequent dynamic memory allocation to generate the buffers that handle clients' requests. This deteriorates the performance of game servers because it increases system workload and memory fragmentation. In this paper, we propose a fixed-size memory pool management method. The memory pool for the proposed method has a sequential memory structure based on a circular linked list. It solves the memory fragmentation problem and saves the time spent searching for memory blocks during allocation and deallocation. We showed the efficiency of the proposed method by comparing its dynamic memory allocation performance with that of the memory pool management method based on the Boost open-source library.
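
A minimal sketch of a fixed-size pool kept on a circular singly linked free list (block size and count are assumptions, not the paper's server parameters):

```python
class Block:
    __slots__ = ("data", "next")
    def __init__(self, size):
        self.data = bytearray(size)
        self.next = None

class FixedPool:
    """Fixed-size blocks kept on a circular singly linked free list."""
    def __init__(self, block_size, count):
        self.tail = None                          # tail.next is the list head
        for _ in range(count):
            self.dealloc(Block(block_size))       # seed the free list

    def alloc(self):
        """O(1): take the block at the head of the circular free list."""
        if self.tail is None:
            return None                           # pool exhausted
        head = self.tail.next
        if head is self.tail:
            self.tail = None                      # last free block taken
        else:
            self.tail.next = head.next
        return head

    def dealloc(self, blk):
        """O(1): insert the block back as the new head of the circular list."""
        if self.tail is None:
            self.tail = blk
            blk.next = blk
        else:
            blk.next = self.tail.next
            self.tail.next = blk

pool = FixedPool(block_size=256, count=4)         # e.g. per-request buffers
buf = pool.alloc()
pool.dealloc(buf)
```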

A Real-time Dynamic Storage Allocation Algorithm Supporting Various Allocation Policies (다양한 할당 정책을 지원하는 실시간 동적 메모리 할당 알고리즘)

  • 정성무
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.10B
    • /
    • pp.1648-1664
    • /
    • 2000
  • This paper proposes a real-time dynamic storage allocation algorithm, QSHF (quick-segregated-half-fit), that provides various memory allocation policies: a quick-fit policy that manages a free block list for each word size for small memory requests, a good (segregated)-fit policy that manages a free list for each proper range of sizes for medium-size requests, and a half-fit policy that manages a free list for each power-of-2 size for large requests. The proposed algorithm has time complexity O(1) and makes it easy to estimate the worst-case execution time (WCET). This paper also suggests two algorithms: one that finds the proper free list for a requested memory size in predictable time, and one that, when the found list is empty, finds the next available non-empty free list in fixed time. In order to confirm the efficiency of the proposed algorithm, we simulated the memory utilization of each memory allocation policy. The simulation results showed that each policy guarantees a constant WCET regardless of memory size, but there is a trade-off between memory utilization and list-management overhead.
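
An illustrative sketch of how such size-class indexing and a fixed-time non-empty-list lookup could work (the thresholds, word size, and bitmap scheme are assumptions, not the paper's definitions):

```python
WORD = 4
SMALL_MAX = 64          # quick fit: one list per word size up to 64 bytes
MEDIUM_MAX = 1024       # good (segregated) fit: one list per fixed-size range
RANGE = 64

def free_list_index(size):
    """Map a request size to its free-list index in constant time."""
    if size <= SMALL_MAX:                         # quick-fit region
        return (size + WORD - 1) // WORD - 1
    if size <= MEDIUM_MAX:                        # segregated-fit region
        return SMALL_MAX // WORD + (size - SMALL_MAX - 1) // RANGE
    # half-fit region: one list per power-of-two size class
    return SMALL_MAX // WORD + (MEDIUM_MAX - SMALL_MAX) // RANGE + \
           (size - 1).bit_length() - MEDIUM_MAX.bit_length() + 1

def next_nonempty(bitmap, idx):
    """Find the next non-empty free list at or above idx in fixed time,
    using a bitmap with one bit per free list (bit set = list non-empty)."""
    masked = bitmap >> idx
    return None if masked == 0 else idx + (masked & -masked).bit_length() - 1

print(free_list_index(24), free_list_index(200), free_list_index(4096))
print(next_nonempty(0b10110000, 5))   # -> 5 (list 5 is non-empty)
```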
