• Title/Abstract/Keyword: Memory Allocation Mechanism

Search results: 16 items (processing time: 0.322 seconds)

A Slot Allocated Blocking Anti-Collision Algorithm for RFID Tag Identification

  • Qing, Yang;Jiancheng, Li;Hongyi, Wang;Xianghua, Zeng;Liming, Zheng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 6 / pp.2160-2179 / 2015
  • In many Radio Frequency Identification (RFID) applications, the reader repeatedly recognizes the tags within its scope. For these applications, algorithms such as the adaptive query splitting algorithm (AQS) and the novel semi-blocking AQS (SBA) were proposed. In these algorithms, a staying tag retransmits its ID to the reader to be identified, even though the ID of the tag is already stored in the reader's memory. When the tag ID is long, the reader spends a long time identifying the staying tags. To overcome this deficiency, we propose a slot allocated blocking anti-collision algorithm (SABA). In SABA, the reader assigns a unique slot to each tag in its range by using a slot allocation mechanism. Based on the allocated slot, each staying tag replies with only a short data string to the reader during the identification process. As a result, the amount of data transmitted by the staying tags is greatly reduced and the identification rate of the reader is effectively improved. The identification rate and the amount of data transmitted by the tags in SABA are analyzed theoretically and verified by various simulations. The simulation and analysis results show that the performance of SABA is significantly superior to that of the existing algorithms.
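The slot-allocation idea in the abstract above can be illustrated with a small sketch: on first identification the reader stores a tag's full ID and hands back a slot index, and in later rounds a staying tag answers in its assigned slot with a short reply while the reader recovers the full ID from its own memory. This is a minimal illustration in C under stated assumptions; the table size, ID length, and one-byte reply are illustrative and are not the frame format or data structures defined in the paper.

```c
/* Minimal sketch of the slot-allocation idea described in the SABA abstract.
 * All names (slot_table, MAX_SLOTS, the short one-byte reply) are
 * illustrative assumptions, not the paper's actual structures or protocol. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MAX_SLOTS 64
#define ID_LEN    12          /* 96-bit EPC-style tag ID, for illustration */

typedef struct {
    int     used;
    uint8_t tag_id[ID_LEN];   /* full ID stored once, on first identification */
} slot_entry_t;

static slot_entry_t slot_table[MAX_SLOTS];

/* First identification: the tag sends its full ID and receives a slot. */
static int allocate_slot(const uint8_t *tag_id)
{
    for (int s = 0; s < MAX_SLOTS; s++) {
        if (!slot_table[s].used) {
            slot_table[s].used = 1;
            memcpy(slot_table[s].tag_id, tag_id, ID_LEN);
            return s;                 /* slot index is all the tag must remember */
        }
    }
    return -1;                        /* no free slot */
}

/* Later rounds: a staying tag answers in its slot with a short reply
 * (here a single byte) instead of retransmitting the full ID. */
static const uint8_t *identify_staying(int slot, uint8_t short_reply)
{
    if (slot < 0 || slot >= MAX_SLOTS || !slot_table[slot].used)
        return NULL;
    (void)short_reply;                /* stands in for a presence/check code */
    return slot_table[slot].tag_id;   /* reader recovers the ID from its own memory */
}

int main(void)
{
    uint8_t id[ID_LEN] = {0xE2, 0x00, 0x34};
    int slot = allocate_slot(id);
    const uint8_t *known = identify_staying(slot, 0x1);
    printf("slot %d -> tag %spresent\n", slot, known ? "" : "not ");
    return 0;
}
```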

Design and Implementation of a Concurrency Control Manager for Main Memory Databases (주기억장치 데이터베이스를 위한 동시성 제어 관리자의 설계 및 구현)

  • Kim, Sang-Wook;Jang, Yeon-Jeong;Kim, Yun-Ho;Kim, Jin-Ho;Lee, Seung-Sun;Choi, Wan
    • The Journal of Korean Institute of Communications and Information Sciences / Vol. 25, No. 4B / pp.646-680 / 2000
  • In this paper, we discuss the design and implementation of a concurrency control manager for a main memory DBMS (MMDBMS). Since an MMDBMS, unlike a disk-based DBMS, performs all data update and retrieval operations by accessing main memory only, the portion of the total cost of an update or retrieval that goes to concurrency control is fairly high. Thus, an efficient concurrency control manager greatly accelerates the performance of the entire system. Our concurrency control manager employs the 2-phase locking protocol and has the following characteristics. First, it adopts the partition, the allocation unit of main memory, as the locking granule, and thus effectively adjusts the trade-off between system concurrency and locking cost through the analysis of applications. Second, it keeps locking costs low by maintaining the lock information directly in the partition itself. Third, it provides the latch as a mechanism for physical consistency of system data. Our latch supports both shared and exclusive modes, and maximizes CPU utilization by combining the Bakery algorithm and the Unix semaphore facility. Fourth, to solve the deadlock problem, it periodically examines whether the system is in a deadlock state using lock-waiting information. In addition, we discuss various issues arising in development, such as mutual exclusion of the transaction table, mutual exclusion of indexes and system catalogs, and real-time application support.

  • PDF
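The abstract above names two concrete mechanisms: keeping lock information directly in the partition (the memory allocation unit used as the locking granule) and a shared/exclusive latch that busy-waits briefly before blocking on a Unix semaphore. The sketch below illustrates that combination in C; the partition layout, the spin limit, the use of POSIX semaphores and C11 atomics, and the simple spin-then-block loop standing in for the Bakery algorithm are all assumptions made for illustration, not the paper's implementation.

```c
/* Sketch: lock state kept in the partition header itself, with a
 * shared/exclusive latch that spins briefly and then blocks on a semaphore.
 * Layout, spin limit, and semaphore choice are illustrative assumptions. */
#include <semaphore.h>
#include <stdatomic.h>
#include <stdint.h>

#define SPIN_LIMIT 1000        /* brief busy-wait before giving up the CPU */

typedef struct partition {
    atomic_int lock_word;      /* 0 = free, >0 = shared readers, -1 = exclusive */
    sem_t      waiters;        /* blocked transactions queue here */
    uint8_t    data[4096 - sizeof(atomic_int) - sizeof(sem_t)];
} partition_t;

void partition_init(partition_t *p)
{
    atomic_init(&p->lock_word, 0);
    sem_init(&p->waiters, 0, 0);
}

/* Exclusive (X) latch: spin briefly, then block on the semaphore. */
void latch_exclusive(partition_t *p)
{
    for (;;) {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            int expected = 0;
            if (atomic_compare_exchange_weak(&p->lock_word, &expected, -1))
                return;        /* acquired */
        }
        sem_wait(&p->waiters); /* sleep until a holder releases */
    }
}

/* Shared (S) latch: any number of readers, as long as no writer holds it. */
void latch_shared(partition_t *p)
{
    for (;;) {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            int cur = atomic_load(&p->lock_word);
            if (cur >= 0 &&
                atomic_compare_exchange_weak(&p->lock_word, &cur, cur + 1))
                return;        /* joined as one more reader */
        }
        sem_wait(&p->waiters);
    }
}

void latch_release(partition_t *p)
{
    int cur = atomic_load(&p->lock_word);
    if (cur == -1)
        atomic_store(&p->lock_word, 0);          /* writer leaves */
    else
        atomic_fetch_sub(&p->lock_word, 1);      /* one reader leaves */
    sem_post(&p->waiters);                       /* wake one waiter to retry */
}
```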

Design and Performance Evaluation of Software RAID for Video-on-Demand Servers (주문형 비디오 서버를 위한 소프트웨어 RAID의 설계 및 성능 분석)

  • Koh, Jeong-Gook
    • Journal of the Korean Society of Industry Convergence / Vol. 3, No. 2 / pp.167-178 / 2000
  • Software RAID (Redundant Arrays of Inexpensive Disks) is defined as a storage system that provides the capabilities of hardware RAID and guarantees high reliability as well as high performance. In this paper, we propose an enhanced disk scheduling algorithm and a scheme to guarantee the reliability of data. We also design and implement software RAID using these mechanisms to develop a storage system for multimedia applications. Because the proposed algorithm improves on a defect of the traditional GSS algorithm, in which disk I/O requests are served in a fixed order, it minimizes buffer consumption and reduces the number of deadline misses through service group exchange. The software RAID also alleviates data copy overhead during disk service by sharing kernel memory. Even though the implemented software RAID uses the parity approach to guarantee the reliability of data, it adopts a different data allocation scheme. This reduces the disk accesses needed for the logical XOR operations that compute the new parity data on write operations. In the performance evaluation experiments, we found that the software RAID implemented with the proposed schemes can be used as a storage system for small-sized video-on-demand servers.

  • PDF
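The parity point in the abstract above is easiest to see with the standard read-modify-write update used by parity-based RAID on a small write: the new parity can be computed from the old data block, the new data block, and the old parity block, so the whole stripe does not have to be read. The sketch below shows only this textbook relation; it is not the paper's specific data allocation scheme.

```c
/* Textbook read-modify-write parity update for parity-based RAID:
 * new_parity = old_parity XOR old_data XOR new_data.
 * Only the old data block and the old parity block must be read,
 * rather than every data block in the stripe. Illustrative only. */
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096

void update_parity(const uint8_t *old_data, const uint8_t *new_data,
                   const uint8_t *old_parity, uint8_t *new_parity)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        new_parity[i] = old_parity[i] ^ old_data[i] ^ new_data[i];
}
```

With this relation a small write costs two reads and two writes instead of reading every data block in the stripe, which is the kind of XOR-related disk-access reduction the abstract refers to.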

A Study on the Analysis Method for API Wrapping that Is Difficult to Normalize in the Latest Version of Themida (최신 버전의 Themida가 보이는 정규화가 어려운 API 난독화 분석방안 연구)

  • Lee, Jae-hwi;Lee, Byung-hee;Cho, Sang-hyun
    • Journal of the Korea Institute of Information Security & Cryptology / Vol. 29, No. 6 / pp.1375-1382 / 2019
  • The latest version of the commercial protector Themida disables the use of virtual memory allocation, which previously provided the initial data to be tracked, so the normalized unpacking mechanisms from earlier studies can no longer be applied. In addition, compared to the previous version, in which many values were determined during execution and were easy to track dynamically, values are now fixed at the time the protector is applied, which makes dynamic tracking difficult. We examine the techniques by which the latest version of Themida makes it difficult to normalize the API wrapping process, and we discuss the possibility of applying unpacking techniques to further develop an automated unpacking system.

Dynamic Threads Stack Management Scheme for Sensor Operating Systems under Space-Constrained (공간 제약하의 센서 운영체제를 위한 동적 쓰레드 스택관리 기법)

  • Yi, Sang-Ho;Cho, Yoo-Kun;Hong, Ji-Man
    • Journal of KIISE: Computer Systems and Theory / Vol. 34, No. 11 / pp.572-580 / 2007
  • Wireless sensor networks are sensing, computing, and communication infrastructures that allow us to monitor, instrument, observe, and respond to phenomena in harsh environments. Generally, wireless sensor networks are composed of many deployed sensor nodes that are designed to be very cost-efficient in terms of production cost. For example, UC Berkeley's MICA motes have only an 8-bit CPU, 4 KB of RAM, and 128 KB of flash memory. Therefore, sensor operating systems that run on the sensor nodes should operate efficiently in terms of resource management. In this paper, we present a dynamic thread stack management scheme for space-constrained, multi-threaded sensor operating systems. In this scheme, the necessary stack space of each function is measured at compile time. This information is then used to dynamically allocate and release each function's stack space at run time. The scheme was implemented in the Nano-Qplus sensor operating system. Our experimental results show that the proposed scheme outperforms the existing fixed-size stack allocation mechanism.
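The scheme described above has two halves: a compile-time measurement of each function's worst-case stack need, and a run-time allocator that reserves exactly that much from a small shared pool on call and returns it on exit. The C sketch below illustrates the idea; the table contents, pool size, and function names are hypothetical and are not Nano-Qplus internals.

```c
/* Minimal sketch of dynamic per-function stack management: a compile-time
 * table of measured stack needs, and a tiny run-time pool that reserves and
 * releases exactly that amount on function entry and exit. The table
 * entries, pool size, and names are illustrative assumptions. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define STACK_POOL_SIZE 2048          /* tiny RAM budget, MICA-mote scale */

static uint8_t stack_pool[STACK_POOL_SIZE];
static size_t  pool_top;              /* simple stack-like allocator */

/* Filled in at compile time by analysing each function's frame size. */
typedef struct {
    const char *func;
    size_t      stack_bytes;
} stack_req_t;

static const stack_req_t stack_table[] = {
    { "sample_sensor",  64 },
    { "send_packet",   160 },
};

/* Look up the measured requirement for a function. */
size_t stack_need(const char *func)
{
    for (size_t i = 0; i < sizeof(stack_table) / sizeof(stack_table[0]); i++)
        if (strcmp(stack_table[i].func, func) == 0)
            return stack_table[i].stack_bytes;
    return 0;
}

/* Called on function entry: reserve exactly the measured amount. */
void *stack_enter(size_t bytes)
{
    if (pool_top + bytes > STACK_POOL_SIZE)
        return NULL;                  /* would overflow the shared pool */
    void *frame = &stack_pool[pool_top];
    pool_top += bytes;
    return frame;
}

/* Called on function return: release the frame immediately. */
void stack_leave(size_t bytes)
{
    pool_top -= bytes;
}
```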

Dynamic Buffer Allocation Scheme for Caching in Realtime Multimedia Systems (실시간 멀티미디어 시스템에서의 캐슁을 위한 동적 버퍼 할당 기법)

  • Kwon, Jin-Baek;Yeom, Heon-Young;Lee, Kyung-Oh
    • Journal of KIISE: Computer Systems and Theory / Vol. 27, No. 4 / pp.420-430 / 2000
  • Several caching schemes for real-time multimedia systems have been proposed, but they focus only on increasing the hit ratio without providing any means to utilize the disk bandwidth saved by cache hits. One of the most important metrics in multimedia systems is the number of clients that the system can service simultaneously while guaranteeing Quality of Service (QoS). Preemptive but Safe Interval Caching (PSIC) was proposed as a caching scheme that makes it possible to provide deterministic QoS. However, it cannot adapt to changes in the system environment because it has no mechanism for changing the cache size. In this paper, we present a new caching scheme, Dynamic Interval Caching (DIC), which manages memory buffers dynamically to maximize performance and provide hiccup-free service regardless of changes in the system environment. Trace-driven simulations comparing DIC with PSIC demonstrate that DIC allocates the buffer cache optimally.

  • PDF
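The buffer-management idea behind interval caching can be sketched briefly: when two streams read the same video with a small gap between them, the blocks in that gap are kept in memory so the trailing stream is served from cache, and giving buffers to the smallest gaps first converts the most streams to cache hits per buffer. The sketch below is a generic greedy illustration of that idea, not the DIC algorithm's actual bookkeeping or its dynamic reallocation policy.

```c
/* Generic interval-caching sketch: cache the gap between a leading and a
 * trailing stream on the same video, preferring the smallest gaps so that
 * each buffer frees as much disk bandwidth as possible. The structures and
 * greedy selection are illustrative assumptions, not DIC's exact algorithm. */
#include <stdlib.h>

typedef struct {
    int video_id;
    int gap_blocks;     /* distance between leading and trailing stream */
    int cached;         /* 1 if this interval currently holds buffers */
} interval_t;

static int by_gap(const void *a, const void *b)
{
    return ((const interval_t *)a)->gap_blocks -
           ((const interval_t *)b)->gap_blocks;
}

/* Greedily give buffers to the smallest intervals until memory runs out. */
void assign_buffers(interval_t *iv, int n, int free_buffers)
{
    qsort(iv, n, sizeof(interval_t), by_gap);
    for (int i = 0; i < n; i++) {
        if (iv[i].gap_blocks <= free_buffers) {
            iv[i].cached = 1;                /* trailing stream reads from cache */
            free_buffers -= iv[i].gap_blocks;
        } else {
            iv[i].cached = 0;                /* trailing stream must use disk bandwidth */
        }
    }
}
```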