• Title/Summary/Keyword: System Latency Time Reduction


A Fast Distributed Video Decoding by Frame Adaptive Parity Bit Request Estimation (프레임간 적응적 연산을 이용한 패리티 비트의 예측에 의한 고속 분산 복호화)

  • Kim, Man-Jae;Kim, Jin-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.161-162
    • /
    • 2011
  • Recently, many research efforts have focused on DVC (Distributed Video Coding) systems for low-complexity encoders. However, feedback channel-based parity bit control is a major cause of high decoding latency. Because spatial and temporal correlation is high in video, this statistical property can be exploited when requesting parity bits for an LDPCA frame. By introducing a frame-adaptive parity bit request estimation method, this paper proposes a new approach for reducing decoding latency. Computer simulations show that the proposed method achieves about an 80% reduction in complexity compared to the conventional no-estimation method.
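
The core idea above is to predict how many parity bits the decoder will need for the current LDPCA frame from temporally correlated previous frames, so the first feedback-channel request starts near the final value instead of ramping up from a small default. Below is a minimal Python sketch of that idea under our own simplifying assumptions (a moving-average predictor over recent frames and a toy request loop); it is not the authors' actual estimator.

```python
# Minimal sketch: frame-adaptive estimation of the initial parity-bit request.
# Assumption (ours, not the paper's exact formula): the bits accepted for recent
# frames predict the next frame's demand because of temporal correlation.

def initial_request(history, step=64, window=3):
    """Estimate the starting parity-bit request for the next frame."""
    if not history:
        return step                      # no history: fall back to the smallest request
    recent = history[-window:]
    return max(step, int(sum(recent) / len(recent)))

def decode_frame(required_bits, start, step=64):
    """Toy decoder loop: keep asking for `step` more bits until decoding succeeds."""
    requested, rounds = start, 1
    while requested < required_bits:     # each extra round = one feedback-channel round trip
        requested += step
        rounds += 1
    return requested, rounds

history = []
for true_need in [830, 810, 845, 820]:   # hypothetical per-frame parity demands
    start = initial_request(history)
    accepted, rounds = decode_frame(true_need, start)
    history.append(accepted)
    print(f"need={true_need}  first request={start}  feedback rounds={rounds}")
```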


Design and Implementation of High-Performance Cryptanalysis System Based on GPUDirect RDMA (GPUDirect RDMA 기반의 고성능 암호 분석 시스템 설계 및 구현)

  • Lee, Seokmin;Shin, Youngjoo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.6
    • /
    • pp.1127-1137
    • /
    • 2022
  • Cryptanalysis and decryption techniques that exploit the parallelism of GPUs have been studied with the goal of shortening the computation time of cryptanalysis systems. These studies focus on optimizing code to speed up cryptanalysis operations on a single GPU, or simply on increasing the number of GPUs to enhance parallelism. However, using a large number of GPUs without optimizing data transmission causes longer transfer latency than using a single GPU and increases the overall computation time of the cryptanalysis system. In this paper, we investigate GPUDirect RDMA and related technologies used for high-performance data processing in deep learning and HPC research on GPU clusters, and we present a design for a high-performance cryptanalysis system based on these technologies. Furthermore, based on the suggested system topology, we present an implementation of the cryptanalysis system using password cracking and GPU reduction. Finally, we report performance evaluation results obtained by applying the high-performance technology to the implemented system and show the expected effects of the proposed design.
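
The latency argument in this abstract is that staging inter-GPU transfers through host memory adds extra hops and setup overhead, which a GPUDirect RDMA style device-to-device path avoids. The toy latency model below is our own illustration with assumed link speeds and overheads (it uses no real CUDA or RDMA API and is not the paper's measurement); it only shows why the staged path costs roughly two per-hop times plus extra setup.

```python
# Toy latency model (assumed numbers): a staged transfer crosses device->host
# and host->device with two setup overheads, while a GPUDirect-style transfer
# goes device-to-device in a single hop.

def staged_transfer_ms(bytes_, pcie_gbps=12.0, overhead_ms=0.05):
    per_hop = bytes_ / (pcie_gbps * 1e9) * 1e3     # milliseconds per PCIe hop
    return 2 * per_hop + 2 * overhead_ms           # GPU -> host -> GPU, two setups

def direct_transfer_ms(bytes_, link_gbps=12.0, overhead_ms=0.05):
    return bytes_ / (link_gbps * 1e9) * 1e3 + overhead_ms   # single direct hop

size = 256 * 1024 * 1024                            # e.g. 256 MiB of candidate keys
print(f"staged : {staged_transfer_ms(size):.2f} ms")
print(f"direct : {direct_transfer_ms(size):.2f} ms")
```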

Implementation of a Window-Masking Method and the Soft-core Processor based TDD Switching Control SoC FPGA System (윈도 마스킹 기법과 Soft-core Processor 기반 TDD 스위칭 제어 SoC 시스템 FPGA 구현)

  • Hee-Jin Yang;Jeung-Sub Lee;Han-Sle Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.3
    • /
    • pp.166-175
    • /
    • 2024
  • In this paper, the Window-Masking Method is applied to improve the performance of a TDD-based MANET (Mobile Ad-hoc Network) time-synchronization system, and the HAT (Hardware Attached Top) CPU SoM (System on Module) used for switching control is replaced by a RISC-V based soft-core MCU mounted on an FPGA that also serves as a hardware accelerator, reducing the system's weight. The design was verified through experiments. In terms of performance, the proposed technique extends the synchronization acquisition range from -50dBm~+10dBm to -60dBm~+10dBm, improves the lowest input level at which synchronization is acquired by 20% (from -50dBm to -60dBm), and reduces the detection latency by 43%, from 220ns to 125ns. In terms of weight reduction, replacing the SoM with the soft-core MCU reduced computing resources by 48%, size by 33%, and weight by 27%, an average reduction of 36%.
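
As we read the abstract, window masking restricts synchronization detection to a time window around the expected TDD switching instant, so a lower detection threshold (and hence a lower minimum input level) can be used without admitting out-of-window false peaks. The sketch below is our own toy interpretation with made-up threshold and window values, not the implemented FPGA logic.

```python
# Minimal window-masking detector sketch (our reading of the abstract):
# correlation samples are only compared against the threshold inside a window
# centred on the expected sync position; everything else is masked out.

def masked_detect(corr, expected_idx, half_window, threshold):
    """Return the index of the sync peak inside the allowed window, or None."""
    lo = max(0, expected_idx - half_window)
    hi = min(len(corr), expected_idx + half_window + 1)
    windowed = corr[lo:hi]
    peak = max(windowed)
    if peak < threshold:
        return None                       # nothing strong enough inside the window
    return lo + windowed.index(peak)      # detections outside [lo, hi) are ignored

corr = [0.1, 0.9, 0.2, 0.3, 0.8, 0.2, 0.1, 0.1]   # 0.9 is an out-of-window spur
print(masked_detect(corr, expected_idx=4, half_window=1, threshold=0.5))  # -> 4
```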

Delayed Dual Buffering: Reducing Page Fault Latency in Demand Paging for OneNAND Flash Memory (지연 이중 버퍼링: OneNAND 플래시를 이용한 페이지 반입 비용 절감 기법)

  • Joo, Yong-Soo;Park, Jae-Hyun;Chung, Sung-Woo;Chung, Eui-Young;Chang, Nae-Hyuck
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.3 s.357
    • /
    • pp.43-51
    • /
    • 2007
  • OneNAND flash combines the advantages of NAND and NOR flash and has become an alternative to NAND flash. However, the advanced features of OneNAND flash are not utilized effectively by demand paging systems designed for NAND flash. We propose delayed dual buffering, a demand paging scheme that fully exploits the random-access host interface and dual page buffers of OneNAND flash. It effectively reduces the time spent transferring a page from the OneNAND page buffer to main memory. On average, it achieves a 28.5% reduction in execution time and a 4.4% reduction in paging-system energy consumption.
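
The key mechanism is that OneNAND exposes two on-chip page buffers behind a random-access host interface, so a page fault can be served from the page buffer while the bulk buffer-to-DRAM copy is postponed. The following toy model is our own simplification of that idea (the page numbers and copy policy are illustrative), not the authors' implementation.

```python
# Toy model of delayed dual buffering (our simplification): on a page fault the
# faulting page is loaded into one of the two on-chip buffers and served from
# there; the buffer-to-DRAM copy is deferred until that buffer must be reused.

class DelayedDualBuffer:
    def __init__(self):
        self.buffers = [None, None]       # two on-chip page buffers
        self.pending = []                 # buffer indices awaiting copy to DRAM
        self.current = 0

    def page_fault(self, page_no):
        buf = self.current
        if buf in self.pending:           # buffer reuse forces the deferred copy now
            print(self.flush(buf))
        self.buffers[buf] = page_no       # NAND array -> page buffer load
        self.pending.append(buf)          # DRAM copy is *not* done yet
        self.current ^= 1                 # alternate between the two buffers
        return f"serve fault for page {page_no} from buffer {buf}"

    def flush(self, buf):
        self.pending.remove(buf)
        return f"copy buffer {buf} (page {self.buffers[buf]}) to DRAM"

d = DelayedDualBuffer()
for p in [3, 7, 11]:
    print(d.page_fault(p))
```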

Collaborative Streamlined On-Chip Software Architecture on Heterogenous Multi-Cores for Low-Power Reactive Control in Automotive Embedded Processors (차량용 임베디드 프로세서에서 저전력 반응적 제어를 위한 이기종 멀티코어 협력적 스트리밍 온-칩 소프트웨어 구조)

  • Kwon, Jisu;Park, Daejin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.6
    • /
    • pp.375-382
    • /
    • 2022
  • This paper proposes a multi-core cooperative computing structure that takes into account the heterogeneous characteristics of automotive embedded on-chip software. Automotive embedded software has heterogeneous execution flows for driving various hardware peripherals. Software developed with a homogeneous execution flow, without considering these properties, incurs inefficient overhead due to core latency and load. The proposed method was evaluated on a target board carrying an automotive MCU (micro-controller unit) with multiple built-in cores. We demonstrate an overhead reduction when software containing common embedded tasks, such as ADC sampling, DSP operations, and communication interfaces, is implemented as heterogeneous execution flows. With the proposed method, the embedded software can exploit the idle states that occur between heterogeneous tasks to make efficient use of the resources on the board. In the experiments, the power consumption of the board decreased by 42.11% compared to the baseline, and the time required to process the same amount of sampled data was reduced by 27.09%. These results validate the efficiency of the proposed multi-core cooperative heterogeneous embedded software execution technique.
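
The reported savings come from letting heterogeneous tasks run as a streaming pipeline across cores, so each core blocks on its input and can enter an idle state instead of busy-polling. The host-side Python analogy below (threads and queues standing in for cores and on-chip channels; entirely our own construction, not the MCU firmware) illustrates that producer/consumer structure.

```python
# Host-side analogy: heterogeneous stages run as a streaming pipeline so each
# stage blocks (idles) on its input queue instead of polling.
import queue, threading

samples, results = queue.Queue(maxsize=8), queue.Queue()

def sampler_core():                       # stands in for the ADC-sampling core
    for i in range(16):
        samples.put(float(i))
    samples.put(None)                     # end-of-stream marker

def dsp_core():                           # stands in for the DSP/processing core
    while True:
        x = samples.get()                 # blocks -> core could enter an idle state
        if x is None:
            results.put(None)
            break
        results.put(x * x)                # placeholder "DSP" operation

threading.Thread(target=sampler_core).start()
threading.Thread(target=dsp_core).start()

while (y := results.get()) is not None:
    print(y)
```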

A Novel Duty Cycle Based Cross Layer Model for Energy Efficient Routing in IWSN Based IoT Application

  • Singh, Ghanshyam;Joshi, Pallavi;Raghuvanshi, Ajay Singh
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.6
    • /
    • pp.1849-1876
    • /
    • 2022
  • Wireless Sensor Networks (WSNs) are an integral part of the Internet of Things (IoT) for collecting real-time data in the field, with many applications in Industry 4.0 and smart cities. The task of the nodes is to sense the environment and send the relevant information over the internet. Though this task seems straightforward, it is subject to issues such as energy consumption, delay, and throughput. To address these issues efficiently, this work develops a cross-layer model that optimizes across the MAC and network layers of the OSI model for WSNs. A high duty-cycle value is selected for nodes to control delay and further enhance data transmission reliability. A node measurement prediction system based on the Kalman filter is introduced, which uses a covariance-based constraint to decide the scheduling scheme of the nodes. The duty-cycle-based node scheduling is combined with a greedy data forwarding scheme. The proposed Duty Cycle-based Greedy Routing (DCGR) scheme aims to minimize the hop count, thereby mitigating the energy consumption rate. The proposed algorithm is tested on a real-world wastewater treatment dataset. Compared with similar pre-existing schemes, the proposed method shows an 87.5% increase in energy efficiency and a 61% reduction in network latency.
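
The scheduling element of DCGR, as described above, keeps a Kalman-filter estimate of each node's measurement and lets a node stay asleep while the prediction covariance remains acceptable. The scalar sketch below uses our own noise values, threshold, and a simple random-walk model; it only illustrates the covariance-based sleep/transmit decision, not the paper's exact filter or routing.

```python
# Scalar Kalman-filter sketch of covariance-driven duty cycling (assumed values).

def predict(x, p, q=0.01):
    """Time update for a random-walk model: state unchanged, uncertainty grows."""
    return x, p + q

def update(x_pred, p_pred, z, r=0.5):
    """Measurement update with observation noise variance r."""
    k = p_pred / (p_pred + r)             # Kalman gain
    return x_pred + k * (z - x_pred), (1 - k) * p_pred

x, p, threshold = 20.0, 1.0, 0.15
for z in [20.1, 20.3, 19.9, 20.2, 20.4, 20.1]:   # hypothetical sensor readings
    x, p = predict(x, p)
    if p < threshold:                     # prediction still trustworthy
        print(f"covariance={p:.3f} -> node sleeps, sink uses prediction {x:.2f}")
    else:                                 # uncertainty too high: wake and report
        x, p = update(x, p, z)
        print(f"covariance={p:.3f} -> node transmits z={z}, estimate {x:.2f}")
```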

Fuzzy Logic-based Grid Job Scheduling Model for Computational Grid (계산 그리드를 위한 퍼지로직 기반의 그리드 작업 스케줄링 모델)

  • Park, Yang-Jae;Jang, Sung-Ho;Cho, Kyu-Cheol;Lee, Jong-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.5
    • /
    • pp.49-56
    • /
    • 2007
  • This paper deals with grid job allocation and grid resource scheduling to provide a stable and faster job processing service to grid users. We propose a fuzzy logic-based grid job scheduling model for effective job scheduling in computational grid environments. The model measures the resource efficiency of all grid resources with a fuzzy logic system driven by diverse input parameters such as CPU speed and network latency, and divides the resources into several groups by efficiency. Jobs are then allocated to resources in the group with the highest efficiency. For performance evaluation, we implemented the model on the DEVS modeling and simulation environment and measured the reductions in turnaround time, job loss, and communication messages relative to existing job scheduling models such as the random scheduling model and the MCT (Minimum Completion Time) model. The experimental results show that the proposed model improves the QoS of the grid job processing service.
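
To make the fuzzy-efficiency step concrete, the sketch below fuzzifies CPU speed and network latency with triangular membership functions, applies a small rule table, and defuzzifies to a single efficiency score used to pick the best resource group. The membership functions, rules, and resource values are all our own assumptions, not the paper's rule base.

```python
# Toy fuzzy-efficiency scorer (assumed membership functions and rules).

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def efficiency(cpu_ghz, latency_ms):
    fast, slow = tri(cpu_ghz, 1.5, 3.5, 5.5), tri(cpu_ghz, -0.5, 1.0, 2.5)
    low, high = tri(latency_ms, -50, 0, 100), tri(latency_ms, 50, 200, 350)
    # Rule table (assumed): fast & low-latency -> 1.0, mixed -> 0.5, slow & high -> 0.0
    rules = [(min(fast, low), 1.0), (min(fast, high), 0.5),
             (min(slow, low), 0.5), (min(slow, high), 0.0)]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0      # weighted-average defuzzification

resources = {"gridA": (3.2, 20), "gridB": (1.2, 150), "gridC": (2.8, 60)}
scores = {name: round(efficiency(*spec), 3) for name, spec in resources.items()}
best = max(scores, key=scores.get)
print(scores, "-> schedule next job on", best)
```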


Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, focusing mainly on increasing speed to meet growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To deliver these services, reduced latency and high reliability are critical for real-time operation, on top of high data rates. 5G therefore targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/km². In particular, intelligent traffic control systems and services based on Vehicle-to-X (V2X) communication depend heavily on low delay and high reliability for real-time operation. 5G uses high carrier frequencies of 3.5 GHz and 28 GHz; these waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use, and it is difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, must control the delay-related tree structure available in the event of an emergency during autonomous driving, and in such scenarios the network architecture that handles in-vehicle information is a major determinant of delay. Since a conventional centralized SDN struggles to meet the desired delay level, the appropriate size of an SDN for information processing should be studied. SDNs therefore need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a split SDN rather than the existing centralized structure, even under worst-case conditions. In these networks, where vehicles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the SDN data processing time are highly correlated with the overall delay. Of these, the RTD is not a significant factor because the link is fast enough to keep it under 1 ms, but the information change cycle and the SDN processing time strongly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly, so delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, because the 5G data rate is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. We further assume 5G small cells with radii of 50~250 m and vehicle speeds of 30~200 km/h in order to examine the network architecture that minimizes the delay.
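
The simulation variables named above (cell radius, vehicle speed, information change cycle, SDN processing time, RTD) can be related by simple arithmetic: how long a vehicle dwells inside one small cell, and a worst-case bound on the age of the information a vehicle acts on. The back-of-the-envelope sketch below uses our own example numbers and is not the paper's simulation model.

```python
# Back-of-the-envelope sketch (assumed numbers, not the paper's results).

def dwell_time_s(cell_radius_m, speed_kmh):
    """Time to cross a cell diameter at the given speed."""
    return (2 * cell_radius_m) / (speed_kmh / 3.6)

def worst_case_age_ms(update_cycle_ms, sdn_processing_ms, rtd_ms=1.0):
    """Upper bound: a change can wait a full cycle, then be processed and delivered."""
    return update_cycle_ms + sdn_processing_ms + rtd_ms

for radius, speed in [(50, 200), (250, 30)]:          # the abstract's cell/speed extremes
    print(f"radius={radius} m, speed={speed} km/h -> dwell {dwell_time_s(radius, speed):.1f} s")
print("worst-case info age:", worst_case_age_ms(update_cycle_ms=10, sdn_processing_ms=5), "ms")
```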

Smart Fog : Advanced Fog Server-centric Things Abstraction Framework for Multi-service IoT System (Smart Fog : 다중 서비스 사물 인터넷 시스템을 위한 포그 서버 중심 사물 추상화 프레임워크)

  • Hong, Gyeonghwan;Park, Eunsoo;Choi, Sihoon;Shin, Dongkun
    • Journal of KIISE
    • /
    • v.43 no.6
    • /
    • pp.710-717
    • /
    • 2016
  • Recently, several things-abstraction frameworks have been proposed to implement multi-service Internet of Things (IoT) systems, in which various IoT services share the same thing devices. Distributed things abstraction suffers from an IoT service duplication problem, which aggravates the power consumption of mobile devices and the network traffic. Cloud server-centric things abstraction, on the other hand, cannot support real-time interaction because of long network delays, while existing fog server-centric things abstraction is limited by insufficient IoT interfaces. In this paper, we propose Smart Fog, a fog server-centric things abstraction framework that resolves these problems. Smart Fog consists of software modules that operate the Smart Gateway and three interfaces, and is implemented on the IoTivity framework and the OIC standard. We built a smart home prototype on an Odroid-XU3 embedded board using Smart Fog and evaluated its network performance and energy efficiency. The experimental results show that Smart Fog achieves network latency short enough for real-time interaction, and reduces network traffic by 74% and mobile-device power consumption by 21% compared to distributed things abstraction.
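
The duplication problem mentioned above arises when every IoT service opens its own connection to each thing device; a fog-server-centric gateway registers each device once and fans its readings out to all subscribed services. The sketch below is our own minimal illustration of that pattern and does not use the Smart Fog, IoTivity, or OIC APIs.

```python
# Minimal fog-gateway sketch (our illustration): each physical thing is
# registered once and its readings are shared by every subscribed service,
# avoiding duplicate per-service connections to the device.

class FogGateway:
    def __init__(self):
        self.things = {}                  # thing id -> read callback
        self.subscribers = {}             # thing id -> list of (service, handler)

    def register_thing(self, thing_id, read_fn):
        self.things[thing_id] = read_fn

    def subscribe(self, service_name, thing_id, handler):
        self.subscribers.setdefault(thing_id, []).append((service_name, handler))

    def poll(self):
        for thing_id, read_fn in self.things.items():
            value = read_fn()             # one device read, shared by all services
            for service, handler in self.subscribers.get(thing_id, []):
                handler(service, value)

gw = FogGateway()
gw.register_thing("temp-sensor", lambda: 23.5)       # hypothetical device
gw.subscribe("hvac-control", "temp-sensor", lambda s, v: print(s, "got", v))
gw.subscribe("energy-monitor", "temp-sensor", lambda s, v: print(s, "got", v))
gw.poll()
```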

Jet Lag and Circadian Rhythms (비행시차와 일중리듬)

  • Kim, Leen
    • Sleep Medicine and Psychophysiology
    • /
    • v.4 no.1
    • /
    • pp.57-65
    • /
    • 1997
  • As jet lag from modern air travel continues to spread, there has been exponential growth in popular explanations of jet lag and recommendations for curing it. Some of this attention is misdirected, and many of the suggested solutions are misinformed. The author reviews the basic science of jet lag and its practical consequences. Jet lag symptoms stem from several factors, including high-altitude flying, the lag effect, and sleep loss before departure and on the aircraft, especially during night flights. Jet lag has three major components: external desynchronization, internal desynchronization, and sleep loss. Although external desynchronization is the major culprit, it is not at all uncommon for travelers to have difficulty falling asleep or remaining asleep because of gastrointestinal distress, uncooperative bladders, or nagging headaches; such unwanted intrusions most likely reflect the general influence of internal desynchronization. Data from free-running subjects have revealed that sleep tendency, sleepiness, the spontaneous duration of sleep, and REM sleep propensity each vary markedly with the endogenous circadian phase of the temperature cycle, despite the fact that the average period of the sleep-wake cycle differs from that of the temperature cycle under these conditions. However, whereas the first occurrence of slow-wave sleep is usually associated with a fall in temperature, the amount of SWS is determined primarily by the length of prior wakefulness and not by circadian phase. Another factor to be considered for flight in either direction is the amount of prior sleep loss or time awake; an increase in either would be expected to reduce initial sleep latency and enhance the amount of SWS. By combining what we now know about the circadian characteristics of sleep and the homeostatic process, many of the diverse findings about sleep after transmeridian flight can be explained. The severity of jet lag is directly related to two major variables that determine the reaction of the circadian system to any transmeridian flight: the direction of flight and the number of time zones crossed. A remaining factor is individual differences in resynchronization. After a long flight, the circadian timing system and the homeostatic process can combine to produce a considerable reduction in well-being. The author suggests that, by being exposed to local zeitgebers and by staying awake long enough to delay sleep until the local night, sleep improves rapidly as resynchronization follows the time-zone change.
