• Title/Summary/Keyword: Computing time-delay

Emotion-aware Task Scheduling for Autonomous Vehicles in Software-defined Edge Networks

  • Sun, Mengmeng;Zhang, Lianming;Mei, Jing;Dong, Pingping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.11 / pp.3523-3543 / 2022
  • Autonomous vehicles are increasingly regarded as the mainstream trend in the future development of the automobile industry. Autonomous driving networks generate many computation-intensive, delay-sensitive tasks, while the storage space, computing power, and battery capacity of vehicle terminals cannot meet these tasks' resource requirements. In this paper, we focus on the task scheduling problem of autonomous driving in software-defined edge networks. By analyzing the computation-intensive, delay-sensitive tasks of autonomous vehicles, we propose an emotion model that reflects task urgency and changes with execution time, together with an optimal base station (BS) task scheduling (OBSTS) algorithm. Task sentiment changes with the length of time that tasks of different urgency levels remain in the queue, and the algorithm uses it as the performance indicator for scheduling decisions. Experimental results show that OBSTS meets the intensive, delay-sensitive resource requirements of vehicle terminals more effectively and improves the user service experience.
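
The abstract leaves the emotion model's exact form to the paper; a minimal sketch of the idea, assuming a linear, urgency-weighted growth of "emotion" with queue residence time (the names and the growth law below are illustrative assumptions, not the authors' model):

```python
# Hedged sketch: a task's "emotion" (impatience) is assumed to grow with its
# queue residence time, scaled by its urgency; the BS serves the most
# emotional task first.

class Task:
    def __init__(self, task_id, urgency, enqueue_time):
        self.task_id = task_id
        self.urgency = urgency            # higher = more delay-sensitive
        self.enqueue_time = enqueue_time  # seconds

    def emotion(self, now):
        # Hypothetical linear model: impatience accumulates while waiting.
        return self.urgency * (now - self.enqueue_time)

def schedule_next(queue, now):
    # Pick the queued task whose accumulated emotion is currently highest.
    return max(queue, key=lambda t: t.emotion(now))
```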

Enhancing Service Availability in Multi-Access Edge Computing with Deep Q-Learning

  • Lusungu Josh Mwasinga;Syed Muhammad Raza;Duc-Tai Le;Moonseong Kim;Hyunseung Choo
    • Journal of Internet Computing and Services / v.24 no.2 / pp.1-10 / 2023
  • The Multi-access Edge Computing (MEC) paradigm equips network-edge telecommunication infrastructure with cloud computing resources. It seeks to transform the edge into an IT services platform that hosts resource-intensive, delay-stringent services for mobile users, thereby significantly enhancing the perceived quality of experience. However, erratic user mobility impedes seamless service continuity and the satisfaction of delay-stringent service requirements, especially as users roam farther from the serving MEC resource, which deteriorates the quality of experience. This work proposes a deep reinforcement learning based service mobility management approach that ensures seamless migration of service instances in step with user mobility. The approach focuses on selecting the optimal MEC resource to host services for high-mobility users, thereby reducing the service migration rejection rate and enhancing service availability. Efficacy is confirmed through simulation experiments, whose results show that, on average, the proposed scheme reduces service delay by 8%, task computing time by 36%, and migration rejection rate by more than 90% compared to a baseline scheme.
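
The paper trains a deep Q-network; as a rough, simplified illustration of the underlying decision process (state = user's current zone, action = candidate MEC host for the migrated instance, reward assumed to be negative service delay), a tabular Q-learning sketch:

```python
import random
from collections import defaultdict

# Tabular stand-in for the paper's deep Q-learning agent
# (state/action/reward definitions above are assumptions).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)  # Q[(state, action)] -> value estimate

def choose_host(state, hosts):
    if random.random() < EPSILON:
        return random.choice(hosts)                 # explore
    return max(hosts, key=lambda h: Q[(state, h)])  # exploit

def update(state, action, reward, next_state, hosts):
    # Standard Q-learning backup toward reward plus discounted best next value.
    best_next = max(Q[(next_state, h)] for h in hosts)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```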

A Study of Time Synchronization Methods for IoT Network Nodes

  • Yoo, Sung Geun;Park, Sangil;Lee, Won-Young
    • International journal of advanced smart convergence / v.9 no.1 / pp.109-112 / 2020
  • Many devices are connected to the internet to provide interconnected services; by 2020, their number was expected to reach 5.8 billion. Moreover, major connected-service providers such as Google and Amazon have suggested edge computing and mesh networks to cope with this situation, in which vast numbers of devices are fully connected to their networks. This paper reviews the current state of adoption of wireless mesh networks and edge clouds for efficiently managing the large number of nodes in the exploding Internet of Things (IoT), and introduces the existing Network Time Protocol (NTP). On this basis, we propose a relatively accurate time synchronization method, particularly for heterogeneous mesh networks. Using NTP, multiple time coordinators can be placed in a mesh network, and the delay error can be found from the average delay time and each coordinator's delay time. Accurate time can therefore be maintained when implementing IoT applications such as remote metering and real-time media streaming over an IoT mesh network.
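
The NTP arithmetic the proposal builds on is standard: given the four timestamps of a request/response exchange, the offset and round-trip delay follow directly. Combining estimates from multiple coordinators by a simple mean is an assumption about the paper's averaging step:

```python
# Standard NTP clock arithmetic. Timestamps:
# t1 = client send, t2 = coordinator receive,
# t3 = coordinator send, t4 = client receive.

def ntp_offset_delay(t1, t2, t3, t4):
    delay = (t4 - t1) - (t3 - t2)           # round-trip network delay
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # estimated clock offset
    return offset, delay

def combined_offset(samples):
    # samples: iterable of (t1, t2, t3, t4) tuples, one per coordinator.
    offsets = [ntp_offset_delay(*s)[0] for s in samples]
    return sum(offsets) / len(offsets)
```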

Macromodel for Short Circuit Power and Propagation Delay Estimation of CMOS Circuits

  • Jung, Seung-Ho;Baek, Jong-Humn;Kim, Seok-Yoon
    • Proceedings of the IEEK Conference / 2000.07b / pp.1005-1008 / 2000
  • This paper presents a simple method to estimate the short-circuit power dissipation and propagation delay of static CMOS logic circuits. The short-circuit current expression is derived by accurately interpolating the peak points of actual current curves, which are influenced by the gate-to-drain coupling capacitance. A macromodel and its expressions for estimating the delay of CMOS circuits, based on the current-modeling expression, are also proposed after investigating the voltage waveforms at transistor output nodes. Simulations show that the proposed technique yields better accuracy than previous methods as signal transition time and/or load capacitance decreases, which is characteristic of current technological evolution.
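
For context, the classical closed-form baseline that such macromodels refine is Veendrick's estimate of short-circuit power for a symmetric, unloaded inverter; the paper's model goes further by capturing gate-to-drain coupling and load effects:

```latex
% Veendrick's estimate: beta = transistor gain factor, V_T = threshold
% voltage, tau = input transition time, T = switching period.
P_{sc} \approx \frac{\beta}{12}\left(V_{DD} - 2V_T\right)^{3}\frac{\tau}{T}
```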

An Improved Timing-level Gate-delay Calculation Algorithm (개선된 타이밍 수준 게이트 지연 계산 알고리즘)

  • Kim, Boo-Sung;Kim, Seok-Yoon
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.8 / pp.1-9 / 1999
  • Timing-level circuit analyses are used to obtain fast yet accurate results, and the analysis of gate and interconnect delay is necessary to validate the correctness of a circuit design. This paper proposes an efficient algorithm that simultaneously calculates the gate delay and the transition time of the linearized voltage source used in the subsequent interconnect delay calculation. The notion of effective capacitance is used to calculate the gate delay and the transition time of the linearized voltage source, taking the on-resistance of the driving gate into account. The procedure is an iterative one that uses precharacterized gate data. While previous methods require extra information to calculate the transition time of the linearized voltage source, our method reuses data derived during the gate delay calculation, requiring no change to the precharacterization process.
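
The effective-capacitance procedure is a fixed-point iteration: start from the total load, look up delay and output transition in the precharacterized gate tables, update C_eff, and repeat until convergence. A schematic sketch with placeholder table models (the real tables and update rule come from precharacterization, not from these toy coefficients):

```python
# Schematic fixed-point iteration for effective capacitance (C_eff).
# table_delay/table_transition are placeholders for precharacterized data.

def table_delay(c_eff, t_in):
    return 10.0 + 2.0 * c_eff + 0.5 * t_in   # toy k-factor model (ps)

def table_transition(c_eff, t_in):
    return 5.0 + 1.5 * c_eff + 0.3 * t_in    # toy output-transition model (ps)

def solve_c_eff(c_total, t_in, shield=4.0, tol=1e-6, max_iter=50):
    c_eff = c_total
    for _ in range(max_iter):
        trans = table_transition(c_eff, t_in)
        # Placeholder update: faster output edges see less of the load that
        # is shielded behind the interconnect resistance.
        c_new = c_total * trans / (trans + shield)
        if abs(c_new - c_eff) < tol:
            break
        c_eff = c_new
    return c_eff, table_delay(c_eff, t_in), table_transition(c_eff, t_in)
```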

Design of High-Speed Sense Amplifier for In-Memory Computing (인 메모리 컴퓨팅을 위한 고속 감지 증폭기 설계)

  • Na-Hyun Kim;Jeong-Beom Kim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.5 / pp.777-784 / 2023
  • A sense amplifier is an essential peripheral circuit in memory design, used to sense a small differential input signal and amplify it into a digital signal. In this paper, a high-speed sense amplifier applicable to in-memory computing circuits is proposed. The proposed circuit reduces the sense delay time through a transistor Mtail that provides an additional discharge path, and improves performance by applying m-GDI (modified Gate Diffusion Input). Compared with the previous structure, the sense delay time is reduced by 16.82%, the power-delay product (PDP) by 17.23%, and the energy-delay product (EDP) by 31.1%. The proposed circuit was implemented in TSMC's 65nm CMOS process, and its feasibility was verified through SPECTRE simulation.
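
The figures of merit quoted above are simple products, and the reported numbers are mutually consistent: EDP = PDP × delay, so with delay down 16.82% and PDP down 17.23%, EDP falls to about 0.8277 × 0.8318 ≈ 0.689 of its previous value, i.e. the reported 31.1% reduction. For reference:

```python
# Power-delay product (energy per operation) and energy-delay product.
# Units are the caller's choice (e.g., watts and seconds).

def pdp(avg_power, delay):
    return avg_power * delay

def edp(avg_power, delay):
    return pdp(avg_power, delay) * delay
```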

Design of A new Algorithm by Using Standard Deviation Techniques in Multi Edge Computing with IoT Application

  • HASNAIN A. ALMASHHADANI;XIAOHENG DENG;OSAMAH R. AL-HWAIDI;SARMAD T. ABDUL-SAMAD;MOHAMMED M. IBRAHM;SUHAIB N. ABDUL LATIF
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.4 / pp.1147-1161 / 2023
  • The Internet of Things (IoT) requires a new processing model that allows scalability in cloud computing while reducing the time delay caused by data transmission within a network. Such a model can be achieved by using resources closer to the user, i.e., by relying on edge computing (EC). The amount of IoT data also grows with the number of IoT devices, yet building such a flexible model within a heterogeneous environment is difficult in terms of resources. Moreover, the increasing demand for IoT services necessitates shortening delay and response time through effective load balancing. IoT devices are expected to generate huge amounts of data within a short time and to be dynamically deployed, with IoT services provided by EC devices or cloud servers to minimize resource costs while meeting the latency and quality of service (QoS) constraints of IoT applications. EC is thus an emerging solution to the data-processing problem in IoT. In this study, we improve the load-balancing process and distribute resources fairly across tasks, which in turn improves QoS in the cloud and reduces processing time and, consequently, response time.
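
One plausible reading of "standard deviation techniques" for edge load balancing (the paper's exact criterion may differ) is to assign each incoming task to the node that leaves the load distribution most even, i.e. with the lowest standard deviation:

```python
import statistics

def assign_task(loads, task_cost):
    """Greedily place a task on the edge node that minimizes the resulting
    standard deviation of node loads. Illustrative sketch only."""
    best_node, best_std = None, float("inf")
    for node in range(len(loads)):
        trial = list(loads)
        trial[node] += task_cost
        spread = statistics.pstdev(trial)
        if spread < best_std:
            best_node, best_std = node, spread
    loads[best_node] += task_cost
    return best_node

# Example: assign_task([3.0, 5.0, 1.0], 2.0) places the task on the least
# loaded node (index 2), evening out the distribution.
```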

A Task Scheduling Strategy in Cloud Computing with Service Differentiation

  • Xue, Yuanzheng;Jin, Shunfu;Wang, Xiushuang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5269-5286 / 2018
  • Task scheduling is one of the key issues in improving system performance and optimizing resource management in cloud computing environments. To provide appropriate services for heterogeneous users, we propose a novel task scheduling strategy with service differentiation, in which delay-sensitive tasks are assigned to a rapid cloud with high-speed processing, whereas fault-sensitive tasks are assigned to a reliable cloud with service restoration. Considering that a user can receive service from either local SaaS (Software as a Service) servers or a public IaaS (Infrastructure as a Service) cloud, we establish a hybrid queueing-network system model. Assuming a Poisson arrival process, we analyze the system model in steady state and derive performance measures in terms of the average response time of delay-sensitive tasks and the utilization of VMs (Virtual Machines) in the reliable cloud. We provide experimental results to validate the proposed strategy and the system model. Furthermore, we investigate the Nash equilibrium and socially optimal behavior of the delay-sensitive tasks. Finally, we apply an improved intelligent search algorithm to obtain the optimal arrival rate of total tasks and present a pricing policy for the delay-sensitive tasks.
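
The elementary building block of such a queueing analysis is the M/M/1 response-time formula; the paper's hybrid SaaS/IaaS network model is richer, but the intuition for routing delay-sensitive tasks to the faster cloud is already visible here:

```python
# M/M/1 steady state: Poisson arrivals at rate lam, exponential service at
# rate mu, average response time T = 1 / (mu - lam).

def mm1_response_time(lam, mu):
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must stay below service rate")
    return 1.0 / (mu - lam)

# Example: at lam = 8 tasks/s, a fast cloud with mu = 10/s responds in 0.5 s
# on average, while a slower mu = 9/s cloud takes 1.0 s.
```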

A City-Level Boundary Nodes Identification Algorithm Based on Bidirectional Approaching

  • Tao, Zhiyuan;Liu, Fenlin;Liu, Yan;Luo, Xiangyang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.2764-2782 / 2021
  • Existing city-level boundary node identification methods need to locate all IP addresses on a path to determine which IP is the boundary node. However, these methods are susceptible to time delay, the accuracy of location information, and other factors, and the resource consumption of locating every IP is tremendous. To improve the recognition rate and reduce the locating cost, this paper proposes a city-level boundary node identification algorithm based on bidirectional approaching. Unlike existing methods based on time-delay information and location results, the proposed algorithm uses topological analysis to construct a set of candidate boundary nodes and then identifies the boundary nodes among them. The algorithm can identify the boundary of a target city's network without high-precision location information and dramatically reduces resource consumption compared with traditional algorithms. Meanwhile, it can flag errors in existing IP address databases. Based on 45,182,326 measurements from Zhengzhou, Chengdu, and Hangzhou in China and New York, Los Angeles, and Dallas in the United States, the experimental results show that the algorithm accurately identifies city boundary nodes using only 20.33% of the location resources, and that more than 80.29% of the boundary nodes can be mined with a precision above 70.73%.
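
The topological intuition can be sketched as follows: on measured paths, a candidate city-boundary router is a hop that links a node known to be inside the city to one outside it. This covers only the candidate-construction step; the paper's bidirectional approaching then refines the identification:

```python
def candidate_boundary_nodes(paths, inside):
    """paths: list of IP hop sequences from traceroute-style measurements;
    inside: set of IPs known (e.g., from seed data) to be in the target city.
    Returns endpoints of edges that straddle the city boundary. Sketch only."""
    candidates = set()
    for path in paths:
        for a, b in zip(path, path[1:]):
            if (a in inside) != (b in inside):  # edge crosses the boundary
                candidates.update((a, b))
    return candidates
```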

Design of In-Memory Computing Adder Using Low-Power 8+T SRAM (저 전력 8+T SRAM을 이용한 인 메모리 컴퓨팅 가산기 설계)

  • Chang-Ki Hong;Jeong-Beom Kim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.2 / pp.291-298 / 2023
  • SRAM-based in-memory computing is one of the technologies for resolving the bottleneck of the von Neumann architecture, and achieving it requires an efficient SRAM bit-cell design. In this paper, we propose a low-power, differential-sensing 8+T SRAM bit-cell that reduces power consumption and improves circuit performance. The proposed 8+T SRAM bit-cell is applied to a ripple-carry adder that performs SRAM reads and bitwise operations simultaneously and executes the logic operations in parallel. Compared with previous work, the designed 8+T SRAM-based ripple-carry adder reduces power consumption by 11.53% but increases propagation delay by 6.36%; it also reduces the power-delay product (PDP) by 5.90% while increasing the energy-delay product (EDP) by 0.08%. The proposed circuit was designed using the TSMC 65nm CMOS process, and its feasibility was verified through SPECTRE simulation.
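
The bit-level logic that the 8+T SRAM array evaluates with parallel in-memory bitwise operations is ordinary ripple-carry addition: sum_i = a_i XOR b_i XOR c_i, with the carry computed as a majority. A pure-software illustration:

```python
def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists, least significant bit first.
    Mirrors the XOR/AND decomposition an in-memory adder computes in parallel."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)             # sum bit
        carry = (a & b) | (carry & (a ^ b))   # majority carry
    return out, carry

# Example: 6 + 3 -> [0, 1, 1] + [1, 1, 0] = [1, 0, 0] with carry 1, i.e. 9.
```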