• Title/Summary/Keyword: Edge Computing Server

Flow Prediction-Based Dynamic Clustering Method for Traffic Distribution in Edge Computing (엣지 컴퓨팅에서 트래픽 분산을 위한 흐름 예측 기반 동적 클러스터링 기법)

  • Lee, Chang Woo
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1136-1140 / 2022
  • This paper presents a method for efficient traffic prediction in mobile edge computing, an area in which many studies have recently been conducted. For distributed processing in mobile edge computing, tasks offloaded from each mobile edge must be processed within the edge's limited computing power. Consequently, mobile nodes must dynamically select a nearby edge server with its performance taken into account. This paper proposes an efficient clustering method that selects edges in a cloud environment and predicts mobile traffic. Compared with existing offloading schemes, the proposed dynamic clustering method reduces the offloading overload on an edge server when offloading requested by mobile terminals would degrade that server's performance.
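
The abstract describes the selection idea only at a high level. As a minimal illustration of choosing an edge server from predicted traffic, the sketch below uses a hypothetical `EdgeServer` record and a simple headroom rule (capacity minus predicted traffic); both are assumptions for illustration, not the clustering algorithm proposed in the paper.

```python
# Illustrative sketch only: pick an edge server for offloading based on
# predicted load. The EdgeServer fields and the least-predicted-load rule
# are assumptions, not the paper's dynamic clustering method.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeServer:
    name: str
    capacity: float           # available computing power (arbitrary units)
    predicted_traffic: float  # traffic predicted for the next interval
    distance: float           # distance from the mobile node

def select_edge(servers: List[EdgeServer], max_distance: float) -> Optional[EdgeServer]:
    """Choose the reachable edge server with the most predicted headroom."""
    reachable = [s for s in servers if s.distance <= max_distance]
    if not reachable:
        return None
    # Headroom = capacity minus the traffic we expect the server to receive.
    return max(reachable, key=lambda s: s.capacity - s.predicted_traffic)

if __name__ == "__main__":
    servers = [
        EdgeServer("edge-1", capacity=100.0, predicted_traffic=80.0, distance=1.0),
        EdgeServer("edge-2", capacity=60.0, predicted_traffic=10.0, distance=2.5),
    ]
    print(select_edge(servers, max_distance=3.0).name)  # edge-2 has more headroom
```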

A Performance Comparison of Parallel Programming Models on Edge Devices (엣지 디바이스에서의 병렬 프로그래밍 모델 성능 비교 연구)

  • Dukyun Nam
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.4 / pp.165-172 / 2023
  • Heterogeneous computing is a technology that uses different types of processors to perform parallel processing. It maximizes task throughput and energy efficiency by leveraging various computing resources such as CPUs, GPUs, and FPGAs. Edge computing, meanwhile, has developed alongside IoT and 5G technologies. It is a distributed computing approach that uses computing resources close to clients, thereby offloading work from the central server, and it has evolved into intelligent edge computing by incorporating artificial intelligence. Intelligent edge computing enables the data collected at the edge to be processed in place, including context awareness, prediction, control, and simple processing. If heterogeneous computing can be successfully applied at the edge, it is expected to maximize job processing efficiency while minimizing dependence on the central server. In this paper, experiments were conducted with benchmark applications to verify the feasibility of various parallel programming models on high-end and low-end edge devices. We analyzed the performance of five parallel programming models on the Raspberry Pi 4 and Jetson Orin Nano as the low-end and high-end devices, respectively. In the experiments, OpenACC showed the best performance on the low-end edge device and OpenSYCL on the high-end device, owing to the stability and optimization of their system libraries.

Implementation and Performance Analysis of Efficient Big Data Processing System Through Dynamic Configuration of Edge Server Computing and Storage Modules (BigCrawler: 엣지 서버 컴퓨팅·스토리지 모듈의 동적 구성을 통한 효율적인 빅데이터 처리 시스템 구현 및 성능 분석)

  • Kim, Yongyeon;Jeon, Jaeho;Kang, Sungjoo
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.6 / pp.259-266 / 2021
  • Edge computing enables real-time big data processing by performing computation close to the physical location of the user or data source. However, in an edge computing environment, various situations that affect big data processing performance may occur depending on temporary service requirements or changes in the physical resources in the field. In this paper, we propose BigCrawler, a system that dynamically configures the computing module and storage module according to the big data collection status and computing resource usage in the edge computing environment. We also analyze the characteristics of big data processing workloads according to the arrangement of the computing and storage modules.
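
A minimal sketch of the kind of dynamic module configuration the abstract describes, assuming made-up thresholds and a simple rule based on ingest rate and CPU usage; BigCrawler's actual configuration policy is not specified in the abstract.

```python
# Illustrative sketch only: decide how many computing vs. storage modules an
# edge server should run from its current ingest rate and CPU usage.
# The thresholds and scaling rule are assumptions, not BigCrawler's policy.
def configure_modules(ingest_rate_mbps: float, cpu_usage: float,
                      max_modules: int = 4) -> dict:
    """Return a module layout biased toward storage when data piles up
    and toward computing when CPU headroom is plentiful."""
    if cpu_usage > 0.8:
        compute = 1                      # CPU saturated: keep compute minimal
    elif ingest_rate_mbps > 500:
        compute = max_modules // 2       # heavy ingest: split evenly
    else:
        compute = max_modules - 1        # light ingest: favor computing
    return {"computing_modules": compute,
            "storage_modules": max_modules - compute}

print(configure_modules(ingest_rate_mbps=700, cpu_usage=0.4))
```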

A Study of Mobile Edge Computing System Architecture for Connected Car Media Services on Highway

  • Lee, Sangyub;Lee, Jaekyu;Cho, Hyeonjoong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.12 / pp.5669-5684 / 2018
  • A new mobile edge network architecture is required to handle increasing traffic, quality requirements, advanced driver assistance systems for autonomous driving, and new cloud computing demands on highways. This article proposes a hierarchical cloud computing architecture that enhances performance through adaptive data load distribution across buses acting as edge computing servers. The vehicular dynamic cloud is based on a wireless architecture in which Wireless Local Area Network and Long Term Evolution Advanced communication are used for data transmission between moving buses and cars. The main advantages of the proposed architecture are a reduced data load on the top-layer cloud server and effective data distribution on congested highways where moving vehicles request video on demand (VOD) services from the server. We conducted experiments based on NS-2 network simulations of a realistic environment to validate the proposed architecture and show the feasibility and effectiveness of the connected car media service on highways.
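
As a rough illustration of adaptive load distribution to buses acting as edge servers, the sketch below assumes hypothetical `Bus` fields (active streams, capacity, signal strength) and a prefer-strongest-signal-with-capacity rule; it is not the distribution scheme evaluated in the paper.

```python
# Illustrative sketch only: assign a car's VOD request either to a nearby bus
# acting as an edge server or to the top-layer cloud when buses are saturated.
# The Bus fields and the capacity rule are assumptions, not the paper's scheme.
from dataclasses import dataclass
from typing import List

@dataclass
class Bus:
    bus_id: str
    active_streams: int
    max_streams: int
    rssi_dbm: float  # signal strength seen by the requesting car

def assign_request(buses: List[Bus]) -> str:
    """Prefer the bus with the strongest signal that still has capacity;
    otherwise fall back to the cloud server."""
    candidates = [b for b in buses if b.active_streams < b.max_streams]
    if not candidates:
        return "cloud"
    best = max(candidates, key=lambda b: b.rssi_dbm)
    best.active_streams += 1
    return best.bus_id

buses = [Bus("bus-7", 10, 10, -55.0), Bus("bus-9", 3, 10, -70.0)]
print(assign_request(buses))  # bus-7 is full, so bus-9 is chosen
```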

Strategy for Task Offloading of Multi-user and Multi-server Based on Cost Optimization in Mobile Edge Computing Environment

  • He, Yanfei;Tang, Zhenhua
    • Journal of Information Processing Systems / v.17 no.3 / pp.615-629 / 2021
  • With the development of mobile edge computing, how to utilize the computing power of the edge to offload data and computation effectively and efficiently is of great research value. This paper studies the computation offloading problem for multiple users and multiple servers in mobile edge computing. First, to minimize system energy consumption, the problem is modeled as a joint optimization of the offloading strategy and the wireless and computing resource allocation in a multi-user, multi-server scenario. The paper then explores a computation offloading scheme that optimizes the overall cost. Because the centralized optimization problem is NP-hard, a game-theoretic method is used to achieve effective computation offloading in a distributed manner. The distributed offloading decision problem among the mobile devices is modeled as a multi-user computation offloading game, which admits a Nash equilibrium that can be reached in a finite number of iterations. We then propose a distributed computation offloading algorithm that first calculates offloading weights and then iterates over time slots in a distributed manner to update the offloading decisions. Finally, the algorithm is verified by simulation. The results show that the proposed algorithm reaches equilibrium within a limited number of iterations and outperforms several other advanced computation offloading algorithms in terms of the number of users making beneficial decisions and the overall overhead.
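
The abstract outlines a best-response style iteration toward a Nash equilibrium. The sketch below illustrates that general pattern with a deliberately simplified cost model; the `local_cost` and `offload_cost` functions are assumptions for illustration, not the paper's energy and overhead formulation.

```python
# Illustrative sketch only: a best-response iteration for a multi-user
# offloading game. Each user offloads if its (made-up) offloading cost beats
# its local cost given how many others share the channel; iteration stops when
# no user wants to change its decision, i.e. at a stable point.
from typing import List

def local_cost(task: float) -> float:
    return task * 1.0                        # cost of computing locally

def offload_cost(task: float, n_offloaders: int) -> float:
    return task * 0.4 + 0.2 * n_offloaders   # congestion grows with offloaders

def best_response_iteration(tasks: List[float], max_rounds: int = 50) -> List[bool]:
    decisions = [False] * len(tasks)         # True = offload to edge server
    for _ in range(max_rounds):
        changed = False
        for i, task in enumerate(tasks):
            others = sum(decisions) - decisions[i]
            prefer_offload = offload_cost(task, others + 1) < local_cost(task)
            if prefer_offload != decisions[i]:
                decisions[i] = prefer_offload
                changed = True
        if not changed:                      # no user deviates: equilibrium
            break
    return decisions

print(best_response_iteration([0.5, 1.0, 2.0, 3.0]))
```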

Edge Computing Server Deployment Technique for Cloud VR-based Multi-User Metaverse Content (클라우드 VR 기반 다중 사용자 메타버스 콘텐츠를 위한 엣지 컴퓨팅 서버 배치 기법)

  • Kim, Won-Suk
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1090-1100 / 2021
  • Recently, as indoor activities have increased due to the spread of infectious diseases, the metaverse has attracted attention. The metaverse refers to content in which the virtual world and the real world are closely related, and its representative platform technology is VR (Virtual Reality). However, since VR hardware is costly to access, the concept of streaming-based cloud VR has emerged. This study proposes a server configuration and deployment method in an edge network for multi-user metaverse content running on cloud VR. The proposed algorithm deploys edge servers in consideration of network and computing resources and client locations for cloud VR, which requires substantial computing resources while being highly sensitive to latency. Simulations comparing the proposed algorithm with an existing deployment method confirm that it effectively reduces the total network traffic load regardless of the number of applications or users.
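
As one way to picture latency- and resource-aware edge server deployment, the sketch below uses a greedy placement over hypothetical candidate sites and client coordinates; the distance-as-latency proxy and the greedy coverage rule are assumptions, not the deployment algorithm proposed in the paper.

```python
# Illustrative sketch only: place a limited number of edge servers at the
# candidate sites that cover the most cloud VR clients within a latency radius.
from math import dist
from typing import List, Tuple

Point = Tuple[float, float]

def greedy_placement(sites: List[Point], clients: List[Point],
                     n_servers: int, radius: float) -> List[Point]:
    chosen: List[Point] = []
    candidates = list(sites)
    uncovered = set(range(len(clients)))
    for _ in range(n_servers):
        if not uncovered or not candidates:
            break
        # Pick the site that covers the most still-uncovered clients.
        best = max(candidates, key=lambda s: sum(
            1 for i in uncovered if dist(s, clients[i]) <= radius))
        chosen.append(best)
        candidates.remove(best)
        uncovered = {i for i in uncovered if dist(best, clients[i]) > radius}
    return chosen

sites = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
clients = [(0.5, 0.5), (1.0, 0.0), (9.5, 0.2), (5.2, 4.8)]
print(greedy_placement(sites, clients, n_servers=2, radius=2.0))
```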

Tracking Data through Tracking Data Server in Edge Computing (엣지 컴퓨팅 환경에서 추적 데이터 서버를 통한 데이터 추적)

  • Lim, Han-wool;Byoun, Won-jun;Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.3 / pp.443-452 / 2021
  • One of the key capabilities of edge computing is that it always provides services close to the user by moving data between edge servers according to the user's movements, so data moves between edge servers frequently. As IoT technology advances and its usage areas expand, the amount of generated data also increases, requiring a way to accurately track and process each data item in order to properly manage the data present in the edge computing environment. Current cloud systems have no disposal mechanism based on tracking the movement and distribution of data in their environment, so when users request deletion they cannot see where their data currently resides or whether it has been properly removed or still remains in the cloud system. In this paper, we propose a tracking data server that creates and manages records of the movement and distribution of data held on each edge server and stored in the central cloud in an edge computing environment.
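
A minimal sketch of the bookkeeping such a tracking data server could perform, assuming a hypothetical `TrackingRecord` structure and methods (`register_copy`, `remove_copy`, `is_fully_deleted`); the actual record format and interface are defined in the paper, not here.

```python
# Illustrative sketch only: track where copies of each data item live as data
# moves between edge servers and the central cloud, so a deletion request can
# be verified. Field names and the API are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List, Tuple

@dataclass
class TrackingRecord:
    data_id: str
    locations: List[str] = field(default_factory=list)          # servers with copies
    history: List[Tuple[datetime, str, str]] = field(default_factory=list)

class TrackingDataServer:
    def __init__(self) -> None:
        self.records: Dict[str, TrackingRecord] = {}

    def _log(self, rec: TrackingRecord, event: str, server: str) -> None:
        rec.history.append((datetime.now(timezone.utc), event, server))

    def register_copy(self, data_id: str, server: str) -> None:
        rec = self.records.setdefault(data_id, TrackingRecord(data_id))
        if server not in rec.locations:
            rec.locations.append(server)
        self._log(rec, "copied", server)

    def remove_copy(self, data_id: str, server: str) -> None:
        rec = self.records[data_id]
        rec.locations.remove(server)
        self._log(rec, "deleted", server)

    def is_fully_deleted(self, data_id: str) -> bool:
        return not self.records[data_id].locations

tds = TrackingDataServer()
tds.register_copy("sensor-42", "edge-A")
tds.register_copy("sensor-42", "central-cloud")
tds.remove_copy("sensor-42", "edge-A")
print(tds.is_fully_deleted("sensor-42"))  # False: the cloud still holds a copy
```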

Energy-Aware Data-Preprocessing Scheme for Efficient Audio Deep Learning in Solar-Powered IoT Edge Computing Environments (태양 에너지 수집형 IoT 엣지 컴퓨팅 환경에서 효율적인 오디오 딥러닝을 위한 에너지 적응형 데이터 전처리 기법)

  • Yeontae Yoo;Dong Kun Noh
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.4 / pp.159-164 / 2023
  • Because solar energy is harvested periodically, solar-powered IoT devices prioritize maximizing the utilization of collected energy rather than minimizing energy consumption. Meanwhile, research on edge AI, which performs machine learning near the data source instead of in the cloud, is actively conducted for reasons such as data confidentiality and privacy, response time, and cost. One such research area involves running various audio AI applications on audio data collected from multiple IoT devices in an IoT edge computing environment. In most studies, however, the IoT devices only transmit sensed data to the edge server, and all processing, including data preprocessing, is performed on the edge server. This not only overloads the edge server but also causes network congestion by transmitting data that is unnecessary for learning. On the other hand, delegating data preprocessing to each IoT device introduces another problem: increased blackout time due to energy shortages on the devices. In this paper, we aim to alleviate the increased blackout time on devices while mitigating the issues of server-centric edge AI by determining where the data is preprocessed based on the energy state of each IoT device. In the proposed method, an IoT device performs the preprocessing steps of sound discrimination and noise removal, and transmits the result to the server, only when more energy is available than the threshold required for the device's basic operation.
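
The where-to-preprocess decision can be pictured with the simple energy-threshold check below; the per-frame energy budgets and the placeholder preprocessing step are assumptions for illustration, not the paper's measured values or audio pipeline.

```python
# Illustrative sketch only: decide on the IoT device whether to preprocess audio
# locally (sound discrimination + noise removal) or ship the raw frame to the
# edge server, based on harvested energy versus a basic-operation threshold.
def should_preprocess_locally(available_energy_j: float,
                              basic_operation_j: float,
                              preprocess_cost_j: float) -> bool:
    """Preprocess on-device only if the energy left after basic operation
    still covers the preprocessing cost."""
    return available_energy_j - basic_operation_j >= preprocess_cost_j

def handle_audio_frame(frame: bytes, available_energy_j: float) -> dict:
    BASIC_J, PREPROC_J = 0.5, 0.2        # assumed per-frame energy budgets
    if should_preprocess_locally(available_energy_j, BASIC_J, PREPROC_J):
        # Placeholder preprocessing: real code would run sound discrimination
        # and noise removal here.
        cleaned = frame.strip(b"\x00")
        return {"payload": cleaned, "preprocessed": True}
    return {"payload": frame, "preprocessed": False}  # let the edge server do it

print(handle_audio_frame(b"\x00\x10\x20\x00", available_energy_j=1.0))
```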

A Study on Finding Emergency Conditions for Automatic Authentication Applying Big Data Processing and AI Mechanism on Medical Information Platform

  • Ham, Gyu-Sung;Kang, Mingoo;Joo, Su-Chong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2772-2786 / 2022
  • We previously studied an automatic-authentication-supported medical information platform [6]. The proposed automatic authentication consists of user authentication and mobile terminal authentication, and the two are performed simultaneously under a patient's emergency conditions. In this paper, we study how to detect emergency conditions for automatic authentication by applying big data processing and AI mechanisms on the extended medical information platform with an added edge computing system. We used big data processing together with SVM and 1-dimensional CNN models to detect emergency conditions as an authentication trigger, considering patients' underlying diseases such as hypertension, diabetes mellitus, and arrhythmia. To determine a patient's emergency condition quickly, we placed edge computing at the end of the platform. The medical information server derives decision values for patients' emergency conditions using big data processing and the AI mechanism and transmits the values to an edge node. If the edge node determines that a patient is in an emergency condition, it notifies the medical information server, which transmits an emergency message to the medical staff in charge of the patient. The medical staff then perform the automatic authentication using a mobile terminal. After the automatic authentication is completed, the medical staff can access the patient's higher-level medical information that is not visible under normal conditions.
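
A minimal sketch of the edge-node side of this flow, assuming hypothetical vital-sign names, decision values, and a print-based notification; in the platform itself the decision values are derived on the medical information server with big data processing and SVM/1-D CNN models.

```python
# Illustrative sketch only: an edge node comparing vital-sign readings against
# per-patient decision values pushed from the medical information server, and
# raising a notification when an emergency is detected. Thresholds and field
# names are assumptions, not the platform's actual interface.
from typing import Dict

def detect_emergency(vitals: Dict[str, float],
                     decision_values: Dict[str, float]) -> bool:
    """Flag an emergency if any monitored vital exceeds its decision value."""
    return any(vitals.get(k, 0.0) > v for k, v in decision_values.items())

def edge_node_check(vitals: Dict[str, float],
                    decision_values: Dict[str, float]) -> None:
    if detect_emergency(vitals, decision_values):
        # In the platform this would notify the medical information server,
        # which messages the medical staff in charge and starts automatic
        # authentication on their mobile terminal.
        print("EMERGENCY: notify medical information server")
    else:
        print("normal condition")

decision_values = {"systolic_bp": 180.0, "heart_rate": 140.0, "glucose": 300.0}
edge_node_check({"systolic_bp": 190.0, "heart_rate": 88.0}, decision_values)
```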

Multi-access Edge Computing Scheduler for Low Latency Services (저지연 서비스를 위한 Multi-access Edge Computing 스케줄러)

  • Kim, Tae-Hyun;Kim, Tae-Young;Jin, Sunggeun
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.6 / pp.299-305 / 2020
  • We developed a scheduler that additionally considers network performance by extending Kubernetes, which was designed to manage large numbers of containers on cloud computing nodes. The network delay characteristics of the compute nodes are learned during server operation, and the learned results are used in a placement algorithm that considers them together with the existing metrics of CPU, memory, and volume. We confirmed that the placement algorithm provides low-latency network service.
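
A minimal sketch of the kind of node scoring such an extension could apply, assuming hypothetical `Node` fields and weights that blend resource headroom with a learned delay estimate; it is not the scheduler implementation described in the paper and does not use the actual Kubernetes scheduler framework API.

```python
# Illustrative sketch only: score candidate nodes by combining CPU/memory/volume
# headroom with a network-delay estimate learned during operation, then place
# the workload on the best-scoring node. Weights and fields are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    name: str
    cpu_free: float        # fraction of CPU still available (0..1)
    mem_free: float        # fraction of memory still available (0..1)
    vol_free: float        # fraction of volume still available (0..1)
    est_delay_ms: float    # network delay learned during operation

def score(node: Node, w_res: float = 0.7, w_net: float = 0.3,
          max_delay_ms: float = 100.0) -> float:
    resource_score = (node.cpu_free + node.mem_free + node.vol_free) / 3.0
    network_score = max(0.0, 1.0 - node.est_delay_ms / max_delay_ms)
    return w_res * resource_score + w_net * network_score

def pick_node(nodes: List[Node]) -> Node:
    return max(nodes, key=score)

nodes = [Node("edge-a", 0.6, 0.5, 0.9, est_delay_ms=5.0),
         Node("edge-b", 0.9, 0.8, 0.9, est_delay_ms=60.0)]
print(pick_node(nodes).name)  # edge-a wins once network delay is weighted in
```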