• Title/Summary/Keyword: Edge computing.

The Unique Existence of Weak Solution to the Curl-based Vector Wave Equation with First-order Absorbing Boundary Condition

  • Na, Hyesun;Jo, Yoona;Lee, Eunjung
    • Journal of the Korean Society for Industrial and Applied Mathematics, v.27 no.1, pp.23-36, 2023
  • The vector wave equation is widely used in electromagnetic wave analysis. This paper solves the vector wave equation using curl-conforming finite elements. The variational problem is established from the Riesz functional based on the vector wave equation, and the unique existence of a weak solution is explored. Edge elements are used in the computation, and the simulation results are compared with those obtained from a commercial simulator, ANSYS HFSS (high-frequency structure simulator).
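
For context, a standard time-harmonic form of the curl-based vector wave equation with a first-order absorbing boundary condition reads as follows; the paper's exact scaling and sign conventions may differ:

```latex
\nabla \times (\nabla \times \mathbf{E}) - k^2 \mathbf{E} = \mathbf{f}
  \quad \text{in } \Omega,
\qquad
(\nabla \times \mathbf{E}) \times \mathbf{n} - ik\, \mathbf{n} \times (\mathbf{E} \times \mathbf{n}) = \mathbf{0}
  \quad \text{on } \partial\Omega
```

where $\mathbf{E}$ is the electric field, $k$ the wavenumber, and $\mathbf{n}$ the outward unit normal; the boundary condition is the first-order (Silver-Müller type) absorbing condition referenced in the title.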

Important Facility Guard System Using Edge Computing for LiDAR

  • Jo, Eun-Kyung;Lee, Eun-Seok;Shin, Byeong-Seok
    • KIPS Transactions on Computer and Communication Systems, v.11 no.10, pp.345-352, 2022
  • Recent LiDAR (Light Detection and Ranging) sensors are used to scan surrounding objects in real time. Such a sensor can detect an object's movement and how it has changed. As the production cost of the sensors has decreased, LiDAR has begun to be used in various industries such as facility guarding, smart cities, and self-driving cars. However, LiDAR produces a large amount of input data because of its real-time scanning process, so a LiDAR system needs another way to process that data, since it can cause a bottleneck. This paper proposes edge computing to compress massive point clouds for fast processing. Since the reflection range of a LiDAR sensor's laser is limited, multiple LiDAR sensors must be used to scan a large area; for this reason, the data of multiple LiDAR sensors must be processed together to detect or recognize objects in real time. The edge computer compresses the point clouds efficiently to accelerate data processing, and all the data are decompressed in the main cloud in real time. In this way, the user can control the LiDAR sensors from the main system without any bottleneck. The proposed system resolves the bottleneck that was a problem in the cloud-based method by applying an edge computing service.
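
The abstract does not specify the compression scheme; below is a minimal, hypothetical sketch of edge-side point-cloud compression (coordinate quantization plus lossless deflate, both assumptions) with decompression on the main cloud:

```python
import zlib

import numpy as np

def edge_compress(points: np.ndarray, resolution: float = 0.01) -> bytes:
    """Quantize float32 XYZ points to int16 grid steps, then deflate.

    `resolution` (meters per quantization step) is an illustrative
    parameter, not a value taken from the paper.
    """
    quantized = np.round(points / resolution).astype(np.int16)
    return zlib.compress(quantized.tobytes(), level=6)

def cloud_decompress(blob: bytes, resolution: float = 0.01) -> np.ndarray:
    """Inverse of edge_compress, run in the main cloud."""
    quantized = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
    return quantized.reshape(-1, 3).astype(np.float32) * resolution

# Example: one 100k-point scan from a single LiDAR sensor.
scan = np.random.uniform(-50, 50, size=(100_000, 3)).astype(np.float32)
blob = edge_compress(scan)
restored = cloud_decompress(blob)
print(f"{len(blob)} bytes sent instead of {scan.nbytes}")
```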

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management, 2022.06a, pp.1243-1244, 2022
  • In recent years, the growing interest in off-site construction has led to factories scaling up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated-component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges to deploying this technology in large-scale field applications remain. One issue is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate information about objects of different sizes and scales into a single scene. Objects of various sizes and types (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with existing object detection algorithms it is difficult to detect objects with significant differences in size simultaneously, because collecting and training on massive amounts of object image data at various scales would be necessary. This study therefore developed a large-scale site monitoring system using edge computing and small-object detection to solve these problems. Edge computing is a distributed information technology architecture wherein image or video data are processed near the originating source rather than on a centralized server or cloud. By running inference on the AI computing module attached to each CCTV and communicating only the processed information to the server, excessive network traffic can be reduced. Small-object detection is a method to detect objects of different sizes by cropping the raw image and setting the appropriate number of rows and columns for image splitting based on the target object size; this enables the detection of small objects in the cropped, magnified images, and the detected small objects can then be expressed in the original image. For inference, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of a construction site, which existing algorithms detected inaccurately. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, and to enable the optimization of safe routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
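
As a minimal sketch of the image-splitting idea just described (the `detect` callable, the default tile counts, and the omitted tile overlap and cross-border NMS are all assumptions; the study itself uses YOLO-v5 for the per-tile inference):

```python
from typing import Callable, List, Tuple

import numpy as np

Box = Tuple[float, float, float, float, float, int]  # x1, y1, x2, y2, score, class

def detect_tiled(image: np.ndarray,
                 detect: Callable[[np.ndarray], List[Box]],
                 rows: int = 2, cols: int = 3) -> List[Box]:
    """Split the frame into a rows x cols grid, run the detector on each
    tile, and map the boxes back to original-image coordinates.

    `detect` stands in for any single-image detector (e.g. a YOLO-v5
    inference call). Tile overlap and cross-border NMS, which a real
    deployment would need, are omitted for brevity.
    """
    h, w = image.shape[:2]
    th, tw = h // rows, w // cols
    merged: List[Box] = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * th, c * tw
            tile = image[y0:y0 + th, x0:x0 + tw]
            for x1, y1, x2, y2, score, cls in detect(tile):
                merged.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, score, cls))
    return merged
```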

Fluid analysis of edge tones at low Mach number using the finite difference lattice Boltzmann method

  • Kang H. K.;Kim J. H.;Kim Y. T.;Lee Y. H.
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference, 2004.03a, pp.113-118, 2004
  • This paper presents a two-dimensional edge tone simulation to predict the frequency characteristics of the discrete oscillations of a jet-edge feedback cycle by the finite difference lattice Boltzmann method (FDLBM). We use a new lattice BGK compressible fluid model that has an additional term and allows a larger time increment than the conventional FDLBM, and we also use boundary-fitted coordinates. The jet is chosen long enough to guarantee a parabolic velocity profile of the jet at the outlet, and the edge consists of a wedge with an angle of $\alpha = 23^{\circ}$. The edge is inserted along the centerline of the jet at a stand-off distance $w$, and a sinuous instability wave with real frequency $f$ is assumed to be created in the vicinity of the nozzle and to propagate downstream. We have succeeded in capturing the very small pressure fluctuations that result from the periodic oscillation of the jet around the edge. These pressure fluctuations propagate at the speed of sound, and their interaction with the wedge produces an irrotational feedback field which, near the nozzle exit, is a periodic transverse flow producing singularities at the nozzle lips. The lattice BGK model for compressible fluids is shown to be a powerful tool for computing sound generation and propagation for a wide range of flows.
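
For context, the finite difference lattice Boltzmann method integrates the discrete-velocity BGK equation below with finite differences in space and time; the paper's model adds an extra term (not reproduced here) to permit the larger time increment:

```latex
\frac{\partial f_i}{\partial t} + \mathbf{c}_i \cdot \nabla f_i
  = -\frac{1}{\tau}\left( f_i - f_i^{\mathrm{eq}} \right)
```

where $f_i$ is the distribution function for discrete velocity $\mathbf{c}_i$, $\tau$ the relaxation time, and $f_i^{\mathrm{eq}}$ the local equilibrium distribution.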

A Study of Time Synchronization Methods for IoT Network Nodes

  • Yoo, Sung Geun;Park, Sangil;Lee, Won-Young
    • International journal of advanced smart convergence, v.9 no.1, pp.109-112, 2020
  • Many devices are connected to the internet to provide functionality for interconnected services; by 2020, the number of devices connected to the internet was projected to reach 5.8 billion. Moreover, major connected-service providers such as Google and Amazon suggest edge computing and mesh networks to cope with the situation in which so many devices are connected to their networks. This paper introduces the current state of adoption of wireless mesh networks and edge clouds for efficiently managing the large number of nodes in the exploding Internet of Things (IoT), and introduces the existing Network Time Protocol (NTP). On this basis, we propose a relatively accurate time synchronization method, especially for heterogeneous mesh networks. Using NTP, multiple time coordinators can be placed in a mesh network to find the delay error from the average delay time and the delay time at each time coordinator. Accurate time can therefore be synchronized when implementing IoT, remote metering, and real-time media streaming over an IoT mesh network.
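
For reference, the standard NTP offset and delay computation that such coordinator schemes build on (RFC 5905; the paper's coordinator-averaging refinement is not reproduced here) is:

```python
def ntp_offset_delay(t0: float, t1: float, t2: float, t3: float):
    """Standard NTP clock math, shown here for context.

    t0: client transmit, t1: server receive, t2: server transmit,
    t3: client receive (all in seconds).
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0  # estimated clock offset
    delay = (t3 - t0) - (t2 - t1)           # round-trip network delay
    return offset, delay

# A node whose clock runs 25 ms fast, over a symmetric 10 ms path:
print(ntp_offset_delay(0.025, 0.010, 0.011, 0.046))  # (-0.025, 0.020)
```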

Implementation of AIoT Edge Cluster System via Distributed Deep Learning Pipeline

  • Jeon, Sung-Ho;Lee, Cheol-Gyu;Lee, Jae-Deok;Kim, Bo-Seok;Kim, Joo-Man
    • International journal of advanced smart convergence, v.10 no.4, pp.278-288, 2021
  • Recently, IoT systems have been cloud-based, so the continuous, large amounts of data collected from sensor nodes are processed on the data server through the cloud. However, with the centralized configuration of large-scale cloud computing, there is a growing need to perform computation at the physical location where data collection takes place, and the need for edge computers that reduce the network load of the cloud system is gradually expanding. In this paper, a cluster system consisting of six inexpensive Raspberry Pi boards was constructed to perform fast data processing, and we propose a "Kubernetes cluster system (KCS)" that processes large-scale data collection and analysis through model distribution and a data-pipeline method. To compare performance, an ensemble deep learning model was built, and the accuracy, processing performance, and processing time of the proposed KCS with model distribution were compared and analyzed. As a result, the ensemble model was superior in accuracy, but the KCS implemented as a data pipeline proved superior in processing speed.
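
The abstract does not detail the pipeline; as a minimal sketch of the idea (model stages distributed across processes, each process standing in for one Raspberry Pi node; the stage functions are illustrative, not from the paper):

```python
import multiprocessing as mp

def preprocess(x):
    # Stand-in for the first model slice on one cluster node.
    return x / 255.0

def vote(x):
    # Stand-in for the ensemble/aggregation step on another node.
    return round(x)

def stage(infer, inbox, outbox):
    """One pipeline node: pull an item, apply its model slice, push on."""
    while (item := inbox.get()) is not None:
        outbox.put(infer(item))
    outbox.put(None)  # propagate shutdown downstream

if __name__ == "__main__":
    q0, q1, q2 = mp.Queue(), mp.Queue(), mp.Queue()
    procs = [mp.Process(target=stage, args=(preprocess, q0, q1)),
             mp.Process(target=stage, args=(vote, q1, q2))]
    for p in procs:
        p.start()
    for v in [12, 200, 255]:
        q0.put(v)
    q0.put(None)
    while (out := q2.get()) is not None:
        print(out)
    for p in procs:
        p.join()
```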

Design and Evaluation of a Fault-tolerant Publish/Subscribe System for IoT Applications

  • Bae, Ihn-Han
    • Journal of Korea Multimedia Society, v.24 no.8, pp.1101-1113, 2021
  • The rapid growth of sense-and-respond applications and the emerging cloud computing model present a new challenge: providing publish/subscribe middleware as a scalable and elastic cloud service. The publish/subscribe interaction model is a promising solution for scalable data dissemination over wide-area networks, and there has been some work on publish/subscribe messaging paradigms that guarantee reliability and availability in the face of node and link failures. Such publish/subscribe systems are commonly used in information-centric networks and edge-fog-cloud infrastructures for the IoT, which relies on an edge-fog-cloud infrastructure to efficiently process the massive amounts of sensing data collected from the surrounding environment. In this paper, we propose a quorum-based hierarchical fault-tolerant publish/subscribe system (QHFPS) to enable reliable delivery of messages in the presence of link and node failures. The QHFPS efficiently distributes IoT messages to the publish/subscribe brokers in fog overlay layers on the basis of a proposed extended stepped grid (xS-grid) quorum, providing tolerance to node failures and network partitions. We evaluate the performance of QHFPS with an analytical model in three respects: the number of transmitted Pub/Sub messages, the average subscription delay, and the subscription delivery rate.
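
For orientation, here is a sketch of the classic grid quorum that grid-quorum schemes like this build on; the paper's extended stepped grid (xS-grid) quorum is a refinement whose exact construction is not reproduced here:

```python
def grid_quorum(n_rows: int, n_cols: int, r: int, c: int) -> set:
    """Classic grid quorum: every broker in row r plus every broker in
    column c of an n_rows x n_cols broker grid.

    Any two such quorums intersect, which is what lets a Pub/Sub overlay
    keep delivering messages despite individual broker failures.
    """
    row = {(r, j) for j in range(n_cols)}
    col = {(i, c) for i in range(n_rows)}
    return row | col

# Any two quorums drawn from a 4x4 broker grid share at least one broker.
q1 = grid_quorum(4, 4, 0, 1)
q2 = grid_quorum(4, 4, 3, 2)
assert q1 & q2  # non-empty intersection: {(0, 2), (3, 1)}
```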

A Reliability Computational Algorithm for Reliability Block Diagram Using Factoring Method

  • Lie, Chang-Hoon;Kim, Myung-Gyu;Lee, Sang-Cheon
    • Journal of Korean Institute of Industrial Engineers, v.20 no.3, pp.3-14, 1994
  • In this study, two reliability computation algorithms, each utilizing a factoring method, are proposed for a system represented by a reliability block diagram. First, a vertex factoring algorithm is proposed, in which the reliability block diagram is treated as a network graph with vertex reliabilities. The second algorithm is mainly concerned with converting the reliability block diagram into a network graph with edge reliabilities: the independence of edges is preserved by eliminating replicated edges, and the reliability of the converted network graph is computed with an existing edge factoring algorithm. The efficiency of the two algorithms is compared on example systems with respect to computing time; the results show that the second algorithm is more efficient than the first.
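
For context, edge factoring rests on the pivotal decomposition sketched below; this is a plain illustration of the general technique, not the paper's algorithm, and it omits the series-parallel reductions a production implementation would interleave:

```python
def st_reliability(edges, s, t):
    """Two-terminal reliability by edge factoring:

        R(G) = p_e * R(G with e contracted) + (1 - p_e) * R(G with e deleted)

    `edges` is a list of (u, v, p) with independent edge reliabilities p.
    """
    if s == t:
        return 1.0
    edges = [(u, v, p) for (u, v, p) in edges if u != v]  # drop self-loops
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    fail = st_reliability(rest, s, t)  # edge failed: delete it
    merged = [(u if a == v else a, u if b == v else b, q)
              for (a, b, q) in rest]   # edge works: contract v into u
    work = st_reliability(merged, u if s == v else s, u if t == v else t)
    return p * work + (1 - p) * fail

# Bridge network with five edges of reliability 0.9 each:
bridge = [(0, 1, 0.9), (0, 2, 0.9), (1, 2, 0.9), (1, 3, 0.9), (2, 3, 0.9)]
print(st_reliability(bridge, 0, 3))  # 0.97848
```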

Design of Block-based Modularity Architecture for Machine Learning

  • Oh, Yoosoo
    • Journal of Korea Multimedia Society, v.23 no.3, pp.476-482, 2020
  • In this paper, we propose a block-based modularity architecture design method for distributed machine learning. The proposed architecture is a block-type module structure hosting various machine learning algorithms. It allows free expansion between block-type modules and lets multiple machine learning algorithms be organically interlocked as the situation requires. The architecture enables open data communication using a metadata query protocol. It also makes it easy to implement an application service that combines various edge computing devices, by designing a communication method suited to the surrounding applications. To confirm the interlocking between the proposed block-type modules, we implemented a hardware-based modularity application system.
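
A minimal, hypothetical sketch of the block-module idea (the class names, the `metadata` fields, and the query interface are all illustrative, not the paper's protocol):

```python
from typing import Any, Callable, List

class Block:
    """One block-type module wrapping a machine learning algorithm.

    The `metadata` dict stands in for the paper's metadata query
    protocol; the field names used below are illustrative.
    """
    def __init__(self, name: str, run: Callable[[Any], Any], **metadata):
        self.name, self.run, self.metadata = name, run, metadata

class BlockBus:
    """Registry that lets blocks discover one another by metadata."""
    def __init__(self) -> None:
        self._blocks: List[Block] = []

    def attach(self, block: Block) -> None:
        # Free expansion: any new block simply plugs into the bus.
        self._blocks.append(block)

    def query(self, **wanted: Any) -> List[Block]:
        # Metadata-based discovery: match every requested field.
        return [b for b in self._blocks
                if all(b.metadata.get(k) == v for k, v in wanted.items())]

bus = BlockBus()
bus.attach(Block("knn", run=lambda x: x, task="classify", device="edge"))
bus.attach(Block("cnn", run=lambda x: x, task="classify", device="gpu"))
print([b.name for b in bus.query(task="classify", device="edge")])  # ['knn']
```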

Modular Cellular Neural Network Structure for Wave-Computing-Based Image Processing

  • Karami, Mojtaba;Safabakhsh, Reza;Rahmati, Mohammad
    • ETRI Journal, v.35 no.2, pp.207-217, 2013
  • This paper introduces the modular cellular neural network (CNN), a new CNN structure constructed from nine one-layer modules with intercellular interactions between different modules. The new network is suitable for implementing many image processing operations. Inputting an image into the modules yields nine outputs. The topographic character of the cell interactions allows the outputs to exhibit new properties for image processing tasks. The stability of the system is proven, and the performance is evaluated in several image processing applications. Experimental results on texture segmentation show the power of the proposed structure. The performance of the structure in a real edge detection application on the Berkeley dataset BSDS300 is also evaluated.
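
For context, each one-layer module in such a structure is built from cells obeying the standard (Chua-Yang) cellular neural network dynamics below; the paper's nine-module design adds intermodule coupling terms not shown here:

```latex
\dot{x}_{ij} = -x_{ij}
  + \sum_{C(k,l) \in N_r(i,j)} A(i,j;k,l)\, y_{kl}
  + \sum_{C(k,l) \in N_r(i,j)} B(i,j;k,l)\, u_{kl} + z,
\qquad
y_{ij} = \tfrac{1}{2}\bigl(\lvert x_{ij} + 1 \rvert - \lvert x_{ij} - 1 \rvert\bigr)
```

where $x_{ij}$, $y_{ij}$, and $u_{ij}$ are the state, output, and input of cell $C(i,j)$, $N_r(i,j)$ is its neighborhood of radius $r$, $A$ and $B$ are the feedback and control templates, and $z$ is the bias.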