• Title/Summary/Keyword: Edge-Cloud Systems


Direct Actuation Update Scheme based on Actuator in Wireless Networked Control System (Wireless Networked Control System에서 Actuator 기반 Direct Actuation Update 방법)

  • Yeunwoong Kyung;Tae-Kook Kim;Youngjun Kim
    • Journal of Internet of Things and Convergence / v.9 no.1 / pp.125-129 / 2023
  • Age of Information (AoI) has been introduced in wireless networked control systems (WNCSs) to guarantee timely status updates. In addition, as edge computing (EC) architectures have been deployed in NCSs, an EC node close to the sensors can be exploited to collect status updates from sensors and provide control decisions to actuators. However, when many sensors deliver status updates simultaneously, the EC node can become overloaded, so the AoI requirement cannot be satisfied. To mitigate this problem, this paper uses actuators with computing capability that can receive status updates directly from sensors and determine the control decision without the help of the EC. To analyze the AoI of actuation updates via the EC or directly at the actuators, this paper develops an analytic model based on timing diagrams. Extensive simulation results verify the analytic model and show the AoI under various settings.
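The direct-vs-EC comparison above can be sketched numerically. The following is a minimal illustration, not the paper's analytic model; the update timestamps, delays, and function names are assumptions:

```python
# Minimal sketch (not the paper's timing-diagram model): average Age of
# Information (AoI) for a stream of status updates. AoI at time t is t minus
# the generation time of the most recently *delivered* update.

def average_aoi(updates, horizon, dt=0.01):
    """updates: list of (generated_at, delivered_at) tuples, sorted by delivery."""
    t, area = 0.0, 0.0
    latest_gen = 0.0  # assume an update at t=0 so the initial age is finite
    i = 0
    while t < horizon:
        while i < len(updates) and updates[i][1] <= t:
            latest_gen = max(latest_gen, updates[i][0])
            i += 1
        area += (t - latest_gen) * dt  # integrate the sawtooth age curve
        t += dt
    return area / horizon

# Two hops (sensor -> EC -> actuator) vs. one hop (sensor -> actuator);
# the 0.4 s and 0.1 s delays are invented for the example.
two_hop = [(k, k + 0.4) for k in range(10)]
direct = [(k, k + 0.1) for k in range(10)]
print(average_aoi(direct, 10) < average_aoi(two_hop, 10))  # direct ages less
```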

Study on Memory Performance Improvement based on Machine Learning (머신러닝 기반 메모리 성능 개선 연구)

  • Cho, Doosan
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.615-619 / 2021
  • This study focuses on memory systems optimized for performance and energy efficiency in embedded domains such as IoT, cloud computing, and edge computing, and proposes a performance improvement technique. The proposed technique improves memory system performance using machine learning algorithms that are already widely deployed in many applications. Through supervised learning, machine learning can be applied to the data classification task at the core of the technique: classifying data with high accuracy allows it to be arranged appropriately according to its usage pattern, thereby improving overall system performance.
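The classification step described in this abstract can be illustrated with a toy supervised classifier. This is a sketch under assumed features (access count, reuse distance) and labels; it is not the study's actual algorithm:

```python
# Illustrative sketch: a tiny nearest-centroid classifier labels data objects
# as "hot" or "cold" from usage features, so hot data can be placed in a
# faster memory tier. Features, labels, and numbers are assumptions.

def train_centroids(samples):
    """samples: list of ((access_count, reuse_distance), label)."""
    sums, counts = {}, {}
    for feats, label in samples:
        s = sums.setdefault(label, [0.0] * len(feats))
        for j, f in enumerate(feats):
            s[j] += f
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in s) for lbl, s in sums.items()}

def classify(centroids, feats):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

training = [((120, 2), "hot"), ((90, 4), "hot"), ((3, 40), "cold"), ((1, 55), "cold")]
centroids = train_centroids(training)
tier = {"hot": "fast tier (e.g. scratchpad)", "cold": "slow tier (e.g. DRAM)"}
print(tier[classify(centroids, (100, 3))])  # frequently reused -> fast tier
```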

A Comprehensive Review of Emerging Computational Methods for Gene Identification

  • Yu, Ning;Yu, Zeng;Li, Bing;Gu, Feng;Pan, Yi
    • Journal of Information Processing Systems / v.12 no.1 / pp.1-34 / 2016
  • Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been declared complete, the annotation of functional elements is far from finished. Computational methods continue to play important roles in gene identification and related problems. A great deal of work has been performed in this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and related work, but most focus on methodologies from a particular aspect or perspective. In contrast, this paper comprehensively summarizes the mainstream computational methods in gene identification and aims to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading research in this field in the future.

Intelligent Massive Traffic Handling Scheme in 5G Bottleneck Backhaul Networks

  • Tam, Prohim;Math, Sa;Kim, Seokhoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.874-890 / 2021
  • With the widespread deployment of fifth-generation (5G) communication networks, real-time applications are rapidly increasing and generating massive traffic in backhaul network environments. In this scenario, network congestion occurs when communication and computation demands exceed the maximum available capacity, which severely degrades network performance. To alleviate this problem, this paper proposes an intelligent resource allocation (IRA) scheme that integrates with the extant resource adjustment (ERA) approach, based on the convergence of the support vector machine (SVM) algorithm, software-defined networking (SDN), and mobile edge computing (MEC) paradigms. The proposed scheme acquires predictable schedules to shift downlink (DL) transmission toward off-peak intervals as a predominant priority. Accordingly, the peak-hour bandwidth resources serving real-time uplink (UL) transmission gain capacity for a variety of mission-critical applications. Furthermore, to boost gateway computation resources, MEC servers are implemented and integrated with the proposed scheme. The concluding simulation results analyze and compare the proposed scheme with the conventional approach over a variety of QoS metrics, including network delay, jitter, packet drop ratio, packet delivery ratio, and throughput.
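The off-peak shifting idea can be sketched as a simple scheduling routine. This is an illustrative sketch, not the paper's IRA/ERA implementation; the peak-hour set and job names are assumptions (in the paper, the peak prediction would come from the SVM classifier):

```python
# Hedged sketch of the scheduling idea: defer delay-tolerant downlink (DL)
# jobs past predicted peak hours, reserving peak-hour bandwidth for
# real-time uplink traffic. All names and numbers are illustrative.

def schedule_dl(jobs, peak_hours, horizon=24):
    """jobs: list of (job_id, release_hour); returns job_id -> start hour."""
    plan = {}
    for job_id, release in jobs:
        hour = release
        while hour % horizon in peak_hours:  # push past the peak window
            hour += 1
        plan[job_id] = hour % horizon
    return plan

peak = {8, 9, 18, 19, 20}          # predicted peak hours (assumed)
jobs = [("backup", 18), ("sync", 3), ("update", 9)]
print(schedule_dl(jobs, peak))     # delay-tolerant DL lands off-peak
```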

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management / 2022.06a / pp.1243-1244 / 2022
  • In recent years, the growing interest in off-site construction has led factories to scale up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges remain in deploying this technology in large-scale field applications. One issue is collecting and transmitting vast amounts of video data: continuous site monitoring systems are based on real-time video collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate objects of different sizes and scales into a single scene. Objects of various sizes and types (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and they must be detected simultaneously for effective site monitoring. However, existing object detection algorithms struggle to detect objects with significant size differences at the same time, because doing so requires collecting and training on massive amounts of object image data at various scales. This study therefore developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture in which image or video data is processed near its originating source rather than on a centralized server or cloud. By running inference on the AI computing modules equipped with the CCTVs and transmitting only the processed information to the server, excessive network traffic can be reduced.
Small-object detection is a method for detecting objects of different sizes by cropping the raw image, with the number of rows and columns for image splitting set according to the target object size. This enables small objects to be detected in cropped and magnified images, and the detected objects can then be expressed in the original image's coordinates. For inference, this study used the YOLOv5 algorithm, known for its fast processing speed and widely used for real-time object detection. The method effectively detected large objects and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of construction sites, which existing algorithms detected inaccurately. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, enabling the optimization of safe routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management more effectively.
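The tiling approach described above can be sketched as follows. The `detect` callback is a stand-in for a real model such as YOLOv5; its interface and the tile layout are assumptions for illustration:

```python
# Minimal sketch of tiling-based small-object detection (detector stubbed):
# split the frame into rows x cols crops, run detection per crop, then map
# each box back to full-frame coordinates.

def tile_offsets(width, height, rows, cols):
    tw, th = width // cols, height // rows
    return [(c * tw, r * th, tw, th) for r in range(rows) for c in range(cols)]

def detect_tiled(image_size, rows, cols, detect):
    """detect(x, y, w, h) -> list of (bx, by, bw, bh) boxes in crop coords."""
    boxes = []
    for x, y, w, h in tile_offsets(*image_size, rows, cols):
        for bx, by, bw, bh in detect(x, y, w, h):
            boxes.append((x + bx, y + by, bw, bh))  # back to full-frame coords
    return boxes

# Dummy detector: one small box in the top-left corner of every crop.
dummy = lambda x, y, w, h: [(10, 10, 32, 32)]
print(detect_tiled((1920, 1080), 2, 2, dummy))
```

A real deployment would also merge duplicate boxes along tile seams (e.g. with non-maximum suppression), which this sketch omits.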


A study of Reference Model of Smart Library based on Linked Open Data (링크드오픈데이터 기반 스마트 라이브러리의 참조모델에 관한 연구)

  • Moon, Hee-kyung;Han, Sung-kook
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.9 / pp.1666-1672 / 2016
  • In recent years, smart technology has been applied to various information system fields. In particular, the traditional library service area is shifting from the Digital Library to the Smart Library. This environment requires a library service software platform that supports a variety of content, library services, users, and smart devices. Existing library services have a limitation that inhibits semantic interoperability between heterogeneous library systems. In this paper, we propose a Linked-Open-Data-based smart library as an archetype of the future library system, providing a variety of content along with the interaction and integration of services; the result is an innovative, information-intensive system. We design system environments according to various integration requirements for a smart library based on Linked Open Data, and describe the functional requirements of smart library systems by considering users' demands and the ecosystem of information technology. In addition, we present a reference framework that accommodates these functional requirements and provides smart knowledge services to users through a variety of smart devices.

Big Data Meets Telcos: A Proactive Caching Perspective

  • Bastug, Ejder;Bennis, Mehdi;Zeydan, Engin;Kader, Manhal Abdel;Karatepe, Ilyas Alper;Er, Ahmet Salih;Debbah, Merouane
    • Journal of Communications and Networks / v.17 no.6 / pp.549-557 / 2015
  • Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and falls into the framework of big data. However, big data is itself a complex phenomenon to handle and comes with its notorious four Vs: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of data available for content popularity estimation. To estimate content popularity, we first collect users' mobile traffic data from several base stations of a Turkish telecom operator over intervals of hours. An analysis is then carried out locally on a big data platform, and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and storage size. For instance, with 10% of content ratings and 15.4 Gbyte of storage size (87% of total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
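The proactive-caching gains can be illustrated with a toy model. The popularity scores, content sizes, and budget below are invented for the example and are not the paper's dataset or popularity estimator:

```python
# Toy sketch of proactive caching: rank contents by estimated popularity,
# fill the base-station cache up to the storage budget, and measure the
# fraction of requests served locally (i.e., offloaded from the backhaul).

def fill_cache(popularity, sizes, budget):
    cached, used = set(), 0
    for content in sorted(popularity, key=popularity.get, reverse=True):
        if used + sizes[content] <= budget:
            cached.add(content)
            used += sizes[content]
    return cached

def offload_ratio(requests, cached):
    hits = sum(1 for r in requests if r in cached)
    return hits / len(requests)

popularity = {"a": 50, "b": 30, "c": 15, "d": 5}   # estimated request counts
sizes = {"a": 2, "b": 2, "c": 1, "d": 1}           # storage units per content
cache = fill_cache(popularity, sizes, budget=4)    # holds "a" and "b"
requests = ["a"] * 50 + ["b"] * 30 + ["c"] * 15 + ["d"] * 5
print(offload_ratio(requests, cache))              # 0.8 of requests offloaded
```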

Optimizing User Experience While Interacting with IR Systems in Big Data Environments

  • Minsoo Park
    • International journal of advanced smart convergence / v.12 no.4 / pp.104-110 / 2023
  • In the user-centered design paradigm, information systems are tailored entirely to the users who will use them. When the functions of a complex system meet a simple user interface, users can use the system conveniently. While web personalization services are emerging as a major trend in portal services, portal companies are competing to introduce follow-up services such as 'integrated communication platforms'. Until now, the role of the portal has been content and search; the goal now is to create and provide the personalized services users want through a single platform. A personalization service is a login-based cloud computing service, allowing users to enjoy the same experience at any time, anywhere with internet access. Personalized web services like this attract highly loyal users, making them a new service trend that portal companies are paying attention to. Researchers spend much of their time collecting research-related information from multiple sources. There is a need to automatically build an interest profile for each researcher based on personal outputs (papers, research projects, patents), and to provide an advanced customized information service that regularly delivers the latest information matched against various sources. Continuously revising and supplementing each researcher's interest profile is the most important factor in improving relevance when searching for information. As researchers' interest gradually expands from standardized academic information such as patents to unstructured information such as technology markets and research trends, information sources covering such cutting-edge material must also be expanded. Through this, the time required to search for and obtain the latest information for research purposes can be shortened.
The interest profile established for each researcher can later be used to determine the degree of relationship between researchers and to build a database. If this customized information service continues to be provided, it will be useful for research activities.
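The profile-matching step can be sketched with a simple term-overlap measure. Cosine similarity over term counts is an assumed approach, not the service's actual algorithm, and the profile and document terms below are illustrative:

```python
# Illustrative sketch: rank incoming documents against a researcher's
# interest profile by cosine similarity over term counts, so new items can
# be ordered for a personalized feed.

from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

profile = Counter("edge computing caching latency edge".split())
docs = {
    "d1": Counter("edge caching for low latency video".split()),
    "d2": Counter("gene identification computational methods".split()),
}
ranked = sorted(docs, key=lambda d: cosine(profile, docs[d]), reverse=True)
print(ranked[0])  # the edge/caching paper ranks first
```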

Development of Raman LIDAR System to Measure Vertical Water Vapor Profiles and Comparison of Raman LIDAR with GNSS and MWR Systems (수증기의 연직 분포 측정을 위한 라만 라이다 장치의 개발 및 GNSS, MWR 장비와 상호 비교연구)

  • Park, Sun-Ho;Kim, Duk-Hyeon;Kim, Yong-Gi;Yun, Mun-Sang;Cheong, Hai-Du
    • Korean Journal of Optics and Photonics / v.22 no.6 / pp.283-290 / 2011
  • A Raman LIDAR system has been designed and constructed for quantitative measurement of the water vapor mixing ratio. A comparison with a commercial microwave radiometer (MWR) and a global navigation satellite system (GNSS) was performed for the precipitable water vapor (PWV) profile and the total PWV. The results show that the total GNSS-PWV and LIDAR-PWV correlate well with each other, with a small difference between the two methods owing to the maximum measurement height of the LIDAR and the nature of the GNSS method. There are significant differences between the Raman LIDAR and the MWR when the water vapor concentration changes quickly near the boundary layer or at the edge of a cloud. We conclude that the MWR cannot detect such spatial changes, whereas the LIDAR can.

Performance Evaluation Using Neural Network Learning of Indoor Autonomous Vehicle Based on LiDAR (라이다 기반 실내 자율주행 차량에서 신경망 학습을 사용한 성능평가 )

  • Yonghun Kwon;Inbum Jung
    • KIPS Transactions on Computer and Communication Systems / v.12 no.3 / pp.93-102 / 2023
  • Processing data through the cloud causes problems such as latency and increased communication costs. Therefore, many researchers study edge computing in the IoT, and autonomous driving is a representative application. In indoor self-driving, unlike outdoors, GPS and traffic information cannot be used, so the surrounding environment must be recognized using sensors. An efficient autonomous driving system is required because the vehicle operates in a resource-constrained mobile environment. This paper proposes a machine-learning method using neural networks for autonomous driving in an indoor environment. The neural network model predicts the most appropriate driving command for the current location based on distance data measured by a LiDAR sensor. We designed six learning models that differ in the number of inputs to the proposed neural networks. In addition, we built a Raspberry Pi-based autonomous vehicle for driving and learning, and produced an indoor driving track for data collection and evaluation. Finally, we compared the six neural network models in terms of accuracy, response time, and battery consumption, and confirmed the effect of the number of inputs on performance.
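The inference step can be sketched as a single linear layer over LiDAR distances. The weights, sensor layout, and command set below are toy assumptions for illustration, not the paper's trained models:

```python
# Minimal sketch of the inference step (not the authors' network): a tiny
# fully-connected layer maps a few LiDAR distance readings to one of three
# driving commands. Weights are hand-picked toy values, not trained.

def forward(distances, weights, biases):
    """One linear layer + argmax over commands; distances in metres."""
    scores = [sum(w * d for w, d in zip(row, distances)) + b
              for row, b in zip(weights, biases)]
    commands = ["left", "straight", "right"]
    return commands[max(range(len(scores)), key=scores.__getitem__)]

# Inputs: distance to the nearest obstacle at [left, front, right] (assumed layout).
W = [[1.0, 0.2, -1.0],   # "left" scores high when the left side is open
     [0.0, 1.0, 0.0],    # "straight" follows the front clearance
     [-1.0, 0.2, 1.0]]   # "right" scores high when the right side is open
b = [0.0, 0.5, 0.0]
print(forward([0.3, 2.5, 1.8], W, b))  # front clear -> "straight"
```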