• Title/Summary/Keyword: cloud learning


Cloud Removal Using Gaussian Process Regression for Optical Image Reconstruction

  • Park, Soyeon; Park, No-Wook
    • Korean Journal of Remote Sensing, v.38 no.4, pp.327-341, 2022
  • Cloud removal is often required to construct time-series sets of optical images for environmental monitoring. In regression-based cloud removal, the choice of regression model and the impact of the input images significantly affect prediction performance. This study evaluates the potential of Gaussian process (GP) regression for cloud removal and analyzes the effects of cloud-free optical images and spectral bands on prediction performance. Unlike other machine learning-based regression models, GP regression provides uncertainty information and automatically optimizes its hyperparameters. An experiment using Sentinel-2 multi-spectral images was conducted for cloud removal in two agricultural regions. The prediction performance of GP regression was compared with that of random forest (RF) regression, and various combinations of input images and multi-spectral bands were considered for quantitative evaluation. The experimental results showed that using multi-temporal images with multi-spectral bands as inputs achieved the best prediction accuracy; highly correlated adjacent spectral bands and temporally correlated multi-temporal images both improved accuracy. GP regression significantly outperformed RF regression when predicting the near-infrared band, because estimating the distribution of the input data allows GP regression to capture a broader range of variation in the considered spectral band. In particular, GP regression was superior to RF regression in reproducing structural patterns at both sites in terms of structural similarity. In addition, the uncertainty information provided by GP regression showed a reasonable similarity to the prediction errors in some sub-areas, indicating that uncertainty estimates may be used to assess the quality of the prediction results. These findings suggest that GP regression could be beneficial for cloud removal and optical image reconstruction, and the impact analysis of the input images provides guidelines for selecting optimal images for regression-based cloud removal.
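
A minimal sketch of the regression setup described above, using scikit-learn's GaussianProcessRegressor; the array shapes, feature layout, and kernel choice are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X: per-pixel predictors taken from cloud-free multi-temporal,
# multi-spectral images; y: the same pixels in the target band on a
# clear date. Stand-in random data keeps the sketch self-contained.
rng = np.random.default_rng(0)
X = rng.random((500, 6))          # e.g. 3 bands x 2 cloud-free dates
y = X @ rng.random(6) + 0.05 * rng.standard_normal(500)

# Kernel hyperparameters are optimized automatically during fit(),
# one of the GP advantages the abstract highlights.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict cloud-contaminated pixels; return_std yields the per-pixel
# uncertainty that the study compares against prediction errors.
X_cloudy = rng.random((100, 6))
y_hat, y_std = gpr.predict(X_cloudy, return_std=True)
```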

Designing a Reinforcement Learning-Based 3D Object Reconstruction Data Acquisition Simulation

  • Young-Hoon Jin
    • Journal of Internet of Things and Convergence, v.9 no.6, pp.11-16, 2023
  • 3D reconstruction technology, which primarily relies on point cloud data, is essential for digitizing objects and spaces. This paper uses reinforcement learning to acquire point clouds in a given environment. A simulation environment is constructed in Unity, and reinforcement learning is implemented with the ML-Agents Unity package. Point cloud acquisition begins by setting a goal and calculating a traversable path around it. The path is segmented at regular intervals, with rewards assigned at each step; to keep the agent from deviating from the path, the on-path reward is increased. Additionally, a reward is granted each time the agent fixates on the goal during traversal, so the agent learns the optimal points for point cloud acquisition at each step. Experimental results demonstrate that, despite variability in the traversal paths, the approach acquires relatively accurate point clouds.
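
A minimal sketch of the reward shaping described above; the thresholds and weights are illustrative assumptions, and in the paper this logic lives inside a Unity ML-Agents agent rather than standalone Python:

```python
import numpy as np

def step_reward(agent_pos, path_point, agent_forward, goal_pos,
                on_path_tol=0.5):
    """Reward for one traversal step: stay near the segmented path
    and fixate on the goal to learn good capture poses."""
    reward = 0.0
    # Increased reward for staying close to the current path segment,
    # which discourages deviation from the traversal path.
    if np.linalg.norm(agent_pos - path_point) < on_path_tol:
        reward += 1.0
    # Extra reward whenever the agent fixates on the goal.
    to_goal = goal_pos - agent_pos
    to_goal = to_goal / np.linalg.norm(to_goal)
    if np.dot(agent_forward, to_goal) > 0.95:  # roughly facing the goal
        reward += 0.5
    return reward
```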

An Intelligent Machine Learning Inspired Optimization Algorithm to Enhance Secured Data Transmission in IoT Cloud Ecosystem

  • Ankam, Sreejyothsna; Reddy, N. Sudhakar
    • International Journal of Computer Science & Network Security, v.22 no.6, pp.83-90, 2022
  • As the number of IoT sensors and physical devices on the Internet grows by the day, traditional cloud computing is unable to safely host IoT data due to its high latency. Because processing all IoT big data on cloud facilities is difficult, there has not been enough research on automating the security of all components in the IoT-cloud ecosystem that handle big data and real-time jobs. It is difficult, for example, to build automatic, secure data transfer from the IoT layer, which incorporates a large number of scattered devices, to the cloud layer. To address this issue, this article presents an intelligent algorithm that enhances security in the IoT-cloud ecosystem using the butterfly optimization algorithm.
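
A minimal sketch of the butterfly optimization algorithm the article applies; the objective, bounds, and coefficients are illustrative defaults rather than the authors' security-specific formulation:

```python
import numpy as np

def boa(objective, dim=4, n=20, iters=100, c=0.01, a=0.1, p=0.8, seed=0):
    """Butterfly optimization: fragrance f = c * I**a drives a mix of
    global moves toward the best butterfly and local random walks."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (n, dim))
    fitness = np.array([objective(x) for x in pop])
    best = pop[fitness.argmin()].copy()
    for _ in range(iters):
        frag = c * np.abs(fitness) ** a   # stimulus intensity I = fitness
        for i in range(n):
            r = rng.random()
            if rng.random() < p:          # global search phase
                pop[i] += (r**2 * best - pop[i]) * frag[i]
            else:                         # local search between two others
                j, k = rng.integers(0, n, 2)
                pop[i] += (r**2 * pop[j] - pop[k]) * frag[i]
            fitness[i] = objective(pop[i])
        best = pop[fitness.argmin()].copy()
    return best

# Example: minimize a sphere function as a stand-in objective.
print(boa(lambda x: float(np.sum(x**2))))
```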

Agent with Low-latency Overcoming Technique for Distributed Cluster-based Machine Learning

  • Seo-Yeon, Gu; Seok-Jae, Moon; Byung-Joon, Park
    • International Journal of Internet, Broadcasting and Communication, v.15 no.1, pp.157-163, 2023
  • Recently, as businesses and data types have become more complex and diverse, efficient data analysis using machine learning is required. However, since communication in a cloud environment is strongly affected by network latency, data analysis does not proceed smoothly when information is delayed. In this paper, SPT (Safe Proper Time) is applied to the cluster-based machine learning data analysis agent proposed in previous studies to solve this delay problem. SPT is a method of directly accessing remote memory in the cluster that processes data between layers, effectively improving data transfer speed and ensuring the timeliness and reliability of data transfer.
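
A minimal sketch of the timeliness guarantee SPT implies: the agent verifies each transfer completes within a safe time bound and retries otherwise. The bound, retry policy, and function names are assumptions for illustration:

```python
import time

SAFE_PROPER_TIME = 0.050  # seconds; illustrative per-transfer bound

def timed_transfer(send_fn, payload, retries=3):
    """Run a transfer and confirm it finished within the SPT bound."""
    for _ in range(retries):
        start = time.perf_counter()
        send_fn(payload)                  # e.g. a direct remote-memory write
        elapsed = time.perf_counter() - start
        if elapsed <= SAFE_PROPER_TIME:
            return elapsed                # timeliness guaranteed
    raise TimeoutError("transfer exceeded the safe proper time bound")
```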

A Reinforcement Learning Framework for Autonomous Cell Activation and Customized Energy-Efficient Resource Allocation in C-RANs

  • Sun, Guolin; Boateng, Gordon Owusu; Huang, Hu; Jiang, Wei
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.8, pp.3821-3841, 2019
  • Cloud radio access networks (C-RANs) have recently been regarded as a promising concept for future 5G technologies, in which all DSP processors are moved into a central baseband unit (BBU) pool in the cloud, and distributed remote radio heads (RRHs) compress and forward radio signals received from mobile users to the BBUs over radio links. In such a dynamic environment, automatic decision-making approaches such as deep reinforcement learning (DRL) become imperative when designing new solutions. In this paper, we propose a generic framework of autonomous cell activation and customized physical resource allocation for optimizing energy consumption and QoS in wireless networks. We formulate the problem under two models, fractional power control with bandwidth adaptation and full power control with bandwidth allocation, and set up a Q-learning model to satisfy the QoS requirements of users and achieve low energy consumption with the minimum number of active RRHs under varying traffic demand and network densities. Extensive simulations show the effectiveness of the proposed solution compared to existing schemes.
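
A minimal tabular Q-learning sketch for the cell-activation decision: states index traffic-demand levels and actions index the number of active RRHs. The reward shaping, state discretization, and constants are assumptions, not the paper's model:

```python
import numpy as np

n_states, n_actions = 10, 5    # demand levels x candidate active-RRH counts
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def reward(state, action):
    # Penalize energy (more active RRHs) and QoS violations (too few
    # RRHs for the demand level); purely illustrative.
    qos_penalty = max(0, state - 2 * (action + 1))
    return -(action + 1) - 5 * qos_penalty

state = rng.integers(n_states)
for _ in range(5000):
    # Epsilon-greedy action selection over active-RRH counts.
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    r = reward(state, action)
    next_state = rng.integers(n_states)   # stand-in traffic dynamics
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```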

CLIAM: Cloud Infrastructure Abnormal Monitoring using Machine Learning

  • Choi, Sang-Yong
    • Journal of the Korea Society of Computer and Information, v.25 no.4, pp.105-112, 2020
  • In the fourth industrial revolution, characterized by hyper-connectivity and intelligence, cloud computing is drawing attention as a technology for realizing big data and artificial intelligence. The proliferation of cloud computing has also increased the number of threats. In this paper, we propose a way for an IaaS service provider to effectively monitor the resources assigned to clients. The proposed method models the use of resources allocated to cloud systems with the ARIMA algorithm and identifies abnormal situations through usage and trend analysis. Through experiments, we verified that the service provider can monitor client systems effectively using the proposed method while requiring only minimal access to them.
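
A minimal sketch of the ARIMA-based monitoring idea: fit the normal resource-usage series, then flag points whose forecast error exceeds a threshold. The (p, d, q) order and the 3-sigma rule are assumptions:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
usage = 50 + np.cumsum(rng.normal(0, 1, 200))  # stand-in CPU usage series
usage[150] += 25                               # injected abnormal spike

# Model the normal period, then compare the forecast with observations.
result = ARIMA(usage[:120], order=(1, 1, 1)).fit()
forecast = result.forecast(steps=80)
residuals = usage[120:] - forecast
threshold = 3 * result.resid.std()             # bound from in-sample errors
anomalies = np.flatnonzero(np.abs(residuals) > threshold) + 120
print("abnormal indices:", anomalies)
```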

Applicability Analysis of Constructing UDM of Cloud and Cloud Shadow in High-Resolution Imagery Using Deep Learning

  • Nayoung Kim; Yerin Yun; Jaewan Choi; Youkyung Han
    • Korean Journal of Remote Sensing, v.40 no.4, pp.351-361, 2024
  • Satellite imagery contains various elements such as clouds, cloud shadows, and terrain shadows. Accurately identifying and eliminating these factors, which complicate satellite image analysis, is essential for maintaining the reliability of remote sensing imagery. For this reason, satellites such as Landsat-8, Sentinel-2, and Compact Advanced Satellite 500-1 (CAS500-1) provide Usable Data Masks (UDMs) with their images as part of their Analysis Ready Data (ARD) products. Precise detection of clouds and their shadows is crucial for the accurate construction of these UDMs. Existing cloud and cloud shadow detection methods fall into threshold-based methods and Artificial Intelligence (AI)-based methods. Recently, AI-based methods, particularly deep learning networks, have been preferred due to their advantage in handling large datasets. This study analyzes the applicability of constructing UDMs for high-resolution satellite images through deep learning-based cloud and cloud shadow detection using open-source datasets. To validate the performance of the deep learning network, we compared the detection results generated by the network with pre-existing UDMs from Landsat-8, Sentinel-2, and CAS500-1 satellite images. The results demonstrated high accuracy in the detection outcomes produced by the network. Additionally, we applied the network to detect clouds and their shadows in KOMPSAT-3/3A images, which do not provide UDMs. The experiment confirmed that the deep learning network effectively detects clouds and their shadows in high-resolution satellite images, demonstrating that UDM data for high-resolution satellite imagery can be constructed using the network.
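
A minimal sketch of packing per-pixel cloud and cloud-shadow predictions into a UDM-style bitmask; the class ids and bit layout are illustrative assumptions, not an official UDM specification:

```python
import numpy as np

CLEAR, CLOUD, SHADOW = 0, 1, 2        # assumed segmentation output classes
CLOUD_BIT, SHADOW_BIT = 0b01, 0b10    # assumed UDM bit layout

def to_udm(class_map: np.ndarray) -> np.ndarray:
    """Pack cloud and cloud-shadow predictions into a per-pixel bitmask."""
    udm = np.zeros(class_map.shape, dtype=np.uint8)
    udm[class_map == CLOUD] |= CLOUD_BIT
    udm[class_map == SHADOW] |= SHADOW_BIT
    return udm

pred = np.random.default_rng(0).integers(0, 3, (4, 4))  # stand-in prediction
print(to_udm(pred))
```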

Adaptive Resource Management and Provisioning in the Cloud Computing: A Survey of Definitions, Standards and Research Roadmaps

  • Keshavarzi, Amin; Haghighat, Abolfazl Toroghi; Bohlouli, Mahdi
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.9, pp.4280-4300, 2017
  • As cloud computing services have been rolled out in recent years, organizations and individuals face various challenges and problems, such as how to migrate applications and software platforms into the cloud and how to ensure the security of migrated applications. This study reviews the current challenges and open issues in cloud computing, focusing on autonomic resource management, especially in federated clouds. In addition, it provides recommendations and research roadmaps for scientific activities, as well as potential improvements in federated cloud computing. The survey covers results from 190 sources, including books, journal and conference papers, industrial reports, forums, and project reports. A solution is proposed for autonomic resource management in federated clouds that uses machine learning and statistical analysis to provide better and more efficient resource management.

Traffic-based reinforcement learning with neural network algorithm in fog computing environment

  • Jung, Tae-Won; Lee, Jong-Yong; Jung, Kye-Dong
    • International Journal of Internet, Broadcasting and Communication, v.12 no.1, pp.144-150, 2020
  • Reinforcement learning is a technology that can produce successful and creative solutions in many areas. Here, reinforcement learning is used to deploy containers from cloud servers to fog servers, with the agent learning to maximize rewards obtained from reduced traffic. The aim is to predict network traffic and optimize a traffic-based fog computing environment spanning cloud, fog, and clients. The reinforcement learning system collects network traffic data from the fog server and IoT devices. The reinforcement learning neural network, which takes the collected traffic data as input, can be built from Long Short-Term Memory (LSTM) layers in network environments that support fog computing, in order to learn the time-series data and predict optimized traffic. We describe the input and output values of this traffic-based reinforcement learning LSTM network, the node composition, the activation and error functions of the hidden layer, the method for preventing overfitting, and the optimization algorithm.
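
A minimal PyTorch sketch of the LSTM traffic predictor described above; the window length, layer sizes, and loss are assumptions, since the paper leaves the architecture at the design level:

```python
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # next-step traffic volume

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict from the last time step

model = TrafficLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(8, 20, 1)                 # stand-in traffic windows
y = torch.randn(8, 1)                     # stand-in next-step targets
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```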

Grasping Algorithm using Point Cloud-based Deep Learning

  • Bae, Joon-Hyup; Jo, HyunJun; Song, Jae-Bok
    • The Journal of Korea Robotics Society, v.16 no.2, pp.130-136, 2021
  • In recent years, much research has been conducted on robotic grasping. Grasping algorithms based on deep learning have shown better grasping performance than traditional ones, but they require a lot of data and time for training. In this study, a grasping algorithm using an artificial neural network-based graspability estimator is proposed. This estimator can be trained with a small amount of data by using a network built from residual blocks and by taking point clouds containing the shapes of objects as input, rather than RGB images containing varied features. The trained estimator measures the graspability of objects and chooses the best one to grasp. Experiments showed that the proposed algorithm has a success rate of 90% and a cycle time of 12 s per grasp, indicating that it is an efficient grasping algorithm.
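
A minimal sketch of a residual-block graspability estimator over point clouds; the pooling scheme and layer sizes are assumptions and do not reproduce the paper's exact network:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(x + self.fc2(torch.relu(self.fc1(x))))

class GraspabilityEstimator(nn.Module):
    def __init__(self, dim=64, n_blocks=3):
        super().__init__()
        self.embed = nn.Linear(3, dim)    # per-point xyz features
        self.blocks = nn.Sequential(*[ResidualBlock(dim) for _ in range(n_blocks)])
        self.score = nn.Linear(dim, 1)

    def forward(self, pts):               # pts: (batch, n_points, 3)
        feat = self.blocks(self.embed(pts))
        pooled = feat.max(dim=1).values   # order-invariant pooling
        return torch.sigmoid(self.score(pooled))  # graspability in [0, 1]

# Score candidate grasps and pick the best one, as the abstract describes.
candidates = torch.randn(5, 256, 3)       # 5 candidate object point clouds
scores = GraspabilityEstimator()(candidates)
best = scores.argmax().item()
```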