• Title/Summary/Keyword: Deep learning based control

Search results: 237

High-Capacity Robust Image Steganography via Adversarial Network

  • Chen, Beijing;Wang, Jiaxin;Chen, Yingyue;Jin, Zilong;Shim, Hiuk Jae;Shi, Yun-Qing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.1 / pp.366-381 / 2020
  • Steganography has been successfully employed in various applications, e.g., copyright control of materials, smart identity cards, and video error correction during transmission. Deep learning-based steganography models can hide information adaptively through network learning and have therefore drawn increasing attention. However, the capacity, security, and robustness of existing deep learning-based steganography models are still not fully satisfactory. In this paper, three models are proposed for different cases: a basic model, a secure model, and a secure and robust model. In the basic model, high-capacity secret information hiding and extraction are realized through an encoding network and a decoding network, respectively. The high-capacity steganography is implemented by hiding a secret image into a carrier image of the same resolution with the help of concat operations, InceptionBlocks, and convolutional layers. Moreover, the secret image is hidden into the B channel of the carrier image only, to resolve the problem of color distortion. In the secure model, a steganalysis network is added to the basic model to form an adversarial network and enhance security. In the secure and robust model, an attack network is further inserted into the secure model to improve its robustness. The experimental results demonstrate that the proposed secure model and secure and robust model have an overall better performance than some existing high-capacity deep learning-based steganography models: the secure model performs best in invisibility and security, while the secure and robust model is the most robust against several attacks.
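
The abstract above outlines an encoder-decoder pipeline that embeds a secret image only into the B channel of the carrier. The paper's exact architecture is not given here, so the following is a minimal PyTorch sketch under assumed layer sizes; the `HidingEncoder`/`RevealDecoder` names and channel counts are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class HidingEncoder(nn.Module):
    """Embeds a grayscale secret image into the B channel of an RGB carrier (assumed layout)."""
    def __init__(self):
        super().__init__()
        # Carrier B channel (1) + secret (1) are concatenated -> 2 input channels.
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # new B channel
        )

    def forward(self, carrier_rgb, secret):
        b = carrier_rgb[:, 2:3]                      # B channel only
        stego_b = self.net(torch.cat([b, secret], dim=1))
        # R and G channels are left untouched to limit color distortion.
        return torch.cat([carrier_rgb[:, 0:2], stego_b], dim=1)

class RevealDecoder(nn.Module):
    """Recovers the secret image from the stego image's B channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, stego_rgb):
        return self.net(stego_rgb[:, 2:3])

# Training would additionally pit a steganalysis (discriminator) network against the encoder
# for the secure model, and insert an attack network for the secure and robust variant.
```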

Emergency vehicle priority signal system based on deep learning using acoustic data (음향 데이터를 활용한 딥러닝 기반 긴급차량 우선 신호 시스템)

  • Lee, SoYeon;Jang, Jae Won;Kim, Dae-Young
    • Journal of Platform Technology / v.9 no.3 / pp.44-51 / 2021
  • In general, the golden time refers to the most critical window in the initial response to accidents, such as saving lives or extinguishing fires. It varies from disaster to disaster, but for fire and first aid the target is five minutes. In practice, however, the average dispatch time for ambulances is 9 minutes and the average transfer time is 17.6 minutes, far exceeding the golden time. There are various causes for this delay, but the main one is traffic congestion. To address the problem, the government has established an obligation to yield to emergency vehicles and prioritized ambulances in areas with the highest accident rates, but this is not a solution during rush hour, when traffic increases rapidly. Therefore, this paper proposes a deep learning-based emergency vehicle priority signal system that uses sound data collected by acoustic sensors installed on traffic lights, and reports an experiment classifying the frequency signals that vary with the distance of the emergency vehicle.
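
The system described hinges on classifying siren sounds captured at traffic lights. As a rough illustration of that step (not the authors' pipeline), the sketch below converts audio into a mel spectrogram and feeds it to a small CNN classifier; the class labels, clip length, and network size are assumptions.

```python
import torch
import torch.nn as nn
import torchaudio

# Convert a 1-second audio clip (16 kHz) into a mel spectrogram "image".
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

class SirenClassifier(nn.Module):
    """Tiny CNN that labels a clip as, e.g., 'siren near', 'siren far', or 'no siren' (assumed classes)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, waveform):                 # waveform: (batch, samples)
        spec = mel(waveform).unsqueeze(1)        # -> (batch, 1, n_mels, time)
        return self.head(self.features(spec).flatten(1))

waveform = torch.randn(1, 16000)                 # placeholder for sensor audio
logits = SirenClassifier()(waveform)
```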

Deep reinforcement learning for a multi-objective operation in a nuclear power plant

  • Junyong Bae;Jae Min Kim;Seung Jun Lee
    • Nuclear Engineering and Technology / v.55 no.9 / pp.3277-3290 / 2023
  • Nuclear power plant (NPP) operations with multiple objectives and devices are still performed manually by operators despite the potential for human error. These operations could be automated to reduce the burden on operators; however, classical approaches may not be suitable for such multi-objective tasks. An alternative is deep reinforcement learning (DRL), which has been successful in automating various complex tasks and has been applied to automate certain operations in NPPs. Despite this recent progress, previous studies using DRL for NPP operations are limited in their ability to handle complex multi-objective operations with multiple devices efficiently. This study proposes a novel DRL-based approach that addresses these limitations by employing a continuous action space and straightforward binary rewards, supported by the adoption of a soft actor-critic and hindsight experience replay. The feasibility of the proposed approach was evaluated on controlling the pressure and volume of the reactor coolant while heating the coolant during NPP startup. The results show that the proposed approach can train an agent with a proper strategy for effectively achieving multiple objectives through the control of multiple devices. Moreover, hands-on testing demonstrates that the trained agent can handle untrained objectives, such as cooldown, with substantial success.
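
The abstract credits the approach to a soft actor-critic with continuous actions, binary goal rewards, and hindsight experience replay (HER). The paper's implementation details are not given here; the following is a minimal sketch of the HER relabeling idea with an assumed goal format (reaching target pressure/volume within a tolerance).

```python
import numpy as np

def binary_reward(achieved, goal, tol=0.05):
    """1 if all controlled quantities (e.g., pressure, volume) are within tolerance, else 0 (assumed form)."""
    return float(np.all(np.abs(achieved - goal) <= tol))

def her_relabel(episode, k=4, rng=np.random.default_rng()):
    """Hindsight experience replay: reuse states actually reached as substitute goals."""
    relabeled = []
    for t, (state, action, achieved, goal) in enumerate(episode):
        relabeled.append((state, action, goal, binary_reward(achieved, goal)))
        # 'future' strategy: sample substitute goals from states achieved later in the episode.
        future_idx = rng.integers(t, len(episode), size=min(k, len(episode) - t))
        for idx in future_idx:
            new_goal = episode[idx][2]
            relabeled.append((state, action, new_goal, binary_reward(achieved, new_goal)))
    return relabeled  # transitions fed to the soft actor-critic replay buffer
```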

Car detection area segmentation using deep learning system

  • Dong-Jin Kwon;Sang-hoon Lee
    • International Journal of Advanced Smart Convergence / v.12 no.4 / pp.182-189 / 2023
  • Recently, object detection and segmentation have emerged as crucial technologies widely used in fields such as autonomous driving, surveillance, and image editing. This paper proposes a program that uses the Qt framework to perform real-time object detection and precise instance segmentation by integrating YOLO (You Only Look Once) and Mask R-CNN. The system provides users with a versatile image editing environment, offering features such as selecting specific modes, drawing masks, inspecting detailed image information, and applying various image processing techniques, including deep learning-based ones. The program leverages the efficiency of YOLO for fast and accurate object detection, providing bounding-box information, and performs precise segmentation with Mask R-CNN, allowing users to accurately distinguish and edit objects within images. The Qt interface ensures an intuitive and user-friendly environment for program control and enhances accessibility. Experiments and evaluations demonstrate that the proposed system is effective in various scenarios. The program provides convenient and powerful image processing and editing capabilities to both beginners and experts by smoothly integrating computer vision technology. This paper contributes to the growth of the computer vision application field and shows the potential of integrating various image processing algorithms on a user-friendly platform.
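
The abstract describes pairing a fast detector with an instance segmenter. As a hedged sketch of how such a combination might look outside the authors' Qt program (the library choices here, ultralytics YOLOv8 and torchvision's Mask R-CNN, are assumptions, as are the file names and the score threshold):

```python
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor
from torchvision.models.detection import maskrcnn_resnet50_fpn
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")                                # fast bounding boxes
segmenter = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()  # instance masks

image = Image.open("street.jpg").convert("RGB")

# Stage 1: quick detection pass for bounding boxes and classes.
boxes = detector(image)[0].boxes.xyxy                        # (N, 4) tensor

# Stage 2: pixel-accurate masks for selecting and editing objects in the GUI.
with torch.no_grad():
    out = segmenter([to_tensor(image)])[0]
masks = out["masks"][out["scores"] > 0.5]                    # (M, 1, H, W), thresholded
```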

Pine Wilt Disease Detection Based on Deep Learning Using an Unmanned Aerial Vehicle (무인항공기를 이용한 딥러닝 기반의 소나무재선충병 감염목 탐지)

  • Lim, Eon Taek;Do, Myung Sik
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.3 / pp.317-325 / 2021
  • Pine wilt disease first appeared in Busan in 1998; it is a serious disease that causes enormous damage to pine trees. The Korean government enacted a special law on the control of pine wilt disease in 2005, which controls and prohibits the movement of pine trees in affected areas. However, existing forecasting and control methods face physical and economic challenges in containing pine wilt disease, which breaks out simultaneously and rapidly across mountainous terrain. In this study, the authors present a deep learning-based object recognition and prediction method applied to imagery acquired with an unmanned aerial vehicle (UAV) to effectively detect trees suspected of infection with pine wilt disease. To observe the disease, an orthomosaic was produced from image data acquired through aerial shots. As a result, 198 damaged trees were identified, whereas 84 damaged trees were identified in field surveys that excluded areas with inaccessible steep slopes and cliffs. Analyses using image segmentation (SegNet) and image detection (YOLOv2) obtained performance values of 0.57 and 0.77, respectively.
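
Since an orthomosaic is far larger than a detector's input, a common preprocessing step (not detailed in the abstract, so purely illustrative here) is to slide a tile window over it and run detection per tile, mapping boxes back to mosaic coordinates; the tile size and the `detect` callable are placeholders.

```python
import numpy as np

def tile_orthomosaic(mosaic: np.ndarray, tile=1024, overlap=128):
    """Yield (x, y, patch) tiles covering a large orthomosaic image array."""
    h, w = mosaic.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield x, y, mosaic[y:y + tile, x:x + tile]

def detect_infected_trees(mosaic, detect):
    """Run a per-tile detector (e.g., a YOLO-style model) and shift boxes to mosaic coordinates."""
    boxes = []
    for x, y, patch in tile_orthomosaic(mosaic):
        for (x1, y1, x2, y2, score) in detect(patch):   # detect() is a placeholder callable
            boxes.append((x1 + x, y1 + y, x2 + x, y2 + y, score))
    return boxes
```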

Deep Learning Based Emergency Response Traffic Signal Control System

  • Jeong-In, Park
    • Journal of the Korea Society of Computer and Information / v.28 no.2 / pp.121-129 / 2023
  • In this paper, we developed a traffic signal control system for emergencies that can minimize loss of property and life by actively controlling the traffic signals along a given section in response to an emergency. When an emergency vehicle terminal transmits an emergency signal containing identification and GPS information, the system obtains surrounding images from cameras and analyzes the objects in them with deep learning, producing object information such as location, type, and size. After generating tracking information for these objects and detecting the signal system, the signal system is switched to emergency mode: the emergency vehicle is identified and tracked based on the received GPS information, and emergency control signals based on the vehicle's travel route are transmitted to the signal controllers. Because the emergency control signal is applied first in response to the emergency signal, the emergency vehicle is prevented from being blocked, thereby minimizing loss of life and property due to traffic obstruction.
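
The control flow in this abstract amounts to signal preemption along the vehicle's route. A simplified sketch of that decision step follows; the distance threshold, data classes, and green-phase command format are assumptions, not the paper's protocol.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Intersection:
    ident: str
    x: float
    y: float

def preempt_signals(vehicle_xy, route, intersections, radius=300.0):
    """Return the intersections on the route that should switch to emergency (green) mode.

    vehicle_xy    : current GPS position projected to local x/y coordinates
    route         : ordered list of intersection identifiers on the planned path
    intersections : mapping ident -> Intersection
    radius        : preemption distance in metres (assumed value)
    """
    vx, vy = vehicle_xy
    commands = []
    for ident in route:
        node = intersections[ident]
        if hypot(node.x - vx, node.y - vy) <= radius:
            commands.append({"intersection": ident, "phase": "emergency_green"})
    return commands
```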

Prediction of water level in a tidal river using a deep-learning based LSTM model (딥러닝 기반 LSTM 모형을 이용한 감조하천 수위 예측)

  • Jung, Sungho;Cho, Hyoseob;Kim, Jeongyup;Lee, Giha
    • Journal of Korea Water Resources Association / v.51 no.12 / pp.1207-1216 / 2018
  • Discharge or water level prediction at tidally affected river reaches is still a great challenge in hydrological practice. This research aims to predict the water level at a tide-dominated site, Jamsu Bridge in the downstream Han River. Physics-based hydrodynamic approaches are sometimes not applicable for water level prediction in such a tidal river because of uncertainty sources such as rainfall forecast data. In this study, the TensorFlow deep learning framework was used to build and apply a deep neural network-based LSTM model. The LSTM model was trained on three data sets with 10-min temporal resolution, Paldang Dam release, Jamsu Bridge water level, and predicted tidal level, covering six years (2011~2016), and then used to predict the water level time series for six lead times: 1, 3, 6, 9, 12, and 24 hours. The optimal hyper-parameters of the LSTM model were set as follows: 6 hidden layers, a learning rate of 0.01, and 3000 iterations. In addition, we varied a key parameter of the LSTM model, the sequence length, from 1 to 6 hours to test its effect on the prediction results. The LSTM model with a 1 hr sequence length produced the best predictions in all cases. In particular, it yielded very accurate predictions for the 1 hr lead time case: RMSE of 0.065 cm and NSE of 0.99. However, as the lead time became longer, the RMSE increased from 0.08 m (1 hr lead time) to 0.28 m (24 hr lead time) and the NSE decreased from 0.99 to 0.74.
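
The abstract specifies the model family (LSTM), the three 10-min input series, and a tunable sequence length. Below is a minimal Keras sketch consistent with that description; the layer width, the windowing helper, and the training settings shown are assumptions rather than the paper's exact configuration (which reports 6 hidden layers, a 0.01 learning rate, and 3000 iterations).

```python
import numpy as np
import tensorflow as tf

def make_windows(series, seq_len, lead):
    """series: (T, 3) array of [dam release, water level, tidal level] at 10-min steps.
    Returns inputs of shape (N, seq_len, 3) and the water level 'lead' steps ahead."""
    X, y = [], []
    for t in range(len(series) - seq_len - lead):
        X.append(series[t:t + seq_len])
        y.append(series[t + seq_len + lead - 1, 1])   # column 1 = water level
    return np.array(X), np.array(y)

seq_len = 6            # e.g., 1 hr of 10-min samples
lead = 6               # e.g., predict 1 hr ahead

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(seq_len, 3)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss="mse")

# data = np.load(...)              # (T, 3) observed series would be loaded here
# X, y = make_windows(data, seq_len, lead)
# model.fit(X, y, epochs=50, batch_size=64)
```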

Comparison of learning performance of character controller based on deep reinforcement learning according to state representation (상태 표현 방식에 따른 심층 강화 학습 기반 캐릭터 제어기의 학습 성능 비교)

  • Sohn, Chaejun;Kwon, Taesoo;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society / v.27 no.5 / pp.55-61 / 2021
  • Research on physics-based character motion control using reinforcement learning continues to be actively carried out. To solve a problem with reinforcement learning, the network structure, hyperparameters, state, action, and reward must be set appropriately for the problem. In many studies, various combinations of states, actions, and rewards have been defined and successfully applied. Because there are many possible ways to define the state, action, and reward, numerous studies analyze the effect of each element to find the combination that best improves learning performance. In this work, we analyzed the effect of the state representation on reinforcement learning performance, which has not been examined so far. First, we defined three coordinate systems, the root-attached frame, the root-aligned frame, and the projected aligned frame, and analyzed how expressing the state in each of them affects reinforcement learning. Second, we analyzed how learning performance is affected when various combinations of joint positions and joint angles are used for the state.
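
The comparison revolves around expressing the character's state in different reference frames. Here is a small sketch of one of them, a root-aligned (heading-only) frame, under the assumptions that joint positions are given in world coordinates, the up axis is y, and the root yaw is known; the function name and conventions are illustrative.

```python
import numpy as np

def to_root_aligned_frame(joint_pos_world, root_pos, root_yaw):
    """Express world-space joint positions relative to the root, rotated so the
    root's heading (yaw) points along +x; the root's pitch and roll are ignored.

    joint_pos_world : (J, 3) array of joint positions
    root_pos        : (3,) root position
    root_yaw        : heading angle in radians
    """
    c, s = np.cos(-root_yaw), np.sin(-root_yaw)
    # Rotation about the vertical (y-up assumed) axis by -yaw.
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return (joint_pos_world - root_pos) @ R.T
```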

In-depth Recommendation Model Based on Self-Attention Factorization

  • Hongshuang Ma;Qicheng Liu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.721-739 / 2023
  • Rating prediction is an important issue in recommender systems, and its accuracy affects the user experience and the company's revenue. Traditional recommender systems use Factorization Machines for rating prediction, with every feature weighted equally, which leads to inaccurate ratings and limited data representation. This study proposes a deep recommendation model based on self-attention Factorization (SAFMR) to solve these problems. The model uses Convolutional Neural Networks to extract features from user and item reviews. The extracted features are fed into self-attention-based Factorization Machines, where the self-attention network automatically learns the dependencies among the features and distinguishes the weights of different features, thereby reducing the prediction error. The model was experimentally evaluated on six categories of datasets, comparing MSE, NDCG, and runtime across several real datasets. The experiments demonstrate that the SAFMR model achieves excellent rating prediction and recommendation correlation, verifying its effectiveness.
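
The core idea is to reweight feature embeddings with self-attention before the Factorization Machine interaction. A compact PyTorch sketch of that combination follows; the embedding size, head count, and exact pairwise-interaction form are assumptions, not the SAFMR specification.

```python
import torch
import torch.nn as nn

class AttentiveFM(nn.Module):
    """Self-attention over feature embeddings followed by an FM-style
    second-order interaction term plus a linear term."""
    def __init__(self, n_features, dim=16, heads=2):
        super().__init__()
        self.embed = nn.Embedding(n_features, dim)
        self.linear = nn.Embedding(n_features, 1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_ids):                     # feat_ids: (batch, n_fields) of feature indices
        e = self.embed(feat_ids)                     # (batch, fields, dim)
        e, _ = self.attn(e, e, e)                    # reweight features by learned dependencies
        # FM second-order term: 0.5 * ((sum e)^2 - sum(e^2)), summed over the embedding dim.
        square_of_sum = e.sum(dim=1).pow(2)
        sum_of_square = e.pow(2).sum(dim=1)
        pairwise = 0.5 * (square_of_sum - sum_of_square).sum(dim=1, keepdim=True)
        return self.linear(feat_ids).sum(dim=1) + pairwise   # predicted rating, shape (batch, 1)
```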

A DDoS attack Mitigation in IoT Communications Using Machine Learning

  • Hailye Tekleselase
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.170-178 / 2024
  • With the growth of fifth-generation networks and artificial intelligence technologies, new threats and challenges have emerged for wireless communication systems, especially in cybersecurity, and IoT networks have become increasingly attractive targets for DDoS attacks due to their inherently weaker security and the resource-constrained nature of IoT devices. This paper focuses on detecting DDoS attacks in wireless networks by categorizing incoming network packets at the transport layer as either "abnormal" or "normal" using machine learning algorithms integrated with a knowledge-based system. Deep learning algorithms and a CNN were trained independently for mitigating DDoS attacks. The paper concentrates on misuse-based DDoS attacks, which comprise TCP SYN flood and ICMP flood. The researcher used the CICIDS2017 and NSL-KDD datasets for training and testing the algorithms (models) during the experimentation phase. The accuracy score is used to measure the classification performance of the four algorithms, and the results show that an accuracy of 99.93 was recorded.
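
The detection step described above is a binary normal/abnormal classifier over packet or flow features. Below is a hedged sketch of one such model on NSL-KDD-style numeric features, using a 1-D CNN; the feature count, preprocessing, and architecture are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    """1-D CNN that labels a flow-feature vector (e.g., 41 NSL-KDD features,
    numerically encoded and scaled) as normal (0) or abnormal/DDoS (1)."""
    def __init__(self, n_features=41):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
            nn.Linear(64, 2),
        )

    def forward(self, x):                      # x: (batch, n_features)
        return self.net(x.unsqueeze(1))        # -> logits of shape (batch, 2)

model = FlowCNN()
features = torch.randn(8, 41)                  # placeholder for preprocessed flow records
predicted = model(features).argmax(dim=1)      # 0 = normal, 1 = abnormal
```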