• Title/Summary/Keyword: Lightweight Deep Learning


Performance Analysis of Optical Camera Communication with Applied Convolutional Neural Network (합성곱 신경망을 적용한 Optical Camera Communication 시스템 성능 분석)

  • Jong-In Kim;Hyun-Sun Park;Jung-Hyun Kim
    • Smart Media Journal / v.12 no.3 / pp.49-59 / 2023
  • Optical Camera Communication (OCC), regarded as a next-generation wireless communication technology, is under extensive research. The performance of OCC is affected by the communication environment, and various strategies are being studied to improve it. The most prominent approach applies convolutional neural networks (CNN) to the OCC receiver using deep learning. In most studies, however, the CNN is used only to detect the transmitter. In this paper, we experiment with applying convolutional neural networks not only to transmitter detection but also to the Rx demodulation system. We hypothesize that, since the data images of an OCC system are relatively simple to classify compared with other image datasets, most CNN models will achieve high accuracy. To test this hypothesis, we designed and implemented an OCC system to collect data and applied the data to 12 different CNN models. The experiments showed that not only high-capacity CNN models with many parameters but also lightweight CNN models achieved over 99% accuracy. This confirms the feasibility of running the OCC system in real time on mobile devices such as smartphones.
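
A minimal sketch of the idea this abstract describes, classifying cropped OCC data images into symbol classes with a small CNN; it is not the authors' model, and the input size (32x32), channel counts, and number of symbol classes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a lightweight CNN that maps a cropped
# OCC frame to one of a small set of symbol classes for Rx demodulation.
# Input size 32x32 and 16 symbol classes are assumptions for illustration.
import torch
import torch.nn as nn

class OCCDemodCNN(nn.Module):
    def __init__(self, num_symbols: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_symbols)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A batch of cropped transmitter regions is mapped to symbol logits.
logits = OCCDemodCNN()(torch.randn(8, 3, 32, 32))
print(logits.shape)  # torch.Size([8, 16])
```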

A Generalized Adaptive Deep Latent Factor Recommendation Model (일반화 적응 심층 잠재요인 추천모형)

  • Kim, Jeongha;Lee, Jipyeong;Jang, Seonghyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.29 no.1 / pp.249-263 / 2023
  • Collaborative filtering, a representative recommendation methodology, consists of two approaches: neighbor methods and latent factor models. Among these, the latent factor model based on matrix factorization decomposes the user-item interaction matrix into two lower-dimensional rectangular matrices and predicts an item's rating from their product. Because the factor vectors inferred from rating patterns capture user and item characteristics, this method is superior to neighbor-based methods in scalability, accuracy, and flexibility. However, it has a fundamental drawback: it cannot reflect the diversity of individual preferences for items with no ratings, which leads to repetitive and inaccurate recommendations. The Adaptive Deep Latent Factor Model (ADLFM) was developed to address this issue. It adaptively learns preferences for each item by using the item description, which provides a detailed summary and explanation of the item. ADLFM takes the item description as input, calculates latent vectors for the user and the item, and reflects personal diversity through an attention score. However, because it requires a dataset that includes item descriptions, the domains in which ADLFM can be applied are limited, constraining its generalizability. This study proposes a Generalized Adaptive Deep Latent Factor Recommendation Model, G-ADLFRM, to overcome these limitations. First, we use the item ID, which is common in recommendation systems, as input instead of the item description. In addition, we apply improved deep learning structures such as Self-Attention, Multi-head Attention, and Multi-Conv1D. We conducted experiments on various datasets while varying the input and the model structure. When only the input was changed, MAE increased slightly compared with ADLFM because of the accompanying information loss, reducing recommendation performance, but the average training speed per epoch improved significantly as the amount of information to process decreased. When both the input and the model structure were changed, the best-performing Multi-Conv1D structure matched the performance of ADLFM, sufficiently compensating for the information loss caused by the input change. We conclude that G-ADLFRM is a new, lightweight, and generalizable model that maintains the performance of the existing ADLFM while enabling fast training and inference.
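
A simplified sketch of the input change described above (item IDs instead of item descriptions, with a lightweight 1D convolution refining the item factors); this is a rough illustration under stated assumptions rather than the paper's G-ADLFRM architecture, and the embedding size, kernel width, and output head are invented for the example.

```python
# Rough sketch (assumptions throughout, not the paper's G-ADLFRM): user and
# item IDs are embedded, the item embedding is refined by a small Conv1D,
# and the rating is predicted from the interaction of the two latent vectors.
import torch
import torch.nn as nn

class TinyADLF(nn.Module):
    def __init__(self, n_users, n_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.conv = nn.Conv1d(1, 1, kernel_size=3, padding=1)  # refine item factors
        self.out = nn.Linear(dim, 1)

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)                   # (B, dim)
        v = self.item_emb(item_ids).unsqueeze(1)      # (B, 1, dim)
        v = self.conv(v).squeeze(1)                   # (B, dim)
        return self.out(u * v).squeeze(-1)            # predicted rating

model = TinyADLF(n_users=1000, n_items=500)
print(model(torch.tensor([3, 7]), torch.tensor([42, 17])).shape)  # torch.Size([2])
```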

Development of an intelligent edge computing device equipped with on-device AI vision model (온디바이스 AI 비전 모델이 탑재된 지능형 엣지 컴퓨팅 기기 개발)

  • Kang, Namhi
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.17-22 / 2022
  • In this paper, we design a lightweight embedded device that can support intelligent edge computing and show that the device detects objects in images from a camera quickly and in real time. The proposed system can be applied to environments without pre-installed infrastructure, such as intelligent video surveillance for industrial sites or military areas, or video security systems mounted on autonomous vehicles such as drones. On-device AI (artificial intelligence) technology is increasingly required for the widespread adoption of intelligent vision recognition systems. Offloading computation from the image acquisition device to a nearby edge device enables fast service with fewer network and system resources than AI services performed in the cloud. It is also expected to be applied safely across industries, since it reduces the attack surface exposed to hacking and minimizes the disclosure of sensitive data.
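
As a hedged illustration of the offloading idea, the sketch below runs an off-the-shelf lightweight detector locally on the device instead of sending frames to the cloud; torchvision's SSDLite-MobileNetV3 stands in for whatever on-device vision model the paper actually deploys.

```python
# Illustrative sketch (not the paper's implementation): detection runs on the
# edge device itself, so only results, not raw frames, would leave the device.
# SSDLite-MobileNetV3 is a stand-in; pretrained weights download on first use.
import torch
import torchvision

model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 320, 320)              # a camera frame would go here
with torch.no_grad():
    detections = model([frame])[0]           # inference stays on the device

keep = detections["scores"] > 0.5            # report only confident objects
print(detections["boxes"][keep], detections["labels"][keep])
```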

Lightweight multiple scale-patch dehazing network for real-world hazy image

  • Wang, Juan;Ding, Chang;Wu, Minghu;Liu, Yuanyuan;Chen, Guanhai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4420-4438 / 2021
  • Image dehazing is an ill-posed problem that is far from solved. Traditional dehazing methods often yield mediocre results at substandard processing speed, while modern deep learning methods perform well only on certain datasets: their haze removal is unsatisfactory, and their generalization performance fails to meet practical requirements. At the same time, limited processing speed keeps most dehazing algorithms out of industrial use. To alleviate these problems, this paper proposes a lightweight, fast dehazing network based on a multiple scale-patch framework (MSP). The multi-scale structure serves as the backbone network and the multi-patch structure as the supplementary network. Because dehazing with a single network can lose object detail and color in some image regions, the multi-patch structure is employed in MSP as an information supplement: in the image processing module, the image is split into upper and lower parts that are processed separately. MSP produces a clear dehazing result with strong robustness on real-world homogeneous and non-homogeneous hazy images and across datasets. Compared with existing dehazing methods, MSP shows fast inference and the feasibility of real-time processing. The overall size and parameter count of the dehazing model are 20.75M and 6.8M, respectively, and processing a single image takes 0.026 s. Experiments on NTIRE 2018 and NTIRE 2020 demonstrate that MSP achieves superior performance among state-of-the-art methods in PSNR, SSIM, LPIPS, and subjective evaluation.
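
The sketch below is only a schematic of the multi-scale plus multi-patch (upper/lower split) processing the abstract outlines, not the MSP network itself; the single tiny convolution standing in for the dehazing subnetworks and the averaging fusion are placeholders.

```python
# Schematic sketch of the multi-scale / multi-patch idea (not the MSP code):
# a coarse half-resolution pass guides a full-resolution pass, while an
# upper/lower split supplements local detail. The conv layer is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

dehaze = nn.Conv2d(3, 3, kernel_size=3, padding=1)    # stands in for a real subnetwork

def msp_like(x):
    # multi-scale branch: half-resolution result upsampled and refined
    coarse = F.interpolate(dehaze(F.interpolate(x, scale_factor=0.5)),
                           size=x.shape[-2:], mode="bilinear", align_corners=False)
    fine = dehaze(x + coarse)
    # multi-patch branch: upper and lower halves processed separately
    top, bottom = torch.chunk(x, 2, dim=-2)
    patches = torch.cat([dehaze(top), dehaze(bottom)], dim=-2)
    return (fine + patches) / 2                        # naive fusion

print(msp_like(torch.rand(1, 3, 256, 256)).shape)      # torch.Size([1, 3, 256, 256])
```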

Lightweight Attention-Guided Network with Frequency Domain Reconstruction for High Dynamic Range Image Fusion

  • Park, Jae Hyun;Lee, Keuntek;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.205-208 / 2022
  • Multi-exposure high dynamic range (HDR) image reconstruction, the task of reconstructing an HDR image from multiple low dynamic range (LDR) images of a dynamic scene, often produces ghosting artifacts caused by camera motion and moving objects and cannot handle regions washed out by over- or under-exposure. While there have been many deep-learning-based methods with motion estimation to alleviate these problems, they still struggle with severely moving scenes and require large parameter counts, especially the state-of-the-art methods that employ attention modules. To address these issues, we propose a frequency-domain approach based on the idea that transform-domain coefficients inherently carry global information from all image pixels and can therefore cope with large motions. Specifically, we adopt Residual Fast Fourier Transform (RFFT) blocks, which allow global interaction among pixels, and Depthwise Over-parameterized convolution (DO-conv) blocks, in which each input channel is convolved with its own 2D kernel, for faster convergence and performance gains. We call the resulting network LFFNet (Lightweight Frequency Fusion Network). Experiments on the benchmarks show reduced ghosting artifacts and improvements of up to 0.6 dB in tonemapped PSNR over recent state-of-the-art methods, while our architecture requires fewer parameters and converges faster in training.
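
A hedged sketch of a residual FFT block in the spirit of the description above (a 2D real FFT, a pointwise convolution on the real/imaginary parts, an inverse FFT, and a spatial residual); it is not the authors' LFFNet block, and the channel count and fusion are assumptions.

```python
# Hedged sketch (not the authors' code): frequency-domain filtering gives each
# output position access to global image content, alongside a spatial residual.
import torch
import torch.nn as nn

class ResFFTBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq = nn.Conv2d(2 * channels, 2 * channels, 1)   # acts on stacked Re/Im

    def forward(self, x):
        h, w = x.shape[-2:]
        spec = torch.fft.rfft2(x, norm="ortho")
        z = self.freq(torch.cat([spec.real, spec.imag], dim=1))
        re, im = torch.chunk(z, 2, dim=1)
        freq_out = torch.fft.irfft2(torch.complex(re, im), s=(h, w), norm="ortho")
        return x + self.spatial(x) + freq_out                  # residual fusion

print(ResFFTBlock()(torch.randn(2, 32, 64, 64)).shape)  # torch.Size([2, 32, 64, 64])
```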

LH-FAS v2: Head Pose Estimation-Based Lightweight Face Anti-Spoofing (LH-FAS v2: 머리 자세 추정 기반 경량 얼굴 위조 방지 기술)

  • Hyeon-Beom Heo;Hye-Ri Yang;Sung-Uk Jung;Kyung-Jae Lee
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.309-316 / 2024
  • Facial recognition technology is widely used in various fields but is vulnerable to fraudulent activities such as photo spoofing. Extensive research has been conducted to overcome this challenge, but most approaches require specialized equipment such as multi-modal cameras or operation in high-performance environments. In this paper, we introduce LH-FAS v2 (Lightweight Head-pose-based Face Anti-Spoofing v2), a system designed to run on a commercial webcam without any specialized equipment. LH-FAS v2 uses FSA-Net for head pose estimation and ArcFace for facial recognition, assessing changes in head pose while verifying facial identity. We also built the VD4PS dataset, which incorporates photo-spoofing scenarios, to evaluate the model. The experimental results show that the model balances accuracy and speed, indicating that head-pose-estimation-based face anti-spoofing can effectively counteract photo spoofing.
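
The decision logic can be pictured as below; this is a conceptual sketch only, in which `estimate_pose` and `face_embedding` are hypothetical stand-ins for FSA-Net and ArcFace outputs and the thresholds are invented for illustration.

```python
# Conceptual sketch of the described logic (not the LH-FAS v2 code).
# `estimate_pose` and `face_embedding` are hypothetical callables standing in
# for FSA-Net and ArcFace; thresholds are illustrative.
import numpy as np

def is_live_and_genuine(frames, enrolled_emb, estimate_pose, face_embedding,
                        min_yaw_range=15.0, sim_threshold=0.5):
    # Liveness: a real user following a head-turn prompt produces a noticeable
    # yaw sweep, while a static printed photo does not.
    yaws = np.array([estimate_pose(f)[0] for f in frames])    # (yaw, pitch, roll)
    live = yaws.max() - yaws.min() >= min_yaw_range

    # Identity: cosine similarity between a probe embedding and the enrolled one.
    probe = face_embedding(frames[len(frames) // 2])
    sim = probe @ enrolled_emb / (np.linalg.norm(probe) * np.linalg.norm(enrolled_emb))
    return live and sim >= sim_threshold
```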

A Study on Lightweight Transformer Based Super Resolution Model Using Knowledge Distillation (지식 증류 기법을 사용한 트랜스포머 기반 초해상화 모델 경량화 연구)

  • Dong-hyun Kim;Dong-hun Lee;Aro Kim;Vani Priyanka Galia;Sang-hyo Park
    • Journal of Broadcast Engineering / v.28 no.3 / pp.333-336 / 2023
  • Recently, the transformer model used in natural language processing has also been applied to image super resolution, showing good performance. However, these transformer-based models are difficult to use on small mobile devices because they are complex, have many learnable parameters, and require substantial hardware resources. Therefore, in this paper, we propose a knowledge distillation technique that can effectively reduce the size of a transformer-based super-resolution model. Experiments confirmed that applying the proposed technique to a student model with a reduced number of transformer blocks yields performance similar to or higher than that of the teacher model.
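
Output-level knowledge distillation of the kind described here can be sketched as a weighted sum of a reconstruction loss against the ground-truth HR image and a loss matching the frozen teacher's output; this is a generic illustration of the technique, not the paper's exact losses, and the weight `alpha` is an assumed hyperparameter.

```python
# Generic sketch of output-level distillation for super resolution (not the
# paper's exact formulation): the smaller student is trained against both the
# ground truth and the frozen teacher's prediction.
import torch
import torch.nn.functional as F

def distillation_loss(student_sr, teacher_sr, hr, alpha=0.5):
    recon = F.l1_loss(student_sr, hr)                     # supervised SR loss
    distill = F.l1_loss(student_sr, teacher_sr.detach())  # match the teacher
    return (1 - alpha) * recon + alpha * distill

# Typical use inside a training step (teacher frozen):
#   with torch.no_grad():
#       teacher_sr = teacher(lr)
#   loss = distillation_loss(student(lr), teacher_sr, hr)
```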

Proposal for License Plate Recognition Using Synthetic Data and Vehicle Type Recognition System (가상 데이터를 활용한 번호판 문자 인식 및 차종 인식 시스템 제안)

  • Lee, Seungju;Park, Gooman
    • Journal of Broadcast Engineering / v.25 no.5 / pp.776-788 / 2020
  • In this paper, a vehicle type recognition system using deep learning and a license plate recognition system are proposed. Existing systems extract the license plate area through image processing and recognize the characters with a DNN, and their recognition rates decline as the environment changes. The proposed system therefore uses the one-stage object detector YOLO v3 to address real-time detection and the accuracy drop caused by environmental changes, enabling real-time vehicle type and license plate character recognition with a single RGB camera. The training data consist of real images for vehicle type recognition and license plate detection, and synthetic data for license plate character recognition. The accuracy of each module was 96.39% for vehicle model detection, 99.94% for license plate detection, and 79.06% for license plate character recognition. In addition, accuracy was measured with YOLO v3-tiny, a lightweight version of YOLO v3.
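
The pipeline can be pictured as the schematic below, where `detect` is a hypothetical YOLOv3-style detector returning (label, confidence, box) tuples and `recognize_characters` is a hypothetical reader for the cropped plate; neither is the authors' implementation.

```python
# Schematic sketch of the two-stage pipeline (hypothetical helpers, not the
# authors' code): one detector pass finds the vehicle class and plate box,
# then characters are read from the plate crop.
def read_vehicle(frame, detect, recognize_characters):
    vehicle_type, plate_text = None, None
    for label, conf, (x1, y1, x2, y2) in detect(frame):
        if conf < 0.5:
            continue
        if label == "license_plate":
            plate_crop = frame[y1:y2, x1:x2]               # crop the plate region
            plate_text = recognize_characters(plate_crop)  # model trained on synthetic data
        else:
            vehicle_type = label                           # a vehicle-model class
    return vehicle_type, plate_text
```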

Concurrent Detection for Vehicles and Lanes Using Light-Weight Model of Multi-Task CNN (멀티 테스크 CNN의 경량화 모델을 이용한 차량 및 차선의 동시 검출)

  • Shin, Hyeon-Sik;Kim, Hyung-Won;Hong, Sang-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.3 / pp.367-373 / 2022
  • As deep-learning-based autonomous driving technology develops, artificial intelligence models for various purposes have been studied, and several of these models are used simultaneously in autonomous driving systems, which increases hardware resource consumption. We propose a multi-task model with a shared backbone to solve this problem, removing the need for a separate backbone per AI model. In the proposed lightweight model, the number of parameters is reduced by more than 50% compared with the existing model and the speed is improved. In addition, individual lanes can be distinguished through lane detection with an instance segmentation method. However, further research is needed on the accuracy drop compared with the existing model.
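
A minimal sketch of the shared-backbone layout follows; it is not the paper's network, and the channel counts, output grid, and box parameterization are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): one backbone feeds both a vehicle
# detection head and a lane segmentation head, so the feature cost is paid once.
import torch
import torch.nn as nn

class SharedBackboneMultiTask(nn.Module):
    def __init__(self, num_lane_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(                     # shared by both tasks
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(64, 5, 1)                  # per-cell (obj, x, y, w, h)
        self.lane_head = nn.Conv2d(64, num_lane_classes, 1)  # per-pixel lane instance map

    def forward(self, x):
        feat = self.backbone(x)
        return self.det_head(feat), self.lane_head(feat)

det, lanes = SharedBackboneMultiTask()(torch.rand(1, 3, 256, 512))
print(det.shape, lanes.shape)  # torch.Size([1, 5, 64, 128]) torch.Size([1, 4, 64, 128])
```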

AIoT-based High-risk Industrial Safety Management System of Artificial Intelligence (AIoT 기반 고위험 산업안전관리시스템 인공지능 연구)

  • Yeo, Seong-koo;Park, Dea-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1272-1278 / 2022
  • The government enacted and promulgated the 'Severe Accident Punishment Act' in January 2021 and is now enforcing it, yet the number of occupational accidents in 2021 increased by 10.7% over the same period of the previous year, so safety measures are urgently needed in industrial settings. In this study, BLE mesh networking is applied to the safety management of high-risk industrial sites with poor communication environments. A multi-sensor AIoT device collects gas, voice, and motion readings in real time, analyzes them with LSTM and CNN algorithms, recognizes dangerous situations, and transmits them to the server. The server monitors the transmitted risk information in real time so that immediate relief measures can be taken. Applying the proposed AIoT device and safety management system to high-risk industrial sites will minimize industrial accidents and contribute to expanding the social safety net.
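
The analysis step might be sketched as below: an LSTM classifies a short window of multi-sensor readings as normal or dangerous. This is only an illustration of the kind of model named in the abstract; the window length, feature count, and class count are assumptions.

```python
# Illustrative sketch (not the deployed system): an LSTM scores a window of
# multi-sensor readings (e.g. gas, motion) as normal vs. dangerous.
import torch
import torch.nn as nn

class DangerLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # logits: normal vs. dangerous

window = torch.randn(1, 50, 4)             # 50 time steps of 4 assumed sensor channels
print(DangerLSTM()(window).shape)          # torch.Size([1, 2])
```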