• Title/Summary/Keyword: Deep Fusion Model

Real-time Segmentation of Black Ice Region in Infrared Road Images

  • Li, Yu-Jie;Kang, Sun-Kyoung;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.33-42 / 2022
  • In this paper, we propose a deep learning model based on multi-scale dilated convolution feature fusion for segmenting black ice regions in road images, so that black ice warnings can be sent to drivers in real time. In the proposed network, convolutions with different dilation rates are connected in parallel in the encoder blocks, different dilation rates are applied to feature maps of different resolutions, and multi-layer feature information is fused together. The multi-scale dilated convolution feature fusion improves performance by diversifying and expanding the receptive field of the network, by preserving detailed spatial information, and by enhancing the effectiveness of the dilated convolutions. The performance of the proposed model improved gradually as the number of dilated convolution branches increased. The mIoU of the proposed method is 96.46%, higher than that of existing networks such as U-Net, FCN, PSPNet, ENet, and LinkNet. The parameter count is 1,858K, six times smaller than that of the existing LinkNet model. In experiments on a Jetson Nano, the proposed method ran at 3.63 FPS, which enables real-time segmentation of black ice regions.
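
A minimal sketch of the parallel dilated-convolution fusion idea described above, assuming PyTorch; the dilation rates, channel sizes, and the class name DilatedFusionBlock are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class DilatedFusionBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding == dilation keeps the spatial size constant for a 3x3 kernel
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 convolution fuses the concatenated multi-scale features
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 64, 128, 160)          # e.g. an encoder feature map
y = DilatedFusionBlock(64, 64)(x)
print(y.shape)                            # torch.Size([1, 64, 128, 160])
```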

Enhancing Recommender Systems by Fusing Diverse Information Sources through Data Transformation and Feature Selection

  • Thi-Linh Ho;Anh-Cuong Le;Dinh-Hong Vu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1413-1432 / 2023
  • Recommender systems aim to recommend items to users by taking into account their probable interests. This study focuses on creating a model that utilizes multiple sources of information about users and items through a multimodality approach. It addresses how to gather information from different sources (modalities) and transform it into a uniform format, resulting in a multi-modal feature description for users and items. This work also aims to transform and represent the features extracted from different modalities so that the information is in a compatible format for integration and contains important, useful information for the prediction model. To achieve this goal, we propose a novel multi-modal recommendation model, which involves extracting latent features of users and items from a utility matrix using matrix factorization techniques. Various transformation techniques are utilized to extract features from other sources of information such as user reviews, item descriptions, and item categories. We also propose the use of Principal Component Analysis (PCA) and feature selection techniques to reduce the data dimensionality, extract important features, and remove noisy features to increase the accuracy of the model. We conducted experiments with several model variants based on different subsets of modalities on the MovieLens and Amazon sub-category datasets. According to the experimental results, the proposed model significantly enhances the accuracy of recommendations compared to SVD, which is acknowledged as one of the most effective models for recommender systems. Specifically, the proposed model reduces the RMSE by 4.8% to 21.43% and increases the Precision by 2.07% to 26.49% on the Amazon datasets. Similarly, on the MovieLens dataset, the proposed model reduces the RMSE by 45.61% and increases the Precision by 14.06%. Additionally, the experimental results on both datasets demonstrate that combining information from multiple modalities in the proposed model leads to superior outcomes compared to relying on a single type of information.
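
The multi-modal pipeline the abstract describes can be sketched roughly as below, assuming scikit-learn; the toy data, dimensions (16 latent factors, k=24 selected features), and the Ridge predictor are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD, PCA
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(200, 50)).astype(float)   # toy user-item utility matrix
desc = rng.random((50, 300))                           # toy item-description features

# Modality 1: latent user/item factors via matrix factorization (truncated SVD).
svd = TruncatedSVD(n_components=16, random_state=0)
U = svd.fit_transform(R)             # user factors, shape (200, 16)
V = svd.components_.T                # item factors, shape (50, 16)

# Modality 2: compress item-description features with PCA.
item_txt = PCA(n_components=16, random_state=0).fit_transform(desc)

# One training row per observed (user, item) rating.
users, items = np.nonzero(R)
X = np.hstack([U[users], V[items], item_txt[items]])
y = R[users, items]

# Feature selection keeps the most informative columns before fitting a predictor.
X_sel = SelectKBest(f_regression, k=24).fit_transform(X, y)
print("fit R^2:", Ridge().fit(X_sel, y).score(X_sel, y))
```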

Multi-Task FaceBoxes: A Lightweight Face Detector Based on Channel Attention and Context Information

  • Qi, Shuaihui;Yang, Jungang;Song, Xiaofeng;Jiang, Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.10 / pp.4080-4097 / 2020
  • In recent years, the convolutional neural network (CNN) has become the primary method for face detection. However, its shortcomings are obvious, such as expensive computation and heavy models, which make CNNs difficult to use on mobile devices with limited computing and storage capabilities. Therefore, the design of lightweight CNNs for face detection is becoming more and more important with the popularity of smartphones and the mobile Internet. Based on the CPU real-time face detector FaceBoxes, we propose a multi-task lightweight face detector with a low computing cost and higher detection precision. First, to improve detection capability, squeeze-and-excitation modules are used to extract attention between channels. Then, texture and semantic information are extracted by shallow and deep networks, respectively, to obtain rich features. Finally, a landmark detection module is used to improve detection performance for small faces and to provide landmark data for face alignment. Experiments on the AFW, FDDB, PASCAL, and WIDER FACE datasets show that our algorithm achieves a significant improvement in mean average precision. In particular, on the WIDER FACE hard validation set, our algorithm exceeds the mean average precision of FaceBoxes by 7.2%. For VGA-resolution images, our algorithm runs at up to 23 FPS on a CPU device.
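
A minimal squeeze-and-excitation (SE) channel-attention module of the kind the detector employs, written as a PyTorch sketch; the reduction ratio of 16 is the common default and an assumption here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial context
        self.fc = nn.Sequential(                     # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # re-weight the feature channels

feat = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```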

Quantitative Evaluation of Super-resolution Drone Images Generated Using Deep Learning (딥러닝을 이용하여 생성한 초해상화 드론 영상의 정량적 평가)

  • Seo, Hong-Deok;So, Hyeong-Yoon;Kim, Eui-Myoung
    • Journal of Cadastre & Land InformatiX / v.53 no.2 / pp.5-18 / 2023
  • As the development of drones and sensors accelerates, new services and values are created by fusing data acquired from the various sensors mounted on drones. However, spatial information built through data fusion depends mainly on the imagery, and data quality is determined by the specifications and performance of the hardware. In addition, expensive equipment is required to construct high-quality spatial information, which makes such approaches difficult to use in the field. In this study, super-resolution was performed by applying deep learning to low-resolution images acquired through RGB and THM cameras mounted on a drone, and quantitative evaluation and feature point extraction were performed on the generated high-resolution images. The experiments showed that the high-resolution images generated by super-resolution maintained the characteristics of the original images, and as the resolution improved, more feature points could be extracted than from the original images. Therefore, applying low-resolution images to a super-resolution deep learning model is judged to be a new way to construct high-quality spatial information without hardware constraints.
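
A rough sketch of the evaluation idea, assuming OpenCV: count the feature points a detector finds before and after upscaling. ORB stands in for the paper's feature extractor, and bicubic interpolation stands in for the deep super-resolution model purely to keep the example runnable.

```python
import cv2
import numpy as np

low = (np.random.rand(120, 160) * 255).astype(np.uint8)         # stand-in LR image
high = cv2.resize(low, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

orb = cv2.ORB_create(nfeatures=5000)
kp_low, _ = orb.detectAndCompute(low, None)
kp_high, _ = orb.detectAndCompute(high, None)
print(len(kp_low), len(kp_high))   # the upscaled image typically yields more keypoints
```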

A Study on the Implementation of Real-Time Marine Deposited Waste Detection AI System and Performance Improvement Method by Data Screening and Class Segmentation (데이터 선별 및 클래스 세분화를 적용한 실시간 해양 침적 쓰레기 감지 AI 시스템 구현과 성능 개선 방법 연구)

  • Wang, Tae-su;Oh, Seyeong;Lee, Hyun-seo;Choi, Donggyu;Jang, Jongwook;Kim, Minyoung
    • The Journal of the Convergence on Culture Technology / v.8 no.3 / pp.571-580 / 2022
  • Marine deposited waste is a major cause of damage, for example through ghost fishing in abandoned fishing grounds, and of growth in the estimated amount of submerged garbage. In this paper, we implement a real-time marine deposited waste detection AI system to understand the actual conditions of waste fishing gear usage, distribution, loss, and recovery, and we study methods for improving its performance. The system was implemented using the YOLOv5 model, which performs excellently for real-time object detection, and a 'data screening process' and 'class segmentation' of the training data were applied as performance improvement methods. In conclusion, datasets from which unnecessary data were screened out, or in which similar items were not subdivided by characteristics and uses, produced better object detection results than unscreened datasets and datasets with subdivided classes.
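
A minimal YOLOv5 inference sketch via torch.hub; a custom marine-waste model would load its own trained weights, so the stock COCO-pretrained yolov5s checkpoint is used here only so the snippet runs as-is.

```python
import torch

# Loads the stock pretrained model; a marine-waste detector would load
# custom-trained weights instead (path omitted because it is project-specific).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('https://ultralytics.com/images/zidane.jpg')  # any image path/URL
results.print()              # per-class counts and inference speed
boxes = results.xyxy[0]      # tensor rows: x1, y1, x2, y2, confidence, class
print(boxes[:3])
```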

Recognition of Overlapped Sound and Influence Analysis Based on Wideband Spectrogram and Deep Neural Networks (광역 스펙트로그램과 심층신경망에 기반한 중첩된 소리의 인식과 영향 분석)

  • Kim, Young Eon;Park, Gooman
    • Journal of Broadcast Engineering / v.23 no.3 / pp.421-430 / 2018
  • Many voice recognition systems use methods such as MFCC and HMM to recognize the human voice. These methods are designed to analyze only a targeted sound, which normally occurs between a human and a device. However, their recognition capability is limited when a group of sounds spans a wider frequency range, such as dog barking and indoor sounds. Overlapped sounds occupy a wide frequency range, up to 20 kHz, which is higher than the human voice. This paper proposes a new recognition method that covers this wider frequency range by combining the Wideband Sound Spectrogram (WSS) and a Keras Sequential Model (KSM) based on a DNN. The wideband sound spectrogram is adopted to analyze and verify diverse sounds across a wide frequency range, as it is designed to extract features for classification. The KSM performs pattern recognition on the features extracted from the WSS to improve sound recognition quality. Experiments verified that the proposed WSS and KSM classify the targeted sound well in noisy environments with overlapped sounds such as dog barking and indoor sounds. Furthermore, the paper presents a stage-by-stage analysis and comparison of the factors influencing recognition and of its characteristics under various noise levels.
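
The two stages can be sketched as below, assuming librosa and Keras; the clip length, STFT parameters, layer sizes, and three-class output are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import librosa
from tensorflow import keras

y_audio = np.random.randn(44100).astype(np.float32)        # 1 s stand-in clip at 44.1 kHz
spec = librosa.stft(y_audio, n_fft=2048, hop_length=512)   # spans 0 to 22.05 kHz
feat = librosa.amplitude_to_db(np.abs(spec)).flatten()     # wideband spectrogram feature

model = keras.Sequential([
    keras.layers.Input(shape=(feat.size,)),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(3, activation='softmax'),           # e.g. voice / dog bark / indoor
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
print(model.predict(feat[None, :]).shape)                  # (1, 3) class probabilities
```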

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering / v.23 no.3 / pp.351-360 / 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals consisting of image, landmark, and audio data in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning that exploit the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is proposed for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep learning mechanism robust to those specific emotions. Finally, so-called emotion-adaptive fusion is applied to enable synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised and semi-supervised learning networks. On the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
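
A generic late-fusion sketch of the idea behind emotion-adaptive fusion: each modality network emits class probabilities, and per-class weights let a modality that is strong for a given emotion dominate. The weights and probabilities below are made up, and the paper's actual fusion rule may differ.

```python
import numpy as np

# Softmax outputs from three modality networks for one clip (3 emotion classes).
p_image = np.array([0.50, 0.20, 0.30])    # image network
p_land  = np.array([0.40, 0.35, 0.25])    # landmark CNN-LSTM
p_audio = np.array([0.10, 0.75, 0.15])    # audio network

# Per-class fusion weights (rows: modalities, columns: emotion classes);
# each column sums to 1 so every class mixes the three modalities.
W = np.array([[0.4, 0.2, 0.5],
              [0.3, 0.2, 0.3],
              [0.3, 0.6, 0.2]])

P = np.vstack([p_image, p_land, p_audio])
fused = (W * P).sum(axis=0)
fused /= fused.sum()                      # renormalize to a distribution
print(fused.argmax(), fused)              # predicted emotion index and scores
```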

Effect of the Learning Image Combinations and Weather Parameters in the PM Estimation from CCTV Images (CCTV 영상으로부터 미세먼지 추정에서 학습영상조합, 기상변수 적용이 결과에 미치는 영향)

  • Won, Taeyeon;Eo, Yang Dam;Sung, Hong ki;Chong, Kyu soo;Youn, Junhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.573-581 / 2020
  • Using CCTV images and weather parameters, a method for estimating the PM (Particulate Matter) index was proposed and tested experimentally. For the CCTV images, we proposed estimating the PM index by applying a deep learning technique based on a CNN (Convolutional Neural Network) to both an ROI (Region Of Interest) image containing a specific spot and a full-area image. In addition, after combining the values predicted by deep learning with two weather parameters, humidity and wind speed, a post-processing experiment was conducted to calculate a corrected PM index using a learned regression model. In the experiments, the PM index estimated from the CCTV images achieved an R2 (R-squared) of 0.58~0.89, and training on the ROI image together with the full-area image containing the measuring device gave the best results. Post-processing with the weather parameters did not consistently improve accuracy across the experimental area.
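
A sketch of the post-processing step, assuming scikit-learn: the CNN's raw PM estimate is refined with humidity and wind speed through a learned regression model. The toy data and the linear model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
cnn_pm   = rng.uniform(10, 120, 200)       # raw CNN estimates from CCTV frames
humidity = rng.uniform(20, 95, 200)        # relative humidity (%)
wind     = rng.uniform(0, 8, 200)          # wind speed (m/s)
# Synthetic "ground truth" so the toy model has something to learn.
true_pm = 0.9 * cnn_pm + 0.1 * humidity - 1.5 * wind + rng.normal(0, 5, 200)

X = np.column_stack([cnn_pm, humidity, wind])
reg = LinearRegression().fit(X, true_pm)
print('R^2:', reg.score(X, true_pm))                     # analogue of the paper's R2
print('corrected PM:', reg.predict([[80.0, 70.0, 2.0]]))
```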

Estimating the Stand Level Vegetation Structure Map Using Drone Optical Imageries and LiDAR Data based on an Artificial Neural Networks (ANNs) (인공신경망 기반 드론 광학영상 및 LiDAR 자료를 활용한 임분단위 식생층위구조 추정)

  • Cha, Sungeun;Jo, Hyun-Woo;Lim, Chul-Hee;Song, Cholho;Lee, Sle-Gee;Kim, Jiwon;Park, Chiyoung;Jeon, Seong-Woo;Lee, Woo-Kyun
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.653-666 / 2020
  • Understanding vegetation structure is important for managing forest resources for sustainable forest development. With recent developments in technology, it is possible to apply new technologies such as drones and deep learning to forests and to use them to estimate vegetation structure. In this study, the vegetation structure of the Gongju, Samchuk, and Seoguipo areas was identified by fusing drone optical images and LiDAR data with Artificial Neural Networks (ANNs), achieving accuracies of 92.62% (Kappa: 0.59), 91.57% (Kappa: 0.53), and 86.00% (Kappa: 0.63), respectively. The performance of this deep learning-based vegetation structure analysis is expected to increase as the amount of information in the optical and LiDAR data grows. In the future, if the model is developed with higher complexity that can reflect the various characteristics of vegetation, and with sufficient sampling, it could be used to construct a country-level vegetation structure map serving as reference data for Korea's policies and regulations.
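
A rough sketch of the fusion idea, assuming scikit-learn: per-pixel optical bands and a LiDAR-derived height are stacked into one feature vector and classified into vegetation strata by a small neural network. The feature layout, three-class labeling rule, and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n = 1000
optical = rng.random((n, 3))                   # per-pixel R, G, B reflectance
height  = rng.uniform(0, 30, (n, 1))           # LiDAR-derived canopy height (m)
X = np.hstack([optical, height])               # fused optical + LiDAR feature vector
# Toy labels: three strata by height thresholds (single / double / triple layer).
y = (height[:, 0] > 5).astype(int) + (height[:, 0] > 15).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print('accuracy:', clf.score(X, y))
print('kappa   :', cohen_kappa_score(y, clf.predict(X)))   # metric used in the paper
```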

T-Commerce Sale Prediction Using Deep Learning and Statistical Model (딥러닝과 통계 모델을 이용한 T-커머스 매출 예측)

  • Kim, Injung;Na, Kihyun;Yang, Sohee;Jang, Jaemin;Kim, Yunjong;Shin, Wonyoung;Kim, Deokjung
    • Journal of KIISE / v.44 no.8 / pp.803-812 / 2017
  • T-commerce is a technology-fusion service in which users can purchase goods via data broadcasting technology on bi-directional digital TVs. To achieve the best revenue in an environment limited in the number of channels and the variety of sales goods, broadcast programs must be organized to maximize expected sales, considering the selling power of each product in each time slot. To this end, this paper proposes a method to predict the sales of goods assigned to each time slot. The proposed method predicts the sales of a product in a time slot given the week-in-year and the weather of the target day. Additionally, it is combined with a statistical prediction model that applies SVD (Singular Value Decomposition) to mitigate the sparsity problem caused by bias in the sales record. In experiments on the sales data of W-shopping, a T-commerce company, the proposed method showed an NMAE (Normalized Mean Absolute Error) of 0.12 between the predicted and actual sales, which confirms its effectiveness. The proposed method has been applied to W-shopping's T-commerce system and is used for broadcasting organization.
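
A sketch of the hybrid idea in NumPy: a low-rank SVD reconstruction smooths the sparse (time slot x product) sales matrix, and its estimate is blended with a deep-model prediction. The blend weight, rank, and toy data are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.poisson(5.0, (24, 40)).astype(float)    # (time slot x product) sales record
S[rng.random(S.shape) < 0.6] = 0                # simulate a sparse, biased record

# Low-rank SVD reconstruction mitigates the sparsity of the sales matrix.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
k = 5                                           # assumed rank
S_stat = (U[:, :k] * s[:k]) @ Vt[:k]            # statistical estimate

dl_pred = S + rng.normal(0, 1, S.shape)         # stand-in for the deep model's output
alpha = 0.7                                     # assumed blend weight
combined = alpha * dl_pred + (1 - alpha) * S_stat

nmae = np.abs(combined - S).mean() / S.mean()   # NMAE, the metric used in the paper
print('NMAE:', round(nmae, 3))
```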