• Title/Summary/Keyword: CNN Model

Prediction of Agricultural Purchases Using Structured and Unstructured Data: Focusing on Paprika (정형 및 비정형 데이터를 이용한 농산물 구매량 예측: 파프리카를 중심으로)

  • Somakhamixay Oui;Kyung-Hee Lee;HyungChul Rah;Eun-Seon Choi;Wan-Sup Cho
    • The Journal of Bigdata, v.6 no.2, pp.169-179, 2021
  • Consumers' food consumption behavior is likely to be affected not only by structured data, such as consumer panel data, but also by unstructured data, such as mass media and social media. In this study, a deep learning-based consumption prediction model was built and validated on a fused data set linking structured and unstructured data related to food consumption. The results showed that model accuracy improved when structured and unstructured data were combined; in particular, the unstructured data improved the model's predictive power. When the SHAP technique was used to identify variable importance, variables related to blog and video data ranked at the top and were positively correlated with the amount of paprika purchased. The experiments also confirmed that the machine learning model achieved higher accuracy than the deep learning model and can be an efficient alternative to conventional time-series modeling.
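
The SHAP-based variable-importance step described in this abstract can be illustrated with a short sketch. The tree-based regressor, the feature names, and the synthetic data below are hypothetical stand-ins for the paper's actual fused data set and model:

```python
# A minimal sketch of SHAP-based variable importance for a
# purchase-quantity regressor; all names and data are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical fused data set: structured panel features plus
# blog/video mention counts derived from unstructured data.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "price": rng.uniform(1.0, 5.0, 500),
    "household_size": rng.integers(1, 6, 500),
    "blog_mentions": rng.poisson(3.0, 500),
    "video_mentions": rng.poisson(1.5, 500),
})
y = 2.0 * X["blog_mentions"] + X["video_mentions"] - X["price"] + rng.normal(0, 1, 500)

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```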

Artificial Neural Network-based Thermal Environment Prediction Model for Energy Saving of Data Center Cooling Systems (데이터센터 냉각 시스템의 에너지 절약을 위한 인공신경망 기반 열환경 예측 모델)

  • Chae-Young Lim;Chae-Eun Yeo;Seong-Yool Ahn;Sang-Hyun Lee
    • The Journal of the Convergence on Culture Technology, v.9 no.6, pp.883-888, 2023
  • Since data centers provide IT services 24 hours a day, 365 days a year, data center power consumption is expected to grow to approximately 10% of total power consumption by 2030, and the deployment of high-density IT equipment will gradually increase. To ensure the stable operation of IT equipment, research is required on conserving cooling energy and improving energy management. This study proposes the following process for energy saving in data centers: we conducted CFD modeling of the data center, proposed an artificial neural network-based thermal environment prediction model, compared actual measurements, the model's predictions, and the CFD results, and finally evaluated the data center's thermal management performance. The predicted values of RCI, RTI, and PUE were likewise similar across the normalization methods used. Therefore, the algorithm proposed in this study can be applied as a thermal environment prediction model for data centers.
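
As a rough illustration of the kind of ANN-based thermal prediction model the abstract describes, the sketch below fits a small feedforward regressor under two normalization schemes. The input features, target, and data are invented for illustration and are not the paper's:

```python
# A minimal sketch of an ANN-based thermal prediction model, assuming
# the inputs are cooling-system settings and the target is a rack
# inlet temperature; the columns and data are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
# Columns: supply air temp (C), fan speed ratio, IT load (kW).
X = rng.uniform([15.0, 0.3, 5.0], [25.0, 1.0, 50.0], size=(1000, 3))
y = 0.8 * X[:, 0] - 5.0 * X[:, 1] + 0.15 * X[:, 2] + rng.normal(0, 0.3, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The abstract compares normalization methods; try min-max and z-score.
for scaler in (MinMaxScaler(), StandardScaler()):
    Xs_tr = scaler.fit_transform(X_tr)
    Xs_te = scaler.transform(X_te)
    ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    ann.fit(Xs_tr, y_tr)
    print(type(scaler).__name__, "MAE:",
          round(mean_absolute_error(y_te, ann.predict(Xs_te)), 3))
```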

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.163-177, 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less privacy-sensitive and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status is defined as a redefinition of part of the user's interaction behavior: whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of data collected from different sensors, normalization is performed for each x, y, and z axis of the sensor data, and sequence data are generated with a sliding window. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, so as to preserve the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross-entropy, and the model's weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (ADAM) optimizer and a mini-batch size of 128, with dropout applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate is 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. We will also study transfer learning methods that enable trained models, tailored to the training data, to transfer to evaluation data that follows a different distribution. We expect to obtain a model with robust recognition performance against changes in the data that were not considered at the training stage.
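
The architecture and training settings spelled out in this abstract map closely onto a PyTorch implementation. The sketch below follows the stated design (three convolutional layers with no pooling, a two-layer LSTM with 128 cells, dropout on the LSTM inputs, cross-entropy loss, ADAM at 0.001 with 0.99 exponential decay, mini-batch 128); the channel counts, kernel sizes, and window length are assumptions:

```python
# A minimal PyTorch sketch of the CNN-LSTM classifier described above.
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    def __init__(self, n_channels=9, n_classes=2):  # 9 = 3 sensors x (x, y, z)
        super().__init__()
        # Conv1d over the time axis; no pooling, preserving temporal length.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.dropout = nn.Dropout(0.5)          # dropout on the LSTM inputs
        self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)     # softmax is folded into the loss

    def forward(self, x):                       # x: (batch, time, channels)
        h = self.cnn(x.transpose(1, 2))         # -> (batch, 64, time)
        h = self.dropout(h.transpose(1, 2))     # -> (batch, time, 64)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])              # logits from the last time step

model = AccompanyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # ADAM, lr 0.001
# Exponential decay by 0.99 at the end of each epoch, as in the abstract.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
criterion = nn.CrossEntropyLoss()               # cross-entropy loss

x = torch.randn(128, 100, 9)                    # mini-batch of 128 sliding windows
loss = criterion(model(x), torch.randint(0, 2, (128,)))
loss.backward(); optimizer.step(); scheduler.step()
```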

A Study of CNN-based Super-Resolution Method for Remote Sensing Image (원격 탐사 영상을 활용한 CNN 기반의 초해상화 기법 연구)

  • Choi, Yeonju;Kim, Minsik;Kim, Yongwoo;Han, Sanghyuck
    • Korean Journal of Remote Sensing, v.36 no.3, pp.449-460, 2020
  • Super-resolution is a technique used to reconstruct a low-resolution image into a high-resolution one. Recently, deep learning-based super-resolution has become the mainstream, and these methods are widely applied in the remote sensing field. In this paper, we propose a super-resolution method based on the deep back-projection network model to improve satellite image resolution by a factor of four. In the process, we customize the loss function with an edge loss to recover more detailed features at object boundaries, and we improve the stability of model training using a generative adversarial network based on the Wasserstein distance loss. We also apply a detail-preserving image down-scaling method to enhance the naturalness of the training output, and we include modified residual learning with a panchromatic feature in the final step of the training process. As a result, our method is able to reconstruct fine features and high-frequency information. Comparing our results with those of other methods, we show that the proposed super-resolution method improves the sharpness and clarity of WorldView-3 and KOMPSAT-2 images.
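
The edge-loss customization mentioned in the abstract can be sketched as a fixed-gradient-filter term added to a pixel-wise loss. The Sobel-based edge map and the 0.1 weight below are assumptions, not the paper's exact formulation:

```python
# A minimal sketch of adding an edge loss to a pixel-wise loss for
# super-resolution training; the formulation here is illustrative.
import torch
import torch.nn.functional as F

def edge_map(img):
    """Per-channel gradient magnitude via fixed Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    c = img.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=c)   # horizontal gradients
    gy = F.conv2d(img, ky, padding=1, groups=c)   # vertical gradients
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def sr_loss(sr, hr, edge_weight=0.1):
    pixel = F.l1_loss(sr, hr)                      # reconstruction term
    edge = F.l1_loss(edge_map(sr), edge_map(hr))   # sharpens object boundaries
    return pixel + edge_weight * edge

sr = torch.rand(4, 3, 256, 256, requires_grad=True)  # network output
hr = torch.rand(4, 3, 256, 256)                      # ground-truth patch
sr_loss(sr, hr).backward()
```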

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering, v.23 no.3, pp.351-360, 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals consisting of image, landmark, and audio data in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning that exploit the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is proposed for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep learning mechanism robust to those specific emotions. Finally, so-called emotion adaptive fusion is applied to enable synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised and semi-supervised learning networks. On the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
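
The paper's emotion adaptive fusion is not detailed here, but the general idea of per-emotion weighted score fusion can be sketched as follows; the three-class setup and the weights are purely illustrative:

```python
# A generic weighted score-fusion sketch in the spirit of the
# "emotion adaptive fusion" described above; the per-emotion weights
# are hypothetical, not the paper's learned values.
import numpy as np

def fuse(probs_list, weights):
    """probs_list: one (n_classes,) softmax vector per network;
    weights: (n_networks, n_classes) per-emotion reliability weights."""
    probs = np.stack(probs_list)            # (n_networks, n_classes)
    fused = (weights * probs).sum(axis=0)   # emotion-wise weighted sum
    return fused / fused.sum()              # renormalize to a distribution

image_net = np.array([0.6, 0.2, 0.2])       # e.g. angry / happy / sad
landmark_net = np.array([0.3, 0.5, 0.2])
audio_net = np.array([0.1, 0.1, 0.8])       # audio strong for specific emotions
weights = np.array([[1.0, 1.0, 0.5],
                    [0.8, 1.0, 0.5],
                    [0.2, 0.3, 2.0]])        # up-weight audio for "sad"
print(fuse([image_net, landmark_net, audio_net], weights))
```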

CNN-based Shadow Detection Method using Height map in 3D Virtual City Model (3차원 가상도시 모델에서 높이맵을 이용한 CNN 기반의 그림자 탐지방법)

  • Yoon, Hee Jin;Kim, Ju Wan;Jang, In Sung;Lee, Byung-Dai;Kim, Nam-Gi
    • Journal of Internet Computing and Services, v.20 no.6, pp.55-63, 2019
  • Recently, the use of real-world image data has been increasing to express realistic virtual environments in various application fields such as education, manufacturing, and construction. In particular, with growing interest in digital twins such as smart cities, realistic 3D urban models are being built from real-world images such as aerial imagery. However, captured aerial images include shadows cast by the sun, and a 3D city model containing those shadows presents distorted information to the user. Many studies have attempted to remove shadows, but it is still recognized as a challenging problem. In this paper, we construct a virtual environment dataset that includes a building height map, using 3D spatial information provided by VWorld, and we propose a new shadow detection method using the height map and deep learning. According to the experimental results, the shadow detection error rate is reduced when the height map is used.
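
The core input design, stacking the building height map as an extra channel next to the RGB aerial image, can be sketched as below. The tiny fully convolutional network is illustrative and not the paper's architecture:

```python
# A minimal sketch of feeding a height map to a shadow-detection CNN
# as a fourth input channel alongside RGB; the network is illustrative.
import torch
import torch.nn as nn

class ShadowNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 4 = RGB + height
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                          # per-pixel shadow logit
        )

    def forward(self, rgb, height):
        x = torch.cat([rgb, height], dim=1)  # stack height map as a 4th channel
        return self.net(x)

rgb = torch.rand(2, 3, 128, 128)      # aerial image tiles
height = torch.rand(2, 1, 128, 128)   # building height map from 3D city data
mask_logits = ShadowNet()(rgb, height)
print(mask_logits.shape)              # torch.Size([2, 1, 128, 128])
```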

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering, v.10 no.11, pp.457-464, 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data. In particular, the performance of CNN algorithms has advanced to the point of adjusting the source data itself. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that not only does each image carry different environmental information, but the same information is also represented by different features, which may degrade the performance of an image analysis model. Therefore, in this paper, we propose a method to improve the performance of an image color constancy model based on adversarial learning that simultaneously uses image data generated in heterogeneous environments. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator', which predicts the environment in which an image was taken, and an 'Illumination Estimator', which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of angular error compared to existing methods.
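
The interaction between the 'Domain Discriminator' and the 'Illumination Estimator' can be sketched as a standard two-step adversarial update. All network sizes and the 0.1 adversarial weight below are assumptions for illustration:

```python
# A schematic PyTorch sketch of the adversarial setup described above:
# an illumination estimator trained to predict lighting while fooling a
# domain discriminator that predicts the capture environment.
import torch
import torch.nn as nn

feat_dim, n_domains = 128, 3

estimator = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
illum_head = nn.Linear(feat_dim, 3)                 # predicts an RGB illuminant
discriminator = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                              nn.Linear(64, n_domains))

opt_est = torch.optim.Adam(
    list(estimator.parameters()) + list(illum_head.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

imgs = torch.rand(16, 3, 32, 32)                    # images from mixed environments
illum_gt = torch.rand(16, 3)                        # ground-truth illuminants
domain_gt = torch.randint(0, n_domains, (16,))      # which camera/environment

# Step 1: train the discriminator to recognize the capture environment.
feats = estimator(imgs).detach()
d_loss = ce(discriminator(feats), domain_gt)
opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

# Step 2: train the estimator to predict illumination while producing
# features the discriminator cannot classify (adversarial term).
feats = estimator(imgs)
adv = -ce(discriminator(feats), domain_gt)          # maximize discriminator loss
e_loss = mse(illum_head(feats), illum_gt) + 0.1 * adv
opt_est.zero_grad(); e_loss.backward(); opt_est.step()
```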

Cloud Detection from Sentinel-2 Images Using DeepLabV3+ and Swin Transformer Models (DeepLabV3+와 Swin Transformer 모델을 이용한 Sentinel-2 영상의 구름탐지)

  • Kang, Jonggu;Park, Ganghyun;Kim, Geunah;Youn, Youjeong;Choi, Soyeon;Lee, Yangwon
    • Korean Journal of Remote Sensing, v.38 no.6_2, pp.1743-1747, 2022
  • Sentinel-2 can be used as proxy data for the Korean Compact Advanced Satellite 500-4 (CAS500-4), also known as the Agriculture and Forestry Satellite, in terms of spectral wavelengths and spatial resolution. This letter examined cloud detection, for later use with CAS500-4, based on deep learning technologies. DeepLabV3+, a traditional convolutional neural network (CNN) model, and Shifted Windows (Swin) Transformer, a state-of-the-art (SOTA) Transformer model, were compared using 22,728 images provided by the Radiant Earth Foundation (REF). Swin Transformer showed better performance, with a precision of 0.886 and a recall of 0.875, a balanced result unbiased between over- and under-estimation. Deep learning-based cloud detection is expected to become an operational module for CAS500-4 through optimization for the Korean Peninsula.
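
For reference, the precision and recall reported above reduce to simple counts over the binary cloud masks; a minimal sketch with random stand-in masks:

```python
# Precision/recall over binary cloud masks; the masks here are random
# stand-ins, not the paper's predictions or reference labels.
import numpy as np

def precision_recall(pred, truth):
    tp = np.logical_and(pred, truth).sum()    # cloud predicted and present
    fp = np.logical_and(pred, ~truth).sum()   # over-estimation
    fn = np.logical_and(~pred, truth).sum()   # under-estimation
    return tp / (tp + fp), tp / (tp + fn)

pred = np.random.default_rng(1).random((512, 512)) > 0.5   # predicted mask
truth = np.random.default_rng(2).random((512, 512)) > 0.5  # reference mask
p, r = precision_recall(pred, truth)
print(f"precision={p:.3f} recall={r:.3f}")
```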

Water Segmentation Based on Morphologic and Edge-enhanced U-Net Using Sentinel-1 SAR Images (형태학적 연산과 경계추출 학습이 강화된 U-Net을 활용한 Sentinel-1 영상 기반 수체탐지)

  • Kim, Hwisong;Kim, Duk-jin;Kim, Junwoo
    • Korean Journal of Remote Sensing, v.38 no.5_2, pp.793-810, 2022
  • Synthetic Aperture Radar (SAR) is considered suitable for near-real-time inundation monitoring. The distinctly different intensity between water and land makes SAR adequate for waterbody detection, but the intrinsic speckle noise and variable intensity of SAR images decrease detection accuracy. In this study, we propose two modules, named the 'morphology module' and the 'edge-enhanced module', which are combinations of pooling layers and convolutional layers that improve the accuracy of waterbody detection. The morphology module is composed of min-pooling and max-pooling layers, which produce the effect of a morphological transformation. The edge-enhanced module is composed of convolution layers whose weights are fixed to those of a traditional edge detection algorithm. After comparing the accuracy of various versions of each module within U-Net, we found that the optimal combination feeds conv9 with a morphology module of min-pooling followed by successive min- and max-pooling layers, together with an edge-enhanced module using the Scharr filter. This morphologic and edge-enhanced U-Net improved the F1-score by 9.81% over the original U-Net. Qualitative inspection showed that our model can detect small waterbodies and detailed water edges, which is a distinct advancement over the original U-Net.
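
Both modules are concrete enough to sketch: min-pooling can be expressed through max-pooling on a negated input, and the edge-enhanced module is a convolution whose weights are fixed to the Scharr kernel. How the outputs feed U-Net's conv9 is not reproduced here:

```python
# A minimal sketch of the morphology module and the fixed-weight
# Scharr edge module described above; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def min_pool2d(x, k=3):
    # Min-pooling expressed through max-pooling on the negated input.
    return -F.max_pool2d(-x, k, stride=1, padding=k // 2)

def morphology_module(x, k=3):
    # Erosion-like min-pooling followed by dilation-like max-pooling
    # (a morphological opening) suppresses speckle-like bright noise.
    return F.max_pool2d(min_pool2d(x, k), k, stride=1, padding=k // 2)

class ScharrEdge(nn.Module):
    """Convolution layer with weights fixed to the Scharr operator."""
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[3., 0., -3.], [10., 0., -10.], [3., 0., -3.]])
        self.register_buffer("kx", kx.view(1, 1, 3, 3))
        self.register_buffer("ky", kx.t().view(1, 1, 3, 3))

    def forward(self, x):  # x: (batch, 1, H, W) SAR intensity
        gx = F.conv2d(x, self.kx, padding=1)
        gy = F.conv2d(x, self.ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

sar = torch.rand(1, 1, 64, 64)
print(morphology_module(sar).shape, ScharrEdge()(sar).shape)
```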

A Study on the Artificial Intelligence-Based Soybean Growth Analysis Method (인공지능 기반 콩 생장분석 방법 연구)

  • Moon-Seok Jeon;Yeongtae Kim;Yuseok Jeong;Hyojun Bae;Chaewon Lee;Song Lim Kim;Inchan Choi
    • Journal of Korea Society of Industrial Information Systems, v.28 no.5, pp.1-14, 2023
  • Soybeans are one of the world's top five staple crops and a major source of plant-based protein. Because soybeans are susceptible to climate change, which can significantly impact grain production, the National Agricultural Science Institute is conducting research on crop phenotypes through growth analysis of various soybean varieties. While the process of capturing growth-progression photos of soybeans is automated, the verification, recording, and analysis of growth stages are currently done manually. In this paper, we designed and trained a YOLOv5s model to detect soybean leaf objects in image data of soybean plants, and a convolutional neural network (CNN) model to judge the unfolding status of the detected leaves. We combined the two models and implemented an algorithm that distinguishes leaf layers based on the coordinates of the detected leaves, as sketched below. The result is a program that takes time-series data of soybeans as input and performs growth analysis; it can accurately determine soybean growth stages up to the second or third compound leaf.
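
A schematic version of the two-stage pipeline, with the stock pretrained yolov5s standing in for the paper's custom-trained leaf detector and a hypothetical small CNN as the unfolding-status classifier:

```python
# A schematic sketch of the two-stage pipeline described above. The
# stock COCO-pretrained yolov5s below is a stand-in; the paper's
# leaf-detection weights and classifier are not public here.
import numpy as np
import torch
import torch.nn as nn

# Stage 1: leaf detection (stand-in for a custom-trained YOLOv5s).
detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Stage 2: a small CNN judging the unfolding status of a 64x64 leaf crop.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 2),   # folded vs. unfolded
)

img = np.zeros((640, 640, 3), dtype=np.uint8)    # blank frame as a stand-in
boxes = detector(img).xyxy[0]                    # (n, 6): x1, y1, x2, y2, conf, cls
# Detected leaves would be grouped into layers by box coordinates; each
# crop is resized to 64x64 and passed through the classifier:
status_logits = classifier(torch.rand(1, 3, 64, 64))
print(boxes.shape, status_logits.shape)
```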