• Title/Summary/Keyword: CNN Feature


시분할 CNN-LSTM 기반의 시계열 진동 데이터를 이용한 회전체 기계 설비의 이상 진단 (Anomaly Diagnosis of Rotational Machinery Using Time-Series Vibration Data Based on Time-Distributed CNN-LSTM)

  • 김민기
    • 한국멀티미디어학회논문지 / Vol. 25, No. 11 / pp.1547-1556 / 2022
  • As mechanical facilities interact with each other, the failure of one piece of equipment can affect the entire system, so abnormalities in mechanical equipment need to be detected and diagnosed quickly. This study proposes a deep learning model that can effectively diagnose abnormalities in rotating machinery. CNNs are widely used for feature extraction, and LSTMs are known to be effective at learning sequential information. In an LSTM, however, the computational cost and training time increase as the input sequence becomes longer. This study proposes a method of dividing an input segment signal into shorter sub-segment signals, feeding them sequentially to a CNN through a time-distributed scheme to extract features, and passing the resulting feature sequence to an LSTM. A fault diagnosis test was performed using vibration data collected from the motor of ventilation equipment installed at an urban railway station. The experiment showed an accuracy of 99.784% in fault diagnosis, indicating that the proposed method is effective for the fault diagnosis of rotating machinery and equipment.
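
A minimal PyTorch sketch of the time-distributed CNN-LSTM idea described above, assuming single-channel vibration segments split into eight 256-sample sub-segments; all layer sizes are illustrative rather than the paper's actual configuration:

```python
import torch
import torch.nn as nn

class TimeDistributedCNNLSTM(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # 1D CNN applied identically to every sub-segment (time-distributed)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, n_sub, sub_len)
        b, n, l = x.shape
        x = x.reshape(b * n, 1, l)           # merge batch and sub-segment axes
        feats = self.cnn(x).squeeze(-1)      # one feature vector per sub-segment
        feats = feats.reshape(b, n, -1)      # restore the sub-segment sequence
        out, _ = self.lstm(feats)            # LSTM over the CNN feature sequence
        return self.fc(out[:, -1])           # classify from the last time step

x = torch.randn(4, 8, 256)                   # 4 segments, 8 sub-segments of 256 samples
print(TimeDistributedCNNLSTM()(x).shape)     # torch.Size([4, 2])
```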

TANFIS Classifier Integrated Efficacious Assistance System for Heart Disease Prediction using CNN-MDRP

  • Bhaskaru, O.; Sreedevi, M.
    • International Journal of Computer Science & Network Security / Vol. 22, No. 10 / pp.171-176 / 2022
  • A dramatic rise in the number of people dying from heart disease has prompted efforts to identify it sooner using efficient approaches. A variety of variables, including hereditary factors, contribute to the condition. Current estimation approaches use automated diagnostic systems that fail to attain high accuracy because they include irrelevant dataset information. This paper presents an effective neural network with convolutional layers for classifying clinical data that is highly class-imbalanced. Traditional approaches rely on massive amounts of data rather than precise predictions. Data must be selected carefully to achieve earlier prediction, and partially complete data is a setback for the analysis. Feature extraction is also a major challenge in classification and prediction, since more data increases the training time of traditional machine learning classifiers. This work integrates the CNN-MDRP classifier (a convolutional neural network (CNN)-based efficient multimodal disease risk prediction) with TANFIS (tuned adaptive neuro-fuzzy inference system) for earlier, more accurate prediction. Data cleaning transforms partial records in the dataset into informative data. The TANFIS tuning parameters are then improved using a Laplace Gaussian mutation-based grasshopper and moth flame optimization approach (LGM2G). The proposed approach yields a prediction accuracy of 98.40 percent when compared to current algorithms.
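
As a loose illustration only, the fragment below shows a small convolutional classifier for class-imbalanced clinical feature vectors with weighted cross-entropy; the TANFIS classifier and the LGM2G optimization described in the abstract are not reproduced, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

n_features, n_classes = 32, 2                 # hypothetical cleaned clinical features
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_classes))
# Class weights counteract the class imbalance mentioned in the abstract.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.2, 0.8]))

x = torch.randn(8, 1, n_features)             # 8 patients, one channel of features
y = torch.randint(0, n_classes, (8,))
loss = loss_fn(model(x), y)
loss.backward()
```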

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security / Vol. 23, No. 11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In the case of autonomous vehicles in particular, efficient fusion of the data from these two sensor types is important for estimating the depth of objects as well as for classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image up-sampling theory. The LiDAR point cloud is up-sampled and converted into pixel-level depth information, which is combined with the Red-Green-Blue data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in an autonomous vehicle environment using the integrated vision and LiDAR data, and is designed to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
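
A minimal sketch of the early-fusion step, assuming the LiDAR point cloud has already been up-sampled into a dense per-pixel depth map aligned with the camera image; the network and class count are placeholders:

```python
import torch
import torch.nn as nn

rgb   = torch.rand(1, 3, 224, 224)            # camera image
depth = torch.rand(1, 1, 224, 224)            # densified LiDAR depth, same resolution
fused = torch.cat([rgb, depth], dim=1)        # 4-channel RGB-D input

classifier = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10))                        # 10 hypothetical object classes
print(classifier(fused).shape)                # torch.Size([1, 10])
```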

통합 CNN, LSTM, 및 BERT 모델 기반의 음성 및 텍스트 다중 모달 감정 인식 연구 (Enhancing Multimodal Emotion Recognition in Speech and Text with Integrated CNN, LSTM, and BERT Models)

  • 에드워드 카야디; 한스 나타니엘 하디 수실로; 송미화
    • 문화기술의 융합 / Vol. 10, No. 1 / pp.617-623 / 2024
  • The relationship between language and emotion is complex, and identifying emotions from our speech is recognized as an important challenge. This study aims to address this challenge by using feature engineering to identify emotions in spoken language through a multimodal classification task that includes both speech and text data. Two classifiers, Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), were integrated with a BERT-based pre-trained model and evaluated. The evaluation covers various performance metrics (accuracy, F-score, precision, and recall) across a range of experimental settings. The results demonstrate the strong ability of both models to accurately identify emotions in both text and speech data.
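
A hedged sketch of one possible fusion head for this setup: a 1D CNN and an LSTM encode the audio features, and the result is concatenated with a pre-computed 768-dimensional BERT sentence embedding; the feature sizes and the seven emotion classes are assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

class SpeechTextFusion(nn.Module):
    def __init__(self, n_mfcc=40, n_emotions=7):
        super().__init__()
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2), nn.ReLU())
        self.audio_lstm = nn.LSTM(64, 64, batch_first=True)
        self.classifier = nn.Linear(64 + 768, n_emotions)

    def forward(self, mfcc, bert_emb):        # mfcc: (B, n_mfcc, T), bert_emb: (B, 768)
        a = self.audio_cnn(mfcc)              # (B, 64, T)
        _, (h, _) = self.audio_lstm(a.transpose(1, 2))
        fused = torch.cat([h[-1], bert_emb], dim=1)   # speech + text features
        return self.classifier(fused)

model = SpeechTextFusion()
logits = model(torch.randn(2, 40, 100), torch.randn(2, 768))
print(logits.shape)                           # torch.Size([2, 7])
```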

음성-영상 특징 추출 멀티모달 모델을 이용한 감정 인식 모델 개발 (Development of Emotion Recognition Model Using Audio-video Feature Extraction Multimodal Model)

  • 김종구; 권장우
    • 융합신호처리학회논문지 / Vol. 24, No. 4 / pp.221-228 / 2023
  • Physical and mental changes caused by emotions can affect various behaviors such as driving and learning. Recognizing these emotions is therefore an important task, since it can be used in various industries, for example recognizing and controlling dangerous emotional states while driving. In this paper, we conducted emotion recognition research by implementing a multimodal model that recognizes emotions using both speech and video data, which belong to different domains. Using the RAVDESS dataset, audio was extracted from the video data; audio features were then extracted with a 2D-CNN-based model, and video features were extracted with a SlowFast feature extractor. The proposed multimodal model performs emotion recognition by integrating the feature vectors of the audio and video data. In addition, multimodal models were implemented using methods commonly employed for multimodal fusion, namely combining the output scores of each model and voting, and their performance was compared with that of the proposed method.
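
The fragment below sketches, under assumed feature sizes, the two fusion strategies the abstract contrasts: feature-level concatenation of the audio and video vectors versus late (score-level) fusion:

```python
import torch
import torch.nn as nn

audio_feat = torch.randn(4, 128)   # e.g. from a 2D-CNN over spectrograms
video_feat = torch.randn(4, 256)   # e.g. from a SlowFast feature extractor
n_emotions = 8

# (1) Feature-level fusion: concatenate, then classify jointly.
joint_head = nn.Linear(128 + 256, n_emotions)
joint_logits = joint_head(torch.cat([audio_feat, video_feat], dim=1))

# (2) Late fusion: independent heads, then average the softmax scores.
audio_head, video_head = nn.Linear(128, n_emotions), nn.Linear(256, n_emotions)
late_scores = (audio_head(audio_feat).softmax(-1) +
               video_head(video_feat).softmax(-1)) / 2
print(joint_logits.shape, late_scores.shape)   # both (4, 8)
```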

Development of a Hybrid Deep-Learning Model for the Human Activity Recognition based on the Wristband Accelerometer Signals

  • Jeong, Seungmin; Oh, Dongik
    • 인터넷정보학회논문지 / Vol. 22, No. 3 / pp.9-16 / 2021
  • This study aims to develop a human activity recognition (HAR) system as a Deep-Learning (DL) classification model that distinguishes various human activities. We rely solely on the signals from a wristband accelerometer worn by a person, for the user's convenience. Three-axis sequential acceleration signals are gathered within a predefined time-window slice and used as input to the classification system. We are particularly interested in developing a Deep-Learning model that can outperform conventional machine learning classification performance. A total of 13 activities from laboratory experiments are used for the initial performance comparison. We improved classification performance using a Convolutional Neural Network (CNN) combined with auto-encoder feature reduction and parameter tuning. With various publicly available HAR datasets, we could also achieve significant improvement in HAR classification. Our CNN model is also compared against a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) to demonstrate its superiority. Notably, our model could distinguish both general activities and near-identical activities, such as sitting down on a chair versus on the floor, with almost perfect classification accuracy.
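
A rough sketch of a CNN feature extractor combined with auto-encoder feature reduction for 3-axis accelerometer windows; the window length, layer sizes, and the 13 activity classes are taken loosely from the abstract and are not the authors' exact model:

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(                                  # feature extractor
    nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten())            # -> 64-dim feature
encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU()) # feature reduction
decoder = nn.Linear(16, 64)                           # reconstruction branch
classifier = nn.Linear(16, 13)                        # 13 activities

x = torch.randn(8, 3, 128)                            # 8 windows, 3 axes, 128 samples
feat = cnn(x)
code = encoder(feat)
recon_loss = nn.functional.mse_loss(decoder(code), feat)  # auto-encoder objective
logits = classifier(code)                                  # activity prediction
print(logits.shape)                                        # torch.Size([8, 13])
```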

Analyzing performance of time series classification using STFT and time series imaging algorithms

  • Sung-Kyu Hong; Sang-Chul Kim
    • 한국컴퓨터정보학회논문지 / Vol. 28, No. 4 / pp.1-11 / 2023
  • This paper analyzes time-series classification performance using convolutional neural networks instead of recurrent neural networks. The time-series classification (TSC) community has traditional time-series imaging algorithms such as the Gramian Angular Field (GAF), Markov Transition Field (MTF), and Recurrence Plot (RP). The experiments evaluate the performance of a convolutional neural network while tuning the hyperparameters required by each imaging algorithm. When performance is evaluated on the GunPoint dataset from the UCR archive, the Short-Time Fourier Transform (STFT) algorithm proposed in this paper achieves higher accuracy than the existing algorithms once optimized hyperparameters are found, and it has the advantage that the size of the feature-map image can be adjusted dynamically. GAF also shows a high accuracy of 98-99%, but it has the drawback that the feature-map image is large because its size cannot be adjusted dynamically.
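
A small sketch of turning a univariate series into an STFT image for a CNN, where nperseg/noverlap are the hyperparameters that control the feature-map size, the flexibility the abstract highlights; the network is a placeholder:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

series = np.random.randn(150)                       # e.g. one GunPoint-length series
f, t, Z = stft(series, nperseg=32, noverlap=24)     # complex spectrogram
img = torch.tensor(np.abs(Z), dtype=torch.float32)  # magnitude image (freq x time)
img = img.unsqueeze(0).unsqueeze(0)                 # (batch, channel, H, W)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2))                                # GunPoint has 2 classes
print(cnn(img).shape)                               # torch.Size([1, 2])
```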

감정 인식을 위해 CNN을 사용한 최적화된 패치 특징 추출 (Optimized patch feature extraction using CNN for emotion recognition)

  • 하이더 이르판; 김애라; 이귀상; 김수형
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2023년도 춘계학술발표대회 / pp.510-512 / 2023
  • To enhance a model's capability for detecting facial expressions, this research suggests a pipeline that makes use of a GradCAM component. The pipeline consists of a patching module and a pseudo-labeling module. The patching component takes the original face image and divides it into four equal parts, each of which is then input into a 2D convolutional layer to produce a feature vector. In the pseudo-labeling module, each image segment is assigned a weight token using GradCAM, and this token is merged with the feature vector using principal component analysis. A convolutional neural network based on a transfer learning technique is then utilized to extract the deep features. The technique was applied to the public MMI dataset and achieved a validation accuracy of 96.06%, showing the effectiveness of the method.
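
A partial sketch of the patching module only: the face image is split into four equal quadrants and each quadrant passes through a shared convolutional encoder to produce a feature vector; the GradCAM weighting and PCA-based merging of the pseudo-labeling module are not reproduced:

```python
import torch
import torch.nn as nn

face = torch.rand(1, 3, 224, 224)                     # one RGB face image
h, w = face.shape[-2] // 2, face.shape[-1] // 2
patches = [face[..., i*h:(i+1)*h, j*w:(j+1)*w]        # four equal quadrants
           for i in range(2) for j in range(2)]

patch_encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())            # one vector per patch

patch_features = [patch_encoder(p) for p in patches]  # four (1, 16) vectors
print(torch.cat(patch_features, dim=1).shape)         # torch.Size([1, 64])
```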

Enhancing Wind Speed and Wind Power Forecasting Using Shape-Wise Feature Engineering: A Novel Approach for Improved Accuracy and Robustness

  • Mulomba Mukendi Christian; Yun Seon Kim; Hyebong Choi; Jaeyoung Lee; SongHee You
    • International Journal of Advanced Culture Technology / Vol. 11, No. 4 / pp.393-405 / 2023
  • Accurate prediction of wind speed and power is vital for enhancing the efficiency of wind energy systems. Numerous solutions have been implemented to date, demonstrating their potential to improve forecasting. Among these, deep learning is perceived as a revolutionary approach in the field. However, despite their effectiveness, the noise present in the collected data remains a significant challenge, as it can diminish the performance of these algorithms and lead to inaccurate predictions. In response, this study explores a novel feature engineering approach that alters the data input shape in both Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) and autoregressive models for various forecasting horizons. The results reveal substantial improvements in model resilience against noise resulting from step increases in the data, with the approach achieving 83% accuracy in predicting unseen data up to 24 steps ahead. Furthermore, the method consistently provides high accuracy for short-, mid-, and long-term forecasts, outperforming the individual models. These findings pave the way for further research on noise-reduction strategies at different forecasting horizons through shape-wise feature engineering.
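
A hedged illustration of reshaping a forecasting window so that a time-distributed CNN can summarize sub-sequences before an LSTM; the window sizes are assumptions, not the paper's exact shape-wise configuration:

```python
import numpy as np

series = np.random.rand(1000)            # e.g. hourly wind-speed measurements
lookback, n_sub = 48, 4                  # 48-step window split into 4 sub-sequences

windows, targets = [], []
for i in range(len(series) - lookback):
    windows.append(series[i:i + lookback])     # model input: past 48 steps
    targets.append(series[i + lookback])       # forecast target: next step
X = np.array(windows).reshape(-1, n_sub, lookback // n_sub, 1)  # (N, 4, 12, 1)
y = np.array(targets)
print(X.shape, y.shape)                  # (952, 4, 12, 1) (952,)
```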

채널 강조와 공간 강조의 결합을 이용한 딥 러닝 기반의 초해상도 방법 (Deep Learning-based Super Resolution Method Using Combination of Channel Attention and Spatial Attention)

  • 이동우; 이상훈; 한현호
    • 한국융합학회논문지 / Vol. 11, No. 12 / pp.15-22 / 2020
  • This paper proposes a deep learning-based super-resolution method that combines channel attention and spatial attention. In the super-resolution process, it is important to restore high-frequency components such as texture and detail, where neighboring pixels change strongly. We propose a super-resolution method using feature attention that combines channel attention and spatial attention. Existing CNN (Convolutional Neural Network)-based super-resolution methods have difficulty training deep networks and insufficiently emphasize high-frequency components, so edges become blurred or distorted. To solve this problem, we used an attention block that combines channel attention and spatial attention with skip connections, together with residual blocks. The enhanced feature maps extracted by this method are expanded through sub-pixel convolution to perform super-resolution. Compared with the existing SRCNN, PSNR improved by about 5% and SSIM by about 3%, and compared with VDSR, PSNR improved by about 2% and SSIM by about 1%.
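
A rough PyTorch sketch of the building blocks named in the abstract: a squeeze-and-excitation style channel-attention step, a spatial-attention step, a residual skip connection, and sub-pixel convolution (PixelShuffle) for upscaling; the channel counts and the x2 scale factor are assumptions:

```python
import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.channel_att = nn.Sequential(                 # channel attention
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // 8, 1), nn.ReLU(),
            nn.Conv2d(ch // 8, ch, 1), nn.Sigmoid())
        self.spatial_att = nn.Sequential(                 # spatial attention
            nn.Conv2d(ch, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        f = self.conv(x)
        f = f * self.channel_att(f)                       # weight channels
        f = f * self.spatial_att(f)                       # weight positions
        return x + f                                      # residual skip connection

upsample = nn.Sequential(nn.Conv2d(64, 64 * 4, 3, padding=1),
                         nn.PixelShuffle(2))              # x2 sub-pixel convolution

feat = torch.randn(1, 64, 32, 32)
out = upsample(AttentionResidualBlock()(feat))
print(out.shape)                                          # torch.Size([1, 64, 64, 64])
```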