• Title/Summary/Keyword: 1D Convolutional Neural Network

Search Results: 77

Verified Deep Learning-based Model Research for Improved Uniformity of Sputtered Metal Thin Films (스퍼터 금속 박막 균일도 예측을 위한 딥러닝 기반 모델 검증 연구)

  • Eun Ji Lee;Young Joon Yoo;Chang Woo Byun;Jin Pyung Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.1
    • /
    • pp.113-117
    • /
    • 2023
  • As sputter equipment becomes more complex, it becomes increasingly difficult to understand the parameters that affect the thickness uniformity of metal thin films deposited by sputtering. To address this issue, we verified deep learning models that can capture such complex relationships. Specifically, we trained the models to predict the heights of 36 magnets from the film thickness of the material, using Support Vector Machine (SVM), Multilayer Perceptron (MLP), 1D Convolutional Neural Network (1D-CNN), and 2D Convolutional Neural Network (2D-CNN) algorithms. After evaluating each model, we found that the MLP model performed best, especially when the dataset was constructed regardless of the thin-film material. In conclusion, our study suggests that a deep learning model can infer the sputter equipment source from film thickness data, making it easier to understand the relationship between film thickness and the sputter equipment.

Prediction of Ship Travel Time in Harbour using 1D-Convolutional Neural Network (1D-CNN을 이용한 항만내 선박 이동시간 예측)

  • Sang-Lok Yoo;Kwang-Il Ki;Cho-Young Jung
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.275-276
    • /
    • 2022
  • VTS operators instruct ships to wait before entering or departing so that traffic in ports with narrow routes sails one-way, preventing collision accidents. Currently, these instructions are not based on scientific or statistical data, so they vary significantly with the individual capability of each VTS operator. Accordingly, this study built a 1D convolutional neural network model from collected ship and weather data to predict the exact travel time for ships entering and departing the port. The proposed model improved on other ensemble machine learning models by more than 4.5%. Since the time required for a vessel to enter and depart can be predicted in various situations, this study is expected to help VTS operators provide accurate information to vessels and determine the waiting order.
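The core operation of such a 1D-CNN is a kernel sliding over a sequential feature vector. A minimal, hypothetical sketch in NumPy (the actual model stacks many such filters with learned weights; the toy sequence and kernel below are illustrative only):

```python
import numpy as np

def conv1d(x, kernel, bias=0.0):
    """Valid-mode 1D convolution (cross-correlation, as used in CNN layers)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) + bias
                     for i in range(len(x) - k + 1)])

# Toy sequence standing in for time-ordered ship/weather features.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([0.5, 0.5])  # one filter (here: a simple moving average)
print(conv1d(x, w))  # [1.5 2.5 3.5 4.5]
```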

DeepAct: A Deep Neural Network Model for Activity Detection in Untrimmed Videos

  • Song, Yeongtaek;Kim, Incheol
    • Journal of Information Processing Systems
    • /
    • v.14 no.1
    • /
    • pp.150-161
    • /
    • 2018
  • We propose a novel deep neural network model for detecting human activities in untrimmed videos. The process of human activity detection in a video involves two steps: a step to extract features that are effective in recognizing human activities in a long untrimmed video, followed by a step to detect human activities from those extracted features. To extract the rich features from video segments that could express unique patterns for each activity, we employ two different convolutional neural network models, C3D and I-ResNet. For detecting human activities from the sequence of extracted feature vectors, we use BLSTM, a bi-directional recurrent neural network model. By conducting experiments with ActivityNet 200, a large-scale benchmark dataset, we show the high performance of the proposed DeepAct model.

Motion generation using Center of Mass (무게중심을 활용한 모션 생성 기술)

  • Park, Geuntae;Sohn, Chae Jun;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.2
    • /
    • pp.11-19
    • /
    • 2020
  • When a character's pose changes, its center of mass (COM) also changes. The change of COM has distinctive patterns corresponding to various motion types such as walking, running, or sitting, so the motion type can be predicted from COM movement. We propose a motion generator that uses the character's center-of-mass information. This generator can produce various motions without annotated action-type labels, so datasets for training and inference can be generated fully automatically. Our neural network model takes the character's motion history and its center-of-mass information as inputs and generates a full-body pose for the current frame; it is trained as a simple convolutional neural network (CNN) that performs 1D convolution to handle time-series motion data.

Permeability Prediction of Gas Diffusion Layers for PEMFC Using Three-Dimensional Convolutional Neural Networks and Morphological Features Extracted from X-ray Tomography Images (삼차원 합성곱 신경망과 X선 단층 영상에서 추출한 형태학적 특징을 이용한 PEMFC용 가스확산층의 투과도 예측)

  • Hangil You;Gun Jin Yun
    • Composites Research
    • /
    • v.37 no.1
    • /
    • pp.40-45
    • /
    • 2024
  • In this research, we introduce a novel approach that employs a 3D convolutional neural network (CNN) model to predict the permeability of Gas Diffusion Layers (GDLs). For training the model, we create an artificial dataset of GDL representative volume elements (RVEs) by extracting morphological characteristics from actual GDL images obtained through X-ray tomography. These morphological attributes comprise statistical distributions of porosity, fiber orientation, and fiber diameter. Subsequently, a permeability analysis using the Lattice Boltzmann Method (LBM) is conducted on a collection of 10,800 RVEs. The 3D CNN model, trained on this artificial dataset, predicts the permeability of actual GDLs well.

Distance Estimation Using Convolutional Neural Network in UWB Systems (UWB 시스템에서 합성곱 신경망을 이용한 거리 추정)

  • Nam, Gyeong-Mo;Jung, Tae-Yun;Jung, Sunghun;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.10
    • /
    • pp.1290-1297
    • /
    • 2019
  • This paper proposes a distance estimation technique for ultra-wideband (UWB) systems using a convolutional neural network (CNN). To estimate the distance between the transmitter and the receiver, a one-dimensional vector consisting of the magnitudes of the received samples is reshaped into a two-dimensional matrix, and the distance is estimated from this matrix by a CNN regressor. The received signals for CNN training are generated with the IEEE 802.15.4a UWB channel model, and the CNN model is trained on them. The received signals for testing are then obtained from field experiments in indoor environments, and the distance estimation performance is verified. The proposed technique is also compared with an existing threshold-based method. According to the results, the proposed CNN-based technique is superior to the conventional method; specifically, it shows a 0.6 m root mean square error (RMSE) at a distance of 10 m, while the conventional technique shows a much worse 1.6 m RMSE.
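The reshape step described above is a simple fold of the sample vector into a matrix. A minimal sketch (the vector length of 1,024 and the 32×32 target shape are assumptions for illustration, not the paper's actual dimensions):

```python
import numpy as np

# Stand-in for the magnitudes of the received UWB samples
# (length and target shape are hypothetical).
magnitudes = np.abs(np.random.randn(1024))

# Fold the 1D vector into a 2D matrix so a 2D CNN regressor can consume it.
image = magnitudes.reshape(32, 32)
print(image.shape)  # (32, 32)
```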

Improvement of Vocal Detection Accuracy Using Convolutional Neural Networks

  • You, Shingchern D.;Liu, Chien-Hung;Lin, Jia-Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.2
    • /
    • pp.729-748
    • /
    • 2021
  • Vocal detection is one of the fundamental steps in music information retrieval. Typically, the detection process consists of feature extraction and classification steps. Recently, neural networks have been shown to outperform traditional classifiers. In this paper, we report our study on how to improve detection accuracy further by carefully choosing the parameters of the deep network model. Through experiments, we conclude that a feature-classifier model is still better than an end-to-end model. The recommended model uses a spectrogram as the input plane, with an 18-layer convolutional neural network (CNN) as the classifier. With this arrangement, the proposed model improves the accuracy reported in the existing literature from 91.8% to 94.1% on the Jamendo dataset. Since the baseline accuracy already exceeds 90%, a 2.3% improvement is difficult to achieve and therefore valuable. If even higher accuracy is required, ensemble learning may be used; the recommended setting is a majority vote over seven proposed models, which increases accuracy by about a further 1.1% on the Jamendo dataset.
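The ensemble step is a plain majority vote over the models' decisions. A minimal sketch (the function name and the binary vocal/non-vocal labels are illustrative, not taken from the paper):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label that most classifiers agree on."""
    return Counter(predictions).most_common(1)[0][0]

# Seven hypothetical model outputs for one frame: 1 = vocal, 0 = non-vocal.
print(majority_vote([1, 1, 0, 1, 0, 1, 1]))  # 1
```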

Feature Visualization and Error Rate Using Feature Map by Convolutional Neural Networks (CNN 기반 특징맵 사용에 따른 특징점 가시화와 에러율)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.24 no.1
    • /
    • pp.1-7
    • /
    • 2021
  • In this paper, we present the theoretical background of, and an experimental basis for, the robustness of convolutional neural networks in AI-based object recognition. Experiments visualize the weight filters and feature maps of each layer to determine what characteristics the CNN generates automatically. We examine how the weight filters and feature maps relate to the learning error and the identification error rate, and present the trends of both. The weight filters and feature maps are shown as experimental results, and the automatically generated features yield error-rate results demonstrating robustness to geometric changes such as translation and rotation.

A Proposal of Sensor-based Time Series Classification Model using Explainable Convolutional Neural Network

  • Jang, Youngjun;Kim, Jiho;Lee, Hongchul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.5
    • /
    • pp.55-67
    • /
    • 2022
  • Sensor data can support fault diagnosis for equipment, but the causes behind fault results are often not explained. In this study, we propose an explainable convolutional neural network framework for sensor-based time series classification. We used a sensor-based time series dataset acquired from vehicles equipped with sensors, the Wafer dataset acquired from a manufacturing process, and a Cycle Signal dataset acquired from real-world mechanical equipment; for data augmentation, scaling and jittering were applied when training our deep learning models. Our classification models are convolutional neural network based: FCN, 1D-CNN, and ResNet, which we evaluate against one another. The experimental results show that ResNet performs best for time series classification, with accuracy and F1 score reaching 95%, a 3% improvement over the previous study. Furthermore, we apply XAI methods, Class Activation Map and Layer Visualization, to interpret the results; these methods visualize the time series intervals that matter most for classifying the sensor data.
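For a 1D time-series model, a Class Activation Map reduces to weighting each channel of the last convolutional feature map by the classifier weight for the target class and summing over channels, giving a per-time-step importance score. A toy sketch (shapes and values are made up for illustration):

```python
import numpy as np

def cam_1d(feature_maps, class_weights):
    """1D Class Activation Map: per-time-step importance for one class.

    feature_maps: (channels, time) activations of the last conv layer.
    class_weights: (channels,) weights of the target class in the final
    fully connected layer (after global average pooling).
    """
    return class_weights @ feature_maps

fmaps = np.array([[0.1, 0.9, 0.2],
                  [0.0, 0.8, 0.1]])  # 2 channels x 3 time steps (toy)
w = np.array([1.0, 2.0])             # weights for the predicted class
print(cam_1d(fmaps, w))              # [0.1 2.5 0.4]
```

High values in the output mark the time steps that drove the classification.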

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.1076-1094
    • /
    • 2022
  • Technology for emotion recognition is an essential part of human personality analysis. Existing methods for defining personality characteristics relied on surveys, yet communication often cannot take place without considering emotions, so emotion recognition technology is an essential element of communication and has been adopted in many other fields as well. A person's emotions are revealed in various ways, typically including facial, speech, and biometric responses, so emotions can be recognized from images, voice signals, and physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions; this study employed two sensor types. First, the existing binary arousal-valence method was subdivided into four levels per axis to classify emotions in more detail, extending the current High/Low scheme to multiple levels. Signal characteristics were then extracted using a 1D Convolutional Neural Network (CNN) and sixteen emotions were classified. Although CNNs are typically used to learn 2D images, 1D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
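Subdividing each arousal/valence axis into four levels yields 4 × 4 = 16 classes. How the two axes combine into a single class index can be sketched as follows (this encoding is an assumption for illustration, not the paper's actual label scheme):

```python
def emotion_class(arousal_level, valence_level, n_levels=4):
    """Map an (arousal, valence) level pair to one of n_levels**2 class indices."""
    assert 0 <= arousal_level < n_levels and 0 <= valence_level < n_levels
    return arousal_level * n_levels + valence_level

print(emotion_class(3, 1))  # 13
```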