• Title/Summary/Keyword: Convolutional Neural Network Model (합성곱 신경망 모델)


Arrhythmia Classification using GAN-based Over-Sampling Method and Combination Model of CNN-BLSTM (GAN 오버샘플링 기법과 CNN-BLSTM 결합 모델을 이용한 부정맥 분류)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.10 / pp.1490-1499 / 2022
  • Arrhythmia is a condition in which the heart beats with an irregular rhythm or an abnormal rate; early diagnosis and management are very important because it can cause stroke, cardiac arrest, or even death. In this paper, we propose arrhythmia classification using a hybrid CNN-BLSTM combination model. For this purpose, QRS features are detected from the noise-removed signal through pre-processing, and a single-beat segment is extracted. A GAN-based oversampling technique is applied to solve the data imbalance problem. The model consists of CNN layers that precisely extract arrhythmia patterns, whose outputs are used as the input of the BLSTM. The weights were learned through deep learning, and the learned model was evaluated on validation data. To evaluate the performance of the proposed method, classification accuracy, precision, recall, and F1-score were compared using the MIT-BIH arrhythmia database. The achieved scores were 99.30%, 98.70%, 97.50%, and 98.06% for accuracy, precision, recall, and F1-score, respectively.
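As a rough illustration of the architecture described in this abstract (not the authors' exact configuration), a CNN front-end followed by a bidirectional LSTM over single-beat ECG segments could be sketched in PyTorch as follows; the layer sizes, segment length, and class count are assumptions.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Toy CNN + bidirectional LSTM classifier for 1-D ECG beat segments."""
    def __init__(self, n_classes=5, seg_len=256):
        super().__init__()
        # Convolutional front-end extracts local beat-morphology features.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.BatchNorm1d(32), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.BatchNorm1d(64), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM models temporal dependencies across the segment.
        self.blstm = nn.LSTM(input_size=64, hidden_size=64,
                             batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, x):              # x: (batch, 1, seg_len)
        feats = self.cnn(x)            # (batch, 64, seg_len // 4)
        feats = feats.transpose(1, 2)  # (batch, time, channels) for the LSTM
        out, _ = self.blstm(feats)
        return self.fc(out[:, -1])     # classify from the last time step

# Example: a batch of 8 single-beat segments of length 256.
logits = CNNBiLSTM()(torch.randn(8, 1, 256))
print(logits.shape)  # torch.Size([8, 5])
```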

Improving Performance of Human Action Recognition on Accelerometer Data (가속도 센서 데이터 기반의 행동 인식 모델 성능 향상 기법)

  • Nam, Jung-Woo;Kim, Jin-Heon
    • Journal of IKEEE / v.24 no.2 / pp.523-528 / 2020
  • With the widespread adoption of sensor-rich mobile devices, the analysis of human activities has become more common and simpler than ever before. In this paper, we propose two deep neural networks that efficiently and accurately perform human activity recognition (HAR) using tri-axial accelerometers. In combination with powerful modern deep learning techniques such as batch normalization and LSTM networks, our model outperforms baseline approaches and establishes state-of-the-art results on the WISDM dataset.
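A minimal sketch of an accelerometer-based HAR classifier combining batch normalization with an LSTM, in the spirit of this abstract; the window length, hidden size, and the six activity classes are assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class AccelHAR(nn.Module):
    """Toy HAR model: batch-normalized tri-axial windows fed to an LSTM."""
    def __init__(self, n_classes=6, window=128):
        super().__init__()
        self.bn = nn.BatchNorm1d(3)             # normalize the x/y/z channels
        self.lstm = nn.LSTM(input_size=3, hidden_size=64,
                            num_layers=2, batch_first=True, dropout=0.3)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, window, 3) accelerometer samples
        x = self.bn(x.transpose(1, 2)).transpose(1, 2)  # BatchNorm1d expects (B, C, T)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]) # last hidden state -> activity logits

logits = AccelHAR()(torch.randn(4, 128, 3))
print(logits.shape)  # torch.Size([4, 6])
```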

A Vision Transformer Based Recommender System Using Side Information (부가 정보를 활용한 비전 트랜스포머 기반의 추천시스템)

  • Kwon, Yujin;Choi, Minseok;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.119-137 / 2022
  • Recent recommender-system studies apply various deep learning models to represent user-item interactions better. One noteworthy study is ONCF (Outer product-based Neural Collaborative Filtering), which builds a two-dimensional interaction map via the outer product and employs a CNN (Convolutional Neural Network) to learn high-order correlations from the map. However, ONCF has limitations in recommendation performance due to the problems with CNNs and the absence of side information. ONCF using a CNN has an inductive bias problem that causes poor performance on data whose distribution does not appear in the training data. This paper proposes to employ a Vision Transformer (ViT) instead of the vanilla CNN used in ONCF, because ViT has shown better results than state-of-the-art CNNs in many image classification cases. In addition, we propose a new architecture that reflects side information, which ONCF did not consider. Unlike previous studies that incorporate side information into a neural network through simple input combination methods, this study uses an independent auxiliary classifier to reflect side information more effectively in the recommender system. ONCF used a single latent vector for the user and item, but in this study, a channel is constructed using multiple vectors so that the model can learn more diverse representations and obtain an ensemble effect. The experiments showed that our deep learning model improved recommendation performance compared to ONCF.
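A rough sketch of the core idea: build ONCF's outer-product interaction map from user and item embeddings and feed it to a small Transformer encoder over patches, loosely mirroring the ViT substitution the abstract describes. The embedding dimension, patch size, and head/layer counts are assumptions, and the side-information classifier is omitted.

```python
import torch
import torch.nn as nn

class OuterProductViT(nn.Module):
    """Toy ONCF-style model: outer-product interaction map + Transformer encoder."""
    def __init__(self, n_users, n_items, dim=64, patch=8, d_model=128):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.patch = patch
        self.proj = nn.Linear(patch * patch, d_model)     # flatten each patch into a token
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.score = nn.Linear(d_model, 1)

    def forward(self, users, items):
        u, v = self.user_emb(users), self.item_emb(items)     # (B, dim) each
        m = torch.einsum('bi,bj->bij', u, v)                  # (B, dim, dim) interaction map
        p = self.patch
        # Split the map into non-overlapping p x p patches and flatten each one.
        patches = m.unfold(1, p, p).unfold(2, p, p).reshape(m.size(0), -1, p * p)
        tokens = self.encoder(self.proj(patches))             # (B, n_patches, d_model)
        return self.score(tokens.mean(dim=1)).squeeze(-1)     # pooled preference score

scores = OuterProductViT(n_users=100, n_items=200)(torch.tensor([1, 2]), torch.tensor([3, 4]))
print(scores.shape)  # torch.Size([2])
```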

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지 / v.21 no.3 / pp.129-146 / 2017
  • In an emergency such as a building fire, visually impaired and blind people are exposed to a greater level of danger than sighted people because they cannot become aware of it quickly. Current fire detection methods such as smoke detectors are slow and unreliable because they usually rely on chemical sensors to detect fire particles. By using a vision sensor instead, fire can be detected much faster, as we show in our experiments. Previous studies have applied various image processing and machine learning techniques to detect fire, but they usually do not work very well because these techniques require hand-crafted features that do not generalize to various scenarios. With the help of recent advances in deep learning, this research addresses the problem with a deep learning-based object detector that can detect fire in images from security cameras. Deep learning-based approaches learn features automatically, so they usually generalize well to various scenes. To ensure maximum capability, we applied the latest computer vision technologies, such as the YOLO detector, to this task. Considering the trade-off between recall and complexity, we introduce two convolutional neural networks with slightly different model complexity to detect fire at different recall rates. Both models detect fire at 99% average precision, but one model achieves 76% recall at 30 FPS while the other achieves 61% recall at 50 FPS. We also compare the memory consumption of the two models and demonstrate their robustness by testing on various real-world scenarios.
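The abstract describes running a YOLO-style detector over security-camera frames. The sketch below shows only the surrounding inference loop; `load_fire_detector`, the TorchScript export, and the `(boxes, scores)` output format are hypothetical placeholders, since the actual detector weights and API are not given in the abstract.

```python
import cv2
import torch

def load_fire_detector(weights_path: str):
    """Hypothetical loader: any model mapping a frame tensor to (boxes, scores)."""
    return torch.jit.load(weights_path)  # assumption: detector exported with TorchScript

def run_on_stream(source=0, weights_path="fire_detector.pt", threshold=0.5):
    detector = load_fire_detector(weights_path)
    cap = cv2.VideoCapture(source)            # webcam or security-camera stream
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # BGR uint8 frame -> normalized CHW float tensor.
        x = torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            boxes, scores = detector(x.unsqueeze(0))  # assumed output format
        if (scores > threshold).any():
            print("fire detected")            # e.g. trigger an audible alert here
    cap.release()
```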

Prediction of Stacking Angles of Fiber-reinforced Composite Materials Using Deep Learning Based on Convolutional Neural Networks (합성곱 신경망 기반의 딥러닝을 이용한 섬유 강화 복합재료의 적층 각도 예측)

  • Hyunsoo Hong;Wonki Kim;Do Yoon Jeon;Kwanho Lee;Seong Su Kim
    • Composites Research / v.36 no.1 / pp.48-52 / 2023
  • Fiber-reinforced composites have anisotropic material properties, so the mechanical properties of composite structures can vary depending on the stacking sequence. Therefore, it is essential to design the proper stacking sequence of composite structures according to the functional requirements. However, depending on the manufacturing conditions or the shape of the structure, there are many cases where the stacking angle deviates from the designed range, which can affect structural performance. Accordingly, it is important to analyze the stacking angle in order to confirm that the composite structure is correctly fabricated as designed. In this study, the stacking angle was predicted from real cross-sectional images of fiber-reinforced composites using convolutional neural network (CNN)-based deep learning. Carbon fiber-reinforced composite specimens with several stacking angles were fabricated, and their cross-sections were photographed at the micro scale using an optical microscope. A CNN-based deep learning model was trained on the cross-sectional image data of the composite specimens. As a result, the stacking angle could be predicted from actual cross-sectional images of the fiber-reinforced composite with high accuracy.
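A minimal sketch of the prediction step: a small CNN classifier over cross-sectional micrographs, with each class corresponding to one candidate stacking angle. The angle set, image size, and network depth below are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

STACKING_ANGLES = [0, 15, 30, 45, 60, 75, 90]   # assumed candidate angles (degrees)

class StackingAngleCNN(nn.Module):
    """Toy CNN that classifies a cross-sectional micrograph into a stacking angle."""
    def __init__(self, n_angles=len(STACKING_ANGLES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_angles)

    def forward(self, x):                        # x: (batch, 1, H, W) grayscale micrographs
        return self.classifier(self.features(x).flatten(1))

pred = StackingAngleCNN()(torch.randn(2, 1, 224, 224)).argmax(dim=1)
print([STACKING_ANGLES[i] for i in pred.tolist()])   # predicted angle for each image
```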

Improved Multi-modal Network Using Dilated Convolution Pyramid Pooling (팽창된 합성곱 계층 연산 풀링을 이용한 멀티 모달 네트워크 성능 향상 방법)

  • Park, Jun-Young;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.84-86 / 2018
  • With the recent advancement of technologies such as autonomous driving, a deep understanding of captured video scenes has become necessary. In particular, as machine learning technology develops, research on semantic segmentation of camera images is being actively conducted. FuseNet is a neural network model that applies semantic segmentation to objects in a scene using an encoder-decoder structure. Unlike the conventional FCN, which takes only RGB input, FuseNet also exploits depth information, implementing a multi-modal structure by element-wise summation with the feature maps extracted from the RGB information. In semantic segmentation research it is important to take the global context of objects into account, but stacking many layers deeply for this purpose has the drawback of increasing the computational load. To overcome this, the recently proposed dilated convolution operation, which departs from conventional convolution, can effectively enlarge the receptive field over an object while reducing computation. In this paper, using dilated convolution, one of the newer methodological approaches to the convolution operation, we propose an optimized way to improve the overall performance of a multi-modal network for semantic segmentation, maintaining resolution without increasing the number of parameters and without stacking layers deeper.
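The abstract describes enlarging the receptive field with dilated convolutions at several rates and fusing RGB and depth features by element-wise summation, in the spirit of FuseNet. A rough sketch of such a pyramid module is given below; the channel counts and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class DilatedPyramidPooling(nn.Module):
    """Toy pyramid of parallel dilated convolutions (ASPP-like) over a feature map."""
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; spatial resolution is preserved.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

rgb_feat = torch.randn(1, 64, 60, 80)    # features from the RGB encoder
depth_feat = torch.randn(1, 64, 60, 80)  # features from the depth encoder
fused = rgb_feat + depth_feat            # FuseNet-style element-wise summation
out = DilatedPyramidPooling()(fused)
print(out.shape)                         # torch.Size([1, 64, 60, 80])
```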


A novel Node2Vec-based 2-D image representation method for effective learning of cancer genomic data (암 유전체 데이터를 효과적으로 학습하기 위한 Node2Vec 기반의 새로운 2 차원 이미지 표현기법)

  • Choi, Jonghwan;Park, Sanghyun
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.383-386 / 2019
  • The Fourth Industrial Revolution has drawn worldwide attention to smart cities and personalized treatment related to healthy living; in particular, machine learning techniques are widely used in genome-based precision medicine research aimed at overcoming cancer, enabling prognosis prediction for cancer patients and the establishment of customized treatment strategies according to prognosis. However, the gene expression data mainly used in cancer prognosis prediction research contain about 17,000 genes but only around 200 samples, which makes it difficult to generalize neural network models for prognosis prediction. To solve this problem, this study proposes a technique that represents high-dimensional gene expression data as 2D images so that neural network models can learn them effectively. A one-dimensional gene vector of length 17,000 is mapped to a 64×64 two-dimensional image to compress the input size. Gene network data and Node2Vec were used to obtain gene coordinates on the 2D plane, and a convolutional neural network model was used to perform image-based cancer prognosis prediction. For a rigorous evaluation of the proposed technique, model selection and evaluation were carried out with nested (double) cross-validation and random search, and the results confirmed that it achieves higher prediction accuracy than the baseline multi-layer perceptron model that takes the high-dimensional gene vector as input.
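A rough sketch of the mapping step described above: given 2-D Node2Vec coordinates for each gene, expression values are written into a 64×64 grid, which a CNN then consumes. The rank-based cell assignment and collision handling here are assumptions; the paper's exact placement rule is not stated in the abstract.

```python
import numpy as np

def genes_to_image(expr, coords, size=64):
    """Map a gene-expression vector onto a size x size image using 2-D gene coordinates.

    expr   : (n_genes,) expression values for one sample
    coords : (n_genes, 2) 2-D embedding of each gene (e.g. from Node2Vec)
    """
    img = np.zeros((size, size), dtype=np.float32)
    counts = np.zeros((size, size), dtype=np.float32)
    # Rank each coordinate axis and bin the ranks into grid cells (assumed placement rule).
    cols = (np.argsort(np.argsort(coords[:, 0])) * size // len(expr)).clip(0, size - 1)
    rows = (np.argsort(np.argsort(coords[:, 1])) * size // len(expr)).clip(0, size - 1)
    for r, c, v in zip(rows, cols, expr):
        img[r, c] += v          # genes falling in the same cell are averaged
        counts[r, c] += 1
    return img / np.maximum(counts, 1)

# Example: 17,000 genes with random expression values and random 2-D coordinates.
image = genes_to_image(np.random.rand(17000), np.random.randn(17000, 2))
print(image.shape)  # (64, 64)
```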

Prediction of pathological complete response in rectal cancer using 3D tumor PET image (3차원 종양 PET 영상을 이용한 직장암 치료반응 예측)

  • Jinyu Yang;Kangsan Kim;Ui-sup Shin;Sang-Keun Woo
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.63-65 / 2023
  • In this paper, we conducted a study to predict pathological complete response after treatment in rectal cancer patients using a deep learning network based on FDG-PET images. Rectal cancer is one of the more common malignant tumors, but the probability of a pathological complete response is very low, so it is important to predict the post-treatment response and select an appropriate treatment method. Therefore, in this study, we built deep learning networks using convolutional neural network (CNN) models on FDG-PET images and predicted the treatment response of rectal cancer patients. FDG-PET images of 116 rectal cancer patients were acquired. The subjects were patients with a tumor size of 2 cm or larger, and 21 patients achieved complete response after treatment. The FDG-PET images were evaluated separately for the whole-body region and the tumor region. The deep learning networks consisted of CNN models for 2D and 3D image inputs. The trained CNN models were used to evaluate the performance of predicting complete response after treatment of rectal cancer. In the training results, the average accuracy and precision were 0.854 and 0.905, respectively, across all CNN models and image regions. In the test results, accuracy was highest for the 3D CNN model and for the network using only the tumor region. This study evaluated the performance of the deep learning networks according to the CNN input image type and the image region, and the deep learning network model is expected to help predict the treatment response of rectal cancer and support the decision on an appropriate treatment direction.
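A minimal sketch of a 3-D CNN that takes a cropped tumor PET volume and produces a binary complete-response prediction, in the spirit of the 3D model the abstract reports as best; the volume size and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class TumorPET3DCNN(nn.Module):
    """Toy 3-D CNN for binary response prediction from a tumor PET volume."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 2)   # complete response vs. residual tumor

    def forward(self, x):                    # x: (batch, 1, D, H, W) tumor crop
        return self.classifier(self.features(x).flatten(1))

logits = TumorPET3DCNN()(torch.randn(2, 1, 32, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```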


Improving Embedding Model for Triple Knowledge Graph Using Neighborliness Vector (인접성 벡터를 이용한 트리플 지식 그래프의 임베딩 모델 개선)

  • Cho, Sae-rom;Kim, Han-joon
    • The Journal of Society for e-Business Studies / v.26 no.3 / pp.67-80 / 2021
  • The node embedding technique for learning graph representations plays an important role in obtaining good-quality results in graph mining. Until now, representative node embedding techniques have been studied for homogeneous graphs, which makes it difficult to learn knowledge graphs whose edges carry distinct meanings. To resolve this problem, the conventional Triple2Vec technique builds an embedding model by learning a triple graph in which a node pair and an edge of the knowledge graph form one node. However, the Triple2Vec embedding model has limited room for performance improvement because it computes the relationship between triple nodes with a simple measure. Therefore, this paper proposes a feature extraction technique based on a graph convolutional neural network to improve the Triple2Vec embedding model. The proposed method extracts the neighborliness vector of the triple graph and learns the relationship between neighboring nodes for each node in the triple graph. We demonstrate that the embedding model applying the proposed method is superior to the existing Triple2Vec model through category classification experiments using the DBLP, DBpedia, and IMDB datasets.
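A rough sketch of a single graph-convolution step that aggregates neighbor features over a triple graph's adjacency matrix, the kind of neighborhood feature extraction (the "neighborliness vector") the abstract refers to; the normalization scheme and feature sizes are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """Toy GCN layer: normalized neighbor aggregation followed by a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Add self-loops and symmetrically normalize the adjacency matrix.
        a = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
        # Each triple node's new feature mixes in its neighbors' features.
        return torch.relu(self.linear(a_norm @ x))

# Example: 5 triple nodes with 8-dim features on a small ring-shaped adjacency.
x = torch.randn(5, 8)
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
print(GraphConvLayer(8, 16)(x, adj).shape)  # torch.Size([5, 16])
```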

A Taekwondo Poomsae Movement Classification Model Learned Under Various Conditions

  • Ju-Yeon Kim;Kyu-Cheol Cho
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.9-16 / 2023
  • Technology is increasingly being adopted in sports, such as electronic body protectors in Taekwondo competitions and VAR in soccer. In Taekwondo Poomsae, however, a person judges and guides postures by visual inspection, so judgment disputes sometimes occur at competitions. This study proposes an artificial intelligence model that can judge and evaluate Taekwondo movements more accurately. In this study, the photographed and collected data is pre-processed and then split into train, test, and validation sets. The split data is trained under each model and condition, and the resulting models are compared to identify the best-performing one. The models under each condition were compared in terms of loss, accuracy, training time, and top-n error; as a result, the model trained with ResNet50 and Adam performed best. The model presented in this study is expected to be useful in various settings such as education and competitions.
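A minimal sketch of the best-performing condition named in the abstract: fine-tuning a torchvision ResNet50 with the Adam optimizer for Poomsae movement classification. The class count, learning rate, and use of ImageNet-pretrained weights are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_MOVEMENTS = 8  # assumed number of Poomsae movement classes

# Start from an ImageNet-pretrained ResNet50 and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_MOVEMENTS)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised training step on a batch of movement images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random tensors standing in for pre-processed Poomsae frames.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_MOVEMENTS, (4,))))
```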