• Title/Summary/Keyword: 3D Convolutional Neural Networks


3D Convolutional Neural Networks based Fall Detection with Thermal Camera (열화상 카메라를 이용한 3D 컨볼루션 신경망 기반 낙상 인식)

  • Kim, Dae-Eon;Jeon, BongKyu;Kwon, Dong-Soo
    • The Journal of Korea Robotics Society, v.13 no.1, pp.45-54, 2018
  • This paper presents a vision-based fall detection system that automatically monitors and detects people's fall accidents, particularly those of elderly people or patients. For video analysis, the system must extract both spatial and temporal features so that the model captures appearance and motion information simultaneously. Our approach is based on 3-dimensional convolutional neural networks, which can learn such spatiotemporal features. In addition, we adopt a thermal camera to address several issues regarding usability, day-and-night surveillance, and privacy. We design a pan-tilt camera with two actuators to extend the range of view. Performance is evaluated on our thermal dataset, the TCL Fall Detection Dataset. The proposed model achieves 90.2% average clip accuracy, outperforming the other approaches evaluated.
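
As a rough illustration of the spatiotemporal 3D convolution idea this abstract describes, here is a minimal PyTorch sketch of a clip classifier; the single thermal input channel matches the setting, but all layer sizes are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class Fall3DCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # 3D kernels convolve over (time, height, width), so the
            # network learns motion and appearance jointly.
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # 1 channel: thermal
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):          # x: (batch, 1, frames, H, W)
        return self.classifier(self.features(x).flatten(1))

clip = torch.randn(4, 1, 16, 112, 112)   # hypothetical 16-frame thermal clips
logits = Fall3DCNN()(clip)               # (4, 2): fall / no-fall scores
```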

Convolutional Neural Network Based Multi-feature Fusion for Non-rigid 3D Model Retrieval

  • Zeng, Hui;Liu, Yanrong;Li, Siqi;Che, JianYong;Wang, Xiuqing
    • Journal of Information Processing Systems, v.14 no.1, pp.176-190, 2018
  • This paper presents a novel convolutional neural network based multi-feature fusion learning method for non-rigid 3D model retrieval, which exploits the discriminative information of the heat kernel signature (HKS) descriptor and the wave kernel signature (WKS) descriptor. First, we compute the 2D shape distributions of the two descriptors to represent the 3D model and use them as the input to the networks. We then construct separate convolutional neural networks for the HKS and WKS distributions and connect them with a multi-feature fusion layer. The fusion layer can not only exploit more discriminative characteristics of the two descriptors but also complement the correlated information between them. Furthermore, to further improve descriptive ability, a cross-connected layer is built to combine low-level features with high-level features. Extensive experiments have validated the effectiveness of the designed multi-feature fusion learning method.
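
A minimal sketch of the two-branch fusion idea, assuming the HKS and WKS shape distributions arrive as single-channel 2D maps; branch depths, feature sizes, and the class count are illustrative, and the paper's cross-connected layer is omitted here.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Two CNN branches (HKS / WKS shape distributions) joined by a
    fusion layer; all sizes are assumptions, not the paper's."""
    def __init__(self, num_classes=40):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.hks_branch = branch()
        self.wks_branch = branch()
        # Fusion layer: concatenated branch features let the classifier
        # exploit complementary information from both descriptors.
        self.fusion = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, hks, wks):   # each: (batch, 1, H, W) distribution
        fused = self.fusion(torch.cat([self.hks_branch(hks),
                                       self.wks_branch(wks)], dim=1))
        return self.classifier(fused)
```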

Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye;Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.2, pp.311-326, 2024
  • The rapid development of neural network technology has pushed big-data-driven models toward reproducing the texture effects of complex objects. Because such models are limited in complex scenes, custom template matching must be established and applied across many fields of computer vision research. Machine learning systems that depend only weakly on small, high-quality labeled sample databases and infer texture effects through deep feature connections still perform relatively poorly. A neural-network-based style transfer algorithm collects and preserves pattern data and extracts and modernizes pattern features; through this model, the texture and color of patterns are more easily rendered and displayed digitally. In this paper, following texture-effect reasoning with custom template matching, the 3D visualization of the target is converted into a 3D model. The similarity between the scene to be inferred and the user-defined template is computed from a user-defined template of multi-dimensional external feature labels. A convolutional neural network is adopted to optimize the region outside the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm accurately captures the salient target, removes more noise, and improves the visualization results. The proposed deep convolutional neural network optimization algorithm offers good speed, data accuracy, and robustness. It adapts to a wider range of task scenes, presents the vision-related information involved in image conversion, and further improves the computational efficiency and accuracy of convolutional networks, which gives it high significance for research on image information conversion.
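
The abstract leans on neural style transfer for pattern texture; as a point of reference, here is a minimal sketch of the standard Gram-matrix texture representation from that literature, not the paper's specific model.

```python
import torch

def gram_matrix(features):
    """Gram matrix of CNN feature maps: the standard texture/style
    representation used by neural style transfer."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(gen_feats, pattern_feats):
    # Match the texture statistics of a generated image to those of a
    # pattern image, both taken from the same CNN layer.
    return torch.nn.functional.mse_loss(gram_matrix(gen_feats),
                                        gram_matrix(pattern_feats))
```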

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.14 no.4, pp.314-322, 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods for skeleton-based action recognition employ spatiotemporal graph convolutional networks (GCNs) and achieve remarkable performance. However, most of them incur heavy computational complexity for robust action recognition. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of spatial and temporal GCNs. The spatial shuffle GCN contains pointwise group convolution and a part shuffle module, which enhances local and global information exchange between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance at the lowest computational cost and exceeds the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
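
The two ingredients the abstract names, pointwise group convolution and a shuffle step, can be sketched directly; the (batch, channels, frames, joints) tensor layout and the group count are assumptions.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle: lets information cross group
    boundaries after a pointwise *group* convolution."""
    b, c, t, v = x.shape                # (batch, channels, frames, joints)
    x = x.view(b, groups, c // groups, t, v)
    return x.transpose(1, 2).reshape(b, c, t, v)

class PointwiseGroupConv(nn.Module):
    """Pointwise group convolution: a 1x1 conv with groups > 1 costs
    roughly 1/groups of a plain 1x1 conv (channels must divide evenly)."""
    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1, groups=groups)
        self.groups = groups

    def forward(self, x):
        return channel_shuffle(self.conv(x), self.groups)

x = torch.randn(2, 64, 32, 25)        # e.g. 32 frames, 25 skeleton joints
y = PointwiseGroupConv(64, 64)(x)     # same shape, ~1/4 the 1x1-conv cost
```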

Effective Hand Gesture Recognition by Key Frame Selection and 3D Neural Network

  • Hoang, Nguyen Ngoc;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • Smart Media Journal, v.9 no.1, pp.23-29, 2020
  • This paper presents an approach for dynamic hand gesture recognition using an algorithm based on a 3D Convolutional Neural Network (3D_CNN), later extended to 3D Residual Networks (3D_ResNet), together with neural-network-based key frame selection. Typically, a 3D deep neural network classifies gestures from image frames randomly sampled from video data. In this work, to improve classification performance, we employ key frames, which represent the overall video, as the input to the classification network. The key frames are extracted by SegNet instead of conventional clustering algorithms for video summarization (VSUMM), which require heavy computation. By using a deep neural network, key frame selection can be performed in a real-time system. Experiments are conducted using 3D convolutional models such as 3D_CNN, Inflated 3D_CNN (I3D), and 3D_ResNet for gesture classification. Our algorithm achieves up to 97.8% classification accuracy on the Cambridge gesture dataset. The experimental results show that the proposed approach is efficient and outperforms existing methods.
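
A minimal sketch of the pipeline's shape: keep the top-scoring key frames (the scores here stand in for the SegNet-based selector, which is not reproduced) and feed them to an off-the-shelf 3D ResNet; clip sizes and the gesture-class count are assumptions.

```python
import torch
from torchvision.models.video import r3d_18

def select_key_frames(frames, scores, k=16):
    """Keep the k highest-scoring frames, in temporal order, instead of
    random sampling; `scores` is a hypothetical per-frame relevance."""
    idx = torch.topk(scores, k).indices.sort().values
    return frames[:, :, idx]             # frames: (batch, 3, T, H, W)

video = torch.randn(2, 3, 64, 112, 112)  # 64-frame clips
scores = torch.rand(64)                  # stand-in for SegNet-based scores
clip = select_key_frames(video, scores)  # (2, 3, 16, 112, 112)
model = r3d_18(num_classes=9)            # e.g. 9 Cambridge gesture classes
logits = model(clip)                     # (2, 9)
```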

Automatic Volumetric Brain Tumor Segmentation using Convolutional Neural Networks

  • Yavorskyi, Vladyslav;Sull, Sanghoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2019.05a, pp.432-435, 2019
  • Convolutional Neural Networks (CNNs) have recently been gaining popularity in the medical image analysis field because of their image segmentation capabilities. In this paper, we present a CNN that performs automated brain tumor segmentation of sparsely annotated 3D Magnetic Resonance Imaging (MRI) scans. Our CNN is based on the 3D U-net architecture and incorporates separate dilated and depth-wise convolutions. It is fully trained on the BraTS 2018 dataset, and it produces more accurate results than even the winners of the BraTS 2017 competition despite having significantly fewer parameters.
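
The two building blocks named, dilated and depth-wise 3D convolutions, can be sketched as a single block; channel counts and the block layout are assumptions, not the paper's 3D U-net.

```python
import torch
import torch.nn as nn

class LightBlock3D(nn.Module):
    """Illustrative block combining a dilated 3D conv (wider receptive
    field at no extra parameter cost) with a depthwise-separable 3D conv
    (groups == channels, so far fewer parameters than a full conv)."""
    def __init__(self, ch=32):
        super().__init__()
        self.dilated = nn.Conv3d(ch, ch, 3, padding=2, dilation=2)
        self.depthwise = nn.Conv3d(ch, ch, 3, padding=1, groups=ch)
        self.pointwise = nn.Conv3d(ch, ch, 1)   # mixes channels back
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):              # x: (batch, ch, D, H, W) MRI patch
        x = self.act(self.dilated(x))
        return self.act(self.pointwise(self.depthwise(x)))

patch = torch.randn(1, 32, 16, 64, 64)
out = LightBlock3D()(patch)            # same spatial size: (1, 32, 16, 64, 64)
```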


A Review of 3D Object Tracking Methods Using Deep Learning (딥러닝 기술을 이용한 3차원 객체 추적 기술 리뷰)

  • Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing, v.22 no.1, pp.30-37, 2021
  • Accurate 3D object tracking with camera images is a key enabling technology for augmented reality applications. Motivated by the impressive success of convolutional neural networks (CNNs) in computer vision tasks such as image classification, object detection, and image segmentation, recent studies on 3D object tracking have focused on leveraging deep learning. In this paper, we review deep learning approaches for 3D object tracking. We describe the key methods in this field and discuss potential future research directions.

Super-resolution based on multi-channel input convolutional residual neural network (다중 채널 입력 Convolution residual neural networks 기반의 초해상화 기법)

  • Youm, Gwang-Young;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2016.06a, pp.37-39, 2016
  • Recently, the CNN-based super-resolution method Super-Resolution Convolutional Neural Networks (SRCNN) was reported to achieve good PSNR performance [1]. However, just as many proposed methods show limitations in restoring high-frequency components, SRCNN shares this limitation. It is also widely known that deepening the SRCNN network improves PSNR, but deepening tends to make parameter learning difficult: as gradients propagate backward through the layers they can diverge or vanish to zero, so the network parameters are not learned properly. Therefore, instead of deepening the network, this paper proposes composing the input from multiple channels to give the network additional information about high-frequency components. Noting that many super-resolution methods lack the ability to restore high-frequency components, we assume that the network needs more information about them, and we construct the low-resolution input images so that the high-frequency components are fed in at several strengths. We also adopt residual networks so that parameter learning can concentrate on restoring the high-frequency components. To verify the effectiveness of the proposed method, we conducted experiments on the Set5 and Set14 datasets: compared to SRCNN, the proposed method improved PSNR by an average of 0.29, 0.35, and 0.17 dB for scale factors 2, 3, and 4 on Set5, and by an average of 0.20 dB for scale factor 3 on Set14.
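
A minimal sketch of the two ideas combined here, a multi-channel input carrying high-frequency content at several strengths and residual prediction; the particular high-pass construction is an illustrative stand-in for the paper's input recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_channel_input(lr_up):
    """Stack the bicubic-upscaled image with versions whose high
    frequencies are amplified to different strengths (an illustrative
    construction, not the paper's exact recipe)."""
    blur = F.avg_pool2d(lr_up, 3, stride=1, padding=1)
    high = lr_up - blur                      # high-frequency residual
    return torch.cat([lr_up, lr_up + high, lr_up + 2 * high], dim=1)

class ResidualSR(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, lr_up):                # lr_up: (batch, 1, H, W)
        # Residual learning: predict only the missing high frequencies,
        # so training can concentrate on them.
        return lr_up + self.body(multi_channel_input(lr_up))
```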


Real-time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks (RGB-Depth 카메라와 Deep Convolution Neural Networks 기반의 실시간 사람 양손 3D 포즈 추정)

  • Park, Na Hyeon;Ji, Yong Bin;Gi, Geon;Kim, Tae Yeon;Park, Hye Min;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference, 2018.10a, pp.686-689, 2018
  • 3D hand pose estimation (HPE) is an important technology for smart human-computer interfaces. This study presents a deep-learning-based hand pose estimation system that recognizes the 3D poses of both hands in real time from a single RGB-Depth camera. The system consists of four stages. First, both hands are detected and extracted from the RGB and depth images using skin detection and depth cutting algorithms. Second, a Convolutional Neural Network (CNN) classifier distinguishes the right hand from the left; it consists of three convolutional layers and two fully connected layers and takes the extracted depth images as input. Third, a trained CNN regressor, composed of multiple convolutional, pooling, and fully connected layers, estimates the hand joints from the extracted left- and right-hand depth images. The CNN classifier and regressor are trained on a dataset of 22,000 depth images. Finally, the 3D pose of each hand is reconstructed from the estimated joint information. In tests, the CNN classifier distinguishes the right and left hands with 96.9% accuracy, and the CNN regressor estimates the 3D hand joints with an average error of 8.48 mm. The proposed hand pose estimation system can be used in a variety of applications, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).
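
The left/right-hand classifier is outlined concretely enough to sketch (three convolutional layers plus two fully connected layers on a depth crop); filter counts and the crop size are assumptions.

```python
import torch
import torch.nn as nn

class HandSideClassifier(nn.Module):
    """Left/right-hand classifier matching the abstract's outline:
    3 conv layers + 2 fully connected layers on a depth-image crop."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),               # left vs. right
        )

    def forward(self, depth_crop):           # (batch, 1, 64, 64) depth crop
        return self.fc(self.conv(depth_crop))

logits = HandSideClassifier()(torch.randn(8, 1, 64, 64))   # (8, 2)
```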

Performance Comparisons of GAN-Based Generative Models for New Product Development (신제품 개발을 위한 GAN 기반 생성모델 성능 비교)

  • Lee, Dong-Hun;Lee, Se-Hun;Kang, Jae-Mo
    • The Journal of the Convergence on Culture Technology, v.8 no.6, pp.867-871, 2022
  • Amid rapidly changing trends, changes in design have a great impact on the sales of fashion companies, so new designs must be chosen carefully. With the recent development of artificial intelligence, various machine learning techniques are widely used in the fashion market to increase consumer preference. To help increase reliability in new product development by quantifying abstract concepts such as preference, we generate new images that do not exist using three generative adversarial networks (GANs) and numerically compare the abstract concept of preference using a pre-trained convolutional neural network (CNN). Deep convolutional generative adversarial networks (DCGAN), progressive growing GANs (PGGAN), and dual discriminator GANs (D2GAN) were trained to produce comparable high-quality images. The measured degree of similarity was taken as the preference, and the experimental results showed that D2GAN achieved relatively high similarity compared to DCGAN and PGGAN.
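
A hypothetical sketch of the comparison step: embed generated and reference designs with a pretrained CNN and take mean cosine similarity as the preference score; the backbone choice and scoring details are assumptions, not the paper's exact measure.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Pretrained ImageNet backbone used purely as a feature extractor.
backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()            # keep the 512-d features
backbone.eval()

@torch.no_grad()
def preference_score(generated, reference):  # each: (N, 3, 224, 224)
    g = F.normalize(backbone(generated), dim=1)
    r = F.normalize(backbone(reference), dim=1)
    return (g * r).sum(dim=1).mean()         # mean cosine similarity
```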