• Title/Abstract/Keywords: Multi-View Convolutional Neural Network

Search results: 9 items

Learning-Based Multiple Pooling Fusion in Multi-View Convolutional Neural Network for 3D Model Classification and Retrieval

  • Zeng, Hui;Wang, Qi;Li, Chen;Song, Wei
    • Journal of Information Processing Systems / Vol. 15, No. 5 / pp. 1179-1191 / 2019
  • We design a view-pooling method named learning-based multiple pooling fusion (LMPF) and apply it to the multi-view convolutional neural network (MVCNN) for 3D model classification and retrieval. In this way, the multi-view feature maps projected from a 3D model can be compiled into a simple and effective feature descriptor. LMPF fuses max pooling and mean pooling by learning a set of optimal weights. Compared with hand-crafted approaches such as max pooling and mean pooling, LMPF reduces information loss effectively because of its learning ability. Experiments on the ModelNet40 and McGill datasets show that LMPF outperforms these previous methods by a clear margin.
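The abstract describes LMPF as a learned combination of max pooling and mean pooling over the per-view feature maps. A minimal PyTorch sketch of that idea follows; the module name `LearnedPoolingFusion` and the softmax parameterization of the fusion weights are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LearnedPoolingFusion(nn.Module):
    """Fuse max-pooled and mean-pooled view features with learned weights.

    A minimal sketch of the LMPF idea: instead of choosing max pooling or
    mean pooling by hand, learn a convex combination of the two.
    (The softmax weighting is an assumption for illustration.)
    """
    def __init__(self):
        super().__init__()
        # Two scalar logits, one per pooling branch; softmax keeps the
        # fusion weights positive and summing to one.
        self.logits = nn.Parameter(torch.zeros(2))

    def forward(self, view_features):
        # view_features: (batch, num_views, feature_dim) from a shared CNN
        max_pool, _ = view_features.max(dim=1)   # (batch, feature_dim)
        mean_pool = view_features.mean(dim=1)    # (batch, feature_dim)
        w = torch.softmax(self.logits, dim=0)
        return w[0] * max_pool + w[1] * mean_pool

# Example: 12 rendered views of a 3D model, 512-D features per view.
fusion = LearnedPoolingFusion()
views = torch.randn(8, 12, 512)
descriptor = fusion(views)   # (8, 512) descriptor for classification/retrieval
```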

3차원 합성곱 신경망 기반 향상된 스테레오 매칭 알고리즘 (Enhanced Stereo Matching Algorithm based on 3-Dimensional Convolutional Neural Network)

  • 왕지엔;노재규
    • 대한임베디드공학회논문지 / Vol. 16, No. 5 / pp. 179-186 / 2021
  • For stereo matching based on deep learning, the design of the network structure is crucial to computing the matching cost, and the long processing time of convolutional neural networks in image processing also needs to be addressed. In this paper, a stereo matching method using a loss volume that is sparse along the disparity dimension is proposed. A sparse 3D loss volume is constructed by translating the right-view feature map with a wide step length, which reduces the video memory and computing resources required by the 3D convolution module several-fold. To improve accuracy, the matching loss is up-sampled nonlinearly along the disparity dimension using a multi-category output, and the model is trained with a combination of two loss functions. Compared with the benchmark algorithm, the proposed algorithm not only improves accuracy but also shortens the running time by about 30%.
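The key memory saving comes from sampling the disparity axis with a wide step when building the 3D loss volume. The following is a rough PyTorch sketch of such a sparse cost volume under that assumption; the concatenation-based matching cost and the function name `sparse_cost_volume` are illustrative, not the paper's exact construction.

```python
import torch

def sparse_cost_volume(left_feat, right_feat, max_disp=192, step=4):
    """Build a cost volume sampled only every `step` disparities.

    left_feat, right_feat: (batch, channels, height, width) feature maps.
    Returns a 5-D volume (batch, 2*channels, max_disp//step, height, width);
    the coarse stride shrinks the disparity axis, so the following 3D
    convolutions need roughly `step` times less memory and compute.
    """
    b, c, h, w = left_feat.shape
    num_disp = max_disp // step
    volume = left_feat.new_zeros(b, 2 * c, num_disp, h, w)
    for i in range(num_disp):
        d = i * step
        if d == 0:
            volume[:, :c, i] = left_feat
            volume[:, c:, i] = right_feat
        else:
            # Shift the right view by d pixels before pairing with the left view.
            volume[:, :c, i, :, d:] = left_feat[:, :, :, d:]
            volume[:, c:, i, :, d:] = right_feat[:, :, :, :-d]
    return volume

left = torch.randn(1, 32, 64, 128)
right = torch.randn(1, 32, 64, 128)
vol = sparse_cost_volume(left, right)   # (1, 64, 48, 64, 128)
```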

Parallel Multi-task Cascade Convolution Neural Network Optimization Algorithm for Real-time Dynamic Face Recognition

  • Jiang, Bin;Ren, Qiang;Dai, Fei;Zhou, Tian;Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 10 / pp. 4117-4135 / 2020
  • Because of variations in viewing angle, illumination, and scene, real-time dynamic face detection and recognition is difficult in unrestricted environments. In this study, we exploit the intrinsic correlation between detection and calibration, using a multi-task cascaded convolutional neural network (MTCNN) to improve the efficiency of face recognition. The output of each core network is mapped in parallel to a compact Euclidean space in which distance represents the similarity of facial features, so that the target face can be identified as quickly as possible without waiting for all network iterations to complete. Even after the angle of the target face and the illumination change, the recognition results remain well correlated. In the application scenario, we use a multi-camera real-time monitoring system that performs face matching and recognition on successive frames acquired from different angles. The effectiveness of the method was verified in several real-time monitoring experiments, with good results.
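The abstract's central mechanism is mapping each face into a compact Euclidean space where distance reflects facial similarity, so a query face can be matched against a gallery of known identities. A small NumPy sketch of that matching step is shown below; the embedding dimension, threshold value, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def match_face(query_embedding, gallery_embeddings, threshold=1.1):
    """Return the index of the closest gallery face, or -1 if no match.

    Embeddings are assumed to be L2-normalized vectors from the recognition
    network, so Euclidean distance directly reflects facial similarity.
    The threshold value is illustrative.
    """
    dists = np.linalg.norm(gallery_embeddings - query_embedding, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else -1

# Synthetic example: 100 gallery identities with 128-D unit embeddings.
gallery = np.random.randn(100, 128).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
query = gallery[7] + 0.05 * np.random.randn(128).astype(np.float32)
query /= np.linalg.norm(query)
print(match_face(query, gallery))   # expected to print 7 for this synthetic example
```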

다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망 (Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks)

  • 안병태;최동걸;권인소
    • 로봇학회논문지 / Vol. 12, No. 3 / pp. 313-321 / 2017
  • Among the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems are face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with an average head pose mean error of less than 4.5° in real time.
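A multi-task network of this kind shares one backbone between face detection and head pose regression. The sketch below illustrates the general structure in PyTorch; the layer sizes, input resolution, and head design are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    """Shared backbone with two heads: face/non-face score and head pose.

    A minimal multi-task sketch in the spirit of the paper; the layer sizes,
    input resolution, and head design are assumptions.
    """
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.face_head = nn.Linear(64, 1)   # face vs. background logit
        self.pose_head = nn.Linear(64, 3)   # yaw, pitch, roll

    def forward(self, x):
        feat = self.backbone(x)
        return self.face_head(feat), self.pose_head(feat)

net = FacePoseNet()
gray_patch = torch.randn(4, 1, 48, 48)   # small grayscale crops
face_logit, pose = net(gray_patch)       # shapes (4, 1) and (4, 3)
```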

다시점 영상 집합을 활용한 선체 블록 분류를 위한 CNN 모델 성능 비교 연구 (Comparison Study of the Performance of CNN Models with Multi-view Image Set on the Classification of Ship Hull Blocks)

  • 전해명;노재규
    • 대한조선학회논문집 / Vol. 57, No. 3 / pp. 140-151 / 2020
  • It is important to identify the location of ship hull blocks, together with their exact block identification numbers, when scheduling the shipbuilding process. Wrong information about the location or identification number of a hull block lowers productivity because time is spent finding where the block actually is. To solve this problem, a system is needed that tracks the location of the blocks and identifies their identification numbers automatically. There has been much research on location tracking systems for hull blocks in the stockyard, but no research on identifying the hull blocks themselves. This study compares the performance of five convolutional neural network (CNN) models with a multi-view image set on the classification of hull blocks, in order to identify blocks in the stockyard. The CNN models are open architectures from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Four scaled hull block models are used to acquire the images of ship hull blocks. The CNN models were trained and transfer-learned with the original training data and with augmented versions of that data. Twenty tests and predictions, covering the five CNN models and four training conditions, were performed. To compare classification performance, accuracy and the average F1-score derived from the confusion matrix are adopted as performance measures. As a result of the comparison, the ResNet-152v2 model shows the highest accuracy and average F1-score with both the full-block and the cropped-block prediction image sets.
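The performance measures mentioned above, accuracy and average F1-score, can both be read off the confusion matrix. The following sketch shows one common way to compute them; the class counts in the example are made up for illustration, not results from the paper.

```python
import numpy as np

def accuracy_and_macro_f1(confusion):
    """Compute overall accuracy and class-averaged F1 from a confusion matrix.

    confusion[i, j] counts samples of true class i predicted as class j.
    """
    confusion = np.asarray(confusion, dtype=float)
    tp = np.diag(confusion)
    accuracy = tp.sum() / confusion.sum()
    precision = tp / np.maximum(confusion.sum(axis=0), 1e-12)
    recall = tp / np.maximum(confusion.sum(axis=1), 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return accuracy, f1.mean()

# Example with 4 hull-block classes (counts are hypothetical).
cm = [[48, 1, 1, 0],
      [2, 45, 2, 1],
      [0, 3, 46, 1],
      [1, 0, 2, 47]]
acc, macro_f1 = accuracy_and_macro_f1(cm)
print(f"accuracy={acc:.3f}, average F1={macro_f1:.3f}")
```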

컨볼루션 신경망을 이용한 다시점 비디오의 중간 시점 양자화 노이즈 제거 (Quantization noise removal in an intermediate view of multi-view videos using convolutional neural network)

  • 함유진;강제원
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2020년도 추계학술대회 / pp. 57-59 / 2020
  • In this paper, we propose a method for removing quantization noise in an intermediate view of multi-view video using a convolutional neural network. To improve the quality of the intermediate view, information from adjacent views is exploited. Applying the proposed algorithm removes quantization noise in the intermediate view and improves image quality (PSNR, peak signal-to-noise ratio). When adjacent-view information is used, the method outperforms a model trained only on generic quantization noise.
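A common way to exploit adjacent views for denoising an intermediate view is to stack them with the noisy view as input channels of a residual CNN. The sketch below illustrates that idea in PyTorch; the layer count, channel width, and residual design are assumptions for illustration, not the network used in the paper.

```python
import torch
import torch.nn as nn

class ViewAidedDenoiser(nn.Module):
    """Remove quantization noise from an intermediate view using its neighbours.

    Input channels: noisy intermediate view plus left/right adjacent views
    (3 x 3 RGB channels = 9). The residual design and layer count are
    assumptions for illustration.
    """
    def __init__(self, channels=64):
        super().__init__()
        layers = [nn.Conv2d(9, channels, 3, padding=1), nn.ReLU()]
        for _ in range(4):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 3, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy_center, left_view, right_view):
        x = torch.cat([noisy_center, left_view, right_view], dim=1)
        # Predict the noise residual and subtract it from the noisy view.
        return noisy_center - self.body(x)

net = ViewAidedDenoiser()
center = torch.rand(1, 3, 128, 128)
left, right = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
restored = net(center, left, right)   # (1, 3, 128, 128)
```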


딥러닝기반 입체 영상의 획득 및 처리 기술 동향 (Recent Technologies for the Acquisition and Processing of 3D Images Based on Deep Learning)

  • 윤민성
    • 전자통신동향분석 / Vol. 35, No. 5 / pp. 112-122 / 2020
  • In 3D computer graphics, a depth map is an image that provides information about the distance from the viewpoint to the subject's surface. Stereo sensors, depth cameras, and imaging systems using an active illumination system and a time-resolved detector can perform accurate depth measurements with their own light sources. The 3D image information obtained through the depth map is useful in 3D modeling, autonomous vehicle navigation, object recognition and remote gesture detection, resolution-enhanced medical imaging, aviation and defense technology, and robotics. In addition, depth map information is important for extracting and restoring multi-view images and for extracting the phase information required for digital hologram synthesis. This study reviews recent research trends in deep learning-based 3D data analysis and in depth map extraction using convolutional neural networks. It further focuses on 3D image processing related to digital holograms and multi-view image extraction/reconstruction, which are becoming more practical as hardware computing power rapidly increases.

멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동 (Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion)

  • 최정현;김인철
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 12, No. 9 / pp. 407-418 / 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a highly challenging visual navigation task in which an agent must visit multiple target objects placed at arbitrary locations in an unknown indoor environment, following a predefined order. Existing models for the MultiOn task use only a single context map, such as a visual appearance map or a goal map, for action selection, and so cannot take advantage of a comprehensive view of diverse multimodal context information. To overcome this limitation, this paper proposes MCFMO (Multimodal Context Fusion for MultiOn tasks), a new deep neural network-based agent model for the MultiOn task. The proposed model selects actions using a multimodal context map that includes not only the visual appearance features of the input image but also semantic features of environmental objects and target-object features. In addition, the model effectively fuses these three heterogeneous context features using a point-wise convolutional neural network module. The model further employs an auxiliary task learning module that predicts whether the target object has been observed, along with its direction and distance, to induce efficient navigation policy learning. The superiority of the proposed model is confirmed through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene datasets.
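The fusion step described above stacks three heterogeneous context maps and mixes them with point-wise (1x1) convolutions. A minimal PyTorch sketch follows; the channel sizes and module name are assumptions for illustration, not the MCFMO implementation.

```python
import torch
import torch.nn as nn

class PointwiseContextFusion(nn.Module):
    """Fuse appearance, semantic, and goal context maps with 1x1 convolutions.

    A minimal sketch of point-wise fusion of heterogeneous context features;
    the channel sizes are assumptions.
    """
    def __init__(self, appearance_c=32, semantic_c=16, goal_c=8, out_c=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(appearance_c + semantic_c + goal_c, out_c, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(out_c, out_c, kernel_size=1),
        )

    def forward(self, appearance, semantic, goal):
        # All maps share the same spatial grid; 1x1 fusion mixes only channels.
        return self.fuse(torch.cat([appearance, semantic, goal], dim=1))

fusion = PointwiseContextFusion()
a = torch.randn(1, 32, 40, 40)
s = torch.randn(1, 16, 40, 40)
g = torch.randn(1, 8, 40, 40)
fused_map = fusion(a, s, g)   # (1, 64, 40, 40) multimodal context map
```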

A Lightweight Pedestrian Intrusion Detection and Warning Method for Intelligent Traffic Security

  • Yan, Xinyun;He, Zhengran;Huang, Youxiang;Xu, Xiaohu;Wang, Jie;Zhou, Xiaofeng;Wang, Chishe;Lu, Zhiyi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 12 / pp. 3904-3922 / 2022
  • As a research hotspot, pedestrian detection has had a wide range of applications in computer vision in recent years. However, current pedestrian detection methods suffer from insufficient detection accuracy and from large models that are not suitable for large-scale deployment. In view of these problems, this paper proposes a lightweight pedestrian detection and early-warning method based on You Only Look Once version 5 (YOLOv5), exploiting the advantages of the YOLOv5s model to achieve accurate and fast pedestrian recognition. In addition, the loss function at the batch normalization (BN) layers is optimized; after sparsification, pruning, and fine-tuning, the model is compact enough to be deployed on edge devices with limited computing power. Finally, the experimental data presented in this paper show that, trained on a road pedestrian dataset that we collected and processed independently, the YOLOv5s model has advantages in precision and other indicators over the traditional single shot multibox detector (SSD) and fast region-based convolutional neural network (Fast R-CNN) models. After pruning and lightweighting, the size of the trained model is greatly reduced without a significant drop in accuracy; the final precision reaches 87%, while the model size is reduced to 7,723 KB.
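Optimizing the loss at the BN layers followed by sparsification and pruning is commonly implemented by adding an L1 penalty on the BN scale factors so that unimportant channels can be pruned afterwards. The sketch below illustrates that kind of penalty in PyTorch; the coefficient and helper name are assumptions, and the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model, coeff=1e-4):
    """L1 sparsity penalty on BatchNorm scale factors (gamma).

    Channels whose gamma is driven toward zero contribute little to the
    output and can be pruned afterwards. The coefficient is illustrative.
    """
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return coeff * penalty

# Usage inside a training step (model, criterion, optimizer, data assumed to exist):
#   loss = criterion(model(images), targets) + bn_l1_penalty(model)
#   loss.backward(); optimizer.step()

# Tiny self-contained check:
toy = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
print(bn_l1_penalty(toy))   # tensor holding the scaled L1 norm of gamma
```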