• Title/Summary/Keyword: multi-vision

Survey on Deep Learning-based Panoptic Segmentation Methods (딥 러닝 기반의 팬옵틱 분할 기법 분석)

  • Kwon, Jung Eun; Cho, Sung In
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.209-214 / 2021
  • Panoptic segmentation, now widely used in computer vision applications such as medical image analysis and autonomous driving, helps in understanding an image from a holistic view. It identifies each pixel by assigning a unique class ID and an instance ID. Specifically, it can distinguish 'thing' from 'stuff' and provide pixel-wise results of semantic prediction and object detection. As a result, it can solve both the semantic segmentation and instance segmentation tasks through a unified single model, producing two different contexts for the two segmentation tasks. The semantic segmentation task focuses on how to obtain multi-scale features from a large receptive field without losing low-level features. The instance segmentation task, on the other hand, focuses on how to separate 'thing' from 'stuff' and how to produce representations of detected objects. With the advances of both segmentation techniques, several panoptic segmentation models have been proposed. Many researchers try to solve the discrepancy between the results of the two segmentation branches that can arise at object boundaries. In this survey paper, we introduce the concept of panoptic segmentation, categorize the existing methods into two representative approaches, top-down and bottom-up, and explain how each operates. We then analyze the performance of various methods with experimental results.
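
As a minimal illustration of how a unified model's two outputs can be fused, the sketch below merges a semantic map and an instance map into a single per-pixel panoptic ID using the common `class_id * divisor + instance_id` encoding. The class IDs are hypothetical, and this is not the specific fusion step of any model in the survey:

```python
import numpy as np

def merge_panoptic(semantic, instance, thing_classes, label_divisor=1000):
    """Fuse a semantic map and an instance map into one panoptic ID map.

    'stuff' pixels keep class_id * label_divisor; 'thing' pixels additionally
    carry their instance ID so each object instance stays separable.
    """
    panoptic = semantic.astype(np.int64) * label_divisor
    is_thing = np.isin(semantic, list(thing_classes))
    panoptic[is_thing] += instance[is_thing]
    return panoptic

semantic = np.array([[0, 0], [7, 7]])   # 0 = road ('stuff'), 7 = car ('thing')
instance = np.array([[0, 0], [1, 2]])   # two separate cars
pan = merge_panoptic(semantic, instance, thing_classes={7})
# road pixels stay 0; the two car instances become 7001 and 7002
```

Resolving which branch wins at object boundaries, where the two maps disagree, is exactly the discrepancy problem the survey discusses.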

Boosting the Face Recognition Performance of Ensemble Based LDA for Pose, Non-uniform Illuminations, and Low-Resolution Images

  • Haq, Mahmood Ul; Shahzad, Aamir; Mahmood, Zahid; Shah, Ayaz Ali; Muhammad, Nazeer; Akram, Tallha
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3144-3164 / 2019
  • Face recognition systems have several potential applications, such as security and biometric access control. Ongoing research is focused on developing a robust face recognition algorithm that can mimic the human vision system. Face pose, non-uniform illumination, and low resolution are the main factors that influence the performance of face recognition algorithms. This paper proposes a novel method to handle these aspects. The proposed face recognition algorithm first uses 68 landmark points to locate a face in the input image and then partially applies PCA to extract the mean image. Meanwhile, AdaBoost and LDA are used to extract face features. In the final stage, a classic nearest-centre classifier is used for face classification. The proposed method outperforms recent state-of-the-art face recognition algorithms by producing a high recognition rate, and yields a much lower error rate in very challenging situations, such as when only a frontal ($0^{\circ}$) face sample is available in the gallery and seven poses ($0^{\circ}$, ${\pm}30^{\circ}$, ${\pm}35^{\circ}$, and ${\pm}45^{\circ}$) are used as probes on the LFW and CMU Multi-PIE databases.
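
The final classification step can be sketched as a nearest-centre classifier over per-identity mean feature vectors; the toy 2-D features below are hypothetical stand-ins for the AdaBoost/LDA features the paper actually extracts:

```python
import numpy as np

def fit_centres(features, labels):
    """Compute one mean (centre) feature vector per identity."""
    classes = np.unique(labels)
    centres = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, centres

def nearest_centre(x, classes, centres):
    """Assign x to the identity whose centre is closest in Euclidean distance."""
    d = np.linalg.norm(centres - x, axis=1)
    return classes[np.argmin(d)]

# toy 2-D "face features": two identities clustered apart
feats = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
ids = np.array([0, 0, 1, 1])
classes, centres = fit_centres(feats, ids)
pred = nearest_centre(np.array([4.9, 5.2]), classes, centres)  # identity 1
```

The appeal of this classifier is that it has no trainable parameters beyond the centres, so robustness must come from the feature extraction stage, which is where the paper concentrates its effort.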

Video Object Segmentation with Weakly Temporal Information

  • Zhang, Yikun; Yao, Rui; Jiang, Qingnan; Zhang, Changbin; Wang, Shi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1434-1449 / 2019
  • Video object segmentation is a significant task in computer vision, but its performance is still not satisfactory. This paper presents a video object segmentation method that uses weak temporal information. Motivated by the observation that object motion is continuous and smooth and that an object's appearance changes little between adjacent frames of a video sequence, we use a feed-forward architecture with motion estimation to predict the mask of the current frame. We add an extra mask channel for the previous frame's segmentation result: after processing, the previous frame's mask is fed into the expanded channel, from which we extract the temporal feature of the object and fuse it with other feature maps to generate the final mask. In addition, we introduce multi-mask guidance to improve the stability of the model, and we further enhance segmentation performance by continuing training with the masks already obtained. Experiments show that our method achieves competitive results on single object segmentation on DAVIS-2016 compared with state-of-the-art algorithms.
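
The extra mask channel amounts to concatenating the previous frame's mask with the current frame before it enters the network; a minimal sketch (shapes and dtypes are assumptions, not the paper's exact preprocessing):

```python
import numpy as np

def make_input(frame_rgb, prev_mask):
    """Stack the previous frame's mask as an extra input channel.

    frame_rgb: (H, W, 3) float image; prev_mask: (H, W) binary mask.
    Returns the (H, W, 4) tensor the segmentation network consumes.
    """
    mask_channel = prev_mask[..., None].astype(frame_rgb.dtype)
    return np.concatenate([frame_rgb, mask_channel], axis=-1)

frame = np.zeros((4, 4, 3), dtype=np.float32)
mask = np.ones((4, 4), dtype=np.float32)   # previous frame: object everywhere
x = make_input(frame, mask)                # shape (4, 4, 4)
```

Because the mask enters as an ordinary channel, the first convolution can learn how strongly to trust the previous segmentation versus the current appearance.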

Capturing research trends in structural health monitoring using bibliometric analysis

  • Yeom, Jaesun; Jeong, Seunghoo; Woo, Han-Gyun; Sim, Sung-Han
    • Smart Structures and Systems / v.29 no.2 / pp.361-374 / 2022
  • As civil infrastructure has continued to age worldwide, its structural integrity has been threatened by material deterioration and continual loading from the external environment. Structural Health Monitoring (SHM) has emerged as a cost-efficient method for ensuring structural safety and durability. As SHM research has gradually addressed an increasing number of structure-related problems, it has become difficult to understand the changing trends in research topics. Although previous review papers have analyzed research trends on specific SHM topics, these studies have faced challenges in providing (1) consistent insights regarding macroscopic SHM research trends, (2) empirical evidence for research topic changes across the overall SHM field, and (3) methodological validation for those insights. To overcome these challenges, this study proposes a framework tailored to capturing research topic trends in SHM through bibliometric and network analysis. The framework is applied to track SHM research topics over 15 years by identifying both quantitative and relational changes in the author keywords provided by representative SHM journals. The results confirm that overall SHM research has become diversified and multi-disciplinary. In particular, the rapidly growing research topics are tightly related to applying machine learning and computer vision techniques to SHM-related issues. In addition, the research topic network indicates that damage detection and vibration control have been both steadily and actively studied in SHM research.
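
A keyword co-occurrence network of the kind used in such bibliometric analyses can be sketched by counting how often author keywords appear together on a paper; node weights are keyword frequencies and edge weights are co-occurrence counts. The keyword sets below are made up for illustration:

```python
from collections import Counter
from itertools import combinations

# each set is one paper's author keywords (illustrative, not real data)
papers = [
    {"machine learning", "damage detection", "SHM"},
    {"computer vision", "damage detection"},
    {"machine learning", "vibration control"},
]

keyword_freq = Counter(k for kws in papers for k in kws)   # node weights
edge_weight = Counter()                                    # edge weights
for kws in papers:
    for a, b in combinations(sorted(kws), 2):              # sort for a canonical edge key
        edge_weight[(a, b)] += 1
```

Tracking how `keyword_freq` and `edge_weight` change across publication years is the quantitative/relational comparison the framework performs.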

A Feature Map Generation Method for MSFC-Based Feature Compression without Min-Max Signaling in VCM (VCM 의 MSFC 기반 특징 압축을 위한 Min-Max 시그널링을 제외한 특징맵 생성 기법)

  • Dong-Ha Kim; Yong-Uk Yoon; Jae-Gon Kim
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.79-81 / 2022
  • MPEG-VCM (Video Coding for Machines) is standardizing the compression of image/video features extracted from the backbone of machine vision networks. The MSFC (Multi-Scale Feature Compression)-based compression network model, which currently shows the best compression performance in the VCM standard technology exploration, converts the extracted multi-scale features into single-scale features, composes them into a feature map, and compresses it with VVC. This paper presents an improved feature map generation method for the MSFC-based compression model that includes Min-Max normalization but excludes Min-Max value signaling. That is, the proposed method generates the Min-Max values needed for feature map reconstruction at the VCM decoder in a learning-based manner, which not only reduces the bit overhead of Min-Max signaling but also enables a simpler transmission bitstream that omits a separate signaling mechanism. Experimental results show that the proposed method achieves an 83.24% BD-rate gain in BPP-mAP performance over the image anchor; this is about 1.74% lower than the existing MSFC, but it shows that the existing performance can be maintained without separate Min-Max signaling.
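
The Min-Max normalization at issue can be sketched as follows: the encoder maps the feature map into [0, 1] before VVC coding, and the decoder inverts the mapping. In the proposed method the (min, max) pair is produced by a learned module rather than signaled; the sketch simply assumes those values are available at the decoder:

```python
import numpy as np

def minmax_normalize(feat):
    """Map a feature map into [0, 1] so it can be packed into a VVC-codable picture."""
    fmin, fmax = feat.min(), feat.max()
    return (feat - fmin) / (fmax - fmin), (fmin, fmax)

def denormalize(norm, fmin, fmax):
    """Decoder-side inverse; in the proposed method (fmin, fmax) are not
    transmitted but estimated by a learned module (here: assumed given)."""
    return norm * (fmax - fmin) + fmin

feat = np.array([[-2.0, 0.0], [2.0, 6.0]])   # toy single-scale feature map
norm, (lo, hi) = minmax_normalize(feat)
rec = denormalize(norm, lo, hi)              # reconstructs feat exactly
```

Signaling `lo` and `hi` per feature map is what costs bit overhead in the baseline MSFC; learning to predict them at the decoder removes that side information at a small accuracy cost (the reported ~1.74% BD-rate gap).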

Jointly Learning of Heavy Rain Removal and Super-Resolution in Single Images

  • Vu, Dac Tung; Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.113-117 / 2020
  • Images taken under various weather conditions such as rain, haze, and snow often show low visibility, which can dramatically decrease the accuracy of computer vision tasks such as object detection and segmentation. In addition, previous image enhancement methods usually downsample the image to obtain consistent features but lack a good upsampling algorithm to recover the original size. In this research, we jointly perform rain streak removal in heavy rain images and super-resolution using a deep network. We put forth a two-stage network: a multi-model network followed by a refinement network. The first stage uses the rain formulation of a single image and two operation layers (addition, multiplication) to remove rain streaks and noise and obtain a clean image at low resolution. The second stage uses a refinement network to recover damaged background information, upsample, and produce a high-resolution image. Our method improves visual image quality and gains accuracy on a human action recognition task across datasets. Extensive experiments show that our network outperforms state-of-the-art (SoTA) methods.
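
The "rain formulation" with addition and multiplication operations is consistent with a widely used heavy-rain formation model, O = T(B + S) + (1 - T)A, where streaks S add to the background B and a transmission map T with atmospheric light A multiply in the veiling effect. The paper's exact formulation may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def heavy_rain_synthesis(B, S, T, A):
    """O = T * (B + S) + (1 - T) * A : additive rain streaks S on background B,
    attenuated by a transmission map T, plus atmospheric light A (veiling glow)."""
    return T * (B + S) + (1.0 - T) * A

B = np.full((2, 2), 0.4)                  # clean low-resolution background
S = np.array([[0.0, 0.3], [0.0, 0.0]])    # one rain-streak pixel
T = np.full((2, 2), 0.8)                  # transmission (1 = perfectly clear air)
A = 1.0                                   # bright atmospheric light
O = heavy_rain_synthesis(B, S, T, A)      # observed rainy image
```

Inverting this model, estimating S, T, and A to recover B, is what the first stage's addition and multiplication layers correspond to; the refinement stage then restores detail and upsamples.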

Ensemble Knowledge Distillation for Classification of 14 Thorax Diseases using Chest X-ray Images (흉부 X-선 영상을 이용한 14 가지 흉부 질환 분류를 위한 Ensemble Knowledge Distillation)

  • Ho, Thi Kieu Khanh; Jeon, Younghoon; Gwak, Jeonghwan
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.313-315 / 2021
  • Timely and accurate diagnosis of lung diseases using chest X-ray images has gained much attention from the computer vision and medical imaging communities. Although previous studies have demonstrated the capability of deep convolutional neural networks by achieving competitive binary classification results, their models were seemingly unreliable for effectively distinguishing multiple disease groups across a large number of X-ray images. In this paper, we aim to build an advanced approach, called Ensemble Knowledge Distillation (EKD), to significantly boost classification accuracy compared to traditional KD methods, by distilling knowledge from a cumbersome teacher model into an ensemble of lightweight student models with parallel branches trained with ground-truth labels. Learning features at different branches of the student models thus enables the network to learn diverse patterns and improve the quality of the final predictions through an ensemble learning solution. Although our experiments on the well-established ChestX-ray14 dataset showed classification improvements of traditional KD over the base transfer learning approach, EKD would be expected to further enhance classification accuracy and model generalization, especially under an imbalanced dataset and the interdependency of the 14 weakly annotated thorax diseases.
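
The distillation term that EKD builds on can be sketched as the classic temperature-softened KL divergence between teacher and student outputs; the temperature and toy logits below are arbitrary choices, not the paper's settings:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T spreads probability mass out."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in Hinton-style knowledge distillation."""
    p = softmax(teacher_logits, T)   # soft targets from the cumbersome teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

loss_same = kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])  # student matches teacher
loss_diff = kd_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1])  # student disagrees
```

In EKD each student branch would minimize a weighted sum of this term and the usual cross-entropy on the ground-truth labels, and the branches' predictions are ensembled at inference.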

A Study on Performance Improvement of GVQA Model Using Transformer (트랜스포머를 이용한 GVQA 모델의 성능 개선에 관한 연구)

  • Park, Sung-Wook; Kim, Jun-Yeong; Park, Jun; Lee, Han-Sung; Jung, Se-Hoon; Sim, Cun-Bo
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.749-752 / 2021
  • One of the most difficult capabilities to realize in artificial intelligence (AI) today is reasoning. Recently, AI models have been published for the Visual Question Answering (VQA) task in a multi-modal environment that combines vision and language. Soon afterwards, the GVQA (Grounded Visual Question Answering) model, which improves on the performance of the VQA model, was also published. However, the GVQA model still falls short of perfect performance. In this paper, to improve the performance of the GVQA model, we replace the VCC (Visual Concept Classifier) model with ViT-G (Vision Transformer-Giant)/14 and the ACP (Answer Cluster Predictor) model with GPT (Generative Pre-trained Transformer)-3. We believe these changes can be of great help in improving performance.

Multi-type Image Noise Classification by Using Deep Learning

  • Waqar Ahmed; Zahid Hussain Khand; Sajid Khan; Ghulam Mujtaba; Muhammad Asif Khan; Ahmad Waqas
    • International Journal of Computer Science & Network Security / v.24 no.7 / pp.143-147 / 2024
  • Image noise classification is a classical problem in the fields of image processing, machine learning, deep learning, and computer vision. In this paper, image noise classification is performed using deep learning with the Keras deep learning library of TensorFlow. 6900 images were selected from the Kaggle database for classification. A labeled dataset of noisy images of multiple types was generated with MATLAB from a dataset of non-noisy images; the labeled dataset comprised Salt & Pepper, Gaussian, and sinusoidal noise. Separate training and test sets were partitioned to train and test the model for image classification. Among deep neural networks, a CNN (Convolutional Neural Network) is used because of its ability to learn in-depth and hidden patterns and features in the images to be classified. This deep learning of features and patterns in images makes CNNs outperform classical methods in many classification problems.
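
Generating the labeled noisy dataset from clean images can be sketched as below; the noise amount and sigma are assumed values, and the paper used MATLAB rather than this NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_salt_pepper(img, amount=0.05):
    """Flip roughly `amount` of the pixels to 0 (pepper) or 255 (salt)."""
    noisy = img.copy()
    hit = rng.random(img.shape) < amount            # which pixels to corrupt
    noisy[hit] = rng.choice([0, 255], size=hit.sum())
    return noisy

def add_gaussian(img, sigma=10.0):
    """Add zero-mean Gaussian noise and clip back into [0, 255]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255).astype(np.uint8)

clean = np.full((8, 8), 128, dtype=np.uint8)        # toy clean image
sp = add_salt_pepper(clean)                         # label: "salt & pepper"
ga = add_gaussian(clean)                            # label: "gaussian"
```

Each synthesized image keeps the name of the function that produced it as its class label, giving the CNN a supervised multi-class noise classification task.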

Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk; Hong, Kwang-Jin; Jung, Kee-Chul
    • Journal of KIISE: Software and Applications / v.36 no.10 / pp.836-850 / 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous environments, performing arts, and robot control, so many researchers in pattern recognition and computer vision are now working to build efficient posture recognition systems. However, most existing studies are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the feature extraction part, the first step of a general posture recognition system, is influenced by these variations. To alleviate these variations and improve posture recognition results, this paper presents 3D Star Skeleton and Principal Component Analysis (PCA) based feature extraction methods for the multi-view environment. The proposed system uses 8 projection maps, a kind of depth map, as input data; the projection maps are extracted from the visual hull generation process. From these data, the system constructs the 3D Star Skeleton and extracts rotation-invariant features using PCA. In the experiments, we extract features from the 3D Star Skeleton and recognize human postures using them, proving that the proposed method is robust to human variations.
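
The rotation-invariance argument for PCA-based features can be checked on a toy point set: because the principal axes rotate together with the data, coordinates expressed in the PCA frame are unchanged (up to per-axis sign) under a rotation of the whole point set. This is only an illustration of the principle, not the paper's skeleton pipeline:

```python
import numpy as np

def pca_features(X, k=2):
    """Project centered points onto their top-k principal axes."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)
    vals, vecs = np.linalg.eigh(cov)       # eigh returns eigenvalues ascending
    order = np.argsort(vals)[::-1]         # reorder: largest variance first
    return Xc @ vecs[:, order[:k]]

pts = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 0.5], [0.0, -0.5]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
f1 = np.abs(pca_features(pts))             # abs removes the eigenvector sign ambiguity
f2 = np.abs(pca_features(pts @ R.T))       # same points, rotated by theta
# f1 and f2 agree: the PCA frame follows the rotation
```

The same idea carried over to the 3D Star Skeleton lets the extracted feature describe the posture itself rather than the body's orientation in the multi-view capture space.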