Title/Summary/Keyword: task features


Survey on Deep Learning-based Panoptic Segmentation Methods

  • Kwon, Jung Eun;Cho, Sung In
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.209-214 / 2021
  • Panoptic segmentation, now widely used in computer vision applications such as medical image analysis and autonomous driving, supports holistic understanding of an image. It labels each pixel with a class ID and an instance ID; in particular, it distinguishes 'thing' classes from 'stuff' classes and provides pixel-wise results for both semantic prediction and object detection. As a result, it solves the semantic segmentation and instance segmentation tasks with a single unified model, producing two different contexts for the two tasks. The semantic segmentation task focuses on obtaining multi-scale features from a large receptive field without losing low-level features. The instance segmentation task, on the other hand, focuses on separating 'thing' from 'stuff' and on producing representations of the detected objects. With advances in both techniques, several panoptic segmentation models have been proposed. Many researchers try to resolve the discrepancies between the results of the two segmentation branches that can arise at object boundaries. In this survey paper, we introduce the concept of panoptic segmentation, categorize existing methods into two representative approaches, top-down and bottom-up, and explain how each operates. We then analyze the performance of various methods with experimental results.
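
To make the per-pixel class-ID/instance-ID encoding concrete, below is a minimal NumPy sketch that merges a semantic map and an instance map into one panoptic map. It assumes the common `class_id * 1000 + instance_id` encoding used by the COCO panoptic format; the class IDs and the 'stuff'/'thing' split are invented for the example.

```python
import numpy as np

# Illustrative sketch only: merge a semantic map and an instance map into a
# single panoptic map. All label values below are invented for the example.
STUFF_CLASSES = {0, 1}   # e.g., 0 = sky, 1 = road ("stuff": no instances)
THING_CLASSES = {2, 3}   # e.g., 2 = car, 3 = person ("thing": counted objects)

def merge_panoptic(semantic, instance):
    """Combine per-pixel class IDs and instance IDs into one panoptic map."""
    panoptic = np.zeros_like(semantic, dtype=np.int64)
    for cls in STUFF_CLASSES:
        mask = semantic == cls
        panoptic[mask] = cls * 1000                    # stuff shares instance ID 0
    for cls in THING_CLASSES:
        mask = semantic == cls
        panoptic[mask] = cls * 1000 + instance[mask]   # one ID per object
    return panoptic

# Toy 2x4 example: left half is road, right half holds two cars.
semantic = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2]])
instance = np.array([[0, 0, 1, 1],
                     [0, 0, 2, 2]])
print(merge_panoptic(semantic, instance))
# [[1000 1000 2001 2001]
#  [1000 1000 2002 2002]]
```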

Pyramidal Deep Neural Networks for the Accurate Segmentation and Counting of Cells in Microscopy Data

  • Vununu, Caleb;Kang, Kyung-Won;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.3 / pp.335-348 / 2019
  • Cell segmentation and counting are among the most important tasks for achieving an exhaustive understanding of biological images. Conventional features lack spatial consistency, causing adjacent cells to merge and thus complicating the counting task. In this work, we propose a cascade of networks that take different versions of the original image as inputs. After constructing a Gaussian pyramid representation of the microscopy data, inputs of different sizes and spatial resolutions are given to a cascade of deep convolutional autoencoders whose task is to reconstruct the segmentation mask. The coarse masks obtained from the different networks are summed to produce the final mask. The main contribution of this work is a novel method for cell counting. Unlike the majority of methods, which use the obtained segmentation mask as prior information for counting, we propose to use the hidden latent representations, often called high-level features, as inputs to a neural-network-based regressor. While the segmentation part of our method performs as well as conventional deep learning methods, the proposed cell counting approach outperforms state-of-the-art methods.
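
The pyramid-and-sum pipeline described in the abstract can be sketched as follows. This is an illustrative reading, not the authors' code: the per-level convolutional autoencoders are replaced by a simple thresholding stub, and all sizes are arbitrary.

```python
import numpy as np
from scipy import ndimage

def gaussian_pyramid(image, levels=3):
    """Return `levels` progressively blurred and downsampled versions of image."""
    pyramid = [image]
    for _ in range(levels - 1):
        smoothed = ndimage.gaussian_filter(pyramid[-1], sigma=1.0)
        pyramid.append(smoothed[::2, ::2])            # halve the resolution
    return pyramid

def segment(level_image):
    """Stand-in for a convolutional autoencoder; a real model goes here."""
    return (level_image > level_image.mean()).astype(float)

def fuse_masks(image, levels=3):
    h, w = image.shape
    total = np.zeros((h, w))
    for level in gaussian_pyramid(image, levels):
        mask = segment(level)
        zoom = (h / mask.shape[0], w / mask.shape[1])
        total += ndimage.zoom(mask, zoom, order=1)    # upsample coarse mask
    return total / levels                             # averaged final mask

rng = np.random.default_rng(0)
image = rng.random((64, 64))
print(fuse_masks(image).shape)                        # (64, 64)
```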

Improving classification of low-resource COVID-19 literature by using Named Entity Recognition

  • Lithgow-Serrano, Oscar;Cornelius, Joseph;Kanjirangat, Vani;Mendez-Cruz, Carlos-Francisco;Rinaldi, Fabio
    • Genomics & Informatics / v.19 no.3 / pp.22.1-22.5 / 2021
  • Automatic document classification for highly interrelated classes is a demanding task that becomes more challenging when there is little labeled data for training. Such is the case of the coronavirus disease 2019 (COVID-19) clinical repository, a repository of classified and translated academic articles related to COVID-19 and relevant to clinical practice, where a 3-way classification scheme is applied to the COVID-19 literature. During the 7th Biomedical Linked Annotation Hackathon (BLAH7), we performed experiments to explore the use of named entity recognition (NER) to improve the classification. We processed the literature with OntoGene's Biomedical Entity Recogniser (OGER) and used the resulting named entities (NEs) and their links to major biological databases as extra input features for the classifier. We compared the results with a baseline model without the OGER-extracted features. In these proof-of-concept experiments, we observed a clear gain in COVID-19 literature classification: in particular, the NEs' origin was useful for classifying document types, and the NEs' type for clinical specialties. Given the limitations of the small dataset, we can only conclude that our results suggest NER would benefit this classification task; further experiments with a larger dataset would be needed to estimate this benefit accurately.
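
A hedged sketch of the feature-augmentation idea: concatenate baseline text features with entity-derived count features and train a classifier on the combined matrix. The documents, labels, and entity counts are toy stand-ins for OGER output, and scikit-learn's logistic regression stands in for whatever classifier was actually used.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["remdesivir trial in severe covid-19 patients",
        "ct imaging findings of covid-19 pneumonia",
        "mental health of healthcare workers during the pandemic"]
labels = [0, 1, 2]                      # toy 3-way classification scheme

# Pretend NER output: per-document entity-type counts,
# columns = [chemical, anatomy, disease] (invented for the example).
ner_counts = np.array([[2, 0, 1],
                       [0, 3, 1],
                       [0, 0, 1]])

tfidf = TfidfVectorizer().fit_transform(docs)          # baseline text features
features = hstack([tfidf, csr_matrix(ner_counts)])     # + NER features

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```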

Improving Transformer with Dynamic Convolution and Shortcut for Video-Text Retrieval

  • Liu, Zhi;Cai, Jincen;Zhang, Mengmeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2407-2424 / 2022
  • Recently, Transformers have made great progress in video retrieval tasks owing to their high representation capability. In a Transformer, the cascaded self-attention modules can capture long-distance feature dependencies, but local feature details are likely to deteriorate. In addition, increasing the depth of the structure is likely to produce learning bias in the learned features. In this paper, an improved Transformer structure named TransDCS (Transformer with Dynamic Convolution and Shortcut) is proposed. A Multi-head Conv-Self-Attention module is introduced to model local dependencies and improve the efficiency of local feature extraction. Meanwhile, an augmented shortcut module based on a dual identity matrix is applied to enhance the propagation of input features and mitigate the learning bias. The proposed model is tested on the MSRVTT, LSMDC, and ActivityNet benchmarks, and it surpasses previous solutions for the video-text retrieval task. For example, on the LSMDC benchmark, a gain of about 2.3% in MdR and 6.1% in MnR is obtained over recently proposed multimodal methods.
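
A loose PyTorch sketch of the general idea, pairing self-attention with a depthwise convolution branch for local detail plus a residual shortcut. This is one possible reading of the abstract, not the authors' TransDCS implementation; all dimensions are invented, and the paper's dual-identity-matrix shortcut is simplified to a plain residual here.

```python
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    def __init__(self, dim, heads=4, kernel_size=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Depthwise 1D convolution over the sequence models local dependencies.
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                       # x: (batch, seq_len, dim)
        attn_out, _ = self.attn(x, x, x)        # global dependencies
        conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local details
        # Plain residual shortcut; the paper's augmented variant is based on
        # a dual identity matrix.
        return self.norm(x + attn_out + conv_out)

x = torch.randn(2, 16, 64)                      # (batch, frames, feature dim)
print(ConvSelfAttention(64)(x).shape)           # torch.Size([2, 16, 64])
```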

Variational autoencoder for prosody-based speaker recognition

  • Starlet Ben Alex;Leena Mary
    • ETRI Journal / v.45 no.4 / pp.678-689 / 2023
  • This paper describes a novel end-to-end deep generative model-based speaker recognition system using prosodic features. The usefulness of variational autoencoders (VAEs) in learning speaker-specific prosody representations for the speaker recognition task is examined herein for the first time. The speech signal is first automatically segmented into syllable-like units using vowel onset points (VOPs) and energy valleys. Prosodic features, such as the dynamics of duration, energy, and fundamental frequency (F0), are then extracted at the syllable level and used to train/adapt a speaker-dependent VAE from a universal VAE. Initial comparative studies on VAEs and traditional autoencoders (AEs) suggest that the former can efficiently learn speaker representations. Investigations into the impact of gender information in speaker recognition also show that gender-dependent impostor banks lead to higher accuracies. Finally, evaluation on the NIST SRE 2010 dataset demonstrates the usefulness of the proposed approach for speaker recognition.
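
For reference, a minimal VAE over syllable-level prosodic vectors might look like the sketch below. The feature dimension (nine values standing in for duration/energy/F0 dynamics) and the layer sizes are invented; only the model family matches the paper.

```python
import torch
import torch.nn as nn

class ProsodyVAE(nn.Module):
    def __init__(self, in_dim=9, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU())
        self.mu = nn.Linear(16, latent_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(16, latent_dim)    # log-variance of q(z|x)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

x = torch.randn(32, 9)            # 32 syllables, 9 prosodic features each
model = ProsodyVAE()
recon, mu, logvar = model(x)
print(vae_loss(x, recon, mu, logvar).item())
```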

Similar Image Retrieval Technique based on Semantics through Automatic Labeling Extraction of Personalized Images

  • Seo, Jung-Hee
    • Journal of information and communication convergence engineering / v.22 no.1 / pp.56-63 / 2024
  • Despite rapid strides in content-based image retrieval, a notable disparity persists between the visual features of images and the semantic features discerned by humans. Hence, image retrieval that associates the semantic similarities recognized by humans with visual similarities is a difficult task for most image retrieval systems. Our study endeavors to bridge this gap by refining image semantics, aligning them more closely with human perception. Deep learning techniques are used to semantically classify images and to retrieve those that are semantically similar to personalized images. Moreover, we introduce keyword-based image retrieval, enabling automatic labeling of images in mobile environments. The proposed approach can improve performance on a mobile device with limited resources and bandwidth by performing retrieval on the device using the visual features and keywords of the image.
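
A toy sketch of the keyword-plus-features retrieval idea: each stored image carries an automatically assigned label and a feature vector, and retrieval filters by keyword before ranking by cosine similarity. The database, labels, and vectors are invented stand-ins for real model output.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Pretend database: (auto-assigned label, deep feature vector), both invented.
database = [("dog", np.array([0.9, 0.1, 0.0])),
            ("dog", np.array([0.7, 0.3, 0.1])),
            ("beach", np.array([0.1, 0.2, 0.9]))]

def retrieve(query_label, query_feat, top_k=2):
    # Keyword filter first, then similarity ranking on visual features.
    scored = [(cosine(query_feat, feat), label)
              for label, feat in database if label == query_label]
    return sorted(scored, key=lambda t: t[0], reverse=True)[:top_k]

print(retrieve("dog", np.array([0.8, 0.2, 0.05])))
```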

Representation and Detection of Video Shots' Features for Emotional Events

  • Kang, Hang-Bong;Park, Hyun-Jae
    • The KIPS Transactions: Part B / v.11B no.1 / pp.53-62 / 2004
  • The processing of emotional information is very important in Human-Computer Interaction (HCI). In particular, dealing with a user's affect is very important in video information processing. To handle emotional information, it is necessary to represent meaningful features and detect them efficiently. Even though it is not easy to detect emotional events from low-level features such as color and motion, it is possible to detect them using statistical analysis such as Linear Discriminant Analysis (LDA). In this paper, we propose a representation scheme for emotion-related features and a detection method. We experiment with features extracted from video to detect emotional events and obtain promising results.
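
To illustrate the statistical step, the following scikit-learn sketch trains LDA to separate 'emotional' from 'neutral' shots from low-level features. The feature layout (mean color plus motion magnitude) and all data are synthetic; only the use of LDA matches the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
# 100 shots x 4 features: [mean R, mean G, mean B, mean motion magnitude].
neutral = rng.normal(loc=[0.5, 0.5, 0.5, 0.2], scale=0.1, size=(50, 4))
emotional = rng.normal(loc=[0.6, 0.3, 0.3, 0.6], scale=0.1, size=(50, 4))

X = np.vstack([neutral, emotional])
y = np.array([0] * 50 + [1] * 50)        # 0 = neutral shot, 1 = emotional event

lda = LinearDiscriminantAnalysis().fit(X, y)
new_shot = np.array([[0.62, 0.31, 0.29, 0.55]])
print(lda.predict(new_shot))             # -> [1], classified as emotional
```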

Detection of Multiple Salient Objects by Categorizing Regional Features

  • Oh, Kang-Han;Kim, Soo-Hyung;Kim, Young-Chul;Lee, Yu-Ra
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.272-287 / 2016
  • Recently, various effective contrast-based salient object detection models focusing on a single target have been proposed. However, there is a lack of research on the detection of multiple objects, which is a more challenging task than the single-target case. In the multiple-target problem, we are confronted with new difficulties caused by the distinct differences between the properties of the objects. The dependence of existing models on the global maximum of the data distribution becomes a drawback for the detection of multiple objects. In this paper, after analyzing the limitations of existing methods, we devise three main stages to detect multiple salient objects. In the first stage, regional features are extracted from over-segmented regions. In the second stage, the regional features are categorized into homogeneous clusters using the mean-shift algorithm with kernels of various sizes. In the final stage, we compute saliency scores for the categorized regions using only spatial features, without contrast features, and integrate all scores to obtain the final salient regions. In the experiments, the scheme achieved superior detection accuracy on the SED2 and MSRA-ASD benchmarks, with both higher precision and better recall than state-of-the-art approaches. In particular, given multiple objects with different properties, our model significantly outperforms all existing models.
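
The second stage can be sketched with scikit-learn's MeanShift, clustering regional feature vectors under kernels of several bandwidths. The regional features below are random stand-ins for real over-segmentation output.

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(7)
# 30 regions x 5 features (e.g., mean color, position, size), invented.
regions = np.vstack([rng.normal(0.2, 0.05, (15, 5)),
                     rng.normal(0.8, 0.05, (15, 5))])

for bandwidth in (0.1, 0.3, 0.5):        # kernels of various sizes
    labels = MeanShift(bandwidth=bandwidth).fit_predict(regions)
    print(f"bandwidth={bandwidth}: {len(set(labels))} clusters")
```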

Evaluation of Frequency Warping Based Features and Spectro-Temporal Features for Speaker Recognition

  • Choi, Young Ho;Ban, Sung Min;Kim, Kyung-Wha;Kim, Hyung Soon
    • Phonetics and Speech Sciences / v.7 no.1 / pp.3-10 / 2015
  • In this paper, different frequency scales for cepstral feature extraction are evaluated for text-independent speaker recognition. To this end, mel-frequency cepstral coefficients (MFCCs), linear frequency cepstral coefficients (LFCCs), and bilinear warped frequency cepstral coefficients (BWFCCs) are applied in speaker recognition experiments. In addition, the spectro-temporal features extracted by the cepstral-time matrix (CTM) are examined as an alternative to the delta and delta-delta features. Experiments on the NIST speaker recognition evaluation (SRE) 2004 task are carried out using the Gaussian mixture model-universal background model (GMM-UBM) method and the joint factor analysis (JFA) method, both based on the ALIZE 3.0 toolkit. Experimental results with both methods show that BWFCCs with an appropriate warping factor yield better performance than MFCCs and LFCCs. It is also shown that the feature set including CTM-based spectro-temporal information outperforms the conventional feature set including the delta and delta-delta features.
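
For intuition, the bilinear warping behind BWFCCs comes from a first-order all-pass transform: a single warping factor alpha reshapes the frequency axis, with alpha = 0 giving a linear (LFCC-like) scale and larger alpha stretching low frequencies toward a mel-like scale. A small sketch, with illustrative alpha values rather than the paper's tuned factors:

```python
import numpy as np

def bilinear_warp(omega, alpha):
    """Map normalized frequency omega (0..pi) through a first-order all-pass."""
    return omega + 2.0 * np.arctan(alpha * np.sin(omega)
                                   / (1.0 - alpha * np.cos(omega)))

omega = np.linspace(0, np.pi, 5)
for alpha in (0.0, 0.35, 0.55):
    warped = bilinear_warp(omega, alpha)
    print(f"alpha={alpha}: {np.round(warped, 2)}")
```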

Proposed Schemes for Image Sensors Compatibility in IEEE TG7r1 Image Sensor Communications

  • Nguyen, Trang;Hong, Chang Hyun;Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.7 / pp.799-808 / 2016
  • The IEEE 802.15.7r1 Task Group (TG7r1), known as the revision of the IEEE 802.15.7 Visible Light Communication standard targeting the commercial usage of visible light communication systems which mainly use either image sensors or cameras, is of interest in this paper. The vast challenge in Image Sensor Communications (ISC), as it has been addressed in the Technical Consideration Document (TCD) of the TG7r1, is the Image Sensor Compatibility to support the variety of different commercial cameras available on the market. The on-going ISC standard must adhere to compatible image sensors regulations. This paper brings an inside review of the TG7r1 and an inside look of related works on Image Sensor Communications. The paper analyzes the compatibility features by introducing a revised model of receiver to explain how those features are necessary. One of the most challenging but interesting features is the capability in being compatible to camera frame rates. The variation of camera frame rate is modeled from verified experimental results. Noticeably, three singular approaches to support frame rates compatibility, including temporal approach, spatial approach, and frequency-domain approach, are proposed on the paper along with concise definitions. Those schemes have been presented as valuable proposals on the call-for-proposal meeting series of the TG7r1 recently.