• Title/Summary/Keyword: Automatic Annotation

Search Results: 78

Bird's Eye View Semantic Segmentation based on Improved Transformer for Automatic Annotation

  • Tianjiao Liang;Weiguo Pan;Hong Bao;Xinyue Fan;Han Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.8
    • /
    • pp.1996-2015
    • /
    • 2023
  • High-definition (HD) maps can provide precise road information that enables an autonomous driving system to navigate a vehicle effectively. Recent research has focused on leveraging semantic segmentation to achieve automatic annotation of HD maps. However, existing methods suffer from low recognition accuracy in autonomous driving scenarios, leading to inefficient annotation processes. In this paper, we propose a novel semantic segmentation method for automatic HD map annotation. Our approach introduces a new encoder, the convolutional transformer hybrid encoder, to enhance the model's feature extraction capabilities. Additionally, we propose a multi-level fusion module that enables the model to aggregate different levels of detail and semantic information. Furthermore, we present a novel decoupled boundary joint decoder to improve the model's ability to handle boundaries between categories. To evaluate our method, we conducted experiments on the Bird's Eye View point cloud image dataset and the Cityscapes dataset. Comparative analysis against state-of-the-art methods demonstrates that our model achieves the highest performance. Specifically, our model achieves an mIoU of 56.26%, surpassing SegFormer by 1.47 percentage points. This innovation promises to significantly enhance the efficiency of automatic HD map annotation.
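The mIoU (mean Intersection-over-Union) figure reported above is the standard semantic segmentation metric; a minimal sketch of how it is computed (illustrative only, not the authors' code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over all classes present in either map."""
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both maps: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# 2x4 toy label maps with 2 classes
pred   = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
# class 0: inter 3 / union 4 = 0.75; class 1: inter 4 / union 5 = 0.8
```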

Efficient Semi-automatic Annotation System based on Deep Learning

  • Hyunseok Lee;Hwa Hui Shin;Soohoon Maeng;Dae Gwan Kim;Hyojeong Moon
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.6
    • /
    • pp.267-275
    • /
    • 2023
  • This paper presents the development of specialized software for annotating volumes of interest on 18F-FDG PET/CT images, with the goal of facilitating the study and diagnosis of head and neck cancer (HNC). To achieve an efficient annotation process, we employed a SE-Norm-Residual Layer-based U-Net model. This model exhibited outstanding proficiency in segmenting cancerous regions within 18F-FDG PET/CT scans of HNC cases. A manual annotation function was also integrated, allowing researchers and clinicians to validate and refine annotations based on dataset characteristics. The workspace displays a fusion of the PET and CT images, enhancing user convenience through simultaneous visualization. The performance of the deep learning model was validated using the HECKTOR 2021 dataset, after which the semi-automatic annotation functionalities were developed. We began with image preprocessing, including resampling, normalization, and co-registration, followed by an evaluation of the deep learning model's performance. This model was integrated into the software, serving as an initial automatic segmentation step. Users can manually refine the pre-segmented regions to correct false positives and false negatives. Annotation images are subsequently saved along with their corresponding 18F-FDG PET/CT fusion images, enabling their application across various domains. In this study, we developed semi-automatic annotation software designed to efficiently generate annotated lesion images, with applications in HNC research and diagnosis. The findings indicate that this software surpasses conventional tools, particularly in the context of HNC-specific annotation with 18F-FDG PET/CT data. Consequently, the developed software offers a robust solution for producing annotated datasets, driving advances in the study and diagnosis of HNC.
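The semi-automatic workflow described above (automatic pre-segmentation followed by manual correction of false positives and false negatives) can be sketched as simple boolean mask operations; the function and variable names here are illustrative, not taken from the paper's software:

```python
import numpy as np

def refine_mask(auto_mask, remove_fp, add_fn):
    """Apply manual corrections to an automatic segmentation mask.

    auto_mask: boolean array produced by the deep learning model
    remove_fp: boolean array of pixels the user marks as false positives
    add_fn:    boolean array of pixels the user marks as false negatives
    """
    refined = auto_mask & ~remove_fp   # erase false positives
    refined = refined | add_fn         # paint in missed false negatives
    return refined

# 1-D toy example standing in for a PET/CT slice
auto = np.array([0, 1, 1, 0, 0], dtype=bool)
fp   = np.array([0, 0, 1, 0, 0], dtype=bool)   # pixel 2 wrongly segmented
fn   = np.array([0, 0, 0, 1, 0], dtype=bool)   # pixel 3 missed by the model
```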

Hybrid Fungal Genome Annotation Pipeline Combining ab initio, Evidence-, and Homology-based gene model evaluation

  • Min, Byoungnam;Choi, In-Geol
    • 한국균학회소식: 학술대회논문집 (Korean Society of Mycology Newsletter: Conference Proceedings)
    • /
    • 2018.05a
    • /
    • pp.22-22
    • /
    • 2018
  • Fungal genome sequencing and assembly have become routine these days. Genome analysis relies on high-quality gene prediction and annotation. An automatic fungal genome annotation pipeline is essential for handling genomic sequence data that is accumulating exponentially. However, building an automatic annotation procedure for fungal genomes is not an easy task. FunGAP (Fungal Genome Annotation Pipeline) was developed for precise and accurate prediction of gene models from any fungal genome assembly. To produce high-quality gene models, this pipeline employs multiple gene prediction programs encompassing ab initio, evidence-, and homology-based evaluation. FunGAP aims to evaluate all predicted genes by filtering gene models. To build a reliable filter for removing false-positive genes, we used a scoring function that seeks a consensus by evaluating each gene model based on its homology to known proteins or domains. FunGAP is freely available for non-commercial users at the GitHub site (https://github.com/CompSynBioLab-KoreaUniv/FunGAP).
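The consensus-scoring idea above — ranking competing gene models by their homology support and discarding low-scoring ones — can be sketched as follows; the scoring weights and field names are illustrative assumptions, not FunGAP's actual implementation:

```python
def score_gene_model(model):
    """Toy consensus score: homology hits to known proteins, weighted by
    alignment coverage, plus a smaller bonus for known domain hits
    (illustrative weights, not FunGAP's)."""
    return model["protein_hits"] * model["coverage"] + 0.5 * model["domain_hits"]

def filter_models(candidates, threshold=1.0):
    """Keep only gene models whose consensus score passes the threshold."""
    return [m for m in candidates if score_gene_model(m) >= threshold]

candidates = [
    {"id": "g1", "protein_hits": 3, "coverage": 0.9, "domain_hits": 2},
    {"id": "g2", "protein_hits": 0, "coverage": 0.1, "domain_hits": 0},  # likely false positive
]
```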


Multi-cue Integration for Automatic Annotation (자동 주석을 위한 멀티 큐 통합)

  • Shin, Seong-Yoon;Rhee, Yang-Won
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2010.07a
    • /
    • pp.151-152
    • /
    • 2010
  • WWW images are located in structured, networked documents, so the importance of a word can be indicated by its location and frequency. There are two patterns for multi-cue integration annotation. The multi-cue integration algorithm shows initial promise as an indicator of the semantic keyphrases of web images. Latent semantic automatic keyphrase extraction, which improves with the use of multiple cues, is expected to be preferable.


A Multimedia Database System using Method of Automatic Annotation Update and Multi-Partition Color Histogram (자동 주석 갱신 및 다중 분할 칼라 히스토그램 기법을 이용한 멀티미디어 데이터베이스 시스템)

  • Ahn Jae-Myung;Oh Hae-Seok
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.701-708
    • /
    • 2004
  • Existing content-based video retrieval systems search by a single method, such as annotation-based or feature-based retrieval. Hence, they not only show low search efficiency but also demand much effort from the system administrator or annotator to approach fully automatic processing. In this paper, we propose an agent-based, automatic, and unified semantics-based video retrieval system, which supports various semantic retrievals of massive video data by integrating feature-based retrieval and annotation-based retrieval. The indexing agent embodies the semantics of the annotations of extracted key frames by analyzing a user's initial query and selecting the key-frame image referenced by that query. A key frame selected by the user then serves as the query image for feature-based retrieval, and the indexing agent retrieves and displays the most similar key-frame images after comparing the query image with the key frames in the database using the multiple-partition color histogram technique. Experiments show that the performance of the proposed system is significantly improved.
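The multiple-partition color histogram comparison mentioned above can be sketched as: split each image into fixed blocks, histogram each block, and average the per-block histogram intersections. This is a common formulation of the technique; the parameters below are illustrative, not the paper's exact method:

```python
import numpy as np

def partition_histograms(img, splits=2, bins=4):
    """Split a grayscale image into splits x splits blocks and
    return a normalized intensity histogram per block."""
    h, w = img.shape
    hists = []
    for i in range(splits):
        for j in range(splits):
            block = img[i * h // splits:(i + 1) * h // splits,
                        j * w // splits:(j + 1) * w // splits]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())
    return hists

def similarity(img_a, img_b):
    """Average histogram intersection over all partitions (1.0 = identical)."""
    ha, hb = partition_histograms(img_a), partition_histograms(img_b)
    return float(np.mean([np.minimum(a, b).sum() for a, b in zip(ha, hb)]))

# Toy 8x8 "key frame" with intensities 0..63
img = np.arange(64).reshape(8, 8)
```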

Automatic Summarization of French Scientific Articles by a Discourse Annotation Method using the EXCOM System

  • Antoine, Blais
    • Language and Information
    • /
    • v.13 no.1
    • /
    • pp.1-20
    • /
    • 2009
  • Summarization is a complex cognitive task, and simulating it is very difficult for machines. This paper presents an automatic summarization strategy based on a discourse categorization of textual information. This categorization is carried out by the automatic identification of discourse markers in texts. We defend here the use of discourse methods in automatic summarization. Two evaluations of the summarization strategy are presented: the summaries produced by our strategy are evaluated against summaries produced by humans and by other applications. These two evaluations demonstrate the capacity of our application, based on EXCOM, to produce summaries comparable to those of other applications.
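Marker-based discourse categorization of this kind can be sketched as scoring sentences by the discourse markers they contain and extracting the top-ranked ones; the marker list and weights below are illustrative assumptions, not EXCOM's actual linguistic resources:

```python
# Illustrative discourse markers signalling conclusive/important content
MARKERS = {"in conclusion": 3, "we show": 2, "therefore": 2, "however": 1}

def score_sentence(sentence):
    """Sum the weights of all discourse markers found in the sentence."""
    s = sentence.lower()
    return sum(w for marker, w in MARKERS.items() if marker in s)

def summarize(sentences, k=1):
    """Return the k highest-scoring sentences, kept in original order."""
    ranked = sorted(sentences, key=score_sentence, reverse=True)[:k]
    return [s for s in sentences if s in ranked]

doc = [
    "The weather was mild that day.",
    "We show that marker-based selection works.",
    "In conclusion, discourse markers guide extraction.",
]
```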


Semi-Automatic Annotation Tool to Build Large Dependency Tree-Tagged Corpus

  • Park, Eun-Jin;Kim, Jae-Hoon;Kim, Chang-Hyun;Kim, Young-Kill
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.385-393
    • /
    • 2007
  • Corpora annotated with rich linguistic information are required to develop robust statistical natural language processing systems. Building such corpora, however, is expensive, labor-intensive, and time-consuming work. To ease this work, we designed and implemented an annotation tool for building a Korean dependency tree-tagged corpus. Compared with other annotation tools, ours is characterized by the following features: independence from applications, localization of errors, powerful error checking, instant sharing of annotated information, and user-friendliness. Using our tool, we have annotated 100,904 Korean sentences with dependency structures. The number of annotators was 33, the average annotation time was about 4 minutes per sentence, and the total annotation period was 5 months. We are confident that we can obtain accurate and consistent annotations as well as reduced labor and time.


A semi-automatic cell type annotation method for single-cell RNA sequencing dataset

  • Kim, Wan;Yoon, Sung Min;Kim, Sangsoo
    • Genomics & Informatics
    • /
    • v.18 no.3
    • /
    • pp.26.1-26.6
    • /
    • 2020
  • Single-cell RNA sequencing (scRNA-seq) has been widely applied to provide insights into cell-by-cell expression differences within a given bulk sample. Accordingly, numerous analysis methods have been developed. As it involves the simultaneous analysis of many cells and genes, the efficiency of these methods is crucial. The conventional cell type annotation method is laborious and subjective. Here we propose a semi-automatic method that calculates a normalized score for each cell type based on a user-supplied list of cell type-specific marker genes. The method was applied to a publicly available scRNA-seq dataset of a mouse cardiac non-myocyte cell pool. Annotating the 35 t-stochastic neighbor embedding clusters into 12 cell types was straightforward, and the accuracy was evaluated by constructing a co-expression network for each cell type. Gene Ontology analysis was congruent with the annotated cell types, and the corollary regulatory network analysis identified upstream transcription factors with well-supported literature evidence. The source code is available as an R script upon request.
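The per-cell-type normalized score described above can be sketched as averaging a cluster's expression of each type's marker genes, normalizing by the top score, and assigning the highest-scoring type; the marker lists and expression values below are made-up illustrations, not the paper's data or R code:

```python
def annotate_cluster(mean_expr, markers):
    """Assign a cell type to one cluster.

    mean_expr: {gene: mean expression in the cluster}
    markers:   {cell_type: [marker genes]}
    Scores each cell type by the average expression of its markers,
    normalizes by the best score, and returns the top cell type.
    """
    scores = {
        ct: sum(mean_expr.get(g, 0.0) for g in genes) / len(genes)
        for ct, genes in markers.items()
    }
    top = max(scores.values())
    normalized = {ct: s / top for ct, s in scores.items()}
    return max(normalized, key=normalized.get)

markers = {"Fibroblast": ["Col1a1", "Pdgfra"], "Endothelial": ["Pecam1", "Cdh5"]}
expr = {"Col1a1": 5.0, "Pdgfra": 3.0, "Pecam1": 0.2, "Cdh5": 0.1}
```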

Korean Semantic Annotation on the EXCOM Platform

  • Chai, Hyun-Zoo;Djioua, Brahim;Priol, Florence Le;Descles, Jean-Pierre
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.548-556
    • /
    • 2007
  • We present an automatic semantic annotation system for Korean on the EXCOM (EXploration COntextual for Multilingual) platform. The purpose of natural language processing is to enable computers to understand human language so that they can perform more sophisticated tasks. Accordingly, current research concentrates more and more on extracting semantic information. The realization of semantic processing requires the widespread annotation of documents. However, compared to that for inflectional languages, processing technology for agglutinative languages such as Korean still has shortcomings. EXCOM identifies semantic information in Korean text using our new method, the Contextual Exploration Method. Our initial system properly annotates approximately 88% of standard Korean sentences, and this annotation rate holds across text domains.


Images Automatic Annotation: Multi-cues Integration (영상의 자동 주석: 멀티 큐 통합)

  • Shin, Seong-Yoon;Ahn, Eun-Mi;Rhee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.589-590
    • /
    • 2010
  • Together, these images constitute a considerable database. Moreover, the semantic meanings of images are well represented by the surrounding text and links. However, only a small minority of these images have precise keyphrases assigned, and manually assigning keyphrases to existing images is very laborious. It is therefore highly desirable to automate the keyphrase extraction process. In this paper, we first introduce WWW image annotation methods based on low-level features, page tags, overall word frequency, and local word frequency. We then put forward our method of multi-cue integration for image annotation. We also show experimentally that the multi-cue image annotation method is superior to the other methods.
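The multi-cue integration idea — combining page-tag, overall-frequency, and local-frequency cues into one keyphrase ranking — can be sketched as a weighted sum of per-cue scores; the cue names and weights below are illustrative assumptions, not the paper's trained parameters:

```python
def integrate_cues(candidates, weights):
    """Rank candidate keyphrases by the weighted sum of their cue scores.

    candidates: {phrase: {cue_name: score in [0, 1]}}
    weights:    {cue_name: weight}
    """
    def combined(cues):
        return sum(weights.get(name, 0.0) * value for name, value in cues.items())
    return sorted(candidates, key=lambda p: combined(candidates[p]), reverse=True)

weights = {"page_tag": 0.4, "overall_freq": 0.3, "local_freq": 0.3}
candidates = {
    "sunset beach": {"page_tag": 0.9, "overall_freq": 0.5, "local_freq": 0.7},
    "click here":   {"page_tag": 0.1, "overall_freq": 0.8, "local_freq": 0.2},
}
# "sunset beach": 0.36 + 0.15 + 0.21 = 0.72; "click here": 0.34
```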
