• Title/Summary/Keyword: annotation information

Search Result 393

Bird's Eye View Semantic Segmentation based on Improved Transformer for Automatic Annotation

  • Tianjiao Liang;Weiguo Pan;Hong Bao;Xinyue Fan;Han Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.8
    • /
    • pp.1996-2015
    • /
    • 2023
  • High-definition (HD) maps can provide precise road information that enables an autonomous driving system to navigate a vehicle effectively. Recent research has focused on leveraging semantic segmentation to achieve automatic annotation of HD maps. However, existing methods suffer from low recognition accuracy in autonomous driving scenarios, leading to inefficient annotation processes. In this paper, we propose a novel semantic segmentation method for automatic HD map annotation. Our approach introduces a new encoder, the convolutional transformer hybrid encoder, to enhance the model's feature extraction capabilities. Additionally, we propose a multi-level fusion module that enables the model to aggregate different levels of detail and semantic information. Furthermore, we present a novel decoupled boundary joint decoder to improve the model's handling of boundaries between categories. To evaluate our method, we conducted experiments on a Bird's Eye View point cloud image dataset and the Cityscapes dataset. Comparative analysis against state-of-the-art methods demonstrates that our model achieves the highest performance: an mIoU of 56.26%, surpassing SegFormer by 1.47 percentage points. This innovation promises to significantly enhance the efficiency of automatic HD map annotation.
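
The mIoU figure cited above is the mean of per-class intersection-over-union scores computed from a confusion matrix; a minimal sketch of that metric (the class count and label arrays are illustrative, not from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union from flat predicted/true label arrays."""
    # Build a num_classes x num_classes confusion matrix (rows: true, cols: predicted).
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, t in zip(pred.ravel(), target.ravel()):
        cm[t, p] += 1
    tp = np.diag(cm)              # true positives per class
    fp = cm.sum(axis=0) - tp      # false positives per class
    fn = cm.sum(axis=1) - tp      # false negatives per class
    union = tp + fp + fn
    # Ignore classes absent from both prediction and ground truth.
    iou = np.where(union > 0, tp / np.maximum(union, 1), np.nan)
    return np.nanmean(iou)

pred   = np.array([0, 0, 1, 1, 2, 2])
target = np.array([0, 1, 1, 1, 2, 0])
print(mean_iou(pred, target, 3))  # → 0.5
```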

Implementation of Annotation and Thesaurus for Remote Sensing

  • Chae, Gee-Ju;Yun, Young-Bo;Park, Jong-Hyun
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.222-224
    • /
    • 2003
  • Many users want to attach their own information to data on the web or on their computers without actually modifying the underlying data. In remote sensing, the results of image classification generally consist of image and text files. To address these inconveniences, we suggest an annotation method based on XML. The method is efficient and can be applied both to the web and to the viewing of image-classification results, covering image and text files alike. The motivation for thesaurus construction is the lack of remote sensing and GIS information on search engines such as Empas, Naver, and Google: a search engine cannot simultaneously retrieve information for a concept that goes by many different names. We therefore select remote sensing data from different sources and build relations between terms, analyzing the meanings of different terms with similar senses.
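
An XML annotation attached to a classification result, as the abstract describes, can be sketched with the standard library; the element and attribute names here are hypothetical, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation record attached to an image-classification result,
# keeping the original image and text files untouched.
ann = ET.Element("annotation", attrib={"target": "scene42.img"})
region = ET.SubElement(ann, "region",
                       attrib={"x": "120", "y": "88", "w": "40", "h": "40"})
label = ET.SubElement(region, "label")
label.text = "vegetation"
note = ET.SubElement(region, "note", attrib={"author": "analyst1"})
note.text = "Possible misclassification near the river bank."

xml_text = ET.tostring(ann, encoding="unicode")
print(xml_text)
```

Because the annotation lives in a separate XML document that only references the data, it can be layered onto both web pages and classification outputs without touching them.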

An empirical evaluation of electronic annotation tools for Twitter data

  • Weissenbacher, Davy;O'Connor, Karen;Hiraki, Aiko T.;Kim, Jin-Dong;Gonzalez-Hernandez, Graciela
    • Genomics & Informatics
    • /
    • v.18 no.2
    • /
    • pp.24.1-24.7
    • /
    • 2020
  • Despite a growing number of natural language processing shared tasks dedicated to the use of Twitter data, there is currently no annotation tool designed specifically for this purpose. During the 6th edition of the Biomedical Linked Annotation Hackathon (BLAH), after a short review of 19 generic annotation tools, we adapted GATE and TextAE for annotating Twitter timelines. Although none of the tools reviewed allow the annotation of all information inherent in Twitter timelines, a few may be suitable provided annotators are willing to compromise on some functionality.

Avatar Augmented Annotation Interface for e-Learning (E-Learning을 위한 아바타 기반 Annotation 인터페이스)

  • Kim, Jae-Kyung;Sohn, Won-Sung;Shon, Eui-Sung;Lim, Soon-Bum;Choy, Yoon-Chul
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2007.06c
    • /
    • pp.209-212
    • /
    • 2007
  • Producing lecture content that combines avatar animation and annotation in an e-learning or distance-lecture environment demands considerable time and cost. In this paper, we propose an Avatar Augmented Annotation (AAA) interface technique for designing and sharing, on the web, lecture content that supports avatar animation and digital ink. Using AAA, an instructor can design lecture content combining animation and annotation through handwritten pen input, without resorting to complex programming or scripting languages. The input is expressed by AAA as an XML script, which is applied to the lecture content to produce lively content in which avatars and annotations are combined. Experiments showed that the AAA system is more effective educationally than existing online educational content.

Design of an Ontology for eBook Annotation System (eBook Annotation 시스템을 위한 온톨로지 설계)

  • Kim, Jong-Suk;Ko, Seung-Kyu;Lim, Soon-Bum;Choy, Yoon-Chul
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.11c
    • /
    • pp.2253-2256
    • /
    • 2002
  • In this study, we design an eBook annotation ontology for an online multi-user eBook annotation system, to support semantics-based data management, a shared common understanding of the data, and integrity checking. To express this shared understanding of eBook annotation data, the ontology is designed on the basis of EBKS (Electronic Book of Korea Standard), the Korean electronic book document standard, and is expressed using Conceptual Graphs (CG). For semantics-based processing, the ontology considers synonym and interlingua relations, and integrity and importance axioms are introduced to prevent errors and express importance when annotation data are created. The proposed ontology improves the reusability of annotation data and, by exploiting semantic information, enables effective collaboration in multi-user environments such as e-learning and cyber classes.

Design of An Interface for Explicit Free-form Annotation Creation (명확한 free-form annotation 생성을 위한 인터페이스 설계)

  • Sohn, Won-Sung;Kim, Jae-Kyung;Choy, Yoon-Chul;Lim, Soon-Bum
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.10d
    • /
    • pp.139-141
    • /
    • 2002
  • To create accurate annotation information in a free-form annotation environment, the ambiguity that arises when analyzing the relation between the geometry of a free-form marking and the annotated part must be recognized and resolved. In this paper, we first analyze the ambiguities that can occur between free-form markings and various contexts in an XML-based annotation environment, and propose an annotation correction technique to resolve them. The proposed technique is based on the context between the free-form marking and the annotated part, including textual features and document structure, and annotations are rendered and exchanged through the annotation system implemented in this work. As a result, free-form marking information created with the proposed technique covers the annotated part the user intended better than existing techniques, and thus guarantees unambiguous exchange across multiple users and different document environments.

Korean Semantic Annotation on the EXCOM Platform

  • Chai, Hyun-Zoo;Djioua, Brahim;Priol, Florence Le;Descles, Jean-Pierre
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.548-556
    • /
    • 2007
  • We present an automatic semantic annotation system for Korean on the EXCOM (EXploration COntextual for Multilingual) platform. The purpose of natural language processing is enabling computers to understand human language, so that they can perform more sophisticated tasks. Accordingly, current research concentrates more and more on extracting semantic information. The realization of semantic processing requires the widespread annotation of documents. However, compared to that of inflectional languages, the technology in agglutinative language processing such as Korean still has shortcomings. EXCOM identifies semantic information in Korean text using our new method, the Contextual Exploration Method. Our initial system properly annotates approximately 88% of standard Korean sentences, and this annotation rate holds across text domains.

Semantic Image Annotation and Retrieval in Mobile Environments (모바일 환경에서 의미 기반 이미지 어노테이션 및 검색)

  • No, Hyun-Deok;Seo, Kwang-won;Im, Dong-Hyuk
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.8
    • /
    • pp.1498-1504
    • /
    • 2016
  • The progress of mobile computing technology is producing a large amount of multimedia content such as images, creating a need for image retrieval systems that find semantically relevant images. In this paper, we propose a semantic image annotation and retrieval system for mobile environments. Previous mobile annotation approaches cannot fully express the semantics of an image due to the limitations of their current form (i.e., keyword tagging). Our approach allows mobile devices to annotate images automatically using context-aware information such as temporal and spatial data. In addition, since we annotate images using the RDF (Resource Description Framework) model, we can issue SPARQL queries for semantic image retrieval. Our system, implemented on Android, represents the semantics of images more fully and retrieves images semantically compared with other image annotation systems.

Annotation of a Non-native English Speech Database by Korean Speakers

  • Kim, Jong-Mi
    • Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.111-135
    • /
    • 2002
  • An annotation model of a non-native speech database has been devised, wherein English is the target language and Korean is the native language. The proposed annotation model features overt transcription of predictable linguistic information in native speech by the dictionary entry and several predefined types of error specification found in native language transfer. The proposed model is, in that sense, different from other previously explored annotation models in the literature, most of which are based on native speech. The validity of the newly proposed model is revealed in its consistent annotation of 1) salient linguistic features of English, 2) contrastive linguistic features of English and Korean, 3) actual errors reported in the literature, and 4) the newly collected data in this study. The annotation method in this model adopts the widely accepted conventions, the Speech Assessment Methods Phonetic Alphabet (SAMPA) and the Tones and Break Indices (ToBI). In the proposed annotation model, SAMPA is employed exclusively for segmental transcription and ToBI for prosodic transcription. The annotation of non-native speech is used to assess the speaking ability of English as a Foreign Language (EFL) learners.

Towards cross-platform interoperability for machine-assisted text annotation

  • de Castilho, Richard Eckart;Ide, Nancy;Kim, Jin-Dong;Klie, Jan-Christoph;Suderman, Keith
    • Genomics & Informatics
    • /
    • v.17 no.2
    • /
    • pp.19.1-19.10
    • /
    • 2019
  • In this paper, we investigate cross-platform interoperability for natural language processing (NLP) and, in particular, annotation of textual resources, with an eye toward identifying the design elements of annotation models and processes that are particularly problematic for, or amenable to, enabling seamless communication across different platforms. The study is conducted in the context of a specific annotation methodology, namely machine-assisted interactive annotation (also known as human-in-the-loop annotation). This methodology requires the ability to freely combine resources from different document repositories, access a wide array of NLP tools that automatically annotate corpora for various linguistic phenomena, and use a sophisticated annotation editor that enables interactive manual annotation coupled with on-the-fly machine learning. We consider three independently developed platforms, each of which utilizes a different model for representing annotations over text, and each of which performs a different role in the process.