• Title/Summary/Keyword: Semantic Net

Search Results: 248

Construction of Korean FrameNet through Manual Translation of English FrameNet (영어 FrameNet의 수동번역을 통한 한국어 FrameNet 구축 개발)

  • Nam, Sejin; Kim, Youngsik; Park, Jungyeul; Hahm, Younggyun; Hwang, Dosam; Choi, Key-Sun
    • Annual Conference on Human and Language Technology / 2014.10a / pp.38-43 / 2014
  • This paper presents a process for the manual construction of a Korean FrameNet, based on the existing English FrameNet data, that can be carried out by translators with no expert knowledge of FrameNet. From the 5,945 sentences that make up the Full Text portion of English FrameNet version 1.5 as provided by NLTK, our team extracted the 4,025 sentences that carry frame annotations and had translators manually translate them into Korean, laying a meaningful foundation for building a Korean FrameNet; results demonstrating the effectiveness of the proposed method have also been released on the web.

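The sentence-selection step described above can be approximated with NLTK. The sketch below is only an illustration, assuming the framenet_v15 data package is installed and exposed through nltk.corpus.framenet15; it keeps the full-text sentences that carry at least one frame annotation, roughly the criterion behind the 4,025 frame-bearing sentences the abstract mentions.

```python
# Illustrative sketch only (not the authors' pipeline): list FrameNet 1.5
# full-text sentences that carry at least one frame annotation via NLTK.
import nltk

nltk.download("framenet_v15")                  # FrameNet 1.5 data package
from nltk.corpus import framenet15 as fn       # 1.5 reader (nltk.corpus.framenet is 1.7)

sentences = fn.ft_sents()                      # all full-text sentences
frame_annotated = [
    sent.text
    for sent in sentences
    # assumption: the first annotation set is the POS layer; later sets
    # carry a frameName when the sentence is frame-annotated
    if any("frameName" in aset for aset in sent.annotationSet[1:])
]
print(len(frame_annotated), "of", len(sentences), "sentences have frame data")
```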

A Study on Residual U-Net for Semantic Segmentation based on Deep Learning (딥러닝 기반의 Semantic Segmentation을 위한 Residual U-Net에 관한 연구)

  • Shin, Seokyong; Lee, SangHun; Han, HyunHo
    • Journal of Digital Convergence / v.19 no.6 / pp.251-258 / 2021
  • In this paper, we proposed an encoder-decoder model that uses residual learning to improve the accuracy of U-Net-based semantic segmentation. U-Net is a deep learning-based semantic segmentation method mainly used in applications such as autonomous vehicles and medical image analysis. The conventional U-Net loses features during the compression process because of its shallow encoder structure; this loss deprives the network of the context information needed to classify objects and reduces segmentation accuracy. To address this, the proposed method extracts context information efficiently through an encoder built on residual learning, which helps prevent the feature loss and vanishing-gradient problems of the conventional U-Net. Furthermore, we reduced the number of down-sampling operations in the encoder to limit the loss of spatial information in the feature maps. In experiments on the Cityscapes dataset, the proposed method improved segmentation results by about 12% over the conventional U-Net.
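
As a rough illustration of the residual encoder idea described in the abstract (not the authors' implementation), the PyTorch sketch below shows a residual block that could replace a plain double-convolution block in a U-Net encoder; the 1x1 projection on the skip path is an assumption made so the channel counts match.

```python
# Minimal residual encoder block sketch (illustration, not the paper's code).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the channel count
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# one encoder stage: residual block followed by 2x2 max-pooling
stage = nn.Sequential(ResidualBlock(3, 64), nn.MaxPool2d(2))
print(stage(torch.randn(1, 3, 256, 256)).shape)   # torch.Size([1, 64, 128, 128])
```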

Construction of Efficient Semantic Net and Component Retrieval in Case-Based Reuse (Case 기반 재사용에서 효율적인 의미망의 구축과 컴포넌트 검색)

  • Han, Jung-Soo
    • The Journal of the Korea Contents Association / v.6 no.3 / pp.20-27 / 2006
  • In this paper we constructed a semantic net that supports efficient retrieval and reuse of object-oriented source code. The initial relevance links of the semantic net were built from a thesaurus so that the concept of object-oriented inheritance is represented between nodes. We also remedied weaknesses of the spreading activation method, which is used to activate the nodes and links of the semantic net and to propagate activation values. As a result, we propose a method that improves retrieval time while preserving the quality of spreading activation.

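A hedged sketch of the general spreading-activation idea the abstract refers to (not the paper's specific improvements): activation starts at the query nodes and is propagated along weighted links with decay, stopping when values fall below a threshold. The net, weights, and threshold below are hypothetical.

```python
# Illustrative spreading activation over a small, made-up semantic net.
from collections import defaultdict

# hypothetical semantic net: node -> list of (neighbour, link weight)
net = {
    "list":  [("queue", 0.8), ("array", 0.6)],
    "queue": [("stack", 0.7)],
    "array": [("buffer", 0.5)],
    "stack": [], "buffer": [],
}

def spread(net, seeds, decay=0.8, threshold=0.1):
    activation = defaultdict(float)
    frontier = list(seeds.items())
    while frontier:
        node, value = frontier.pop()
        if value < threshold or value <= activation[node]:
            continue                      # too weak, or already reached more strongly
        activation[node] = value
        for neighbour, weight in net.get(node, []):
            frontier.append((neighbour, value * weight * decay))
    return dict(activation)

print(spread(net, {"list": 1.0}))         # components ranked by activation value
```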

Atrous Residual U-Net for Semantic Segmentation in Street Scenes based on Deep Learning (딥러닝 기반 거리 영상의 Semantic Segmentation을 위한 Atrous Residual U-Net)

  • Shin, SeokYong; Lee, SangHun; Han, HyunHo
    • Journal of Convergence for Information Technology / v.11 no.10 / pp.45-52 / 2021
  • In this paper, we proposed an Atrous Residual U-Net (AR-UNet) to improve the accuracy of U-Net-based semantic segmentation. U-Net is mainly used in fields such as medical image analysis, autonomous vehicles, and remote sensing. The conventional U-Net extracts too few features because of the small number of convolution layers in its encoder; since the extracted features are essential for classifying object categories, this shortfall lowers segmentation accuracy. To address this problem, the AR-UNet applies residual learning and ASPP (atrous spatial pyramid pooling) in the encoder. Residual learning improves feature extraction and helps prevent the feature loss and vanishing-gradient problems caused by repeated convolutions, while ASPP extracts additional features without reducing the resolution of the feature map. Experiments on the Cityscapes dataset verified the effectiveness of the AR-UNet, which produced better segmentation results than the conventional U-Net and can therefore contribute to applications where accuracy is important.
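
The ASPP component mentioned in the abstract can be sketched as parallel atrous (dilated) convolutions whose outputs are concatenated and projected, gathering multi-scale context without shrinking the feature map. The PyTorch block below is an illustration under assumed dilation rates, not the AR-UNet code.

```python
# Illustrative ASPP-style block: parallel atrous convolutions at several
# dilation rates, concatenated and fused back to the target channel count.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 convolution fuses the concatenated branches
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 32, 32)
print(ASPP(256, 256)(x).shape)   # torch.Size([1, 256, 32, 32]); resolution preserved
```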

Query-based Document Summarization using Pseudo Relevance Feedback based on Semantic Features and WordNet (의미특징과 워드넷 기반의 의사 연관 피드백을 사용한 질의기반 문서요약)

  • Kim, Chul-Won; Park, Sun
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.7 / pp.1517-1524 / 2011
  • In this paper, a new document summarization method that uses semantic features and pseudo relevance feedback (PRF) based on WordNet is introduced to extract sentences relevant to a user query. The proposed method can improve the quality of document summaries because the inherent semantics of the documents are well reflected by the semantic features obtained from non-negative matrix factorization (NMF). In addition, it uses PRF over the semantic features and WordNet to reduce the semantic gap between the user's high-level requirement and the low-level vector representation. The experimental results demonstrate that the proposed method achieves better performance than the other methods.
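
As a small illustration of scoring sentences with NMF-derived semantic features (not the paper's summarizer, and omitting the WordNet-based PRF step), the scikit-learn sketch below factorizes a tf-idf matrix and ranks toy sentences by cosine similarity to the query in the semantic feature space.

```python
# Illustrative query-based sentence ranking with NMF semantic features.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

sentences = [
    "Semantic features are extracted with non-negative matrix factorization.",
    "WordNet synonyms expand the user query for pseudo relevance feedback.",
    "The summary keeps the sentences most relevant to the expanded query.",
]
query = "semantic features for query relevance"

vec = TfidfVectorizer()
X = vec.fit_transform(sentences + [query])          # tf-idf for sentences + query
W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)

sent_vecs, query_vec = W[:-1], W[-1]
scores = sent_vecs @ query_vec / (
    np.linalg.norm(sent_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
print(scores.argsort()[::-1])                        # sentence indices ranked for the summary
```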

Road Extraction from Images Using Semantic Segmentation Algorithm (영상 기반 Semantic Segmentation 알고리즘을 이용한 도로 추출)

  • Oh, Haeng Yeol; Jeon, Seung Bae; Kim, Geon; Jeong, Myeong-Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.239-247 / 2022
  • Cities are becoming more complex because of rapid industrialization and population growth, and urban areas change quickly through housing-site development, reconstruction, and demolition. Accurate road information is therefore needed for purposes such as High Definition Maps for autonomous driving. In the Republic of Korea, accurate spatial information can be produced through the existing map production process, but covering large areas this way is limited by time and cost. Roads are an essential map element and a core means of transportation, so road information must be updated accurately and quickly. This study uses semantic segmentation algorithms such as LinkNet, D-LinkNet, and NL-LinkNet to extract roads from drone images and then applies hyperparameter optimization to the best-performing model. As a result, the LinkNet model with a pre-trained ResNet-34 encoder achieved an mIoU of 85.125%. Subsequent studies should compare these results with state-of-the-art object detection algorithms or semi-supervised semantic segmentation techniques. The results of this study can be applied to speed up the existing map update process.
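
For reference, the mIoU figure reported above is the mean of per-class intersection over union; a minimal NumPy sketch with toy road/background masks (not the study's data):

```python
# Illustrative mIoU computation: per-class IoU averaged over present classes.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 1], [1, 1]])        # toy 2x2 masks: 0 = background, 1 = road
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))   # 0.5 * (1/2 + 2/3) ≈ 0.583
```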

Land Cover Classification Using Semantic Image Segmentation with Deep Learning (딥러닝 기반의 영상분할을 이용한 토지피복분류)

  • Lee, Seonghyeok; Kim, Jinsoo
    • Korean Journal of Remote Sensing / v.35 no.2 / pp.279-288 / 2019
  • We evaluated the land cover classification performance of SegNet, which performs semantic segmentation of aerial imagery. We selected four semantic classes, i.e., urban, farmland, forest, and water areas, and created 2,000 datasets from aerial images and land cover maps. The datasets were divided at an 8:2 ratio into training (1,600) and validation (400) sets, and we evaluated validation accuracy after tuning the hyperparameters. SegNet performance was optimal at a batch size of five with 100,000 iterations. When 200 test datasets were subjected to semantic segmentation using the trained SegNet model, the accuracies were 87.89% for farmland, 87.18% for forest, 83.66% for water, and 82.67% for urban regions; the overall accuracy was 85.48%. Thus, deep learning-based semantic segmentation can be used to classify land cover.
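
The per-class and overall accuracies reported above are typically read off a confusion matrix; a minimal NumPy sketch with a hypothetical four-class matrix (the numbers below are made up, not the study's results):

```python
# Illustrative per-class and overall accuracy from a confusion matrix.
import numpy as np

# toy confusion matrix, rows = true class, cols = predicted class
# classes: 0 urban, 1 farmland, 2 forest, 3 water
cm = np.array([
    [90,  5,  3,  2],
    [ 4, 92,  3,  1],
    [ 5,  4, 89,  2],
    [ 3,  2,  4, 91],
])

per_class_acc = cm.diagonal() / cm.sum(axis=1)     # recall per land-cover class
overall_acc   = cm.diagonal().sum() / cm.sum()
print(per_class_acc, overall_acc)
```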

The Construction of Semantic Networks for Korean "Cooking Verb" Based on the Argument Information. (논항 정보 기반 "요리 동사"의 어휘의미망 구축 방안)

  • Lee, Sukeui
    • Korean Linguistics / v.48 / pp.223-268 / 2010
  • The purpose of this paper is to build a semantic network for 'cooking class' verbs, based on KAIST's CoreNet. This work requires adjusting the concept classification: the sub-categories of the [Cooking] and [Foodstuff] hierarchies of CoreNet were revised for the construction of the verb semantic network. Building the network also requires analyzing each sense of the Korean cooking verbs; this paper focuses on Korean 'heating' and 'non-heating' verbs. Case frame structures and argument information were added to describe the verbs. Protégé 3.3 was used as the tool for building the 'cooking verb' semantic network: each verb and noun was inserted into its class and connected by the property relation markers 'HasThemeAs' and 'IsMaterialOf'.
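
To make the property-relation markers concrete, the rdflib sketch below shows how cooking verbs, foodstuffs, and dishes could be linked with HasThemeAs and IsMaterialOf object properties; the namespace, class names, and instances are hypothetical, not taken from the paper's CoreNet-based ontology.

```python
# Illustrative only: hypothetical namespace, classes, and instances.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, OWL

EX = Namespace("http://example.org/cooking#")
g = Graph()
g.bind("ex", EX)

# declare classes and the two object properties named in the abstract
for cls in (EX.CookingVerb, EX.Foodstuff, EX.Dish):
    g.add((cls, RDF.type, OWL.Class))
for prop in (EX.HasThemeAs, EX.IsMaterialOf):
    g.add((prop, RDF.type, OWL.ObjectProperty))

# a heating verb whose theme is a foodstuff that is material of a dish
g.add((EX.boil, RDF.type, EX.CookingVerb))
g.add((EX.potato, RDF.type, EX.Foodstuff))
g.add((EX.stew, RDF.type, EX.Dish))
g.add((EX.boil, EX.HasThemeAs, EX.potato))
g.add((EX.potato, EX.IsMaterialOf, EX.stew))

print(g.serialize(format="turtle"))
```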

A Study on Efficient Construction of Semantic Net for Source Code Reuse (소스코드 재사용을 위한 효율적인 의미망 구성에 관한 연구)

  • Kim, Gui-Jung
    • Proceedings of the Korea Contents Association Conference / 2005.05a / pp.475-479 / 2005
  • In this paper we constructed a semantic net that supports efficient retrieval and reuse of object-oriented source code. The initial relevance links of the semantic net were built from a thesaurus so that the concept of object-oriented inheritance is represented between nodes. We also remedied weaknesses of the spreading activation method, which is used to activate the nodes and links of the semantic net and to propagate activation values. As a result, we propose a method that improves retrieval time while preserving the quality of spreading activation.

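A hedged sketch of the thesaurus-seeded construction step described above, substituting WordNet for the authors' thesaurus: nodes are component terms and edges link synonyms and hypernyms so that inheritance-like relations are represented. The term list and edge weights are hypothetical.

```python
# Illustrative seeding of a semantic net from WordNet relations.
import networkx as nx
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

terms = ["list", "queue", "stack", "array"]
net = nx.Graph()

for term in terms:
    net.add_node(term)
    for syn in wn.synsets(term, pos=wn.NOUN):
        for lemma in syn.lemma_names():                 # synonym links
            if lemma != term:
                net.add_edge(term, lemma, relation="synonym", weight=1.0)
        for hyper in syn.hypernyms():                   # inheritance-like links
            net.add_edge(term, hyper.lemma_names()[0],
                         relation="hypernym", weight=0.5)

print(net.number_of_nodes(), "nodes,", net.number_of_edges(), "edges")
```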

Comparing the Use of Semantic Relations between Tags Versus Latent Semantic Analysis for Speech Summarization (스피치 요약을 위한 태그의미분석과 잠재의미분석간의 비교 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science / v.47 no.3 / pp.343-361 / 2013
  • We proposed and evaluated a tag semantic analysis method in which original tags are expanded and the semantic relations between original or expanded tags are used to extract key sentences from lecture speech transcripts. To do that, we first investigated how useful Flickr tag clusters and WordNet synonyms are for expanding tags and for detecting the semantic relations between tags. Then, to evaluate the proposed method, we compared it with a latent semantic analysis (LSA) method. We found that Flickr tag clusters are more effective than WordNet synonyms and that the mean F measure of the tag semantic analysis method (0.27) is higher than that of the LSA method (0.22).
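
The WordNet half of the tag expansion step can be sketched with NLTK (an illustration only; the Flickr tag cluster side is omitted): each original tag is expanded with lemma names from its WordNet synsets before semantic relations between tags are measured. The tag list and the synonym limit are hypothetical.

```python
# Illustrative WordNet-based tag expansion.
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def expand_tag(tag: str, limit: int = 5) -> set:
    expanded = {tag}
    for syn in wn.synsets(tag):
        for lemma in syn.lemma_names()[:limit]:          # take a few synonyms per sense
            expanded.add(lemma.replace("_", " ").lower())
    return expanded

original_tags = ["lecture", "summary", "speech"]
for tag in original_tags:
    print(tag, "->", sorted(expand_tag(tag)))
```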