• Title/Summary/Keyword: Semantic Net

248 search results

Semantic Image Retrieval Using Color Distribution and Similarity Measurement in WordNet (컬러 분포와 WordNet상의 유사도 측정을 이용한 의미적 이미지 검색)

  • Choi, Jun-Ho;Cho, Mi-Young;Kim, Pan-Koo
    • The KIPS Transactions: Part B
    • /
    • v.11B no.4
    • /
    • pp.509-516
    • /
    • 2004
  • Semantic interpretation of an image is incomplete without some mechanism for understanding semantic content that is not directly visible. For this reason, human-assisted content annotation through natural language attaches a textual description to an image. However, keyword-based retrieval operates only at the level of syntactic pattern matching: similarity among terms is usually computed by string matching, not concept matching. In this paper, we propose a method for computing semantic similarity in WordNet space. We consider edge count, depth, link type, and density, as well as the existence of common ancestors. We also introduce a method that applies this similarity measurement to semantic image retrieval. To combine it with low-level features, we use a spatial color distribution model. When tested on an image set from Microsoft's 'Design Gallery Line', the proposed method outperforms other approaches.
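The edge- and depth-based similarity the abstract describes can be sketched with a Wu-Palmer-style score over a toy is-a hierarchy. The taxonomy and function names below are illustrative assumptions, not the paper's actual measure or WordNet itself:

```python
# Toy is-a hierarchy standing in for WordNet (illustrative only).
PARENT = {
    "dog": "canine", "cat": "feline",
    "canine": "carnivore", "feline": "carnivore",
    "carnivore": "mammal", "mammal": "animal",
    "bird": "animal", "animal": None,
}

def ancestors(word):
    """Path from word up to the root, inclusive."""
    path = [word]
    while PARENT[path[-1]] is not None:
        path.append(PARENT[path[-1]])
    return path

def wup_similarity(a, b):
    """Wu-Palmer-style score: 2*depth(LCS) / (depth(a) + depth(b))."""
    up_b = set(ancestors(b))
    lcs = next(n for n in ancestors(a) if n in up_b)  # lowest common subsumer
    depth = lambda w: len(ancestors(w))               # root has depth 1
    return 2 * depth(lcs) / (depth(a) + depth(b))
```

Scores grow with the depth of the shared ancestor, so siblings under a deep node score higher than terms whose only common subsumer is near the root.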

Tracking Method of Dynamic Smoke based on U-net (U-net기반 동적 연기 탐지 기법)

  • Gwak, Kyung-Min;Rho, Young J.
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.4
    • /
    • pp.81-87
    • /
    • 2021
  • Artificial intelligence technology is developing rapidly as the fourth industrial revolution unfolds, and vision-based models using CNNs are being actively researched. U-net is one such model and has shown strong performance in semantic segmentation. Although various U-net studies have been conducted, research on tracking objects with unclear outlines, such as gas and smoke, is still insufficient. We conducted a U-net study to tackle this limitation. In this paper, we describe how 3D cameras are used to collect data, how the data are organized into training and test sets, how U-net is applied, and how the results are validated.

Comparative Study of Deep Learning Model for Semantic Segmentation of Water System in SAR Images of KOMPSAT-5 (아리랑 5호 위성 영상에서 수계의 의미론적 분할을 위한 딥러닝 모델의 비교 연구)

  • Kim, Min-Ji;Kim, Seung Kyu;Lee, DoHoon;Gahm, Jin Kyu
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.2
    • /
    • pp.206-214
    • /
    • 2022
  • The extent of damage from floods and droughts can be measured by identifying changes in the extent of water systems, and satellite images make it possible to grasp such changes at a glance. KOMPSAT-5 uses Synthetic Aperture Radar (SAR) to capture images regardless of weather conditions such as clouds and rain. In this paper, various deep learning models are applied to semantic segmentation of water systems in these SAR images, and their performance is compared. The models used are U-Net, V-Net, U2-Net, UNet 3+, PSPNet, DeepLab-V3, DeepLab-V3+, and PAN. In addition, performance was compared when the existing SAR image dataset was augmented by applying elastic deformation. Without data augmentation, U-Net performed best, with an IoU of 97.25% and a pixel accuracy of 98.53%. With data augmentation, DeepLab-V3 showed the best IoU of 95.15%, and V-Net showed the best pixel accuracy of 96.86%.
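The two reported metrics, IoU and pixel accuracy, can be computed for binary water masks as in the following minimal NumPy sketch (array and function names are illustrative, not from the paper):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, target).sum() / union

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the label."""
    return (pred == target).mean()
```

IoU penalizes false positives and false negatives symmetrically, which is why it is usually lower than pixel accuracy on the same prediction, as in the figures the abstract reports.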

Comparison of Performance of Medical Image Semantic Segmentation Model in ATLAS V2.0 Data (ATLAS V2.0 데이터에서 의료영상 분할 모델 성능 비교)

  • So Yeon Woo;Yeong Hyeon Gu;Seong Joon Yoo
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.267-274
    • /
    • 2023
  • Because the collection of public medical image data is limited, dataset sizes are often insufficient, and existing studies may therefore be overfitted to a particular public dataset. In this paper, we compare the performance of eight medical image semantic segmentation models (U-Net, X-Net, HarDNet, SegNet, PSPNet, SwinUnet, 3D-ResU-Net, UNETR) to revalidate the superiority of existing models. Anatomical Tracings of Lesions After Stroke (ATLAS) V1.2, a public dataset for stroke diagnosis, is used to compare model performance against performance on ATLAS V2.0. Experimental results show that most models perform similarly on V1.2 and V2.0, but X-Net and 3D-ResU-Net perform better on the V1.2 dataset. These results can be interpreted as indicating that these models may be overfitted to V1.2.

Semantic-Based Web Information Filtering Using WordNet (어휘사전 워드넷을 활용한 의미기반 웹 정보필터링)

  • Byeon, Yeong-Tae;Hwang, Sang-Gyu;O, Gyeong-Muk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.11S
    • /
    • pp.3399-3409
    • /
    • 1999
  • Information filtering for internet search, which presents a new information retrieval environment, differs from traditional tasks such as bibliographic information filtering and news-group and e-mail filtering. Therefore, we cannot expect high performance from traditional information filtering models when they are applied to this new environment. To solve this problem, we examine the characteristics of the new filtering environment and propose a semantic-based filtering model that includes a new filtering method using WordNet. For extracting keywords from documents, this model uses the SDCC (Semantic Distance for Common Category) algorithm instead of the TF/IDF method usually used by traditional approaches. This method also addresses the word sense ambiguity problem, one of the causes of reduced efficiency in internet search. The semantic-based filtering model can filter web pages selectively while taking the user's level into account, and we show in this paper that the proposed method makes internet search more convenient for users than traditional filtering methods.
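For contrast with the proposed SDCC algorithm (not reproduced here), the TF/IDF baseline the abstract mentions can be sketched in a few lines. Tokenized documents are assumed and the function name is illustrative:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document term -> tf-idf weight, with idf = log(N / df)."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in docs for t in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

Note that a term occurring in every document gets weight zero, which illustrates why purely frequency-based keyword extraction can miss semantically important but common terms, the gap SDCC is meant to address.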


Improving The Performance of Triple Generation Based on Distant Supervision By Using Semantic Similarity (의미 유사도를 활용한 Distant Supervision 기반의 트리플 생성 성능 향상)

  • Yoon, Hee-Geun;Choi, Su Jeong;Park, Seong-Bae
    • Journal of KIISE
    • /
    • v.43 no.6
    • /
    • pp.653-661
    • /
    • 2016
  • Existing pattern-based triple generation systems based on distant supervision can be flawed by the distant supervision assumption. To mitigate this overly strong assumption, previous studies have commonly used statistical information to measure the confidence of patterns. In this study, we propose a more accurate confidence measure based on semantic similarity between patterns and properties. Word embedding, an unsupervised learning method, and WordNet-based similarity measures were adopted for learning word meanings and measuring semantic similarity. To resolve the language mismatch between patterns and properties, we adopted CCA to align bilingual word embedding models and a translation-based approach for the WordNet-based measure. Our experiments indicate that the accuracy of triples filtered by the semantic similarity-based confidence measure is 16% higher than that of the statistics-based approach, suggesting that the semantic similarity-based measure is more effective for generating high-quality triples.
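The embedding side of such a confidence measure can be illustrated with cosine similarity between a pattern's word vectors and the target property's vector. The vectors, the max aggregation, and the function names below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pattern_confidence(pattern_vecs, property_vec):
    """Score a pattern by the best cosine match between its word vectors
    and the property's vector; a higher score suggests the pattern is
    more likely to express the property."""
    return max(cosine(v, property_vec) for v in pattern_vecs)
```

Triples extracted by patterns scoring below a threshold would then be filtered out, replacing the purely statistical confidence of earlier systems.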

Alignment of Hypernym-Hyponym Noun Pairs between Korean and English, Based on the EuroWordNet Approach (유로워드넷 방식에 기반한 한국어와 영어의 명사 상하위어 정렬)

  • Kim, Dong-Sung
    • Language and Information
    • /
    • v.12 no.1
    • /
    • pp.27-65
    • /
    • 2008
  • This paper presents a set of methodologies for aligning hypernym-hyponym noun pairs between Korean and English, based on the EuroWordNet approach. Following the methods used in EuroWordNet, our approach makes extensive use of WordNet in a four-step building process: 1) monolingual dictionaries are used to extract proper hypernym-hyponym noun pairs, 2) a bilingual dictionary converts the extracted pairs, 3) WordNet serves as the backbone of the alignment criteria, and 4) WordNet is used to select the most similar pair among the candidates. The importance of this study lies not only in enriching semantic links between the two languages, but also in integrating lexical resources based on a language-specific and language-dependent structure. Our approach aims to build an accurate and detailed lexical resource with proper measures rather than to rapidly develop a generic one using NLP techniques.


Enhancing Text Document Clustering Using Non-negative Matrix Factorization and WordNet

  • Kim, Chul-Won;Park, Sun
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.4
    • /
    • pp.241-246
    • /
    • 2013
  • A classic document clustering technique may incorrectly classify documents into different clusters when documents that should belong to the same cluster share no terms. Recently, internal and external knowledge-based approaches have been used to overcome this problem in text document clustering. However, the clustering results of these approaches are influenced by the inherent structure and topical composition of the documents, and organizing knowledge into an ontology is expensive. In this paper, we propose a new enhanced text document clustering method using non-negative matrix factorization (NMF) and WordNet. The semantic terms extracted as cluster labels by NMF represent the inherent structure of a document cluster well. The proposed method further improves clustering quality by using cluster labels and term weights based on the term mutual information of WordNet. The experimental results demonstrate that the proposed method achieves better performance than other text clustering methods.
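The NMF step can be sketched with plain Lee-Seung multiplicative updates on a small term-document matrix; the top-weighted terms of each basis column then serve as candidate cluster labels. This is a minimal sketch under the assumption of a small dense matrix; the WordNet-based term weighting is not reproduced:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Factor V (terms x docs) ~ W @ H with nonnegative W (terms x k)
    and H (k x docs), via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def cluster_labels(W, terms, top=2):
    """For each latent topic, return its top-weighted terms as a label."""
    return [[terms[i] for i in np.argsort(W[:, j])[::-1][:top]]
            for j in range(W.shape[1])]
```

Each column of H assigns documents to topics, while the dominant terms in the corresponding column of W give a human-readable label for that cluster.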

DP-LinkNet: A convolutional network for historical document image binarization

  • Xiong, Wei;Jia, Xiuhong;Yang, Dichun;Ai, Meihui;Li, Lirong;Wang, Song
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.5
    • /
    • pp.1778-1797
    • /
    • 2021
  • Document image binarization is an important pre-processing step in document analysis and archiving. The state-of-the-art models for document image binarization are variants of encoder-decoder architectures, such as FCN (fully convolutional network) and U-Net. Despite their success, they still suffer from three limitations: (1) reduced feature map resolution due to consecutive strided pooling or convolutions, (2) multiple scales of target objects, and (3) reduced localization accuracy due to the built-in invariance of deep convolutional neural networks (DCNNs). To overcome these three challenges, we propose an improved semantic segmentation model, referred to as DP-LinkNet, which adopts the D-LinkNet architecture as its backbone, with the proposed hybrid dilated convolution (HDC) and spatial pyramid pooling (SPP) modules between the encoder and the decoder. Extensive experiments are conducted on recent document image binarization competition (DIBCO) and handwritten document image binarization competition (H-DIBCO) benchmark datasets. Results show that our proposed DP-LinkNet outperforms other state-of-the-art techniques by a large margin. Our implementation and the pre-trained models are available at https://github.com/beargolden/DP-LinkNet.
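The hybrid dilated convolution (HDC) idea of stacking convolutions with increasing dilation rates, growing the receptive field without pooling, can be illustrated in one dimension. This is a toy NumPy sketch of the building block, not the DP-LinkNet module itself:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D convolution with gaps of `dilation` between taps."""
    k = len(kernel)
    span = (k - 1) * dilation          # distance covered by the kernel
    pad = span // 2
    xp = np.pad(x, (pad, span - pad))
    return np.array([sum(kernel[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

# Stacking dilations 1, 2, 4 (the HDC pattern) gives a receptive field of
# 15 taps with three 3-tap kernels, versus 7 taps for three undilated ones.
```

Because no resolution is discarded, the output stays the same length as the input, which is the property the paper exploits to avoid the localization loss caused by strided pooling.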

Detection of Number and Character Area of License Plate Using Deep Learning and Semantic Image Segmentation (딥러닝과 의미론적 영상분할을 이용한 자동차 번호판의 숫자 및 문자영역 검출)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.1
    • /
    • pp.29-35
    • /
    • 2021
  • License plate recognition plays a key role in intelligent transportation systems, so efficiently detecting the number and character areas is a very important step. In this paper, we propose a method to effectively detect license plate number and character areas by applying deep learning and a semantic image segmentation algorithm. The proposed method detects number and character areas directly from the license plate, without preprocessing such as pixel projection. The license plate images were acquired from a fixed camera installed on the road and cover various real situations, taking both weather and lighting changes into account. The input images were normalized to reduce color variation, and the deep learning networks used in the experiments were VGG16, VGG19, ResNet18, and ResNet50. To examine the performance of the proposed method, we experimented with 500 license plate images: 300 were used for training and 200 for testing. Computer simulation showed that ResNet50 performed best, achieving 95.77% accuracy.