• Title/Summary/Keyword: visual classification


The Comparison of Visual Interpretation & Digital Classification of SPOT Satellite Image

  • Lee, Kyoo-Seock; Lee, In-Soo; Jeon, Seong-Woo
    • Proceedings of the KSRS Conference / 1999.11a / pp.433-438 / 1999
  • Land use in Korea is high-density, so image classification using coarse-resolution satellite imagery may not provide land cover classification results as good as expected. The purpose of this paper is to compare the result of visual interpretation with that of digital image classification of 20 m resolution SPOT satellite imagery at Kwangju-eup, Kyunggi-do, Korea. The classes are forest, cultivated field, pasture, water, and residential area, which are clearly discriminated in visual interpretation. A maximum likelihood classifier was used for the digital image classification. Accuracy assessment was done by comparing each classification result with ground truth data obtained from field checking. The visual interpretation yielded a total accuracy 9.23 percent higher than that of the digital image classification, demonstrating the importance of visual interpretation for areas with high-density land use such as the study site in Korea.
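The maximum likelihood classifier used above assigns each pixel to the class whose training statistics make it most probable. A minimal sketch, assuming diagonal covariances and hypothetical two-band per-class statistics (not the authors' actual implementation):

```python
import math

def gaussian_log_likelihood(pixel, mean, var):
    """Log-likelihood of a pixel under a diagonal-covariance Gaussian."""
    ll = 0.0
    for x, m, v in zip(pixel, mean, var):
        ll += -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
    return ll

def ml_classify(pixel, class_stats):
    """Assign the pixel to the class with the highest log-likelihood."""
    return max(class_stats, key=lambda c: gaussian_log_likelihood(pixel, *class_stats[c]))

# Hypothetical per-class (mean, variance) statistics for two spectral bands.
stats = {
    "forest": ([30.0, 80.0], [25.0, 25.0]),
    "water":  ([10.0, 15.0], [16.0, 16.0]),
}
print(ml_classify([28.0, 75.0], stats))  # closest to the forest statistics
```

In practice the classifier is trained per class from full covariance matrices over all spectral bands; the diagonal case above just shows the decision rule.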


Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei; Li, Bicheng; Liu, Xin; Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. First, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by the random forest idea, to reduce the randomness of E2LSH. Second, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that the method is superior to state-of-the-art object classification methods.
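The saliency-weighting step above can be sketched as follows: each keypoint's vote into the bag-of-visual-words histogram is weighted by the saliency value at its location. This is a toy illustration with made-up word assignments and a tiny saliency map; it omits the E2LSH dictionary construction entirely.

```python
def saliency_weighted_histogram(word_ids, keypoints, saliency, n_words):
    """Accumulate a BoVW histogram in which each visual word's vote is
    weighted by the saliency value at its keypoint location."""
    hist = [0.0] * n_words
    for wid, (row, col) in zip(word_ids, keypoints):
        hist[wid] += saliency[row][col]
    total = sum(hist)
    return [h / total for h in hist] if total else hist

# Hypothetical toy data: 4 keypoints, a 2x2 saliency map, 3 visual words.
saliency = [[0.9, 0.1],
            [0.8, 0.2]]
hist = saliency_weighted_histogram([0, 1, 0, 2],
                                   [(0, 0), (0, 1), (1, 0), (1, 1)],
                                   saliency, 3)
print(hist)  # words landing on salient pixels dominate the histogram
```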

Effects of Pressure Ulcer Classification System Education Program on Knowledge and Visual Discrimination Ability of Pressure Ulcer Classification and Incontinence-Associated Dermatitis for Hospital Nurses (욕창 분류체계교육프로그램이 병원간호사의 욕창 분류체계와 실금관련 피부염에 대한 지식과 시각적 감별 능력에 미치는 효과)

  • Lee, Yun Jin; Park, Seungmi
    • Journal of Korean Biological Nursing Science / v.16 no.4 / pp.342-348 / 2014
  • Purpose: The purpose of this study was to examine the effects of pressure ulcer classification system education on hospital nurses' knowledge and visual discrimination ability regarding the pressure ulcer classification system and incontinence-associated dermatitis. Methods: A one-group pre- and post-test design was used. A convenience sample of 96 nurses participating in pressure ulcer classification system education was enrolled at a single institution. The education program consisted of a 50-minute lecture on the pressure ulcer classification system and case studies. A knowledge test on the pressure ulcer classification system and incontinence-associated dermatitis and a visual discrimination tool, consisting of 21 photographs with clinical information, were used. Paired t-tests were performed using SPSS/WIN 18.0. Results: The overall mean scores for pressure ulcer classification system knowledge (t=4.67, p<.001) and visual discrimination ability (t=10.58, p<.001) increased significantly after the education. Conclusion: Overall understanding of the pressure ulcer classification system and incontinence-associated dermatitis increased after the education, but visual discrimination ability remained lacking for stage III and suspected deep tissue injury. Differentiated continuing education based on clinical practice is needed to improve knowledge and visual discrimination ability for the pressure ulcer classification system, and comparative experimental research is required to evaluate its effects.
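The paired t statistic reported above (mean of the pre/post differences divided by its standard error) can be computed directly. A minimal sketch with hypothetical pre/post scores, not the study's data:

```python
import math

def paired_t(pre, post):
    """Paired t statistic: mean difference over its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical pre/post knowledge scores for five nurses.
pre  = [10, 12, 9, 11, 13]
post = [14, 15, 12, 13, 16]
t = paired_t(pre, post)
print(round(t, 2))
```

The p-value then comes from the t distribution with n-1 degrees of freedom, which statistical packages such as SPSS report automatically.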

Comparison of Visual Interpretation and Image Classification of Satellite Data

  • Lee, In-Soo; Shin, Dong-Hoon; Ahn, Seung-Mahn; Lee, Kyoo-Seock; Jeon, Seong-Woo
    • Korean Journal of Remote Sensing / v.18 no.3 / pp.163-169 / 2002
  • Land use on the Korean peninsula is very complicated and high-density. Therefore, image classification using coarse-resolution satellite images may not provide good results for land cover classification. The purpose of this paper is to compare the classification accuracy of visual interpretation with that of digital image classification of satellite remote sensing data such as 20 m SPOT and 30 m TM. In this study, hybrid classification was used. Classification accuracy was assessed by comparing each classification result with reference data obtained from KOMPSAT-1 EOC imagery, air photos, and field surveys.
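The accuracy assessment described above reduces to comparing classified labels against reference labels at the same sample points. A minimal sketch of the overall-accuracy computation, using hypothetical labels rather than the study's reference data:

```python
def overall_accuracy(classified, reference):
    """Fraction of sample points whose classified label matches the reference."""
    matches = sum(c == r for c, r in zip(classified, reference))
    return matches / len(reference)

# Hypothetical labels for eight check points.
classified = ["forest", "water", "urban", "forest", "crop", "water", "crop", "forest"]
reference  = ["forest", "water", "urban", "crop",   "crop", "water", "urban", "forest"]
print(overall_accuracy(classified, reference))  # 6 of 8 points agree -> 0.75
```

Full assessments usually tabulate the same pairs into a confusion matrix to obtain per-class producer's and user's accuracy as well.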

Image classification and captioning model considering a CAM-based disagreement loss

  • Yoon, Yeo Chan; Park, So Young; Park, Soo Myoung; Lim, Heuiseok
    • ETRI Journal / v.42 no.1 / pp.67-77 / 2020
  • Image captioning has received significant interest in recent years, and notable results have been achieved. Most previous approaches have focused on generating visual descriptions from images, whereas a few have exploited visual descriptions for image classification. This study demonstrates that good performance can be achieved for both description generation and image classification through an end-to-end joint learning approach with a loss function that encourages the two tasks to reach a consensus. Given images and visual descriptions, the proposed model learns a multimodal intermediate embedding that can represent both the textual and visual characteristics of an object, and sharing this embedding improves both tasks. Through a novel loss function based on class activation mapping, which localizes the discriminative image region of a model, a higher score is achieved when the captioning and classification models reach a consensus on the key parts of the object. Using the proposed model, substantially improved performance was established for each task on the UCSD Birds and Oxford Flowers datasets.
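The consensus idea behind the disagreement loss can be illustrated with a toy penalty between two activation maps: normalize each map and penalize how far apart they are, so the loss is small when both tasks attend to the same regions. This is only an intuition sketch with made-up numbers, not the paper's actual CAM-based loss formulation.

```python
def normalize(cam):
    """Scale a flat activation map so its values sum to 1."""
    total = sum(cam)
    return [v / total for v in cam]

def disagreement(cam_cls, cam_cap):
    """Mean squared difference between two normalized activation maps:
    small when classification and captioning attend to the same regions."""
    a, b = normalize(cam_cls), normalize(cam_cap)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Hypothetical 2x2 activation maps, flattened row by row.
agree    = disagreement([4.0, 1.0, 1.0, 2.0], [8.0, 2.0, 2.0, 4.0])
disagree = disagreement([4.0, 1.0, 1.0, 2.0], [1.0, 4.0, 2.0, 1.0])
print(agree < disagree)  # identical attention after normalization -> smaller loss
```

In the joint model this penalty would be added to the classification and captioning losses so that gradients push both branches toward the same discriminative regions.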

Nearest-Neighbors Based Weighted Method for the BOVW Applied to Image Classification

  • Xu, Mengxi; Sun, Quansen; Lu, Yingshu; Shen, Chenming
    • Journal of Electrical Engineering and Technology / v.10 no.4 / pp.1877-1885 / 2015
  • This paper presents a new nearest-neighbors-based weighted representation for images and a weighted K-nearest-neighbors (WKNN) classifier to improve the precision of image classification using bag-of-visual-words (BOVW) based models. Scale-invariant feature transform (SIFT) features are first extracted from images. Then, the K-means++ algorithm is adopted in place of the conventional K-means algorithm to generate a more effective visual dictionary. Furthermore, the histogram of visual words is made more expressive by the proposed weighted vector quantization (WVQ). Finally, the WKNN classifier is applied to enhance classification between images in which similar levels of background noise are present. Average precision and absolute change degree are calculated to assess classification performance and the stability of the K-means++ algorithm, respectively. Experimental results on three diverse datasets, Caltech-101, Caltech-256, and PASCAL VOC 2011, show that the proposed WVQ and WKNN methods further improve classification performance.
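The WKNN idea above, letting closer neighbors cast heavier votes, can be sketched with inverse-distance weighting. A minimal illustration over hypothetical 2-D feature vectors; the paper's exact weighting scheme may differ.

```python
def wknn_classify(query, train, k=3):
    """Weighted K-NN: each of the k nearest neighbors votes for its label
    with weight 1/(distance + eps), so closer samples count more."""
    eps = 1e-9
    dists = sorted(
        (sum((q - x) ** 2 for q, x in zip(query, feat)) ** 0.5, label)
        for feat, label in train
    )
    votes = {}
    for d, label in dists[:k]:
        votes[label] = votes.get(label, 0.0) + 1.0 / (d + eps)
    return max(votes, key=votes.get)

# Hypothetical 2-D BoVW-like feature vectors with class labels.
train = [([0.0, 0.0], "cat"), ([0.1, 0.2], "cat"),
         ([1.0, 1.0], "dog"), ([0.9, 1.1], "dog")]
print(wknn_classify([0.2, 0.1], train))  # weighted votes favor "cat"
```

With plain majority voting a far-away third neighbor can flip the decision; the inverse-distance weights make the outcome robust to that.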

Classification of Restaurant Table Settings with Gestalt's Law of Visual Perception (외식 상차림의 게슈탈트 시지각 법칙에 따른 분류)

  • Joo, Seon Hee; Han, Kyung Soo
    • Journal of the Korean Society of Food Culture / v.28 no.2 / pp.177-185 / 2013
  • This study analyzed restaurant table settings with Gestalt's laws of visual perception to obtain basic data for future marketing strategies. The research method involved applying images of restaurant table settings to Gestalt's laws of visual perception, performing content analysis, and conducting frequency analysis as well as a chi-square test for classification by visual perception. Results show a significant difference in the laws of visual perception, especially the laws of nearness and closure, between table settings of different countries and backgrounds, such as Korean, Japanese, Chinese, and Western cultures. For the law of nearness, Chinese dishes scored low, while dishes from other countries, including Korean dishes, scored high. For the law of closure, Japanese and Western dishes had low values, while other countries' dishes, including Korean dishes, scored high. Further studies on consumer awareness by visual perception classification need to be conducted.

The Visual Temperature of Textile (원단의 시각적 온도감)

  • Oh, Jiyeon; Park, YungKyung
    • Science of Emotion and Sensibility / v.21 no.1 / pp.155-164 / 2018
  • Temperature is a sensation that can be felt through both touch and sight, yet the concept of temperature sensation is rarely considered together with visual and tactile sensation. In this study, the temperature sensed through touch and vision was investigated as the visual temperature depending on color and material characteristics. Textiles were selected as samples that could include both color and material characteristics; the sample set comprised 15-16 textiles each of Yellow, Red, Blue, and Green, for a total of 90 samples. The analysis first examined the warm-cool of the Yellow, Red, Blue, and Green colors, and then the visual temperature according to visual classification and tactile classification. The correlation of visual temperature with weight, thickness, and unevenness was also investigated. As a result, the number of textiles perceived as Cool or Warm differed according to the warm-cool of the colors in the same textile. However, the visual temperature differed across textile classifications, which was particularly noticeable in thin, see-through, and matte textiles. Regarding weight, thickness, unevenness, and visual temperature, the textile classification related to weight was hard, matte textiles, and the classification related to thickness was thin, see-through textiles.

Image Classification Using Bag of Visual Words and Visual Saliency Model (이미지 단어집과 관심영역 자동추출을 사용한 이미지 분류)

  • Jang, Hyunwoong; Cho, Soosun
    • KIPS Transactions on Software and Data Engineering / v.3 no.12 / pp.547-552 / 2014
  • As social multimedia sites such as Flickr and Facebook have become popular, the amount of image information has been increasing very fast, and many studies have addressed accurate social image retrieval. Some of them performed web image classification using the semantic relations of image tags and BoVW (Bag of Visual Words). In this paper, we propose a method to detect salient regions in images using the GBVS (Graph-Based Visual Saliency) model, which can eliminate less important regions such as the background. First, we construct a BoVW based on the SIFT algorithm from a database of preliminarily retrieved images with semantically related tags. Second, we detect salient regions in test images using the GBVS model. The image classification results showed higher accuracy than previous research, so we expect that our method can classify a variety of images more accurately.
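The background-elimination step above can be sketched as a saliency gate on the BoVW pipeline: keypoints whose saliency falls below a threshold are simply discarded before the histogram is built. A toy illustration with made-up assignments and a tiny saliency map; the threshold value and the hard gating are assumptions, not the paper's exact procedure.

```python
def salient_bovw_histogram(word_ids, keypoints, saliency, n_words, threshold=0.5):
    """Build a BoVW histogram using only keypoints inside the salient region
    (saliency >= threshold), discarding background visual words."""
    hist = [0] * n_words
    for wid, (row, col) in zip(word_ids, keypoints):
        if saliency[row][col] >= threshold:
            hist[wid] += 1
    return hist

# Hypothetical toy data: 2x2 saliency map, 3 visual words, 4 keypoints.
saliency = [[0.9, 0.2],
            [0.7, 0.1]]
hist = salient_bovw_histogram([0, 1, 0, 2],
                              [(0, 0), (0, 1), (1, 0), (1, 1)],
                              saliency, 3)
print(hist)  # background keypoints (saliency < 0.5) contribute nothing
```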

A Study on Visual Humor Expression in Fake Technique Fashion

  • Kim, Jinyoung; Kan, Hosup
    • Journal of Fashion Business / v.21 no.3 / pp.43-57 / 2017
  • This study concerns visual humor in fake technique fashion. While previous studies focused mainly on the expression techniques of fake technique fashion, this study analyzed visual humor in fake technique fashion based on classification criteria for visual humor expression techniques, differentiating it from earlier work. The purpose was to derive the visual humor in fake technique fashion by classifying cases of fake technique fashion and re-classifying the outcomes of this primary classification according to the criteria of visual humor expression techniques. As for methods, a theoretical study of humor, the expression techniques of visual humor, fake fashion, and fake expression techniques was conducted through a literature review. Subsequently, 485 fake technique fashion images obtained through research were classified by expression technique, and cases of fake technique fashion were analyzed. By combining the theoretical study with the case studies, fake technique fashion was re-classified according to the criteria of visual humor expression techniques to derive the characteristics of its visual humor. Based on these techniques, visual humor in fake technique fashion was created by distortion and transformation, which made the fake look real by distorting or transforming it; enlargement and reduction, which created new forms by altering familiar forms; and typeplay, which added fun by changing familiar luxury logos into various forms.