• Title/Summary/Keyword: Visual language

VOQL* : A Visual Object Query Language with Inductively-Defined Formal Semantics (VOQL* : 귀납적으로 정의된 형식 시맨틱을 지닌 시각 객체 질의어)

  • Lee, Suk-Kyoon
    • Journal of KIISE:Databases / v.27 no.2 / pp.151-164 / 2000
  • The Visual Object Query Language (VOQL) recently proposed for object databases has been successful in visualizing path expressions and set-related conditions and in providing formal semantics. However, VOQL has several problems. Due to unrealistic assumptions, only set-related conditions can be represented in VOQL. Due to the lack of an explicit language construct for the notion of variables, queries are often awkward and less intuitive. In this paper, we propose VOQL*, which extends VOQL to remove these drawbacks. We introduce the notion of visual variables and refine the syntax and semantics of VOQL based on visual variables. We carefully design the language constructs of VOQL* to reflect the syntax of OOPC, so that constructs such as visual variables, visual elements, VOQL* simple terms, VOQL* structured terms, VOQL* basic formulas, VOQL* formulas, and VOQL* query expressions are built up hierarchically and inductively, like those of OOPC. Most importantly, we formally define the semantics of each language construct of VOQL* by induction using OOPC. Because of the well-defined syntax and semantics, queries in VOQL* are clear, concise, and intuitive. We also provide an effective procedure to translate queries in VOQL* into those in OOPC. We believe that VOQL* is the first visual query language with a well-defined syntax reflecting the syntactic structure of logic and with semantics formally defined by induction.
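
The abstract's hierarchy of constructs (terms built up into formulas, then translated into OOPC) lends itself to an inductive datatype. The following is a minimal sketch of that general idea only; the class names, the toy `Compare`/`And` constructs, and the textual output format are all invented for illustration and are not the paper's actual VOQL* definitions or its OOPC translation procedure.

```python
# Invented sketch of inductively defined query constructs: simple terms,
# basic formulas, and composite formulas, with a translation function
# defined by induction over the structure, mirroring the abstract's idea.
from dataclasses import dataclass

@dataclass
class Var:            # a "visual variable" ranging over objects
    name: str

@dataclass
class Path:           # a simple term: variable.attribute
    base: Var
    attr: str

@dataclass
class Compare:        # a basic formula comparing a term with a constant
    left: Path
    op: str
    right: object

@dataclass
class And:            # formulas compose inductively
    left: object
    right: object

def to_logic(node) -> str:
    """Translate a query construct into a textual logic formula,
    case by case over the inductive structure."""
    if isinstance(node, Var):
        return node.name
    if isinstance(node, Path):
        return f"{to_logic(node.base)}.{node.attr}"
    if isinstance(node, Compare):
        return f"{to_logic(node.left)} {node.op} {node.right!r}"
    if isinstance(node, And):
        return f"({to_logic(node.left)} and {to_logic(node.right)})"
    raise TypeError(f"unknown construct: {node!r}")

# e.g. employees x with x.salary > 50000 and x.dept == 'R&D'
x = Var("x")
q = And(Compare(Path(x, "salary"), ">", 50000),
        Compare(Path(x, "dept"), "==", "R&D"))
print(to_logic(q))
```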

Improving visual relationship detection using linguistic and spatial cues

  • Jung, Jaewon;Park, Jongyoul
    • ETRI Journal / v.42 no.3 / pp.399-410 / 2020
  • Detecting visual relationships in an image is important for image understanding. It enables higher-level tasks such as predicting the next scene and understanding what occurs in an image. A visual relationship comprises a subject, a predicate, and an object, and is related to visual, language, and spatial cues. The predicate explains the relationship between the subject and object and can be categorized into classes such as prepositions and verbs. Even relationships that share the same predicate can exhibit a large visual gap. This study improves upon a previous approach, which used language cues with two losses and a spatial cue containing only the individual information of each box, by adding relative information about the subject and object. An architectural limitation of the earlier work is demonstrated and overcome so that all zero-shot visual relationships can be detected. A new problem is identified, together with an explanation of how it degrades performance. Experiments on the VRD and VG datasets show a significant improvement over previous results.
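
As a rough illustration of combining the three cue types, the sketch below scores a <subject, predicate, object> candidate by summing visual and language scores with a weighted relative-spatial term. The feature choice and hand-set weights are invented; the paper's model learns these jointly in a network rather than combining fixed scores like this.

```python
# Invented sketch: fuse visual, language, and relative-spatial cues into
# one score for a candidate relationship triple.
import numpy as np

def relative_spatial_features(sub_box, obj_box):
    """Relative position and log-scale of the object w.r.t. the subject
    (the 'relative information' the abstract adds to per-box features).
    Boxes are (x, y, w, h)."""
    sx, sy, sw, sh = sub_box
    ox, oy, ow, oh = obj_box
    return np.array([(ox - sx) / sw, (oy - sy) / sh,
                     np.log(ow / sw), np.log(oh / sh)])

def score_triple(visual_score, language_score, sub_box, obj_box, w_spatial):
    spatial = relative_spatial_features(sub_box, obj_box)
    return visual_score + language_score + float(w_spatial @ spatial)

# e.g. a person (subject) next to a horse (object); weights are illustrative
print(score_triple(0.8, 0.5, (10, 20, 50, 100), (5, 60, 120, 80),
                   np.array([0.1, 0.2, 0.1, 0.1])))
```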

Functional MR Imaging in the speech-control centers of the brain : Comparison study between Visual and Auditory Language instrument methods in Normal Volunteers (Auditory language task를 이용한 자기공명영상에 관한 고찰 : Visual language task와의 비교)

  • Goo Eun Hoe;Kim In Soo;Jeong Heon Jeong;You Byung Ki;Kim Dong Sung;Choi Cheon Kyu;Song In Chan
    • Journal of The Korean Radiological Technologist Association / v.28 no.1 / pp.161-166 / 2002
  • Purpose: To compare auditory-instruction and visual-instruction language generation tasks in fMRI of the speech-control centers of the brain in normal adult volunteers. Materials and Methods: A total of 6 normal adult volunteers (4 men, 2 women; mean age 24) performed in 1.5

Way to the Method of Teaching Korean Speculative Expression Using Visual Thinking : Focusing on '-(으)ㄹ 것 같다', '-나 보다' (비주얼 씽킹을 활용한 한국어 추측 표현 교육 방안 : '-(으)ㄹ 것 같다', '-나 보다'를 대상으로)

  • Lee, Eun-Kyoung;Bak, Jong-Ho
    • Journal of Korea Entertainment Industry Association / v.15 no.5 / pp.141-151 / 2021
  • This study analyzed the meanings and functions of '-(으)ㄹ 것 같다' and '-나 보다' among their various semantic functions depending on the situation, and discussed ways to teach speculative expressions more efficiently by extending traditional teaching methods with visualizations that apply visual thinking in actual Korean language education. The speculative expressions studied here convey the speaker's conjecture about a thing or situation, with slight differences in meaning depending on the basis and subject of the speculation. We propose a teaching method that can improve the diversity and efficiency of teaching and learning by visualizing information and knowledge for speculative expressions that exhibit fine semantic differences across situations. Using visual thinking in language education can simplify and present linguistic information through the visualization of linguistic knowledge, and learners can organize and structure that knowledge efficiently. It also has the advantage of supporting long-term memory of linguistic information. Attempts at various educational methods applicable in Korean language education settings can contribute to establishing a more systematic and efficient pedagogy, and the visual thinking proposed in this study is meaningful in that it can bring interest and efficiency to international students.

Improving Visual Object Query Language (VOQL) by Introducing Visual Elements and Visual Variables (시각 요소와 시각 변수를 통한 시각 객체 질의어(VOQL)의 개선)

  • Lee, Seok-Gyun
    • The Transactions of the Korea Information Processing Society / v.6 no.6 / pp.1447-1457 / 1999
  • The recently proposed Visual Object Query Language (VOQL) is a visual object-oriented database query language that can effectively represent queries on complex structured data, since schema information is visually included in query expressions. VOQL, a graph-based query language with inductively defined semantics, can concisely represent various text-based path expressions as graphs and clearly convey the semantics of complex path expressions. However, the existing VOQL assumes that all attributes are multi-valued and cannot visualize the concept of binding object variables. Therefore, VOQL query expressions are not intuitive, and it is difficult to extend the existing VOQL theoretically. In this paper, we propose a VOQL that improves on these problems. The improved VOQL visualizes the result of a single-valued attribute as a visual element and that of a multi-valued attribute as a subblob, and specifies the binding of object variables by introducing visual variables, so that it represents the semantics of queries intuitively and clearly.
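
The single- versus multi-valued distinction can be pictured with a small data model: a blob holds visual elements for single-valued attribute results and nested subblobs for multi-valued ones. The sketch below is an invented rendering model for that idea, not the paper's formal definition of VOQL's notation.

```python
# Invented sketch: render an attribute as a visual element or a subblob
# depending on its cardinality, as the abstract describes.
from dataclasses import dataclass, field

@dataclass
class Blob:                 # a set of objects in the query graph
    label: str
    elements: list = field(default_factory=list)   # single-valued results
    subblobs: list = field(default_factory=list)   # multi-valued results

def add_attribute(blob: Blob, name: str, multi_valued: bool) -> Blob:
    """Attach an attribute to a blob according to its cardinality."""
    if multi_valued:
        child = Blob(f"{blob.label}.{name}")
        blob.subblobs.append(child)
        return child        # a subblob is itself a set, so it can nest
    blob.elements.append(name)
    return blob

emp = Blob("Employee")
add_attribute(emp, "name", multi_valued=False)      # visual element
add_attribute(emp, "projects", multi_valued=True)   # subblob
print(emp)
```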

Collaborative Social Tagging for eBook using External DSL Approach

  • Yoo, Hwan-Soo;Kim, Seong-Whan
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.1068-1072 / 2014
  • We propose collaborative social tagging for eBooks using an external DSL approach. The goals of this paper are (1) to provide a DSL with which authors can write HTML5 rich-content ebooks and tag resources, (2) to let users enhance a book by tagging resources easily, (3) to let readers read a rich book easily regardless of device type, and (4) to provide ebook resources in a RESTful address style by which other systems can identify the self-descriptive resources of a book. To achieve these goals, we provide the Bukle DSL, with which authors and users can author and enhance ebooks with ease. As a domain-specific language, Bukle provides a simple yet expressive notation for authoring and tagging books that would otherwise be more difficult to express with a general-purpose language. Future work includes a visual DSL approach and tools with which unskilled users can tag books easily, as well as a text-to-visual DSL transformation engine. UX research on tagging and authoring books is also required. To tackle these questions, we are investigating a visual notation focused on visual syntax.
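
Since the abstract does not give Bukle's concrete syntax, the sketch below invents a toy external tagging DSL (one `tag <resource> as <label>` statement per line) purely to illustrate the external-DSL approach: a small standalone grammar parsed by a host-language program rather than embedded in it.

```python
# Toy external DSL for tagging ebook resources, in the spirit of the
# abstract's Bukle language. The grammar here is invented for illustration.
import re

TAG_RE = re.compile(r'^tag\s+(?P<resource>\S+)\s+as\s+(?P<label>\S+)$')

def parse_tags(script: str) -> dict:
    """Parse a tagging script into {resource: [labels]}."""
    tags: dict = {}
    for lineno, line in enumerate(script.strip().splitlines(), 1):
        m = TAG_RE.match(line.strip())
        if not m:
            raise SyntaxError(f"line {lineno}: cannot parse {line!r}")
        tags.setdefault(m["resource"], []).append(m["label"])
    return tags

# RESTful-style resource addresses, per goal (4) of the abstract
script = """
tag /books/42/chapter/3 as html5-demo
tag /books/42/figure/7 as interactive
"""
print(parse_tags(script))
```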

Large-scale Language-image Model-based Bag-of-Objects Extraction for Visual Place Recognition (영상 기반 위치 인식을 위한 대규모 언어-이미지 모델 기반의 Bag-of-Objects 표현)

  • Seung Won Jung;Byungjae Park
    • Journal of Sensor Science and Technology / v.33 no.2 / pp.78-85 / 2024
  • We proposed a method for visual place recognition that represents images using objects as visual words. The visual words represent the various objects present in urban environments. To detect various objects within the images, we implemented and used a zero-shot detector based on a large-scale language-image model. This zero-shot detector enables the detection of various objects in urban environments without additional training. When creating histograms with the proposed method, frequency-based weighting was applied to account for the importance of each object. Experiments on open datasets demonstrated the potential of the proposed method in comparison with another method, even under environmental or viewpoint changes.
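
The pipeline the abstract describes, in which detected object classes act as visual words, a frequency-weighted histogram describes the image, and places are matched by histogram similarity, can be sketched as below. The zero-shot detector is stubbed out as a plain list of detected labels, and the tf-idf-style weighting is one plausible reading of "frequency-based weighting", not necessarily the paper's exact scheme.

```python
# Invented sketch of a bag-of-objects representation for place recognition.
from collections import Counter
import math

def bag_of_objects(detections: list, vocab: list,
                   doc_freq: dict, n_images: int) -> list:
    """Frequency-weighted histogram over an object vocabulary.
    Objects that appear everywhere (e.g. trees) are down-weighted."""
    counts = Counter(detections)
    return [counts[w] * math.log(n_images / (1 + doc_freq.get(w, 0)))
            for w in vocab]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return num / den if den else 0.0

vocab = ["tree", "bench", "traffic light", "storefront"]
df = {"tree": 900, "bench": 120, "traffic light": 400, "storefront": 60}
query = bag_of_objects(["tree", "storefront", "storefront"], vocab, df, 1000)
db    = bag_of_objects(["storefront", "tree", "bench"], vocab, df, 1000)
print(cosine(query, db))   # higher = more likely the same place
```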

A Study on the Syntagma & Paradigm by Repetition, Variation and Contrast in Ads

  • Choi, Seong-hoon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.9 / pp.1-12 / 2017
  • This study is an academic work exploring the potential meanings of print advertisements. Linguistic features such as repetition, variation, contrast, and phonological structure in the verbal texts of ads can give rise to shades of meaning or slight variations in advertising. The language of advertising is not only a language of words; it is also a language of images, colors, and pictures. Pictures and words combine to form the advertisement's visual text. While the words are very important in delivering the sales message, the visual text cannot be ignored in advertisements. Forming part of the visual text is the paralanguage of the ad. Paralanguage is the meaningful behaviour accompanying language, such as voice quality, gestures, facial expressions, and touch in speech, and the choice of typeface and letter size in writing. Foregrounding is the throwing into relief of the linguistic sign against the background of the norms of ordinary language. This paper focuses its discussion on advertisements within the framework of paradigmatic and syntagmatic relationships. The ads examined are confined to Marlboro and were selected using purposive sampling.

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency-map weighting is proposed. First, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is weakly supervised, inspired by random-forest ideas, to reduce the randomness of E2LSH. Second, the graph-based visual saliency (GBVS) algorithm is applied to compute the saliency map of each image and weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to perform object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
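
For reference, a single Euclidean LSH hash function of the kind E2LSH uses projects a descriptor onto a random direction, offsets it, and quantizes the result into buckets of width w. The sketch below shows that standard p-stable construction only; the dimension and bucket width are illustrative, and the paper's supervised selection of hash functions and its saliency weighting are not reproduced here.

```python
# Minimal sketch of one p-stable (Euclidean) LSH function:
# h(x) = floor((a . x + b) / w), with a ~ N(0, I) and b ~ U[0, w).
import numpy as np

rng = np.random.default_rng(0)

def make_hash(dim: int, w: float):
    a = rng.normal(size=dim)      # random projection direction
    b = rng.uniform(0, w)         # random offset
    return lambda x: int(np.floor((x @ a + b) / w))

# hash a 128-D SIFT-like descriptor; one bucket acts like one visual word
h = make_hash(dim=128, w=4.0)
desc = rng.normal(size=128)
print(h(desc))
```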

The Transformation of BPEL into Onion Visual Language For Model-Checking of BPEL (BPEL의 모델 체킹을 위한 BPEL의 Onion Visual Language 변환)

  • Woo, Su-Jeong;Choe, Jae-Hong;On, Jin-Ho;Lee, Moon-Kun
    • Proceedings of the Korean Information Science Society Conference / 2011.06b / pp.189-192 / 2011
  • Web services used in cloud computing are composed into new web services by BPEL, and verification methods such as Petri nets, Abstract State Machine (ASM), and BPECalculus are used to verify that a service behaves correctly. These methods verify that web services newly composed with BPEL operate reliably, but they keep web service design and verification separate from each other. In this paper, cloud web services designed in BPEL are translated into the Onion Visual Language (OVL) and then analyzed and verified. OVL represents process containment relationships, state information, interaction, and mobility as graphs throughout the whole course of specification, analysis, and verification, and allows the complexity and behavior of the entire system to be predicted from a graph at a single level. In future work, OVL can also serve as a method for verifying the equivalence of web services for reuse across different clouds.
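
The transformation idea, mapping BPEL's nested activity structure onto a graph whose layers make containment and ordering explicit, can be caricatured as below. The BPEL subset, the process example, and the indented "onion" output are all invented for illustration; they are not the OVL notation itself.

```python
# Invented sketch: walk a (simplified) BPEL process tree and emit a nested
# representation in which indentation encodes containment (the onion
# layers) and document order encodes control flow.
import xml.etree.ElementTree as ET

BPEL = """
<process name="Order">
  <sequence>
    <receive name="getOrder"/>
    <invoke name="checkStock"/>
    <reply name="confirm"/>
  </sequence>
</process>
"""

def to_onion(elem, depth=0):
    """Print one line per activity; depth reflects the containment
    relationships OVL draws as nested graph layers."""
    name = elem.get("name", "")
    print("  " * depth + f"{elem.tag} {name}".rstrip())
    for child in elem:
        to_onion(child, depth + 1)

to_onion(ET.fromstring(BPEL.strip()))
```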