• Title/Summary/Keyword: Syntactic

Search Result 720, Processing Time 0.024 seconds

Processing of syntactic dependency in Korean relative clauses: Evidence from an eye-tracking study (안구이동추적을 통해 살펴본 관계절의 통사처리 과정)

  • Lee, Mi-Seon;Yong, Nam-Seok
    • Korean Journal of Cognitive Science
    • /
    • v.20 no.4
    • /
    • pp.507-533
    • /
    • 2009
  • This paper examines the time course and processing patterns of filler-gap dependencies in Korean relative clauses, using an eye-tracking method. Participants listened to a short story while viewing four pictures of entities mentioned in the story. Each story was followed by an auditorily presented question involving a relative clause (subject relative or dative relative), and participants' eye movements in response to the question were recorded. Results showed that the proportion of looks to the picture corresponding to a filler noun increased significantly at the relative verb affixed with a relativizer, and was largest at the filler, where the fixation duration on the filler picture increased significantly. These results suggest that online resolution of the filler-gap dependency starts only at the relative verb marked with a relativizer and is completed at the filler position. Accordingly, they partly support the filler-driven parsing strategy for Korean, as for head-initial languages. In addition, the different patterns of eye movements between subject relatives and dative relatives indicate the role of case markers in parsing Korean sentences.

  • PDF

Semantic Inference System Using Backward Chaining (후방향 추론기법을 이용한 시멘틱 추론 시스템)

  • 함영경;박영택
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10a
    • /
    • pp.97-99
    • /
    • 2003
  • Most web documents are expressed in HTML or XML, and because this information is represented on the basis of syntactic structure, software is limited in how it can process it. HTML is a tag-based representation designed for document display, and XML is a representation proposed to make document structure easy for humans to understand. Consequently, web agents that provide services over information expressed in HTML or XML have had to perform a great deal of manual offline work to deliver meaningful services to users. To overcome this problem, research on the Semantic Web is being actively pursued in the United States and Europe. Unlike the conventional web, the Semantic Web represents information in a machine-processable form, so agents can understand and handle much of the work that used to be done offline. However, building an ontology inevitably introduces the 3I problems (Incorrect, Incomplete, Inconsistent) into the information, and the quality of a service is also determined by its ontology. This paper proposes an inference engine based on backward chaining, as follows. First, we propose an automation system for software agents that uses the Semantic Web. Second, to overcome the limitations of ontology information, we propose a semantic inference engine that uses rule-based backward chaining. The proposed system takes a user's query and performs backward inference over the ontology and Semantic Web documents, with the aim of mitigating the incompleteness of web information, reducing dependence on the ontology, and thereby improving the quality of web services.

  • PDF
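The rule-based backward chaining the paper describes can be sketched minimally as a recursive goal prover; the toy facts, rule names, and class hierarchy below are hypothetical illustrations, not the paper's ontology:

```python
def backward_chain(goal, facts, rules, _seen=None):
    """Prove `goal` if it is a known fact, or if some rule
    (head, [subgoals]) has head == goal and every subgoal is provable."""
    _seen = _seen or frozenset()
    if goal in facts:
        return True
    if goal in _seen:          # guard against cyclic rule chains
        return False
    return any(
        head == goal
        and all(backward_chain(g, facts, rules, _seen | {goal}) for g in body)
        for head, body in rules
    )

# Toy ontology fragment (hypothetical): a class hierarchy encoded as rules.
facts = {"type(seoul, City)"}
rules = [
    ("type(seoul, Place)", ["type(seoul, City)"]),    # City is a Place
    ("type(seoul, Region)", ["type(seoul, Place)"]),  # Place is a Region
]
print(backward_chain("type(seoul, Region)", facts, rules))  # True
print(backward_chain("type(seoul, River)", facts, rules))   # False
```

Starting from the goal and working back to known facts, as here, is what lets the engine answer a query even when the ontology does not materialize every derivable statement.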

VOQL* : A Visual Object Query Language with Inductively-Defined Formal Semantics (VOQL* : 귀납적으로 정의된 형식 시맨틱을 지닌 시각 객체 질의어)

  • Lee, Suk-Kyoon
    • Journal of KIISE:Databases
    • /
    • v.27 no.2
    • /
    • pp.151-164
    • /
    • 2000
  • The Visual Object Query Language (VOQL) recently proposed for object databases has been successful in visualizing path expressions and set-related conditions, and in providing formal semantics. However, VOQL has several problems. Due to unrealistic assumptions, only set-related conditions can be represented in VOQL, and due to the lack of an explicit language construct for the notion of variables, queries are often awkward and less intuitive. In this paper, we propose VOQL*, which extends VOQL to remove these drawbacks. We introduce the notion of visual variables and refine the syntax and semantics of VOQL based on them. We carefully design the language constructs of VOQL* to reflect the syntax of OOPC, so that constructs such as visual variables, visual elements, VOQL* simple terms, VOQL* structured terms, VOQL* basic formulas, VOQL* formulas, and VOQL* query expressions are hierarchically and inductively constructed like those of OOPC. Most importantly, we formally define the semantics of each language construct of VOQL* by induction using OOPC. Because of the well-defined syntax and semantics, queries in VOQL* are clear, concise, and intuitive. We also provide an effective procedure to translate queries in VOQL* into those in OOPC. We believe that VOQL* is the first visual query language with a well-defined syntax reflecting the syntactic structure of logic and with semantics formally defined by induction.

  • PDF

On the Sequences of Dialogue Acts and the Dialogue Flows-w.r.t. the appointment scheduling dialogues (대화행위의 연쇄관계와 대화흐름에 대하여 -[일정협의 대화] 중심으로)

  • 박혜은;이민행
    • Korean Journal of Cognitive Science
    • /
    • v.10 no.2
    • /
    • pp.27-34
    • /
    • 1999
  • The main purpose of this paper is to propose a general dialogue flow for appointment-scheduling dialogues in German, using the concept of dialogue acts. A basic assumption of this research is that dialogue acts contribute to the improvement of a translation system: they can be very useful for solving, with contextual knowledge, problems that the syntactic and semantic modules cannot resolve. The classification of the dialogue acts was conducted as part of the VERBMOBIL project and was based on real dialogues transcribed by experts. The real dialogues were analyzed in terms of dialogue acts. We empirically analyzed the sequences of dialogue acts not only across a series of dialogue turns but also within a single turn. We additionally analyzed the sequences within one turn because the dialogue data used in this research differed somewhat from those in other existing studies. By examining the sequences of dialogue acts, we propose a dialogue flowchart for the appointment-scheduling dialogues. Based on the statistical analysis of the sequences of the most frequent dialogue acts, the flowchart appears to represent appointment-scheduling dialogues in general. Further research is required on the classification of dialogue acts, which was the basis for this analysis. In order to extract the most generalized model, we did not subcategorize the dialogue acts and used a limited number of them. However, the generally defined dialogue acts need to be defined more concretely, and new dialogue acts for specific situations should be added.

  • PDF
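The sequence analysis described above, counting which dialogue act follows which across turns, can be sketched as simple bigram tabulation; the act labels and the toy dialogue are hypothetical (loosely VERBMOBIL-style), not the paper's data:

```python
from collections import Counter

# Hypothetical act-annotated turns from one appointment-scheduling dialogue.
dialogue = ["GREET", "INIT_DATE", "SUGGEST", "REJECT", "SUGGEST", "ACCEPT", "BYE"]

# Count adjacent pairs: transitions[(a, b)] = how often act b follows act a.
transitions = Counter(zip(dialogue, dialogue[1:]))

# The most frequent successor of each act sketches the dialogue flowchart.
flow = {}
for (a, b), n in transitions.most_common():
    flow.setdefault(a, b)
print(flow)
```

Aggregating such transition counts over a whole corpus, rather than one dialogue, is what would support a general flowchart of the kind the paper proposes.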

Event-Related Potentials of a Monosyllabic Word (단음절 단어의 사건 관련 전위)

  • Min, Byoung-Kyong;Kim, Myung-Sun;Yoon, Tak;Kim, Jae-Jin;Kwon, Jun-Soo
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2002.05a
    • /
    • pp.211-215
    • /
    • 2002
  • This experiment investigated the binding problem, from which integrative cognitive processing can be inferred, through a linguistic cognition task. Ten normal participants (male: 6, female: 4; mean age 24.40 ± 1.35 years) were presented visually, in random order and for 0.5 s each, with monosyllabic nouns composed of four phonemes as target stimuli (200 trials) and with random four-phoneme combinations that do not form a syllable as non-target stimuli (800 trials), while 128-channel high-density event-related potentials (ERPs) were recorded. The main finding was a P500 and an N900 near the parietal region that were markedly larger for the target (letter) stimuli than for the non-target (non-letter) stimuli. The P500 here can be taken to carry the cognitive significance of the classic P300 arising from the oddball effect of the different stimulus presentation rates. When a monosyllabic word is recognized, the moment of recognizing it as a letter appears to rest on distinguishing a familiar letter by its form alone rather than on semantic processing; accordingly, the N400 semantic peak frequently seen in language experiments did not appear, and the P500, reflecting formal and syntactic processing, appeared directly. Instead of an N400, however, an N900 appeared. Interpreted together with a protocol analysis conducted alongside the ERP experiment, which showed that at about 900 ms after stimulus onset participants recalled the letter stimulus that had already been presented and removed, this suggests extending the negative peak, previously interpreted only as a semantic cognitive process, to internal cognitive processes such as thinking. In short, this language-cognition experiment suggests that negative peaks detected in the EEG reflect internal cognitive processes, while positive peaks reflect external cognitive processes. Notably, in each peak topology the amplitude at Cz was larger than at Fz, and the left temporal lobe (T7), generally held responsible for language function, showed a statistically more significant difference than the right (T8).

  • PDF

Component Analysis for Constructing an Emotion Ontology (감정 온톨로지의 구축을 위한 구성요소 분석)

  • Yoon, Ae-Sun;Kwon, Hyuk-Chul
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.1
    • /
    • pp.157-175
    • /
    • 2010
  • Understanding a dialogue participant's emotion is as important as decoding the explicit message in human communication. It is well known that non-verbal elements are more suitable than verbal elements for conveying a speaker's emotions. Written texts, however, contain a variety of linguistic units that express emotions. This study analyzes the components needed to construct an emotion ontology, which offers numerous applications in human language technology. The majority of previous work in text-based emotion processing focused on classifying emotions, constructing dictionaries that describe emotion, and retrieving those lexica in texts through keyword spotting and/or syntactic parsing techniques. The emotions retrieved or computed by that process showed poor accuracy. Thus, a more sophisticated component analysis is proposed and linguistic factors are introduced in this study. (1) Five linguistic types of emotion expression are differentiated in terms of target (verbal/non-verbal) and method (expressive/descriptive/iconic). The correlations among them, as well as their correlation with the non-verbal expressive type, are also determined. This characteristic is expected to guarantee greater adaptability of our ontology in multi-modal environments. (2) As emotion-related components, this study proposes 24 emotion types, a 5-point intensity scale (-2 to +2), and a 3-way polarity (positive/negative/neutral), which can describe a variety of emotions in more detail and in a standardized way. (3) We introduce components related to verbal expression, such as 'experiencer', 'description target', 'description method' and 'linguistic features', which can classify and appropriately tag verbal expressions of emotions. (4) By adopting the linguistic tag sets proposed by ISO and TEI and providing a mapping table between our classification of emotions and Plutchik's, our ontology can easily be employed for multilingual processing.

  • PDF
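The component inventory in points (2) and (3) above can be pictured as a small annotation schema; the class name, field names, and the example emotion type "joy" are illustrative assumptions, not the paper's actual tag set:

```python
from dataclasses import dataclass

POLARITIES = {"positive", "negative", "neutral"}
METHODS = {"expressive", "descriptive", "iconic"}

@dataclass
class EmotionAnnotation:
    emotion_type: str          # one of the ontology's 24 emotion types
    intensity: int             # 5-point scale: -2 .. +2
    polarity: str              # positive / negative / neutral
    experiencer: str = ""      # who feels the described emotion
    description_method: str = "descriptive"

    def __post_init__(self):
        # Enforce the standardized scales the ontology prescribes.
        if not -2 <= self.intensity <= 2:
            raise ValueError("intensity must lie in -2..+2")
        if self.polarity not in POLARITIES:
            raise ValueError(f"unknown polarity: {self.polarity!r}")
        if self.description_method not in METHODS:
            raise ValueError(f"unknown method: {self.description_method!r}")

ann = EmotionAnnotation("joy", intensity=2, polarity="positive",
                        experiencer="speaker")
print(ann)
```

Validating the scales at construction time, as sketched here, is what makes annotations comparable across texts and annotators.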

Handwritten Korean Amounts Recognition in Bank Slips using Rule Information (규칙 정보를 이용한 은행 전표 상의 필기 한글 금액 인식)

  • Jee, Tae-Chang;Lee, Hyun-Jin;Kim, Eun-Jin;Lee, Yill-Byung
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.8
    • /
    • pp.2400-2410
    • /
    • 2000
  • Much research on the recognition of Korean characters has been undertaken, but while the majority addresses Korean character recognition itself, the task of developing document recognition systems has seldom been tackled. In this paper, we design a recognizer of Korean courtesy amounts that improves error correction in the recognized character string. From the very first step of Korean character recognition, we face an enormous scale of data: there are 2,350 characters in Korean. Most previous studies tried to recognize the roughly 1,000 most frequently used characters, but their recognition rates stay under 80%. Since using such recognizers is inefficient, we designed a statistical multiple recognizer that recognizes the 16 Korean characters used in courtesy amounts; using a multiple recognizer prevents an increase in errors. For the postprocessor of Korean courtesy amounts, we use the properties of Korean character strings: character strings of Korean courtesy amounts obey syntactic rules, and this property lets us correct errors in them. This kind of error correction is restricted to the Korean characters representing the units of the amounts. The first candidate of the Korean character recognizer shows a recognition rate of !!i.49%, rising to 99.72% by the fourth candidate. For postprocessed Korean character strings, the recognizer of Korean courtesy amounts shows 96.42% reliability. In this paper, we suggest a method to improve the reliability of Korean courtesy amount recognition by using a Korean character recognizer that recognizes a limited number of characters and a postprocessor that corrects the errors in Korean character strings.

  • PDF
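The "syntactic rules" of courtesy-amount strings mentioned above can be illustrated with a small parser that rejects ill-formed digit/unit sequences; this is a generic sketch of Korean numeral structure and a hypothetical candidate-reranking step, not the paper's actual rule set:

```python
DIGITS = {c: i + 1 for i, c in enumerate("일이삼사오육칠팔구")}
SMALL = {"십": 10, "백": 100, "천": 1000}
BIG = {"만": 10**4, "억": 10**8}

def parse_amount(s):
    """Parse a Korean courtesy amount, e.g. '삼만오천' -> 35000.
    Raises ValueError when the string violates the digit/unit syntax."""
    total = section = num = 0
    for ch in s:
        if ch in DIGITS:
            if num:                       # two digits in a row is illegal
                raise ValueError(s)
            num = DIGITS[ch]
        elif ch in SMALL:
            section += (num or 1) * SMALL[ch]
            num = 0
        elif ch in BIG:
            total += (section + num) * BIG[ch]
            section = num = 0
        else:
            raise ValueError(f"not an amount character: {ch!r}")
    return total + section + num

def postprocess(candidates):
    """Pick the first recognizer candidate that satisfies the amount syntax."""
    for cand in candidates:
        try:
            return cand, parse_amount(cand)
        except ValueError:
            continue
    return None

print(parse_amount("삼만오천"))   # 35000
print(postprocess(["삼오천", "삼만오천"]))
```

Falling back to lower-ranked recognizer candidates when the top one breaks the amount syntax, as `postprocess` does, is one simple way a syntactic constraint can repair recognition errors.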

PPEditor: Semi-Automatic Annotation Tool for Korean Dependency Structure (PPEditor: 한국어 의존구조 부착을 위한 반자동 말뭉치 구축 도구)

  • Kim Jae-Hoon;Park Eun-Jin
    • The KIPS Transactions:PartB
    • /
    • v.13B no.1 s.104
    • /
    • pp.63-70
    • /
    • 2006
  • In general, a corpus contains a great deal of linguistic information and is widely used in the fields of natural language processing and computational linguistics. The creation of such a corpus, however, is expensive, labor-intensive and time-consuming. To alleviate this problem, annotation tools for building corpora rich in linguistic information are indispensable. In this paper, we design and implement an annotation tool for building a Korean dependency tree-tagged corpus. The ideal would be to create the corpus fully automatically, without annotators' intervention, but in practice this is impossible. The proposed tool is semi-automatic, like most other annotation tools, and is designed for editing the errors generated by basic analyzers such as a part-of-speech tagger and a (partial) parser. We also designed it to avoid repetitive work during error editing and to be easy and friendly to use. Using the proposed annotation tool, 10,000 Korean sentences, each containing over 20 words, were annotated with dependency structures; eight annotators worked four hours a day for two months. We are confident that the tool yields accurate and consistent annotations while reducing labor and time.

A design and implementation of VHDL-to-C mapping in the VHDL compiler back-end (VHDL 컴파일러 후반부의 VHDL-to-C 사상에 관한 설계 및 구현)

  • 공진흥;고형일
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.12
    • /
    • pp.1-12
    • /
    • 1998
  • In this paper, the design and implementation of VHDL-to-C mapping in the VHDL compiler back-end is described. The analyzed data in an intermediate format (IF), produced by the compiler front-end, is transformed into a C-code model of VHDL semantics by the VHDL-to-C mapper. The C-code model of VHDL semantics is based on a functional template comprising declaration, elaboration, initialization and execution parts. The mapping is carried out by using C mapping templates of 129 types, classified by mapping unit and functional semantics, together with iterative algorithms that combine terminal information to produce C code. To generate the C program, the C code is output to the functional template either directly or by combining the higher-level mapping result with intermediate mapping code held in a data queue. Experiments show that the VHDL-to-C mapper could completely handle the VHDL programs analyzed by the compiler front-end, which cover about 96% of the major VHDL syntactic programs in the Validation Suite. As for performance, the code size of the VHDL-to-C output is smaller than that of an interpreter but worse than that of a direct-code compiler, whose generated code grows more rapidly with the size of the VHDL design; the VHDL-to-C timing overhead still needs to be improved through an optimized implementation of the mapping mechanism.

  • PDF
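Template-driven mapping of the kind described, where each analyzed IF node selects a C template that is filled with terminal information, can be sketched as follows; the template kinds, slot names, and runtime calls here are invented for illustration (the actual back-end distinguishes 129 template types):

```python
# Hypothetical C mapping templates keyed by IF node kind.
TEMPLATES = {
    "signal_decl":   "signal_t {name};",
    "signal_assign": "schedule_assignment(&{target}, {expr}, {delay});",
    "process_begin": "void {name}(void) {{",
    "process_end":   "}}",
}

def map_node(kind, **slots):
    """Fill the C template for one analyzed IF node with terminal info."""
    return TEMPLATES[kind].format(**slots)

# Mapping a tiny (hypothetical) analyzed design, node by node.
lines = [
    map_node("signal_decl", name="clk"),
    map_node("process_begin", name="p_clockgen"),
    map_node("signal_assign", target="clk", expr="!clk", delay="5 /* ns */"),
    map_node("process_end"),
]
print("\n".join(lines))
```

Emitting each node through a template table like this, and then splicing the results into the declaration/elaboration/initialization/execution parts of a fixed functional template, matches the overall structure the abstract describes.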

An Algorithm for Ontology Merging and Alignment using Local and Global Semantic Set (지역 및 전역 의미집합을 이용한 온톨로지 병합 및 정렬 알고리즘)

  • 김재홍;이상조
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.4
    • /
    • pp.23-30
    • /
    • 2004
  • Ontologies play an important role in the Semantic Web by providing well-defined meaning to ontology consumers. But as ontologies are authored in a bottom-up, distributed manner, a large number of overlapping ontologies are created and used for similar domains. Ontology sharing and reuse have therefore become a prominent topic, and ontology merging and alignment are the solutions to this problem. Previously proposed algorithms for ontology merging and alignment detect conflicts between concepts using only the local syntactic information of concept names, and they depend on a semi-automatic approach that makes the work tedious for ontology engineers. Consequently, the quality of merging and alignment tends to be unsatisfactory. To remedy the defects of the previous algorithms, we propose a new algorithm for ontology merging and alignment that uses the local and global semantic set of a concept. We evaluated our algorithm on several pairs of ontologies written in OWL, and achieved around 91% precision in merging and alignment. We expect that, with the widespread use of web ontologies, the need for ontology sharing and reuse will grow, and our proposed algorithm can significantly reduce the time required for ontology development. Our algorithm can also be applied easily to fields such as ontology mapping, where semantic information exchange is a requirement.
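Combining local name evidence with a global "semantic set" per concept, as the abstract describes, can be sketched with set-overlap scores; the tokenization, the 0.5 weighting, and the toy concepts are illustrative assumptions, not the paper's actual formula:

```python
def jaccard(a, b):
    """Set overlap in [0, 1]; empty-vs-empty counts as no evidence."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def concept_similarity(c1, c2, sem1, sem2, w_local=0.5):
    """Blend local evidence (name tokens) with global evidence
    (each concept's semantic set, e.g. ancestors and neighbours)."""
    local = jaccard(set(c1.lower().split("_")), set(c2.lower().split("_")))
    glob = jaccard(sem1.get(c1, set()), sem2.get(c2, set()))
    return w_local * local + (1 - w_local) * glob

# Toy semantic sets for one concept in each of two ontologies.
sem_a = {"Car": {"Vehicle", "Wheel", "Engine"}}
sem_b = {"Automobile": {"Vehicle", "Wheel", "Motor"}}
s = concept_similarity("Car", "Automobile", sem_a, sem_b)
print(round(s, 3))
```

Note that the name-only score for "Car" vs "Automobile" is zero, while the semantic-set overlap still finds the match, which is exactly the failure mode of name-based algorithms that the global semantic set is meant to fix.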