• Title/Summary/Keyword: Linguistic processing

Search results: 171

Use Case Identification Method based on Goal oriented Requirements Engineering(GoRE) (Goal 지향 요구공학 기반의 유스케이스 식별 방법)

  • Park, Bokyung; Kim, R. Youngchul
    • KIPS Transactions on Software and Data Engineering, v.3 no.7, pp.255-262, 2014
  • Our previous research [1] suggested an object extraction and modeling method based on Fillmore's case grammar, but it did not consider how to extract use cases. To solve this problem, we adopt Fillmore's semantic method as a linguistic approach within requirements engineering, refining Fillmore's case grammar to extract and model use cases from customer requirements. The refined mechanism includes the definition of a structured procedure and visual notations for 'case' modeling. This paper also proposes a use case decision matrix, based on goal-oriented requirements engineering (GoRE) and related to use case complexity, to identify the size of each extracted use case, and it prioritizes the use cases with this matrix. We demonstrate the proposal on a bank ATM system.
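The decision-matrix idea lends itself to a simple sketch. Below is a minimal, hypothetical illustration of scoring extracted use cases on complexity criteria and prioritizing them; the criteria, weights, and size thresholds are invented for demonstration and are not the matrix defined in the paper.

```python
# A minimal, illustrative sketch of a use case decision matrix.
# The criteria, weights, and size thresholds below are assumptions
# for demonstration only; the paper defines its own matrix.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    actors: int        # number of participating actors
    transactions: int  # number of transaction steps
    goals: int         # number of goals the use case satisfies (GoRE)

def complexity_score(uc: UseCase) -> float:
    """Weighted sum over complexity criteria (weights are illustrative)."""
    return 0.3 * uc.actors + 0.4 * uc.transactions + 0.3 * uc.goals

def size_label(score: float) -> str:
    """Map a score to a coarse use case size (thresholds are assumptions)."""
    if score < 2.0:
        return "small"
    if score < 4.0:
        return "medium"
    return "large"

use_cases = [
    UseCase("Withdraw Cash", actors=2, transactions=6, goals=2),
    UseCase("Check Balance", actors=1, transactions=2, goals=1),
    UseCase("Transfer Funds", actors=2, transactions=8, goals=3),
]

# Prioritize: higher complexity first, one plausible reading of the matrix.
for uc in sorted(use_cases, key=complexity_score, reverse=True):
    s = complexity_score(uc)
    print(f"{uc.name}: score={s:.1f}, size={size_label(s)}")
```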

A Study on the Relation between Taxonomy of Nominal Expressions and OWL Ontologies (체언표현 개념분류체계와 OWL 온톨로지의 상관관계 연구)

  • Song, Do-Gyu
    • Journal of the Korea Society of Computer and Information, v.11 no.2 s.40, pp.93-99, 2006
  • Ontology is an indispensable component in the intelligent and semantic processing of knowledge and information, such as in the semantic web. An ontology is generally constructed on the basis of a taxonomy of human concepts about the world. However, as human concepts are unstructured and obscure, ontology construction based on such a taxonomy cannot be carried out systematically, much less automatically. We therefore attempt to construct ontologies from the relations among linguistic symbols regarded as representing human concepts; in short, words. We show the similarity between the taxonomy of human concepts and the relations among words, and we propose a methodology for automatically constructing and generating ontologies from these relations among words, along with a series of algorithms to convert the relations into ontologies. This paper presents the process and a concrete application of this methodology.
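As a rough illustration of the word-relations-to-ontology direction, the sketch below turns a handful of hypernym pairs into OWL class axioms serialized as Turtle. The word pairs and output vocabulary are illustrative; the paper's conversion algorithms are more elaborate.

```python
# A minimal sketch of turning word-level hypernym relations into an OWL
# class hierarchy, in the spirit of the paper's word-relations-to-ontology
# conversion. The relation pairs and Turtle output are illustrative only.

# (hyponym, hypernym) pairs, e.g. harvested from a lexical resource.
hypernym_pairs = [
    ("Dog", "Animal"),
    ("Cat", "Animal"),
    ("Animal", "LivingThing"),
]

def to_owl_turtle(pairs, base="http://example.org/onto#"):
    """Emit each word as an owl:Class and each hypernym link as rdfs:subClassOf."""
    lines = [
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        f"@prefix : <{base}> .",
        "",
    ]
    classes = {word for pair in pairs for word in pair}
    for c in sorted(classes):
        lines.append(f":{c} a owl:Class .")
    for hypo, hyper in pairs:
        lines.append(f":{hypo} rdfs:subClassOf :{hyper} .")
    return "\n".join(lines)

print(to_owl_turtle(hypernym_pairs))
```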

Constructing Tagged Corpus and Cue Word Patterns for Detecting Korean Hedge Sentences (한국어 Hedge 문장 인식을 위한 태깅 말뭉치 및 단서어구 패턴 구축)

  • Jeong, Ju-Seok; Kim, Jun-Hyeouk; Kim, Hae-Il; Oh, Sung-Ho; Kang, Sin-Jae
    • Journal of the Korean Institute of Intelligent Systems, v.21 no.6, pp.761-766, 2011
  • A hedge is a linguistic device for expressing uncertainty. Writers use hedges when they are uncertain about, or have doubts regarding, the content of a sentence. Due to this uncertainty, sentences with hedges are considered non-factual. Many applications need to determine whether a sentence is factual or not, and detecting hedges benefits information retrieval, information extraction, and QA systems, which can target non-hedge sentences to obtain more accurate results. In this paper, we constructed a Korean hedge corpus, extracted generalized hedge cue-word patterns from it, and then used the patterns to detect hedges. In our experiments, we achieved an F1-measure of 78.6%.
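A minimal sketch of cue-word-pattern matching, the core of the detection step, is shown below. The English patterns are invented stand-ins for the generalized Korean patterns the authors extracted from their tagged corpus.

```python
# A minimal sketch of cue-word-pattern hedge detection. The patterns below
# are illustrative English stand-ins; the paper extracts generalized
# patterns from its Korean hedge corpus.

import re

# Illustrative cue-word patterns; real patterns come from the tagged corpus.
HEDGE_PATTERNS = [
    re.compile(p) for p in (
        r"\bmay\b", r"\bmight\b", r"\bpossibly\b",
        r"\bseems? to\b", r"\bsuggests? that\b",
    )
]

def is_hedge(sentence: str) -> bool:
    """A sentence is flagged as a hedge if any cue pattern matches."""
    return any(p.search(sentence.lower()) for p in HEDGE_PATTERNS)

for s in ("The drug may reduce symptoms.",
          "The drug reduced symptoms by 40%."):
    print(f"hedge={is_hedge(s)}: {s}")
```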

Commercially Available High-Speed Cameras Connected with a Laryngoscope for Capturing the Laryngeal Images (상용화 된 고속카메라와 후두내시경을 이용한 성대촬영 방법의 소개)

  • Nam, Do-Hyun; Choi, Hong-Shik
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics, v.21 no.2, pp.133-138, 2010
  • Background and Objectives: High-speed imaging can be useful in studies of linguistic and artistic singing styles and in the laryngeal examination of patients with voice disorders, particularly those with irregular vocal fold vibrations. In this study, we introduce new laryngeal imaging systems built from commercially available high-speed cameras connected to a laryngoscope. Materials and Method: Laryngeal images were captured with three different cameras. First, an adapter was made to connect the laryngoscope to a Casio EX-F1, which captured images with a 2×150 W halogen light source (EndoSTROB) at 1,200 fps (frames per second) and 336×96 resolution. Second, a Phantom Miro ex4 captured digital laryngeal images with a 175 W Xenon Nova light source (STORZ) at 1,920 fps and 512×384. Finally, a MotionXtra N-4 connected to the laryngoscope captured images with a 250 W halogen lamp (Olympus CLH-250) as the light source at 2,000 fps and 384×400. All images were transformed into kymographs using Kay's Image Processing Software (KIPS) from Kay Pentax Inc. Results: The Casio EX-F1 was too small to adjust the focus, and the screen size was diminished once the images were captured, despite their high resolution. High-quality color images could be obtained with the Phantom Miro ex4, whereas the MotionXtra N-4 produced good black-and-white images. Despite some limitations of the Phantom Miro ex4 and MotionXtra N-4, namely illumination problems, limited recording time, and time-consuming procedures, these portable devices provided high-resolution images. Conclusion: All of these high-speed cameras could capture laryngeal images when connected to a laryngoscope, and high-resolution images could be captured at a fixed position under good lighting. Accordingly, these techniques could be applied to observing vocal fold vibration properties in clinical practice.
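Although the study relies on the commercial KIPS software, the kymograph transformation itself is conceptually simple: a fixed scan line across the glottis is extracted from every frame and the lines are stacked over time. The numpy sketch below illustrates only this concept, with random frames standing in for real video.

```python
# A minimal numpy sketch of how a kymograph is formed from high-speed
# laryngeal video. Concept illustration only; the study used commercial
# KIPS software, and the frames here are random stand-ins.

import numpy as np

fps, seconds = 1920, 0.05
n_frames, height, width = int(fps * seconds), 64, 64
video = np.random.rand(n_frames, height, width)  # stand-in for real frames

scan_row = height // 2                 # row crossing the glottal midline
kymograph = video[:, scan_row, :]      # (n_frames, width): time runs downward

print("kymograph shape (frames x pixels):", kymograph.shape)
```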

LVLN : A Landmark-Based Deep Neural Network Model for Vision-and-Language Navigation (LVLN: 시각-언어 이동을 위한 랜드마크 기반의 심층 신경망 모델)

  • Hwang, Jisu; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering, v.8 no.9, pp.379-390, 2019
  • In this paper, we propose a novel deep neural network model for Vision-and-Language Navigation (VLN) named LVLN (Landmark-based VLN). In addition to visual features extracted from input images and linguistic features extracted from natural language instructions, the model makes use of information about places and landmark objects detected in the images. It also applies a context-based attention mechanism to associate each entity mentioned in the instruction, the corresponding region of interest (ROI) in the image, and the corresponding detected place and landmark object with one another. Moreover, to improve the rate of successfully arriving at the target goal, the model adopts a progress monitor module that checks substantial progress toward the goal. Through experiments with the Matterport3D simulator and the Room-to-Room (R2R) benchmark dataset, we demonstrate the high performance of the proposed model.
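The alignment step can be pictured with a small numpy sketch: score each detected region against an instruction entity's embedding, normalize with a softmax, and form a weighted context vector. The shared embedding dimension and dot-product scorer are simplifying assumptions, not LVLN's exact formulation.

```python
# A minimal numpy sketch of the kind of context-based attention LVLN uses
# to align an instruction entity with image regions/landmarks. The shared
# dimension and dot-product scorer are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)
d = 16                                # shared embedding dimension (assumed)
entity = rng.normal(size=d)           # embedding of an entity in the instruction
regions = rng.normal(size=(5, d))     # features of 5 detected ROIs/landmarks

scores = regions @ entity             # alignment score per region
weights = np.exp(scores - scores.max())
weights /= weights.sum()              # softmax over regions
attended = weights @ regions          # context vector for this entity

print("attention weights:", np.round(weights, 3))
print("attended feature shape:", attended.shape)
```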

Error Correction for Korean Speech Recognition using a LSTM-based Sequence-to-Sequence Model

  • Jin, Hye-won; Lee, A-Hyeon; Chae, Ye-Jin; Park, Su-Hyun; Kang, Yu-Jin; Lee, Soowon
    • Journal of the Korea Society of Computer and Information, v.26 no.10, pp.1-7, 2021
  • Most research on correcting speech recognition errors has been based on English, so research on Korean speech recognition remains insufficient. Compared to English, however, Korean speech recognition produces many errors due to linguistic characteristics of the Korean language such as fortis (tensification) and liaison, so research targeted at Korean is needed. Furthermore, earlier works primarily relied on edit-distance algorithms and syllable restoration rules, making it difficult to correct the error types caused by fortis and liaison. In this paper, we propose a context-sensitive post-processing model for speech recognition that uses an LSTM-based sequence-to-sequence model with the Bahdanau attention mechanism to correct Korean speech recognition errors caused by pronunciation. Experiments showed that the model improved recognition performance from 64% to 77% for fortis, from 74% to 90% for liaison, and from 69% to 84% on average. Based on these results, it appears feasible to apply the proposed model to real-world applications based on speech recognition.
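The attention mechanism named in the abstract has a standard form. Below is a minimal PyTorch sketch of Bahdanau (additive) attention; dimensions are illustrative, and the full error-correction model would additionally need encoder/decoder LSTMs and a Korean syllable or subword vocabulary.

```python
# A minimal PyTorch sketch of Bahdanau (additive) attention, the alignment
# mechanism the paper combines with an LSTM sequence-to-sequence model.
# Dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        self.W_dec = nn.Linear(hidden, hidden, bias=False)  # projects decoder state
        self.W_enc = nn.Linear(hidden, hidden, bias=False)  # projects encoder states
        self.v = nn.Linear(hidden, 1, bias=False)           # scoring vector

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        scores = self.v(torch.tanh(
            self.W_dec(dec_state).unsqueeze(1) + self.W_enc(enc_states)
        )).squeeze(-1)                          # (batch, src_len)
        weights = torch.softmax(scores, dim=-1)
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights                 # context feeds the decoder LSTM

attn = BahdanauAttention(hidden=64)
ctx, w = attn(torch.randn(2, 64), torch.randn(2, 10, 64))
print(ctx.shape, w.shape)  # torch.Size([2, 64]) torch.Size([2, 10])
```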

Relationship between Speech Perception in Noise and Phonemic Restoration of Speech in Noise in Individuals with Normal Hearing

  • Vijayasarathy, Srikar; Barman, Animesh
    • Journal of Audiology & Otology, v.24 no.4, pp.167-173, 2020
  • Background and Objectives: Top-down restoration of distorted speech, tapped as phonemic restoration of speech in noise, may be a useful tool for understanding the robustness of perception in adverse listening situations. However, the relationship between phonemic restoration and speech perception in noise is not empirically clear. Subjects and Methods: Twenty adults (40-55 years) with normal audiometric findings took part in the study. Sentence perception in noise was studied at various signal-to-noise ratios (SNRs) to estimate the SNR yielding a 50% score. Performance was also measured for sentences interrupted with silence and for sentences interrupted by speech noise at -10, -5, 0, and 5 dB SNR. The score in the silence-interruption condition was subtracted from the score in the noise-interruption condition to determine the phonemic restoration magnitude. Results: Fairly robust improvements in speech intelligibility were found when the sentences were interrupted with speech noise instead of silence. The improvement with increasing noise level was non-monotonic and reached a maximum at -10 dB SNR. A significant correlation was found between speech perception in noise and phonemic restoration of sentences interrupted with -10 dB SNR speech noise. Conclusions: It is possible that perception of speech in noise is associated with the top-down processing of speech tapped as phonemic restoration of interrupted speech. More research with a larger sample size is indicated, since restoration is affected by the type of speech material and noise used, age, working memory, and linguistic proficiency, and shows large individual variability.
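The restoration measure described in the Methods reduces to a simple subtraction, sketched below with made-up scores: the intelligibility score with silent gaps is subtracted from the score with noise-filled gaps at each SNR.

```python
# A small sketch of the phonemic restoration computation described above:
# restoration magnitude = score with noise-filled gaps minus score with
# silent gaps. The scores below are made-up illustrative numbers.

silence_score = 42.0   # % words correct, sentences interrupted by silence
noise_scores = {       # % words correct, gaps filled with speech noise
    -10: 61.0, -5: 57.0, 0: 53.0, 5: 49.0,   # keyed by SNR in dB
}

restoration = {snr: s - silence_score for snr, s in noise_scores.items()}
best_snr = max(restoration, key=restoration.get)
for snr, r in sorted(restoration.items()):
    print(f"SNR {snr:+d} dB: restoration = {r:.1f} points")
print(f"maximum restoration at {best_snr} dB SNR")  # -10 dB here, as in the study
```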

Semantic Role Labeling using Biaffine Average Attention Model (Biaffine Average Attention 모델을 이용한 의미역 결정)

  • Nam, Chung-Hyeon; Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.5, pp.662-667, 2022
  • The semantic role labeling (SRL) task extracts a predicate and its arguments, such as agent, patient, place, and time. Earlier SRL studies proposed pipeline methods that extract linguistic features of a sentence, but in such methods the errors of each extraction stage in the pipeline degrade labeling performance. Therefore, end-to-end neural network models have recently been proposed. In this paper, we propose a neural network model for SRL that uses biaffine average attention. Instead of an LSTM, which predicts a specific token from its surrounding context as in previous studies, the proposed model can attend to information from the entire sentence regardless of the distance between the predicate and its arguments. For evaluation, we compared F1 scores against the BERT-based models proposed in existing studies and found that our model achieved 76.21%, higher than the comparison models.
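Biaffine scoring itself is compact enough to sketch. The numpy snippet below scores each candidate argument token against a predicate representation with a bilinear term plus a linear term per role; the shapes and single-predicate setup are simplifications, not the paper's full architecture.

```python
# A minimal numpy sketch of biaffine role scoring for SRL. Shapes and the
# single-predicate setup are simplifying assumptions, not the paper's model.

import numpy as np

rng = np.random.default_rng(1)
d, n_tokens, n_roles = 8, 6, 4            # illustrative sizes

pred = rng.normal(size=d)                 # encoded predicate token
args = rng.normal(size=(n_tokens, d))     # encoded candidate argument tokens

U = rng.normal(size=(n_roles, d, d))      # bilinear weights, one slice per role
W = rng.normal(size=(n_roles, 2 * d))     # linear weights over [pred; arg]
b = np.zeros(n_roles)                     # per-role bias

# score[i, r] = pred^T U_r arg_i + W_r [pred; arg_i] + b_r
pairs = np.concatenate([np.tile(pred, (n_tokens, 1)), args], axis=1)
scores = np.stack(
    [args @ (U[r] @ pred) + pairs @ W[r] + b[r] for r in range(n_roles)],
    axis=1,
)

print("role scores per token:\n", np.round(scores, 2))
print("best role per token:", scores.argmax(axis=1))
```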

A Study on the Dataset of the Korean Multi-class Emotion Analysis in Radio Listeners' Messages (라디오 청취자 문자 사연을 활용한 한국어 다중 감정 분석용 데이터셋연구)

  • Lee, Jaeah; Park, Gooman
    • Journal of Broadcast Engineering, v.27 no.6, pp.940-943, 2022
  • This study analyzes a Korean dataset built by performing sentence-level emotion analysis on personally collected text messages from radio listeners. Research on the emotion analysis of Korean sentences is ongoing in Korea, but high accuracy is difficult to achieve due to the linguistic characteristics of Korean. In addition, while much work has addressed binary sentiment analysis, which allows only positive/negative classification, multi-class emotion analysis, which classifies three or more emotions, requires more research. Accordingly, the Korean dataset itself must be examined and analyzed to increase the accuracy of multi-class emotion analysis for Korean. In this paper, through surveys and experiments conducted while performing emotion analysis, we analyze why Korean emotion analysis is difficult, and we propose a method for creating a dataset that can improve accuracy and serve as a basis for the emotion analysis of Korean sentences.