• Title/Abstract/Keywords: Part-of-Speech Set

Search results: 37

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System / Vol. 5, No. 3 / pp.147-154 / 2018
  • Natural language processing (NLP) is an emerging research area that studies how machines can be used to perceive and alter text written in natural languages. Different tasks can be performed on natural languages by analyzing them through annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis; these tasks depend on the morphological structure of the particular language. The focus of this work is part-of-speech (POS) tagging for Hindi. POS tagging, also known as grammatical tagging, is the process of assigning a grammatical category, such as noun, verb, time, date, or number, to each word of a given text. Hindi is the most widely used and official language of India, and it is among the top five most spoken languages of the world. A diverse range of POS taggers is available for English and other languages, but these taggers cannot be applied to Hindi, which is one of the most morphologically rich languages and differs significantly in morphological structure from those languages. This paper therefore presents a POS tagger for Hindi using a hybrid approach that combines probability-based and rule-based methods. For tagging known words a unigram probability model is used, whereas unknown words are tagged using various lexical and contextual features. Finite-state automata are constructed to express the rules, which are then implemented with regular expressions. A tagset of 29 standard part-of-speech tags is prepared for the task, including two unique tags, a date tag and a time tag, which support all possible formats; regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the approach is to increase the correctness of automatic Hindi POS tagging while limiting the need for a large hand-annotated corpus: the probability-based model improves automatic tagging, and the rule-based model reduces the dependence on pre-tagged training data. Trained on a very small labeled set (around 9,000 words), the approach yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%.
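
The hybrid scheme described above, a unigram probability model for known words backed by pattern rules for unknown ones, can be illustrated with a minimal sketch. The toy training pairs, tag names, and regular expressions below are illustrative assumptions, not the paper's 29-tag tagset or its finite-state rules:

```python
import re
from collections import Counter, defaultdict

# Toy labeled pairs standing in for the paper's ~9,000-word Hindi training set.
TRAIN = [("राम", "NN"), ("ने", "PSP"), ("खाना", "NN"), ("खाया", "VM"),
         ("राम", "NN"), ("खाया", "VM")]

# Unigram model: each known word gets its most frequent training tag.
counts = defaultdict(Counter)
for word, tag_ in TRAIN:
    counts[word][tag_] += 1
unigram = {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Pattern-based rules (regular expressions standing in for the paper's
# finite-state automata) for dates, times, and numbers.
RULES = [
    (re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+$"), "NUM"),
]

def tag(tokens):
    out = []
    for tok in tokens:
        if tok in unigram:                 # known word: probability model
            out.append((tok, unigram[tok]))
            continue
        for pattern, t in RULES:           # unknown word: rule-based fallback
            if pattern.match(tok):
                out.append((tok, t))
                break
        else:
            out.append((tok, "UNK"))       # no rule fired
    return out

print(tag(["राम", "ने", "12/03/2018", "10:30", "42", "खाया"]))
```

The for/else fallback mirrors the paper's division of labor: the probability model handles anything seen in training, and the rules catch pattern-based categories such as dates and times.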

한국어 낭독과 자유 발화의 운율적 특성 (Korean prosodic properties between read and spontaneous speech)

  • 유승미;이석재
    • 말소리와 음성과학 / Vol. 14, No. 2 / pp.39-54 / 2022
  • The purpose of this study is to clarify prosodic differences between speech types by analyzing Korean read and spontaneous speech in the Korean portion of L2KSC (a speech corpus of Korean as a foreign language). To this end, articulation length per sentence, articulation rate, pause duration and pause frequency within a sentence, and sentence F0 were set as variables and analyzed with statistical methods (t-tests, correlation analysis, and regression analysis). The results showed that read and spontaneous speech differed structurally in the prosodic phrases composing each sentence, and the prosodic elements distinguishing the two speech types were articulation length, pause duration, and pause frequency. Statistically, read speech showed the highest correlation between articulation rate and articulation length, indicating that the longer a given sentence is, the faster the speaker talks. In spontaneous speech, however, the relationship between sentence articulation length and pause frequency was strong. Overall, in spontaneous speech short intonation phrases are produced continuously to build a sentence, which generates more pauses and thus longer sentences.
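
A minimal sketch of the statistical pipeline the study describes (t-test, correlation, and regression over per-sentence prosodic measures) might look as follows; the synthetic numbers stand in for the L2KSC measurements and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-sentence measurements: articulation length (s) and
# pause counts for read vs. spontaneous speech.
rng = np.random.default_rng(0)
read_len = rng.normal(3.0, 0.5, 50)
spon_len = rng.normal(3.6, 0.8, 50)
spon_pauses = np.clip(np.round(spon_len * 1.2 + rng.normal(0, 0.5, 50)), 0, None)

# Independent-samples t-test: does articulation length differ by speech type?
t, p = stats.ttest_ind(read_len, spon_len, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")

# Pearson correlation: articulation length vs. pause frequency (spontaneous).
r, p_r = stats.pearsonr(spon_len, spon_pauses)
print(f"r = {r:.2f}, p = {p_r:.4f}")

# Simple linear regression of pause count on articulation length.
reg = stats.linregress(spon_len, spon_pauses)
print(f"pauses ~ {reg.slope:.2f} * length + {reg.intercept:.2f}")
```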

지식베이스를 이용한 임베디드용 연속음성인식의 어휘 적용률 개선 (Vocabulary Coverage Improvement for Embedded Continuous Speech Recognition Using Knowledgebase)

  • 김광호;임민규;김지환
    • 대한음성학회지:말소리 / Vol. 68 / pp.115-126 / 2008
  • In this paper, we propose a vocabulary coverage improvement method for embedded continuous speech recognition (CSR) using a knowledgebase. A vocabulary in CSR is normally derived from a word frequency list, so vocabulary coverage depends on the corpus. In previous research, we presented an improved way of generating vocabularies using a part-of-speech (POS) tagged corpus: we analyzed all words paired with 101 of the 152 POS tags and decided on a set of words that must be included in vocabularies of any size. For the other 51 POS tags (e.g., nouns and verbs), however, the inclusion of words paired with those tags was still based on word frequencies counted on a corpus. In this paper, we propose a corpus-independent word inclusion method for noun-, verb-, and named-entity (NE)-related POS tags using a knowledgebase. For noun-related POS tags, we generate synonym groups and analyze their relative importance using Google search. We then categorize verbs by lemma and analyze the relative importance of each lemma from pre-analyzed statistics for verbs, and we determine the inclusion order of NEs through Google search. The proposed method shows better coverage on the test short message service (SMS) text corpus.
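
The core idea, a frequency-derived vocabulary whose coverage is then improved by forcing in knowledge-base entries, can be sketched roughly as below; the toy corpora and the KNOWLEDGEBASE set are hypothetical stand-ins for the paper's POS-tagged corpus and its Google-ranked NE/synonym lists:

```python
from collections import Counter

# Toy corpora standing in for the paper's training corpus and SMS test set.
train = "i will meet you at the station tomorrow at noon".split()
test = "meet me at seoul station tomorrow evening".split()

# Baseline: vocabulary of the N most frequent training words.
def freq_vocab(corpus, size):
    return {w for w, _ in Counter(corpus).most_common(size)}

# Knowledge-base override: words (e.g. named entities, key verb lemmas)
# forced into the vocabulary regardless of corpus frequency.
KNOWLEDGEBASE = {"seoul", "evening"}   # hypothetical NE/synonym-group entries

def coverage(vocab, corpus):
    return sum(w in vocab for w in corpus) / len(corpus)

base = freq_vocab(train, 8)
print(f"frequency-only coverage: {coverage(base, test):.2%}")
print(f"with knowledgebase:      {coverage(base | KNOWLEDGEBASE, test):.2%}")
```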


억양이 과장된 원어민 발화를 통한 영어 억양 교육과 평가 (Evaluation of Teaching English Intonation through Native Utterances with Exaggerated Intonation)

  • 윤규철
    • 말소리와 음성과학 / Vol. 3, No. 1 / pp.35-43 / 2011
  • The purpose of this paper is to evaluate the viability of employing the intonation exaggeration technique proposed in [4] for teaching English prosody to university students. Fifty-six female university students, twenty-two in a control group and thirty-four in an experimental group, participated in a teaching experiment as part of their regular coursework over a five-and-a-half-week period. For the study material of the experimental group, a set of utterances was synthesized whose intonation contours had been exaggerated, whereas the control group was given the same set without any intonation modification. Recordings were made both before and after the teaching experiment, and one sentence set was chosen for analysis. The parameters analyzed were the pitch range, the words containing the highest and lowest pitch points, and a 3-dimensional comparison of the three prosodic features [2]. An AXB test and a subjective rating test were also performed, along with a qualitative screening of the individual intonation contours. The results showed that the experimental group performed slightly better, in that their intonation contours were more similar to those of the model native speaker's utterances. This suggests that the intonation exaggeration technique can be employed in teaching English prosody to students.
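
The paper's exaggeration procedure follows its reference [4] and is not reproduced here; one plausible reading, scaling F0 excursions around the utterance mean, can be sketched as follows with a hypothetical pitch contour:

```python
import numpy as np

# Hypothetical F0 contour (Hz) sampled along one utterance.
f0 = np.array([180, 210, 250, 230, 200, 170, 160], dtype=float)

# One plausible reading of "intonation exaggeration": stretch F0 excursions
# around the utterance mean by a factor > 1 (the study's actual procedure
# follows its reference [4] and may differ).
def exaggerate(contour, factor=1.5):
    mean = contour.mean()
    return mean + factor * (contour - mean)

exag = exaggerate(f0)
print("pitch range before:", f0.max() - f0.min(), "Hz")
print("pitch range after: ", exag.max() - exag.min(), "Hz")
```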


예제 기반 대화 시스템을 위한 양태 분류 (Modality Classification for an Example-Based Dialogue System)

  • 김민정;홍금원;송영인;이연수;이도길;임해창
    • 대한음성학회지:말소리 / Vol. 68 / pp.75-93 / 2008
  • An example-based dialogue system utilizes utterance pairs stored in a dialogue database, and its most important task is to find the utterance most similar to the user's input utterance. Modality, characterized as conveying the speaker's involvement in the propositional content of a given utterance, is one of the core sentence features. For example, the sentence "I want to go to school." has a modality of hope. In this paper, we propose a modality classification system that predicts sentence modality in order to improve the performance of example-based dialogue systems. We also define a modality tag set for dialogue systems and validate it using a rule-based modality classification system. Experimental results show that our modality tag set and modality classification system improve the performance of an example-based dialogue system.
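
A rule-based modality classifier of the kind used to validate the tag set could look roughly like this; the tag names and cue patterns below are invented for illustration (and in English, unlike the paper's Korean dialogue data):

```python
import re

# Hypothetical modality tags and cue patterns; the paper defines its own
# tag set for Korean dialogue and validates it with rule-based classification.
RULES = [
    (re.compile(r"\b(want to|wish|hope to)\b"), "HOPE"),
    (re.compile(r"\b(must|have to|should)\b"), "OBLIGATION"),
    (re.compile(r"\?$"), "QUESTION"),
]

def classify_modality(utterance):
    for pattern, tag in RULES:
        if pattern.search(utterance.lower()):
            return tag
    return "STATEMENT"          # default modality

print(classify_modality("I want to go to school."))   # HOPE
print(classify_modality("Could you open the door?"))  # QUESTION
```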


신경회로망 이용한 한국어 음소 인식 (Korean Phoneme Recognition Using Neural Networks)

  • 김동국;정차균;정홍
    • 대한전기학회논문지 / Vol. 40, No. 4 / pp.360-373 / 1991
  • Since the 1970s, efficient speech recognition methods such as HMM and DTW have been introduced, primarily for speaker-dependent isolated words; these methods, however, have confronted difficulties in recognizing continuous speech. Since the early 1980s there has been a growing awareness that neural networks might be more appropriate, as demonstrated by English and Japanese phoneme recognition using neural networks. Korean phoneme recognition, which has dealt with only part of the vowel and consonant sets, still remains at an elementary level. In this light, we develop a system based on neural networks that can recognize the major Korean phonemes. Through experiments with two neural networks, SOFM and TDNN, we obtained remarkable results. In particular, with TDNN the recognition rate was about 93.78% for training data and 89.83% for test data.
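
A TDNN is essentially a stack of 1-D convolutions over a spectral feature sequence, so its structure can be sketched in a few lines of modern code; the layer sizes and feature dimensions below are illustrative, not those of the 1991 system:

```python
import torch
import torch.nn as nn

# Minimal TDNN-style phoneme classifier: time-delay layers are 1-D
# convolutions over a sequence of spectral feature frames.
class TDNN(nn.Module):
    def __init__(self, n_feats=16, n_phonemes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_feats, 32, kernel_size=3),  # delay window of 3 frames
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5),       # wider temporal context
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                # integrate over time
            nn.Flatten(),
            nn.Linear(32, n_phonemes),              # phoneme scores
        )

    def forward(self, x):            # x: (batch, n_feats, frames)
        return self.net(x)

model = TDNN()
frames = torch.randn(1, 16, 40)      # one utterance, 40 feature frames
print(model(frames).shape)           # torch.Size([1, 10])
```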

Some Goals and Components in Teaching English Pronunciation To Japanese EFL Students

  • Komoto, Yujin
    • 대한음성학회:학술대회논문집 / 대한음성학회 2000년도 7월 학술대회지 / pp.220-234 / 2000
  • This paper focuses on how and where to set learner goals in English phonetic education in Japan, especially at the threshold level, and on what components are necessary to achieve them, from both practical and theoretical perspectives. It first describes some issues, mainly through the speaker's own teaching plan and a review of the literature, including Morley (1991), Kajima (1989), Porcaro (1999), Matsui (1996), Lambacher (1995, 1996), Dalton and Seidlhofer (1994), and Murphy (1991). By comparing and analyzing this and other work, the speaker tries to set and elucidate reasonable, achievable goals that let students attain intelligibility for comprehensible communicative output. The paper then suggests the detailed components of a desirable pronunciation teaching plan that realizes a well-balanced curriculum between segmental and suprasegmental aspects.


어휘적 중의성 및 관용적 중의성을 처리하는 대뇌 영역 (The cerebral representation related to lexical ambiguity and idiomatic ambiguity)

  • 유기순;강홍모;조경덕;강명윤;남기춘
    • 대한음성학회:학술대회논문집 / 대한음성학회 2003년도 10월 학술대회지 / pp.79-82 / 2003
  • The purpose of this study is to examine the regions of the cerebrum that handle lexical and idiomatic ambiguity. The stimulus set consists of two parts, each with 20 sets of sentences; in each part, 10 sets are experimental conditions and the other 10 are control conditions. Each set contains two sentences, a 'context' sentence and a 'target' sentence, plus a sentence-verification question to guarantee the participants' concentration on the task. The results, based on 15 participants, showed significant activation in the right frontal lobe of the cerebral cortex for both kinds of ambiguity. This means that the right hemisphere participates in the resolution of ambiguity and that no regions are specialized for lexical or idiomatic ambiguity alone.


지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법 (Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality)

  • 최석재;이중원;권오병
    • 지능정보연구 / Vol. 23, No. 3 / pp.119-138 / 2017
  • Recently, SNS has established itself not only as a channel for personal communication but also as an important marketing channel. However, cybercrime has also evolved with advances in information and communication technology, and illegal advertisements are distributed on SNS in large volumes. As a result, thefts of personal information and financial losses occur frequently. This study proposes a methodology for analyzing unstructured data, namely promotional posts delivered via SNS, to determine which posts are related to financial fraud (e.g., illegal moneylending and illegal door-to-door sales). The main components of the framework are the process of building training data of illegal promotional posts, a way of constructing the input data that reflects the data's characteristics, and the selection of a classification algorithm and of the target information to extract. The method was actually used in a pilot test of a municipal government's financial fraud prevention program, and analysis on real data verified that its accuracy in identifying financial fraud posts is far superior to human judgment, keyword extraction (term frequency), and MLE.
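
The framework's final stage, turning unstructured promotional text into features for a discriminative classifier, can be sketched as below; the English toy posts and the TF-IDF-plus-logistic-regression pipeline are illustrative assumptions, since the study worked on Korean SNS text and compared several algorithms:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled promotional posts (1 = fraud-related, 0 = ordinary ad);
# stand-ins for the study's SNS training data.
texts = [
    "no-document instant loan call now",
    "low interest private lender anyone approved",
    "new cafe opening discount this week",
    "handmade soap sale free shipping",
]
labels = [1, 1, 0, 0]

# Unstructured text -> TF-IDF features -> linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["instant loan no documents needed"]))  # likely [1]
```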

자질집합선택 기반의 기계학습을 통한 한국어 기본구 인식의 성능향상 (Improving the Performance of Korean Text Chunking by Machine learning Approaches based on Feature Set Selection)

  • 황영숙;정후중;박소영;곽용재;임해창
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 29, No. 9 / pp.654-668 / 2002
  • In this paper, we present an empirical study on improving Korean text chunking through machine learning and feature set selection. We focus on two issues: selecting a feature set for Korean chunking, and alleviating data sparseness. To select a proper feature set, we use a heuristic method that searches the space of feature sets, using the performance estimated by a machine learning algorithm as a measure of the "incremental usefulness" of a particular feature set. In addition, to smooth data sparseness, we suggest using a general part-of-speech tag set and selective lexical information chosen in consideration of the characteristics of Korean. Experimental results showed that chunk tags and lexical information within a given context window are important features while spacing-unit information is less important, independently of the machine learning technique used. Furthermore, using selective lexical information not only gives a smoothing effect but also reduces the feature space compared with using all lexical information. Korean text chunking with the selected feature space achieved precision/recall of 90.99%/92.52% with memory-based learning and 93.39%/93.41% with decision tree learning.
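
The heuristic search over feature sets can be sketched as a greedy forward selection in which each candidate set is scored by estimated learner performance; the evaluate() scores below are made-up stand-ins for cross-validated chunking F1 with a memory-based or decision-tree learner:

```python
FEATURES = ["pos_tag", "lexical", "chunk_tag", "spacing"]

def evaluate(feature_set):
    # Toy scoring; in practice, train and validate a chunker on the candidate
    # set to estimate its "incremental usefulness".
    score = 0.0
    if "pos_tag" in feature_set:   score += 0.80
    if "chunk_tag" in feature_set: score += 0.08
    if "lexical" in feature_set:   score += 0.06
    if "spacing" in feature_set:   score -= 0.01  # spacing info less useful
    return score

def forward_select(features):
    selected, best = [], 0.0
    improved = True
    while improved:
        improved = False
        for f in set(features) - set(selected):
            score = evaluate(selected + [f])
            if score > best:              # keep the best single addition
                best, choice, improved = score, f, True
        if improved:
            selected.append(choice)
    return selected, best

sel, score = forward_select(FEATURES)
print(sel, round(score, 2))   # stops before adding the unhelpful feature
```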