• Title/Summary/Keyword: Part-of-Speech Set

Search results: 37

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa; Nain, Neeta; Nehra, Maninder
    • Journal of Multimedia Information System, v.5 no.3, pp.147-154, 2018
  • Natural language processing (NLP) is an emerging research area in which we study how machines can be used to perceive and alter text written in natural languages. We can perform different tasks on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging for the Hindi language. Part-of-speech tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text. These grammatical categories can be noun, verb, time, date, number, etc. Hindi is the most widely used and official language of India and is among the top five most spoken languages of the world. For English and other languages, a diverse range of POS taggers are available, but these taggers cannot be applied to Hindi, as Hindi is one of the most morphologically rich languages and its morphological structure differs significantly from theirs. Thus, a POS tagger for the Hindi language is presented in this work. The paper proposes a hybrid approach that combines probability-based and rule-based methods: a unigram probability model is used for tagging known words, whereas various lexical and contextual features are used for tagging unknown words. Finite-state automata are constructed to model the different rules, and regular expressions are then used to implement these rules. A tagset is also prepared for this task, containing 29 standard part-of-speech tags plus two unique tags, a date tag and a time tag, which support all possible formats. Regular expressions are used to implement all pattern-based tags such as time, date, number, and special symbols. The aim of the presented approach is to increase the correctness of automatic Hindi POS tagging while limiting the need for a large human-annotated corpus: the probability-based model increases automatic tagging, and the rule-based model reduces the dependence on a pre-annotated training corpus. The approach is based on a very small labeled training set (around 9,000 words) and yields a best precision of 96.54% and an average precision of 95.08%, as well as a best accuracy of 91.39% and an average accuracy of 88.15%.
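
A minimal sketch of the hybrid idea described in this abstract, assuming a toy unigram lexicon and a few illustrative regular-expression rules; the tag names, patterns, and example entries are hypothetical placeholders, not the paper's actual tagset or rules.

```python
import re

# Probability-based component: most frequent tag per known word (toy entries).
UNIGRAM_LEXICON = {
    "राम": "NN",    # noun (toy entry)
    "जाता": "VB",   # verb (toy entry)
    "है": "AUX",    # auxiliary (toy entry)
}

# Rule-based component: regular expressions for pattern-based tags.
RULES = [
    (re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$"), "DATE"),  # e.g. 12/05/2018
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),                  # e.g. 10:30
    (re.compile(r"^\d+$"), "NUM"),
    (re.compile(r"^[^\w\s]+$"), "SYM"),                        # special symbols
]

def tag(tokens):
    """Tag known words with the unigram lexicon, unknown words with rules."""
    tagged = []
    for tok in tokens:
        if tok in UNIGRAM_LEXICON:
            tagged.append((tok, UNIGRAM_LEXICON[tok]))
            continue
        for pattern, pos in RULES:
            if pattern.match(tok):
                tagged.append((tok, pos))
                break
        else:
            tagged.append((tok, "UNK"))  # fall back to an unknown-word tag
    return tagged

print(tag(["राम", "जाता", "है", "12/05/2018", "10:30", "!"]))
```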

Korean prosodic properties between read and spontaneous speech (한국어 낭독과 자유 발화의 운율적 특성)

  • Yu, Seungmi; Rhee, Seok-Chae
    • Phonetics and Speech Sciences, v.14 no.2, pp.39-54, 2022
  • This study aims to clarify the prosodic differences between speech types by examining the Korean read speech and spontaneous speech in the Korean part of the L2 Korean Speech Corpus (a speech corpus for Korean as a foreign language). To this end, articulation length, articulation speed, pause length and frequency, and the average fundamental frequency of each sentence were set as variables and analyzed with statistical methods (t-test, correlation analysis, and regression analysis). The results showed that read speech and spontaneous speech differ structurally in the prosodic phrases that make up each sentence, and that the prosodic elements differentiating the two speech types are articulation length, pause length, and pause frequency. The statistical results show that the correlation between articulation speed and articulation length was highest in read speech, indicating that the longer a given sentence is, the faster the speaker speaks. In spontaneous speech, however, the relationship between articulation length and pause frequency within a sentence was strong. Overall, spontaneous speech produces more pauses because short intonation phrases are built one after another to form a sentence, and as a result the sentence gets longer.
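
A minimal sketch of the kind of per-sentence statistics described in this abstract, using SciPy; the toy measurement arrays below are illustrative placeholders, not values from the corpus.

```python
from scipy import stats

articulation_length = [2.1, 3.4, 4.0, 5.2, 6.1]   # seconds per sentence (toy)
articulation_speed  = [4.8, 5.1, 5.6, 5.9, 6.3]   # syllables per second (toy)
pause_count_read    = [1, 1, 2, 2, 3]             # pauses per sentence (toy)
pause_count_spont   = [2, 3, 4, 5, 6]

# Correlation between articulation length and speed (the read-speech pattern).
r, p = stats.pearsonr(articulation_length, articulation_speed)
print(f"length vs. speed: r={r:.2f}, p={p:.3f}")

# Independent-samples t-test on pause frequency between the two speech types.
t, p = stats.ttest_ind(pause_count_read, pause_count_spont)
print(f"pause frequency: t={t:.2f}, p={p:.3f}")

# Simple linear regression: pause frequency predicted by articulation length.
slope, intercept, r_value, p_value, stderr = stats.linregress(
    articulation_length, pause_count_spont)
print(f"regression slope={slope:.2f}, R^2={r_value**2:.2f}")
```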

Vocabulary Coverage Improvement for Embedded Continuous Speech Recognition Using Knowledgebase (지식베이스를 이용한 임베디드용 연속음성인식의 어휘 적용률 개선)

  • Kim, Kwang-Ho; Lim, Min-Kyu; Kim, Ji-Hwan
    • MALSORI, v.68, pp.115-126, 2008
  • In this paper, we propose a vocabulary coverage improvement method for embedded continuous speech recognition (CSR) using a knowledgebase. A vocabulary in CSR is normally derived from a word frequency list, so the vocabulary coverage depends on the corpus. In previous research, we presented an improved way of generating a vocabulary using a part-of-speech (POS) tagged corpus: we analyzed all words paired with 101 of the 152 POS tags and decided on a set of words that must be included in vocabularies of any size. For the other 51 POS tags (e.g., nouns, verbs), however, the inclusion of words paired with those tags was still based on word frequencies counted on a corpus. In this paper, we propose a corpus-independent word inclusion method for noun-, verb-, and named entity (NE)-related POS tags using a knowledgebase. For noun-related POS tags, we generate synonym groups and analyze their relative importance using Google search. We then categorize verbs by lemma and analyze the relative importance of each lemma from pre-analyzed verb statistics. We determine the inclusion order of NEs through Google search. The proposed method shows better coverage on the test short message service (SMS) text corpus.
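
A minimal sketch of how vocabulary coverage on a test corpus might be measured, comparing a frequency-only vocabulary with one augmented by knowledgebase-derived words; all word lists here are hypothetical toy data, not the SMS corpus or the knowledgebase used in the paper.

```python
from collections import Counter

def coverage(vocabulary, test_tokens):
    """Fraction of test-corpus tokens covered by the vocabulary."""
    covered = sum(1 for tok in test_tokens if tok in vocabulary)
    return covered / len(test_tokens)

training_tokens = "the bank sent a text message about the loan".split()
test_tokens = "please check the loan rate at the bank branch today".split()

# Baseline: top-N words from a corpus frequency list.
freq_vocab = {w for w, _ in Counter(training_tokens).most_common(5)}

# Augmented: add knowledgebase-derived words (e.g., synonym groups, named
# entities) that the frequency list missed.
kb_words = {"rate", "branch", "today"}
augmented_vocab = freq_vocab | kb_words

print("baseline coverage :", coverage(freq_vocab, test_tokens))
print("augmented coverage:", coverage(augmented_vocab, test_tokens))
```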


Evaluation of Teaching English Intonation through Native Utterances with Exaggerated Intonation (억양이 과장된 원어민 발화를 통한 영어 억양 교육과 평가)

  • Yoon, Kyu-Chul
    • Phonetics and Speech Sciences, v.3 no.1, pp.35-43, 2011
  • The purpose of this paper is to evaluate the viability of employing the intonation exaggeration technique proposed in [4] for teaching English prosody to university students. Fifty-six female university students, twenty-two in a control group and thirty-four in an experimental group, participated in a teaching experiment as part of their regular coursework over a five-and-a-half-week period. For the study material of the experimental group, a set of utterances was synthesized whose intonation contours had been exaggerated, whereas the control group was given the same set without any intonation modification. Recordings were made both before and after the teaching experiment, and one sentence set was chosen for analysis. The parameters analyzed were the pitch range, the words containing the highest and lowest pitch points, and the 3-dimensional comparison of the three prosodic features [2]. An AXB test and a subjective rating test were also performed, along with a qualitative screening of the individual intonation contours. The results showed that the experimental group performed slightly better in that their intonation contours were more similar to those of the model native speaker's utterances. This suggests that the intonation exaggeration technique can be employed in teaching English prosody to students.
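
A minimal sketch of the general idea of exaggerating an intonation contour (expanding F0 excursions around a reference level); the scale factor, the treatment of unvoiced frames, and the toy contour are assumptions for illustration, not the resynthesis procedure of the cited work.

```python
import numpy as np

def exaggerate_f0(f0_hz, factor=1.5):
    """Expand pitch excursions around the utterance's mean F0."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    voiced = f0_hz > 0                       # treat 0 Hz as unvoiced frames
    reference = f0_hz[voiced].mean()
    exaggerated = f0_hz.copy()
    exaggerated[voiced] = reference + factor * (f0_hz[voiced] - reference)
    return exaggerated

contour = [0, 180, 210, 250, 230, 190, 0]    # toy F0 contour in Hz
print(exaggerate_f0(contour))
```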


Modality Classification for an Example-Based Dialogue System (예제 기반 대화 시스템을 위한 양태 분류)

  • Kim, Min-Jeong; Hong, Gum-Won; Song, Young-In; Lee, Yeon-Soo; Lee, Do-Gil; Rim, Hae-Chang
    • MALSORI, v.68, pp.75-93, 2008
  • An example-based dialogue system utilizes the many utterance pairs stored in a dialogue database. The most important part of such a system is finding the utterance most similar to the user's input utterance. Modality, which conveys the speaker's involvement in the propositional content of a given utterance, is one of the core sentence features. For example, the sentence "I want to go to school." has a modality of hope. In this paper, we propose a modality classification system that predicts sentence modality in order to improve the performance of example-based dialogue systems. We also define a modality tag set for a dialogue system and validate this tag set using a rule-based modality classification system. Experimental results show that our modality tag set and modality classification system improve the performance of an example-based dialogue system.
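
A minimal rule-based sketch of sentence-modality prediction in the spirit of the validation step described above; the modality tags and cue patterns below are hypothetical stand-ins, not the paper's tag set or rules.

```python
import re

# Ordered list of (cue pattern, modality tag); first match wins.
MODALITY_RULES = [
    (re.compile(r"\b(want to|hope to|wish)\b", re.I), "hope"),
    (re.compile(r"\b(please|could you|would you)\b", re.I), "request"),
    (re.compile(r"\b(will|going to)\b", re.I), "intention"),
    (re.compile(r"\?$"), "question"),
]

def classify_modality(sentence):
    """Return the first modality whose cue pattern matches the sentence."""
    for pattern, tag in MODALITY_RULES:
        if pattern.search(sentence.strip()):
            return tag
    return "statement"  # default modality when no cue matches

print(classify_modality("I want to go to school."))   # -> hope
print(classify_modality("Could you open the door?"))  # -> request
```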


Korean Phoneme Recognition Using Neural Networks (신경회로망 이용한 한국어 음소 인식)

  • 김동국; 정차균; 정홍
    • The Transactions of the Korean Institute of Electrical Engineers, v.40 no.4, pp.360-373, 1991
  • Since the 1970s, efficient speech recognition methods such as HMM and DTW have been introduced, primarily for speaker-dependent isolated words. These methods, however, have faced difficulties in recognizing continuous speech. Since the early 1980s, there has been a growing awareness that neural networks might be more appropriate, and English and Japanese phoneme recognition using neural networks has been actively studied. Dealing with only a part of the vowel or consonant set, Korean phoneme recognition still remains at an elementary level. In this light, we develop a system based on neural networks that can recognize the major Korean phonemes. Through experiments using two neural networks, SOFM and TDNN, we obtained remarkable results. Especially in the case of TDNN, the recognition rate was about 93.78% for training data and 89.83% for test data.
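
A minimal NumPy sketch of a time-delay (TDNN-style) layer over speech feature frames, shown only to make the architecture concrete; the layer sizes, context width, and random input are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def tdnn_layer(frames, weights, bias, context=3):
    """Apply one time-delay layer: each output frame sees `context` input frames."""
    n_frames, n_feats = frames.shape
    outputs = []
    for t in range(n_frames - context + 1):
        window = frames[t:t + context].reshape(-1)   # concatenate the context window
        outputs.append(np.tanh(window @ weights + bias))
    return np.array(outputs)

frames = rng.standard_normal((20, 16))               # 20 frames x 16 features (toy input)
weights = rng.standard_normal((3 * 16, 8)) * 0.1     # context of 3 frames -> 8 hidden units
bias = np.zeros(8)

hidden = tdnn_layer(frames, weights, bias, context=3)
print(hidden.shape)                                  # (18, 8)
```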

Some Goals and Components in Teaching English Pronunciation To Japanese EFL Students

  • Komoto, Yujin
    • Proceedings of the KSPS conference, 2000.07a, pp.220-234, 2000
  • This paper focuses on how and where to set learner goals in English phonetic education in Japan, especially at the threshold level, and on what components are necessary to achieve them, from both practical and theoretical perspectives. It first describes some issues mainly through the speaker's own teaching plan and a literature review of researchers such as Morley (1991), Kajima (1989), Porcaro (1999), Matsui (1996), Lambacher (1995, 1996), Dalton and Seidlhofer (1994), and Murphy (1991). By comparing and analyzing these and other studies, the speaker tries to set and elucidate reasonable and achievable goals for students to attain intelligibility for comprehensible communicative output. The paper then suggests detailed components that form an essential part of a desirable pronunciation teaching plan in order to realize a well-balanced curriculum between segmental and suprasegmental aspects.


The cerebral representation related to lexical ambiguity and idiomatic ambiguity (어휘적 중의성 및 관용적 중의성을 처리하는 대뇌 영역)

  • Yu Gisoon; Kang Hongmo; Jo Kyungduk; Kang Myungyoon; Nam Kichun
    • Proceedings of the KSPS conference, 2003.10a, pp.79-82, 2003
  • The purpose of this study is to examine the regions of the cerebrum that handle lexical and idiomatic ambiguity. The stimulus set consists of two parts, each containing 20 sets of sentences; for each part, 10 sets are experimental conditions and the other 10 are control conditions. Each set has two sentences, a 'context' sentence and a 'target' sentence, plus a sentence-verification question to ensure the patients' concentration on the task. The results, based on 15 patients, showed significant activation in the right frontal lobe of the cerebral cortex for both kinds of ambiguity. This means that the right hemisphere participates in the resolution of ambiguity and that there are no regions specific to lexical ambiguity or idiomatic ambiguity alone.


Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae; Lee, Jungwon; Kwon, Ohbyung
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.119-138, 2017
  • Recently, SNS has become an important channel for marketing as well as for personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and even monetary damage occurs more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First of all, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; among these, we focused on risk identification in this paper. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers, who decided whether each item was related to cybercriminality, particularly financial fraud. We then selected some of the terms as keywords if they were nominals or symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs; more than 820,000 articles were collected. The collected articles were refined through preprocessing and turned into learning data. The preprocessing is divided into a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, each sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the valid part-of-speech selection step, only two kinds of part-of-speech, nouns and symbols, are considered: nouns refer to things and therefore express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. Each selected item is labeled 'legal' or 'illegal', since turning the selected data into learning data requires classifying whether each item is legitimate or not. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set (70%) and a test data set (30%). SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and a Collective Intelligence method, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal visit sales, which is clearly superior to that of Term Frequency, MLE, etc. Hence, the results suggest that the proposed method is valid and practically usable.
In this paper, we propose a framework for crisis management caused by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis. Moreover, the study will also contribute to practitioners in the fields of brand management and opinion mining.
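
A minimal sketch of the classification step described in this abstract, using scikit-learn in place of the authors' toolchain and the parameter values reported above (70/30 split, RBF SVM with gamma=0.5 and cost=10); the tiny noun/symbol token lists and their labels are fabricated placeholders, not the SNS data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Documents reduced to nouns and symbols after morphological analysis (toy data).
docs = [
    "daechul interest ! ! contact",    # illegal-looking loan advertisement
    "sachae quick money $ $ call",     # illegal-looking private-loan advertisement
    "bank loan counseling schedule",   # legitimate notice
    "city hall loan policy briefing",  # legitimate notice
] * 10                                 # repeated so the split has enough samples
labels = ["illegal", "illegal", "legal", "legal"] * 10

# Document-term matrix, then a 70/30 train/test split as in the paper.
X = CountVectorizer(token_pattern=r"[^\s]+").fit_transform(docs)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=42, stratify=labels)

# SVM with the reported parameter values (gamma=0.5, cost=10).
clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```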

Improving the Performance of Korean Text Chunking by Machine learning Approaches based on Feature Set Selection (자질집합선택 기반의 기계학습을 통한 한국어 기본구 인식의 성능향상)

  • Hwang, Young-Sook; Chung, Hoo-jung; Park, So-Young; Kwak, Young-Jae; Rim, Hae-Chang
    • Journal of KIISE: Software and Applications, v.29 no.9, pp.654-668, 2002
  • In this paper, we present an empirical study on improving Korean text chunking through machine learning and feature set selection. We focus on two issues: selecting a feature set for Korean chunking, and alleviating data sparseness. To select a proper feature set, we use a heuristic method that searches the space of feature sets, using the performance estimated by a machine learning algorithm as a measure of the "incremental usefulness" of a particular feature set. In addition, to smooth the data sparseness, we suggest using a general part-of-speech tag set and selective lexical information, taking the characteristics of the Korean language into consideration. Experimental results showed that chunk tags and lexical information within a given context window are important features, and that spacing-unit information is less important than the others, independently of the machine learning technique used. Furthermore, using the selective lexical information gives not only a smoothing effect but also a smaller feature space than using all lexical information. Korean text chunking based on memory-based learning and on decision tree learning with the selected feature set achieved precision/recall of 90.99%/92.52% and 93.39%/93.41%, respectively.
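
A minimal sketch of the "incremental usefulness" idea: greedily grow a feature set, keeping a candidate feature only if it improves the performance estimated by a learner. The synthetic data and the decision-tree learner stand in for the chunking features and learners used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a feature matrix of chunking contexts.
X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=0)

selected, best_score = [], 0.0
for feature in range(X.shape[1]):
    candidate = selected + [feature]
    # Estimated performance of the learner on the candidate feature set.
    score = cross_val_score(DecisionTreeClassifier(random_state=0),
                            X[:, candidate], y, cv=5).mean()
    if score > best_score:            # keep the feature only if it helps
        selected, best_score = candidate, score

print("selected features:", selected, "estimated accuracy:", round(best_score, 3))
```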