• Title/Summary/Keyword: Part of speech

Korean Part-of-Speech Tagging using Disambiguation Rules for Ambiguous Word and Statistical Information (어휘별 중의성 제거 규칙과 통계 정보를 이용한 한국어 품사 태깅)

  • Ahn, Kwang-Mo;Han, Kyou-Youl;Seo, Young-Hoon
    • The Journal of the Korea Contents Association / v.9 no.2 / pp.18-26 / 2009
  • Hybrid part-of-speech tagging approaches can be robust, easily extendable, and accurate because they combine the advantages of statistical and rule-based approaches. However, conventional hybrid part-of-speech tagging systems rarely resolve the morphological ambiguities that statistical information cannot, because the coverage of their rules is narrow. We therefore define disambiguation rules for individual ambiguous words based on the syntax and semantics of the surrounding words. We select the words that account for the top 50% of ambiguities in the Sejong corpus and build 1,814 rules for them. The accuracy of our hybrid part-of-speech tagging system using these rules is 98.28%.
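
The rule-plus-statistics design described above can be pictured with a short sketch: word-specific disambiguation rules are tried first, and a statistical (most-frequent-tag) fallback resolves the rest. The rule format, the toy morpheme lexicon, and the Sejong-style tag names below are illustrative assumptions, not the paper's actual 1,814 rules or corpus statistics.

```python
# Minimal sketch of a hybrid tagger: per-word disambiguation rules are tried
# first, and a statistical (most-frequent-tag) fallback resolves the rest.
# The rule format and tiny lexicon are illustrative assumptions.

from typing import Callable, Dict, List, Optional, Tuple

Token = str
Tag = str
Context = Tuple[List[Token], int]            # (sentence, position of target word)
Rule = Callable[[Context], Optional[Tag]]    # returns a tag, or None if it does not fire

# Hypothetical word-specific rules keyed by the ambiguous surface form.
RULES: Dict[Token, List[Rule]] = {
    # Korean "이" is ambiguous (subject particle vs. determiner); a toy rule
    # treats it as a subject particle when it follows another word.
    "이": [lambda ctx: "JKS" if ctx[1] > 0 else None],
}

# Hypothetical unigram statistics: most frequent tag per word, plus a default.
MOST_FREQUENT_TAG: Dict[Token, Tag] = {"이": "MM", "책": "NNG"}
DEFAULT_TAG: Tag = "NNG"

def tag_sentence(sentence: List[Token]) -> List[Tag]:
    tags: List[Tag] = []
    for i, word in enumerate(sentence):
        tag: Optional[Tag] = None
        for rule in RULES.get(word, []):       # word-specific rules first
            tag = rule((sentence, i))
            if tag is not None:
                break
        if tag is None:                        # statistical fallback
            tag = MOST_FREQUENT_TAG.get(word, DEFAULT_TAG)
        tags.append(tag)
    return tags

if __name__ == "__main__":
    print(tag_sentence(["책", "이"]))          # -> ['NNG', 'JKS']
```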

Japanese Speech Based Fuzzy Man-Machine Interface of Manipulators

  • Izumi, Kiyotaka;Watanabe, Keigo;Tamano, Yuya;Kiguchi, Kazuo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.603-608 / 2003
  • Recently, personal and home robots have been developed by many companies and research groups, and speech is considered a generally effective interface for their users. In this paper, a Japanese speech-based man-machine interface system that reflects the fuzziness of natural language on robots is discussed, using fuzzy reasoning. The system consists of a part that derives an action command and a part that modifies the derived command. In particular, a problem unique to Japanese is solved by applying the morphological analyzer ChaSen. The proposed system is applied to the motion control of a robot manipulator. The experimental results show that the proposed system can map the same voice command to different actual command levels according to the current state of the robot.
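
A minimal sketch of the command-modification idea: a vague degree word recognized from speech is mapped to a concrete motion step whose size also depends on the robot's current state, via simple fuzzy membership functions. The membership functions, labels, and scaling rule below are assumptions; the paper's actual fuzzy rule base and ChaSen front end are not reproduced.

```python
# Fuzzy modification of a vague spoken command ("a little", "a lot") based on
# how close the manipulator is to its joint limit. All sets and constants are
# illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets over the normalized remaining range [0, 1] to the joint limit.
def near_limit(d):     return tri(d, -0.01, 0.0, 0.5)
def far_from_limit(d): return tri(d, 0.3, 1.0, 1.01)

# Base step sizes (degrees) associated with vague degree words.
BASE_STEP = {"a little": 5.0, "somewhat": 15.0, "a lot": 30.0}

def command_step(degree_word: str, remaining_range: float) -> float:
    """Defuzzify: shrink the step when the manipulator is near its limit."""
    w_near, w_far = near_limit(remaining_range), far_from_limit(remaining_range)
    base = BASE_STEP[degree_word]
    # Weighted average of a conservative (20%) and a full-size step.
    return (w_near * 0.2 * base + w_far * base) / (w_near + w_far + 1e-9)

print(command_step("a little", 0.9))   # far from the limit -> nearly the full 5 degrees
print(command_step("a little", 0.1))   # near the limit -> a much smaller step
```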

FPGA Implementation of Speech Processor for Cochlear Implant (청각보철장치를 위한 어음 발췌기의 FPGA 구현)

  • Park, S.J.;Hong, M.S.;Shin, J.I.;Park, S.H.
    • Proceedings of the KOSOMBE Conference / v.1998 no.11 / pp.163-164 / 1998
  • In this paper, the digital speech processing part of a cochlear implant for patients with sensorineural hearing disorders is implemented and simulated. We implement the speech processing part by dividing it into three smaller parts: a filterbank, a pitch detector, and a band-mapping stage. From the results, we conclude that the digital speech processing algorithm can be implemented entirely in an FPGA, which means the cochlear implant can be made very small.
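
The three stages named in the abstract can be modeled in software as a rough sketch. The band edges, frame length, and electrode-level mapping below are assumptions, not the paper's FPGA design.

```python
# Software model of the three stages: filterbank, pitch detection, band mapping.
# Band edges, frame size and the electrode mapping are illustrative assumptions.

import numpy as np
from scipy.signal import butter, lfilter

FS = 16000                                                     # sampling rate (Hz), assumed
BANDS = [(200, 500), (500, 1000), (1000, 2000), (2000, 4000)]  # assumed band edges (Hz)

def filterbank(x):
    """Split the signal into band-limited channels with Butterworth bandpass filters."""
    channels = []
    for lo, hi in BANDS:
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        channels.append(lfilter(b, a, x))
    return channels

def detect_pitch(x, fmin=60, fmax=400):
    """Crude autocorrelation pitch detector over one frame."""
    x = x - np.mean(x)
    corr = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(FS / fmax), int(FS / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return FS / lag

def band_mapping(channels, n_levels=8):
    """Map each channel's envelope energy to a quantized stimulation level."""
    energies = np.array([np.sqrt(np.mean(c ** 2)) for c in channels])
    scaled = energies / (energies.max() + 1e-12)
    return np.round(scaled * (n_levels - 1)).astype(int)

if __name__ == "__main__":
    t = np.arange(0, 0.05, 1 / FS)
    frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 1500 * t)
    ch = filterbank(frame)
    print("pitch (Hz):", round(detect_pitch(frame)))
    print("stimulation levels:", band_mapping(ch))
```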

Implementation of Extracting Specific Information by Sniffing Voice Packet in VoIP

  • Lee, Dong-Geon;Choi, WoongChul
    • International Journal of Advanced Smart Convergence / v.9 no.4 / pp.209-214 / 2020
  • VoIP technology has been widely used for exchanging voice or image data over IP networks. VoIP, often called Internet telephony, sends and receives voice data over the RTP protocol during a session. However, there is a risk of exposure of the voice data in VoIP using RTP, because the RTP protocol has no specification for encrypting the original data. We implement programs that can extract meaningful information, i.e., the information the program user wants to obtain, from the user's dialogue. The implementation has two parts: a client part, which inputs the keyword of the information the user wants to obtain, and a server part, which sniffs the packets and performs the speech recognition process. We use the Google Speech API from Google Cloud, which uses machine learning for speech recognition. Finally, we discuss the usability and limitations of the implementation with an example.
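
A minimal sketch of the server part under stated assumptions: UDP/RTP packets are sniffed, the fixed 12-byte RTP header is stripped, the G.711 payload is decoded to PCM, and the transcript is searched for the client's keyword. The port number, payload codec, and use of the google-cloud-speech client are assumptions; the paper's exact pipeline may differ.

```python
# Sniff RTP, decode the payload, transcribe, and search for a keyword.
# Assumes G.711 mu-law audio on an assumed port; not the paper's exact code.

import audioop                      # stdlib G.711 decoding (removed in Python 3.13)
from scapy.all import sniff, UDP    # pip install scapy
from google.cloud import speech     # pip install google-cloud-speech

RTP_PORT = 5004                     # assumed RTP port
pcm_chunks = []

def collect_rtp(pkt):
    """Strip the fixed 12-byte RTP header and decode mu-law payload to 16-bit PCM."""
    if UDP in pkt and pkt[UDP].dport == RTP_PORT:
        payload = bytes(pkt[UDP].payload)[12:]
        pcm_chunks.append(audioop.ulaw2lin(payload, 2))

def transcribe(pcm: bytes, sample_rate: int = 8000) -> str:
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=sample_rate,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=pcm)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

if __name__ == "__main__":
    keyword = input("keyword to watch for: ")            # the client-supplied keyword
    sniff(filter=f"udp port {RTP_PORT}", prn=collect_rtp, timeout=30)
    text = transcribe(b"".join(pcm_chunks))
    print("MATCH" if keyword.lower() in text.lower() else "no match", "-", text)
```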

Determining the Relative Differences of Emotional Speech Using Vocal Tract Ratio

  • Wang, Jianglin;Jo, Cheol-Woo
    • Speech Sciences / v.13 no.1 / pp.109-116 / 2006
  • In this paper, our study focuses on the differences among emotional speech in three vocal tract sections. The vocal tract area was computed from the area function of the emotional speech, and the total vocal tract was divided into three sections (a vocal fold section, a middle section, and a lip section) to measure the differences of emotional speech in each section. The experimental data include speech in six emotions from 3 males and 3 females; the six emotions are neutral, happiness, anger, sadness, fear, and boredom. The measured difference is computed as the ratio of each emotional speech to the normal speech. The experimental results show that there is no remarkable difference at the lip section, but fear and sadness produce a large change at the vocal fold section.
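
The ratio measure can be sketched as follows, assuming an equal three-way split of the area function and a mean-area statistic per section (both assumptions, since the abstract does not specify them).

```python
# Per-section ratio of emotional to neutral speech from a vocal tract area
# function (vocal folds -> lips). Split and statistic are assumptions.

import numpy as np

def section_means(area_function: np.ndarray) -> np.ndarray:
    """Mean cross-sectional area of the vocal-fold, middle and lip thirds."""
    thirds = np.array_split(area_function, 3)     # glottis end first, lip end last
    return np.array([seg.mean() for seg in thirds])

def section_ratios(emotional: np.ndarray, neutral: np.ndarray) -> np.ndarray:
    """Per-section ratio of emotional to neutral speech (1.0 = no change)."""
    return section_means(emotional) / section_means(neutral)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = np.abs(rng.normal(3.0, 0.5, 24))    # toy 24-tube area function (cm^2)
    fear = neutral * np.r_[np.full(8, 1.6), np.full(8, 1.1), np.full(8, 1.0)]
    print(section_ratios(fear, neutral))          # large change near the vocal folds
```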

An Analysis of English Listening Items on the TOEFL (TOEFL의 듣기문항 분석을 통한 한국대학생 듣기 학습효과)

  • Cha, Kyung-Whan;Yoo, Yoon-Hee
    • Speech Sciences / v.7 no.2 / pp.157-175 / 2000
  • The aim of this paper was to diagnose Korean college students' listening skills on the TOEFL. The researchers identified which section, among TOEFL listening Parts A, B, and C, is most easily teachable and improvable during the period of a semester. First, the results show that Korean students tend to have lower scores in Part A than in Part B or Part C, indicating that the short informal conversations do not give students sufficient clues and that they do not have enough time to infer the answer. Second, the results revealed that students showed the lowest progress in Part B after they studied TOEFL listening items and essential idioms for the listening section for 13 weeks. Because students had little experience learning informal, as opposed to formal, conversation in English, it is harder to achieve an improved grade in Part B, which consists of informal conversations. Nevertheless, after a semester-long listening course, the average score on the TOEFL listening sections increased.

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System / v.5 no.3 / pp.147-154 / 2018
  • Natural language processing (NLP) is an emerging research area in which we study how machines can be used to perceive and manipulate text written in natural languages. Different tasks can be performed on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging for Hindi. Part-of-speech tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories can be noun, verb, time, date, number, etc. Hindi is the most widely used and official language of India and is among the top five most spoken languages in the world. For English and other languages a diverse range of POS taggers is available, but these taggers cannot be applied to Hindi, since Hindi is one of the most morphologically rich languages and its morphological structure differs significantly from theirs. Thus, in this work a POS tagger for Hindi is presented. For Hindi POS tagging, a hybrid approach that combines probability-based and rule-based methods is used: a unigram probability model tags known words, whereas various lexical and contextual features are used to tag unknown words. Finite-state automata are constructed to express the different rules, which are then implemented with regular expressions. A tagset containing 29 standard part-of-speech tags is also prepared for this task; it includes two unique tags, a date tag and a time tag, which support all possible formats. Regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the presented approach is to increase the correctness of automatic Hindi POS tagging while limiting the need for a large human-annotated corpus: the probability-based model increases automatic tagging, and the rule-based model reduces the dependence on an already trained corpus. The approach is based on a very small labeled training set (around 9,000 words) and yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%.
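
A minimal sketch of the hybrid scheme: known words get their unigram most-likely tag, pattern-based tokens (time, date, number, symbols) are caught by regular-expression rules, and remaining unknown words fall back to a default tag. The regexes, tag names, and tiny lexicon are illustrative assumptions, not the paper's 29-tag tagset or trained model.

```python
# Hybrid Hindi POS tagging sketch: unigram lexicon + regex rules + default tag.
# Lexicon, patterns and tags are toy assumptions for illustration only.

import re
from typing import Dict, List

UNIGRAM_TAGS: Dict[str, str] = {"राम": "NNP", "खाता": "VM", "है": "VAUX"}  # toy lexicon

PATTERN_RULES = [
    (re.compile(r"^\d{1,2}[:.]\d{2}$"), "TIME"),                 # e.g. 10:30
    (re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$"), "DATE"),    # e.g. 15/08/2018
    (re.compile(r"^\d+$"), "NUM"),
    (re.compile(r"^[^\w\s]+$", re.UNICODE), "SYM"),
]
DEFAULT_TAG = "NN"

def tag(tokens: List[str]) -> List[str]:
    tags = []
    for tok in tokens:
        if tok in UNIGRAM_TAGS:                    # probability-based part (known words)
            tags.append(UNIGRAM_TAGS[tok])
            continue
        for pattern, t in PATTERN_RULES:           # rule-based part (pattern tokens)
            if pattern.match(tok):
                tags.append(t)
                break
        else:
            tags.append(DEFAULT_TAG)               # unknown-word fallback
    return tags

print(tag(["राम", "15/08/2018", "को", "10:30", "खाता", "है"]))
```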

Improving transformer-based speech recognition performance using data augmentation by local frame rate changes (로컬 프레임 속도 변경에 의한 데이터 증강을 이용한 트랜스포머 기반 음성 인식 성능 향상)

  • Lim, Seong Su;Kang, Byung Ok;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea / v.41 no.2 / pp.122-129 / 2022
  • In this paper, we propose a method to improve the performance of Transformer-based speech recognizers using data augmentation that locally adjusts the frame rate. First, the start time and length of the part of the original speech data to be augmented are randomly selected. Then, the frame rate of the selected part is changed to a new frame rate by linear interpolation. Experimental results using the Wall Street Journal and LibriSpeech speech databases showed that convergence took longer than for the baseline, but recognition accuracy improved in most cases. To further improve the performance, parameters such as the length and speed of the selected parts were optimized. The proposed method achieved relative performance improvements of 11.8 % and 14.9 % over the baseline on the Wall Street Journal and LibriSpeech speech databases, respectively.
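
A minimal sketch of the augmentation, assuming the segment-length and speed ranges below (the paper tunes these parameters): a random segment of the waveform is resampled to a new frame rate by linear interpolation, locally speeding up or slowing down the speech.

```python
# Local frame-rate change: resample one random segment by linear interpolation.
# Parameter ranges are illustrative assumptions, not the paper's tuned values.

import numpy as np

def local_frame_rate_change(x: np.ndarray, rng: np.random.Generator,
                            min_len: int = 4000, max_len: int = 16000,
                            min_speed: float = 0.8, max_speed: float = 1.2) -> np.ndarray:
    """Resample one randomly chosen segment of x by a random speed factor."""
    seg_len = int(rng.integers(min_len, max_len))
    start = int(rng.integers(0, max(1, len(x) - seg_len)))
    speed = rng.uniform(min_speed, max_speed)
    segment = x[start:start + seg_len]
    new_len = max(1, int(round(len(segment) / speed)))
    # Linear interpolation onto the new time grid.
    old_t = np.arange(len(segment))
    new_t = np.linspace(0, len(segment) - 1, new_len)
    warped = np.interp(new_t, old_t, segment)
    return np.concatenate([x[:start], warped, x[start + seg_len:]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wav = np.sin(2 * np.pi * 440 * np.arange(0, 1.0, 1 / 16000))   # 1 s toy waveform
    aug = local_frame_rate_change(wav, rng)
    print(len(wav), "->", len(aug))
```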

Channel Coder Implementation and Performance Analysis for Speech Coding: Considering Bit Importance of Speech Information - Part III (음성 부호기용 채널 부호화기의 구현 및 성능 분석)

  • 강법주;김선영;김상천;김영식
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.4 / pp.484-490 / 1990
  • In a speech coding scheme, because information bits have different sensitivities to channel errors, the channel coder combined with the speech coder should use a variable coding rate that reflects the importance of the speech information bits. In realizing a 4 kbps channel coder for 12 kbps speech, this paper chose the channel coding method by analyzing the hard-decision post-decoding error rate of RCPC (Rate Compatible Punctured Convolutional) codes and the bit error sensitivity of the 12 kbps speech coder. Under coherent QPSK and a Rayleigh fading channel, the performance analysis showed that 4-level unequal error protection obtained a 10 dB gain in speech SEGSNR compared with the case of no channel coding at a 7 dB channel SNR.
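
A minimal sketch of unequal error protection, with repetition coding standing in for the paper's RCPC codes: bits are grouped by error sensitivity and the more important classes get a stronger (lower-rate) code. The 4-class split and the repetition factors are assumptions.

```python
# Unequal error protection sketch: repetition coding per bit-importance class.
# Repetition stands in for RCPC; class sizes and factors are assumptions.

from typing import List

# Hypothetical bit classes: (number of bits in class, repetition factor).
# Class 1 is most sensitive to channel errors, class 4 is unprotected.
BIT_CLASSES = [(30, 5), (60, 3), (90, 2), (60, 1)]

def encode(frame_bits: List[int]) -> List[int]:
    """Repeat each bit according to its class's protection level."""
    assert len(frame_bits) == sum(n for n, _ in BIT_CLASSES)
    coded, pos = [], 0
    for n_bits, rep in BIT_CLASSES:
        for bit in frame_bits[pos:pos + n_bits]:
            coded.extend([bit] * rep)
        pos += n_bits
    return coded

def decode(coded_bits: List[int]) -> List[int]:
    """Majority-vote each repeated group back to a single bit."""
    decoded, pos = [], 0
    for n_bits, rep in BIT_CLASSES:
        for _ in range(n_bits):
            group = coded_bits[pos:pos + rep]
            decoded.append(1 if sum(group) * 2 > rep else 0)
            pos += rep
    return decoded

if __name__ == "__main__":
    frame = [i % 2 for i in range(240)]           # toy 240-bit speech frame
    coded = encode(frame)
    print(len(coded), "coded bits")               # variable rate from unequal protection
    assert decode(coded) == frame
```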

An analysis of Speech Acts for Korean Using Support Vector Machines (지지벡터기계(Support Vector Machines)를 이용한 한국어 화행분석)

  • En Jongmin;Lee Songwook;Seo Jungyun
    • The KIPS Transactions: Part B / v.12B no.3 s.99 / pp.365-368 / 2005
  • We propose a speech act analysis method for Korean dialogue using Support Vector Machines (SVM). We use the lexical form of each word, its part-of-speech (POS) tag, and bigrams of POS tags as sentence features, and the context of the previous utterance as context features. Informative features are selected by the chi-square statistic. After the SVM is trained with the selected features, SVM classifiers determine the speech act of each utterance. In experiments, we obtained an overall accuracy of 90.54% on a dialogue corpus for the hotel reservation domain.
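
A minimal sketch of the pipeline, assuming scikit-learn and a toy English corpus in place of the Korean hotel-reservation dialogue corpus: sentence and context features are vectorized, the most informative ones are kept by a chi-square test, and a linear SVM predicts the speech act.

```python
# Speech act classification sketch: words + POS tags + POS bigrams + previous
# act -> chi-square feature selection -> linear SVM. Data is a toy assumption.

from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def features(words, pos_tags, prev_act):
    """Bag of words, POS tags, POS-tag bigrams, and the previous speech act."""
    feats = {f"w={w}": 1 for w in words}
    feats.update({f"p={p}": 1 for p in pos_tags})
    feats.update({f"pb={a}_{b}": 1 for a, b in zip(pos_tags, pos_tags[1:])})
    feats["prev_act=" + prev_act] = 1
    return feats

# Toy training utterances: (words, POS tags, previous speech act) -> speech act.
train_X = [
    features(["I", "want", "a", "room"], ["PRP", "VB", "DT", "NN"], "none"),
    features(["do", "you", "have", "a", "room"], ["VB", "PRP", "VB", "DT", "NN"], "none"),
    features(["yes", "we", "do"], ["UH", "PRP", "VB"], "ask-ref"),
    features(["how", "much", "is", "it"], ["WRB", "JJ", "VB", "PRP"], "response"),
]
train_y = ["request", "ask-ref", "response", "ask-ref"]

model = make_pipeline(DictVectorizer(), SelectKBest(chi2, k=10), LinearSVC())
model.fit(train_X, train_y)

test = features(["do", "you", "have", "breakfast"], ["VB", "PRP", "VB", "NN"], "response")
print(model.predict([test])[0])
```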