• Title/Summary/Keyword: Part of speech


The Relationship Between Voice and the Image Triggered by the Voice: American Speakers and American Listeners (목소리를 듣고 감지하는 인상에 대한 연구: 미국인화자와 미국인청자)

  • Moon, Seung-Jae
    • Phonetics and Speech Sciences
    • /
    • v.1 no.2
    • /
    • pp.111-118
    • /
    • 2009
  • The present study investigates the relationship between voices and the physical images triggered by those voices. It is the final part of a four-part series, and the results reported here are limited to American speakers and American listeners. Combined with the results from previous studies (Moon, 2000; Moon, 2002; Tak, 2005), the results suggest that (1) there is a very strong, much higher than chance-level relationship between voices and the pictures chosen for the voices by the perception experiment subjects; (2) the more physical characteristics that are given, the better the chance of correctly matching voices with pictures; and (3) culture (in the present case, the language environment) seems to play a role in conjuring up mental images from voices.

  • PDF

Korean Morphological Analysis and Part-Of-Speech Tagging with LSTM-CRF based on BERT (BERT기반 LSTM-CRF 모델을 이용한 한국어 형태소 분석 및 품사 태깅)

  • Park, Cheoneum;Lee, Changki;Kim, Hyunki
    • Annual Conference on Human and Language Technology
    • /
    • 2019.10a
    • /
    • pp.34-36
    • /
    • 2019
  • Previous deep-learning approaches to morphological analysis and Part-Of-Speech (POS) tagging have explored a variety of models, such as combining a feed-forward neural network with a CRF or using sequence-to-sequence models. In this paper, we propose a syllable-level LSTM-CRF model based on BERT, which has recently yielded large performance gains across natural language processing tasks, for Korean morphological analysis and POS tagging. BERT is a language model pre-trained with a bidirectional transformer encoder; here we use KorBERT, which was pre-trained at the eojeol (word-phrase) level on a large Korean corpus. Experimental results show that the proposed model outperforms previous Korean morphological analysis and POS tagging studies, achieving an F1 of 98.74% on the Sejong corpus.

  • PDF
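The decoding step an LSTM-CRF tagger performs at inference time can be illustrated with a plain Viterbi search over per-token emission scores and tag-transition scores. This is a minimal sketch of the general technique, not code from the paper; the score values and tag names are made up for illustration.

```python
def viterbi_decode(emissions, transitions, tags):
    """Find the best tag sequence given per-token emission scores
    and tag-to-tag transition scores (the CRF decoding step)."""
    # emissions: list of {tag: score} dicts, one per token
    # transitions: {(prev_tag, tag): score}
    best = {t: emissions[0][t] for t in tags}   # path scores after token 0
    back = []                                   # backpointers per step
    for em in emissions[1:]:
        nxt, ptr = {}, {}
        for t in tags:
            # choose the previous tag that maximizes the path score
            p = max(tags, key=lambda s: best[s] + transitions[(s, t)])
            nxt[t] = best[p] + transitions[(p, t)] + em[t]
            ptr[t] = p
        best, back = nxt, back + [ptr]
    # backtrace from the best final tag
    last = max(tags, key=lambda t: best[t])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In the full model, the emission scores would come from the BERT + LSTM layers rather than being hand-set.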

Crossword Game Using Speech Technology (음성기술을 이용한 십자말 게임)

  • Yu, Il-Soo;Kim, Dong-Ju;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.213-218
    • /
    • 2003
  • In this paper, we implement a crossword game operated by speech. The CAA (Cross Array Algorithm) produces the crossword array randomly and automatically using a domain dictionary. For producing the crossword array, we constructed seven domain dictionaries. The crossword game can be operated by mouse and keyboard, and also by speech; for the speech user interface, we use a speech recognizer and a speech synthesizer, which provide a more comfortable interface to the user. The efficiency of the CAA is evaluated by measuring the processing time for producing the crossword array and the generation ratio of the crossword array. As the results of the evaluation, the processing time is about 10 ms and the generation ratio of the crossword array is about 50%. Also, the recognition rates were 95.5%, 97.6%, and 96.2% for window sizes of $7{\times}7$, $9{\times}9$, and $11{\times}11$, respectively.
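A random crossword-array generator of the kind the abstract describes can be sketched as a greedy placer that crosses each new word on a shared letter. This is an illustrative reconstruction, not the paper's CAA; the grid size, word list, and the rule that a placement must cross an existing word are assumptions.

```python
import random

def can_place(grid, word, r, c, dr, dc):
    """Check bounds and letter conflicts for one placement."""
    n = len(grid)
    if not (0 <= r + dr * (len(word) - 1) < n and 0 <= c + dc * (len(word) - 1) < n):
        return False
    for i, ch in enumerate(word):
        cell = grid[r + dr * i][c + dc * i]
        if cell is not None and cell != ch:   # conflicts with an existing letter
            return False
    return True

def place(grid, word, r, c, dr, dc):
    for i, ch in enumerate(word):
        grid[r + dr * i][c + dc * i] = ch

def build_array(words, n, seed=0):
    """Greedy crossword-array builder: place the first word, then cross each
    later word on a shared letter; returns the grid, or None on failure."""
    rng = random.Random(seed)
    grid = [[None] * n for _ in range(n)]
    place(grid, words[0], 0, 0, 0, 1)         # first word across row 0
    for word in words[1:]:
        spots = [(r, c, dr, dc)
                 for r in range(n) for c in range(n)
                 for dr, dc in ((0, 1), (1, 0))
                 if can_place(grid, word, r, c, dr, dc)
                 and any(grid[r + dr * i][c + dc * i] == ch   # must cross
                         for i, ch in enumerate(word))]
        if not spots:
            return None                        # generation failed for this try
        place(grid, word, *rng.choice(spots))
    return grid
```

Retrying failed generations with fresh random choices would account for a generation ratio below 100%, consistent with the roughly 50% the abstract reports.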

A study on the Stochastic Model for Sentence Speech Understanding (문장음성 이해를 위한 확률모델에 관한 연구)

  • Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.10B no.7
    • /
    • pp.829-836
    • /
    • 2003
  • In this paper, we propose a stochastic model for sentence speech understanding using a dictionary and a thesaurus. The proposed model extracts words from an input speech or text sentence. Each extracted word is compared against the selected category of the dictionary database, and the stochastic model computes a probability value from the comparison results. At the same time, upper-level (hypernym) dictionary information is retrieved, the extracted words are compared against it, and the stochastic model computes a second probability value. We add the probability values from the dictionary search and the upper-level dictionary search, compare the sum with a threshold probability, and measure the sentence understanding rate. We evaluated the performance of the sentence speech understanding system with a twenty-questions game. In the experiments, we obtained a sentence speech understanding accuracy of 79.8%, with the probability ($\alpha$) of upper-level words set to 0.9 and the threshold probability ($\beta$) set to 0.38.
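The scoring scheme the abstract describes, a direct dictionary match plus an $\alpha$-weighted hypernym match, compared against a threshold $\beta$, can be sketched as follows. This is an illustrative guess at the computation, not the paper's implementation; the normalization by sentence length and the toy dictionaries are assumptions, while the $\alpha$ = 0.9 and $\beta$ = 0.38 defaults follow the reported settings.

```python
def understand(sentence_words, category_dict, hypernyms, alpha=0.9, beta=0.38):
    """Score each category by direct word matches plus alpha-weighted
    hypernym matches; accept the best category if it clears threshold beta."""
    scores = {}
    for cat, vocab in category_dict.items():
        direct = sum(1 for w in sentence_words if w in vocab)
        upper = sum(1 for w in sentence_words
                    if any(h in vocab for h in hypernyms.get(w, ())))
        scores[cat] = (direct + alpha * upper) / len(sentence_words)
    best = max(scores, key=scores.get)
    return best if scores[best] >= beta else None   # None: not understood
```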

Domain Adaptation Method for LHMM-based English Part-of-Speech Tagger (LHMM기반 영어 형태소 품사 태거의 도메인 적응 방법)

  • Kwon, Oh-Woog;Kim, Young-Gil
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.10
    • /
    • pp.1000-1004
    • /
    • 2010
  • A large number of current language processing systems use a part-of-speech tagger for preprocessing, and most require a tagger with the highest possible accuracy. In particular, exploiting domain-specific characteristics has become a hot issue in the machine translation community as a way to improve translation quality. This paper addresses a method for customizing an HMM- or LHMM-based English tagger from the general domain to a specific domain. The proposed method semi-automatically customizes the output and transition probabilities of the HMM or LHMM using a domain-specific raw corpus. In experiments customizing to the patent domain, our LHMM tagger adapted by the proposed method shows a word tagging accuracy of 98.87% and a sentence tagging accuracy of 78.5%. Compared with the general-domain tagger, our tagger improved word tagging accuracy by 2.24% (ERR: 66.4%) and sentence tagging accuracy by 41.0% (ERR: 65.6%).
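One common way to adapt HMM probabilities to a new domain is to interpolate general-domain estimates with estimates derived from the domain corpus. The sketch below shows that interpolation idea for emission probabilities P(word | tag); it is a generic illustration, not the paper's method, and the interpolation weight `lam` is an assumed value.

```python
def adapt_emissions(general, domain, lam=0.7):
    """Interpolate general-domain and domain-specific emission
    probabilities P(word | tag); lam weights the domain estimates."""
    tags = set(general) | set(domain)
    adapted = {}
    for tag in tags:
        words = set(general.get(tag, {})) | set(domain.get(tag, {}))
        adapted[tag] = {
            w: lam * domain.get(tag, {}).get(w, 0.0)
               + (1 - lam) * general.get(tag, {}).get(w, 0.0)
            for w in words
        }
    return adapted
```

In a semi-automatic setup like the paper's, the domain estimates would come from the general tagger's own output on the raw domain corpus.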

A Corpus-based Hybrid Model for Morphological Analysis and Part-of-Speech Tagging (형태소 분석 및 품사 부착을 위한 말뭉치 기반 혼합 모형)

  • Lee, Seung-Wook;Lee, Do-Gil;Rim, Hae-Chang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.7
    • /
    • pp.11-18
    • /
    • 2008
  • A Korean morphological analyzer generally generates multiple candidates and then selects the most likely one among them. As the number of candidates increases, the chance that the correct analysis is included in the candidate list also grows; this, however, increases ambiguity and degrades performance. In this paper, we propose a new rule-based model that produces one best analysis. The analysis rules are automatically extracted from a large Part-of-Speech tagged corpus, so the proposed model requires no manual rule construction and has shown a high success rate of analysis. Furthermore, the proposed model can reduce the ambiguities and computational complexity of the candidate selection phase, because it produces a single analysis whenever it can successfully analyze the given word. By combining it with a conventional probability-based model, analysis performance can also be improved in the cases where the rule-based model fails.

  • PDF
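The hybrid control flow the abstract describes, rules first, probabilistic fallback, can be sketched in a few lines. This is an illustrative outline only; the dictionary-of-rules representation and the `prob_model` callable are assumptions, and the example analyses are made up.

```python
def analyze(word, rules, prob_model):
    """Hybrid analysis: return the single rule-derived analysis when a
    corpus-extracted rule matches; otherwise fall back to the most
    probable candidate from a conventional probabilistic model."""
    if word in rules:                 # rule fires: one best analysis, no ambiguity
        return rules[word]
    candidates = prob_model(word)     # {analysis: probability}
    return max(candidates, key=candidates.get)
```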

Performance Comparison Analysis on Named Entity Recognition system with Bi-LSTM based Multi-task Learning (다중작업학습 기법을 적용한 Bi-LSTM 개체명 인식 시스템 성능 비교 분석)

  • Kim, GyeongMin;Han, Seunggnyu;Oh, Dongsuk;Lim, HeuiSeok
    • Journal of Digital Convergence
    • /
    • v.17 no.12
    • /
    • pp.243-248
    • /
    • 2019
  • Multi-Task Learning (MTL) is a training method in which a single neural network is trained on multiple tasks that influence each other. In this paper, we compare the performance of an MTL Named Entity Recognition (NER) model trained on a Korean traditional culture corpus with that of a single-task NER model. During training, the Bi-LSTM layers for Part-of-Speech tagging (POS tagging) and NER are propagated jointly to obtain a joint loss. As a result, the MTL-based Bi-LSTM model shows a 1.1%~4.6% performance improvement over single Bi-LSTM models.
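The joint loss used in this kind of multi-task setup is typically a weighted sum of the per-task losses. The sketch below shows that combination for one token's POS and NER predictions; the equal task weights are an assumption (the abstract does not report them), and the probabilities are toy values.

```python
import math

def cross_entropy(probs, gold):
    """Negative log-likelihood of the gold label under predicted probs."""
    return -math.log(probs[gold])

def joint_loss(pos_probs, pos_gold, ner_probs, ner_gold, w_pos=0.5, w_ner=0.5):
    """Multi-task objective: weighted sum of the POS-tagging loss and the
    NER loss, backpropagated through the shared layers together."""
    return (w_pos * cross_entropy(pos_probs, pos_gold)
            + w_ner * cross_entropy(ner_probs, ner_gold))
```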

A Study on the reestablishment of English Part of Speech and Sentence Structural Elements (영어 품사 및 문장요소 용어 재확립에 대한 고찰)

  • Yi, Jae-Il
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.2
    • /
    • pp.43-48
    • /
    • 2019
  • This study examines the incorrect usage of grammatical terms that is quite common in the English grammar teaching process and suggests ways to revise and correct the errors. Parts of speech and sentence elements are indispensable to any grammatical explanation. These grammatical terms are a core part of grammar, yet they are frequently used without proper verification and interchangeably, with no distinction. Because the terms refer to different things, using them interchangeably causes confusion in building grammatical understanding. As a result, there is a crucial need to discuss and improve the definitions of the grammatical terms used in the English teaching process, for the sake of effective English education.

New Text Steganography Technique Based on Part-of-Speech Tagging and Format-Preserving Encryption

  • Mohammed Abdul Majeed;Rossilawati Sulaiman;Zarina Shukur
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.1
    • /
    • pp.170-191
    • /
    • 2024
  • The transmission of confidential data using cover media is called steganography. The three requirements of any effective steganography system are high embedding capacity, security, and imperceptibility. The text file's structure, which makes syntax and grammar more visually obvious than in other media, contributes to its poor imperceptibility. Text steganography is regarded as the most challenging carrier for hiding secret data because of its insufficient redundant data compared to other digital objects. Unicode characters, especially non-printing or invisible ones, are employed for hiding data by mapping a specific number of secret data bits to each character and inserting the character into spaces in the cover text. These characters are known to offer only limited space for embedding secret data. Current studies that used Unicode characters in text steganography focused on increasing the data hiding capacity despite the insufficient redundant data in a text file. A sequential embedding pattern is often selected, filling all available positions in the cover text; this embedding pattern negatively affects the text steganography system's imperceptibility and security. Thus, this study attempts to solve these limitations using the Part-of-Speech (POS) tagging technique combined with randomization in data hiding. Combining these two techniques allows inserting the Unicode characters in randomized patterns at specific positions in the cover text, increasing data hiding capacity with minimal effects on imperceptibility and security. Format-preserving encryption (FPE) is also used to encrypt the secret message without changing its size before the embedding process. Compared with existing techniques, the results demonstrate that the proposed one fulfils the cover file's capacity, imperceptibility, and security requirements.
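The randomized-position embedding idea can be sketched with zero-width Unicode characters and a key-seeded random choice of carrier positions. This is a simplified illustration only: it omits the paper's POS-tagging position filter and FPE step, selects among space characters instead, and the bit-to-character mapping is an assumption.

```python
import random

ZW = {"0": "\u200b", "1": "\u200c"}           # zero-width chars, one per bit value
ZW_INV = {v: k for k, v in ZW.items()}

def embed(cover, bits, key):
    """Hide one bit per chosen space: a key-seeded RNG picks which spaces
    carry a zero-width character, instead of filling spaces sequentially."""
    spaces = [i for i, ch in enumerate(cover) if ch == " "]
    positions = sorted(random.Random(key).sample(spaces, len(bits)))
    out, k = [], 0
    for i, ch in enumerate(cover):
        out.append(ch)
        if k < len(bits) and i == positions[k]:
            out.append(ZW[bits[k]])           # invisible in rendered text
            k += 1
    return "".join(out)

def extract(stego):
    """Recover the hidden bits by scanning for zero-width characters."""
    return "".join(ZW_INV[ch] for ch in stego if ch in ZW_INV)
```

Because the zero-width characters render as nothing, the stego text looks identical to the cover text, while only someone with the key can predict which positions carry data.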

Voice Features Extraction of Lung Diseases Based on the Analysis of Speech Rates and Intensity (발화속도 및 강도 분석에 기반한 폐질환의 음성적 특징 추출)

  • Kim, Bong-Hyun;Cho, Dong-Uk
    • The KIPS Transactions:PartB
    • /
    • v.16B no.6
    • /
    • pp.471-478
    • /
    • 2009
  • Lung diseases, classified among the six major incurable diseases of modern times, are caused mostly by smoking and air pollution. These causes damage lung function and impair the exchange of carbon dioxide and oxygen in the alveoli, and as life expectancy increases, interest in such high-risk diseases is growing. In this paper, we propose a diagnosis method for lung diseases that applies voice analysis parameters to extract voice features. First, we sampled voice data from patients and from normal persons of the same age and sex, forming two sample groups. We then analyzed the collected voice data using various voice analysis parameters. Among the analyzed parameters, speech rate and intensity showed a significant difference between the patient and normal groups: the patient group showed slower speech rates and greater intensity than the normal group. On this basis, we propose a voice feature extraction method for lung diseases.
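The two parameters the abstract highlights have standard definitions that can be sketched directly: speech rate as syllables per second, and intensity as the RMS level of the waveform in decibels. This is a generic illustration, not the paper's analysis pipeline; the reference level and the toy samples are assumptions.

```python
import math

def speech_rate(n_syllables, duration_s):
    """Speech rate as syllables uttered per second."""
    return n_syllables / duration_s

def rms_intensity_db(samples, ref=1.0):
    """Intensity of a waveform as the RMS amplitude in dB relative to ref."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / ref)
```

Under the abstract's finding, patient recordings would yield lower `speech_rate` values and higher `rms_intensity_db` values than matched controls.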