• Title/Abstract/Keyword: Word Input

Search results: 225 (processing time: 0.024 sec)

음운 현상과 연속 발화에서의 단어 인지 - 종성중화 작용을 중심으로 (Phonological Process and Word Recognition in Continuous Speech: Evidence from Coda-neutralization)

  • 김선미;남기춘
    • 말소리와 음성과학 / Vol.2 No.2 / pp.17-25 / 2010
  • This study explores whether Koreans exploit their native coda-neutralization process when recognizing words in Korean continuous speech. According to Korean phonological rules, the coda-neutralization process must apply before the liaison process whenever the latter occurs between words, so liaison consonants are coda-neutralized ones such as /b/, /d/, or /g/, rather than non-neutralized ones like /p/, /t/, /k/, /ʧ/, /ʤ/, or /s/. Consequently, if Korean listeners use their native coda-neutralization rules when processing speech input, word recognition should be hampered when non-neutralized consonants precede vowel-initial targets. Word-spotting and word-monitoring tasks were conducted in Experiments 1 and 2, respectively. In both experiments, listeners recognized vowel-initial target words faster and more accurately when the targets were preceded by coda-neutralized consonants than when preceded by non-neutralized ones. The results show that Korean listeners exploit the coda-neutralization process when processing their native spoken language.
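The rule the abstract relies on can be sketched as a lookup table. This is a rough illustration following the two consonant sets listed in the abstract, with romanized symbols standing in for the IPA; it is not the paper's materials or analysis.

```python
# Toy sketch: map the non-neutralized coda consonants listed in the
# abstract (/p, t, k, ch, j, s/) onto the neutralized set (/b, d, g/).
# The symbols are romanized stand-ins, not the paper's transcriptions.
CODA_NEUTRALIZATION = {
    "p": "b",
    "t": "d", "s": "d", "ch": "d", "j": "d",
    "k": "g",
}

def neutralize_coda(coda: str) -> str:
    """Return the neutralized form of a syllable-final consonant."""
    return CODA_NEUTRALIZATION.get(coda, coda)
```

Under this table a liaison consonant such as /s/ surfaces as the neutralized /d/, which is the contrast the word-spotting stimuli manipulate.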


4세 유아의 수세기 기술과 어머니의 수 단어 사용: 유아 수 단어 사용의 매개효과 (Four-Year-Old Children's Counting Skills and Their Mothers' Use of Number Words: The Mediating Role of Children's Number Word Use)

  • 박지현;박유정;이유진;백선정;최수경
    • 한국보육지원학회지 / Vol.19 No.6 / pp.79-95 / 2023
  • Objective: This study examines the relationships among four-year-olds' counting skills, their use of number words, and their mothers' use of number words during mother-child free play. Specifically, we assess whether children's use of number words mediates the relationship between their counting skills and their mothers' use of number words during play. Methods: Forty-two 4-year-old children and their mothers were asked to play freely with a given set of toys at their home for 10 minutes. Children also completed a counting skill test. Frequencies of number word use were calculated for mothers and children from transcriptions of the free play. Results: Children's counting skills, the frequency of their number word use, and their mothers' frequency of number word use were positively correlated with each other. Additionally, the frequency of children's number-word use completely mediated the relationship between their counting skills and their mothers' frequency of number-word use. Conclusion/Implications: The results suggest that children's use of number language may play a crucial role in the provision of number-related language input by parents, based on their children's math skills. Practical implications of the findings are discussed.
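The complete-mediation result above can be illustrated with three ordinary least-squares regressions. This is a hypothetical sketch with made-up numbers, not the paper's data or analysis software; it uses the exact OLS identity that the total effect c equals the direct effect c' plus the indirect effect a·b.

```python
# Hypothetical mediation sketch (ours, not the paper's): total effect c,
# path a (x -> mediator m), and direct effect c' plus path b from the
# two-predictor regression of y on (x, m). In OLS, c = c' + a*b exactly.

def mean(v):
    return sum(v) / len(v)

def slope(x, y):
    """OLS slope of y on x (with intercept), via centered moments."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_predictor_slopes(x, m, y):
    """OLS slopes of y on (x, m) with intercept: solve the 2x2 normal equations."""
    cx = [a - mean(x) for a in x]
    cm = [a - mean(m) for a in m]
    cy = [a - mean(y) for a in y]
    sxx = sum(a * a for a in cx)
    smm = sum(a * a for a in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy))
    smy = sum(a * b for a, b in zip(cm, cy))
    det = sxx * smm - sxm * sxm
    c_prime = (sxy * smm - smy * sxm) / det  # direct effect of x on y
    b = (smy * sxx - sxy * sxm) / det        # effect of the mediator m
    return c_prime, b

# Toy scores: counting skill (x), child's number words (m), mother's number
# words (y). y mirrors m here, so mediation comes out complete by design.
x = [1, 2, 3, 4, 5, 6, 7, 8]
m = [2, 3, 3, 5, 6, 6, 8, 9]
y = [2, 3, 3, 5, 6, 6, 8, 9]

c = slope(x, y)                 # total effect of x on y
a = slope(x, m)                 # effect of x on the mediator
c_prime, b = two_predictor_slopes(x, m, y)
indirect = a * b                # mediated (indirect) effect
```

With this toy data the direct effect c' is exactly zero once the mediator is included, which is what "complete mediation" means in the abstract.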

Adjusting Weights of Single-word and Multi-word Terms for Keyphrase Extraction from Article Text

  • Kang, In-Su
    • 한국컴퓨터정보학회논문지 / Vol.26 No.8 / pp.47-54 / 2021
  • Keyphrase extraction is the task of automatically extracting topical terms that represent the content of a document. In unsupervised keyphrase extraction, words or phrases are extracted from the document text as keyphrase candidates, and the final keyphrases are selected according to the importance scores assigned to those candidates. This paper proposes a method that, when computing candidate importance in unsupervised keyphrase extraction, adjusts the importance of single-word candidates relative to multi-word (phrase) candidates. To this end, the type-token ratio of the candidate set and the information content of high-frequency representative terms are collected from the target document text, separately for the single-word and multi-word types, and used to adjust the importance scores. In the experiments, performance was evaluated on four different keyphrase-extraction benchmark datasets built from full-text English articles, and the proposed adjustment method outperformed the baseline and comparison methods on three of the four datasets.
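One of the signals the abstract mentions, the type-token ratio computed separately for single-word and multi-word candidates, can be sketched as follows. The function names and the simple multiplicative weighting are ours for illustration, not the paper's actual formula.

```python
# Hypothetical sketch (ours): compute the type-token ratio (TTR) separately
# for single-word and multi-word keyphrase candidates, then scale each
# candidate's raw importance score by the TTR of its type.

def type_token_ratio(tokens):
    """Distinct types divided by total tokens (0.0 for an empty list)."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def adjust_scores(candidates):
    """candidates: list of (term, raw_score); a term containing a space
    counts as multi-word. Returns term -> adjusted score."""
    singles = [t for t, _ in candidates if " " not in t]
    multis = [t for t, _ in candidates if " " in t]
    ttr = {
        "single": type_token_ratio(singles),
        "multi": type_token_ratio(multis),
    }
    adjusted = {}
    for term, score in candidates:
        kind = "multi" if " " in term else "single"
        adjusted[term] = score * ttr[kind]
    return adjusted
```

In this toy weighting, a candidate type with many repeated tokens (low TTR) has all of its candidates down-weighted relative to the other type.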

한국어 어휘 의미망(alias. KorLex)의 지식 그래프 임베딩을 이용한 문맥의존 철자오류 교정 기법의 성능 향상 (Performance Improvement of Context-Sensitive Spelling Error Correction Techniques using Knowledge Graph Embedding of Korean WordNet (alias. KorLex))

  • 이정훈;조상현;권혁철
    • 한국멀티미디어학회논문지 / Vol.25 No.3 / pp.493-501 / 2022
  • This paper studies context-sensitive spelling error correction, using the Korean WordNet (KorLex)[1], which defines the relationships between words as a graph, to improve the performance of a correction technique[2] based on embedded word-vector information. The Korean WordNet was constructed by translating WordNet[3], developed at Princeton University, and extending it for Korean. To learn a semantic network in graph form, or to use the learned vector information, the graph must be transformed into vector form by embedding learning; for this, a limited number of nodes are linearized into sentence-like sequences before being given as training input. DeepWalk[4] is one learning technique that uses this strategy, and it is used here to learn the graph between words in the Korean WordNet. The graph-embedding information is concatenated with the word-vector information of a trained language model, and the final correction word is determined by the cosine distance between vectors. To test whether the graph-embedding information improves the performance of context-sensitive spelling error correction, confused word pairs were constructed and tested from the perspective of word-sense disambiguation (WSD). In the experiments, the average correction performance over all confused word pairs improved by 2.24% compared to the baseline.
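The final selection step described above can be sketched in a few lines. All vectors and candidate names here are made up; the real system concatenates a language-model word vector with a KorLex DeepWalk embedding and compares against the context by cosine distance.

```python
# Minimal sketch (ours) of the decision step: concatenate each candidate's
# word vector and graph-embedding vector, then pick the candidate whose
# concatenated vector has the highest cosine similarity to the context.
import math

def concat(word_vec, graph_vec):
    return word_vec + graph_vec  # list concatenation

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def best_correction(context_vec, candidates):
    """candidates: dict word -> (word_vec, graph_vec); return closest word."""
    return max(
        candidates,
        key=lambda w: cosine_similarity(context_vec, concat(*candidates[w])),
    )
```

Maximizing cosine similarity is equivalent to minimizing the cosine distance the abstract mentions.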

Ternary Decomposition and Dictionary Extension for Khmer Word Segmentation

  • Sung, Thaileang;Hwang, Insoo
    • Journal of Information Technology Applications and Management / Vol.23 No.2 / pp.11-28 / 2016
  • In this paper, we propose a dictionary extension and a ternary decomposition technique to improve the effectiveness of Khmer word segmentation. Most word segmentation approaches depend on a dictionary; however, the dictionary in use is not fully reliable and cannot cover all the words of the Khmer language, which leads to unknown (out-of-vocabulary) words. Our approach extends the original dictionary with new words to make it more reliable, and uses ternary decomposition for the segmentation process. We also use the invisible space of Khmer Unicode (U+200B) to segment our training corpus. With our segmentation algorithm, based on ternary decomposition and the invisible space, we can extract new words from the training text and add them to the dictionary. We then used the extended wordlist and a segmentation algorithm that does not rely on the invisible space to test unannotated text. Our results markedly outperformed other approaches, achieving precision, recall, and F-measure of 88.8%, 91.8%, and 90.6%, respectively.
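The two-stage pipeline above, mining new words from a zero-width-space-annotated corpus and then segmenting unannotated text, can be sketched as follows. Latin placeholders stand in for Khmer script, and the greedy longest-match pass is our simplification of the paper's ternary-decomposition algorithm.

```python
# Toy sketch (ours): extract words delimited by U+200B (ZERO WIDTH SPACE)
# from an annotated corpus to extend the dictionary, then segment unspaced
# text with a greedy left-to-right longest-match pass.
ZWSP = "\u200b"

def extend_dictionary(dictionary, annotated_corpus):
    """Add every ZWSP-delimited token of the corpus to the dictionary."""
    return set(dictionary) | {w for w in annotated_corpus.split(ZWSP) if w}

def longest_match_segment(text, dictionary):
    """Greedy longest-match segmentation; unknown characters pass through."""
    words, i = [], 0
    maxlen = max(map(len, dictionary), default=1)
    while i < len(text):
        for j in range(min(len(text), i + maxlen), i, -1):
            if text[i:j] in dictionary:
                words.append(text[i:j])
                i = j
                break
        else:  # no dictionary word starts here: emit one character
            words.append(text[i])
            i += 1
    return words
```

A word seen only in the annotated corpus ("world" below) becomes segmentable in unannotated text once the dictionary is extended.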

A Computational Model of Language Learning Driven by Training Inputs

  • 이은석;이지훈;장병탁
    • 한국인지과학회:학술대회논문집 / 한국인지과학회 2010년도 춘계학술대회 / pp.60-65 / 2010
  • Language learning takes place in the linguistic environment around the learner, so variation in the training input to which the learner is exposed has been linked to language learning outcomes. We explore how linguistic experience can cause differences in the learning of linguistic structural features, investigated with a probabilistic graphical model. We gradually vary the training input, composed of natural linguistic data from animation videos for children, from holistic (one-word expressions) to compositional (two- to six-word expressions). The recognition and generation of sentences is a probabilistic constraint-satisfaction process based on massively parallel DNA chemistry. Random sentence-generation tasks succeed when networks begin with limited sentence lengths and vocabulary sizes and gradually expand to larger ones, mirroring children's cognitive development during learning. This model supports the suggestion that varying early linguistic environments in developmental steps may help facilitate language acquisition.


클라우드 컴퓨팅에서 Hadoop 애플리케이션 특성에 따른 성능 분석 (A Performance Analysis Based on Hadoop Application's Characteristics in Cloud Computing)

  • 금태훈;이원주;전창호
    • 한국컴퓨터정보학회논문지 / Vol.15 No.5 / pp.49-56 / 2010
  • In this paper, we build a Hadoop-based cluster for cloud computing and evaluate its performance under different application characteristics by running the RandomTextWriter, WordCount, and PI applications. RandomTextWriter generates random words up to a given volume and stores them in HDFS; WordCount reads an input file and counts word frequencies block by block; and PI derives the value of pi using the Monte Carlo method. While running these applications, we measure execution time as the data block size and the number of data replicas increase. The simulations show that RandomTextWriter's execution time grows in proportion to the number of data replicas, whereas WordCount and PI are largely unaffected by it. WordCount achieved its best execution times with block sizes of 64-256 MB. These results show that a scheduling policy that takes such application characteristics into account could shorten execution times and thus improve the performance of a cloud computing system.
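The WordCount workload measured above is the stock Hadoop example; this single-process sketch mirrors its map-then-reduce structure to show what the block-size experiments are timing. It is an illustration, not the Hadoop implementation.

```python
# Single-process sketch (ours) of the WordCount job: a map phase emits
# (word, 1) pairs per input block, and a reduce phase sums the counts.
from collections import defaultdict

def map_phase(block):
    """Emit (word, 1) for every whitespace-separated word in one block."""
    return [(w, 1) for w in block.split()]

def reduce_phase(pairs):
    """Sum the counts emitted for each word across all blocks."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

blocks = ["to be or not", "to be"]  # stand-ins for HDFS blocks
pairs = [kv for b in blocks for kv in map_phase(b)]
word_counts = reduce_phase(pairs)
```

In the real cluster, each map task processes one HDFS block, which is why block size (64-256 MB in the results) directly controls task granularity.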

Word2Vec과 앙상블 합성곱 신경망을 활용한 영화추천 시스템의 정확도 개선에 관한 연구 (A Study on the Accuracy Improvement of Movie Recommender System Using Word2Vec and Ensemble Convolutional Neural Networks)

  • 강부식
    • 디지털융복합연구 / Vol.17 No.1 / pp.123-130 / 2019
  • Collaborative filtering is one of the most widely used web recommendation techniques, and many studies have proposed ways to improve its accuracy. This study proposes a movie recommendation method using Word2Vec and an ensemble of convolutional neural networks. First, user sentences and movie sentences are constructed from user, movie, and rating information. The user and movie sentences are fed into Word2Vec to obtain user vectors and movie vectors. The user vectors are input to a user CNN and the movie vectors to a movie CNN; the two CNNs are joined by a fully connected neural network whose output layer predicts the user's rating for the movie. Experiments showed that the proposed method improves accuracy over traditional collaborative filtering and over a related method that combines Word2Vec with a deep neural network.
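The "user sentence" and "movie sentence" construction can be sketched from rating triples. The field names and exact sentence format here are our guesses for illustration; the paper does not spell them out, and the resulting sentences would then be passed to a Word2Vec trainer such as gensim's.

```python
# Hypothetical sketch (ours): build Word2Vec training "sentences" from
# (user, movie, rating) triples. Each user's sentence lists the movies
# they rated; each movie's sentence lists the users who rated it, so one
# embedding space covers both users and movies.
from collections import defaultdict

def build_sentences(ratings):
    """ratings: list of (user_id, movie_id, score) triples."""
    user_sent, movie_sent = defaultdict(list), defaultdict(list)
    for user, movie, _score in ratings:
        user_sent[user].append(movie)
        movie_sent[movie].append(user)
    return list(user_sent.values()) + list(movie_sent.values())

ratings = [("u1", "m1", 5), ("u1", "m2", 3), ("u2", "m1", 4)]
sentences = build_sentences(ratings)
```

Users who rate the same movies end up in each other's contexts, which is what lets Word2Vec place similar users (and similar movies) near each other.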

계층적 포인터 네트워크를 이용한 상호참조해결 (Coreference Resolution using Hierarchical Pointer Networks)

  • 박천음;이창기
    • 정보과학회 컴퓨팅의 실제 논문지 / Vol.23 No.9 / pp.542-549 / 2017
  • Sequence-to-sequence models and the closely related pointer networks degrade when the input consists of multiple sentences or when the input sentences grow long. To address this, this paper proposes a hierarchical pointer network that encodes a multi-sentence input at both the word level and the sentence level and uses both levels of information during decoding, and applies it to coreference resolution over all mentions. In experiments, the proposed model achieved 87.07% precision, 65.39% recall, and a CoNLL F1 of 74.61%, a 24.01% improvement over an existing rule-based model.
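The pointing step shared by all pointer networks can be illustrated in isolation. The vectors below are made up, and the paper's model adds the hierarchical word- and sentence-level encoders on top of this basic mechanism.

```python
# Bare-bones illustration (ours) of the pointer mechanism: score every
# encoder position against the decoder state and "point" to the argmax
# position instead of emitting a word from a fixed output vocabulary.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def point(decoder_state, encoder_states):
    """Return the input position with the highest attention score."""
    scores = [dot(decoder_state, h) for h in encoder_states]
    return scores.index(max(scores))

encoder_states = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.4]]  # one per input word
decoder_state = [1.0, 0.0]
```

For coreference, pointing at an input position amounts to selecting the antecedent mention for the current mention.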

SOC Verification Based on WGL

  • Du, Zhen-Jun;Li, Min
    • 한국멀티미디어학회논문지 / Vol.9 No.12 / pp.1607-1616 / 2006
  • The growing market for multimedia and digital signal processing requires significant data-path portions in SoCs, but common verification models are not suitable for them. A novel model, WGL (Weighted Generalized List), is proposed. It is based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to effect node sharing and canonicity; timing parameters and operations on them are also considered. Examples show that word-level WGL is the only model that linearly represents common word-level functions, and that bit-level WGL is especially suitable for arithmetic-intensive circuits; the model is proved to be a uniform and efficient model for both bit-level and word-level functions. Based on the WGL model, a backward-construction logic-verification approach is presented that reduces the time and space complexity of verifying multipliers to polynomial complexity (time complexity less than $O(n^{3.6})$ and space complexity less than $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is presented for complex high-level verification; it combines order computation and coefficient solving and adopts an efficient backward approach. Its construction complexity is much lower than that of existing methods, e.g., the construction time for multipliers grows at a power of less than 1.6 in the input word size without increasing the maximal space required. The WGL model and the verification methods based on WGL show theoretical and practical significance for SoC design.
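The core idea, checking that a bit-level circuit implements a word-level polynomial, can be shown with a toy analogue. This is our example, not WGL: a bit-level ripple-carry adder is compared exhaustively against the word-level specification x + y at a small width (real word-level models exist precisely to avoid such exhaustive checks).

```python
# Toy analogue (ours) of word-level verification: confirm that a bit-level
# ripple-carry adder implements the word-level polynomial x + y by
# exhaustive comparison at a 4-bit width.
def ripple_carry_add(x_bits, y_bits):
    """Add two little-endian bit vectors; return sum bits plus carry-out."""
    out, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        out.append(a ^ b ^ carry)                  # sum bit
        carry = (a & b) | (carry & (a ^ b))        # carry-out
    return out + [carry]

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

WIDTH = 4
ok = all(
    from_bits(ripple_carry_add(to_bits(x, WIDTH), to_bits(y, WIDTH))) == x + y
    for x in range(2 ** WIDTH)
    for y in range(2 ** WIDTH)
)
```
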
