• Title/Summary/Keyword: words

9,106 search results

An Application of Sensory Engineering's Techniques for Customer Satisfaction: Structuring Sensory Words for Automobile Development (고객만족을 위한 감성공학기법의 응용 -자동차 개발을 위한 감성 어휘 구조화-)

  • 이성웅;양원섭;김정식;김영선
    • Journal of Korean Society for Quality Management
    • /
    • v.25 no.2
    • /
    • pp.154-168
    • /
    • 1997
  • This paper considers an application of one of the sensory engineering techniques, extraction and categorization of sensory words, to the product domain of cars. Forty-five (45) sensory words are extracted in three steps. Two groups, distinguished by whether the members own a car, each consisting of one hundred persons randomly selected from people in their twenties and thirties, are asked to answer questionnaires rating the extracted words on a five-grade semantic differential scale. Factor analysis is used to categorize the extracted sensory words and shows that the words can be grouped into four categories.
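
  As a rough illustration of the categorization step, the sketch below factor-analyzes five-grade semantic differential ratings and assigns each of the 45 words to its dominant factor. The rating matrix is a random placeholder, and scikit-learn's FactorAnalysis stands in for whatever factor-analysis procedure the paper actually used.

```python
# A minimal sketch, not the paper's code: factor-analyze semantic
# differential ratings and group words by their dominant factor.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_words = 200, 45        # 2 groups x 100 people, 45 sensory words
ratings = rng.integers(1, 6, size=(n_respondents, n_words)).astype(float)  # 1..5

fa = FactorAnalysis(n_components=4, random_state=0)  # four categories reported
fa.fit(ratings)

loadings = fa.components_               # shape (4, 45): factor x word loadings
category = np.abs(loadings).argmax(axis=0)
print(category)                         # dominant-factor index for each word
```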


Analysis on Review Data of Restaurants in Google Maps through Text Mining: Focusing on Sentiment Analysis

  • Shin, Bee;Ryu, Sohee;Kim, Yongjun;Kim, Dongwhan
    • Journal of Multimedia Information System
    • /
    • v.9 no.1
    • /
    • pp.61-68
    • /
    • 2022
  • The importance of online reviews is growing as more people access goods or places online and make decisions about visiting or purchasing. However, such reviews generally consist of short sentences or mere star ratings, failing to provide a general overview of customer preferences and decision factors. This study explored and broke down restaurant reviews found on Google Maps. After collecting and analyzing 5,427 reviews, we vectorized the importance of words using TF-IDF. We used a random forest machine learning algorithm to calculate positivity and negativity coefficients for the words used in reviews. As a result, we were able to build a dictionary of positive and negative sentiment words using each word's coefficient. We classified the words into four major evaluation categories and derived insights into the sentiment within each criterion. We believe the dictionary of review words and the analysis of the major evaluation categories can help prospective restaurant visitors read between the lines of restaurant reviews found on the Web.
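
  A minimal sketch of the pipeline described above, on toy data: TF-IDF weighting plus a random forest over labeled reviews. The abstract does not specify how signed word coefficients are derived from the forest, so as an assumption the forest's feature importances are signed by each word's mean TF-IDF difference between the positive and negative classes.

```python
# A minimal sketch on toy data; the signed-coefficient step is an
# assumption, not the paper's exact derivation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = ["great food and friendly service", "terrible wait and rude staff",
           "cozy place with a tasty menu", "dirty tables and bland food"]
labels = np.array([1, 0, 1, 0])         # 1 = positive, 0 = negative (toy labels)

vec = TfidfVectorizer()
X = vec.fit_transform(reviews)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

# Sign each word by its mean TF-IDF difference between classes, then scale
# by the forest's importance to get a crude positivity/negativity score.
sign = np.sign(X[labels == 1].mean(axis=0) - X[labels == 0].mean(axis=0)).A1
score = forest.feature_importances_ * sign
lexicon = dict(zip(vec.get_feature_names_out(), score))
print(sorted(lexicon.items(), key=lambda kv: kv[1]))  # most negative first
```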

Improving Abstractive Summarization by Training Masked Out-of-Vocabulary Words

  • Lee, Tae-Seok;Lee, Hyun-Young;Kang, Seung-Shik
    • Journal of Information Processing Systems
    • /
    • v.18 no.3
    • /
    • pp.344-358
    • /
    • 2022
  • Text summarization is the task of producing a shorter version of a long document while accurately preserving the main contents of the original text. Abstractive summarization generates novel words and phrases, using a language generation method based on text transformation and prior-embedded word information. However, newly coined words and out-of-vocabulary words degrade automatic summarization performance because they are not pre-trained in the machine learning process. In this study, we demonstrate an improvement in summarization quality through contextualized BERT embeddings with out-of-vocabulary masking. In addition, by explicitly providing precise pointing and an optional copy instruction along with the BERT embedding, we achieved higher accuracy than the baseline model. The recall-based word-generation metric ROUGE-1 scored 55.11 and the word-order-based ROUGE-L scored 39.65.
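
  A minimal sketch of the out-of-vocabulary masking idea, assuming the Hugging Face transformers API; the model name is a placeholder and the pointer/copy summarizer built on top is omitted. Tokens the vocabulary cannot represent ([UNK]) are swapped for [MASK] so that BERT supplies a contextualized representation instead of a degenerate one.

```python
# A minimal sketch of OOV masking; the model name is a placeholder and the
# pointer/copy summarizer is omitted. Requires: pip install transformers torch
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased")

text = "A newly coined word may not exist in the vocabulary."
ids = tokenizer(text, return_tensors="pt")["input_ids"]

# Replace every [UNK] id with the [MASK] id so BERT produces a
# contextualized guess rather than a degenerate [UNK] embedding.
ids[ids == tokenizer.unk_token_id] = tokenizer.mask_token_id

with torch.no_grad():
    hidden = model(ids).last_hidden_state  # contextual embeddings, OOV masked
print(hidden.shape)
```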

Closure durations of Korean stops at three positions

  • Yungdo Yun
    • Phonetics and Speech Sciences
    • /
    • v.14 no.4
    • /
    • pp.11-17
    • /
    • 2022
  • This study investigates closure durations of Korean stops in terms of laryngeal contrast, place of articulation, and position within words. Twenty-two Korean speakers produced nonsense words containing Korean stops in word-initial position, in word-final position, and between vowels. The statistical results showed that closure durations differed significantly by laryngeal contrast and place of articulation, while the differences by position within words were marginally significant. Closure durations were ordered lenis < aspirated < fortis by laryngeal contrast, velar < alveolar < bilabial by place of articulation, and word-final < word-initial < between vowels by position within words. The laryngeal contrasts were neutralized in word-final position, as per coda neutralization in Korean phonology. This study shows that closure duration should be considered a valuable phonetic cue for identifying stops, on par with voice onset time and f0.

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by traders' expectations, studies have been conducted to predict stock price movements through the analysis of various sources of text data. Research has examined not only the relationship between text data and stock price fluctuations, but also trading strategies based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix, as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the term-document matrix: words with too little frequency or importance are removed, and words can also be selected by measuring how much each one contributes to correctly classifying a document. The conventional approach collects all the documents to be analyzed and selects the words that influence classification. In this study, we instead analyze the documents for each individual stock and select words that are irrelevant to all categories as neutral words. We then extract the words around each selected neutral word and use them to generate the term-document matrix. The idea is that stock movements are less related to the presence of the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations.
    We first removed stop words and selected neutral words for each stock, excluding candidate words that also appeared in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 market-cap stocks. Three months of news data served as training data, and the remaining month was applied to the model to predict next-day stock price movements. We used SVM, boosting, and random forest models. The stock market was open for 80 days over the four months (2016/02/01 to 2016/05/31); the first 60 days served as the training set and the remaining 20 days as the test set. The proposed neutral-word-based algorithm showed better classification performance than the sparsity-based word selection method. The suggested method differs from conventional word extraction in that it uses not only news articles about the corresponding stock but also news about other stocks to determine which words to remove: it discards both words that appear under both rises and falls and words that appear commonly in news about other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy.
    A limitation of this study is that stock price prediction was framed as classifying rises and falls, and the experiment covered only the top ten stocks, which do not represent the entire stock market. In addition, investment performance is difficult to demonstrate because stock price fluctuations and rates of return may differ. Research using more stocks, and yield prediction through trading simulation, therefore remains necessary.
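
  A minimal sketch of the neutral-word idea on toy data: words whose document frequency is nearly identical across up-days and down-days are treated as neutral, only the words inside a small window around each neutral word feed the term-document matrix, and a linear SVM classifies the movement. The window size, neutrality threshold, corpus, and labels are all placeholder assumptions.

```python
# A minimal sketch of neutral-word-based feature extraction on toy data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

docs = ["shares rally on earnings report", "stock falls on earnings report",
        "profit jumps after report", "price drops after report"]
updown = np.array([1, 0, 1, 0])           # 1 = up next day, 0 = down (toy labels)

vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs).toarray()
vocab = vec.get_feature_names_out()

# Neutral words: (almost) identical document frequency in both classes.
gap = np.abs(X[updown == 1].mean(0) - X[updown == 0].mean(0))
neutral = set(vocab[gap < 0.01])          # e.g. "report", "on", "earnings"

def window_words(doc, k=2):
    """Keep only words within +/-k tokens of a neutral word."""
    toks = doc.split()
    keep = set()
    for i, t in enumerate(toks):
        if t in neutral:
            keep.update(toks[max(0, i - k):i + k + 1])
    return " ".join(keep - neutral)       # drop the neutral words themselves

ctx = [window_words(d) for d in docs]
Xc = CountVectorizer().fit_transform(ctx) # term-document matrix of context words
clf = SVC(kernel="linear").fit(Xc, updown)
print(clf.predict(Xc))
```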

Comparison of Key Words of the Journal of Korean Academy of Fundamentals of Nursing with MeSH (2003-2007) (기본간호학회지 게재 논문의 주요어와 MeSH 용어의 비교(2003-2007년))

  • Chaung, Seung-Kyo;Sohng, Kyeong-Yae;Kim, Kyung-Hee
    • Journal of Korean Academy of Fundamentals of Nursing
    • /
    • v.15 no.4
    • /
    • pp.558-565
    • /
    • 2008
  • Purpose: The purpose of this study was to analyze how accurately authors of the Journal of Korean Academy of Fundamentals of Nursing used MeSH terms as key words. Method: A total of 724 key words used in 225 papers of the Journal of Korean Academy of Fundamentals of Nursing from 2003 to 2007 were compared with MeSH terms. Results: Of the key words, 59.8% coincided exactly with MeSH terms, 13.5% were entry terms, and 21.8% were not MeSH terms. The coincidence rates for 2003 and 2007 were 38.5% and 70.9%, respectively. Also, 25.3% of the papers used MeSH terms precisely as key words, while 8% did not use any MeSH terms. Conclusion: The results show that the coincidence rate of key words with MeSH terms was at a moderate level and gradually increased year by year. However, authors need to understand MeSH more specifically and accurately.
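
  A minimal sketch of the comparison procedure, with tiny placeholder sets standing in for the full MeSH headings and entry-term mappings:

```python
# A tiny stand-in for the study's comparison; real MeSH has tens of
# thousands of headings plus entry terms. All sets below are hypothetical.
mesh_headings = {"pain", "nursing", "hand hygiene"}
mesh_entry_terms = {"handwashing": "hand hygiene"}  # entry term -> heading

keywords = ["pain", "handwashing", "self-efficacy"]

exact = [k for k in keywords if k in mesh_headings]
entry = [k for k in keywords if k in mesh_entry_terms]
other = [k for k in keywords if k not in mesh_headings and k not in mesh_entry_terms]

for name, group in [("exact", exact), ("entry", entry), ("other", other)]:
    print(f"{name}: {len(group) / len(keywords):.1%}")  # coincidence rates
```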


The Effect of Prosodic Position and Word Type on the Production of Korean Plosives

  • Jang, Mi
    • Phonetics and Speech Sciences
    • /
    • v.3 no.4
    • /
    • pp.71-81
    • /
    • 2011
  • This paper investigated how prosodic position and word type affect the phonetic structure of Korean coronal stops. Initial segments of prosodic domains are known to be more strongly articulated and longer than prosodic domain-medial segments. However, few studies have examined whether the properties of prosodic domain-initial segments are affected by the information content of words (real vs. nonsense words). In addition, since the scope of the domain-initial effect is known to be local to the initial consonant, and effects on the following vowel have been found to be limited, it is worth examining whether the prosodic domain-initial effect extends into the vowel after the initial consonant in a systematic way across different prosodic domains. The acoustic properties of Korean coronal stops (lenis /t/, aspirated /tʰ/, and tense /t'/) were compared across Intonational Phrase-, Phonological Phrase-, and Word-initial positions in both real and nonsense words. The durational intervals, such as VOT and CV duration, were cumulatively lengthened for /t/ and /tʰ/ in the higher prosodic domain-initial positions. However, the tense stop /t'/ did not show any variation as a function of prosodic position or word type. The domain-initial lenis stop showed significantly longer duration in nonsense words than in real words, but the prosodic domain-initial effect was not found in the F0 and [H1-H2] properties of the vowel after initial stops. The present study provides evidence that speakers tend to enhance speech clarity when there is less contextual information, as in prosodic domain-initial position and in nonsense words.


Korean Students' Repetition of English Sentences Under Noise and Speed Conditions (소음과 속도를 변화시킨 영어 문장 따라하기에 대한 연구)

  • Kim, Eun-Jee;Yang, Byung-Gon
    • Speech Sciences
    • /
    • v.11 no.2
    • /
    • pp.105-117
    • /
    • 2004
  • Recently, many scholars have emphasized the importance of English listening ability for smoother communication. Most audio materials, however, are recorded in a quiet sound-proof booth. Therefore, students who have spent much time listening to such idealized audio materials can be expected to have difficulty communicating with native speakers in real life. In this study, we examined how well thirty-three Korean university students and five native speakers repeated recorded English sentences under noise and speed conditions. The subjects' production was scored by listening to each recorded sentence, counting the number of words correctly produced, and computing the percentage of correctly produced words out of the total words in each sentence. Results showed that the student group correctly repeated around 65% of the words in each sentence, while the native speakers demonstrated an almost perfect match. The students appeared to have difficulty perceiving and repeating function words across conditions. Also, the high-proficiency student group outperformed the low-proficiency group, particularly in the repetition of function words. In addition, the students' repetition accuracy dropped markedly when the sentences were both sped up and mixed with noise. Finally, the Korean students' percent correct ratio fell as the stimulus sentence became longer.
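
  A minimal sketch of the scoring rule described above: count the stimulus words that reappear in the repetition and report the percentage. The sentence pair is a placeholder.

```python
# A minimal sketch of the percent-correct scoring rule on placeholder data.
def percent_correct(stimulus: str, repetition: str) -> float:
    """Percentage of stimulus words that appear in the repetition."""
    target = stimulus.lower().split()
    produced = set(repetition.lower().split())
    hits = sum(1 for w in target if w in produced)
    return 100.0 * hits / len(target)

print(percent_correct("She walks to the station every day",
                      "she walks to station every day"))  # ~85.7
```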


Probabilistic Segmentation and Tagging of Unknown Words (확률 기반 미등록 단어 분리 및 태깅)

  • Kim, Bogyum;Lee, Jae Sung
    • Journal of KIISE
    • /
    • v.43 no.4
    • /
    • pp.430-436
    • /
    • 2016
  • Processing of unknown words such as proper nouns and newly coined words is important for a morphological analyzer handling documents from various domains. In this study, a segmentation and tagging method for unknown Korean words is proposed for 3-step probabilistic morphological analysis. To guess unknown words, it exploits the rich suffixes that attach to open-class words such as general nouns and proper nouns. We propose a method to learn suffix patterns from a morpheme-tagged corpus and to calculate their probabilities for unknown open-class word segmentation and tagging in the probabilistic morphological analysis model. Experimental results showed that unknown word processing performance improves greatly on documents containing many unregistered words.
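
  A minimal sketch of the suffix-pattern idea under stated assumptions: suffix strings and their tag counts are collected from a toy morpheme-tagged corpus, and an unknown word is segmented at its longest known suffix, which also supplies the most probable tag. The tag set and corpus entries are placeholders, not the paper's actual model.

```python
# A minimal sketch of suffix-pattern learning for unknown word guessing.
from collections import Counter, defaultdict

# Toy morpheme-tagged corpus: (word, "stem-tag+particle-tag") pairs.
tagged = [("서울에서", "NNP+JKB"), ("학교에서", "NNG+JKB"), ("밥을", "NNG+JKO")]

suffix_tag = defaultdict(Counter)
for word, tags in tagged:
    stem_tag, particle_tag = tags.split("+")
    for k in range(1, 3):                 # suffixes of length 1..2
        suffix_tag[word[-k:]][particle_tag] += 1

def guess(word):
    # Prefer the longest suffix seen in training; back off to shorter ones.
    for k in (2, 1):
        sfx = word[-k:]
        if sfx in suffix_tag:
            tag, n = suffix_tag[sfx].most_common(1)[0]
            p = n / sum(suffix_tag[sfx].values())
            return word[:-k], sfx, tag, p
    return word, "", "UNKNOWN", 0.0

print(guess("부산에서"))                   # ('부산', '에서', 'JKB', 1.0)
```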

Corpus-based Analysis on Vocabulary Found in 『Donguibogam』 (코퍼스 분석방법을 이용한 『동의보감(東醫寶鑑)』의 어휘 분석)

  • Jung, Ji-Hun;Kim, Dongryul
    • The Journal of Korean Medical History
    • /
    • v.28 no.1
    • /
    • pp.135-141
    • /
    • 2015
  • The purpose of this study is to analyze the vocabulary found in "Donguibogam", one of the major medical books of mid-Chosun, through corpus-based analysis, a text analysis method. The analysis shows that "Donguibogam" contains a total of 871,000 words written with 5,130 distinct Chinese characters, of which 2,430 characters account for 99% of the entire text. The twenty most frequent Chinese characters are mainly function words, which indicates that "Donguibogam" is a book written in complete sentences, like other books. Comparing the chapters of "Donguibogam", Remedies and Acupuncture show lower frequencies of function words than Internal Medicine, External Medicine, and Miscellaneous Diseases. "Yixuerumen (Introduction to Medicine)", which strongly influenced "Donguibogam", shows lower frequencies of function words among its most frequent words than "Donguibogam" does. This may be because "Yixuerumen" maintains the form of chileonjeolgu (verse lines of seven Chinese characters each) and adds footnotes below them. Corpus-based analysis reveals which words a medical text mainly uses by measuring their frequencies. The results of this analysis can therefore be used for Chinese character education at colleges of Korean Medicine.
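
  A minimal sketch of the frequency statistics reported above, assuming a plain-text file of the digitized corpus (the file name is hypothetical): count character frequencies and find how many distinct characters cover 99% of the text.

```python
# A minimal sketch: character frequencies and the 99% coverage count.
# "donguibogam.txt" is a hypothetical file name for the digitized text.
from collections import Counter

text = open("donguibogam.txt", encoding="utf-8").read()
freq = Counter(ch for ch in text if not ch.isspace())  # crude character filter

total = sum(freq.values())
covered = 0
for n_chars, (ch, n) in enumerate(freq.most_common(), start=1):
    covered += n
    if covered / total >= 0.99:            # stop once 99% of the text is covered
        break
print(len(freq), "distinct characters;", n_chars, "cover 99% of the text")
```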