• Title/Summary/Keyword: Corpus Frequency

Determining the Specificity of Terms using Compositional and Contextual Information (구성정보와 문맥정보를 이용한 전문용어의 전문성 측정 방법)

  • Ryu Pum-Mo;Bae Sun-Mee;Choi Key-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.7
    • /
    • pp.636-645
    • /
    • 2006
  • A term with more domain-specific information has a higher level of term specificity, and term specificity is a necessary condition in term-hierarchy construction. We propose new methods for calculating the specificity of terms, based on information-theoretic measures over compositional and contextual information. The compositional information includes the frequency, tf·idf, bigrams, and internal structure of the terms; the contextual information of a term includes the probabilistic distribution of its modifiers. The proposed methods can be applied to other domains without extra procedures. Experiments showed a very promising result, a precision of 82.0%, when the methods were applied to terms in the MeSH thesaurus.
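
The abstract names the information sources but not the formula; as one purely illustrative reading, a compositional signal (rarity of component words) and a contextual signal (entropy of the modifier distribution) can be combined into a specificity score. Everything below, including the combination by simple addition, is an assumption for illustration, not the paper's actual method.

```python
# Sketch: one information-theoretic reading of term specificity, combining a
# compositional signal (inverse collection frequency of the term's component
# words) with a contextual signal (entropy of the term's modifier distribution).
import math
from collections import Counter

def modifier_entropy(modifier_counts: Counter) -> float:
    """Shannon entropy of the distribution of modifiers observed before a term.
    Specific terms tend to take few distinct modifiers, i.e., low entropy."""
    total = sum(modifier_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in modifier_counts.values())

def specificity(component_freqs, modifier_counts, n_terms):
    # Compositional part: rare component words suggest a specific term.
    comp = sum(math.log2(n_terms / (1 + f)) for f in component_freqs)
    # Contextual part: low modifier entropy suggests a specific term.
    ctx = -modifier_entropy(modifier_counts)
    return comp + ctx

print(specificity([3, 12], Counter({"acute": 5, "chronic": 2}), n_terms=10_000))
```

Low modifier entropy marks terms that occur in narrow contexts, which is the intuition the contextual measure captures.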

Disease-Related Vocabulary and Its Translingual Practice in the Late 19th to Early 20th Century (19세기 말 20세기 초 질병 어휘와 언어횡단적 실천)

  • Lee, Eunryoung
    • Journal of Sasang Constitutional Medicine
    • /
    • v.31 no.1
    • /
    • pp.65-78
    • /
    • 2019
  • Objectives This study aims to investigate how Korean disease-related vocabulary was established or changed when translated into French or English. Through this, we examine changes in the meaning of diseases and the ecosystem of disease-related vocabulary in the transition period from the 19th to the 20th century. Methods Korean disease-related vocabulary is extracted from a total of 148,000 Korean headwords included in our corpus of three bilingual dictionaries. Among them, the scope of analysis is limited to groups of vocabulary containing the high-frequency words disease (病) and symptom (症). Results The first type of change is the emergence of neologisms; in this case, coexistence of existing vocabulary and new words is observed. The second is the appearance of loanwords written in Hangul. The third is the case where the interpretation of meaning changed while the word form was maintained. Finally, the fourth is that orthographic variants appear while the meaning of the existing vocabulary is maintained. Discussion Disease-related vocabulary increased greatly between 1897 and 1931. The factors behind this increase were the emergence of coined words and compound words and the influx of foreign words. Korean and the Western languages created new lexical forms in order to introduce previously unknown concepts to Koreans. We could also confirm that English words expanded their semantic fields by modifying the way the meanings of Korean disease-related vocabulary were represented.

The Stream of Uncertainty in Scientific Knowledge using Topic Modeling (토픽 모델링 기반 과학적 지식의 불확실성의 흐름에 관한 연구)

  • Heo, Go Eun
    • Journal of the Korean Society for Information Management
    • /
    • v.36 no.1
    • /
    • pp.191-213
    • /
    • 2019
  • The process of obtaining scientific knowledge is conducted through research: researchers deal with the uncertainty of science and establish the certainty of scientific knowledge. In other words, handling uncertainty is an essential step in obtaining scientific knowledge. Existing studies were predominantly hedging studies with a linguistic approach, or computational-linguistics studies that manually constructed corpora of uncertainty words, and they identified characteristics of uncertainty in a particular research field based only on simple frequency. Therefore, in this study, we examine the pattern of scientific knowledge based on uncertainty words over time in the biomedical literature, where claims expressed in sentences play an important role. For this purpose, biomedical propositions are analyzed based on the semantic predications provided by UMLS, and DMR topic modeling, a useful method for identifying patterns across disciplines, is applied to understand the trend of entity-based topics with uncertainty. We confirmed that, as research develops over time, uncertainty in scientific knowledge moves toward a decreasing pattern.
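
DMR (Dirichlet-multinomial regression) topic modeling conditions topic proportions on document metadata, here the publication year, which is what lets uncertainty-related topics be traced over time. A minimal sketch with the tomotopy library follows; the input file name, its year-tab-text format, and the topic count are illustrative assumptions.

```python
# Sketch: DMR topic modeling over year-labeled abstracts (assumed data format).
import tomotopy as tp

mdl = tp.DMRModel(k=20, min_cf=5)  # 20 topics; ignore very rare words

# Each line of the (hypothetical) input: "<year>\t<space-tokenized abstract>"
with open("biomedical_abstracts.tsv", encoding="utf-8") as f:
    for line in f:
        year, text = line.rstrip("\n").split("\t", 1)
        mdl.add_doc(words=text.split(), metadata=year)  # year acts as DMR metadata

for _ in range(10):
    mdl.train(100)  # 1,000 sampling iterations in total

# Inspect topics; uncertainty-flavored topics would be judged from these words.
for topic_id in range(mdl.k):
    words = ", ".join(w for w, _ in mdl.get_topic_words(topic_id, top_n=10))
    print(f"Topic {topic_id}: {words}")
```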

Pronunciation of the Korean diphthong /jo/: Phonetic realizations and acoustic properties (한국어 /ㅛ/의 발음 양상 연구: 발음형 빈도와 음향적 특징을 중심으로)

  • Hyangwon Lee
    • Phonetics and Speech Sciences
    • /
    • v.15 no.1
    • /
    • pp.9-17
    • /
    • 2023
  • The purpose of this study is to determine how the Korean diphthong /jo/ shows phonetic variation in various linguistic environments. The pronunciation of /jo/ is discussed, focusing on the relationship between phonetic variation and the distribution range of vowels. Location in a word (monosyllable, word-initial, word-medial, word-final) and word class (content word, function word) were analyzed using the speech of 10 female speakers in the Seoul Corpus. Counting the frequency of /jo/'s realizations in each environment showed that the pronunciation type was affected by location in a word and by word class. In the acoustic analysis, frequent phonetic reduction was observed for the function word /jo/. Word class did not change the average phonetic values of /jo/, but it changed the distribution of individual tokens. These results indicate that the linguistic environment affects the phonetic distribution of vowels.
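
The frequency-of-realization analysis is, in effect, a cross-tabulation of variants by position and word class; a toy sketch follows, with the token tuples standing in (hypothetically) for Seoul Corpus annotations.

```python
# Sketch: cross-tabulating pronunciation variants of /jo/ by position and word class.
# The token list and its fields are hypothetical stand-ins for corpus annotations.
from collections import Counter

tokens = [
    # (canonical form, realized form, position in word, word class)
    ("jo", "jo", "word-final", "function"),
    ("jo", "o",  "word-final", "function"),   # reduced variant
    ("jo", "jo", "word-initial", "content"),
]

counts = Counter((pos, cls, realized) for _, realized, pos, cls in tokens)
for (pos, cls, realized), n in sorted(counts.items()):
    print(f"{pos:12s} {cls:8s} -> {realized}: {n}")
```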

Synonym Emotional Adjectives in Coordination: Analyzing [[Emotional Adjective + '-ko(and)'] + Emotional Adjective] Structures in Korean (감정형용사 유의어 결합 연구 -[[감정형용사 + '-고'] + 감정형용사] 구성-)

  • Park, Jina;Jeong, Yong-Ho
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.565-577
    • /
    • 2024
  • This study examined how emotional adjectives are connected in the format [[emotional adjective + '-ko(and)'] + emotional adjective]. As a result, it was confirmed that there are quite a few cases in Korean in which two or more emotional adjectives are used together to express emotion. By identifying the emotional adjectives that co-occur in this construction, Korean learners can understand and express the individual lexical meanings of emotional adjectives more clearly, and it can help them express complex emotions or produce rich emotional expressions in Korean. It is hoped that the examples and frequencies of [[emotional adjective + '-ko(and)'] + emotional adjective] presented in this study will be of some help in teaching and learning Korean emotional vocabulary.
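
Harvesting this construction from a morpheme-tagged corpus reduces to a pattern match; the sketch below assumes Sejong-style tags (VA for adjective stems, EC for the connective ending '-고'), which is an assumption about the corpus annotation rather than anything stated in the abstract.

```python
# Sketch: finding [[adjective + '-ko'] + adjective] coordinations in a POS-tagged line.
# Assumes Sejong-style morpheme tags: VA = adjective stem, EC = connective ending.
import re
from collections import Counter

# Hypothetical tagged sentence: morphemes joined as "form/TAG" pairs.
tagged = "기쁘/VA 고/EC 행복하/VA 다/EF ./SF"

pattern = re.compile(r"(\S+)/VA 고/EC (\S+)/VA")
pair_counts = Counter()
for adj1, adj2 in pattern.findall(tagged):
    pair_counts[(adj1, adj2)] += 1

print(pair_counts)  # Counter({('기쁘', '행복하'): 1})
```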

A Korean Homonym Disambiguation System Using Refined Semantic Information and Thesaurus (정제된 의미정보와 시소러스를 이용한 동형이의어 분별 시스템)

  • Kim Jun-Su;Ock Cheol-Young
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.829-840
    • /
    • 2005
  • Word sense disambiguation (WSD) is one of the most difficult problems in Korean information processing. We propose a WSD model that filters semantic information using the specific characteristics of dictionary definitions, augmented with information useful for sense determination, such as statistical, distance, and case information. The model also addresses the scarcity of semantic-information data by drawing on the word hierarchy system (thesaurus) developed by the University of Ulsan's UOU Word Intelligent Network, a dictionary-based lexical database. Among the WSD models elaborated in this study, the one using statistical information, distance, and case information along with the thesaurus (hereinafter the 'SDJ-X model') performed best. In an experiment conducted on the sense-tagged corpus of 1,500,000 eojeols provided by the Sejong project, the SDJ-X model improved on the maximum-frequency sense determination baseline (MFC) by 18.87% (21.73% for nouns), and on the model using inter-eojeol distance weights by 10.49% (8.84% for nouns, 11.51% for verbs). Finally, the accuracy of the SDJ-X model was higher than that of the model using only statistical, distance, and case information without the thesaurus, by a margin of 6.12% (5.29% for nouns, 6.64% for verbs).
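
The abstract lists the evidence sources (statistical, distance, and case information, with a thesaurus backoff for sparse data) but not their exact combination; the sketch below is one hypothetical way such evidence could be combined, with all weights and data structures invented for illustration.

```python
# Sketch: combining statistical, distance, and case evidence for homonym
# disambiguation, in the spirit of the SDJ-X model. All weights, data
# structures, and the backoff-to-thesaurus step are illustrative assumptions.

def score_sense(sense, context_words, cooc_freq, case_frame, thesaurus_sim):
    """Return a combined score for one candidate sense of a homonym."""
    score = 0.0
    for position, word in enumerate(context_words):
        # Statistical evidence: co-occurrence frequency of (sense, word).
        stat = cooc_freq.get((sense, word), 0)
        if stat == 0:
            # Sparse-data backoff: similarity of word's thesaurus class to sense.
            stat = thesaurus_sim(sense, word)
        # Distance weight: nearer eojeols count more.
        dist_w = 1.0 / (1 + position)
        # Case weight: boost words filling an expected case slot of this sense.
        case_w = 2.0 if word in case_frame.get(sense, ()) else 1.0
        score += stat * dist_w * case_w
    return score

def disambiguate(senses, context_words, cooc_freq, case_frame, thesaurus_sim):
    return max(senses, key=lambda s: score_sense(
        s, context_words, cooc_freq, case_frame, thesaurus_sim))
```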

The Study on Possibility of Applying Word-Level Word Embedding Model of Literature Related to NOS -Focus on Qualitative Performance Evaluation- (과학의 본성 관련 문헌들의 단어수준 워드임베딩 모델 적용 가능성 탐색 -정성적 성능 평가를 중심으로-)

  • Kim, Hyunguk
    • Journal of Science Education
    • /
    • v.46 no.1
    • /
    • pp.17-29
    • /
    • 2022
  • The purpose of this study is to examine qualitatively how efficiently and reasonably a computer can learn themes related to the Nature of Science (NOS). To this end, a corpus was constructed from literature related to NOS (920 abstracts), and the optimized hyperparameters of Word2Vec (CBOW, Skip-gram) were confirmed. The word-level word embeddings were then evaluated comparatively against the four dimensions of NOS (Inquiry, Thinking, Knowledge, and STS). Based on previous studies and a preliminary performance evaluation, the CBOW model was set to a vector dimension of 200, five threads, a minimum frequency of 10, 100 training iterations, and a context window of 1; the Skip-gram model was set to a vector dimension of 200, five threads, a minimum frequency of 10, 200 training iterations, and a context window of 3. When applied to the four NOS dimensions, Skip-gram performed better in the Inquiry dimension in terms of the kinds of high-similarity words each model produced. In the Thinking and Knowledge dimensions there was no difference in embedding performance between the two models, but the high-similarity words for each model shared terms from the other domain, so additional models would seem to be needed to learn them properly. The STS dimension's embedding performance was also judged insufficient for examining STS elements comprehensively, as it excessively listed words related to problem solving. By having a computer learn themes related to NOS, this study is expected to offer overall implications for the models available for science education and for the utilization of artificial intelligence.
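
The reported settings map directly onto gensim's Word2Vec parameters; a minimal sketch follows (the tokenized-abstract file and the probe word are hypothetical placeholders).

```python
# Sketch: training CBOW and Skip-gram with the hyperparameters reported above
# (gensim 4.x). The input file is a hypothetical stand-in for the 920 abstracts,
# one whitespace-tokenized abstract per line.
from gensim.models import Word2Vec

with open("nos_abstracts_tokenized.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f]

cbow = Word2Vec(sentences=sentences, sg=0,        # CBOW
                vector_size=200, workers=5, min_count=10,
                epochs=100, window=1)

skipgram = Word2Vec(sentences=sentences, sg=1,    # Skip-gram
                    vector_size=200, workers=5, min_count=10,
                    epochs=200, window=3)

# Qualitative evaluation: inspect the most similar words for a probe term.
print(skipgram.wv.most_similar("inquiry", topn=10))
```

With these settings, the qualitative check amounts to inspecting most_similar lists for probe terms drawn from each NOS dimension.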

Analysis of Research Trends in New Drug Development with Artificial Intelligence Using Text Mining (텍스트 마이닝을 이용한 인공지능 활용 신약 개발 연구 동향 분석)

  • Jae Woo Nam;Young Jun Kim
    • Journal of Life Science
    • /
    • v.33 no.8
    • /
    • pp.663-679
    • /
    • 2023
  • This review analyzes research trends in new drug development using artificial intelligence from 2010 to 2022. The abstracts of 2,421 studies were organized into a corpus, and words with high frequency and high connection centrality were extracted through preprocessing. The analysis revealed a word-frequency trend between 2020 and 2022 similar to that between 2010 and 2019. In terms of research methods, many studies from 2010 to 2020 used machine learning, and since 2021 research using deep learning has been increasing. Through these studies, we investigated the trends in the use of artificial intelligence by field, along with the strengths, problems, and challenges of related research. We found that since 2021 the application of artificial intelligence has been expanding, for example to drug repositioning, computer-aided development of anticancer drugs, and clinical trials. This article briefly presents the prospects of new-drug-development research using artificial intelligence. If the reliability and safety of bio and medical data are ensured and the development of artificial intelligence technology continues, new drug development using artificial intelligence is expected to move toward personalized and precision medicine, so we encourage efforts in that field.
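
"Connection centrality" here reads as degree centrality on a word co-occurrence network; a minimal sketch follows using networkx, with a toy two-abstract corpus and whitespace tokenization as simplifying assumptions.

```python
# Sketch: extracting high-frequency words and connection (degree) centrality
# from abstract texts via a co-occurrence network. Input and tokenization are
# simplified, illustrative assumptions.
from collections import Counter
from itertools import combinations
import networkx as nx

abstracts = [
    "deep learning model predicts drug target interaction",
    "machine learning screens candidate drug compounds",
]

freq = Counter(w for a in abstracts for w in a.split())

G = nx.Graph()
for a in abstracts:
    words = set(a.split())
    for u, v in combinations(sorted(words), 2):  # co-occurrence within an abstract
        w = G.get_edge_data(u, v, {"weight": 0})["weight"] + 1
        G.add_edge(u, v, weight=w)

centrality = nx.degree_centrality(G)
top = sorted(freq, key=lambda w: (freq[w], centrality.get(w, 0)), reverse=True)[:10]
print(top)
```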

A Performance Improvement Method using Variable Break in Corpus Based Japanese Text-to-Speech System (가변 Break를 이용한 코퍼스 기반 일본어 음성 합성기의 성능 향상 방법)

  • Na, Deok-Su;Min, So-Yeon;Lee, Jong-Seok;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.2
    • /
    • pp.155-163
    • /
    • 2009
  • In text-to-speech systems, the conversion of text into prosodic parameters necessarily consists of three steps: the placement of prosodic boundaries, the determination of segmental durations, and the specification of fundamental-frequency contours. Prosodic boundaries, as the most important and basic parameter, affect the estimation of durations and fundamental frequency. Break prediction is thus an important step in text-to-speech systems, since break indices (BIs) strongly influence how correctly prosodic phrase boundaries are represented. However, accurate prediction is difficult, since BIs are often chosen according to the meaning of a sentence or the reading style of the speaker. In Japanese, the prediction of an accentual phrase boundary (APB) and a major phrase boundary (MPB) is particularly difficult. Thus, this paper presents a method to compensate for APB and MPB prediction errors. First, we define a subtle BI, for which it is difficult to decide clearly between an APB and an MPB, as a variable break (VB), and an explicit BI as a fixed break (FB). The VB is chosen using a classification and regression tree, and multiple prosodic targets for pitch and duration are then generated. Finally, unit selection is conducted using the multiple prosodic targets. In the MOS test, the original speech scored 4.99, while the proposed method scored 4.25 and the conventional method 4.01. The experimental results show that the proposed method improves the naturalness of synthesized speech.
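
The VB decision is a classification problem solved with a classification and regression tree; the sketch below shows the shape of such a classifier with scikit-learn, using entirely hypothetical features and training data (the actual feature set is not given in the abstract).

```python
# Sketch: CART-style classification of a subtle break index as APB vs. MPB.
# Feature encoding and training data are hypothetical placeholders.
from sklearn.tree import DecisionTreeClassifier

# Features per boundary: [phrase length (morae), POS code left, POS code right,
# distance from sentence start]. Labels: 0 = APB, 1 = MPB.
X = [
    [4, 1, 2, 0],
    [7, 3, 1, 5],
    [3, 2, 2, 9],
    [8, 1, 3, 2],
]
y = [0, 1, 0, 1]

cart = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)
cart.fit(X, y)

print(cart.predict([[5, 1, 2, 3]]))  # predicted break type for a new boundary
```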

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing mass of content is becoming ever more important. Amid this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text-data analysis is expected to be useful and promising, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and difficult to extract high-quality triples. Second, it becomes harder for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of searching stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and enhance the model's effectiveness. From these processes, this study makes three contributions. First, it presents a practical and simple automatic knowledge-extraction method. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance-evaluation method are also presented. For the empirical study confirming the model's usefulness, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named-entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained.
Thus, when a new entity from the test set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we measure its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the test set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the test set of 2,526 reports. This hit ratio is meaningfully high despite several constraints on the research. Looking at the model's prediction performance per stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named-entity recognition tool and applied to the neural tensor network without learning a corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain and some aspects need to be complemented; most notably, the model's especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
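
The per-stock score functions follow the neural tensor network idea; below is a minimal NumPy sketch of an NTN-style scorer in the spirit of Socher et al.'s formulation. The dimensions, the learned per-stock vector, and the initialization are illustrative assumptions, and training (gradient updates) is omitted.

```python
# Sketch: an NTN-style score function, one per stock, applied to one-hot entity
# vectors. Score: u . tanh(e^T W s + V [e; s] + b), where s is a per-stock vector.
# All parameters here are random placeholders; the paper's exact setup may differ.
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4  # entity vector size (top-100 one-hot), tensor slices

class StockScorer:
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor slices
        self.V = rng.normal(scale=0.1, size=(k, 2 * d))  # linear layer
        self.b = np.zeros(k)
        self.u = rng.normal(scale=0.1, size=k)
        self.s = rng.normal(scale=0.1, size=d)           # learned stock vector

    def score(self, e):
        """NTN score for entity vector e against this stock's vector."""
        bilinear = np.array([e @ self.W[i] @ self.s for i in range(k)])
        linear = self.V @ np.concatenate([e, self.s])
        return self.u @ np.tanh(bilinear + linear + self.b)

scorers = {name: StockScorer() for name in ["LGElectronics", "KiaMtr", "Mando"]}

entity = np.zeros(d); entity[17] = 1.0  # one-hot vector for some entity
best = max(scorers, key=lambda name: scorers[name].score(entity))
print(best)  # stock predicted as related to the entity
```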