• Title/Summary/Keyword: Word recognition test

Study on Improving Maritime English Proficiency Through the Use of a Maritime English Platform (해사영어 플랫폼을 활용한 표준해사영어 실력 향상에 관한 연구)

  • Jin Ki Seor;Young-soo Park;Dongsu Shin;Dae Won Kim
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.930-938 / 2023
  • Maritime English is a specialized language system designed for ship operations, maritime safety, and external and internal communication onboard. According to the International Maritime Organization's (IMO) International Convention on Standards of Training, Certification and Watchkeeping for Seafarers (STCW), it is imperative that navigational officers engaged in international voyages have a thorough understanding of Maritime English, including the use of Standard Marine Communication Phrases (SMCP). This study measured students' proficiency in Maritime English using a learning and testing platform that includes voice recognition, translation, and word entry tasks, and evaluated the resulting improvement in Maritime English exam scores. Furthermore, the study aimed to investigate the level of platform use needed for cadets to qualify as junior navigators. The experiment began by examining the correlation between students' overall English skills and their proficiency in SMCP through an initial test, followed by the evaluation of improvements in their scores and changes in exam duration during the mid-term and final exams. The initial test revealed a significant difference in Maritime English test scores among groups based on individual factors, such as TOEIC scores and self-assessment of English ability, and both the mid-term and final tests confirmed substantial score improvements for the group using the platform. This study confirmed the efficacy of a learning platform that could be extensively applied in maritime education and potentially expanded beyond the scope of Maritime English education in the future.
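As a rough illustration of the two analyses described in this abstract (correlation between general English ability and initial SMCP scores, and comparison of score gains between groups), here is a minimal Python sketch. All variable names and simulated values are assumptions for demonstration, not the study's data or code.

```python
# Hypothetical sketch of the analyses described in the abstract:
# (1) correlation between TOEIC scores and initial Maritime English scores,
# (2) comparison of score gains between platform users and a comparison group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
toeic = rng.normal(650, 120, 60)                    # assumed TOEIC scores
initial_smcp = 0.05 * toeic + rng.normal(0, 8, 60)  # assumed initial SMCP test scores

r, p_corr = stats.pearsonr(toeic, initial_smcp)
print(f"TOEIC vs. initial SMCP score: r={r:.2f}, p={p_corr:.3f}")

# Score gain (final minus initial) for platform users vs. non-users (assumed).
gain_platform = rng.normal(12, 5, 30)
gain_control = rng.normal(4, 5, 30)
t, p_gain = stats.ttest_ind(gain_platform, gain_control)
print(f"Gain difference between groups: t={t:.2f}, p={p_gain:.3f}")
```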

Mild Impairments in Cognitive Function in the Elderly with Restless Legs Syndrome (노인 하지불안증후군에서의 인지기능 저하)

  • Kim, Eun Soo;Yoon, In-Young;Kweon, Kukju;Park, Hye Youn;Lee, Chung Suk;Han, Eun Kyoung;Kim, Ki Woong
    • Sleep Medicine and Psychophysiology / v.20 no.1 / pp.15-21 / 2013
  • Objectives: Cognitive impairment in restless legs syndrome (RLS) patients can be affected by sleep deprivation, anxiety, and depression, which are common in RLS. The objective of this study is to investigate the relationship between cognitive impairment and RLS in non-medicated Korean elderly while controlling for psychiatric conditions. Method: The study sample comprised 25 non-medicated Korean elderly RLS patients and 50 age-, sex-, and education-matched controls. All subjects were evaluated with comprehensive cognitive function assessment tools, including the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease Assessment Packet (CERAD-K), the severe cognitive impairment rating scale (SCIRS), the frontal assessment battery (FAB), and the clock drawing test (CLOX). Sleep quality and depression were also assessed with the Pittsburgh sleep quality index (PSQI) and the geriatric depression scale (GDS). Results: PSQI and GDS scores showed no difference between the RLS and control groups. There was no significant difference between the two groups on nearly all cognitive measures except the constructional recognition test, on which subjects with RLS performed worse than controls (t=-2.384, p=0.02). Subjects with depression (GDS ≥ 10) showed significant cognitive impairment compared to controls in verbal fluency, the Korean version of the Mini Mental Status Examination in the CERAD-K (MMSE-KC), word list memory, the trail making test, and the frontal assessment battery (FAB). In contrast, no difference was observed between subjects with low sleep quality (PSQI > 5) and controls. Conclusions: When the impact of insomnia and depression was excluded, cognitive function was found to be relatively preserved in RLS patients compared to controls. The impairment of visual recognition in RLS patients can be explained in terms of dopaminergic dysfunction in RLS.
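A minimal sketch of the kind of group comparison reported above, where RLS patients and matched controls are compared across several cognitive measures with two-sample t-tests. The measure names and simulated scores are assumptions for illustration, not the study's data.

```python
# Illustrative sketch (not the study's code): compare RLS patients and matched
# controls across several cognitive measures using two-sample t-tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
measures = ["verbal_fluency", "word_list_memory", "constructional_recognition"]
rls = {m: rng.normal(50, 10, 25) for m in measures}       # 25 RLS patients (assumed)
control = {m: rng.normal(50, 10, 50) for m in measures}   # 50 matched controls (assumed)
rls["constructional_recognition"] -= 6                    # simulate one selective deficit

for m in measures:
    t, p = stats.ttest_ind(rls[m], control[m])
    flag = "significant" if p < 0.05 else "n.s."
    print(f"{m}: t={t:.3f}, p={p:.3f} ({flag})")
```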

Cognitive Dysfunction in non-hypoxemic COPD Patients (저산소증을 동반하지 않는 만성폐쇄성폐질환 환자에서의 인지기능장애)

  • Kim, Woo Jin;Han, Seon-Sook;Park, Myoung-Ok;Lee, Seung-Joon;Kim, Seong Jae;Lee, Jung Hie
    • Tuberculosis and Respiratory Diseases / v.62 no.5 / pp.382-388 / 2007
  • Background: Cognitive function is impaired in patients with hypoxemic chronic obstructive pulmonary disease (COPD). However, there are conflicting results regarding cognitive function in patients with non-hypoxemic COPD. COPD patients also have sleep disorders. This study examined cognitive function in non-hypoxemic COPD patients, and nocturnal sleep was assessed in COPD patients with cognitive dysfunction. Methods: Twenty-eight COPD patients (mean age, 70.7 years) with an oxygen saturation > 90%, and 33 healthy control subjects (mean age, 69.5 years) who had visited for a routine check-up were selected. The neurocognitive tests were performed using the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD-K) Neuropsychological Battery. Results: The scores of the word list recall test (p=0.03) and the word list recognition test (p=0.006) in the COPD group were significantly lower than those in the control group. Nine patients showed significantly impaired cognitive function. Seven of these underwent polysomnography, which revealed apnea-hypopnea indices ≥ 5 per hour in five patients. The median oxygen desaturation index and median limb movement index were 3.6/h and 38.6/h, respectively. Conclusion: These results suggest that verbal memory function is impaired in non-hypoxemic COPD patients. Six out of seven COPD patients with impaired cognitive function had sleep disorders, namely sleep apnea and/or periodic limb movements during sleep.
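For readers unfamiliar with the polysomnography indices cited above (apnea-hypopnea index, oxygen desaturation index, limb movement index), each is simply a count of events per hour of sleep. The sketch below uses assumed event counts and sleep time, not values from the study.

```python
# Minimal illustration (assumed values) of how per-hour polysomnography
# indices are computed: events per hour = event count / sleep time in hours.

def events_per_hour(event_count: int, total_sleep_minutes: float) -> float:
    """Return an index expressed as events per hour of sleep."""
    return event_count / (total_sleep_minutes / 60.0)

total_sleep_min = 390        # hypothetical total sleep time (6.5 h)
apneas_hypopneas = 35        # hypothetical respiratory events
desaturations = 23           # hypothetical oxygen desaturation events
limb_movements = 250         # hypothetical limb movements

print(f"AHI: {events_per_hour(apneas_hypopneas, total_sleep_min):.1f}/h")
print(f"ODI: {events_per_hour(desaturations, total_sleep_min):.1f}/h")
print(f"LMI: {events_per_hour(limb_movements, total_sleep_min):.1f}/h")
```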

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can capture dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, because a training set of sentences generally contains a huge number of distinct words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of which Korean texts are composed. We constructed language models with three or four LSTM layers. Each model was trained with the stochastic gradient algorithm and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend. After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and an output consisting of the following, 21st character. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest to train for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved and even worsened under specific conditions. On the other hand, when the automatically generated sentences were compared, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were grammatically almost perfect. The results of this study are expected to be widely applicable to the processing of the Korean language in the fields of natural language processing and speech recognition, which are the basis of artificial intelligence systems.
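A minimal Keras sketch of the kind of character/phoneme-level LSTM language model described above: a window of 20 one-hot symbols from a 74-symbol vocabulary predicting the 21st symbol, with three stacked LSTM layers. The hidden size, the choice of Adam, and the dummy data are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a phoneme-level LSTM language model, assuming a 74-symbol
# vocabulary and a 20-symbol input window predicting the next symbol.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 74   # unique phonemes/characters after pre-processing
WINDOW = 20       # 20 consecutive symbols as input

model = keras.Sequential([
    keras.Input(shape=(WINDOW, VOCAB_SIZE)),        # one-hot encoded inputs
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),                                # three LSTM layers
    layers.Dense(VOCAB_SIZE, activation="softmax"),  # next-symbol distribution
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy one-hot data shaped like the paper's input-output pairs.
x = np.eye(VOCAB_SIZE)[np.random.randint(0, VOCAB_SIZE, (128, WINDOW))]
y = np.eye(VOCAB_SIZE)[np.random.randint(0, VOCAB_SIZE, 128)]
model.fit(x, y, epochs=1, batch_size=32, verbose=0)

# Perplexity on held-out data is exp(mean cross-entropy loss).
loss = model.evaluate(x, y, verbose=0)
print("perplexity ~", float(np.exp(loss)))
```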

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.253-266 / 2018
  • This paper proposes a sequence tagging methodology to improve the performance of named entity recognition (NER) in a question-answering (QA) system. To retrieve the correct answers stored in a database, the user's query must be converted into a database language such as SQL (Structured Query Language) so that the computer can interpret it; this requires identifying the classes or data names contained in the database. The existing approach of looking up the words of the query in the database and recognizing entities by string matching cannot distinguish homophones or multi-word phrases because it does not consider the context of the query. If there are multiple matches, all of them are returned, so the query admits many interpretations and the computational cost grows. To overcome these problems, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address the weakness of neural network models, which cannot identify untrained words, by using an ontology knowledge based feature. Experiments were conducted on an ontology knowledge base of the music domain and the performance was evaluated. To evaluate the model proposed in this study accurately, we replaced words appearing in the training queries with untrained words and tested whether the model could still correctly identify words that were contained in the database but unseen during training. As a result, the model could recognize entities in context, could recognize untrained words without re-training the Bidirectional LSTM-CRF model, and improved overall entity recognition performance.
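A minimal sketch of the underlying idea: token embeddings are concatenated with an ontology-derived feature before a bidirectional LSTM tagger, so that even untrained words can carry a useful knowledge-based signal. For brevity the CRF output layer from the paper is approximated by a per-token softmax, and all sizes, the tag set, and the ontology-type encoding are illustrative assumptions.

```python
# Sketch: word embeddings concatenated with a one-hot ontology-type feature,
# fed to a bidirectional LSTM sequence tagger. The CRF layer is replaced by a
# per-token softmax here; all dimensions and the feature encoding are assumed.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, ONTO_TYPES, TAGS, MAXLEN = 5000, 8, 5, 30

tokens = keras.Input(shape=(MAXLEN,), dtype="int32")   # word ids
onto = keras.Input(shape=(MAXLEN, ONTO_TYPES))         # ontology knowledge feature

emb = layers.Embedding(VOCAB, 64)(tokens)
x = layers.Concatenate()([emb, onto])                  # word + knowledge feature
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
out = layers.TimeDistributed(layers.Dense(TAGS, activation="softmax"))(x)

model = keras.Model([tokens, onto], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy batch: unseen word ids can still be tagged via the ontology feature.
tok_batch = np.random.randint(1, VOCAB, (16, MAXLEN))
onto_batch = np.eye(ONTO_TYPES)[np.random.randint(0, ONTO_TYPES, (16, MAXLEN))]
tag_batch = np.random.randint(0, TAGS, (16, MAXLEN))
model.fit([tok_batch, onto_batch], tag_batch, epochs=1, verbose=0)
```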

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the flood of available content is becoming increasingly important. Rather than treating an information request as a simple string, efforts are being made to better reflect the user's intention in search results, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field in which text data analysis is expected to be useful, because new information is constantly generated and the newer the information, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as the financial sector, where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract high-quality triples. Second, it becomes harder for people to produce labeled text data as the scope of knowledge grows and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and to improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. This study therefore has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the extracted knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity appears in the testing set, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we assess its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. In the empirical study, the presented model achieves a 69.3% hit ratio on the testing set of 2,526 reports, which is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show markedly lower performance than average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. The graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or domain-specific word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; in particular, the markedly poor performance on only a few stocks indicates the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
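A minimal NumPy sketch of a neural tensor network score function of the kind referenced above, following the standard NTN form score(e1, e2) = u^T tanh(e1^T W[1:k] e2 + V[e1; e2] + b). The abstract does not detail how entity pairs are fed to each stock's function, so the pairing, the one-hot dimension (top-100 entities), the slice count, and the random parameters below are illustrative assumptions only.

```python
# Sketch (assumptions throughout) of one stock's neural tensor network score
# function applied to one-hot entity vectors; the stock whose function yields
# the highest score would be predicted as related to a new entity.
import numpy as np

def ntn_score(e1, e2, W, V, u, b):
    """Score a pair of entity vectors with one stock's tensor parameters."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)   # k bilinear tensor slices
    linear = V @ np.concatenate([e1, e2])           # standard feed-forward term
    return float(u @ np.tanh(bilinear + linear + b))

d, k = 100, 4                                       # one-hot dim (top-100 entities), slices
rng = np.random.default_rng(0)
params = dict(W=rng.normal(0, 0.1, (k, d, d)),
              V=rng.normal(0, 0.1, (k, 2 * d)),
              u=rng.normal(0, 0.1, k),
              b=np.zeros(k))

e_new = np.eye(d)[7]     # one-hot vector of a new entity from a test report
e_ref = np.eye(d)[42]    # reference entity for this stock (assumed pairing)
print("score:", ntn_score(e_new, e_ref, **params))
```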