• Title/Summary/Keyword: Sentence Analysis

A Comparative Study on Teaching Chinese and Korean Topic Sentences (주제문을 통한 한국학생의 중국어 학습지도 연구 - 중·한 주제문의 비교를 중심으로)

  • Choo, Chui-Lan
    • Cross-Cultural Studies
    • /
    • v.19
    • /
    • pp.389-409
    • /
    • 2010
  • Chinese is a topic-prominent language, so learners of Chinese need to understand its discourse functions. Most Korean students assume that Chinese sentences must follow the S-V-O order, and they therefore make recurring mistakes when using Chinese. Korean is quite similar to Chinese in its discourse functions, so this paper seeks a method of teaching Chinese topic sentences by comparing Chinese with Korean from the perspective of discourse function. When Korean students learn to use Korean topic sentences to explain the discourse functions of Chinese, they should stop making such mistakes. With this in mind, chapter 2 presents various topic sentences to show that the topic is central to Chinese sentences, which is why Chinese is called a topic-prominent language. Chapter 3 analyzes sentences produced by students and identifies the reasons for their errors: they assume Chinese must follow the S-V-O order and do not understand why some sentences appear in the order O-(S)-V or S-O-V. This shows that they do not know what a topic sentence is or how to form one. When asked to translate such sentences into Korean, they also produce Korean sentences in the Chinese S-V-O order. Therefore, having students translate into and speak Korean using topic sentences, develop a feel for Chinese topic sentences, and then produce Chinese topic sentences themselves is a natural and critical part of their training.

A Historical Consideration on the External Treatment theories and diseases for which medicine is efficacious (외치료법(外治療法)의 이론(理論)과 적응증(適應症)에 대한 사적(史的) 고찰(考察))

  • Moon, Woo-Sang;Lee, Byung-Wook;Ahn, Sang-Woo;Kim, Eun-Ha
    • Korean Journal of Oriental Medicine
    • /
    • v.10 no.2
    • /
    • pp.1-21
    • /
    • 2004
  • 1) Objective: External treatments have various curative effects and have long been used to treat a wide range of patients, but their sphere of application in present-day South Korea is limited. We therefore sought to identify their sphere of application and detailed methods in the classics of oriental medicine. 2) Methods: We researched the history of external treatment according to the following procedure. (1) Making a list of related terms: using existing technical books on external treatments, we compiled a list of terms connected with external treatments, including both technical and general terms. (2) Searching sentences: we searched for sentences containing terms related to external treatments. (3) Analyzing related sentences: we classified the retrieved sentences by disease. (4) Analyzing external treatment methods. 3) Conclusions: People have used external treatments to cure various diseases since ancient times. According to the 『Nei-Jing』, hot compress therapy, fumigation therapy, and bathing therapy were used to treat blockage syndrome, muscle disease, carbuncle, and cellulitis. Thereafter, the sphere of external treatment gradually enlarged. (1) Its sphere came to include dermatologic, psychologic, internal, ophthalmic, otolaryngologic, obstetric, gynecologic, pediatric, and surgical diseases. (2) External treatment methods have included hot compress therapy, fumigation therapy, bathing therapy, application therapy, medication bag therapy, medication plug therapy, medication massotherapy, aroma therapy, and so on. (3) Medication types for external treatment have included ointment, juice, infusion, powder, suppository, and so on.

Error Analysis: What Problems do Learners Face in the Production of the English Passive Voice?

  • Jung, Woo-Hyun
    • English Language & Literature Teaching
    • /
    • v.12 no.2
    • /
    • pp.19-40
    • /
    • 2006
  • This paper deals with a part-specific analysis of grammatical errors in the production of the English passive in writing. The purpose of the study is twofold: to explore common error types in forming the passive, and to provide plausible sources of the errors, with special attention to the role of the native language. To this end, this study obtained a large amount of data from Korean EFL university students using an essay writing task. The results show that in forming the passive sentence, errors were made in various ways and that the most common problem was the formation of the be-auxiliary, in particular the proper use of tense and S-V agreement. Another important finding was that the global errors found in this study were not necessarily those with the greatest frequency. Also corroborated was the general claim that many factors work together to account for errors: in many cases, interlingual and intralingual factors were shown to interact with each other to explain the passive errors made by Korean students. On the basis of the results, suggestions are made for effective and well-formed use of the passive sentence.

Automated Classification of Sentential Types in Korean with Morphological Analysis (형태소 분석을 통한 한국어 문장 유형 자동 분류)

  • Chung, Jin-Woo;Park, Jong-C.
    • Language and Information
    • /
    • v.13 no.2
    • /
    • pp.59-97
    • /
    • 2009
  • The type of a given sentence indicates the speaker's attitude towards the listener and is usually determined by its final endings and punctuation marks. However, some final endings are used in several types of sentences, which means that we cannot identify the sentential type by considering only the final endings and punctuation marks. In this paper, we propose methods of finding other linguistic clues for identifying the sentential type through morphological analysis. We also use these methods to implement a system that automatically classifies Korean sentences according to their sentential types.
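
The ambiguity described above can be sketched with a toy rule-based classifier. This is our own illustration, not the authors' system; the ending-to-type table is a deliberately simplified assumption, and it is exactly the cases it gets wrong that motivate the paper's deeper morphological analysis.

```python
# Toy sketch (not the authors' system): guessing Korean sentence types from
# final endings and punctuation. The ending-to-type table is a simplified
# assumption for illustration only.

ENDING_TYPES = {
    "다": "declarative",    # e.g. 간다 "(he) goes"
    "니": "interrogative",  # e.g. 가니? "are (you) going?"
    "까": "interrogative",  # e.g. 갈까? "shall (we) go?"
    "라": "imperative",     # e.g. 가라 "go!"
    "자": "propositive",    # e.g. 가자 "let's go"
    "구나": "exclamatory",  # e.g. 가는구나 "ah, (he) is going"
}

def classify(sentence: str) -> str:
    """Guess the sentential type from the final ending and punctuation."""
    body = sentence.rstrip("?!. ")
    # Punctuation is a strong but not decisive clue.
    if sentence.rstrip().endswith("?"):
        return "interrogative"
    # Try longer endings first so "구나" wins over a bare "나"-like match.
    for ending in sorted(ENDING_TYPES, key=len, reverse=True):
        if body.endswith(ending):
            return ENDING_TYPES[ending]
    return "unknown"

print(classify("학교에 가자"))   # propositive
print(classify("학교에 간다"))   # declarative
print(classify("학교에 가니?"))  # interrogative
```

A lookup like this fails precisely where the abstract says it must: an ending such as "-어" can close declarative, interrogative, or imperative sentences alike, so additional morphological clues are needed.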

Analysis on Sentence Error Types of Mathematical Problem Posing of Pre-Service Elementary Teachers (초등학교 예비교사들의 수학적 '문제 만들기'에 나타나는 문장의 오류 유형 분석)

  • Huh, Nan;Shin, Hocheol
    • Journal of the Korean School Mathematics Society
    • /
    • v.16 no.4
    • /
    • pp.797-820
    • /
    • 2013
  • This study analyzed the error patterns in the sentences of mathematical problems posed by 100 elementary pre-service teachers and discussed possible solutions. The results showed five error patterns in the problem-posing sentences: phonological, word, sentence, meaning, and notation error patterns, divided into fourteen specific patterns as follows. 1) Phonological error patterns consist of the 'ㄹ' addition error and the abbreviated-word error. 2) Word error patterns divide into inappropriate word usage and inadequate abbreviation, with four subgroups: case marker, word ending, inappropriate word usage, and inadequate abbreviation of an article or word. 3) Sentence error patterns take four forms: reference, ellipsis of a sentence component, word order, and incomplete sentence. 4) Meaning error patterns comprise logical contradiction and ambiguous meaning. 5) Notation error patterns take four forms: spacing, punctuation, Hangul orthography, and the rules for spelling foreign words in Korean. Solutions for these error patterns were then discussed: first, the differences between spoken and written language must be recognized; second, spoken expressions should be avoided in written contexts; third, classes should focus on learning basic sentence patterns; fourth, word meanings should be developed logically from what they denote; finally, the Korean spelling system must be learned. In addition, a new understanding of writing education for college students is necessary.

Maritime Safety Tribunal Ruling Analysis using SentenceBERT (SentenceBERT 모델을 활용한 해양안전심판 재결서 분석 방법에 대한 연구)

  • Bori Yoon;SeKil Park;Hyerim Bae;Sunghyun Sim
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.7
    • /
    • pp.843-856
    • /
    • 2023
  • The global surge in maritime traffic has resulted in an increased number of ship collisions, leading to significant economic, environmental, physical, and human damage. The causes of these maritime accidents are multifaceted, often arising from a combination of crew judgment errors, negligence, complexity of navigation routes, weather conditions, and technical deficiencies in the vessels. Given the intricate nuances and contextual information inherent in each incident, a methodology capable of deeply understanding the semantics and context of sentences is imperative. Accordingly, this study utilized the SentenceBERT model to analyze maritime safety tribunal decisions over the last 20 years in the Busan Sea area, which encapsulated data on ship collision incidents. The analysis revealed important keywords potentially responsible for these incidents. Cluster analysis based on the frequency of specific keyword appearances was conducted and visualized. This information can serve as foundational data for the preemptive identification of accident causes and the development of strategies for collision prevention and response.
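
The clustering step described above can be sketched as follows. This is our own minimal illustration, not the paper's pipeline: the tiny hand-made vectors stand in for embeddings that would in practice come from a SentenceBERT encoder (e.g. via the sentence-transformers library), and the greedy threshold grouping is one simple stand-in for the cluster analysis.

```python
# Sketch of a downstream clustering step over sentence embeddings.
# The vectors here are hypothetical stand-ins for SentenceBERT outputs.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def threshold_cluster(vectors, threshold=0.9):
    """Greedy clustering: join a vector to the first cluster whose
    representative (first member) is similar enough, else start a new one."""
    clusters = []  # list of lists of indices into `vectors`
    for i, v in enumerate(vectors):
        for cluster in clusters:
            if cosine(vectors[cluster[0]], v) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Hypothetical embeddings of four ruling sentences: the first two and the
# last two are near-duplicates in meaning, so they should pair up.
vecs = [[1.0, 0.1, 0.0], [0.98, 0.12, 0.01], [0.0, 1.0, 0.2], [0.01, 0.97, 0.22]]
print(threshold_cluster(vecs))  # -> [[0, 1], [2, 3]]
```

Keyword frequencies per cluster could then be tallied to surface the accident-cause terms the study reports.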

The Indefinite Description Analysis of Belief Ascription Sentences: A Trouble with the Analysis?

  • Sunwoo, Hwan
    • Lingua Humanitatis
    • /
    • v.2 no.2
    • /
    • pp.301-319
    • /
    • 2002
  • In a recent paper, I have proposed an analysis concerning propositions and 'that'-clauses as a solution to Kripke's puzzle and other similar puzzles, which I now call 'the Indefinite Description Analysis of Belief Ascription Sentences.' I have listed some of the major advantages of this analysis besides its merit as a solution to the puzzles: it is amenable to the direct-reference theory of proper names; it does not nevertheless need to introduce Russellian (singular) propositions or any other new entities. David Lewis has constructed an interesting argument to refute this analysis. His argument seems to show that my analysis has an unwelcome consequence: if someone believes any proposition, then he or she should, ipso facto, believe any necessary (mathematical or logical) proposition (such as the proposition that 1 succeeds 0). In this paper, I argue that Lewis's argument does not pose a real threat to my analysis. All his argument shows is that we should not accept the assumption called 'the equivalence thesis': if two sentences are equivalent, then they express the same proposition. I argue that this thesis is already in trouble for independent reasons. Especially, I argue that if we accept the equivalence thesis then, even without my analysis, we can derive a sentence like 'Fred believes that 1 succeeds 0 and snow is white' from a sentence like 'Fred believes that snow is white.' The consequence mentioned above is not worse than this consequence.

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments that consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value for business, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. As such, an aspect category referred to by one or more specific aspect terms is called an explicit aspect.
On the other hand, an aspect category like 'price', which has no specific aspect terms but can be indirectly inferred from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and use the word 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, while ACSC treats not only explicit aspects but also implicit aspects. This study seeks answers to the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the tokens for aspect categories than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair configuration? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, it was found that reflecting the output vector of the aspect category token is more effective than using only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence with the aspect category in the QA type is irrelevant to performance.
There may be some differences depending on the characteristics of the dataset, but when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing ACSC models used in this study could be similarly applied to other tasks such as ATSC.
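
The QA- and NLI-style sentence-pair configurations discussed above can be illustrated with a small sketch. The function name and the auxiliary-sentence templates below are our own assumptions for illustration, not the paper's code; they show how a review and an aspect become the two segments of a BERT-style `[CLS] sent1 [SEP] sent2 [SEP]` input, and how the sentence order can be swapped.

```python
# Illustrative sketch (names and templates are assumptions, not the paper's
# code): building QA- or NLI-style sentence pairs for aspect category
# sentiment classification with a BERT-style two-segment input.

def make_pair(review: str, aspect: str, style: str, aspect_first: bool = False) -> str:
    if style == "qa":
        # QA type: the auxiliary segment is a question about the aspect.
        aux = f"What do you think of the {aspect} of it?"
    elif style == "nli":
        # NLI type: the auxiliary segment is a pseudo-sentence naming the aspect.
        aux = aspect
    else:
        raise ValueError(f"unknown style: {style}")
    first, second = (aux, review) if aspect_first else (review, aux)
    return f"[CLS] {first} [SEP] {second} [SEP]"

review = "The restaurant is expensive but the food is really fantastic"
print(make_pair(review, "price", "qa"))
print(make_pair(review, "price", "nli", aspect_first=False))
```

In a real model the string would be tokenized with segment ids, and the classifier would read the [CLS] vector and/or the aspect-token vectors, as compared in the study.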

A study on Korean language processing using TF-IDF (TF-IDF를 활용한 한글 자연어 처리 연구)

  • Lee, Jong-Hwa;Lee, MoonBong;Kim, Jong-Weon
    • The Journal of Information Systems
    • /
    • v.28 no.3
    • /
    • pp.105-121
    • /
    • 2019
  • Purpose: One of the reasons for the expansion of information systems in the enterprise is the increased efficiency of data analysis, particularly of rapidly growing complex and unstructured data such as video, voice, images, and conversations in and out of social networks. The purpose of this study is to analyze customer needs from customer voices, i.e., text data, in the web environment. Design/methodology/approach: Previous studies have shown that extracting the words that interpret a sentence works better than simple frequency analysis. In this study, we applied the TF-IDF method, which extracts the important keywords of real sentences, rather than the TF method, which represents sentences by simple frequency only, to Korean language research. We visualized the two techniques with cluster analysis and describe the differences. Findings: With the TF and TF-IDF techniques applied to Korean natural language processing, the research showed the value of moving from frequency analysis to semantic analysis, and it is expected to change the techniques used by Korean language processing researchers.
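
The TF-IDF weighting contrasted with plain TF above can be sketched in a few lines. The toy documents and whitespace tokenization are our own illustration; a real Korean pipeline would first run a morphological analyzer to segment words properly.

```python
# Minimal TF-IDF sketch over pre-tokenized "customer voice" snippets
# (toy data; tokenization by a Korean morphological analyzer is assumed
# to have happened already).
import math
from collections import Counter

docs = [
    ["배송", "빠르다", "포장", "좋다"],    # "delivery fast, packaging good"
    ["배송", "느리다", "포장", "나쁘다"],  # "delivery slow, packaging bad"
    ["가격", "좋다", "배송", "빠르다"],    # "price good, delivery fast"
]

def tf_idf(docs):
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores

weights = tf_idf(docs)
# "배송" appears in every document, so its IDF (and weight) is zero,
# while rarer, more discriminative words get positive weights.
print(weights[0]["배송"])  # 0.0
```

This is exactly the contrast the abstract draws: plain TF would rank the ubiquitous "배송" highest, while TF-IDF suppresses it in favor of terms that distinguish one customer voice from another.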

Scaling Reuse Detection in the Web through Two-way Boosting with Signatures and LSH

  • Kim, Jong Wook
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.6
    • /
    • pp.735-745
    • /
    • 2013
  • The emergence of Web 2.0 technologies, such as blogs and wikis, enables even naive users to easily create and share content on the Web using freely available content sharing tools. Wide availability of almost free data and promiscuous sharing of content through social networking platforms have created a content-borrowing phenomenon, where the same content appears (in many cases in the form of extensive quotations) in different outlets. An immediate side effect of this phenomenon is that identifying which content is re-used by whom is becoming a critical tool in social network analysis, including expert identification and analysis of information flow. Internet-scale reuse detection, however, poses extremely challenging scalability issues: considering the large size of user-created data on the web, it is essential that techniques developed for content-reuse detection be fast and scalable. Thus, in this paper, we propose the $qSign_{lsh}$ algorithm, a mechanism for identifying multi-sentence content reuse among documents by efficiently combining sentence-level evidence. The experimental results show that $qSign_{lsh}$ significantly improves reuse detection speed and provides high recall.
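
The signature idea behind combining sentence-level evidence can be illustrated with a MinHash sketch. This is our own illustration of the general signature/LSH technique, not the paper's $qSign_{lsh}$ algorithm: similar sentences yield similar signatures, so candidate reused pairs can be bucketed cheaply before any exact comparison.

```python
# Our own MinHash illustration of signature-based reuse detection
# (not the paper's qSign_lsh algorithm).
import hashlib

def shingles(sentence: str, k: int = 3):
    """Overlapping k-word shingles of a sentence."""
    words = sentence.split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def minhash(sentence: str, num_hashes: int = 32):
    """Signature: for each seeded hash, the minimum hash over all shingles."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles(sentence)
        ))
    return sig

def estimated_jaccard(a: str, b: str) -> float:
    """Fraction of matching signature slots estimates shingle-set overlap."""
    sig_a, sig_b = minhash(a), minhash(b)
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

original = "the quick brown fox jumps over the lazy dog"
reused = "the quick brown fox jumps over a lazy dog"
unrelated = "sentence level reuse detection scales with signatures"
print(estimated_jaccard(original, reused) > estimated_jaccard(original, unrelated))  # True
```

An LSH layer would then band these signatures into buckets so that only documents sharing a bucket are compared exactly, which is what makes web-scale reuse detection tractable.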