• Title/Abstract/Keyword: a text linguistic approach


A Style-based Approach to Translating Literary Texts from Arabic into English

  • Almanna, Ali
    • 비교문화연구 / Vol. 32 / pp. 5-28 / 2013
  • In this paper, a style-based approach to translating literary texts is introduced and applied. The aim of the study is to work out a stylistic approach to translating literary texts from Arabic into English. The approach proposed here combines four major stylistic approaches, namely linguistic stylistics, literary stylistics, affective stylistics and cognitive stylistics. Data analysis shows that by adopting a style-based approach that draws on these four stylistic approaches, translators, as special text readers, can more readily derive a better understanding and appreciation of texts, in particular literary texts. Further, it is shown that stylistics as an approach is objective in that it draws evidence from the text to support the argument for the important stylistic features and their functions. However, it loses some of this objectivity where the analysis becomes reader-dependent and subjective.

"맹자" "호연지기 장"의 텍스트언어학적 접근 (A Text Linguistic Approach to the Chapter Hoyeonjigi of Mencius)

  • 이석규
    • 인문언어 / Vol. 5 / pp. 127-163 / 2003
  • This thesis analyzes the chapter "Hoyeonjigi (浩然之氣)" of Mencius (孟子) using text linguistics theory and Korean reading theory. In this process, macro-structure models #1-5 are presented according to van Dijk's macro-rules: Auslassen (deletion), Selektieren (selection), Generalisieren (generalization), and Konstruieren oder Integrieren (construction or integration). As a result, the analysis confirms the following: first, a macro-structure can be formed in several steps, varying with the type of text or the purpose of the analysis; second, macro-structure formation draws on various cognitive mechanisms of both the outer and the inner world; third, a text with profound symbolism can be represented as a two- or threefold symbolic structure. At the same time, the macro-structure enables a clearer analysis of the content of the chapter, verifying the following: first, Hoyeonjigi itself is the best means of developing the "Imperturbable Mind (不動心)"; second, benevolence-righteousness (仁義) and wisdom (智) can be reached by cultivating Hoyeonjigi; third, Mencius's own view of language is well expressed in "Jieon (知言)", which is not only a condition for the Imperturbable Mind but also reflects the Oriental view of language, focused on listening in terms of language usage rather than language analysis. Mencius's view of language is thus of a piece with the broader Oriental view of language.


조지 린드벡의 문화-언어의 종교이론 비평 (A Critical Evaluation of George Lindbeck's Cultural-Linguistic Theory of Religion)

  • 제해종
    • 한국콘텐츠학회논문지 / Vol. 14, No. 4 / pp. 456-466 / 2014
  • This study concerns Lindbeck's postliberalism, which understands religion as cultural-linguistic. Recognizing that neither the cognitive-propositional approach of traditional Christian theology nor the experiential-expressivist approach of liberalism offers a solution to postmodern religious phenomena, George Lindbeck proposes the cultural-linguistic approach as an alternative that overcomes both existing approaches. The first insight of Lindbeck's postliberalism is to understand religion as a culture or a language, on the grounds that people become accustomed to a religion in the way they learn a language. The second insight, derived from the first, is to understand doctrine as grammar. If religion and doctrine are understood in this way, the problem of conflict and collision between religions is naturally resolved, because each religion can be interpreted within its own system, just as languages are neither good nor bad, right nor wrong. Its important contributions are that this approach makes reconciliation between religions possible, emphasizes practice, and takes the Bible as the authoritative theological text. However, it places more weight on the church's interpretation than on the text itself, reduces truth to internal coherence, fosters both an extreme relativism that regards all religions as equally valid and an elitism that makes language essential to religious life, and, by asserting a theological eschatology, reverts to propositionalism. The researcher therefore concludes that Lindbeck's cultural-linguistic theory of religion is insufficient as an alternative for overcoming the limitations of theological conservatism and liberalism.

Compositional rules of Korean auxiliary predicates for sentiment analysis

  • Lee, Kong Joo
    • Journal of Advanced Marine Engineering and Technology / Vol. 37, No. 3 / pp. 291-299 / 2013
  • Most sentiment analysis systems count the occurrences of sentiment expressions in a text and evaluate the text by summing the polarity values of the extracted sentiment expressions. However, the linguistic contexts of these expressions should be taken into account in order to analyze the sentimental orientation of the text meticulously. Korean auxiliary predicates modify the meaning of the main verb or adjective to which they are attached. In this paper, we introduce a new approach that handles Korean auxiliary predicates in the light of sentiment analysis. We classify the auxiliary predicates according to the strength of their impact on sentiment polarity values, and we define compositional rules that update polarity values when the predicates appear along with sentiment expressions. This approach is implemented in a sentiment analysis system that extracts opinions about a specific individual from review documents collected from various web sites. Experimental results show approximately 72.6% precision and 52.7% recall in correctly detecting sentiment expressions in a text.
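
The compositional idea in this abstract can be sketched as follows. The rule table, the example auxiliary predicates, and the operations below are illustrative assumptions, not the paper's actual rule set.

```python
# Illustrative sketch (not the paper's rules): composing a base polarity with
# Korean auxiliary predicates that flip, strengthen, or weaken the sentiment
# of the main verb or adjective.

# Hypothetical rule table: auxiliary predicate -> (operation, factor)
AUX_RULES = {
    "-지 않다": ("negate", None),       # negation flips polarity
    "-어 버리다": ("intensify", 1.5),   # assumed to strengthen the expressed sentiment
    "-고 싶다": ("weaken", 0.5),        # assumed to weaken the asserted polarity
}

def compose_polarity(base: float, auxiliaries: list[str]) -> float:
    """Apply auxiliary-predicate rules left to right to a base polarity in [-1, 1]."""
    value = base
    for aux in auxiliaries:
        op, factor = AUX_RULES.get(aux, ("keep", None))
        if op == "negate":
            value = -value
        elif op == "intensify":
            value = max(-1.0, min(1.0, value * factor))
        elif op == "weaken":
            value = value * factor
    return value

# Example: a positive expression (+0.8) followed by negation becomes negative.
print(compose_polarity(0.8, ["-지 않다"]))  # -0.8
```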

Towards a Student-centred Approach to Translation Teaching

  • Almanna, Ali; Lazim, Hashim
    • 비교문화연구 / Vol. 36 / pp. 241-270 / 2014
  • The aim of this article is to review the traditional methodologies of teaching translation that concentrate on text typologies and, as an alternative, to propose an eclectic multi-componential approach involving a set of interdisciplinary skills, with a view to improving trainee translators' competences and skills. To this end, three approaches, namely a minimalist approach, a pre-transferring adjustment approach, and a revision vs. editing approach, are proposed to shift the focus of attention from teacher-centred approaches towards student-centred approaches. It is shown that translator training programmes need to focus on improving trainee translators' competences and skills, such as training them to produce different versions by themselves and to select among them with justified confidence as quickly as they can (minimalist approach), to adjust the original text semantically, syntactically and/or textually so that the source text accommodates itself smoothly within the linguistic system of the target language (pre-transferring adjustment), and to revise and edit others' translations. As the validity of the proposed approach relies partially on instructors' competences and skills in teaching translation, universities, particularly in the Arab world, need to invest in recruiting expert practitioners instead of depending mainly on bilingual teachers to teach translation.

Pragmatic Strategies of Self (Other) Presentation in Literary Texts: A Computational Approach

  • Khafaga, Ayman Farid
    • International Journal of Computer Science & Network Security / Vol. 22, No. 2 / pp. 223-231 / 2022
  • The application of computer software to the linguistic analysis of texts proves useful for arriving at concise and authentic results from large text data. Based on this assumption, this paper employs Computer-Aided Text Analysis (CATA) and Critical Discourse Analysis (CDA) to explore the manipulative strategies of positive/negative presentation in Orwell's Animal Farm. More specifically, the paper explores the extent to which CATA software, represented by the three variables of Frequency Distribution Analysis (FDA), Content Analysis (CA), and Key Word in Context (KWIC), combined with CDA, deciphers the manipulative purposes behind the positive presentation of selfness and the negative presentation of otherness in the selected corpus. The analysis covers some CDA strategies, including justification, false statistics, and competency, for positive self-presentation; and accusation, criticism, and the use of ambiguous words for negative other-presentation. With the application of CATA, selected words are analyzed by showing their frequency distribution as well as their contextual environment in the text, to expose the extent to which they are employed as strategies of positive/negative presentation in the text under investigation. Findings show that CATA software contributes significantly to the linguistic analysis of large text data. The paper recommends the use and application of different CATA software in stylistic and corpus linguistics studies.
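
Two of the CATA measures named in this abstract, frequency distribution and Key Word in Context, can be illustrated with a minimal sketch. The tokenizer, the context window size, and the sample sentence are simplifying assumptions, not the tooling used in the paper.

```python
# Minimal sketch of frequency distribution and KWIC concordancing.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def frequency_distribution(text: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Most frequent tokens and their counts."""
    return Counter(tokenize(text)).most_common(top_n)

def kwic(text: str, keyword: str, window: int = 4) -> list[str]:
    """Keyword-in-context lines: `window` tokens on each side of the keyword."""
    tokens = tokenize(text)
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{keyword}] {right}")
    return lines

sample = "All animals are equal, but some animals are more equal than others."
print(frequency_distribution(sample, 5))
print(kwic(sample, "equal"))
```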

법률정보시스템의 색인에 관한 연구 -특히 2차 법률정보를 중심으로- (A Study on the Index Model for Secondary Legal Information Databases)

  • 노정란
    • 한국비블리아학회지 / Vol. 8, No. 1 / pp. 117-134 / 1997
  • This study shows that, because of the characteristics of legal information, a quoted legal text functions as an index representing the contents of a document, so that automatic indexing in secondary legal full-text databases becomes possible without the assistance of experts. In the case of the enactment, amendment, or repeal of a law, index terms can be updated by revising the legal text quoted in the secondary legal full-text databases. Even when the full text of retrospective documents is not input, automatic indexing is still possible, and expert-knowledge and integrated databases can be established and used for retrospective documents. The study indicates that it is necessary to build into the system the characteristic information that information experts recognize, that is, the experiential and inherent knowledge only human beings possess, rather than approaching the information system in a purely linguistic, statistical, or structuralist way; such a system can be a more essential and intelligent information system.
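
The core idea, using quoted statutory text as an automatic index term, can be sketched roughly as below. The citation pattern and the sample sentence are assumptions for illustration only, not the study's actual indexing rules.

```python
# Illustrative sketch: treat statute citations quoted inside a secondary legal
# document as index terms, so indexing needs no manual expert assignment.
import re

# Hypothetical pattern for citations such as "민법 제839조의2" (Civil Act, Article 839-2).
CITATION_PATTERN = re.compile(r"([가-힣]+법)\s*(제\d+조(?:의\d+)?)")

def extract_index_terms(document: str) -> list[str]:
    """Collect quoted statute citations and return them as index terms."""
    return sorted({f"{law} {article}" for law, article in CITATION_PATTERN.findall(document)})

commentary = "이 판례 평석은 민법 제839조의2 및 가사소송법 제2조를 인용한다."
print(extract_index_terms(commentary))  # ['가사소송법 제2조', '민법 제839조의2']
```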


A Hybrid Approach for the Morpho-Lexical Disambiguation of Arabic

  • Bousmaha, Kheira Zineb; Rahmouni, Mustapha Kamel; Kouninef, Belkacem; Hadrich, Lamia Belguith
    • Journal of Information Processing Systems / Vol. 12, No. 3 / pp. 358-380 / 2016
  • In order to considerably reduce the ambiguity rate, we propose in this article a disambiguation approach based on the selection of the right diacritics at different analysis levels. This hybrid approach combines a linguistic approach with a multi-criteria decision-making approach and can be considered an alternative way of solving the morpho-lexical ambiguity problem regardless of the diacritics rate of the processed text. For its evaluation, we applied the disambiguation to the online Alkhalil morphological analyzer (the proposed approach can be used with any morphological analyzer of the Arabic language) and obtained encouraging results, with an F-measure of more than 80%.
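
The multi-criteria decision step mentioned in this abstract can be pictured as scoring each candidate diacritized analysis against weighted criteria and keeping the best one. The criteria names, weights, and candidate analyses below are illustrative assumptions, not those of the paper.

```python
# Assumed sketch of multi-criteria selection among candidate diacritized analyses.
CRITERIA_WEIGHTS = {"morphological_fit": 0.5, "lexical_frequency": 0.3, "context_agreement": 0.2}

def score(scores: dict) -> float:
    """Weighted sum of per-criterion scores in [0, 1]."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)

# Candidate analyses for one ambiguous surface form (hypothetical scores).
analyses = [
    ("كَتَبَ", {"morphological_fit": 0.9, "lexical_frequency": 0.7, "context_agreement": 0.8}),
    ("كُتُبٌ", {"morphological_fit": 0.6, "lexical_frequency": 0.8, "context_agreement": 0.3}),
]

best_form, _ = max(analyses, key=lambda pair: score(pair[1]))
print(best_form)  # كَتَبَ
```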

계량적 접근에 의한 조선시대 필사본 조리서의 유사성 분석 (A Quantitative Approach to a Similarity Analysis on the Culinary Manuscripts in the Chosun Periods)

  • 이기황; 이재윤; 백두현
    • 한국언어정보학회지:언어와정보 / Vol. 14, No. 2 / pp. 131-157 / 2010
  • This article reports an attempt to perform a similarity analysis on a collection of 25 culinary manuscripts from the Chosun period using a set of quantitative text analysis methods. Historical culinary texts are valuable resources for linguistic, historical, and cultural studies. We regard the similarity of two texts as the distributional similarity of their functional components. In the case of culinary texts, text elements such as food names, cooking methods, and ingredients are regarded as functional components. We derive the similarity information from the distributional characteristics of two key functional components, cooking methods and ingredients. The results are also quantified and visualized to achieve a better understanding of the properties of the individual texts and of the collection as a whole.
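
The distributional-similarity idea in this abstract can be sketched by representing each manuscript as frequency counts of one functional component (here, ingredients) and comparing manuscripts with cosine similarity. The ingredient counts are invented examples, not data from the 25 manuscripts, and cosine similarity is one plausible choice of measure.

```python
# Minimal sketch: manuscripts as ingredient-frequency vectors, compared by cosine similarity.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

manuscript_a = Counter({"soy sauce": 12, "ginger": 5, "beef": 3, "honey": 2})
manuscript_b = Counter({"soy sauce": 9, "ginger": 2, "chicken": 4, "honey": 1})
print(round(cosine(manuscript_a, manuscript_b), 3))
```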


Building Hybrid Stop-Words Technique with Normalization for Pre-Processing Arabic Text

  • Atwan, Jaffar
    • International Journal of Computer Science & Network Security / Vol. 22, No. 7 / pp. 65-74 / 2022
  • In natural language processing, commonly used words such as prepositions are referred to as stop-words; they have no inherent meaning and are therefore ignored in indexing and retrieval tasks. The removal of stop-words from Arabic text has a significant impact in terms of reducing the size of a corpus, which leads to an improvement in the effectiveness and performance of Arabic-language processing systems. This study investigated the effectiveness of applying stop-word list elimination with normalization as a preprocessing step. The idea was to merge a statistical method with a linguistic method to attain the best efficacy, and to compare the effects of this two-pronged approach in reducing corpus size for Arabic natural language processing systems. Three stop-word lists were considered: an Arabic Text Lookup Stop-list, a Frequency-based Stop-list using Zipf's law, and a Combined Stop-list. An experiment was conducted using a selected file from the Arabic Newswire data set, in which the size of the corpus was compared after removing the words contained in each list. The results showed that the best reduction in size was achieved by using the Combined Stop-list with normalization, with a word-count reduction of 452,930 and a compression rate of 30%.
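
The frequency-based and combined stop-list idea in this abstract can be sketched as taking the most frequent tokens as stop-words (a rough Zipf-style cut-off), merging them with a lookup list, and measuring the word-count compression rate after removal. The cut-off, the tiny lookup list, and the sample sentence are illustrative assumptions, not the study's actual lists or data.

```python
# Rough sketch: frequency-based stop-list + lookup list, and the resulting compression rate.
from collections import Counter

LOOKUP_STOPLIST = {"في", "من", "على", "إلى"}  # small assumed Arabic lookup stop-list

def frequency_stoplist(tokens: list[str], top_k: int = 3) -> set[str]:
    """Most frequent tokens stand in for a Zipf-law frequency-based stop-list."""
    return {word for word, _ in Counter(tokens).most_common(top_k)}

def compression_rate(tokens: list[str], stoplist: set[str]) -> float:
    """Fraction of the word count removed by stop-word elimination."""
    kept = [t for t in tokens if t not in stoplist]
    return 1 - len(kept) / len(tokens)

tokens = "ذهب الولد إلى المدرسة في الصباح و عاد من المدرسة في المساء".split()
combined = LOOKUP_STOPLIST | frequency_stoplist(tokens)
print(f"compression rate: {compression_rate(tokens, combined):.0%}")
```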