• Title/Summary/Keyword: source text

Search Results: 268

A new approach technique on Speech-to-Speech Translation (Audio context recognition using the reconstructed phase space of signals)

  • Le, Thanh Hien;Lee, Sung-young;Lee, Young-Koo
    • Proceedings of the Korea Information Processing Society Conference / 2009.11a / pp.239-240 / 2009
  • We live in a flat world in which globalization fosters communication, travel, and trade among more than 150 countries and thousands of languages. To surmount the barriers among these languages, translation is required; speech-to-speech translation automates that process. Thanks to recent advances in Automatic Speech Recognition (ASR), Machine Translation (MT), and Text-to-Speech (TTS), one can now use a system to translate speech in a source language into speech in a target language, and vice versa, in an affordable manner. The three-phase process requires that the source speech first be transcribed into text in the source language (ASR), after which the source text is translated into target-language text (MT). Finally, the target speech is synthesized from the target text (TTS).
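
The three-phase pipeline described in the abstract above can be illustrated with a minimal sketch. The component functions below are hypothetical placeholder stubs, not the API of any particular ASR, MT, or TTS toolkit.

```python
# Minimal sketch of the three-phase pipeline (ASR -> MT -> TTS) described above.
# The three components are hypothetical stubs, not a specific toolkit's API.

def asr(source_audio: bytes, source_lang: str) -> str:
    """Phase 1: transcribe source speech into source-language text."""
    # A real system would run acoustic and language models here.
    return "hello world"            # placeholder transcript

def mt(source_text: str, source_lang: str, target_lang: str) -> str:
    """Phase 2: translate source-language text into target-language text."""
    # A real system would call a statistical or neural MT model here.
    return "annyeonghaseyo"         # placeholder translation

def tts(target_text: str, target_lang: str) -> bytes:
    """Phase 3: synthesize target speech from the target text."""
    # A real system would run a speech synthesizer here.
    return target_text.encode()    # placeholder waveform

def speech_to_speech(source_audio: bytes, source_lang: str, target_lang: str) -> bytes:
    transcript = asr(source_audio, source_lang)
    translation = mt(transcript, source_lang, target_lang)
    return tts(translation, target_lang)

if __name__ == "__main__":
    target_audio = speech_to_speech(b"\x00\x01", source_lang="en", target_lang="ko")
    print(len(target_audio), "bytes of synthesized speech")
```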

Perspective Coherence in Simultaneous Interpreting - with Reference to German-Korean Interpreting - (Simultaneous Interpreting and Perspective Coherence - Focusing on German-Korean Interpreting -)

  • Ahn In-Kyoung
    • Koreanische Zeitschrift für Deutsche Sprachwissenschaft / v.9 / pp.169-193 / 2004
  • In simultaneous interpreting, if the syntactic structures of the source language and the target language are very different, interpreters have to wait before they can reformulate the source text segments into a meaningful utterance in the target language. It is inevitable to adapt the target-language structure to that of the source language so as not to unduly increase memory load and to minimize pauses. While such adaptation makes simultaneous interpreting possible, it damages the perspective coherence of the text. Discovering when such perspective coherence is impaired, and how the problem can be relieved, will enable interpreters to enhance their performance. This paper analyses the reasons for damaged perspective coherence by examining examples of German-Korean simultaneous interpreting.


Improving Elasticsearch for Chinese, Japanese, and Korean Text Search through Language Detector

  • Kim, Ki-Ju;Cho, Young-Bok
    • Journal of information and communication convergence engineering / v.18 no.1 / pp.33-38 / 2020
  • Elasticsearch is an open source search and analytics engine that can search petabytes of data in near real time. It is designed as a horizontally scalable, highly available distributed system. It provides RESTful APIs, thereby making it programming-language agnostic. Full-text search of multilingual text requires language-specific analyzers and field mappings appropriate for indexing and searching such text. Additionally, a language detector can be used in conjunction with the analyzers to improve multilingual text search. Elasticsearch provides more than 40 language analysis plugins that can process text and extract language-specific tokens, as well as language detector plugins that can determine the language of a given text. This study investigates three different approaches to indexing and searching Chinese, Japanese, and Korean (CJK) text (single analyzer, multi-fields, and language detector-based), and identifies the advantages of the language detector-based approach over the other two.
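
As a rough illustration of the multi-fields approach mentioned in the abstract above, the sketch below indexes one content field under three CJK-specific analyzers and queries all three sub-fields at once. It assumes an 8.x elasticsearch-py client and the analysis-nori, analysis-kuromoji, and analysis-smartcn plugins; the index and field names are illustrative, not taken from the paper.

```python
# Multi-fields sketch: one "content" field indexed under Korean, Japanese, and
# Chinese analyzers. Assumes elasticsearch-py 8.x and the nori/kuromoji/smartcn
# analysis plugins are installed on the cluster.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="cjk_docs",
    mappings={
        "properties": {
            "content": {
                "type": "text",
                "fields": {
                    "ko": {"type": "text", "analyzer": "nori"},      # Korean
                    "ja": {"type": "text", "analyzer": "kuromoji"},  # Japanese
                    "zh": {"type": "text", "analyzer": "smartcn"},   # Chinese
                },
            }
        }
    },
)

es.index(index="cjk_docs", document={"content": "검색 엔진 최적화"}, refresh=True)

# Search the same query against every language-specific sub-field.
hits = es.search(
    index="cjk_docs",
    query={"multi_match": {"query": "검색",
                           "fields": ["content.ko", "content.ja", "content.zh"]}},
)
print(hits["hits"]["total"])
```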

Social Media Fake News in India

  • Al-Zaman, Md. Sayeed
    • Asian Journal for Public Opinion Research / v.9 no.1 / pp.25-47 / 2021
  • This study analyzes 419 fake news items published in India, a fake-news-prone country, to identify the major themes, content types, and sources of social media fake news. The results show that fake news shared on social media has six major themes: health, religion, politics, crime, entertainment, and miscellaneous; eight types of content: text, photo, audio, video, text & photo, text & video, photo & video, and text & photo & video; and two main sources: online sources and the mainstream media. Health-related fake news is more common only during a health crisis, whereas fake news related to religion and politics, mostly emerging from online media, seems prevalent more generally. Text & photo and text & video account for three-fourths of the total share of fake news, and most of these come from online media; online media is thus the main source of fake news on social media as well. On the other hand, mainstream media mostly produces political fake news. This study, presenting some novel findings that may help researchers understand and policymakers control fake news on social media, invites more academic investigation of religious and political fake news in India. Two important limitations of this study relate to the data source and the data collection period, which may have an impact on the results.

Multi-layered attentional peephole convolutional LSTM for abstractive text summarization

  • Rahman, Md. Motiur;Siddiqui, Fazlul Hasan
    • ETRI Journal / v.43 no.2 / pp.288-298 / 2021
  • Abstractive text summarization is the process of making a summary of a given text by paraphrasing the facts of the text while keeping the meaning intact. Manual summary generation is laborious and time-consuming. We present here a summary generation model based on a multi-layered attentional peephole convolutional long short-term memory (MAPCoL) network that extracts abstractive summaries of large texts in an automated manner. We added an attention mechanism to a peephole convolutional LSTM to improve the overall quality of a summary by giving weights to important parts of the source text during training. We evaluated the semantic coherence of our MAPCoL model on the popular CNN/Daily Mail dataset and found that MAPCoL outperformed other traditional LSTM-based models. We also found improvements in the performance of MAPCoL under different internal settings when compared with state-of-the-art models of abstractive text summarization.
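
The attention idea mentioned in the abstract above, scoring each source-text (encoder) state against the current decoder state and turning the scores into weights on important parts of the source, can be sketched in a few lines. This is a generic additive-attention illustration with made-up shapes and random values, not the authors' MAPCoL implementation.

```python
# Minimal numpy sketch of attention over source-text encoder states: score each
# encoder state against the decoder state, softmax the scores into weights, and
# form a weighted context vector.
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                              # source length, hidden size (toy values)
enc_states = rng.normal(size=(T, d))     # encoder hidden states h_1..h_T
dec_state = rng.normal(size=(d,))        # current decoder hidden state s_t

W_h = rng.normal(size=(d, d))            # projection of encoder states
W_s = rng.normal(size=(d, d))            # projection of decoder state
v = rng.normal(size=(d,))                # scoring vector

scores = np.tanh(enc_states @ W_h + dec_state @ W_s) @ v   # one score per source position
weights = np.exp(scores - scores.max())
weights /= weights.sum()                                    # attention distribution over the source
context = weights @ enc_states                              # weighted summary of the source

print("attention weights:", np.round(weights, 3))
print("context vector shape:", context.shape)
```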

Text-Mining of Online Discourse to Characterize the Nature of Pain in Low Back Pain

  • Ryu, Young Uk
    • Journal of the Korean Society of Physical Medicine / v.14 no.3 / pp.55-62 / 2019
  • PURPOSE: Text-mining has been shown to be useful for understanding the clinical characteristics and patients' concerns regarding a specific disease. Low back pain (LBP) is the most common disease in modern society and has a wide variety of causes and symptoms. On the other hand, it is difficult to understand the clinical characteristics as well as the needs and demands of patients with LBP because of these varied characteristics. This study examined online texts on LBP to determine whether text-mining can help in better understanding the general characteristics of LBP and its specific elements. METHODS: Online data from www.spine-health.com were used for text-mining. Keyword frequency analysis was first performed on the complete text of the postings (full-text analysis). Then only the sentences containing the highest-frequency word, pain, were selected, and the texts containing those sentences were used to re-analyze keyword frequency (pain-text analysis). RESULTS: Keyword frequency analysis showed that pain is of utmost concern. The full-text analysis was dominated by structural, pathological, and therapeutic words, whereas the pain-text analysis was related mainly to the location and quality of the pain. CONCLUSION: The present study indicated that text-mining for a specific element (keyword) of a particular disease can enhance the understanding of that specific aspect of the disease. This also suggests that the text source must be considered when interpreting the results. Clinically, the present results suggest that clinicians should pay more attention to the pain a patient is experiencing and provide information based on medical knowledge.
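
The two-step keyword frequency analysis described in the METHODS above can be sketched as follows: count words over all postings (full-text analysis), then re-count only within sentences containing the most frequent word (pain-text analysis). The toy postings and stop-word list are illustrative, not the study's data or code.

```python
# Minimal sketch of full-text vs. pain-text keyword frequency analysis.
import re
from collections import Counter

postings = [
    "Sharp pain in my lower back after lifting. The pain radiates to my leg.",
    "My MRI showed a herniated disc. Physical therapy helped the stiffness.",
    "Dull pain every morning; stretching reduces the pain for a few hours.",
]
stop_words = {"the", "a", "my", "in", "to", "for", "of", "and", "after", "every", "few"}

def tokens(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop_words]

# Full-text analysis: keyword frequencies over the complete postings.
full_counts = Counter(w for p in postings for w in tokens(p))
top_word, _ = full_counts.most_common(1)[0]          # "pain" in this toy data

# Pain-text analysis: keep only sentences mentioning the top word, then re-count.
pain_sentences = [s for p in postings for s in re.split(r"[.;]", p)
                  if top_word in s.lower()]
pain_counts = Counter(w for s in pain_sentences for w in tokens(s))

print("top word:", top_word)
print("full-text keywords:", full_counts.most_common(5))
print("pain-text keywords:", pain_counts.most_common(5))
```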

Encoding of a run-length source using recursive indexing (Recursive index coding of a run-length source)

  • 서재준;나상신
    • Journal of the Korean Institute of Telematics and Electronics A / v.33A no.7 / pp.23-33 / 1996
  • This paper deals with the design of a recursively-indexed binary code for facsimile sources and its performance. The sources used here are run-lengths of white pixels from higher-resolution facsimile. The modified Huffman code used for G.3 facsimile is chosen for the performance comparison. Experiments confirm that recursive indexing preserves the entropy of a memoryless geometric source: the entropy of a recursively-indexed physical source with a roughly geometric distribution remains within 2% of the empirical source entropy. The performance of the designed recursively-indexed binary codes applied to text-type documents and to graphics-type documents is compared with that of the modified Huffman code. Numerical results show that the modified Huffman code performs well for text-type documents but not equally well for graphics-type documents. On the other hand, the recursively-indexed binary codes show better performance for graphics-type documents, whose distributions are similar to a geometric distribution. Specifically, the code rates of the recursively-indexed binary codes with 60 codewords are smaller than that of the modified Huffman code with 91 codewords by 8% to 20% of the empirical source entropy.
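
Recursive indexing, as used in the abstract above, maps an unbounded run-length onto a small fixed alphabet by emitting the largest index repeatedly and then a remainder. The sketch below illustrates the idea with a toy alphabet size; it is not the code design or data from the paper.

```python
# Minimal sketch of recursive indexing for run-lengths: a run length n is mapped
# to q copies of the largest index (S-1) followed by the remainder r, where
# n = q*(S-1) + r. The alphabet size S and the toy run-lengths are illustrative.

def recursive_index(n: int, size: int) -> list[int]:
    """Map a nonnegative run-length onto indices 0..size-1."""
    escape = size - 1
    q, r = divmod(n, escape)
    return [escape] * q + [r]

def recursive_deindex(indices: list[int], size: int) -> int:
    """Inverse mapping: sum escape symbols and the final remainder."""
    escape = size - 1
    n = 0
    for i in indices:
        n += i
        if i != escape:
            break
    return n

if __name__ == "__main__":
    S = 8                               # toy alphabet of 8 indices (0..7)
    for run in [3, 7, 15, 40]:
        idx = recursive_index(run, S)
        assert recursive_deindex(idx, S) == run
        print(run, "->", idx)
```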


The Sequence Labeling Approach for Text Alignment of Plagiarism Detection

  • Kong, Leilei;Han, Zhongyuan;Qi, Haoliang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.9 / pp.4814-4832 / 2019
  • Plagiarism detection increasingly exploits text alignment. Text alignment involves extracting the plagiarized passages from a pair consisting of a suspicious document and its source document. Heuristics have achieved excellent performance in text alignment. However, further improvement of heuristic methods depends mainly on the experience of experts, which limits their capacity for continuous improvement. To address this problem, machine learning may be a suitable approach. Considering the positional relations and the context of pairs of text segments, we formalize the text alignment task as a sequence labeling problem, improving current methods at the model level. In particular, this paper proposes to use a probabilistic graphical model to tag the observed sequence of text-segment pairs. Hence we present a sequence labeling approach for text alignment in plagiarism detection based on Conditional Random Fields. The proposed approach is evaluated on the PAN@CLEF 2012 artificial high obfuscation plagiarism corpus and the simulated paraphrase plagiarism corpus, and compared with the methods that achieved the best performance at PAN@CLEF 2012, 2013, and 2014. Experimental results demonstrate that the proposed approach significantly outperforms the state-of-the-art methods.
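
To make the formulation concrete, the sketch below casts one document pair as a sequence of (suspicious segment, source segment) pairs and tags it with a linear-chain CRF. It assumes the sklearn-crfsuite package; the feature function and the toy labels ("plag"/"none") are illustrative and are not the paper's feature set or corpus.

```python
# Minimal sketch of text alignment as sequence labeling with a linear-chain CRF.
import sklearn_crfsuite

def pair_features(susp_seg: str, src_seg: str) -> dict:
    """Features for one (suspicious segment, source segment) pair."""
    susp, src = set(susp_seg.lower().split()), set(src_seg.lower().split())
    overlap = len(susp & src) / max(len(susp | src), 1)
    return {"jaccard": overlap, "len_diff": abs(len(susp) - len(src))}

# One "sequence" = the ordered segment pairs from one suspicious/source document pair.
X_train = [[pair_features("the cat sat", "a cat sat down"),
            pair_features("it rained all day", "stock prices fell")]]
y_train = [["plag", "none"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```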

A Study on Information Resource Evaluation for Text Categorization (A Study on the Evaluation of Information Resources for Improving the Efficiency of Text Categorization)

  • Chung, Eun-Kyung
    • Journal of the Korean Society for Information Management / v.24 no.4 / pp.305-321 / 2007
  • The purpose of this study is to examine whether the information resources referenced by human indexers during the indexing process are effective for text categorization. More specifically, information resources from bibliographic information as well as from the full text were explored in the context of a typical scientific journal article data set. The experimental results showed that information resources such as citations, source titles, and titles were not significantly different from the full text, whereas keywords were found to be significantly different from the full text. The findings of this study indicate that the information resources referenced by human indexers can be considered good candidates for text categorization and automatic subject term assignment.
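
As a rough illustration of comparing information resources as categorization features, the sketch below trains the same bag-of-words classifier once on titles and once on full text. The toy records, labels, and scikit-learn pipeline are illustrative and do not reproduce the study's data or evaluation protocol.

```python
# Minimal sketch: compare feature sources (title vs. full text) for text categorization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

records = [
    {"title": "protein folding simulation", "full_text": "molecular dynamics of protein folding pathways", "label": "biology"},
    {"title": "galaxy cluster survey", "full_text": "redshift measurements of distant galaxy clusters", "label": "astronomy"},
    {"title": "enzyme kinetics model", "full_text": "rate equations for enzyme catalysed reactions", "label": "biology"},
    {"title": "dark matter halo", "full_text": "n-body simulation of dark matter halo formation", "label": "astronomy"},
]
labels = [r["label"] for r in records]

for source in ("title", "full_text"):          # each information resource in turn
    texts = [r[source] for r in records]
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    acc = model.score(texts, labels)           # training accuracy on the toy data
    print(f"{source}: training accuracy = {acc:.2f}")
```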

An Enhanced Text Mining Approach using Ensemble Algorithm for Detecting Cyber Bullying

  • Z.Sunitha Bai;Sreelatha Malempati
    • International Journal of Computer Science & Network Security / v.23 no.5 / pp.1-6 / 2023
  • Text mining (TM), also referred to as text classification, is widely used to process unstructured text documents and the data present in various domains. It is popular in applications such as movie reviews, product reviews on e-commerce websites, sentiment analysis, topic modeling, and detecting cyberbullying in social media messages. Cyberbullying is the abuse of someone with insulting language; personal abuse, sexual harassment, and other forms of abuse fall under cyberbullying. Several existing systems detect bullying words based on their context in social networking sites (SNS), which have become a platform for bullying. In this paper, an enhanced text mining approach using an ensemble algorithm (ETMA) is developed to solve several problems of traditional algorithms and to improve accuracy, processing time, and the quality of results. ETMA is used to analyze bullying text within SNS such as Facebook and Twitter. ETMA is applied to a synthetic dataset collected from various data sources, consisting of 5k bullying and non-bullying messages. The performance is analyzed in terms of precision, recall, F1-score, and accuracy.
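
The ensemble idea named in the abstract above, combining several base learners over text features, can be sketched with a majority-vote classifier on TF-IDF features. The toy messages and the particular base learners are assumptions for illustration; they are not the paper's ETMA algorithm or dataset.

```python
# Minimal sketch of an ensemble text classifier for bullying vs. non-bullying
# messages: TF-IDF features feeding a hard-voting ensemble of three learners.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

messages = ["you are so stupid", "great game last night",
            "nobody likes you loser", "see you at the meeting tomorrow"]
labels = ["bullying", "non-bullying", "bullying", "non-bullying"]

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("nb", MultinomialNB()),
                    ("lr", LogisticRegression(max_iter=1000)),
                    ("dt", DecisionTreeClassifier())],
        voting="hard",                  # majority vote over the three learners
    ),
)
ensemble.fit(messages, labels)
print(ensemble.predict(["you are a loser", "meeting moved to friday"]))
```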