• Title/Summary/Keyword: Language-based

Search results: 6,226

Telecommunication Services Based On Spoken Language Information Technology - In view of services provided by KT - (음성정보기술을 이용한 통신서비스 - KT 서비스를 중심으로 -)

  • Koo, Myoung-Wan;Kim, Jae-In;Jeong, Yeong-Jun;Kim, Mun-Sik;Kim, Won-U;Kim, Hak-Hun;Park, Seong-Jun;Ryu, Chang-Seon;Kim, Hui-Gyeong
    • Proceedings of the KSPS conference / 2004.05a / pp.125-130 / 2004
  • In this paper, we explain telecommunication services based on spoken language information technology. There are three different kinds of services. The first is based on the Advanced Intelligent Network (AIN): we built an Intelligent Peripheral (IP) with speech recognition, speech synthesis, and a VoiceXML interpreter. The second is based on KT-HUVOIS, a proprietary speech platform built on VoiceXML. The third is based on a VoiceXML interpreter. We explain the various services running on these platforms in detail.
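As a rough illustration of the kind of form-filling dialog a VoiceXML interpreter drives on such a platform, here is a minimal Python sketch; the `tts` and `asr` functions are placeholders standing in for real speech synthesis and recognition engines, and none of this is taken from the paper or KT's platform.

```python
# A toy sketch of a form-filling voice dialog: prompt the caller via TTS,
# take a recognition result via ASR, and branch on it.
# tts() and asr() are placeholders, not KT's platform APIs.
def tts(text: str) -> None:
    """Stand-in for a speech synthesis engine."""
    print(f"[TTS] {text}")

def asr() -> str:
    """Stand-in for a speech recognition engine (typed input here)."""
    return input("[ASR] ").strip().lower()

def weather_service() -> None:
    tts("Which city would you like the weather for?")
    city = asr()
    if not city:
        tts("Sorry, I did not catch that. Goodbye.")
        return
    # A deployed service would query a backend here; this sketch just confirms.
    tts(f"Getting the weather for {city}. Thank you for calling.")

if __name__ == "__main__":
    weather_service()
```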


Design Strategies for Web-Based Self-Directed Cooperative Language Learning Communities (상호자율언어학습을 위한 웹기반 학습공동체의 설계전략 연구)

  • Park, Jung-Hwan;Lee, Kun-In;Zhao, Hai-Lan
    • English Language & Literature Teaching / v.10 no.1 / pp.127-152 / 2004
  • The purpose of this study is to elaborate design strategies for a Web-based self-directed cooperative distance language learning community. Research was done regarding the theoretical foundations for self-directed cooperative language learning and Web-based learning communities. The components of a Web-based community for a self-directed cooperative language learning system are also investigated. As a result of this study, design strategies for Web-based communities are suggested. There are performance and supporting environments (synchronous/asynchronous) for self-directed cooperative language learning. There are also cultural experiences and communication factors in the performance field. Furthermore, matching communicators, finding and offering information, language learning content, and other supporting agents are important in the supporting environment.


A Status Quo Study of Using Computer Technology for Language Testing (언어평가에 대한 컴퓨터 기술의 활용방안)

  • 이영식
    • Korean Journal of English Language and Linguistics / v.3 no.4 / pp.571-588 / 2003
  • The purpose of this study is to investigate the various ways that computer technology is used for language testing. Three uses of computer technology are mentioned: 1) computer-adaptive language testing and computer-based language testing, 2) the scoring of performance-based language assessment, and 3) the development and use of psychometric tools for analyzing scoring results. Although these uses of computer technology could expand the possibilities for language test development, developers should be reminded that such uses are still the subject of in-depth research needed to support their validity. In this regard, the advantages and limitations of some uses of computer technology for language testing are discussed.
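For a concrete sense of what computer-adaptive testing involves, the sketch below shows one item-selection step under a Rasch (1PL) item response model. The abstract does not specify a particular psychometric model, so the model choice, item names, and difficulty values here are illustrative assumptions.

```python
# One computer-adaptive testing step under a Rasch (1PL) IRT model.
# Model choice, item names, and difficulties are illustrative assumptions.
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def pick_next_item(ability: float, item_bank: dict) -> str:
    """Select the item whose difficulty is closest to the current ability
    estimate, i.e. the most informative item under the 1PL model."""
    return min(item_bank, key=lambda item: abs(item_bank[item] - ability))

item_bank = {"item_A": -1.0, "item_B": 0.0, "item_C": 1.2}  # difficulties in logits
theta = 0.4                                                 # current ability estimate
chosen = pick_next_item(theta, item_bank)
print(chosen, round(p_correct(theta, item_bank[chosen]), 2))
```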


Language Modeling Approaches to Information Retrieval

  • Banerjee, Protima;Han, Hyo-Il
    • Journal of Computing Science and Engineering / v.3 no.3 / pp.143-164 / 2009
  • This article surveys recent research in the area of language modeling (sometimes called statistical language modeling) approaches to information retrieval. Language modeling is a formal probabilistic retrieval framework with roots in speech recognition and natural language processing. The underlying assumption of language modeling is that human language generation is a random process; the goal is to model that process via a generative statistical model. In this article, we discuss current research in the application of language modeling to information retrieval, the role of semantics in the language modeling framework, cluster-based language models, use of language modeling for XML retrieval and future trends.
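A standard instantiation of this generative framework is the query-likelihood model with Dirichlet-prior smoothing: each document is treated as a sample from a document language model, and documents are ranked by the probability of generating the query. The sketch below is an illustrative implementation, not code from the survey; `mu` is the Dirichlet smoothing parameter.

```python
# Query-likelihood scoring with Dirichlet-prior smoothing, an illustrative
# instance of the language-modeling approach to information retrieval.
import math
from collections import Counter

def query_likelihood(query_terms, doc_terms, collection_counts, collection_len, mu=2000.0):
    """Return log P(query | document model) under Dirichlet smoothing."""
    doc_counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    log_score = 0.0
    for term in query_terms:
        p_coll = collection_counts.get(term, 0) / collection_len  # background P(term | collection)
        if p_coll == 0.0:
            continue  # term unseen anywhere in the collection; skip it here
        p_smoothed = (doc_counts.get(term, 0) + mu * p_coll) / (doc_len + mu)
        log_score += math.log(p_smoothed)
    return log_score

# Toy usage: rank two tiny "documents" for the query "language model".
collection = ["language", "model", "retrieval", "speech", "language"]
coll_counts = Counter(collection)
doc_a = ["language", "model", "retrieval"]
doc_b = ["speech", "retrieval"]
query = ["language", "model"]
print(query_likelihood(query, doc_a, coll_counts, len(collection)))
print(query_likelihood(query, doc_b, coll_counts, len(collection)))
```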

A Frame-based Approach to Text Generation

  • Le, Huong Thanh
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.192-201 / 2007
  • This paper is a study on constructing a natural language interface to databases, concentrating on generating textual answers. TGEN, a system that generates textual answers from query result tables, is presented. The TGEN architecture guarantees its portability across domains. The combination of a frame-based approach and natural language generation techniques in TGEN provides text fluency and flexibility. The implementation results show that this approach is feasible, while a deep NLG approach is still out of reach.
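The frame-based idea can be pictured as filling the slots of a sentence template with values drawn from the query result table. The sketch below is a hypothetical illustration; the frame names and templates are invented here and are not TGEN's actual ones.

```python
# Hypothetical frame-based answer generation from a query result table.
# Frame names and templates are invented for illustration; they are not TGEN's.
FRAMES = {
    "count": "There are {count} {entity} matching your query.",
    "list": "The {entity} found are: {items}.",
}

def generate_answer(frame_name, result_rows, entity="records"):
    """Fill the slots of a sentence frame with values from the result table."""
    if frame_name == "count":
        return FRAMES["count"].format(count=len(result_rows), entity=entity)
    if frame_name == "list":
        items = ", ".join(str(row[0]) for row in result_rows)
        return FRAMES["list"].format(entity=entity, items=items)
    raise ValueError(f"unknown frame: {frame_name}")

rows = [("Kim",), ("Lee",), ("Park",)]
print(generate_answer("count", rows, entity="authors"))
print(generate_answer("list", rows, entity="authors"))
```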


Applying Problem-Based Language Learning in an Online Class: Designing a PBLL Unit

  • Abdullah, Mardziah Hayati;Chong, Larry Dwan
    • English Language & Literature Teaching / v.9 no.spc / pp.1-17 / 2003
  • This paper aims to propose that Problem-Based Learning (PBL) is a method that can help meet the conditions in language learning and instruction. PBL was first used in medical education, where learners engaged in problem-solving activities that reflect the demands of real-life professional practice, thus promoting critical thinking in the content domain. The paper proposes that by applying PBL in language learning and creating situations in which learners work collaboratively on problems, the learners benefit in two respects: (i) they have the opportunity to practise the kind of thinking skills and problem-solving strategies needed in real life, and (ii) they engage in purposeful language activity with others through discussion and negotiation. The paper first provides a theoretical rationale for the use of PBL in language learning and suggests attendant changes in the role of a language instructor in a PBL context. The paper then presents an outline of the stages and components needed in designing an online PBL unit for use in an undergraduate language class.


Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal / v.43 no.6 / pp.1038-1048 / 2021
  • We propose an end-to-end neural coreference resolution model for the Korean language that uses an attention mechanism to point to the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to consider all nouns in the document as candidates, based on the head-final characteristics of the Korean language, and to learn distributions over the referenced entity positions for each noun. Given the recent success of applications using bidirectional encoder representations from transformers (BERT) in natural language processing tasks, we employed BERT in the proposed model to create word representations based on contextual information. The experimental results indicated that the proposed model achieved state-of-the-art performance in Korean coreference resolution.
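Schematically, pointing to an antecedent amounts to scoring every candidate noun head against the mention with attention and normalizing the scores into a distribution over positions. The sketch below is illustrative only, using random vectors where the paper would use BERT encodings.

```python
# Schematic antecedent pointing: score candidate noun heads against a mention
# with dot-product attention and normalize into a distribution over positions.
# Random vectors stand in for the BERT encodings used in the paper.
import numpy as np

rng = np.random.default_rng(0)
hidden = 128
candidate_heads = rng.normal(size=(5, hidden))  # encodings of 5 candidate noun heads
mention = rng.normal(size=(hidden,))            # encoding of the mention to resolve

scores = candidate_heads @ mention              # attention logits, one per candidate
probs = np.exp(scores - scores.max())
probs /= probs.sum()                            # softmax over antecedent positions
print(int(np.argmax(probs)), probs.round(3))
```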

Transformer-based reranking for improving Korean morphological analysis systems

  • Jihee Ryu;Soojong Lim;Oh-Woog Kwon;Seung-Hoon Na
    • ETRI Journal / v.46 no.1 / pp.137-153 / 2024
  • This study introduces a new approach in Korean morphological analysis combining dictionary-based techniques with Transformer-based deep learning models. The key innovation is the use of a BERT-based reranking system, significantly enhancing the accuracy of traditional morphological analysis. The method generates multiple suboptimal paths, then employs BERT models for reranking, leveraging their advanced language comprehension. Results show remarkable performance improvements, with the first-stage reranking achieving over 20% improvement in error reduction rate compared with existing models. The second stage, using another BERT variant, further increases this improvement to over 30%. This indicates a significant leap in accuracy, validating the effectiveness of merging dictionary-based analysis with contemporary deep learning. The study suggests future exploration in refined integrations of dictionary and deep learning methods as well as using probabilistic models for enhanced morphological analysis. This hybrid approach sets a new benchmark in the field and offers insights for similar challenges in language processing applications.
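The reranking step can be sketched as scoring each candidate analysis produced by the dictionary-based stage and keeping the best one. In the paper the scorer is a fine-tuned BERT model; in the illustrative sketch below it is a toy stand-in, and the candidate analyses are invented examples in the Sejong tag set.

```python
# Reranking candidate morphological analyses with a pluggable scorer.
# A fine-tuned BERT model plays the scorer role in the paper; a toy heuristic
# stands in here, and the candidates are invented Sejong-tagged examples.
from typing import Callable, List, Tuple

def rerank(candidates: List[str], scorer: Callable[[str], float]) -> Tuple[str, float]:
    """Return the highest-scoring candidate analysis and its score."""
    scored = [(cand, scorer(cand)) for cand in candidates]
    return max(scored, key=lambda pair: pair[1])

def toy_scorer(analysis: str) -> float:
    """Toy stand-in for a BERT scorer: prefer finer-grained analyses."""
    return float(analysis.count("/"))

candidates = [
    "나/NP+는/JX 학교/NNG+에/JKB 가/VV+ㄴ다/EF",   # fine-grained analysis
    "나는/NNG 학교에/NNG 가/VV+ㄴ다/EF",           # coarser, partly wrong analysis
]
best, score = rerank(candidates, toy_scorer)
print(best, score)
```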

Emotion Analysis of Characters in a Comic from State Diagram via Natural Language-based Requirement Specifications

  • Ye Jin Jin;Ji Hoon Kong;Hyun Seung Son;R. Young Chul Kim
    • International journal of advanced smart convergence / v.13 no.1 / pp.92-98 / 2024
  • The current software industry faces an emerging issue with natural language-based requirement specifications: the accuracy of such requirement analysis remains a concern, and most errors still occur at the requirement specification stage. Defining and analyzing requirements based on natural language has therefore become necessary. To address this issue, the linguistic theories of Chomsky and Fillmore are applied to the analysis of natural language-based requirements, identifying the semantics of morphemes and nouns. On this basis, a mechanism was proposed for extracting object state designs and automatically generating code templates. Building on this mechanism, we suggest generating natural language-based comic images. Utilizing state diagrams, we apply changes to the states of comic characters (protagonists) and extract variations in their expressions. This introduces a novel approach to comic image generation. We anticipate highly productive comic creation by applying software processes to Cartoon ART.
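A character's emotional behavior can be captured as a small state machine, in the spirit of the state diagrams mentioned above. The states, events, and transitions in the sketch below are invented for illustration and are not taken from the paper.

```python
# A hypothetical emotion state machine for a comic character, in the spirit of
# the state-diagram idea above; states, events, and transitions are invented.
TRANSITIONS = {
    ("calm", "insulted"): "angry",
    ("calm", "praised"): "happy",
    ("angry", "apologized_to"): "calm",
    ("happy", "insulted"): "sad",
}

def next_state(state: str, event: str) -> str:
    """Return the character's next emotional state; stay put if no transition fires."""
    return TRANSITIONS.get((state, event), state)

state = "calm"
for event in ["insulted", "apologized_to", "praised"]:
    state = next_state(state, event)
    print(f"{event} -> {state}")
```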

Hypernetwork Memory-Based Model for Infant's Language Learning (유아 언어학습에 대한 하이퍼망 메모리 기반 모델)

  • Lee, Ji-Hoon;Lee, Eun-Seok;Zhang, Byoung-Tak
    • Journal of KIISE: Computing Practices and Letters / v.15 no.12 / pp.983-987 / 2009
  • One of the critical themes in language acquisition is exposure to linguistic environments. The linguistic environments that interact with an infant include not only human beings, such as its parents, but also artificially crafted linguistic media. An infant learns a language by exploring these extensive linguistic environments around it. Based on such large-scale exposure to linguistic data, we propose a machine learning method, grounded in cognitive mechanisms, that simulates an infant's language learning flexibly and appropriately. The infant's initial stage of language learning involves sentence learning and creation, which can be simulated by exposing the model to a language corpus. The core of the simulation is a memory-based learning model with a language hypernetwork structure. The language hypernetwork simulates developmental and progressive language learning over a stream of new data by representing higher-order connections between language components. In this paper, we simulate an infant's gradual and developmental learning progress by incrementally training the language hypernetwork on 32,744 sentences extracted from the video scripts of commercial animation movies for children.
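The abstract does not spell out the hypernetwork mechanics, but a memory-based language hypernetwork is commonly realized as a large collection of small word-combination hyperedges sampled from sentences. The sketch below is a schematic toy under that assumption, not the authors' implementation: it stores sampled hyperedges with counts and uses them to suggest words for a partial sentence.

```python
# Schematic memory-based language hypernetwork (an assumption on the abstract,
# not the authors' implementation): hyperedges are small random word
# combinations sampled from each sentence, stored with counts, and later used
# to suggest likely words for a partial sentence.
import random
from collections import Counter

class LanguageHypernetwork:
    def __init__(self, edge_size=3, samples_per_sentence=20, seed=0):
        self.edge_size = edge_size
        self.samples = samples_per_sentence
        self.memory = Counter()          # hyperedge -> observation count
        self.rng = random.Random(seed)

    def learn(self, sentence):
        """Sample fixed-size word combinations from the sentence into memory."""
        words = sentence.lower().split()
        if len(words) < self.edge_size:
            return
        for _ in range(self.samples):
            edge = tuple(sorted(self.rng.sample(words, self.edge_size)))
            self.memory[edge] += 1

    def suggest(self, partial, top_k=5):
        """Rank words that co-occur in stored hyperedges with the partial input."""
        given = set(partial.lower().split())
        votes = Counter()
        for edge, count in self.memory.items():
            overlap = given & set(edge)
            for word in set(edge) - given:
                votes[word] += count * len(overlap)
        return votes.most_common(top_k)

net = LanguageHypernetwork()
for s in ["the cat sat on the mat", "the dog sat on the rug", "the cat chased the dog"]:
    net.learn(s)
print(net.suggest("the cat"))
```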