• Title/Summary/Keyword: Language task


Chinese Multi-domain Task-oriented Dialogue System based on Paddle (Paddle 기반의 중국어 Multi-domain Task-oriented 대화 시스템)

  • Deng, Yuchen;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.308-310
    • /
    • 2022
  • With the rise of the AI wave, task-oriented dialogue systems have become a popular research direction in academia and industry. Current task-oriented dialogue systems mainly adopt a pipelined form, comprising natural language understanding, dialogue policy decision making, dialogue state tracking, and natural language generation. However, pipelining is prone to error propagation, so many task-oriented dialogue systems on the market handle only single-turn dialogues. Single-domain dialogues usually allow relatively accurate semantic understanding, whereas such systems tend to perform poorly on multi-domain, multi-turn dialogue datasets. To address these issues, we developed a Paddle-based multi-domain task-oriented Chinese dialogue system. It is built on the NEZHA-base pre-trained model and the CrossWOZ dataset, and uses an intent recognition module, a binary slot recognition module, and an NER module to perform dialogue state tracking (DST) and to generate replies based on rules. Experiments show that the dialogue system not only makes good use of context but also effectively handles long-term dependencies. In our approach, DST is improved: it can identify the multiple slot key-value pairs involved in an utterance, which eliminates the need for manual tagging and thus greatly saves manpower.
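The pipelined flow described in this abstract (NLU with intent and slot recognition, rule-based DST that accumulates slots across turns, then rule-based reply generation) can be sketched as below. This is a minimal illustration of the architecture, not the paper's implementation; all module logic, rules, and slot names are invented for the example.

```python
def nlu(utterance):
    """Toy intent and slot recognizer (stands in for the NEZHA-based modules)."""
    intent = "inform" if "want" in utterance else "request"
    slots = {}
    if "hotel" in utterance:
        slots["domain"] = "hotel"
    if "cheap" in utterance:
        slots["price"] = "cheap"
    return intent, slots

def update_state(state, slots):
    """DST: accumulate slot key-value pairs across turns (multi-slot, multi-turn)."""
    state.update(slots)
    return state

def generate_reply(state):
    """Rule-based reply generation over the tracked state."""
    if "domain" in state and "price" in state:
        return f"Looking for a {state['price']} {state['domain']}."
    return "Which domain are you interested in?"

state = {}
for turn in ["I want a hotel", "something cheap please"]:
    intent, slots = nlu(turn)
    state = update_state(state, slots)
reply = generate_reply(state)
print(reply)  # the tracked state carries slots from both turns
```

The point of the sketch is that the state dictionary, not any single utterance, drives the reply, which is what lets a multi-turn dialogue fill slots incrementally.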

A Study on Generation of Parallel Task in High Performance Language (고성능 언어에서의 병렬 태스크 생성에 관한 연구)

  • Park, Sung-Soon;Koo, Mi-Soon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.6
    • /
    • pp.1636-1651
    • /
    • 1997
  • In a task-parallel language like Fortran M, the programmer writes a task-parallel program using the parallel constructs the language provides. When data dependencies exist between called procedures, as in many applications, it is difficult for the programmer to write the program according to those dependencies. It is therefore desirable for the compiler to detect implicit parallelism and transform the program into a parallelized form using task-parallel constructs such as Fortran M's PROCESSES block or PROCESSDO loop, but current task-parallel language compilers do not provide this capability. In this paper, we analyze the cases according to their dependence relations and detect implicit parallelism that can be transformed into task-parallel constructs such as the PROCESSES block and PROCESSDO loop of Fortran M. In addition, for cases where a program can be parallelized with either a PROCESSES block or a PROCESSDO loop, we analyze which construct is more effective under various conditions.
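The core idea above, checking whether two procedure calls are dependence-free and, if so, running them as parallel tasks (as a PROCESSES block or PROCESSDO loop would), can be sketched in Python rather than Fortran M. The read/write-set dependence test here is a deliberately simplified stand-in for the paper's analysis.

```python
# Toy dependence test: two calls are independent when neither call's writes
# overlap the other's reads or writes (no flow, anti, or output dependence).
from concurrent.futures import ThreadPoolExecutor

def independent(call_a, call_b):
    reads_a, writes_a = call_a
    reads_b, writes_b = call_b
    return (writes_a.isdisjoint(reads_b | writes_b)
            and writes_b.isdisjoint(reads_a | writes_a))

# (reads, writes) summaries for two procedure calls
call1 = ({"a"}, {"x"})
call2 = ({"b"}, {"y"})

if independent(call1, call2):
    # Analogue of a PROCESSDO loop: execute the calls as parallel tasks.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda call: sorted(call[1]), [call1, call2]))
else:
    # Dependence found: fall back to sequential execution.
    results = [sorted(call1[1]), sorted(call2[1])]
print(results)
```

Because the two calls write disjoint variables and read nothing the other writes, the test succeeds and both run as parallel tasks.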


Developing a task-based English lesson plan to enhance teaching ability (과제중심 영어 학습지도안 모형 개발)

  • Hyun, Taeduck
    • English Language & Literature Teaching
    • /
    • v.16 no.4
    • /
    • pp.321-346
    • /
    • 2010
  • This study was performed to develop a task-based English lesson plan. It first reviewed the background theories needed for this purpose: types of learning, current trends in English teaching, and task-based teaching. A frame for task-based English lessons was developed as the result of the study, and an actual task-based lesson plan was then built on that frame. The author presented task-based English lesson plans at English education conferences and applied them to pre-service and in-service training for English teachers. It is concluded that the task-based English lesson plan was very effective in enhancing English communicative competence and that the pre-service and in-service teachers were satisfied with the lesson plans. It is hoped that more teaching material will be developed based on this task-based English lesson plan.


A task-oriented programming system (공정 지향적인 프로그래밍 시스템)

  • 박홍석
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1996.04a
    • /
    • pp.249-252
    • /
    • 1996
  • This paper presents an algorithmic approach used in the development of a task-level off-line programming system for the efficient application of robots. In this method, robot tasks are described graphically with manipulation functions. Using a robot language, these graphic robot tasks are then converted into commands for the robot. A programming example demonstrates the potential of task-oriented robot programming.


An analysis of the writing tasks in high school English textbooks: Focusing on genre, rhetorical structure, task types, and authenticity (고등학교 1학년 영어교과서 쓰기활동 과업 분석: 장르, 텍스트 전개구조, 활동 유형, 진정성을 중심으로)

  • Choi, Sunhee;Yu, Ho-Jung
    • English Language & Literature Teaching
    • /
    • v.16 no.4
    • /
    • pp.267-290
    • /
    • 2010
  • The purpose of this study is to analyze the writing tasks included in the newly developed high school English textbooks in terms of genre, rhetorical structure, task type, and authenticity, in order to find out whether these tasks could contribute to improving Korean EFL students' writing skills. A total of nine textbooks were selected for the study, and every writing task in each textbook was analyzed. The results show that various genres were incorporated in the tasks, but very few opportunities were provided for students to acquire the characteristics of specific genres. In terms of the rhetorical structure of texts, narration, illustration, and transaction were required most, whereas not a single writing task asked students to use classification or cause and effect. Many of the writing tasks analyzed offered linguistic and/or content support through the use of models, which displays traces of the product-based approach to teaching writing. Lastly, most of the tasks lacked authenticity as represented by explicit discussion of purpose and audience. Implications for L2 writing task development and writing instruction in the Korean EFL context are discussed.


Functional MRI of Language: Difference of its Activated Areas and Lateralization according to the Input Modality (언어의 기능적 자기공명영상: 자극방법에 따른 활성화와 편재화의 차이)

  • Ryoo, Jae-Wook;Cho, Jae-Min;Choi, Ho-Chul;Park, Mi-Jung;Choi, Hye-Young;Kim, Ji-Eun;Han, Heon;Kim, Sam-Soo;Jeon, Yong-Hwan;Khang, Hyun-Soo
    • Investigative Magnetic Resonance Imaging
    • /
    • v.15 no.2
    • /
    • pp.130-138
    • /
    • 2011
  • Purpose: To compare fMRI of visual and auditory word-generation tasks, and to evaluate differences in activated areas and lateralization according to the mode of stimulus. Materials and Methods: Eight right-handed, normal male volunteers were included. Functional maps were obtained during auditory and visual word-generation tasks in all subjects. Normalized group analyses were performed for each task, and the threshold for significance was set at p<0.05. Activated areas in each task were compared visually and statistically. Results: Left-dominant activation was demonstrated in both tasks and was more lateralized in the visual task. Both frontal lobes (Broca's area, premotor area, and SMA) and the left posterior middle temporal gyrus were activated in both tasks. Extensive bilateral temporal activation was noted in the auditory task, while occipital and parietal activations were demonstrated in the visual task. Conclusion: Modality-independent areas can be interpreted as core areas of language function, and modality-specific areas may be associated with processing of the stimuli. The visual task induced more lateralized activation and could be more useful in language studies than the auditory task.
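Lateralization in studies like this one is commonly quantified with a lateralization index over activated voxel counts, LI = (L - R) / (L + R), ranging from +1 (fully left-lateralized) to -1 (fully right-lateralized). The abstract does not state its formula, so this standard definition is an assumption:

```python
def lateralization_index(left_voxels, right_voxels):
    """LI = (L - R) / (L + R); +1 = fully left-lateralized, -1 = fully right."""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

# Illustrative counts only: a left-dominant activation pattern.
print(lateralization_index(300, 100))  # 0.5, left-dominant
```

Under this definition, "more lateralized in the visual task" simply means the visual task yields an LI closer to +1 than the auditory task does.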

A Basic Study on the Development of a Grading Scale of Discourse Competence in Korean Speaking Assessment -Focusing on the Scale of 'REFUSAL' Task (한국어 말하기 평가에서 '담화 능력' 등급 기술을 위한 기초 연구 -'부탁'에 대한 '거절하기' 과제를 중심으로-)

  • Lee, Haeyong;Lee, Hyang
    • Journal of Korean language education
    • /
    • v.29 no.3
    • /
    • pp.255-292
    • /
    • 2018
  • Most grading scales of Korean language proficiency tests are based on existing grading scales that are not empirically verified. The purpose of this study is to develop an empirically verified scale descriptor. The 'Performance data-driven approach' that is suggested by Fulcher (1987) was used to develop the detailed description of characteristics for each level of performance. This study is focused on the functional phase of speech samples analysis (coding data) to create explanatory categories of discourse skills into which individual observations of speech phenomena can be scored. The speech samples that were collected through this study demonstrated stages of speech that can be a foundation of a grading scale. The data used in the study was collected from 23 native speakers of Korean. Speech samples were recorded from simulated speaking tests using the 'REFUSAL' task, and transcribed for analysis. The transcript was analyzed using discourse analysis. The result showed that the 'REFUSAL' task needs to go through four functional phases in actual communication. Furthermore, this study found specific and detailed explanatory categories of discourse competence based on the actual native speaker's speech data. Such findings are expected to contribute to the development of more valid and reliable speaking assessment.

Compressing intent classification model for multi-agent in low-resource devices (저성능 자원에서 멀티 에이전트 운영을 위한 의도 분류 모델 경량화)

  • Yoon, Yongsun;Kang, Jinbeom
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.45-55
    • /
    • 2022
  • Recently, large-scale pretrained language models (LPLM) have shown state-of-the-art performance in various natural language processing tasks, including intent classification. However, fine-tuning an LPLM incurs a high computational cost for training and inference, which is not appropriate for dialog systems. In this paper, we propose a compressed intent classification model for multi-agent operation on low-resource hardware such as CPUs. Our method consists of two stages. First, we train a sentence encoder from the LPLM and then compress it through knowledge distillation. Second, we train an agent-specific adapter for intent classification. Results on three intent classification datasets show that our method achieves 98% of the accuracy of the LPLM with only 21% of its size.
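The two-stage recipe above (distill a large encoder into a small one, then train a lightweight agent-specific adapter on top) can be sketched as follows. The distillation loss shown is the standard temperature-softened cross-entropy against the teacher's distribution; the one-layer "adapter" and all numbers are illustrative assumptions, not the paper's architecture.

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened distribution."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def adapter_score(sentence_embedding, adapter_weights):
    """A tiny agent-specific 'adapter': one linear scoring row per intent."""
    return [sum(x * w for x, w in zip(sentence_embedding, row))
            for row in adapter_weights]

teacher = [2.0, 0.5, -1.0]
matched = [2.0, 0.5, -1.0]      # student matches the teacher exactly
mismatched = [-1.0, 0.5, 2.0]   # student disagrees with the teacher
print(distillation_loss(teacher, matched) < distillation_loss(teacher, mismatched))
```

Because each agent only needs its own small adapter while sharing the distilled encoder, adding an agent costs far less than fine-tuning a full LPLM per agent.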

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho;Jin-Xia Huang;Oh-Woog Kwon
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.82-95
    • /
    • 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, specific trait assessment remains a challenge because of the limited depth of earlier methods in modeling dual assessments for holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models in the same MTL setting, ours showed a 2.0% increase in holistic score. Additionally, compared with single-task learning (STL) models, ours demonstrated a 3.6% improvement in average multi-trait performance on the ASAP++ dataset.
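The multi-task learning setup implied above, one shared essay representation feeding a holistic head plus several trait heads, is typically trained on a weighted sum of per-task losses. A minimal sketch with toy linear "heads" and squared-error losses (the weights, alpha, and head shapes are assumptions for illustration, not the DualBERT-Trans-CNN internals):

```python
def head(shared_repr, weights):
    """A toy linear scoring head over a shared essay representation."""
    return sum(x * w for x, w in zip(shared_repr, weights))

def mtl_loss(shared_repr, holistic_target, trait_targets,
             holistic_w, trait_ws, alpha=0.5):
    """Weighted sum of the holistic loss and the mean trait loss."""
    holistic_pred = head(shared_repr, holistic_w)
    holistic_loss = (holistic_pred - holistic_target) ** 2
    trait_loss = sum(
        (head(shared_repr, w) - t) ** 2
        for w, t in zip(trait_ws, trait_targets)
    ) / len(trait_targets)
    return alpha * holistic_loss + (1 - alpha) * trait_loss

repr_ = [0.2, 0.8, 0.5]
loss = mtl_loss(repr_, holistic_target=3.0, trait_targets=[2.0, 4.0],
                holistic_w=[1.0, 1.0, 1.0], trait_ws=[[1, 0, 1], [0, 2, 1]])
print(loss)
```

Sharing `shared_repr` across heads is what lets trait supervision improve the holistic prediction (and vice versa), which is the interrelation the abstract emphasizes.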

Recent R&D Trends for Pretrained Language Model (딥러닝 사전학습 언어모델 기술 동향)

  • Lim, J.H.;Kim, H.K.;Kim, Y.K.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.3
    • /
    • pp.9-19
    • /
    • 2020
  • Recently, the technique of pretraining a deep learning language model on a large corpus and then fine-tuning it for each application task has been widely used in language processing. Pretrained language models show higher performance and better generalization than existing methods. This paper introduces the major research trends related to deep learning pretrained language models in the field of language processing. We describe in detail the motivation, model, learning methods, and results of the BERT language model, which had a significant influence on subsequent studies. We then introduce language model studies after BERT, focusing on SpanBERT, RoBERTa, ALBERT, BART, and ELECTRA. Finally, we introduce the KorBERT pretrained language model, which shows satisfactory performance on Korean. In addition, we introduce techniques for applying pretrained language models to Korean, an agglutinative language in which words are combinations of content and functional morphemes, unlike English, which is an inflectional language.
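The note above on agglutinative Korean suggests segmenting an eojeol (space-delimited word) into a content morpheme plus functional morphemes before subword tokenization. A toy longest-suffix splitter, assuming a small hand-made particle list (real KorBERT-style morpheme analysis is far more involved; this only illustrates the idea):

```python
# A handful of common Korean particles (functional morphemes) for illustration.
PARTICLES = ["에서", "은", "는", "이", "가", "을", "를"]

def split_eojeol(eojeol):
    """Strip one known functional suffix, keeping the content morpheme."""
    for p in sorted(PARTICLES, key=len, reverse=True):
        if eojeol.endswith(p) and len(eojeol) > len(p):
            return [eojeol[:-len(p)], p]
    return [eojeol]

print(split_eojeol("학교에서"))  # ['학교', '에서'] ("school" + locative particle)
print(split_eojeol("한국어"))    # no known particle: kept whole
```

Separating the content stem from the particle this way keeps the pretrained model's subword vocabulary from fragmenting the same stem differently under every particle.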