• Title/Summary/Keyword: Representation-Language

A Study on Trend and Application of Internet Scripting Language (인터넷 스크립팅 언어의 동향 및 응용에 관한 연구)

  • Lee, Jong-Seop; Choe, Yeong-Geun
    • The Transactions of the Korea Information Processing Society / v.6 no.11S / pp.3209-3218 / 1999
  • In the current World Wide Web environment, HTML (HyperText Markup Language) is used for information representation and exchange. However, HTML's limited tag set constrains the kinds of information it can represent. Combining HTML, which serves static information representation on the Web, with scripting languages, which are typically used to represent multimedia information in a synchronized framework, can therefore be very useful. Consequently, we survey the general trends of scripting languages in the Web environment and show how combining HTML with scripting languages can improve Web services.

Korean Abstract Meaning Representation (AMR) Guidelines for Graph-structured Representations of Sentence Meaning (문장 의미의 그래프 구조 표상을 위한 한국어 Abstract Meaning Representation 가이드라인)

  • Choe, Hyonsu; Han, Jiyoon; Park, Hyejin; Oh, Taehwan; Park, Seokwon; Kim, Hansaem
    • Annual Conference on Human and Language Technology / 2019.10a / pp.252-257 / 2019
  • This paper introduces the Korean Abstract Meaning Representation (AMR) Guidelines 1.0. AMR is a unified meaning representation framework that has established itself as one of the major tasks in semantic parsing. The Korean AMR guidelines are the product of an in-depth analysis of the current AMR specification (version 1.2.6) and its localization to the Korean language. The guidelines were written to provide consistent, detailed annotation instructions in preparation for future Korean AMR corpus construction (sembanking).
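
    For readers unfamiliar with AMR, the following minimal sketch shows what a graph-structured meaning representation looks like in practice. It uses the standard English AMR example "The boy wants to go" in PENMAN notation with the open-source `penman` Python library; the Korean guidelines localize this same formalism, and the specific sentence and library here are illustrative assumptions, not part of the paper.

```python
# A minimal AMR sketch (illustrative; not taken from the paper).
# Requires the open-source `penman` library: pip install penman
import penman

# "The boy wants to go" in PENMAN notation: a rooted, directed graph
# where concepts are nodes and roles (:ARG0, :ARG1) are labeled edges.
g = penman.decode("(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))")

# The same meaning as a set of triples; note the reentrancy:
# the boy `b` is the :ARG0 of both want-01 and go-01.
for triple in g.triples:
    print(triple)
# ('w', ':instance', 'want-01'), ('w', ':ARG0', 'b'), ...
```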

A Didactic Analysis of Prospective Elementary Teachers' Representation of Trapezoid Area (예비초등교사의 사다리꼴 넓이 표상에 대한 교수학적 분석)

  • Lee Jonge-Uk
    • The Mathematical Education / v.45 no.2 s.113 / pp.177-189 / 2006
  • This study analyzes prospective elementary teachers' representations of trapezoid area and a teacher educator's reflection in the context of a mathematics course. I use my own teaching and my classroom of prospective elementary teachers as the site for investigation, and I examine the ways in which my pedagogical content knowledge as a teacher educator influences, and is influenced by, my work with students. Data for the study are provided by audiotapes of class proceedings. The episode describes the ways in which the mathematics was presented with respect to the development and use of representations, centered on trapezoid area, and deals with my gaining a deeper understanding of different types of representations: symbolic, visual, and linguistic. In conclusion, I present two major findings. First, the representations influence one another: prospective elementary teachers reasoned toward visual representations from symbolic and linguistic ones, and vice versa. Second, teacher educators should develop appropriate mathematical language through teaching and learning with their students.
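
    As a reminder of the mathematics behind the episode, here is one standard symbolic derivation of the trapezoid area formula; this particular derivation is illustrative and not taken from the paper.

```latex
% Illustrative symbolic derivation (not from the paper): two copies of a
% trapezoid with parallel sides a, b and height h form a parallelogram
% with base (a + b) and height h, so
\[
  2A = (a+b)\,h \quad\Longrightarrow\quad A = \frac{(a+b)\,h}{2}.
\]
```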

Language-based Classification of Words using Deep Learning (딥러닝을 이용한 언어별 단어 분류 기법)

  • Zacharia, Nyambegera Duke; Dahouda, Mwamba Kasongo; Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.411-414 / 2021
  • Deep learning has become a critical technology in education today, especially in natural language processing (NLP), where word-representation vectors play a central role. However, some low-resource languages, such as Swahili, which is spoken in East and Central Africa, do not benefit from it. NLP is a field of artificial intelligence in which systems and computational algorithms are built that can automatically understand, analyze, manipulate, and potentially generate human language. After discovering that some African languages lack proper representation in language processing, to the point of being described as low-resource languages because of inadequate data for NLP, we decided to study the Swahili language. Language modeling with neural networks currently requires adequate data to guarantee high-quality word representations, which are important for NLP tasks, and most African languages have no data for such processing. The main aim of this project is the classification of words in English, Swahili, and Korean, with particular emphasis on the low-resource Swahili language. Finally, we create our own dataset, preprocess the data using a Python script, formulate the syllabic alphabet, and develop an English, Swahili, and Korean word-analogy dataset.
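
    The paper's own model and dataset are not reproduced here, so the following is only a minimal sketch of word-level language identification under stated assumptions: a tiny hypothetical training set and character n-gram features, which are a common stand-in for learned representations when data is scarce, as with low-resource languages such as Swahili.

```python
# Minimal language-identification sketch (illustrative; hypothetical data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: (word, language) pairs.
words  = ["habari", "rafiki", "school", "water", "학교", "사람"]
labels = ["sw", "sw", "en", "en", "ko", "ko"]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),  # char 1-3 grams
    LogisticRegression(max_iter=1000),
)
clf.fit(words, labels)
print(clf.predict(["chakula"]))  # likely 'sw' for a Swahili-looking word
```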

Comparative study of text representation and learning for Persian named entity recognition

  • Pour, Mohammad Mahdi Abdollah; Momtazi, Saeedeh
    • ETRI Journal / v.44 no.5 / pp.794-804 / 2022
  • Transformer models have had a great impact on natural language processing (NLP) in recent years by realizing outstanding and efficient contextualized language models. Recent studies have used transformer-based language models for various NLP tasks, including Persian named entity recognition (NER). However, in complex tasks such as NER, it is difficult to determine which contextualized embedding will produce the best representation for the task. Given the lack of comparative studies investigating the use of different contextualized pretrained models with sequence-modeling classifiers, we conducted a comparative study of different classifiers and embedding models. In this paper, we tune different transformer-based language models with different classifiers and evaluate them on the Persian NER task, assessing the impact of text representation and text classification methods on Persian NER performance. We train and evaluate the models on three Persian NER datasets: MoNa, Peyma, and Arman. Experimental results demonstrate that XLM-R with a linear layer and a conditional random field (CRF) layer exhibited the best performance, achieving phrase-based F-measures of 70.04, 86.37, and 79.25 and word-based F-scores of 78, 84.02, and 89.73 on the MoNa, Peyma, and Arman datasets, respectively. These results represent state-of-the-art performance on the Persian NER task.
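
    As a rough sketch of the winning configuration's backbone, the snippet below loads XLM-R as a token-classification model via the `transformers` library. The paper adds a CRF layer on top of the linear layer; that part is omitted here, and the label set and Persian example sentence are hypothetical.

```python
# Sketch only: XLM-R + linear token-classification head for NER.
# The paper's best model also stacks a CRF layer, not shown here.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels)
)

# Hypothetical Persian sentence: "Tehran is the capital of Iran."
inputs = tokenizer("تهران پایتخت ایران است", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # (1, seq_len, num_labels)
preds = logits.argmax(-1)[0].tolist()      # untrained, so tags are random
print([labels[p] for p in preds])          # fine-tune on NER data to use
```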

A Study on the representation-language from image features of Interior Design - Focused on 2008 International Fair - (실내디자인 이미지 유형의 특성에 따른 표현어휘 연구 - 2008년도 국제박람회를 중심으로 -)

  • Sheen, Dong-Kwan; Han, Young-Ho
    • Korean Institute of Interior Design Journal / v.17 no.6 / pp.216-224 / 2008
  • A design language for representation must carry the design meanings of an interior's functions, and it must make a design proposal easy and quick to understand in conversation. This study suggests six basic elements that form an image in interior design: form, line, space, color, material, and the principles of design, and it arranges the essential image vocabulary compiled from preceding research. These six fundamental elements of space are used to explain a design with minimal means so that consumers can understand it through images. In space design, an image serves a communication function as visual conversation; the purpose of using an image language is to turn written visual images into communication. To do so, the correct meaning of an interior design must be delivered so that consumer and designer understand each other when proposals are made through images. Categorizing the representation-language derived from the image features of interior design is therefore valuable research for sharing spatial patterns, and the spatial image language can be expected to grow as new trends are incorporated.

RBM-based distributed representation of language (RBM을 이용한 언어의 분산 표상화)

  • You, Heejo; Nam, Kichun; Nam, Hosung
    • Korean Journal of Cognitive Science / v.28 no.2 / pp.111-131 / 2017
  • The connectionist model is one approach to studying language processing from a computational perspective. In connectionist modeling, building a representation is just as important as designing the structure of the model, because the representation determines the model's level of learning and performance. Connectionist models have been constructed in two different ways: with localist representations and with distributed representations. However, the localist representations used in previous studies have the limitation that output units with rarely active target values become inactive, while past distributed representations have the limitation that results are difficult to verify because the displayed information is opaque. These have been limitations of connectionist modeling research overall. In this paper, we present a new method that induces a distributed representation from a localist representation using the information abstraction that is characteristic of the restricted Boltzmann machine (RBM), addressing these limitations of past representations. Our proposed method effectively solves the problems of conventional representations by compressing information and by inversely transforming the distributed representation back into a localist representation.
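
    A minimal sketch of the idea, under assumptions the abstract does not pin down (toy sizes, scikit-learn's `BernoulliRBM` rather than the authors' implementation): train an RBM on localist one-hot patterns, read its hidden activations as a distributed code, and use the RBM's generative direction to map that code back toward the localist pattern.

```python
# Illustrative RBM sketch; toy data and hyperparameters are assumptions.
import numpy as np
from scipy.special import expit            # logistic sigmoid
from sklearn.neural_network import BernoulliRBM

X = np.eye(8)                              # 8 items, localist one-hot codes
rbm = BernoulliRBM(n_components=4, learning_rate=0.1,
                   n_iter=2000, random_state=0)
rbm.fit(X)

H = rbm.transform(X)                       # distributed codes, shape (8, 4)
print(np.round(H, 2))

# Inverse transform: P(visible | hidden) maps the distributed code back
# toward a localist pattern; with enough training each row tends back
# toward its original one-hot.
V = expit(H @ rbm.components_ + rbm.intercept_visible_)
print(np.round(V, 2))
```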

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae; Ra, Dongyul; Lim, Soojong
    • ETRI Journal / v.43 no.2 / pp.299-312 / 2021
  • It is necessary to achieve high performance in the task of zero-anaphora resolution (ZAR) to completely understand texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed to build ZAR systems, owing to the success of deep learning in recent years. However, the objective of building a high-quality ZAR system is far from being achieved even with these models. To enhance current ZAR techniques, we fine-tuned a pretrained bidirectional encoder representations from transformers (BERT) model. Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in natural language text; it extensively exploits the attention mechanism of the sequence-transduction model Transformer. In our model, classification is performed simultaneously for all words in the input word sequence to decide whether each word can be an antecedent. We pursue end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that, compared with other models, our approach significantly improves ZAR performance.
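
    The setup can be approximated with the generic token-classification head from the `transformers` library, as in the sketch below: every token is labeled antecedent or not for a zero-anaphora slot. The checkpoint, label scheme, and example sentence are assumptions for illustration; the paper's exact architecture may differ.

```python
# Illustrative per-token antecedent classification (hypothetical setup).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=2,                  # 0: not antecedent, 1: antecedent
)

# Korean text with a zero subject in the second clause:
# "Cheolsu ate. And (he) went to school."
enc = tokenizer("철수는 밥을 먹었다. 그리고 학교에 갔다.", return_tensors="pt")
with torch.no_grad():
    preds = model(**enc).logits.argmax(-1)   # per-token decision
print(preds)   # untrained output; the paper fine-tunes this end to end
```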

Shopping Mall Avatar System Using Behavior and Motion Description Language (수준별 행위 표현 기법을 이용한 쇼핑몰도우미 아바타 시스템의 구현)

  • Kim, Jung-Hee; Lee, Gui-Hyun; Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.8 no.4 / pp.566-574 / 2005
  • Despite the recent increase in the use of avatars on the Web and in virtual reality, no service has allowed users to control avatar behaviors directly. In addition, conventional behavior control languages required a great deal of complicated information to control avatar motions, and applying a written language to a different task domain required modifying or rewriting it. In this paper, we define a task-level Behavior Description Language and a Motion Representation Language for simpler control of avatar behavior. The former describes avatar behaviors within a task domain, while the latter specifies the detailed data for motion control. We also developed an interpreter that automatically translates the Behavior Description Language into the Motion Representation Language, so users can control avatar behavior simply by writing the Behavior Description Language alone. The system was applied to a shopping mall, and the task-level Behavior Description Language was compared with conventional languages to show that it is more effective for behavior description.
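
    The paper's two languages are not specified in the abstract, so the following hypothetical interpreter only sketches the general idea: a task-level behavior command expands into a sequence of low-level motion-representation steps through a task-domain dictionary. All command and motion names are invented for illustration.

```python
# Hypothetical task-level-to-motion interpreter (names are illustrative).
MOTIONS = {  # task-domain dictionary for a shopping-mall assistant avatar
    "greet":       ["turn_to(customer)", "bow(15deg)", "speak('Welcome!')"],
    "show_item":   ["walk_to(shelf)", "point(item)", "face(customer)"],
    "say_goodbye": ["bow(15deg)", "wave(hand=right)"],
}

def interpret(behavior_script: str) -> list[str]:
    """Expand task-level behavior commands into motion primitives."""
    motions = []
    for command in behavior_script.split(";"):
        command = command.strip()
        if command:
            motions.extend(MOTIONS[command])  # KeyError if behavior undefined
    return motions

print(interpret("greet; show_item; say_goodbye"))
```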

Improving Stack LSTMs by Combining Syllables and Morphemes for Korean Dependency Parsing (Stack LSTM 기반 한국어 의존 파싱을 위한 음절과 형태소의 결합 단어 표상 방법)

  • Na, Seung-Hoon; Shin, Jong-Hoon; Kim, Kangil
    • Annual Conference on Human and Language Technology / 2016.10a / pp.9-13 / 2016
  • Stack-LSTM-based dependency parsing is a transition-based parsing approach in which the contents of the stack and buffer are encoded with Stack LSTMs and combined to derive a parser state representation, which is then used to decide the next transition action. In Stack-LSTM-based dependency parsing, the word representation used to initialize the buffer is important; for morphologically rich languages such as Korean, an enormous number of word forms can be derived, so directly obtaining a word embedding vector for each word is limited. In this paper, to apply Stack LSTMs to Korean dependency parsing, we propose a hybrid composition method that combines syllable-tag and morpheme representations to obtain word representations. Experimental results on the Sejong test set show that the proposed word representation method further improves over the syllable-tag and morpheme methods, achieving a strong UAS of 93.65% (90.44% on the Rigid evaluation set).
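
    A minimal PyTorch sketch of such a hybrid word representation follows; the layer sizes, pooling choice, and module names are assumptions, not the paper's exact design. A syllable-level LSTM encoding is concatenated with a pooled morpheme embedding to form the vector that initializes the parser's buffer.

```python
# Illustrative hybrid syllable + morpheme word representation.
import torch
import torch.nn as nn

class HybridWordRepr(nn.Module):
    def __init__(self, n_syll: int, n_morph: int, d: int = 64):
        super().__init__()
        self.syll_emb = nn.Embedding(n_syll, d)
        self.syll_lstm = nn.LSTM(d, d, batch_first=True)
        self.morph_emb = nn.Embedding(n_morph, d)

    def forward(self, syllables, morphemes):
        # syllables: (1, n_syllables) ids; morphemes: (1, n_morphemes) ids
        _, (h, _) = self.syll_lstm(self.syll_emb(syllables))
        syll_vec = h[-1]                                   # (1, d) final state
        morph_vec = self.morph_emb(morphemes).mean(dim=1)  # (1, d) mean pool
        return torch.cat([syll_vec, morph_vec], dim=-1)    # (1, 2d)

model = HybridWordRepr(n_syll=1000, n_morph=500)
w = model(torch.tensor([[3, 14, 15]]), torch.tensor([[9, 26]]))
print(w.shape)  # torch.Size([1, 128]) -- buffer-initialization vector
```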
