• Title/Abstract/Keywords: language data

Search results: 3,790 items (processing time: 0.031 s)

R 프로그래밍: 통계 계산과 데이터 시각화를 위한 환경 (R programming: Language and Environment for Statistical Computing and Data Visualization)

  • 이두호
    • 전자통신동향분석 / Vol. 28, No. 1 / pp. 42-51 / 2013
  • The R language is an open-source programming language and software environment for statistical computing and data visualization. It is widely used by statisticians and data scientists to develop statistical software and perform data analysis. R provides a variety of statistical and graphical techniques, including basic descriptive statistics, linear and nonlinear modeling, conventional and advanced statistical tests, time series analysis, clustering, and simulation. In this paper, we first introduce the R language and then examine its features as a data analytics tool, exploring its potential applications in the field of data analytics.
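
As a rough, Python-only analogue of the basic workflow this abstract attributes to R (descriptive statistics and simple linear modeling), the sketch below computes summary statistics and an ordinary-least-squares fit with the standard library; the sample data are invented for illustration and nothing here comes from the paper itself.

```python
import statistics as stats

# Invented sample: predictor (x) and response (y) for ten observations
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
y = [2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 8.2, 8.8, 10.1, 11.0]

# Basic descriptive statistics (what R's summary() and sd() would report)
print("mean:", stats.mean(y), "median:", stats.median(y), "sd:", stats.stdev(y))

# Simple linear regression y = a + b*x by ordinary least squares (R: lm(y ~ x))
mx, my = stats.mean(x), stats.mean(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
print(f"fit: y = {a:.3f} + {b:.3f} * x")
```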


정보검색 기법과 동적 보간 계수를 이용한 N-gram 언어모델의 적응 (N-gram Adaptation Using Information Retrieval and Dynamic Interpolation Coefficient)

  • 최준기;오영환
    • 대한음성학회지:말소리 / No. 56 / pp. 207-223 / 2005
  • The goal of language model adaptation is to improve the background language model with a relatively small adaptation corpus. This study presents a language model adaptation technique for the case where no additional text data are available for adaptation. We propose using an information retrieval (IR) technique with N-gram language modeling to collect the adaptation corpus from the baseline text data. We also propose a dynamic language model interpolation coefficient to combine the background language model and the adapted language model. The interpolation coefficient is estimated from the word hypotheses obtained by segmenting the input speech data reserved as held-out validation data, which allows the final adapted model to improve on the background model consistently. The proposed approach reduces the word error rate by 13.6% relative to the baseline 4-gram model on two hours of broadcast news speech recognition.
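
The model combination described here is a linear interpolation of a background and an adapted language model, P(w|h) = λ·P_adapt(w|h) + (1−λ)·P_bg(w|h). The Python sketch below shows the general form with invented unigram probabilities, and a simple grid search over held-out word hypotheses stands in for the paper's dynamic coefficient estimation; it is not the authors' implementation.

```python
import math

# Background and adapted models; the unigram probabilities are invented.
background = {"news": 0.02, "weather": 0.010, "sports": 0.03}
adapted    = {"news": 0.06, "weather": 0.005, "sports": 0.02}

def interpolate(word, lam, p_adapt, p_bg):
    """P(w) = lam * P_adapt(w) + (1 - lam) * P_background(w)."""
    return lam * p_adapt.get(word, 0.0) + (1.0 - lam) * p_bg.get(word, 0.0)

def estimate_lambda(held_out_words, p_adapt, p_bg):
    """Grid-search the coefficient that maximizes the likelihood of held-out
    word hypotheses (a stand-in for the paper's dynamic estimation step)."""
    grid = [i / 20 for i in range(21)]
    def loglike(lam):
        return sum(math.log(max(interpolate(w, lam, p_adapt, p_bg), 1e-12))
                   for w in held_out_words)
    return max(grid, key=loglike)

lam = estimate_lambda(["news", "news", "sports"], adapted, background)
print("lambda =", lam)
print("P(news) =", interpolate("news", lam, adapted, background))
```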


A Spatial Structural Query Language-G/SQL

  • Fang, Yu;Chu, Fang;Xinming, Tang
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2002 Proceedings of International Symposium on Remote Sensing / pp. 860-879 / 2002
  • Traditionally, geographical information systems can only process spatial data in a procedure-oriented way, and the data cannot be treated integrally. This limits the development of spatial data applications. A promising way to solve this problem is a spatial structural query language, which extends SQL and provides integrated access to spatial data. In this paper, the theory of spatial structural query languages is discussed, and a new geographical data model based on the concepts and data model of OGIS is introduced. According to this model, we implemented a spatial structural query language, G/SQL. Building on the 9-Intersection Model, G/SQL provides a set of topological relational predicates and spatial functions for GIS application development. We have successfully developed a Web-based GIS system, WebGIS, using G/SQL. Experience shows that the spatial operators G/SQL offers are complete and easy to use. The BNF representation of the G/SQL syntax is included in this paper.
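
The topological relational predicates mentioned in the abstract (derived from the 9-Intersection Model) are the kind of operators a spatial query language exposes. The hedged Python sketch below uses the shapely library rather than G/SQL itself, with invented geometries, purely to show what such predicates evaluate.

```python
# Illustration of 9-Intersection-style topological predicates using shapely
# (this is not G/SQL; it only shows the relations such a language would test).
from shapely.geometry import Point, Polygon

parcel = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])   # a square parcel
road   = Polygon([(4, 1), (7, 1), (7, 2), (4, 2)])   # shares one edge with the parcel
well   = Point(2, 2)                                 # lies inside the parcel

print(parcel.contains(well))    # True: the well is in the parcel's interior
print(parcel.touches(road))     # True: shared boundary, no interior overlap
print(parcel.intersects(road))  # True: touching counts as intersecting
print(parcel.relate(road))      # DE-9IM matrix string, e.g. 'FF2F11212'
```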


한의학 증상용어의 형태소 분석을 위한 자연어 표기 분석 (Analyzing Morpheme of the Natural Language to Express the Symptoms of Korean Medicine)

  • 김혜은;성호경;엄동명;이충열;이병욱
    • 대한예방한의학회지 / Vol. 17, No. 2 / pp. 179-187 / 2013
  • Objectives: In many cases, patients' symptoms have been recorded in EMRs in natural language rather than in medical terminologies. It is possible to build a database by analyzing how the symptoms of Korean Medicine (KM) are expressed in natural language. Using this database, when doctors record patients' symptoms in an EMR in natural language, it will conversely be possible to extract the corresponding KM symptoms from that natural language, which will enhance the value of the EMR as medical data. Methods: In this study, we aimed to build a data structure for the terminologies that represent KM symptoms, in which each term is a combination of the smallest units of natural language. We built the database by analyzing the morphemes of the natural language used to express KM symptoms. Results & Conclusions: By classifying the natural language into 15 features, we constructed the concept structure and the data required for morphological analysis.
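
As a rough sketch of the idea of representing a symptom expression as a combination of smallest natural-language units, the Python fragment below segments a phrase against a tiny, hypothetical morpheme dictionary; the units, feature classes, and example phrase are invented for illustration and are not the paper's 15-feature scheme.

```python
# Hypothetical smallest-unit dictionary: surface form -> (feature class, meaning)
MORPHEMES = {
    "머리": ("body_part", "head"),
    "가슴": ("body_part", "chest"),
    "무겁": ("quality", "heavy"),
    "아프": ("quality", "painful"),
    "다":   ("ending", None),
}

def decompose(phrase):
    """Greedy longest-match segmentation of a symptom phrase into known units."""
    units, i = [], 0
    while i < len(phrase):
        for j in range(len(phrase), i, -1):   # try the longest candidate first
            if phrase[i:j] in MORPHEMES:
                units.append((phrase[i:j], *MORPHEMES[phrase[i:j]]))
                i = j
                break
        else:
            i += 1                            # skip characters not in the dictionary
    return units

# "머리가 무겁다" (the head feels heavy) -> body_part + quality combination
print(decompose("머리가 무겁다"))
```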

Language-Independent Sentence Boundary Detection with Automatic Feature Selection

  • Lee, Do-Gil
    • Journal of the Korean Data and Information Science Society / Vol. 19, No. 4 / pp. 1297-1304 / 2008
  • This paper proposes a machine learning approach to language-independent sentence boundary detection. The proposed method requires no heuristic rules or language-specific features, such as part-of-speech information, lists of abbreviations, or proper names. Using only language-independent features, we perform experiments not only on an inflectional language but also on an agglutinative language with fairly different characteristics (in this paper, English and Korean, respectively), and obtain good performance in both. We also evaluate the method under a wide range of experimental conditions, especially for the selection of useful features.
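
A minimal sketch of the general approach (not the authors' feature set or classifier): treat each period as a boundary candidate and classify it from purely surface-level, language-independent features such as the lengths and character classes of the neighboring tokens. The toy training data and the use of scikit-learn are assumptions made for illustration only.

```python
from sklearn.linear_model import LogisticRegression

def features(before, after):
    """Language-independent surface features around a candidate '.' boundary."""
    return [
        len(before),                   # length of the token before the period
        float(before.isupper()),       # token before is all caps (abbreviation-like)
        float(len(before) <= 2),       # very short token is often an abbreviation
        float(after[:1].isupper()),    # token after starts with an uppercase letter
        float(after[:1].isdigit()),    # token after starts with a digit
    ]

# (token before '.', token after '.', is_sentence_boundary) - invented toy data
samples = [
    ("market", "The", 1), ("Dr", "Smith", 0), ("US", "economy", 0),
    ("today", "It", 1), ("etc", "and", 0), ("done", "Next", 1),
]
X = [features(b, a) for b, a, _ in samples]
y = [label for _, _, label in samples]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("report", "We")]))   # expected: [1], a sentence boundary
```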


Towards a small language model powered chain-of-reasoning for open-domain question answering

  • Jihyeon Roh;Minho Kim;Kyoungman Bae
    • ETRI Journal / Vol. 46, No. 1 / pp. 11-21 / 2024
  • We focus on open-domain question-answering tasks that involve a chain-of-reasoning, which are primarily implemented using large language models. With an emphasis on cost-effectiveness, we designed EffiChainQA, an architecture centered on the use of small language models. We employed a retrieval-based language model to address the limitations of large language models, such as the hallucination issue and the lack of updated knowledge. To enhance reasoning capabilities, we introduced a question decomposer that leverages a generative language model and serves as a key component in the chain-of-reasoning process. To generate training data for our question decomposer, we leveraged ChatGPT, which is known for its data augmentation ability. Comprehensive experiments were conducted using the HotpotQA dataset. Our method outperformed several established approaches, including the Chain-of-Thoughts approach, which is based on large language models. Moreover, our results are on par with those of state-of-the-art Retrieve-then-Read methods that utilize large language models.
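
The chain-of-reasoning pipeline described above can be summarized as: decompose the question, retrieve evidence for each sub-question, answer it, and feed the partial answers forward. The Python sketch below is only a schematic of that flow; `decompose`, `retrieve`, and `read` are hypothetical stubs, not the EffiChainQA components.

```python
# Schematic of a decompose -> retrieve -> read chain for multi-hop QA.
# All three components are placeholder stubs, not the models from the paper.

def decompose(question: str) -> list[str]:
    """Hypothetical question decomposer (in the paper, a small generative LM)."""
    return ["Who directed the film mentioned in the question?",
            "In which year was that director born?"]

def retrieve(sub_question: str) -> list[str]:
    """Hypothetical retriever returning supporting passages."""
    return [f"[passage relevant to: {sub_question}]"]

def read(sub_question: str, passages: list[str], context: list[str]) -> str:
    """Hypothetical reader that answers a sub-question given evidence so far."""
    return f"[answer to: {sub_question}]"

def answer(question: str) -> str:
    partial_answers: list[str] = []
    for sub_q in decompose(question):          # one hop per reasoning step
        passages = retrieve(sub_q)
        partial_answers.append(read(sub_q, passages, partial_answers))
    return partial_answers[-1]                 # final hop answers the original question

print(answer("When was the director of the film born?"))
```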

UML을 이용한 XML/EDI 시스템 설계 및 구현 (Design and Implementation of XML-based Electronic Data Interchange Using Unified Modeling Language)

  • 문태수;김호진
    • 한국전자거래학회지 / Vol. 7, No. 3 / pp. 139-158 / 2002
  • Most companies involved in B2B electronic commerce are making efforts to redesign their existing business processes. XML-based electronic data interchange has the potential to reshape traditional EDI systems. This study suggests a prototype of XML-based electronic data interchange designed with the Unified Modeling Language (UML), with a case study applied to the Korean automobile industry. To accomplish the research objectives, we employed UML as the standard modeling language. Four of the eight UML diagramming techniques, namely the use case diagram, sequence diagram, class diagram, and component diagram, are used to analyze the hierarchical business process. By applying the UML methodology, we design and develop XML/EDI applications efficiently. Our field test in the Korean automobile industry shows that UML-based data modeling for designing XML applications is better than existing methodologies in representing the object schema of XML data and in the extensibility and interoperability of systems.
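
To make the XML/EDI idea concrete, the sketch below builds and re-parses a small, hypothetical purchase-order message with Python's standard xml.etree.ElementTree; the element names and values are invented and do not come from the paper's automobile-industry schema.

```python
# Build a small, hypothetical XML/EDI purchase-order message.
import xml.etree.ElementTree as ET

order = ET.Element("PurchaseOrder", attrib={"id": "PO-2002-001"})
buyer = ET.SubElement(order, "Buyer")
ET.SubElement(buyer, "Name").text = "Example Auto Parts Co."

item = ET.SubElement(order, "Item", attrib={"sku": "BRK-PAD-77"})
ET.SubElement(item, "Description").text = "Brake pad set"
ET.SubElement(item, "Quantity").text = "120"

# Serialize the document that would be exchanged between trading partners
print(ET.tostring(order, encoding="unicode"))

# The receiving side parses the same message back into a tree
received = ET.fromstring(ET.tostring(order, encoding="unicode"))
print(received.find("./Item/Quantity").text)     # -> 120
```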


문맥의존 철자오류 후보 생성을 위한 통계적 언어모형 개선 (Improved Statistical Language Model for Context-sensitive Spelling Error Candidates)

  • 이정훈;김민호;권혁철
    • 한국멀티미디어학회논문지 / Vol. 20, No. 2 / pp. 371-381 / 2017
  • The performance of statistical context-sensitive spelling error correction depends on the quality and quantity of the data used for the statistical language model. In general, the quality of a statistical language model is proportional to the size of its data, but as the amount of data increases, processing becomes slower and the model requires much more storage space. We suggest an improved statistical language model to solve this problem, and we propose an effective spelling error candidate generation method based on the new model. The proposed statistical model and the correction method based on it improve both the accuracy of spelling error correction and the processing speed.
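
Context-sensitive correction of this kind typically scores each candidate word from a confusion set by its probability in the surrounding context under a statistical language model. The sketch below is a minimal bigram version of that idea in Python, with an invented corpus and confusion set; it is not the improved model proposed in the paper.

```python
# Minimal bigram language model used to rank spelling-error candidates
# for a context-sensitive correction ("their" vs "there" style confusions).
from collections import Counter

corpus = ("we went there yesterday . they left their keys there . "
          "their house is there .").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
V = len(unigrams)

def p(word, prev):
    """Add-one-smoothed bigram probability P(word | prev)."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

def rank_candidates(prev_word, next_word, candidates):
    """Score each candidate c by P(c | prev) * P(next | c) and sort best-first."""
    return sorted(candidates,
                  key=lambda c: p(c, prev_word) * p(next_word, c),
                  reverse=True)

# Which word fits in "they left ___ keys"?
print(rank_candidates("left", "keys", ["their", "there"]))   # 'their' ranks first
```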

An Account of LAD with ESL/SLI Data

  • Kim, Jeong-Seok;Han, Ho
    • 영어어문교육 / Vol. 9, No. 1 / pp. 49-66 / 2003
  • This paper explores the language acquisition mechanism within a recent theoretical nativist framework that assumes certain computational principles. We review previous accounts of the logical problem of language acquisition, arguing that language acquisition is part of a general cognitive mechanism or is at least associated with the maturation of cognitive skills. As a theoretical framework, we adopt the minimalist program and its principles. To support our theoretical argument, we introduce empirical evidence from ESL (English as a Second Language) and SLI (Specific Language Impairment) data. The two types of data illustrate that there may be a relationship between the development of language skills and that of cognitive skills.


Named entity recognition using transfer learning and small human- and meta-pseudo-labeled datasets

  • Kyoungman Bae;Joon-Ho Lim
    • ETRI Journal / Vol. 46, No. 1 / pp. 59-70 / 2024
  • We introduce a high-performance named entity recognition (NER) model for written and spoken language. To overcome challenges related to labeled-data scarcity and domain shifts, we use transfer learning to leverage our previously developed KorBERT as the base model. We also adopt a meta-pseudo-label method using a teacher/student framework with labeled and unlabeled data. Our model introduces two modifications. First, the student model is updated with the average loss over both human- and pseudo-labeled data. Second, the influence of noisy pseudo-labeled data is mitigated by considering feedback scores and updating the teacher model only when the score is below a threshold (0.0005). We achieve the target NER performance in the spoken language domain and improve performance in the written language domain by proposing a straightforward rollback method that reverts to the best model based on the scarce human-labeled data. Further improvement is achieved by adjusting the label vector weights in the named entity dictionary.
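
The two training-step modifications described above can be sketched as follows. This PyTorch fragment uses dummy linear models and random data, and its feedback score (the change in the student's human-labeled loss after its update) is a stand-in for the paper's feedback mechanism; only the averaging of the two losses and the threshold-gated teacher update mirror the abstract.

```python
# Sketch of (1) averaging human- and pseudo-labeled losses for the student and
# (2) updating the teacher only when a feedback score is below a threshold.
# Models, data, and the feedback definition are stand-ins, not the paper's setup.
import torch
import torch.nn as nn

NUM_TAGS, DIM = 5, 16
student = nn.Linear(DIM, NUM_TAGS)
teacher = nn.Linear(DIM, NUM_TAGS)
opt_s = torch.optim.SGD(student.parameters(), lr=0.1)
opt_t = torch.optim.SGD(teacher.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss()
THRESHOLD = 0.0005                     # feedback threshold from the abstract

def train_step(x_human, y_human, x_unlabeled):
    # Teacher produces pseudo-labels for the unlabeled batch
    pseudo = teacher(x_unlabeled).argmax(dim=-1).detach()

    # (1) Student: average of the human-labeled and pseudo-labeled losses
    loss_h = ce(student(x_human), y_human)
    loss_p = ce(student(x_unlabeled), pseudo)
    loss_student = 0.5 * (loss_h + loss_p)
    opt_s.zero_grad()
    loss_student.backward()
    opt_s.step()

    # (2) Teacher: a stand-in feedback score (improvement of the student's
    # human-labeled loss after its update); update only below the threshold.
    with torch.no_grad():
        feedback = loss_h.item() - ce(student(x_human), y_human).item()
    if feedback < THRESHOLD:
        loss_teacher = ce(teacher(x_human), y_human)
        opt_t.zero_grad()
        loss_teacher.backward()
        opt_t.step()
    return loss_student.item(), feedback

x_h, y_h = torch.randn(8, DIM), torch.randint(0, NUM_TAGS, (8,))
x_u = torch.randn(8, DIM)
print(train_step(x_h, y_h, x_u))
```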