• Title/Summary/Keyword: Wikipedia (위키피디어)

Search results: 6 (processing time: 0.019 seconds)

A Semantic Text Model with Wikipedia-based Concept Space (위키피디어 기반 개념 공간을 가지는 시멘틱 텍스트 모델)

  • Kim, Han-Joon;Chang, Jae-Young
    • The Journal of Society for e-Business Studies
    • /
    • v.19 no.3
    • /
    • pp.107-123
    • /
    • 2014
  • Current text mining techniques suffer from the problem that conventional text representation models cannot express the semantic or conceptual information of textual documents written in natural language. Conventional text models, including the vector space model, the Boolean model, the statistical model, and the tensor space model, represent documents as bags of words: they express documents only with term literals for indexing and frequency-based weights for the corresponding terms, ignoring the semantic, sequential-order, and structural information of terms. Most text mining techniques have been developed on the assumption that documents are represented by such 'bag-of-words' models. In the big data era, however, a new paradigm of text representation model is required that can analyze huge amounts of textual documents more precisely. Our text model regards the 'concept' as an independent space on a par with the 'term' and 'document' spaces of the vector space model, and expresses the relatedness among the three spaces. To develop the concept space, we use Wikipedia data, in which each article defines a single concept. Consequently, a document collection is represented as a 3rd-order tensor with semantic information, and the proposed model is therefore called the text cuboid model in this paper. Through experiments on the popular 20NewsGroup document corpus, we demonstrate the superiority of the proposed text model in terms of document clustering and concept clustering.
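The document-term-concept relationship the abstract describes can be sketched as a 3rd-order tensor. In this hedged toy sketch (all term names, concept names, and numbers are invented for illustration and are not from the paper), a document-by-term count matrix is combined with a term-by-concept relatedness matrix to form the cuboid:

```python
import numpy as np

# Hypothetical toy data: 2 documents, 3 terms, and 2
# Wikipedia-derived concepts (names are illustrative only).
terms = ["tensor", "model", "wiki"]
concepts = ["Linear_algebra", "Wikipedia"]

# Term frequencies per document (document x term).
doc_term = np.array([
    [3, 2, 0],   # doc 0
    [0, 1, 4],   # doc 1
], dtype=float)

# Term-to-concept relatedness (term x concept), e.g. derived from
# each concept's Wikipedia article.
term_concept = np.array([
    [1.0, 0.0],  # "tensor" -> Linear_algebra
    [0.5, 0.5],  # "model"  -> both concepts
    [0.0, 1.0],  # "wiki"   -> Wikipedia
])

# 3rd-order tensor: entry [d, t, c] weights term t in document d
# by its relatedness to concept c.
cuboid = doc_term[:, :, None] * term_concept[None, :, :]

# Collapsing the term axis yields a document-by-concept matrix,
# a concept-level representation usable for document clustering.
doc_concept = cuboid.sum(axis=1)
```

Summing out different axes of the same cuboid yields document-level or concept-level views, which is what makes the three spaces interchangeable for clustering.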

Summarization of News Articles Based on Centroid Vector (중심 벡터에 기반한 신문 기사 요약)

  • Kim, Gwon-Yang
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.11a
    • /
    • pp.382-385
    • /
    • 2007
  • This paper presents a method that, given a query such as "Who is person X?", summarizes newspaper articles about that person by recognizing and extracting sentences describing information such as X's age, occupation, and education, or X's role in a particular event. To extract as many sentences relevant to the query terms as possible, a statistical method based on centroid vectors is applied, and to improve precision and recall, an improved weighting measure for centroid words that uses external knowledge such as Wikipedia is applied. Experiments on an Electronic Times newspaper corpus, using the 20 most frequently mentioned IT figures, showed that the proposed method improves performance.

  • PDF
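The centroid-vector idea above can be illustrated with a minimal sketch: build bag-of-words vectors for candidate sentences, form a centroid from the sentences mentioning the query term, and rank sentences by cosine similarity to that centroid. All sentences and the query term below are invented for illustration; plain term counts stand in for the paper's improved, Wikipedia-informed weights:

```python
import numpy as np

# Toy candidate sentences and a query term (illustrative only).
sentences = [
    "kim is a researcher in information technology",
    "the weather was sunny today",
    "kim studied computer science and leads an it lab",
]
query_terms = {"kim"}

# Bag-of-words vectors over the corpus vocabulary.
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

def vec(sent):
    v = np.zeros(len(vocab))
    for w in sent.split():
        v[idx[w]] += 1.0
    return v

vecs = [vec(s) for s in sentences]

# Centroid built from the sentences containing the query term.
relevant = [v for s, v in zip(sentences, vecs) if query_terms & set(s.split())]
centroid = np.mean(relevant, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Sentences closest to the centroid are kept for the summary.
scores = [cosine(v, centroid) for v in vecs]
best = sentences[int(np.argmax(scores))]
```

In this sketch the off-topic weather sentence scores lowest, while the person-describing sentences dominate the summary, mirroring the extraction behavior the paper targets.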

A Trend Analysis and Book Recommendation through Bigdata Analysis (빅데이터 분석을 통한 트렌드 파악 및 사용자 맞춤 도서 추천)

  • Kyungseo Yoon;Seungshik Kang
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.363-364
    • /
    • 2023
  • To identify trends from category-level bestsellers and to provide personalized book recommendations, book data are collected by category, and a word embedding model is built from the large-scale Wikipedia dataset. Book trends are identified by analyzing keywords in the book data and by analyzing key words per category with the LDA topic modeling technique, and functions for providing personalized book information and recommending books are implemented.
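The per-category key-word analysis described above can be sketched with a small self-contained example. Here a plain TF-IDF score stands in for the paper's keyword/LDA pipeline, and the category names and word lists are invented for illustration:

```python
import math
from collections import Counter

# Toy book-description text per category (illustrative only).
categories = {
    "science": "physics quantum physics experiment data",
    "novel": "love story character story plot",
    "economy": "market data finance market growth",
}

docs = {c: Counter(text.split()) for c, text in categories.items()}
n_docs = len(docs)

def tfidf(word, counts):
    # Term frequency within the category, discounted by how many
    # categories also use the word.
    tf = counts[word] / sum(counts.values())
    df = sum(1 for c in docs.values() if word in c)
    return tf * math.log(n_docs / df)

# The highest-scoring word per category serves as its trend keyword.
top_keywords = {
    c: max(counts, key=lambda w: tfidf(w, counts))
    for c, counts in docs.items()
}
```

Words shared across categories (like "data" here) are discounted, so each category surfaces the term most distinctive of its bestsellers.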

Design and Implementation of Collaborative Knowledge Management System for Collaborative Learning (협력학습을 위한 협력지식관리시스템의 설계 및 구현)

  • Han, Hee-Seop;Kim, Hyeoncheol
    • The Journal of Korean Association of Computer Education
    • /
    • v.10 no.2
    • /
    • pp.115-123
    • /
    • 2007
  • Collaborative knowledge is continuously produced and modified by the individuals in a group during collaboration, and it flourishes in a radical-trust environment such as a wiki; Wikipedia is the prime example. However, a serious problem arises: as the knowledge space grows ever wider, it becomes increasingly difficult to explore. To address this problem, we implemented collaborative knowledge management tools on top of a wiki: a navigation map that supports efficient exploration, and a knowledge map that supports convergent thinking within a group. In this study, we examined the effectiveness of the navigation map and the knowledge map.

  • PDF
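The navigation map above can be thought of as a directed graph over wiki pages, where efficient exploration amounts to finding short link paths. This is a hedged sketch of that idea, with page names and links invented for illustration (the paper's actual map design may differ):

```python
from collections import deque

# Toy wiki link graph: page -> pages it links to (illustrative only).
links = {
    "Home": ["Biology", "Physics"],
    "Biology": ["Genetics"],
    "Physics": ["Quantum"],
    "Genetics": ["Quantum"],
    "Quantum": [],
}

def shortest_path(start, goal):
    """Breadth-first search for the shortest link path between pages."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start

route = shortest_path("Home", "Quantum")
```

A navigation map surfaces exactly such short routes, so users are not forced to wander an ever-expanding link space.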

A Tensor Space Model based Semantic Search Technique (텐서공간모델 기반 시멘틱 검색 기법)

  • Hong, Kee-Joo;Kim, Han-Joon;Chang, Jae-Young;Chun, Jong-Hoon
    • The Journal of Society for e-Business Studies
    • /
    • v.21 no.4
    • /
    • pp.1-14
    • /
    • 2016
  • Semantic search refers to a series of activities and techniques that improve search accuracy by clearly understanding users' search intent without demanding significant cognitive effort from them. Semantic search engines usually require an ontology and semantic metadata to analyze user queries. However, building an ontology and semantic metadata for large amounts of data is a very time-consuming and costly task, which is why semantic search has seen little commercial adoption. To resolve this problem, we propose a novel semantic search method that takes advantage of our previous semantic tensor space model. Since in this model each term is represented as a 2nd-order 'document-by-concept' tensor (i.e., a matrix), and each concept as a 2nd-order 'document-by-term' tensor, the proposed semantic search method does not require building an ontology. Through extensive experiments on the OHSUMED document collection and SCOPUS journal abstract data, we show that the proposed method outperforms the vector space model-based search method.
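The ranking idea implied by the term-as-matrix representation can be sketched minimally: each query term contributes a document-by-concept matrix, the matrices are combined, and documents are ranked by the strength of their concept vectors. All matrices and numbers below are toy values for illustration, not the paper's data:

```python
import numpy as np

# Per-term 'document-by-concept' matrices for 3 documents and
# 2 concepts (illustrative values only).
term_matrices = {
    "tensor": np.array([[2.0, 0.0],
                        [0.0, 0.0],
                        [1.0, 1.0]]),
    "search": np.array([[0.0, 1.0],
                        [2.0, 0.0],
                        [0.0, 2.0]]),
}

def rank_documents(query_terms):
    # Combine the query terms' matrices, then score each document
    # by the magnitude of its concept vector (its matrix row).
    combined = sum(term_matrices[t] for t in query_terms)
    scores = np.linalg.norm(combined, axis=1)
    return [int(i) for i in np.argsort(-scores)]

ranking = rank_documents(["tensor", "search"])
```

Because scoring happens directly in the concept dimension of the tensor model, no separately built ontology is consulted at query time, which is the point the abstract makes.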

Evaluating the Impact of Training Conditions on the Performance of GPT-2-Small Based Korean-English Bilingual Models

  • Euhee Kim;Keonwoo Koo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.9
    • /
    • pp.69-77
    • /
    • 2024
  • This study evaluates the performance of second language acquisition models that learn Korean and English using the GPT-2-Small model, analyzing the impact of various training conditions on performance. Four training conditions were used: monolingual learning, sequential learning, sequential-interleaved learning, and sequential-EWC learning. The models were trained on Korean datasets from the National Institute of Korean Language and English data from the BabyLM Challenge, with performance measured by PPL and BLiMP. Monolingual learning performed best, with a PPL of 16.2 and a BLiMP accuracy of 73.7%. In contrast, sequential-EWC learning had the highest PPL, 41.9, and the lowest BLiMP accuracy, 66.3% (p < 0.05). Monolingual learning proved most effective for optimizing model performance, while the EWC regularization in sequential-EWC learning degraded performance by limiting weight updates and thereby hindering new language learning. This research improves our understanding of language modeling and contributes to making AI language learning more cognitively plausible.
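The EWC (Elastic Weight Consolidation) regularization named in the sequential-EWC condition adds a quadratic penalty that anchors weights deemed important for the first language. This is a minimal numpy sketch of that penalty, with all weights, Fisher values, and the strength λ invented for illustration (the study's actual training setup is not reproduced here):

```python
import numpy as np

# Weights learned on language 1 and their per-weight importance
# (diagonal Fisher information); toy values only.
theta_old = np.array([0.8, -0.3, 1.2])
fisher = np.array([5.0, 0.1, 2.0])
lam = 1.0  # regularization strength

def ewc_loss(theta, new_task_loss):
    # EWC objective: loss on the new language plus a quadratic
    # penalty that resists moving important old-language weights.
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)
    return float(new_task_loss + penalty)

# Shifting a high-importance weight is penalized far more than
# shifting a low-importance one, which is how EWC can limit the
# updates needed to learn the second language.
shift_important = ewc_loss(theta_old + np.array([0.5, 0.0, 0.0]), 0.0)
shift_minor = ewc_loss(theta_old + np.array([0.0, 0.5, 0.0]), 0.0)
```

The asymmetry between the two shifts illustrates the trade-off the abstract reports: the same penalty that protects the first language restricts the weight updates available for the second.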