• Title/Summary/Keyword: Wikipedia


Improving the Biography Archive Service of Wikipedia (위키피디아 인물 아카이브 서비스 개선을 위한 분석 연구)

  • Choi, Sanghee
    • Journal of the Korean Society for Library and Information Science, v.52 no.1, pp.447-467, 2018
  • Biographical information is usually collected and provided by companies or institutes that apply specific standards when selecting people to cover. Recently, user-oriented content services such as Wikipedia have begun offering biographical information through the Wikipedia biography portal, where users select people and freely describe them. This study collected 500 biographical entries from three categories of the Wikipedia biography portal: criminals, faculty, and directors. The contents of each category were analyzed with word frequency and a divergence indicator to identify the characteristics of each category. The results show that the divergence indicator effectively represents the factors that differentiate the categories. The study provides word clouds of the top 100 words by divergence indicator, along with the top 100 common words of the three categories by word frequency, as a guide for users writing about a person in these categories and for editors accepting and monitoring user-contributed biographies.
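
As an illustration of the analysis above, here is a minimal Python sketch. The paper's exact divergence indicator is not spelled out here, so this assumes a simple log-ratio of a word's relative frequency in one category versus the others; the category names and texts are placeholders.

```python
import math
from collections import Counter

def word_freqs(docs):
    """Count word occurrences over a list of documents (whitespace tokens)."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

def divergence(word, target_counts, other_counts, smoothing=1.0):
    """Log-ratio of the word's relative frequency in the target category
    versus all other categories (higher = more characteristic)."""
    p_target = (target_counts[word] + smoothing) / (sum(target_counts.values()) + smoothing)
    p_other = (other_counts[word] + smoothing) / (sum(other_counts.values()) + smoothing)
    return math.log(p_target / p_other)

categories = {  # placeholder biography snippets per portal category
    "criminals": ["convicted of fraud in 1999", "arrested after the incident"],
    "faculty": ["professor of physics", "published research on optics"],
    "directors": ["directed the film", "won the award for best picture"],
}

freqs = {cat: word_freqs(docs) for cat, docs in categories.items()}
for cat, counts in freqs.items():
    others = Counter()
    for other_cat, c in freqs.items():
        if other_cat != cat:
            others.update(c)
    top = sorted(counts, key=lambda w: divergence(w, counts, others), reverse=True)[:100]
    print(cat, top[:5])  # most characteristic words per category
```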

Keyword Selection for Visual Search based on Wikipedia (비주얼 검색을 위한 위키피디아 기반의 질의어 추출)

  • Kim, Jongwoo;Cho, Soosun
    • Journal of Korea Multimedia Society, v.21 no.8, pp.960-968, 2018
  • Mobile visual search services use a query image to acquire linked information by searching a pre-constructed database. For this purpose, it would be more useful to perform the search on a web-based keyword search system instead of a pre-built database. In this paper, we propose an algorithm that extracts representative queries to be used as keywords in a web-based search system. To do this, we use image classification labels generated by a deep-learning CNN (Convolutional Neural Network), which shows remarkable performance in image recognition. In the query extraction algorithm, dictionary-meaningful words are identified using Wikipedia, and hierarchical categories are constructed using WordNet. The performance of the proposed algorithm is evaluated by measuring system response time.
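
To make the pipeline concrete, the sketch below shows one plausible reading of the two steps: CNN labels are kept only if they resolve to Wikipedia article titles, and WordNet hypernym paths supply the hierarchical categories. The MediaWiki API call, NLTK WordNet usage, and example labels are illustrative assumptions, not the paper's exact implementation.

```python
import requests
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def is_wikipedia_title(term, lang="en"):
    """Check whether a term resolves to an existing Wikipedia article."""
    r = requests.get(
        f"https://{lang}.wikipedia.org/w/api.php",
        params={"action": "query", "titles": term, "format": "json"},
        timeout=10,
    )
    pages = r.json()["query"]["pages"]
    return "-1" not in pages  # page id -1 means the title does not exist

def hypernym_chain(term):
    """Return a coarse category path for a term using WordNet hypernyms."""
    synsets = wn.synsets(term.replace(" ", "_"), pos=wn.NOUN)
    if not synsets:
        return []
    path = synsets[0].hypernym_paths()[0]  # root-to-synset path
    return [s.lemma_names()[0] for s in path]

cnn_labels = ["tabby cat", "Egyptian cat", "laptop"]  # placeholder CNN output
keywords = [label for label in cnn_labels if is_wikipedia_title(label)]
for kw in keywords:
    print(kw, "->", hypernym_chain(kw))
```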

Term Extraction for Ontology Concept Recognition in Wikipedia (Wikipedia에서 온톨로지 개념 인식을 위한 핵심어 추출)

  • Ko, Byeong-Kyu;Kim, Pan-Koo
    • Proceedings of the Korea Information Processing Society Conference, 2010.04a, pp.344-347, 2010
  • Ontologies, the knowledge bases for semantic information processing that have recently attracted attention, must specify precise knowledge processing and inference relations through formalized representations, so the importance of ontology expansion is also being emphasized. Existing methods for ontology expansion are either manual work by experts or semi-automated methods that use statistical probability distributions derived from analyzing common dictionaries or thesauri. This paper therefore proposes a method that collects only documents of a specific domain from Wikipedia, identifies key terms within those documents through a key sentence extraction process, and uses them as information for recognizing ontology concepts.
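
The described flow can be illustrated with a small sketch. Here TF-IDF stands in for the paper's unspecified sentence-importance measure: sentences are scored, the top sentences are kept, and their highest-weight terms become ontology concept candidates. The function names and example sentences are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def key_terms(domain_sentences, n_sentences=5, n_terms=10):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(domain_sentences)         # sentence-term matrix
    scores = X.sum(axis=1).A1                       # importance per sentence
    top_idx = scores.argsort()[::-1][:n_sentences]  # key sentences
    weights = X[top_idx].sum(axis=0).A1             # term weights within key sentences
    terms = vec.get_feature_names_out()
    return [terms[i] for i in weights.argsort()[::-1][:n_terms]]

sentences = [  # placeholder domain sentences collected from Wikipedia
    "An ontology formally represents knowledge as a set of concepts.",
    "Wikipedia articles in a domain can be mined for candidate concepts.",
    "Key sentence extraction selects the most informative sentences.",
]
print(key_terms(sentences, n_sentences=2, n_terms=5))
```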

Tagged Web Image Retrieval Re-ranking with Wikipedia-based Semantic Relatedness (위키피디아 기반의 의미 연관성을 이용한 태깅된 웹 이미지의 검색순위 조정)

  • Lee, Seong-Jae;Cho, Soo-Sun
    • Journal of Korea Multimedia Society, v.14 no.11, pp.1491-1499, 2011
  • Nowadays, making good use of tags is a general tendency when users upload or search multimedia data such as images and videos on the Web. In this paper, we introduce an approach that calculates the semantic importance of tags and uses it to re-rank tagged Web image retrieval results. Most photo images stored on the Web carry many tags added by users' subjective judgments rather than by importance, so simple matching of tags against a given query causes precision to decrease. If we can select semantically important tags and employ them in image search, the retrieval results can be enhanced. We propose a method that re-ranks image retrieval results using key tags, that is, tags that share more semantic information with the query or with other tags, based on Wikipedia-based semantic relatedness. Using relatedness calculated from the huge online encyclopedia Wikipedia, our experimental results show the superiority of the method in precision and recall.
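
A common way to compute Wikipedia-based semantic relatedness is the Milne-Witten Wikipedia Link-based Measure over article in-link sets; the sketch below uses it as a stand-in for the paper's relatedness measure and scores each tag by its average relatedness to the other tags on the image. The in-link sets and article count are placeholders for values extracted from a Wikipedia dump.

```python
import math

WIKI_SIZE = 6_000_000  # assumed total number of Wikipedia articles

def wlm(inlinks_a, inlinks_b, n=WIKI_SIZE):
    """Milne-Witten relatedness: 1 for identical link sets, 0 for unrelated."""
    overlap = len(inlinks_a & inlinks_b)
    if overlap == 0:
        return 0.0
    a, b = len(inlinks_a), len(inlinks_b)
    dist = (math.log(max(a, b)) - math.log(overlap)) / (math.log(n) - math.log(min(a, b)))
    return max(0.0, 1.0 - dist)

def tag_importance(tag, all_tags, inlinks):
    """Average relatedness of a tag to the other tags on the same image."""
    others = [t for t in all_tags if t != tag and t in inlinks]
    if tag not in inlinks or not others:
        return 0.0
    return sum(wlm(inlinks[tag], inlinks[t]) for t in others) / len(others)

# placeholder in-link sets keyed by tag/article title
inlinks = {"beach": {1, 2, 3, 9}, "sea": {2, 3, 4, 9}, "party": {7, 8}}
tags = ["beach", "sea", "party"]
ranked = sorted(tags, key=lambda t: tag_importance(t, tags, inlinks), reverse=True)
print(ranked)  # semantically central tags first
```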

Building a Korean-English Parallel Corpus by Measuring Sentence Similarities Using Sequential Matching of Language Resources and Topic Modeling (언어 자원과 토픽 모델의 순차 매칭을 이용한 유사 문장 계산 기반의 위키피디아 한국어-영어 병렬 말뭉치 구축)

  • Cheon, JuRyong;Ko, YoungJoong
    • Journal of KIISE, v.42 no.7, pp.901-909, 2015
  • In this paper, we build a Korean-English parallel corpus from Wikipedia. We propose a method that finds similar sentences based on language resources and topic modeling. We first apply language resources (a Wiki-dictionary, numbers, and the Daum online dictionary) to match words sequentially; the Wiki-dictionary is constructed from titles in Wikipedia, and to take advantage of Wikipedia we use the translation probabilities in the Wiki-dictionary for word matching. In addition, we improve the accuracy of the sentence similarity measure by using word distributions from topic modeling. In experiments, a previous study achieved an F1-score of 48.4% with language resources alone in a linear combination, and 51.6% when additionally considering entire word distributions with topic modeling. Our sequential matching method, which adds translation probabilities to the language resources, achieves a result 9.9% better than the previous study (58.3%); when important word distributions are also considered through the sequential matching of language resources and topic modeling, the proposed system performs 7.5% better (59.1%).
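
The scoring idea can be sketched compactly: words are matched through a Wiki-dictionary that stores translation probabilities, and the resulting score is linearly combined with topic-distribution similarity. The dictionary entries, topic vectors, and combination weight below are placeholders, not the paper's tuned values.

```python
import math

# hypothetical Wiki-dictionary: Korean word -> {English word: P(en|ko)}
wiki_dict = {"위키피디아": {"wikipedia": 0.9}, "백과사전": {"encyclopedia": 0.8}}

def dict_match_score(ko_tokens, en_tokens):
    """Average translation probability over sequentially matched word pairs."""
    en_set, score, matched = set(en_tokens), 0.0, 0
    for ko in ko_tokens:
        for en, prob in wiki_dict.get(ko, {}).items():
            if en in en_set:
                score += prob
                matched += 1
                break
    return score / max(matched, 1)

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def sentence_similarity(ko_tokens, en_tokens, ko_topics, en_topics, alpha=0.7):
    """Linear combination of dictionary matching and topic similarity."""
    return alpha * dict_match_score(ko_tokens, en_tokens) + \
           (1 - alpha) * cosine(ko_topics, en_topics)

print(sentence_similarity(["위키피디아", "백과사전"],
                          ["wikipedia", "encyclopedia"],
                          [0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))
```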

English Word Game System Recognizing Newly Coined Words (신조어를 인식할 수 있는 영어단어 게임시스템)

  • Shim, Dong-uk;Park, So-young;Kim, Ki-sub;Kang, Han-gu;Jang, Jun-ho;Kim, Dae-woong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2009.05a, pp.521-524, 2009
  • Everyone can easily acquire learning materials in the rapidly developing web environment. As the importance of English education is emphasized day by day, many English education systems have been introduced. However, most previous English education systems support only a single-user mode and cannot deal with newly coined words such as 'WIKIPEDIA'. To foster users' learning ability with interest and enjoyment, this paper proposes an online English word game system implementing the 'Scrabble' board game. The proposed English word game system has the following characteristics. First, it supports both a single-user mode and a multi-user mode with a virtual user based on artificial intelligence. Second, it can recognize newly coined words such as 'WIKIPEDIA' by using the NAVER Open API dictionary. Third, it offers a familiar user interface so that a user can play the game without any manual. Therefore, the proposed system is expected to help users learn English words with interest and enjoyment.
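
The dictionary lookup that lets the game accept newly coined words might look like the sketch below: a bundled lexicon is checked first, with a fallback to an online dictionary service. The endpoint URL and response shape are hypothetical placeholders, not the actual NAVER Open API interface.

```python
import requests

LOCAL_LEXICON = {"CAT", "DOG", "WORD", "GAME"}  # bundled base dictionary

def is_valid_word(word, api_url="https://api.example.com/dict"):  # hypothetical URL
    """Accept a word if the local lexicon or the online dictionary knows it."""
    word = word.upper()
    if word in LOCAL_LEXICON:
        return True
    try:
        r = requests.get(api_url, params={"query": word}, timeout=5)
        return r.ok and r.json().get("found", False)  # hypothetical response shape
    except requests.RequestException:
        return False  # offline: fall back to rejecting unknown words

# a Scrabble move is scored only if every word it forms is valid
print(is_valid_word("cat"), is_valid_word("WIKIPEDIA"))
```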


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.43-61, 2019
  • The development of artificial intelligence technologies has accelerated rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems, such as learning and problem solving, related to human intelligence. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved more technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from the complex and informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question-answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires considerable expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the varied information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we explain a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triple structure. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into the form of triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments between CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
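
The training-data step lends itself to a short illustration. Below is a toy sketch, assuming whitespace tokenization, of how infobox attribute values could be projected onto article sentences as BIO tags to create distant supervision for the CRF/Bi-LSTM-CRF extractors; the sentence, infobox, and attribute names are invented examples, not the paper's data.

```python
def bio_tag(tokens, value_tokens, label):
    """Tag tokens with B-/I-<label> wherever the infobox value occurs, else O."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = f"B-{label}"
            for j in range(i + 1, i + n):
                tags[j] = f"I-{label}"
    return tags

sentence = "Ada Lovelace was born in London in 1815".split()
infobox = {"birthPlace": "London", "birthYear": "1815"}  # mapped via DBpedia schema

for attr, value in infobox.items():
    print(attr, list(zip(sentence, bio_tag(sentence, value.split(), attr))))
```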

English-Korean Cross-lingual Link Discovery Using Link Probability and Named Entity Recognition (링크확률과 개체명 인식을 이용한 영-한 교차언어 링크 탐색)

  • Kang, Shin-Jae
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.3, pp.191-195, 2013
  • This paper proposes an automatic method for discovering cross-lingual links from English Wikipedia documents to Korean ones in order to increase connectivity among vast web resources. Whereas existing methods only roughly estimate the link probability of phrases, our method selects candidate anchors from English documents using various information, such as title lists and link probabilities extracted from Wikipedia dumps, together with the results of named-entity recognition; the anchors are then translated into Korean words, and the Korean documents that best match those words are selected as cross-lingual links. The experimental results showed a MAP of 0.375.
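
Link probability is conventionally the fraction of a phrase's occurrences in a Wikipedia dump that appear as anchor text. The sketch below shows that calculation and a candidate-anchor filter that also admits recognized named entities; all counts and the threshold are placeholder values.

```python
# placeholder counts that would be extracted from a real Wikipedia dump
anchor_counts = {"machine learning": 4200, "new york": 9100, "the": 15}
occurrence_counts = {"machine learning": 6000, "new york": 12000, "the": 9_000_000}

def link_probability(phrase):
    """P(phrase is linked) = anchor occurrences / total occurrences."""
    total = occurrence_counts.get(phrase, 0)
    return anchor_counts.get(phrase, 0) / total if total else 0.0

def candidate_anchors(phrases, threshold=0.05, named_entities=()):
    """Keep phrases that link often enough or are recognized named entities."""
    return [p for p in phrases
            if link_probability(p) >= threshold or p in named_entities]

print(candidate_anchors(["machine learning", "new york", "the"],
                        named_entities={"new york"}))
```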

A Study on the User Acceptance Model of Mass Collective Intelligence (대중 집단지성의 사용자 수용 모형에 관한 연구)

  • Lee, Hyoung-Yong;Ahn, Hyun-Chul
    • Journal of Information Technology Applications and Management, v.17 no.4, pp.1-17, 2010
  • As web technologies evolve and so-called Web 2.0 technologies appear, collective intelligence is being applied in a wide range of areas. In general, mass collective intelligence like Wikipedia is created, revised, and managed by anonymous participants in an uncontrolled system. Thus, the knowledge provided by mass collective intelligence may be distorted and may not be true, which may affect user acceptance behavior. However, few academic studies have analyzed the factors that affect user acceptance of mass collective intelligence and their relationships. Against this academic background, we develop a model to examine how mass collective intelligence is accepted by users. The theoretical model is validated through an online survey of Wikipedia users from three universities in Korea. The results reveal that users have a positive attitude towards adopting mass collective knowledge when they perceive that the knowledge from mass collective intelligence is useful. We also find that the perceived usefulness of the knowledge is affected by perceived knowledge quality and trust in knowledge contributors. The results also suggest that perceived knowledge quality is determined by the perceived level of collaboration, perceived objectivity, and recipient expertise, whereas trust in knowledge contributors is determined by a natural propensity to trust and perceived objectivity. Theoretical and practical implications for mass collective knowledge are discussed.


Topical Clustering Techniques of Twitter Documents Using Korean Wikipedia (한글 위키피디아를 이용한 트위터 문서의 주제별 클러스터링 기법)

  • Chang, Jae-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.14 no.5, pp.189-196, 2014
  • Recently, the need to retrieve documents is growing in SNS environments such as Twitter. To support Twitter search, a clustering technique that classifies the mass of retrieved documents by topic is required. However, due to the nature of Twitter, there is a limit to applying previous simple techniques to clustering Twitter documents. To overcome this problem, we propose a new clustering technique suited to the Twitter environment. In the proposed method, we augment the feature vectors representing Twitter documents with new terms and recalculate the feature weights using Korean Wikipedia. We performed experiments with Korean Twitter documents and demonstrated the usability of the proposed method through performance comparisons with previous techniques.
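
One plausible rendering of this pipeline is sketched below: each tweet's term vector is augmented with related Korean Wikipedia titles, the weights are recomputed with TF-IDF, and the documents are clustered. The term-to-title mapping is a placeholder for lookups against a real Wikipedia index, and k-means stands in for the paper's unspecified clustering algorithm.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# hypothetical mapping from tweet terms to related Wikipedia article titles
wiki_related = {"갤럭시": ["삼성전자", "스마트폰"], "아이폰": ["애플", "스마트폰"]}

def augment(tweet):
    """Append related Wikipedia titles to the tweet's own terms."""
    terms = tweet.split()
    extra = [t for term in terms for t in wiki_related.get(term, [])]
    return " ".join(terms + extra)

tweets = ["갤럭시 신제품 출시", "아이폰 가격 인상", "오늘 날씨 맑음"]
docs = [augment(t) for t in tweets]

X = TfidfVectorizer().fit_transform(docs)      # recalculated feature weights
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(list(zip(tweets, labels)))               # topic cluster per tweet
```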