• Title/Summary/Keyword: Ontology-Based Search System (온톨로지 기반 검색시스템)

A Logical Cell-Based Approach for Robot Component Repositories (논리적 셀 기반의 로봇 소프트웨어 컴포넌트 저장소)

  • Koo, Hyung-Min; Ko, In-Young
    • Journal of KIISE: Software and Applications / v.34 no.8 / pp.731-742 / 2007
  • Self-growing software is a software system that can evolve its functionalities and configurations by itself based on dynamically monitored situations. Self-growing software is especially necessary for intelligent service robots, which must be able to monitor their surrounding environments and provide appropriate behaviors for human users. However, it is hard to anticipate all the situations that robots face, and it is impractical to equip robots with all the functionalities needed for various environments. In addition, robots have limited internal capacity. To support self-growing software for intelligent service robots, we are developing a cell-based distributed repository system that allows robots and developers to transparently share robot functionalities. To realize evolutionary repositories, we introduce the concept of a cell: a logical group of distributed repositories organized by the functionalities of their components. A cell also serves as the unit for the evolutionary growth of the components within the repositories. In this paper, we describe the requirements and architecture of the cell-based repository system for self-growing software, and we present a prototype implementation and an experimental evaluation of the system. Through the cell-based repositories, we achieve improved performance of self-growing actions for robots and efficient sharing of components among robots and developers.
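
The cell abstraction described in this abstract lends itself to a compact illustration. The sketch below is a minimal reading of the idea, not the authors' implementation: all names and types are hypothetical. It groups distributed repositories into cells by component functionality, so a lookup is routed by functionality and the caller never needs to know which physical repository holds the component.

```python
# Minimal sketch of the cell concept: hypothetical names, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class Repository:
    """One distributed repository holding robot software components."""
    name: str
    components: dict[str, bytes] = field(default_factory=dict)  # id -> binary


@dataclass
class Cell:
    """A logical group of repositories sharing one functionality category."""
    functionality: str  # e.g. "navigation", "vision"
    repositories: list[Repository] = field(default_factory=list)

    def find(self, component_id: str) -> bytes | None:
        # Search every repository in the cell; which physical repository
        # stores the component stays transparent to the caller.
        for repo in self.repositories:
            if component_id in repo.components:
                return repo.components[component_id]
        return None


class CellRegistry:
    """Routes a robot's request to the cell for the requested functionality."""

    def __init__(self) -> None:
        self.cells: dict[str, Cell] = {}

    def cell_for(self, functionality: str) -> Cell:
        # Creating cells on demand is one simple way a repository
        # system can "grow" as new functionality categories appear.
        return self.cells.setdefault(functionality, Cell(functionality))

    def fetch(self, functionality: str, component_id: str) -> bytes | None:
        return self.cell_for(functionality).find(component_id)
```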

A Personal Memex System Using Uniform Representation of the Data from Various Devices (다양한 기기로부터의 데이터 단일 표현을 통한 개인 미멕스 시스템)

  • Min, Young-Kun; Lee, Bog-Ju
    • The KIPS Transactions: Part B / v.16B no.4 / pp.309-318 / 2009
  • Research on systems that automatically record and retrieve a person's everyday life has recently become quite active. These systems, called personal memex or life-log systems, usually entail dedicated devices such as the SenseCam of the MyLifeBits project. This research instead focuses on the digital devices people use every day, such as mobile phones, credit cards, and digital cameras. The proposed system enables a person to systematically store the everyday-life data saved on these devices or on device-related web pages (e.g., call records kept by the mobile carrier) and to retrieve it quickly later. The data collection agent of the proposed system, called MyMemex, collects personal life-log "web data" using the web services that the web sites provide and stores it on the server. "File data" stored on offline digital devices is also loaded onto the server. Each item of file data or web data is viewed as a memex event that can be described in 4W1H form, and the different types of data from different services are transformed into memex event data in 4W1H form using the memex event ontology. Users can sign in to the web server of this service to view their life logs chronologically, and can also search the life logs using keywords. Moreover, the life logs can be viewed in a diary or story style by converting the memex events into sentences. Related memex events are grouped and displayed as an "episode" by a heuristic identification method. An experiment on episode identification using the real life-log data of one of the authors produced highly accurate results.
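
The 4W1H representation and the episode heuristic can be made concrete with a short sketch. Everything below is an assumed stand-in for the paper's ontology-driven transform and heuristic, not the authors' code: the field names, the call-record mapping, and the time-gap grouping rule are all hypothetical.

```python
# Illustrative only: 4W1H fields and the grouping rule are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class MemexEvent:
    """One life-log item normalized to 4W1H (who/what/when/where/how)."""
    who: str
    what: str
    when: datetime
    where: str
    how: str  # the device or service that produced the record


def from_call_record(record: dict) -> MemexEvent:
    # Map one source-specific schema (here, a hypothetical phone-company
    # call log) onto the uniform 4W1H representation.
    return MemexEvent(
        who=record["callee"],
        what="phone call",
        when=datetime.fromisoformat(record["timestamp"]),
        where=record.get("cell_area", "unknown"),
        how="mobile phone",
    )


def group_episodes(events: list[MemexEvent],
                   gap: timedelta = timedelta(hours=1)) -> list[list[MemexEvent]]:
    # Crude stand-in for the paper's heuristic: events separated by less
    # than `gap` in time are treated as belonging to the same episode.
    episodes: list[list[MemexEvent]] = []
    for ev in sorted(events, key=lambda e: e.when):
        if episodes and ev.when - episodes[-1][-1].when <= gap:
            episodes[-1].append(ev)
        else:
            episodes.append([ev])
    return episodes
```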

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho; Oh, Kyeong-Jin; Sean, Vi-Sal; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • Previously, ordinary users simply watched video content without any specific requirements or purposes. Today, however, users watching a video want to discover more about the things that appear in it. The demand for finding multimedia or browsing information about objects of interest is therefore growing with the increasing use of multimedia such as video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of objects in video content is inevitable, and for this reason many researchers have actively studied methods of annotating the objects that appear in a video. In keyword-based annotation, information related to an object appearing in the video content is added directly, and the annotation data, including all related information about the object, must be managed individually; users have to input all of the object's related information themselves. Consequently, when a user browses for information related to the object, only the limited resources that exist in the annotation data can be found. Annotating objects also imposes a heavy workload on users. To reduce this workload and minimize the effort involved in annotation, existing object-based approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects appearing in video content must all be detected and recognized, and fully automated annotation still faces significant difficulties. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in the video content so that semantic data can be accessed through Linked Data. The annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not include all of an object's relevant information, objects appearing in the video content are simply linked to existing resources in Linked Data, from which all the related information can be obtained. In other words, annotation data containing only a URI and metadata such as position, time, and size is stored on the annotation server; when a user needs other information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers, while watching the video, to browse interesting information about an object using the annotation data collaboratively generated by many users. With this system, a query is automatically generated through simple user interaction, the related information is retrieved from Linked Data, and the additional information about the object is presented to the user. In the future Semantic Web environment, the proposed system is expected to establish a better video content service by offering users relevant information about the objects that appear on the screen of any internet-capable device, such as a PC, smart TV, or smartphone.
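
The URI-plus-metadata design can be illustrated with a minimal sketch, under stated assumptions: the class and function names are hypothetical, and the query targets the public DBpedia SPARQL endpoint as one common Linked Data source rather than the authors' annotation server. An annotation stores only the resource URI and spatio-temporal metadata; the browsing module resolves everything else from Linked Data on demand.

```python
# Sketch of the two-module idea: the annotation server keeps only a URI plus
# position/time/size metadata; all other information comes from Linked Data.
# Hypothetical names; DBpedia is used as an example endpoint.
import json
import urllib.parse
import urllib.request
from dataclasses import dataclass


@dataclass
class Annotation:
    uri: str      # Linked Data resource identifying the object
    start: float  # seconds into the video
    end: float
    x: int        # bounding-box position and size in the frame
    y: int
    width: int
    height: int


def fetch_abstract(annotation: Annotation, lang: str = "en") -> str | None:
    # Auto-generate a query from the annotation's URI and retrieve the
    # resource's abstract from DBpedia.
    query = f"""
        SELECT ?abstract WHERE {{
            <{annotation.uri}> <http://dbpedia.org/ontology/abstract> ?abstract .
            FILTER (lang(?abstract) = "{lang}")
        }}"""
    url = ("https://dbpedia.org/sparql?" + urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"}))
    with urllib.request.urlopen(url) as resp:
        bindings = json.load(resp)["results"]["bindings"]
    return bindings[0]["abstract"]["value"] if bindings else None


# Example: a viewer clicks the annotated region; the browsing module resolves it.
ann = Annotation("http://dbpedia.org/resource/Eiffel_Tower",
                 start=12.0, end=18.5, x=40, y=60, width=200, height=350)
print(fetch_abstract(ann))
```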