• Title/Summary/Keyword: Retrieval systems


Mixed Mobile Education System using SIFT Algorithm (SIFT 알고리즘을 이용한 혼합형 모바일 교육 시스템)

  • Hong, Kwang-Jin;Jung, Kee-Chul;Han, Eun-Jung;Yang, Jong-Yeol
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.2
    • /
    • pp.69-79
    • /
    • 2008
  • The popularization of the wireless Internet and mobile devices has created the infrastructure for a ubiquitous environment in which users can get whatever information they want, anytime and anywhere. Accordingly, a variety of fields, including education, are studying methods for efficient information transmission using on-line and off-line contents. In this paper, we propose the Mixed Mobile Education system (MME), which improves educational efficiency using on-line and off-line contents on mobile devices. Systems that rely on additional tags make it hard to register new data and cannot handle similar off-line contents; the proposed system therefore uses no additional tags, but recognizes off-line contents by extracting feature points from images captured with the mobile camera. We use the Scale Invariant Feature Transform (SIFT) algorithm to extract feature points that are robust to noise, color distortion, scaling, and rotation in images captured by a low-resolution camera. We also use a client-server architecture to overcome the limited storage of mobile devices and to ease registration and modification of data. Experimental results comparing the proposed system with previous work show its advantages and disadvantages, and demonstrate good efficiency in various environments.

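The abstract above gives no implementation details, but the matching step that follows SIFT feature extraction is typically a nearest-neighbor search over descriptors with Lowe's ratio test. A minimal, hypothetical sketch in Python (the toy 2-D descriptors and the 0.8 ratio threshold are illustrative assumptions; real SIFT descriptors are 128-dimensional):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_match(query_desc, ref_desc, ratio=0.8):
    """Match each query descriptor to its nearest reference descriptor,
    keeping the match only if the nearest distance is clearly smaller
    than the second-nearest (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query_desc):
        dists = sorted((euclidean(q, r), ri) for ri, r in enumerate(ref_desc))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

In the client-server architecture the abstract describes, the mobile client would extract descriptors from the camera image and the server would run a matching step of this kind against the registered off-line contents.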

An Interconnection Method for Streaming Framework and Multimedia Database (스트리밍 프레임워크와 멀티미디어 데이타베이스와의 연동기법)

  • Lee, Jae-Wook;Lee, Sung-Young;Lee, Jong-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.7
    • /
    • pp.436-449
    • /
    • 2002
  • This paper describes our experience of developing the Database Connector, an interconnection method between a multimedia database and the streaming framework. If an interconnection method is provided between a streaming system and multimedia databases, it becomes possible to support diverse and mature multimedia database services, such as retrieval and join operations, during streaming. The currently available interconnection schemes, however, have mainly used file systems or relational databases, implemented so that meta data, which describes the multimedia contents, is kept separate from the streaming data, which is the multimedia data itself. Consequently, existing interconnection mechanisms cannot exploit many virtues of multimedia database services during streaming. To resolve these drawbacks, we propose a novel scheme for interconnecting the streaming framework and a multimedia database, called the Inter-Process Communication (IPC) based Database Connector, under the assumption that the two systems are located on the same host. We define four transaction primitives, Read, Write, Find, and Play, and define a plug-in based interface for these transactions, which can consequently be extended to other multimedia databases in the future. Our simulation study shows that the performance of the proposed IPC-based interconnection scheme is not far behind that of file systems.
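The paper's concrete plug-in interface is not reproduced in the abstract; the following Python sketch only illustrates the idea of serving the four transaction primitives over a same-host IPC channel (the message format and the in-memory store standing in for the multimedia database are assumptions):

```python
import threading
from multiprocessing import Pipe

class DatabaseConnector:
    """Hypothetical sketch: the streaming framework sends one of the four
    transaction primitives (Read, Write, Find, Play) over a local pipe,
    and the connector answers from the multimedia database (a dict here)."""

    def __init__(self):
        self.store = {}

    def handle(self, req):
        op, key = req.get("op"), req.get("key")
        if op == "Write":
            self.store[key] = req.get("value")
            return {"ok": True}
        if op == "Read":
            return {"ok": key in self.store, "value": self.store.get(key)}
        if op == "Find":
            return {"ok": True, "keys": [k for k in self.store if req.get("term", "") in k]}
        if op == "Play":
            # a real connector would hand the framework a stream handle here
            return {"ok": key in self.store, "stream": "ipc://" + str(key)}
        return {"ok": False, "error": "unknown primitive"}

def serve_one(conn, connector):
    """Server side: answer a single request arriving over the pipe."""
    conn.send(connector.handle(conn.recv()))

def request(conn, payload):
    """Client side: send a primitive and wait for the reply."""
    conn.send(payload)
    return conn.recv()
```

A second multimedia database could be plugged in by substituting another object with the same `handle` interface, which is the spirit of the plug-in design the abstract mentions.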

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order, and current search tools cannot retrieve the documents related to a retrieved document from the gigantic body of documents. The most important problem for many current search systems is to increase the quality of search: to provide related documents, and to keep the number of unrelated documents in the search results as low as possible. For this problem, CiteSeer proposed Autonomous Citation Indexing (ACI) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In detail, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the article with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make, because it indexes only the links researchers create when they cite other articles; for the same reason, CiteSeer does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts the subject and predicate of each sentence in a document. The document is transformed into a tabular form in which each extracted predicate is checked against its possible subjects and objects. We build a hierarchical graph of each document from this table and then integrate the graphs of the documents. Using the graph of the entire document collection, we compute the area of each document relative to the integrated documents, and we mark the relations among documents by comparing their areas. We also propose a method for structural integration of documents that retrieves documents from the graph, making it easier for the user to find information. We compared the performance of the proposed approach with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, an improvement of about 15%.
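As a rough illustration of the tabular form described above — and with a Jaccard-style overlap standing in for the paper's area comparison, whose exact definition the abstract does not give — one might build and compare per-document predicate tables like this:

```python
def to_table(triples):
    """Build the tabular form: predicate -> set of (subject, object)
    pairs extracted from a document's sentences."""
    table = {}
    for subj, pred, obj in triples:
        table.setdefault(pred, set()).add((subj, obj))
    return table

def overlap(table_a, table_b):
    """Stand-in for the area comparison: the fraction of shared
    (predicate, subject, object) entries between two documents."""
    flat_a = {(p, s, o) for p, pairs in table_a.items() for s, o in pairs}
    flat_b = {(p, s, o) for p, pairs in table_b.items() for s, o in pairs}
    union = flat_a | flat_b
    return len(flat_a & flat_b) / len(union) if union else 0.0
```

The triples themselves would come from a subject/predicate extraction step over each sentence, which the abstract describes but does not specify.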

A Study on Image Indexing Method based on Content (내용에 기반한 이미지 인덱싱 방법에 관한 연구)

  • Yu, Won-Gyeong;Jeong, Eul-Yun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.6
    • /
    • pp.903-917
    • /
    • 1995
  • In most database systems, images have been indexed indirectly using related texts such as captions, annotations, and image attributes. But there has been an increasing requirement for image database systems that support the storage and retrieval of images directly by content, using the information contained in the images themselves. There have been a few indexing methods based on content. Among them, Pertains proposed an image indexing method considering the spatial relationships and properties of the objects forming the images, an expansion of other studies based on the 2-D string. But this method needs too much storage space and lacks flexibility. In this paper, we propose a more flexible index structure based on a kd-tree using paging techniques. We show an example of extracting keys using normalization from the raw image. Simulation results show that our method improves flexibility and needs much less storage space.

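The abstract does not include the index structure itself; a textbook kd-tree with insertion and nearest-neighbor search, sketched in Python over hypothetical image feature keys, shows the kind of structure being proposed (the paging layer is omitted):

```python
class KDNode:
    """One kd-tree node: a k-dimensional feature point and its image key."""
    def __init__(self, point, key):
        self.point, self.key = point, key
        self.left = self.right = None

def insert(root, point, key, depth=0):
    """Insert a point, splitting on axis (depth mod k) at each level."""
    if root is None:
        return KDNode(point, key)
    axis = depth % len(point)
    if point[axis] < root.point[axis]:
        root.left = insert(root.left, point, key, depth + 1)
    else:
        root.right = insert(root.right, point, key, depth + 1)
    return root

def nearest(root, target, depth=0, best=None):
    """Return (squared distance, key) of the point nearest to target,
    pruning the far subtree when it cannot contain a closer point."""
    if root is None:
        return best
    d = sum((a - b) ** 2 for a, b in zip(root.point, target))
    if best is None or d < best[0]:
        best = (d, root.key)
    axis = depth % len(target)
    diff = target[axis] - root.point[axis]
    near, far = (root.left, root.right) if diff < 0 else (root.right, root.left)
    best = nearest(near, target, depth + 1, best)
    if diff ** 2 < best[0]:
        best = nearest(far, target, depth + 1, best)
    return best
```

In the paper's setting the points would be keys extracted by normalization from the raw images, so similar images cluster near each other in the tree.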

Personal Information Management Based on the Concept Lattice of Formal Concept Analysis (FCA 개념 망 기반 개인정보관리)

  • Kim, Mi-Hye
    • Journal of Internet Computing and Services
    • /
    • v.6 no.6
    • /
    • pp.163-178
    • /
    • 2005
  • The ultimate objective of Personal Information Management (PIM) is to collect, handle, and manage wanted information in a systematic way that enables individuals to search the information more easily and effectively. However, existing personal information management systems are usually based on a traditional hierarchical directory model for storing information, which limits effective organization and retrieval of information and provides little support for search by the associative interrelationships between objects (documents) and their attributes. To address these problems, in this paper we propose a personal information management model based on the concept lattice of Formal Concept Analysis (FCA) that makes it easy to build and maintain one's own information on the Web. The proposed system overcomes the limitations of the traditional hierarchy approach and, beyond narrow search, supports retrieval of other useful information through the interrelationships between objects and their attributes in the concept lattice of FCA.

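The core machinery behind concept-lattice retrieval of this kind is the pair of FCA derivation operators between objects and attributes; the following is a minimal sketch (the example documents and attributes in the test are invented):

```python
def extent(context, attrs):
    """Derivation operator A': the objects that have every attribute in
    attrs, given a context mapping object -> set of attributes."""
    return {obj for obj, a in context.items() if attrs <= a}

def intent(context, objs):
    """Derivation operator B': the attributes shared by every object."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set()
```

A formal concept is a pair (objects, attributes) closed under both operators; navigating between such pairs is what supports the associative search the abstract describes, as opposed to descending a fixed directory tree.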

Estimation of Corn Growth by Radar Scatterometer Data

  • Kim, Yihyun;Hong, Sukyoung;Lee, Kyoungdo;Na, Sangil;Jung, Gunho
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.47 no.2
    • /
    • pp.85-91
    • /
    • 2014
  • Ground-based polarimetric scatterometers are effective tools for monitoring crop growth with multiple polarizations, frequencies, and incident angles. An important advantage of these systems is that they permit temporal observation of a specific crop target. Polarimetric backscatter data at L-, C-, and X-bands were acquired every 10 minutes. We analyzed the relationships between the L-, C-, and X-band signatures and biophysical measurements over the whole corn growth period. The Vertical transmit and Vertical receive polarization (VV) backscattering coefficients for all bands were greater than those of the Horizontal transmit and Horizontal receive polarization (HH) until early July; thereafter, HH-polarization was greater than VV-polarization or Horizontal transmit and Vertical receive polarization (HV) until the harvesting stage (Day Of Year, DOY 240). Correlation analysis between the backscattering coefficients for all bands and the corn growth data showed that L-band HH-polarization (L-HH) was the best suited for monitoring the fresh weight ($r=0.95^{***}$), dry weight ($r=0.95^{***}$), leaf area index ($r=0.86^{**}$), and vegetation water content ($r=0.93^{***}$). Retrieval equations were developed for estimating corn growth parameters using L-HH. The results indicated that L-HH could be used to estimate the vegetation biophysical parameters considered here with high accuracy. These results can be useful in determining the frequency and polarization of satellite Synthetic Aperture Radar systems and in designing a future ground-based microwave system for long-term monitoring of corn.
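The paper's retrieval equations are not given in the abstract; as an illustration of their general form, a linear fit of a growth parameter against L-HH backscatter can be computed by ordinary least squares (the numbers in the test are synthetic, not the paper's data):

```python
def fit_line(x, y):
    """Ordinary least squares fit y = a*x + b, e.g. dry weight against
    L-HH backscattering coefficient (dB)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def estimate(a, b, sigma0):
    """Apply the fitted retrieval equation to a new backscatter value."""
    return a * sigma0 + b
```

Published retrieval equations for biomass are often fitted in log or polynomial form rather than a straight line; this sketch only shows the fitting-and-inversion pattern.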

E-Discovery Process Model and Alternative Technologies for an Effective Litigation Response of the Company (기업의 효과적인 소송 대응을 위한 전자증거개시 절차 모델과 대체 기술)

  • Lee, Tae-Rim;Shin, Sang-Uk
    • Journal of Digital Convergence
    • /
    • v.10 no.8
    • /
    • pp.287-297
    • /
    • 2012
  • In order to prepare for the introduction of the E-Discovery system from the United States and to cope with possible changes in legal systems, we propose a general E-Discovery process and the essential tasks of each phase. The proposed process model is designed from an analysis of well-known projects such as EDRM and The Sedona Conference, which represent advanced research toward standardizing E-Discovery task procedures and supplying guidelines to hands-on workers. In addition, machine learning algorithms, open-source libraries for information retrieval, and distributed processing technologies based on Hadoop for big data are introduced, and their application to the E-Discovery work scenario is proposed. All this information will be useful to vendors and people willing to develop E-Discovery service solutions. It is also very helpful to company owners willing to rebuild their business processes, and it enables people who are about to face a major lawsuit to handle the situation effectively.
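As a hypothetical illustration of the Hadoop-style distributed processing mentioned above, a document-responsiveness filter can be phrased as a map step and a reduce step (the keywords and documents here are invented; a real deployment would run these functions over a cluster):

```python
def map_phase(doc_id, text, keywords):
    """Map step: emit a (keyword, doc_id) pair for every search keyword
    the document contains -- a responsiveness filter over one document."""
    tokens = set(text.lower().split())
    return [(kw, doc_id) for kw in keywords if kw in tokens]

def reduce_phase(pairs):
    """Reduce step: group responsive document ids under each keyword."""
    grouped = {}
    for kw, doc_id in pairs:
        grouped.setdefault(kw, set()).add(doc_id)
    return grouped
```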

New Re-ranking Technique based on Concept-Network Profiles for Personalized Web Search (웹 검색 개인화를 위한 개념네트워크 프로파일 기반 순위 재조정 기법)

  • Kim, Han-Joon;Noh, Joon-Ho;Chang, Jae-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.2
    • /
    • pp.69-76
    • /
    • 2012
  • This paper proposes a novel way of personalizing web search by re-ranking the search results with user profiles of concept-network structure. Personalized search systems need to be based on user profiles that capture users' search patterns, and they actively use these profiles to expand initial queries or to re-rank the search results. The proposed method is a re-ranking personalized search method integrated with a query expansion facility. The method identifies documents that occur commonly among the different search results of the expanded queries, and re-ranks the results by their degree of co-occurrence. We show that the proposed method outperforms conventional ones through empirical web searches with a number of actual users who have diverse information needs and query intents.
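The co-occurrence re-ranking described above can be sketched directly: count how many expanded-query result lists each document appears in and sort by that count (breaking ties by best original rank is an assumption, not stated in the abstract):

```python
from collections import Counter

def rerank(result_lists):
    """Re-rank by how many expanded-query result lists each document
    appears in; ties are broken by the document's best original rank."""
    count = Counter()
    best_rank = {}
    for results in result_lists:
        for rank, doc in enumerate(results):
            count[doc] += 1
            best_rank[doc] = min(best_rank.get(doc, rank), rank)
    return sorted(count, key=lambda d: (-count[d], best_rank[d]))
```

Each inner list stands for the result of one query expanded from the user's concept-network profile; the construction of those expansions is the part the profile itself drives.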

Fingerprint Identification Using the Distribution of Ridge Directions (방향분포를 이용한 지문인식)

  • Kim Ki-Cheol;Choi Seung-Moon;Lee Jung-Moon
    • Journal of Digital Contents Society
    • /
    • v.2 no.2
    • /
    • pp.179-189
    • /
    • 2001
  • This paper aims at faster processing and retrieval in fingerprint identification systems by reducing the amount of preprocessing and the size of the feature vector. The distribution of fingerprint directions is a set of local directions of ridges and furrows in small overlapping blocks of a fingerprint image. It is extracted initially as a set of 8-direction components through the Gabor filter bank. The discontinuous distribution of directions is smoothed to a continuous one and visualized as a direction image. The center of the distribution is then selected as a reference point. A feature vector is composed of 192 sine values of the ridge angles, taken at 32 equiangular positions at each of 6 different distances from the reference point in the direction image. Experiments show that the proposed algorithm achieves the same level of correct identification as a conventional algorithm while speeding up the overall processing significantly by reducing the length of the feature vector.

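A sketch of the 192-element feature vector described above — sines of the ridge angle sampled at 32 equiangular positions on each of 6 radii around the reference point — assuming a caller-supplied direction field and illustrative radii (the paper's actual distances are not given):

```python
import math

def feature_vector(direction_at, radii=(10, 20, 30, 40, 50, 60), n_angles=32):
    """Sample the ridge-direction field at n_angles equiangular positions
    on each radius around the reference point (taken as the origin), and
    record the sine of the local ridge angle: 6 * 32 = 192 values.
    direction_at(x, y) must return the ridge angle in radians."""
    vec = []
    for r in radii:
        for i in range(n_angles):
            theta = 2 * math.pi * i / n_angles
            x, y = r * math.cos(theta), r * math.sin(theta)
            vec.append(math.sin(direction_at(x, y)))
    return vec
```

Matching would then reduce to comparing two such 192-element vectors, which is what makes the reduced vector length translate directly into faster retrieval.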

Design of Semantic Models for Teaching and Learning based on Convergence of Ontology Technology (온톨로지 기술 융합을 통합 교수학습 시맨틱 모델 설계)

  • Chung, Hyun-Sook;Kim, Jeong-Min
    • Journal of the Korea Convergence Society
    • /
    • v.6 no.3
    • /
    • pp.127-134
    • /
    • 2015
  • In this paper, we design a semantic-based syllabus template that includes learning ontologies. A syllabus has been considered an important blueprint of teaching in universities. However, current syllabi carry little weight in practice because most syllabus management systems provide only simple functionalities such as creation, modification, and retrieval. Our approach consists of defining the hierarchical structure of a syllabus and the semantic relationships among syllabi, formalizing learning goals, learning activities, and learning evaluation using Bloom's taxonomy, and designing learning subject ontologies to improve the usability of the syllabus. We demonstrate the correctness of the proposed methods by implementing a real syllabus for a Java programming course and by experiments in syllabus retrieval.
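As a loose illustration of formalizing learning goals with Bloom's taxonomy, a goal can be mapped to a taxonomy level by its verb (the verb-to-level mapping below is invented for illustration, not taken from the paper):

```python
# The six levels of the revised Bloom's taxonomy, lowest to highest.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def formalize_goal(verb, topic):
    """Turn a free-text learning goal (verb + topic) into a structured
    record with its Bloom level; the verb table here is a small,
    hypothetical sample."""
    verb_level = {
        "define": "remember", "explain": "understand", "implement": "apply",
        "debug": "analyze", "review": "evaluate", "design": "create",
    }
    level = verb_level.get(verb)
    return {"verb": verb, "topic": topic, "bloom_level": level,
            "order": BLOOM_LEVELS.index(level) if level else None}
```

Records of this shape are the kind of formalized learning goal that a syllabus ontology could store and compare across courses, e.g. for the Java programming syllabus mentioned above.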