• Title/Summary/Keyword: Distributed Information Retrieval

Search results: 169 (processing time: 0.03 seconds)

Design of an Intrusion Detection System Applying a Data Mining Agent (데이터 마이닝 에이전트를 적용한 침입 탐지 시스템 설계)

  • 정종근;구제영;김용호;오근탁;이윤배
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.619-622
    • /
    • 2002
  • IDS has been studied mainly in the fields of detection decisions and the collection of audit data. The detection decision must determine whether successive behaviors are intrusions or not, and the collection of audit data requires the ability to collect data precisely for the intrusion decision. Artificial intelligence methods such as rule-based systems and neural networks have recently been introduced to solve this problem. However, these methods assume simple host structures and cannot detect transformed intrusion patterns. We therefore propose a method using a data mining agent that can retrieve and estimate patterns of user behavior across distributed heterogeneous hosts.

  • PDF
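The abstract above describes mining patterns of user behavior from audit data to decide whether behaviors are intrusions. A minimal sketch of that idea (the paper's agent architecture is not reproduced; the command names and the n-gram scoring scheme below are hypothetical illustrations): mine frequent n-grams of audit commands from normal sessions, then score a new session by the fraction of n-grams never seen in training.

```python
from collections import Counter

def mine_patterns(sessions, n=2):
    """Count n-grams of audit commands across normal training sessions."""
    counts = Counter()
    for s in sessions:
        for i in range(len(s) - n + 1):
            counts[tuple(s[i:i + n])] += 1
    return counts

def anomaly_score(session, patterns, n=2):
    """Fraction of the session's n-grams that never occurred in training."""
    grams = [tuple(session[i:i + n]) for i in range(len(session) - n + 1)]
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in patterns)
    return unseen / len(grams)

normal = [["login", "ls", "cat", "logout"], ["login", "ls", "logout"]]
patterns = mine_patterns(normal)
print(anomaly_score(["login", "ls", "logout"], patterns))       # 0.0 (familiar)
print(anomaly_score(["login", "nc", "chmod", "rm"], patterns))  # 1.0 (unfamiliar)
```

A real agent would of course mine much richer audit features per host and exchange them across the distributed hosts; this only shows the pattern-mining and scoring step.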

A Study of the Education of Information Specialists (정보학 교육의 개혁방안 연구)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.16
    • /
    • pp.111-176
    • /
    • 1989
  • The purpose of this study is to evaluate the information science education provided by the undergraduate courses of the departments of library science in Korean universities, by examining the major topics included in the syllabi distributed to students over the past three years. It is important to base the evaluation of professional education for information specialists on the graduates of the departments of library science, who have acquired a critical appreciation of their professional studies and can speak from experience about the relevance of the programme to their work and careers, and on the managers of the information service units where the graduates eventually make their careers. Specifically, the study addresses the following four questions. (a) To what extent do the information science curricula contribute to the advancement of theory and practice in the information profession? (b) To what extent do the information science curricula help students acquire the knowledge and skills required of the information specialist? (c) To what extent are employers' concerns reflected in the information science curricula? (d) What reforms are needed to bring the current information science curricula closer to the present and future needs of the information profession? To answer these questions, the study was conducted in two main parts: an in-depth subject analysis of the articles in three important information science journals published during the past ten years and of the syllabi used for information science subjects taught in the departments of library science during the past three years, and an extensive survey of the graduates of departments of library science and their principal employers. The major findings are as follows.
An average of 4.1 information science subjects are offered in departments of library science; the most common are introduction to information science, information storage and retrieval, and library automation. Approximately two thirds of the total output of research and development in the field of information science is taught at one or more departments of library science in Korea. A majority of the graduates of the departments of library science comment that their professional education did not give them a systematic orientation to the specifics of their first job. The employers of the graduates believe that departments of library science should provide sufficient practicums to enable students to understand and apply the theory.

  • PDF

Extraction of a Central Object in a Color Image Based on Significant Colors (특이 칼라에 기반한 칼라 영상에서의 중심 객체 추출)

  • SungYoung Kim;Eunkyung Lim;MinHwan Kim
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.5
    • /
    • pp.648-657
    • /
    • 2004
  • This paper proposes a method of extracting central objects from color images without any prior knowledge, based on information about significant color distribution. A central object in an image is defined as a set of regions that lie around the center of the image and have significant color distribution relative to the surrounding (or background) regions. Significant colors in an image are first defined as the colors that are distributed more densely around the center of the image than near its borders. Core object regions (CORs) are then selected as the regions in which many pixels have the significant colors. Finally, regions adjacent to the CORs are iteratively merged if their color distribution is similar to that of the CORs but not to that of the background regions. The merging result is accepted as the central object, which may include differently color-characterized regions and/or two or more objects of interest. The usefulness of significant colors in extracting the central object was verified through experiments on several kinds of test images. We expect central objects to be useful in image retrieval applications.

  • PDF
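The "significant colors" step above can be made concrete with a toy sketch. This version assumes a label image rather than real RGB data and uses a fixed central window, which simplifies the paper's density comparison: a color is significant when its density in the central window exceeds its density in the border region.

```python
def significant_colors(img):
    """Colors distributed more densely around the image center than near the borders."""
    h, w = len(img), len(img[0])
    cy0, cy1 = h // 4, 3 * h // 4     # central window (middle half of each axis)
    cx0, cx1 = w // 4, 3 * w // 4
    center, border = {}, {}
    c_n = b_n = 0
    for y in range(h):
        for x in range(w):
            color = img[y][x]
            if cy0 <= y < cy1 and cx0 <= x < cx1:
                center[color] = center.get(color, 0) + 1
                c_n += 1
            else:
                border[color] = border.get(color, 0) + 1
                b_n += 1
    return {c for c in center
            if center[c] / c_n > border.get(c, 0) / b_n}

# 8x8 toy image: a "red" block at the center on a "white" background
img = [["white"] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        img[y][x] = "red"
print(significant_colors(img))  # {'red'}
```

The paper's subsequent steps (selecting CORs and iteratively merging adjacent regions by color-distribution similarity) build on this color set but are not shown here.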

Web Service based Recommendation System using Inference Engine (추론엔진을 활용한 웹서비스 기반 추천 시스템)

  • Kim SungTae;Park SooMin;Yang JungJin
    • Journal of Intelligence and Information Systems
    • /
    • v.10 no.3
    • /
    • pp.59-72
    • /
    • 2004
  • The range of Internet usage has broadened and diversified drastically, from information retrieval and collection to many other functions. In contrast to the increase in Internet use, the efficiency of finding necessary information has decreased, so the need for information systems that provide customized information has emerged. Our research proposes a Web Service based recommendation system which employs an inference engine to find and recommend the most appropriate products for users. Current Web applications provide useful information for users, but they still face the problem of bridging different platforms and distributed computing environments. A standardized and systematic approach is necessary for easier communication and coherent system development across heterogeneous environments. Web Service is programming-language independent and improves interoperability by describing, deploying, and executing modularized applications over the network. This paper focuses on developing a Web Service based recommendation system that can serve as a benchmark of Web Service realization. This is done by integrating an inference engine that takes the dynamics of information and user preferences into account.

  • PDF
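The inference-engine step of such a recommendation system can be sketched as simple forward chaining. The fact and rule names below are invented for illustration; the paper's Web Service layer and actual rule base are not shown.

```python
def infer_recommendations(facts, rules):
    """Forward chaining: fire every rule whose conditions all hold, until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return sorted(f for f in derived if f.startswith("recommend:"))

# Hypothetical rule base: (set of conditions, conclusion)
rules = [
    ({"likes:sci-fi"}, "segment:reader"),
    ({"segment:reader", "owns:ereader"}, "recommend:ebook-bundle"),
]
print(infer_recommendations({"likes:sci-fi", "owns:ereader"}, rules))
# ['recommend:ebook-bundle']
```

In the paper's architecture, the user-preference facts would arrive through the Web Service interface and the derived recommendations would be returned over it; the chaining loop itself is the engine's core.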

Retrieval of Sulfur Dioxide Column Density from TROPOMI Using the Principal Component Analysis Method (주성분분석방법을 이용한 TROPOMI로부터 이산화황 칼럼농도 산출 연구)

  • Yang, Jiwon;Choi, Wonei;Park, Junsung;Kim, Daewon;Kang, Hyeongwoo;Lee, Hanlim
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_3
    • /
    • pp.1173-1185
    • /
    • 2019
  • We, for the first time, retrieved sulfur dioxide (SO2) vertical column density (VCD) in industrial and volcanic areas from the TROPOspheric Monitoring Instrument (TROPOMI) using the principal component analysis (PCA) algorithm. Furthermore, SO2 VCDs retrieved by the PCA algorithm from TROPOMI raw data were compared with those retrieved by the Differential Optical Absorption Spectroscopy (DOAS) algorithm (the TROPOMI Level 2 SO2 product). In East Asia, where large amounts of SO2 are released at the surface by anthropogenic sources such as fossil fuel combustion, the mean SO2 VCD retrieved by the PCA (DOAS) algorithm was 0.05 DU (-0.02 DU). The correlation between SO2 VCDs retrieved by the PCA algorithm and those retrieved by the DOAS algorithm was low (slope = 0.64; correlation coefficient (R) = 0.51) under cloudy conditions. However, with a cloud fraction of less than 0.5, the slope and correlation coefficient between the two outputs increased to 0.68 and 0.61, respectively. This indicates that the sensitivity of the SO2 retrieval to the surface is reduced in both algorithms when the cloud fraction is high. Furthermore, the correlation between volcanic SO2 VCDs retrieved by the PCA algorithm and those retrieved by the DOAS algorithm was high (R = 0.90) under cloudy conditions. This good agreement for volcanic SO2 is thought to be due to the higher accuracy of satellite-based SO2 VCD retrieval for SO2 that is mainly distributed in the upper troposphere or lower stratosphere, as in volcanic regions.
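A PCA-type trace-gas retrieval fits a measured spectrum with principal components of SO2-free background spectra plus an SO2 absorption cross-section; the cross-section coefficient is the column amount. The following is only a schematic numeric sketch with synthetic 4-point "spectra", not TROPOMI data or the operational algorithm.

```python
def first_pc(centered, iters=50):
    """Leading principal component of mean-centered spectra, via power iteration."""
    m = len(centered[0])
    v = [1.0] * m
    for _ in range(iters):
        xv = [sum(row[j] * v[j] for j in range(m)) for row in centered]
        w = [sum(centered[i][j] * xv[i] for i in range(len(centered))) for j in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def fit_so2(spectrum, mean, pc, xsec):
    """Least squares: spectrum - mean ~ a*pc + b*xsec; b is the SO2 column coefficient."""
    m = len(spectrum)
    y = [spectrum[j] - mean[j] for j in range(m)]
    a11 = sum(p * p for p in pc)
    a12 = sum(pc[j] * xsec[j] for j in range(m))
    a22 = sum(x * x for x in xsec)
    b1 = sum(pc[j] * y[j] for j in range(m))
    b2 = sum(xsec[j] * y[j] for j in range(m))
    det = a11 * a22 - a12 * a12
    return (a11 * b2 - a12 * b1) / det   # solve the 2x2 normal equations for b

# Synthetic background variability along one direction, SO2 signature along another.
background = [[1.0, 0.0, 0.0, 0.0], [-1.0, 0.0, 0.0, 0.0]]   # already mean-centered
mean = [0.0, 0.0, 0.0, 0.0]
pc = first_pc(background)
xsec = [0.0, 1.0, 0.0, 0.0]                                  # toy SO2 cross-section
measured = [0.5, 0.3, 0.0, 0.0]                              # background + 0.3 * xsec
print(round(fit_so2(measured, mean, pc, xsec), 6))           # 0.3
```

The real algorithm uses many principal components fitted over a UV window and converts the fitted slant column to a VCD with an air-mass factor; none of that is modeled here.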

Design and Implementation of a Real-time Web Crawling Distributed Monitoring System (실시간 웹 크롤링 분산 모니터링 시스템 설계 및 구현)

  • Kim, Yeong-A;Kim, Gea-Hee;Kim, Hyun-Ju;Kim, Chang-Geun
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.1
    • /
    • pp.45-53
    • /
    • 2019
  • In this rapidly changing information era, we face problems caused by the excessive information served by websites. We find little of it useful and much of it useless, and we spend a lot of time selecting the information we need. Many websites, including search engines, use web crawling to keep their data updated. Web crawling is usually used to generate copies of all the pages of visited sites, which search engines then index for faster searching. For collecting wholesale and order information that changes in real time, keyword-oriented web data collection is not adequate, and no alternative for the selective collection of web information in real time has been suggested. In this paper, we propose a method of collecting information from restricted web sites using a real-time web crawling distributed monitoring system (R-WCMS), estimating collection time through detailed analysis of the data, and storing the data in a parallel system. Experimental results show that applying the proposed model to web site information retrieval reduces collection time by 15-17%.
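As a sketch of the real-time monitoring idea (the internals of R-WCMS are not given in the abstract, and the fetcher is injected here so the example needs no network access), worker threads can re-poll a fixed set of target pages and record which ones changed between rounds:

```python
import queue
import threading

def monitor(targets, fetch, rounds=2, workers=4):
    """Poll each target in parallel worker threads; report targets whose content changed."""
    snapshots, changes = {}, []
    lock = threading.Lock()

    def worker(q):
        while True:
            try:
                url = q.get_nowait()
            except queue.Empty:
                return
            body = fetch(url)
            with lock:
                if url in snapshots and snapshots[url] != body:
                    changes.append(url)
                snapshots[url] = body

    for _ in range(rounds):
        q = queue.Queue()
        for url in targets:
            q.put(url)
        threads = [threading.Thread(target=worker, args=(q,)) for _ in range(workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    return changes

# A fake fetcher: page "a" changes on every poll, page "b" never does.
polls = {"a": 0, "b": 0}
def fetch(url):
    polls[url] += 1
    return f"{url}-v{polls[url]}" if url == "a" else f"{url}-static"

print(monitor(["a", "b"], fetch))  # ['a']
```

A production system would replace `fetch` with an HTTP client, distribute the queue across machines, and persist snapshots to the parallel store; the change-detection loop is the monitoring core.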

Design of a Z39.50 Server, and Integration of the Z39.50 Server and Database Engines using CORBA (Z39.50 서버의 설계 및 CORBA를 이용한 Z39.50 서버와 데이터베이스 엔진의 통합)

  • Son, Chung-Beom;Yoo, Jae-Soo
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.12
    • /
    • pp.3775-3784
    • /
    • 2000
  • CORBA provides a method of integrating heterogeneous systems in a distributed environment. Recently, many information retrieval servers using the Z39.50 protocol have been developed and deployed in libraries, companies, and elsewhere. These servers each construct their own databases and provide users with various information services. In this paper, we design and implement a Z39.50 server that supports more Z39.50 protocol services than the existing servers. We also integrate various database engines with the Z39.50 server using CORBA. Our Z39.50 server basically provides the init, search, and close services. In addition, it supports the scan service for browsing a term list, the segment service for presenting large records, and the explain facility for describing the implementation information of the server.

  • PDF
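The service semantics listed above (init, search, scan, close) can be mimicked by a toy in-memory server. This is only a sketch of those semantics with an invented class; it implements neither the actual Z39.50 ASN.1/BER wire protocol nor the paper's CORBA integration.

```python
class MiniZServer:
    """Toy stand-in for a few Z39.50-style services (no real wire protocol)."""

    def __init__(self, records):
        self.records = records    # record id -> text
        self.results = {}         # result-set name -> list of matching record ids
        self.session_open = False

    def init(self):
        """Init service: open a session."""
        self.session_open = True
        return {"status": "accepted"}

    def search(self, result_set, term):
        """Search service: store hits under a named result set, return the hit count."""
        assert self.session_open, "call init() first"
        hits = [rid for rid, text in self.records.items() if term in text]
        self.results[result_set] = hits
        return len(hits)

    def scan(self, prefix):
        """Scan service: browse the term list starting at a prefix."""
        terms = {w for text in self.records.values() for w in text.split()}
        return sorted(t for t in terms if t.startswith(prefix))

    def close(self):
        """Close service: end the session."""
        self.session_open = False

srv = MiniZServer({1: "distributed information retrieval", 2: "image retrieval"})
srv.init()
print(srv.search("rs1", "retrieval"))  # 2
print(srv.scan("dis"))                 # ['distributed']
srv.close()
```

In the paper's design, each method would delegate through CORBA to a concrete database engine instead of scanning an in-memory dictionary.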

High-Dimensional Image Indexing based on Adaptive Partitioning and Vector Approximation (적응 분할과 벡터 근사에 기반한 고차원 이미지 색인 기법)

  • Cha, Gwang-Ho;Jeong, Jin-Wan
    • Journal of KIISE:Databases
    • /
    • v.29 no.2
    • /
    • pp.128-137
    • /
    • 2002
  • In this paper, we propose the LPC+-file for efficient indexing of high-dimensional image data. With the proliferation of multimedia data, there is an increasing need to support the indexing and retrieval of high-dimensional image data. Recently, the LPC-file [5], which is based on vector approximation, was developed for indexing high-dimensional data. The LPC-file performs well especially when the dataset is uniformly distributed, but its performance degrades when the dataset is clustered. We improve the performance of the LPC-file for strongly clustered image datasets. The basic idea is to adaptively partition the data space to find subspaces with high-density clusters and to assign more bits to them than to the others, increasing the discriminatory power of the vector approximations. The total number of bits used to represent the vector approximations is less than in the LPC-file, since the partitioned cells in the LPC+-file share bits. An empirical evaluation shows that the LPC+-file yields significant performance improvements for real image datasets that are strongly clustered.
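The vector-approximation idea underlying this family of indexes can be sketched as follows. This uses a uniform number of bits per dimension; the paper's adaptive, cluster-aware bit allocation and the LPC+-file's bit sharing are not reproduced.

```python
def approximate(v, bits, lo=0.0, hi=1.0):
    """Quantize each coordinate into one of 2**bits grid cells (the vector approximation)."""
    cells = 1 << bits
    return tuple(min(cells - 1, int((x - lo) / (hi - lo) * cells)) for x in v)

def cell_bounds(cell, bits, lo=0.0, hi=1.0):
    """Per-dimension (low, high) interval covered by an approximation cell."""
    width = (hi - lo) / (1 << bits)
    return [(lo + c * width, lo + (c + 1) * width) for c in cell]

def lower_bound(query, bounds):
    """Distance lower bound from a query to any vector inside the cell.
    Cells whose bound exceeds the current best distance are filtered out
    before the exact (full-precision) vectors are ever read."""
    s = 0.0
    for q, (a, b) in zip(query, bounds):
        d = a - q if q < a else (q - b if q > b else 0.0)
        s += d * d
    return s ** 0.5

print(approximate([0.1, 0.9], bits=2))                       # (0, 3)
print(lower_bound([0.0, 0.0], cell_bounds((0, 3), bits=2)))  # 0.75
```

The paper's contribution is to spend more bits where clusters are dense, so these lower bounds stay tight on strongly clustered data instead of collapsing to a few overfull cells.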

Design of Data Mining IDS for New Intrusion Patterns (새로운 침입 패턴을 위한 데이터 마이닝 침입 탐지 시스템 설계)

  • 편석범;정종근;이윤배
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.39 no.1
    • /
    • pp.77-82
    • /
    • 2002
  • IDS has been studied mainly in the fields of detection decisions and the collection of audit data. The detection decision must determine whether successive behaviors are intrusions or not, and the collection of audit data requires the ability to collect data precisely for the intrusion decision. Artificial intelligence methods such as rule-based systems and neural networks have recently been introduced to solve this problem. However, these methods assume simple host structures and cannot detect new or changed intrusion patterns. We therefore propose a method using data mining that can retrieve and estimate patterns of user behavior across distributed heterogeneous hosts.

Design and Implementation of an XML-based Planning Agent for Internet Marketplaces (인터넷 마켓플레이스를 위한 XML 기반 계획 에이전트의 설계와 구현)

  • Lee, Yong-Ju
    • The KIPS Transactions:PartD
    • /
    • v.8D no.3
    • /
    • pp.211-220
    • /
    • 2001
  • A planning agent that supports customers plays a distinguished role in internet marketplaces. Although several internet marketplaces have been built as tools based on internet and distributed technologies have matured, there has been no substantive study to date of the implementation of the planning agent. This paper describes the design and implementation of an XML-based planning agent for internet marketplaces. Since internet marketplaces encounter problems similar to those in other fields such as multidatabase systems and workflow management systems, we first compare their features. Next, we identify the functions and roles of the planning agent. The planning agent is implemented using COM+, ASP, and XML, and demonstrated using real data from an existing system.

  • PDF