• Title/Summary/Keyword: Gale documents


An Interactive Search Agent based on DotQuery (닷큐어리를 활용한 대화형 검색 에이전트)

  • Kim Sun-Ok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.4 s.42
    • /
    • pp.271-281
    • /
    • 2006
  • With the growth of the Internet, the number of online documents and web services is increasing dramatically. However, several steps are required before a user actually finds what he or she is looking for; these steps are unavoidable, but they make searching time-consuming. To systematize and simplify this repetitive work, this paper proposes a DotQuery-based interactive search agent. The agent lets a user search a wealth of information from his or her own computer through the DotQuery service, which accepts natural-language queries, and it carries out the required intermediate steps on the user's behalf. The agent runs as a plug-in within common web browsers such as Internet Explorer, decodes DotQuery requests, analyzes the user's DotQuery through its own program, and gathers the service results through multiple browser instances of its own (a minimal sketch of such a dispatch loop follows this entry).

  • PDF
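The abstract above describes an architecture (a browser plug-in that intercepts a natural-language "DotQuery", performs the intermediate search steps, and gathers results itself) without publishing code. The following is a minimal sketch of such a dispatch loop, assuming a hypothetical `service. query` syntax and made-up service URLs; it is an illustration, not the paper's implementation.

```python
# Minimal sketch of a DotQuery-style dispatch loop (hypothetical; the paper's
# own plug-in is not published). The query syntax and service URLs below are
# assumptions for illustration only.
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import quote_plus
from urllib.request import urlopen

# Hypothetical mapping from a DotQuery service keyword to a search URL template.
SERVICES = {
    "news":  "https://example.com/news?q={q}",
    "price": "https://example.com/shop?q={q}",
}

def parse_dotquery(text: str):
    """Split 'news. samsung earnings' into (service, natural-language query)."""
    service, _, query = text.partition(".")
    return service.strip(), query.strip()

def fetch(service: str, query: str) -> str:
    """Build the request URL and fetch results, as the user would by hand."""
    url = SERVICES[service].format(q=quote_plus(query))
    with urlopen(url) as resp:          # one "browser" instance per request
        return resp.read(500).decode("utf-8", errors="replace")

def run_agent(text: str) -> None:
    service, query = parse_dotquery(text)
    # The agent performs the intermediate steps (URL building, fetching,
    # result collection) in its own worker threads.
    with ThreadPoolExecutor() as pool:
        print(pool.submit(fetch, service, query).result())

if __name__ == "__main__":
    run_agent("news. interactive search agents")
```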

Slaves Observed in Chinese Poem (한국 한시에 나타난 노비)

  • Pak, Dong-Uk
    • (The)Study of the Eastern Classic
    • /
    • no.66
    • /
    • pp.103-128
    • /
    • 2017
  • Until now, slaves have been studied mainly through the diaries and slave-ownership documents of the nobility. Although slaves appear in tales, stories of loyal slaves, and historical anecdotes, they have not been sufficiently examined through literature. This paper analyzes how slaves were actually perceived through their depiction in Chinese-character poems. Young slaves were generally very difficult to manage, since they belonged to the lowest class, had no education, and were not yet adults. Fugitive slaves meant both a loss of labor and a sense of emotional betrayal, and spells even existed to make them return. The sense of loss at a slave's death was nearly the same as at the death of a family member, and the longer a slave had lived with the owner, the greater that loss was. However, this paper could not identify differences in the perception of slaves across periods; doing so would require a fuller examination of the sources on slaves, and it remains a theme for further study.

A Study on File Transfer Methods on IEEE-1394 Serial Bus (IEEE-1394 버스에서의 파일 전송 기법에 관한 연구)

  • 편기현;강성일;이흥규
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10a
    • /
    • pp.261-263
    • /
    • 1998
  • The IEEE-1394 bus (hereafter, the 1394 bus) interconnects home appliances and computers, provides wide bandwidths of 100, 200, and 400 Mbps, and supports isochronous as well as asynchronous data transfer. Asynchronous transfer suits data with no timing constraints that must be delivered accurately, whereas isochronous transfer suits data that require real-time delivery but tolerate imperfect delivery. By linking the devices that create and edit multimedia data, the 1394 bus enables applications such as video phones, video-conferencing systems, and video-editing systems, which were hard to realize in true real time over conventional LANs because of insufficient bandwidth and the lack of a real-time transfer mechanism in their protocols. Such multimedia applications (1) need to transfer very large multimedia files, and video phones and video-conferencing systems (2) demand high-speed file transfer; for example, when two people exchange documents while talking over a video phone, the transfers should complete as quickly as possible. Work on running IP over the 1394 bus is still in progress, so existing FTP cannot be used; moreover, even once IP is available on the 1394 bus, transferring large files quickly and accurately within the bus calls for a new file transfer method that avoids IP overhead and exploits the bus's characteristics directly. Finding such a method requires a good understanding of asynchronous and isochronous transfer and of the trade-offs each mode brings to file transfer. This paper proposes a file transfer method based on each of the two modes and compares their characteristics through experiments (a toy comparison of the two modes follows this entry).

  • PDF
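The abstract contrasts the bus's two transfer modes: asynchronous (no timing constraint, accurate delivery) and isochronous (real-time, loss-tolerant). The toy simulation below illustrates only that trade-off; it does not model the real IEEE-1394 link layer or any actual driver API.

```python
# Toy simulation of the two 1394 transfer modes described above (illustrative
# sketch only; packet size and loss rate are arbitrary assumptions).
import random

PACKET = 512  # bytes per packet

def asynchronous_send(data: bytes, loss_rate: float = 0.05) -> bytes:
    """Reliable mode: every packet is acknowledged and resent until delivered."""
    received = bytearray()
    for i in range(0, len(data), PACKET):
        while random.random() < loss_rate:   # packet lost -> retransmit
            pass
        received += data[i:i + PACKET]       # eventually delivered intact
    return bytes(received)

def isochronous_send(data: bytes, loss_rate: float = 0.05) -> bytes:
    """Real-time mode: packets go out at a fixed rate; lost ones are skipped."""
    received = bytearray()
    for i in range(0, len(data), PACKET):
        if random.random() >= loss_rate:     # no retransmission on loss
            received += data[i:i + PACKET]
    return bytes(received)

if __name__ == "__main__":
    payload = bytes(100_000)
    assert asynchronous_send(payload) == payload        # accurate, but slower
    print(len(isochronous_send(payload)), "of", len(payload),
          "bytes arrived in isochronous mode")          # fast, but lossy
```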

Analysis of Threats and Countermeasures on Mobile Smartphone (스마트폰 보안위협과 대응기술 분석)

  • Jeon, Woong-Ryul;Kim, Jee-Yeon;Lee, Young-Sook;Won, Dong-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.2
    • /
    • pp.153-163
    • /
    • 2011
  • A smartphone is a mobile device with far greater capability than a feature phone. With growing demand for powerful mobile devices, smartphones such as the iPhone and Android phones are rapidly gaining share in the mobile device market. Thanks to their performance, smartphones provide many functions, including e-mail, scheduling, word processing, and 3D games, so a great deal of sensitive information accumulates on them, and to provide services a smartphone often transmits that information over wireless networks. Because a smartphone is mobile, it is easily lost, and a lost device poses serious security threats given the information stored on it; data transmitted over wireless networks must likewise be protected to preserve privacy. Keeping smartphones secure is therefore critically important today. In this paper, we analyze the threats and vulnerabilities of smartphones in their operating environments and describe countermeasures against them (a minimal device-loss countermeasure is sketched below).
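The paper surveys threats and countermeasures in general terms rather than prescribing code. As one concrete illustration of the device-loss countermeasure mentioned above, the sketch below encrypts data at rest with the third-party `cryptography` package, so a lost phone does not expose stored secrets; the key handling shown is deliberately simplified.

```python
# Generic countermeasure against the device-loss threat: encrypt data before
# it ever reaches device storage. Minimal sketch using the `cryptography`
# package; the paper does not prescribe this particular code.
from cryptography.fernet import Fernet

def protect(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt data before writing it to device storage."""
    return Fernet(key).encrypt(plaintext)

def recover(token: bytes, key: bytes) -> bytes:
    """Decrypt data read back from storage; fails on tampering or wrong key."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    # In practice the key would be derived from the user's passphrase or kept
    # in hardware-backed storage, never stored alongside the ciphertext.
    key = Fernet.generate_key()
    token = protect(b"address book entry", key)
    assert recover(token, key) == b"address book entry"
```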

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created while operating computer systems, are used in many processes, from system inspection and process optimization to providing customized services to users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data produced during banking operations come from handling clients' business, so a separate log data processing system is needed to gather, store, categorize, and analyze them. However, existing computing environments make it difficult to realize the flexible storage expansion required for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive log data. Furthermore, because HDFS (the Hadoop Distributed File System) stores the aggregated log data in replicated blocks, the proposed system can automatically restore itself and continue operating after a malfunction. Finally, by establishing a distributed database with the NoSQL-based MongoDB, the proposed system provides an effective way to process unstructured log data. Relational databases such as MySQL have rigid schemas that are ill-suited to unstructured log data, and those strict schemas prevent node expansion when rapidly growing data must be distributed across nodes. NoSQL databases do not offer the complex computations that relational databases provide, but they can easily expand through node dispersion as data grow rapidly; they are non-relational databases whose structure is appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, the representative document-oriented model with a free schema structure. MongoDB is adopted because its flexible schema makes unstructured log data easy to process, it facilitates node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage (a minimal pymongo sketch follows this paragraph).
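The paragraph above motivates MongoDB by its free schema and Auto-Sharding. A minimal sketch with the `pymongo` driver shows both properties; the host, database, collection, and field names are illustrative, and the sharding commands assume a sharded cluster reached through a mongos router.

```python
# Schema-free insertion and sharding, the two MongoDB properties cited above.
# Names are illustrative; this is a sketch, not the paper's implementation.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # mongos router in a sharded setup
events = client["logs"]["events"]

# Unstructured logs: documents in one collection need not share a schema.
events.insert_one({"ts": datetime.now(timezone.utc), "type": "transfer",
                   "account": "123-456", "amount": 50000})
events.insert_one({"ts": datetime.now(timezone.utc), "type": "login_failure",
                   "terminal": "ATM-07", "reason": "bad PIN"})

# Auto-sharding (run once against a sharded cluster): distribute the
# collection across shards by hashed _id so inserts balance automatically.
client.admin.command("enableSharding", "logs")
client.admin.command("shardCollection", "logs.events", key={"_id": "hashed"})
```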
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects the data, classifies them by log type, and distributes them to the MongoDB module and the MySQL module. The log graph generator module renders the analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module by analysis time and type of the aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions, while the aggregated data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module (a sketch of the collector's routing rule follows). A comparative evaluation of log-insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is confirmed through MongoDB insert-performance evaluations across various chunk sizes.
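The routing rule described above (real-time logs to MySQL for immediate graphing, bulk logs to MongoDB for later Hadoop analysis) can be sketched as follows. The class name, log categories, and table layout are assumptions for illustration; any DB-API-compatible MySQL connection (e.g., from PyMySQL) would fit the `mysql_conn` parameter.

```python
# Sketch of the log collector's routing rule (illustrative only; categories,
# table schema, and class names are not taken from the paper).
import json
from pymongo import MongoClient

REALTIME_TYPES = {"fraud_alert", "auth_failure"}   # hypothetical categories

class LogCollector:
    """Collects bank logs and distributes them by type."""

    def __init__(self, mongo_uri: str, mysql_conn):
        self.bulk_store = MongoClient(mongo_uri)["logs"]["events"]
        self.mysql = mysql_conn  # any DB-API connection, e.g. PyMySQL

    def handle(self, raw: str) -> None:
        record = json.loads(raw)
        if record.get("type") in REALTIME_TYPES:
            # Real-time path: the MySQL row feeds the log graph generator.
            with self.mysql.cursor() as cur:
                cur.execute("INSERT INTO rt_log (type, payload) VALUES (%s, %s)",
                            (record["type"], raw))
            self.mysql.commit()
        else:
            # Bulk path: MongoDB document, later batch-analyzed via Hadoop.
            self.bulk_store.insert_one(record)
```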