• Title/Summary/Keyword: Parsing technology

Implementation of Security Information and Event Management for Realtime Anomaly Detection and Visualization (실시간 이상 행위 탐지 및 시각화 작업을 위한 보안 정보 관리 시스템 구현)

  • Kim, Nam Gyun; Park, Sang Seon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.5 / pp.303-314 / 2018
  • In the past few years, government agencies and corporations have succumbed to stealthy, tailored cyberattacks designed to exploit vulnerabilities, disrupt operations, and steal valuable information. Security Information and Event Management (SIEM) is a useful tool against such cyberattacks. SIEM solutions are available on the market, but they are expensive and difficult to use, so we implemented basic SIEM functions as a research and development base for future security solutions. We focus on the collection, aggregation, and analysis of real-time logs from hosts. The tool parses and searches log data for forensics. Beyond log management, it performs intrusion detection, prioritizes security events, and supports alerting to the user. We selected the Elastic Stack to process and visualize this security information. The Elastic Stack is a very useful tool for finding information in large volumes of data, identifying correlations, and creating rich visualizations for monitoring. We also suggested incorporating vulnerability check results into our SIEM. We attacked the host and obtained real-time user activity for monitoring, alerting, and security auditing based on this security information management.
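
    The core loop this abstract describes (collect raw host logs, parse them into structured events, prioritize, and alert) can be illustrated with a small sketch. The Python fragment below is a minimal, self-contained illustration using a hypothetical syslog-style format and an invented failure threshold; in the authors' setup this role would be played by the Elastic Stack pipeline (e.g., Beats/Logstash feeding Elasticsearch and Kibana), not by this script.

    ```python
    import re
    from collections import Counter

    # Hypothetical syslog-style auth log lines; the field layout is an assumption,
    # not the format used by the paper's collector.
    SAMPLE_LOG = """\
    Jan 12 09:14:02 host1 sshd[311]: Failed password for root from 10.0.0.5 port 4120 ssh2
    Jan 12 09:14:05 host1 sshd[311]: Failed password for root from 10.0.0.5 port 4121 ssh2
    Jan 12 09:14:09 host1 sshd[311]: Accepted password for alice from 10.0.0.9 port 5222 ssh2
    """

    LINE_RE = re.compile(
        r"\s*(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<proc>[\w\-]+)\[\d+\]: "
        r"(?P<msg>.*)"
    )

    FAILED_LOGIN_THRESHOLD = 2  # alert if a source IP fails this many times (arbitrary)

    def parse(line):
        """Parse one raw log line into a structured event (dict) or None."""
        m = LINE_RE.match(line)
        return m.groupdict() if m else None

    def detect_bruteforce(events):
        """Count failed logins per source IP and flag suspicious sources."""
        failures = Counter()
        for ev in events:
            m = re.search(r"Failed password for \S+ from (\S+)", ev["msg"])
            if m:
                failures[m.group(1)] += 1
        return {ip: n for ip, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD}

    if __name__ == "__main__":
        events = [e for e in map(parse, SAMPLE_LOG.splitlines()) if e]
        for ip, count in detect_bruteforce(events).items():
            # in a full SIEM this would feed an alerting/visualization channel
            print(f"ALERT: {count} failed logins from {ip}")
    ```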

Performance of Uncompressed Audio Distribution System over Ethernet with a L1/L2 Hybrid Switching Scheme (L1/L2 혼합형 중계 방법을 적용한 이더넷 기반 비압축 오디오 분배 시스템의 성능 분석)

  • Nam, Wie-Jung; Yoon, Chong-Ho; Park, Pu-Sik; Jo, Nam-Hong
    • Journal of the Institute of Electronics Engineers of Korea TC / v.46 no.12 / pp.108-116 / 2009
  • In this paper, we propose an Ethernet-based audio distribution system with a new L1/L2 hybrid switching scheme and evaluate its performance. The proposed scheme not only offers the guaranteed low latency and jitter essential for distributing high-quality uncompressed audio traffic, but also provides efficient transmission of data traffic in an Ethernet environment. The audio distribution system consists of a master node and a number of relay nodes, all mutually connected in a daisy-chain topology through uplinks and downlinks. The master node generates an audio frame every 125 us cycle, and the frame carries 24 time-slotted audio channels for 24 stereo channels of 16-bit PCM-sampled audio. On receiving the audio frame from its upstream node via the downlink, each intermediate node inserts its audio traffic into its reserved time slot and then relays the frame to the next node through physical-layer (L1) repeating. After reaching the end node, the audio frame is looped back through the uplink. While repeating the frame on the uplink, each node copies the audio slot it is assigned to receive and then plays the audio. When the audio transmission is completed, each node works as a normal L2 switch, so data frames are switched during the remaining period. To support this L1/L2 hybrid switching capability, we insert glue logic for parsing and multiplexing audio and data frames at the MII (Media Independent Interface) between the physical and data link layers. The proposed scheme provides better delay performance and transmission efficiency than legacy Ethernet-based audio distribution systems. To verify the feasibility of the proposed L1/L2 hybrid switching scheme, we used OMNeT++ as a simulation tool with various parameters. The simulation results show that the proposed scheme provides outstanding jitter characteristics for audio traffic and high transmission efficiency for data traffic.
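
    The slot insertion and loopback behavior described in this abstract can be sketched in a few lines. The fragment below is a toy Python model, not the authors' OMNeT++ simulation: the node names, slot assignments, and two-pass structure are assumptions made only to illustrate how each node writes its reserved downlink slot and copies its destination slot on the uplink.

    ```python
    # Minimal sketch of one 125 us audio cycle in the daisy chain: the master emits
    # a frame with 24 empty slots, each node fills its reserved slot on the downlink
    # pass, and every node copies its destination slot on the uplink (loopback) pass.

    NUM_SLOTS = 24  # 24 time-slotted audio channels per 125 us frame

    class Node:
        def __init__(self, name, tx_slot, rx_slot):
            self.name = name
            self.tx_slot = tx_slot      # slot this node is allowed to write
            self.rx_slot = rx_slot      # slot this node must copy and play
            self.received = None

        def on_downlink(self, frame):
            """Insert this node's PCM sample into its reserved slot, then relay (L1)."""
            frame[self.tx_slot] = f"pcm({self.name})"
            return frame

        def on_uplink(self, frame):
            """Copy the slot addressed to this node while repeating the frame."""
            self.received = frame[self.rx_slot]
            return frame

    def run_cycle(nodes):
        frame = [None] * NUM_SLOTS          # master generates an empty audio frame
        for node in nodes:                  # downlink pass, master -> end node
            frame = node.on_downlink(frame)
        for node in reversed(nodes):        # uplink (loopback) pass, end node -> master
            frame = node.on_uplink(frame)
        # the remaining cycle time would be used for normal L2 switching of data
        # frames (not modeled here)
        return frame

    if __name__ == "__main__":
        chain = [Node("A", tx_slot=0, rx_slot=1), Node("B", tx_slot=1, rx_slot=0)]
        run_cycle(chain)
        for n in chain:
            print(n.name, "plays", n.received)
    ```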

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin; Rho, Sang-Kyu; Yun, Ji-Young Agnes; Park, Jin-Soo
    • Asia pacific journal of information systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful to them. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation itself is the obstacle; manually assigning keywords to all documents is a daunting, even impractical task, as it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given set of vocabulary, and the aim is to match it to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and therefore cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model,
which is a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that does not yet have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with the highest similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
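
    A compact sketch may help make the five-step assignment procedure concrete. The Python fragment below is an illustrative simplification, not the authors' implementation: the keyword-set vectors, their weights, and the sample document are invented, and raw term frequency stands in for whatever weighting scheme IVSM actually uses.

    ```python
    import math
    import re
    from collections import Counter

    def tokenize(text):
        """Very simple preprocessing/parsing: lowercase word tokens."""
        return re.findall(r"[a-z]+", text.lower())

    def cosine(a, b):
        """Cosine similarity between two sparse vectors given as dicts."""
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def assign_keywords(document, keyword_sets, top_k=5):
        """Steps (1)-(5) from the abstract, in simplified form."""
        # (2)-(3) parse the target document and build a term-frequency vector
        doc_vec = dict(Counter(tokenize(document)))
        scored = []
        for keyword, weights in keyword_sets.items():
            # (1) each keyword set is already a weighted term vector;
            # (4) measure its cosine similarity against the document vector
            scored.append((keyword, cosine(weights, doc_vec)))
        # (5) return the keywords with the highest similarity scores
        scored.sort(key=lambda kv: kv[1], reverse=True)
        return [k for k, s in scored[:top_k] if s > 0]

    if __name__ == "__main__":
        # Toy "inverse" index: each candidate keyword maps to a weighted term vector
        # built from documents previously tagged with that keyword (values invented).
        keyword_sets = {
            "logistics": {"shipping": 2.0, "port": 1.5, "distribution": 1.0},
            "retrieval": {"query": 2.0, "document": 1.5, "index": 1.0},
        }
        doc = "This paper studies document retrieval with an inverted index for query processing."
        print(assign_keywords(doc, keyword_sets, top_k=2))  # -> ['retrieval']
    ```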