• Title/Summary/Keyword: 소프트웨어 클러스터링 (software clustering)


Implementation of High Available Web-Servers using Cold-standby (Cold-standby 방식의 고가용 웹 서버 시스템 구현)

  • 김용희;성영락;이철훈
    • Proceedings of the Korean Information Science Society Conference / 2003.04a / pp.64-66 / 2003
  • This paper presents the design and implementation of a highly available, Linux-based web server using the cold-standby approach, one of the ways a server system can be organized. A cold-standby configuration consists of a primary server and a backup server that stands in for it when needed: if a fault occurs on the primary server providing the service so that it can no longer serve requests, the backup server takes over its role. Switching from the primary to the backup server uses the open-source heartbeat facility available on Linux, so failover to the backup is possible when the primary fails, and the real servers that actually handle requests are clustered to reduce the overhead that heavy traffic on the primary server could cause. In addition, the mon facility monitors the state of the real servers, making it easy to add and remove them. By providing high availability for the web server at both the hardware and software levels, we implemented a web server that delivers highly available service with stability and reliability guaranteed to clients.

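The failover logic described above relies on the open-source heartbeat tool and on mon for monitoring the real servers; neither configuration is given in the abstract. The following is a minimal, hypothetical Python sketch of the same cold-standby idea, with the primary host name, port, check interval, and failure budget invented for illustration.

```python
import socket
import time

PRIMARY = ("primary.example.local", 80)   # hypothetical primary web server
CHECK_INTERVAL = 2                        # seconds between health checks
MAX_FAILURES = 3                          # consecutive failures tolerated before failover


def is_alive(host, port, timeout=1.0):
    """Return True if a TCP connection to the server succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def activate_backup():
    """Placeholder for taking over the service address and starting the web
    server on the backup node (the role heartbeat plays in the paper)."""
    print("primary is down: backup server takes over the service")


def monitor():
    failures = 0
    while True:
        if is_alive(*PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                activate_backup()
                break
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    monitor()
```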

Application of Hidden Markov Model to Intrusion Detection System (침입탐지 시스템을 위한 은닉 마르코프 모델의 적용)

  • Choe, Jong-Ho;Jo, Seong-Bae
    • Journal of KIISE:Software and Applications / v.28 no.6 / pp.429-438 / 2001
  • As information and communication infrastructures spread, intrusions into and damage to computer systems are increasing, and interest in and research on intrusion detection systems are growing accordingly. This paper proposes an intrusion detection system that uses a hidden Markov model (HMM) to model the event ID information generated by users' normal behavior and then detects abnormal user behavior. Preprocessed event ID sequences are built into a normal-behavior model using the forward-backward procedure and the Baum-Welch re-estimation formulas. A decision is made by computing, with the forward procedure, the probability that the sequence under examination was generated by normal behavior and comparing that value with a threshold. Through experiments, we determine the optimal HMM parameters for intrusion detection and evaluate normal-behavior modeling performance by comparing a single model with no user distinction, per-user models, and per-user-group models. The results confirm that the proposed system adequately detects the intrusions that occurred, but also show that more sophisticated model clustering is needed to build a highly reliable intrusion detection system.

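The detection step described above scores a preprocessed event-ID sequence with the forward procedure and compares the result to a threshold. Below is a minimal sketch of that step; the transition and emission matrices and the threshold are illustrative stand-ins, not parameters from the paper (which are learned with Baum-Welch).

```python
import numpy as np

# Hypothetical HMM representing normal behavior (values are illustrative only):
# A: state transitions, B: emission probabilities over event IDs, pi: initial distribution.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],    # state 0 emission probabilities for event IDs 0..2
              [0.1, 0.3, 0.6]])   # state 1
pi = np.array([0.6, 0.4])


def forward_log_likelihood(observations):
    """Forward procedure with scaling; returns log P(observations | model)."""
    alpha = pi * B[:, observations[0]]
    scale = alpha.sum()
    alpha /= scale
    log_prob = np.log(scale)
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
        scale = alpha.sum()
        alpha /= scale
        log_prob += np.log(scale)
    return log_prob


# Judge a preprocessed event-ID sequence against a (hypothetical) threshold.
THRESHOLD = -15.0
sequence = [0, 0, 1, 2, 2, 2, 1, 0]
score = forward_log_likelihood(sequence)
print("anomalous" if score < THRESHOLD else "normal", score)
```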

A Study on Proposal of the Priority Installation Area of Smart Shelter in Seoul (서울시 스마트쉘터 우선 설치 지역 제안에 대한 연구)

  • Sang-Hyun Yoo;Min-Jeong Kim;Yun-Hwa Kim;Chul-Hee Jung;Go-Eun Han
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.335-336 / 2023
  • This study proposes the areas where the smart shelters currently being piloted by the Seoul Metropolitan Government should be installed first. We assigned installation priorities to areas of Seoul through clustering that considers not only the floating population of each administrative dong but also the mobility-impaired population, heat waves, and air pollution, and we expect this to allow smart shelters to be installed efficiently.
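The abstract does not name a specific clustering algorithm or data source. Purely as an illustration of clustering administrative dongs on the stated factors, the sketch below assumes standardized features and K-means; all numbers are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical rows, one per administrative dong, with columns:
# [floating population, mobility-impaired population, heat-wave days, air-pollution index]
features = np.array([
    [52000, 3100, 12, 41.0],
    [18000, 2400,  9, 35.5],
    [77000, 5200, 15, 48.2],
    [23000, 1800,  8, 30.1],
    [61000, 4700, 14, 45.9],
])

scaled = StandardScaler().fit_transform(features)          # put features on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

# Dongs in the cluster with the highest overall demand would be candidates
# for priority smart-shelter installation.
print(labels)
```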

Case Study of Software Reverse Engineering using McCabe and BP/Win Tools (McCabe 및 BP/Win도구를 이용한 소프트웨어 역공학 사례연구)

  • Jo, Hyeon-Hun;Choe, Yong-Rak;Rhew, Sung-Yul
    • Journal of KIISE:Computing Practices and Letters / v.6 no.5 / pp.528-535 / 2000
  • This paper aims to provide guidelines for productive software construction by offering reusable modules that are used not only for effective maintenance at each step but also for a re-engineering process after analyzing the developed source code. There are four processing steps. The first is to analyze the source code. The second is module slicing and clustering using the McCabe and BP/Win tools. The third is to transform the outputs extracted from the business model into reusable modules. The final step is to design a repository and construct a system. In this paper, we apply the fourth step to our case study, which was specified from the first step through the fourth. The specified fourth step covers the various tasks involved in constructing the repository and reanalyzes informal, unstructured information using reverse-engineering tools, in order to provide effective guidelines for productive software maintenance and re-engineering.


A Motion Correspondence Algorithm based on Point Series Similarity (점 계열 유사도에 기반한 모션 대응 알고리즘)

  • Eom, Ki-Yeol;Jung, Jae-Young;Kim, Moon-Hyun
    • Journal of KIISE:Software and Applications / v.37 no.4 / pp.305-310 / 2010
  • In this paper, we propose a heuristic algorithm for motion correspondence based on point series similarity. A point series is a sequence of points sorted in ascending order of their x-coordinate values. The proposed algorithm clusters the points of the previous frame based on their local adjacency. For each group, we construct several candidate point series by permuting the points in it; each candidate is compared with the point series of the following frame, and the sets of points are matched according to their similarity under a proximity constraint. The longest common subsequence between two point series is used as global information to resolve local ambiguity. Experimental results show an accuracy of more than 90% on two image sequences from the PETS 2009 and CAVIAR data sets.
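The global cue in the abstract is the longest common subsequence between two point series, with a proximity constraint deciding when two points correspond. A minimal sketch of that similarity measure follows; the proximity threshold and the sample points are arbitrary.

```python
def close(p, q, eps=5.0):
    """Proximity constraint: two points match if they lie within eps of each other
    (eps is a hypothetical threshold)."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2


def lcs_length(series_a, series_b, eps=5.0):
    """Length of the longest common subsequence of two point series,
    where 'equality' is the proximity constraint above."""
    m, n = len(series_a), len(series_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if close(series_a[i - 1], series_b[j - 1], eps):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]


# A point series is the frame's points sorted in ascending order of x-coordinate.
prev_series = sorted([(10, 20), (14, 22), (30, 40)])
next_series = sorted([(11, 21), (15, 23), (60, 10)])
print(lcs_length(prev_series, next_series))   # similarity between the two series
```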

A Dynamic Resource Allocation Method in Tactical Network Environments Based on Graph Clustering (전술 네트워크 환경에서 그래프 클러스터링 방법을 이용한 동적 자원 할당 방법)

  • Kim, MinHyeop;Ko, In-Young;Lee, Choon-Woo
    • Journal of KIISE:Software and Applications / v.41 no.8 / pp.569-579 / 2014
  • In a tactical-edge environment, where multiple weapon resources are coordinated via services, it is essential to bind abstract services efficiently to the resources needed to execute the composite services that accomplish a given mission. However, the tactical network used in military operations has low bandwidth and a high rate of packet loss, so communication overhead between services must be minimized for composite services to run stably. In addition, a tactical-edge environment changes dynamically, which affects the connectivity and bandwidth of the tactical network. To deal with these characteristics, we propose two service-resource reallocation methods that minimize the communication overhead between service gateways and effectively handle the neutralization of gateways during distributed service coordination. We compare the effectiveness of the two methods in terms of the total communication overhead between service gateways and the similarity between the initial resource allocation and the reallocation result.
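The abstract does not spell out the two reallocation methods. As a rough illustration of the underlying idea, grouping services so that heavily communicating ones share a gateway, the sketch below applies spectral clustering to a hypothetical service-to-service traffic matrix; this is a stand-in technique, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical symmetric matrix of communication volume between six abstract services.
traffic = np.array([
    [0, 8, 7, 1, 0, 0],
    [8, 0, 6, 0, 1, 0],
    [7, 6, 0, 0, 0, 1],
    [1, 0, 0, 0, 9, 8],
    [0, 1, 0, 9, 0, 7],
    [0, 0, 1, 8, 7, 0],
], dtype=float)

# Treat traffic volume as graph affinity and split the services across two gateways.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(traffic)

# Communication that crosses gateway boundaries is the overhead to keep small.
inter_gateway = sum(traffic[i, j] for i in range(6) for j in range(i + 1, 6)
                    if labels[i] != labels[j])
print(labels, inter_gateway)
```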

Topic Analysis of the National Petition Site and Prediction of Answerable Petitions Based on Deep Learning (국민청원 주제 분석 및 딥러닝 기반 답변 가능 청원 예측)

  • Woo, Yun Hui;Kim, Hyon Hee
    • KIPS Transactions on Software and Data Engineering / v.9 no.2 / pp.45-52 / 2020
  • Since its opening, the national petition site has attracted much attention. In this paper, we perform a topic analysis of the national petition site and propose a deep-learning-based model for predicting answerable petitions. First, 1,500 petitions are collected and topics are extracted from their contents. Main subjects are defined using the K-means clustering algorithm, and detailed subjects are defined by topic modeling of the petitions belonging to each main subject. A long short-term memory (LSTM) network is then used to predict answerable petitions. In addition to the title and contents, the proposed model uses the category, the length of the text, and the ratios of parts of speech such as nouns, adjectives, adverbs, and verbs. Our experimental results show that the type 2 model, which uses these additional features, outperforms the type 1 model, which does not.
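Of the pipeline above, only the first step, defining main subjects by K-means over the petition texts, is sketched below; the petition snippets are made up (the study clusters 1,500 Korean petitions), and the LSTM-based answerability prediction with the extra features is not shown.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Made-up stand-ins for petition texts.
petitions = [
    "please expand childcare support for working parents",
    "strengthen punishment for drunk driving accidents",
    "increase childcare subsidies and kindergarten places",
    "harsher sentencing for repeat traffic offenders",
]

tfidf = TfidfVectorizer().fit_transform(petitions)

# Main subjects via K-means on the TF-IDF vectors; detailed subjects would then
# come from topic modeling inside each cluster.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
print(labels)
```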

Multi-document Summarization Based on Cluster using Term Co-occurrence (단어의 공기정보를 이용한 클러스터 기반 다중문서 요약)

  • Lee, Il-Joo;Kim, Min-Koo
    • Journal of KIISE:Software and Applications / v.33 no.2 / pp.243-251 / 2006
  • In multi-document summarization by salient-sentence extraction, it is important to remove redundant information, and the removal process considers the similarities and differences between sentences. In this paper, we propose a multi-document summarization method that extracts salient sentences without redundancy by means of a cohesive term clustering method that utilizes co-occurrence information. The cohesive term clustering method assumes that terms do not exist independently but are semantically related to one another. To find the relations between terms, we cluster sentences by topic and use the co-occurrence information of terms within the same topic. We conduct experiments with the DUC (Document Understanding Conferences) data. In these tests, our method summarizes better than other methods that use term co-occurrence information based on the term cohesion of a document or sentence unit, or that use simple statistical information.
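As a rough sketch of cluster-based extraction without redundancy, the code below clusters sentences and keeps the sentence closest to each cluster centroid; it uses TF-IDF vectors and K-means as stand-ins for the paper's cohesive term clustering with co-occurrence information, and the sentences are toy examples.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy sentences standing in for a multi-document set (the paper evaluates on DUC data).
sentences = [
    "The storm flooded several coastal towns on Friday.",
    "Heavy rain caused flooding along the coast at the end of the week.",
    "Rescue teams evacuated residents from low-lying areas.",
    "Emergency crews moved people out of flooded neighborhoods.",
]

vectors = TfidfVectorizer().fit_transform(sentences)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# One salient sentence per topic cluster: the sentence closest to its cluster centroid,
# which avoids picking two near-duplicate sentences.
summary = []
for c in range(2):
    idx = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(vectors[idx].toarray() - km.cluster_centers_[c], axis=1)
    summary.append(sentences[idx[np.argmin(dists)]])
print(summary)
```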

A Hybrid Recommendation Method based on Attributes of Items and Ratings (항목 속성과 평가 정보를 이용한 혼합 추천 방법)

  • Kim Byeong Man;Li Qing
    • Journal of KIISE:Software and Applications / v.31 no.12 / pp.1672-1683 / 2004
  • Recommender systems are a kind of web intelligence technique that performs daily information filtering for people. Researchers have developed collaborative (social) recommenders, content-based recommenders, and some hybrid systems. In this paper, we introduce a new hybrid recommendation method, ICHM, in which clustering techniques are applied to the item-based collaborative filtering framework. It provides a way to integrate content information into collaborative filtering, which contributes not only to reducing the sparsity of the data set but also to solving the cold-start problem. Extensive experiments have been conducted on MovieLens data to analyze the characteristics of our technique. The results show that our approach improves the prediction quality of item-based collaborative filtering, especially for the cold-start problem.
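A minimal sketch of the general ICHM idea, clustering items on their attributes and blending the resulting group information with rating-based item similarity, is shown below; the data, the soft-membership formula, and the 0.6/0.4 weights are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical data: 4 items with 3 content attributes, and 5 users rating 4 items (0 = unrated).
item_attributes = np.array([
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
    [0, 1, 0],
], dtype=float)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 0, 1, 0],
    [0, 5, 0, 2],
    [1, 0, 4, 5],
    [0, 1, 5, 4],
], dtype=float)

# Cluster items by content and use soft cluster membership as extra "group ratings".
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(item_attributes)
dist = km.transform(item_attributes)                 # distance of each item to each cluster
group_ratings = 1.0 / (1.0 + dist)                   # closer cluster -> higher pseudo-rating

# Item-item similarity from ratings alone vs. ratings augmented with content information.
sim_cf = cosine_similarity(ratings.T)
sim_hybrid = 0.6 * sim_cf + 0.4 * cosine_similarity(group_ratings)  # weights are illustrative
print(np.round(sim_hybrid, 2))
```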

A Study on Research Paper Classification Using Keyword Clustering (키워드 군집화를 이용한 연구 논문 분류에 관한 연구)

  • Lee, Yun-Soo;Pheaktra, They;Lee, JongHyuk;Gil, Joon-Min
    • KIPS Transactions on Software and Data Engineering / v.7 no.12 / pp.477-484 / 2018
  • Due to the advancement of computer and information technologies, numerous papers have been published, and as new research fields continue to emerge, users have great trouble finding and categorizing the papers they are interested in. To alleviate this difficulty, this paper presents a method of grouping similar papers into clusters. The presented method extracts primary keywords from the abstract of each paper using TF-IDF. Based on the extracted TF-IDF values, it then clusters papers with similar content using the K-means clustering algorithm. To demonstrate the practicality of the proposed method, we use papers from the FGCS journal as actual data. Based on these data, we derive the number of clusters using the elbow scheme and show the clustering performance using the silhouette scheme.
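The pipeline in the abstract (TF-IDF on abstracts, K-means clustering, the elbow scheme for choosing the number of clusters, silhouette scores for quality) can be sketched as follows; the placeholder abstracts stand in for the FGCS papers used in the study.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

# Placeholder abstracts; the study uses abstracts of FGCS journal papers.
abstracts = [
    "grid computing resource scheduling with heuristics",
    "cloud resource scheduling and task allocation heuristics",
    "deep learning for image classification on GPUs",
    "convolutional neural networks for image recognition",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# Elbow scheme: inspect inertia over a range of k; silhouette: measure cluster quality.
for k in range(2, 4):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 3), round(silhouette_score(X, km.labels_), 3))
```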