• Title/Summary/Keyword: Web cluster system


Fips : Dynamic File Prefetching Scheme based on File Access Patterns (Fips : 파일 접근 유형을 고려한 동적 파일 선반입 기법)

  • Lee, Yoon-Young;Kim, Chei-Yol;Seo, Dae-Wha
    • Journal of KIISE:Computer Systems and Theory / v.29 no.7 / pp.384-393 / 2002
  • A parallel file system is normally used to handle the heavy file requests of parallel applications in a cluster system, and prefetching is useful for improving file system performance. This paper proposes a new prefetching method, Fips (dynamic File Prefetching Scheme based on file access patterns), that is particularly suitable for parallel scientific applications and multimedia web services on a parallel file system. The proposed method introduces a dynamic prefetching scheme that predicts data blocks precisely at run-time even when file access patterns are irregular. In addition, it includes an algorithm that determines whether and when prefetching should be performed, based on the currently available I/O bandwidth. Experimental results confirmed that the proposed prefetching policy yields higher file system performance.
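The two ideas in the abstract, predicting the next blocks from the access history and prefetching only when spare I/O bandwidth exists, can be sketched as follows. This is an illustrative sketch, not the paper's Fips implementation; all class names and thresholds are assumptions.

```python
# Sketch: predict the next file blocks from the recent access history and
# prefetch only when spare I/O bandwidth is available (assumed thresholds).

from collections import deque

class StridePrefetcher:
    def __init__(self, history=4, min_free_bandwidth=0.2):
        self.accesses = deque(maxlen=history)   # recently accessed block numbers
        self.min_free_bandwidth = min_free_bandwidth

    def record(self, block):
        self.accesses.append(block)

    def predict(self, count=2):
        """Return blocks to prefetch if the history shows a constant stride."""
        if len(self.accesses) < 3:
            return []
        a = list(self.accesses)
        strides = [b - c for b, c in zip(a[1:], a)]
        if len(set(strides)) != 1:              # irregular pattern: no prediction
            return []
        stride = strides[0]
        return [a[-1] + stride * i for i in range(1, count + 1)]

    def should_prefetch(self, free_bandwidth):
        """Prefetch only when enough I/O bandwidth is currently unused."""
        return free_bandwidth >= self.min_free_bandwidth

p = StridePrefetcher()
for blk in (10, 12, 14, 16):
    p.record(blk)
print(p.predict())             # [18, 20]
print(p.should_prefetch(0.5))  # True
```

A real scheme would also handle non-constant but repeating patterns; the paper's contribution is doing this prediction dynamically at run-time.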

Distributed Intrusion Detection System for Safe E-Business Model (안전한 E-Business 모델을 위한 분산 침입 탐지 시스템)

  • 이기준;정채영
    • Journal of Internet Computing and Services / v.2 no.4 / pp.41-53 / 2001
  • The multi-distributed web cluster model built for a high-availability E-Business model exposes its internal system nodes owing to its structural characteristics, so normal job processing can become impossible through intentional interference and attacks by an illegal third party. A security system is therefore needed that protects the system nodes and can respond effectively to information leakage by illegal users and to improper service requests. The proposed distributed intrusion detection system detects illegal requests and resource accesses against system nodes distributed over an open network through coordinated control between SC-Agents based on the shared memory of the SC-Server. To detect illegal intrusions, the system first inspects job-request packets with a Detection Agent, observes the job through a Monitoring Agent while it is in progress, and then judges whether an intrusion has occurred through close cooperation with the other system nodes whenever an unpermitted resource access or request is made.


Spatial Analysis Methods for Asbestos Exposure Research (석면노출연구를 위한 공간분석기법)

  • Kim, Ju-Young;Kang, Dong-Mug
    • Journal of Environmental Health Sciences / v.38 no.5 / pp.369-379 / 2012
  • Objectives: Spatial analysis is useful for understanding complicated causal relationships. This paper focuses on trends in, and applicable methods of, spatial analysis associated with environmental asbestos exposure. Methods: A literature review, together with the authors' experience, was used to survey the academic background of spatial analysis and its application to epidemiology and asbestos exposure. Results: Spatial analysis based on spatial autocorrelation provides a variety of methods for mapping, cluster analysis, diffusion, interpolation, and identification. The causes of disease occurrence can be investigated through spatial analysis, and appropriate methods can be chosen according to contagiousness and continuity. Spatial analysis of asbestos exposure sources is needed to study asbestos-related diseases. Although a great deal of research has used spatial analysis for exposure assessment and for the distribution of disease occurrence, these studies tend to focus on constructing a thematic map without further analysis. Recently, spatial analysis has advanced by merging with web tools, mobile computing, statistical packages, social network analysis, and big data. Conclusions: Because spatial analysis has evolved from simple marking into a variety of forms of analysis, environmental researchers, including those studying asbestos exposure, need to be aware of these recent trends.

Contextual Advertisement System based on Document Clustering (문서 클러스터링을 이용한 문맥 광고 시스템)

  • Lee, Dong-Kwang;Kang, In-Ho;An, Dong-Un
    • The KIPS Transactions:PartB / v.15B no.1 / pp.73-80 / 2008
  • In this paper, an advertisement-keyword finding method using document clustering is proposed to solve problems caused by ambiguous words and the incorrect identification of main keywords. News articles that have similar contents and the same advertisement-keywords are clustered to construct the contextual information of the advertisement-keywords. In addition to news articles, the web page and summary of a product are also used to construct this contextual information. A given document is classified into one of the news-article clusters, and the cluster's relevant advertisement-keywords are then used to identify keywords in the document. The proposed method achieved a 21% improvement in precision.
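The cluster-then-match step described above can be sketched with a minimal example. This is not the paper's system: the clusters, keyword lists, and term vectors below are invented for illustration, and a real system would build the clusters from news articles and product pages as the abstract describes.

```python
# Sketch: assign a document to the most similar cluster (by cosine similarity
# over term frequencies), then return that cluster's advertisement keywords.

from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each cluster: a term-frequency "centroid" plus its advertisement keywords
# (both invented examples).
clusters = {
    "finance": (Counter({"loan": 3, "bank": 2, "rate": 2}), ["credit card", "loan offer"]),
    "travel":  (Counter({"flight": 3, "hotel": 2, "tour": 1}), ["air ticket", "hotel deal"]),
}

def ad_keywords(doc_terms):
    doc = Counter(doc_terms)
    best = max(clusters, key=lambda c: cosine(doc, clusters[c][0]))
    return clusters[best][1]

print(ad_keywords(["cheap", "flight", "hotel", "booking"]))  # ['air ticket', 'hotel deal']
```

Because the match is against a whole cluster rather than single keywords, an ambiguous word in the document is interpreted in the context of the best-matching cluster.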

Design and Implementation of Service based Virtual Screening System in Grids (그리드에서 서비스 기반 가상 탐색 시스템 설계 및 구현)

  • Lee, Hwa-Min;Chin, Sung-Ho;Lee, Jong-Hyuk;Lee, Dae-Won;Park, Seong-Bin;Yu, Heon-Chang
    • Journal of KIISE:Computer Systems and Theory / v.35 no.6 / pp.237-247 / 2008
  • Virtual screening is the process of reducing an unmanageable number of compounds to a limited number of candidates for a target of interest by means of computational techniques such as molecular docking. It is a large-scale scientific application that requires great computing power and data storage capability. Previous molecular docking applications such as AutoDock, FlexX, Glide, DOCK, LigandFit, and ViSION were developed to run on a supercomputer, a workstation, or a cluster. However, virtual screening on a supercomputer is very expensive, while virtual screening on a workstation or cluster requires a long execution time. We therefore propose a service-based virtual screening system using Grid computing technology, which supports large data-intensive operations. We constructed a three-dimensional chemical molecular database for virtual screening, designed a resource broker and a data broker to support an efficient molecular docking service, and proposed various services for virtual screening. We implemented the system with DOCK 5.0 and the Globus 3.2 toolkit. Our system can reduce the time and cost of designing drugs or new materials.

KUGI: A Database and Search System for Korean Unigene and Pathway Information

  • Yang, Jin-Ok;Hahn, Yoon-Soo;Kim, Nam-Soon;Yu, Ung-Sik;Woo, Hyun-Goo;Chu, In-Sun;Kim, Yong-Sung;Yoo, Hyang-Sook;Kim, Sang-Soo
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.407-411 / 2005
  • The KUGI (Korean UniGene Information) database contains annotation information for cDNA sequences obtained from samples of diseases prevalent in Koreans. A total of about 157,000 5'-EST high-throughput sequences, collected from cDNA libraries of stomach, liver, and other cancer tissues or from cell lines established from Korean patients, were clustered into about 35,000 contigs. From each cluster a representative clone having the longest high-quality sequence or the start codon was selected. We stored the sequences of the representative clones and the clustered contigs in the KUGI database, together with information obtained by running BLAST against the RefSeq, human mRNA, and UniGene databases from NCBI. We provide a web-based search engine for the KUGI database with two types of user interface: attribute-based search and sequence-similarity search. Attribute-based search uses DBMS technology, while similarity search uses BLAST with its various search options. The search system allows not only multiple queries but also various query types. The results include: 1) information on clones and libraries; 2) accession keys, genome locations, gene ontology, and pathway links to public databases; 3) links to external programs; and 4) sequence information for contigs and the 5'-ends of clones. We believe that the KUGI database and search system provide useful information for studies elucidating the causes of diseases prevalent in Koreans.


A high speed processing method of web server cluster through round robin load balancing (라운드로빈 부하균형을 통한 웹 서버 클러스터 고속화 처리기법)

  • Sung Kyung;Kim Seok-soo;Park Gil-cheol
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.7 / pp.1524-1531 / 2004
  • This study analyzes a load balancing technique based on the round-robin algorithm. Two software packages (a packet-capture tool and a round-robin test package) are used to measure packet volume in a virtual network (a data generator, a virtual server, and Servers 1, 2, and 3) and to determine the traffic distribution across the three servers. The implemented round-robin load-balancing monitoring system supports round-robin testing, system monitoring, and graphical display of data transmission and packet volume. The results show that the round-robin algorithm gives the servers an even traffic distribution as long as incoming data loads do not differ greatly. Although error levels were high in some cases, they were eventually reduced by repeated tests over a long period of time.
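The core dispatching rule the study tests can be sketched in a few lines. This is a minimal illustration of round-robin selection, not the paper's monitoring system, and the server names are invented.

```python
# Sketch: round-robin dispatch hands requests to servers in fixed rotation,
# which evens out load as long as requests are similar in size.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)   # endless rotation over the server list

    def next_server(self):
        return next(self._servers)

lb = RoundRobinBalancer(["server1", "server2", "server3"])
assignments = [lb.next_server() for _ in range(6)]
print(assignments)
# ['server1', 'server2', 'server3', 'server1', 'server2', 'server3']
```

The study's observation follows directly from this rule: distribution is exactly even per request count, so imbalance appears only when individual requests differ greatly in load.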

Establishment and Application of GIS-Based DongNam Kwon Industry Information System (GIS기반 동남 광역권 산업체 정보시스템 구축 및 활용)

  • Nam, Kwang-Woo;Kwon, Il-Hwa;Park, Jun-Ho
    • Journal of the Korean Association of Geographic Information Studies / v.17 no.1 / pp.70-79 / 2014
  • With the development of wide-area traffic networks and communication technology, the importance of cooperative systems for vitalizing wide-area economies is increasing. In this study, the DongNam Kwon industry information system was built for GIS-based sharing of industrial information across the DongNam Kwon wide-area economy. DongNam Kwon is an industrial agglomeration centered on manufacturing, so effective industrial clusters and cooperation systems are required across administrative boundaries. The information system was built on the industrial databases already established in Busan, Ulsan, and Gyeongnam. However, various issues were found, caused by inconsistencies among the data of the local governments and by insufficient GIS-based location information. The analysis shows that standardization covering collection, distribution, and utilization is urgently required to resolve these issues. This study builds a two-way industrial information system on the web, using cadastral and digital maps, that enables information creation and phased access between administrators and users. The result provides a fundamental framework for cooperative responses and a cooperation system promoting DongNam Kwon's industry through shared industrial information.

Gathering Common-word and Document Reclassification to improve Accuracy of Document Clustering (문서 군집화의 정확률 향상을 위한 범용어 수집과 문서 재분류 알고리즘)

  • Shin, Joon-Choul;Ock, Cheol-Young;Lee, Eung-Bong
    • The KIPS Transactions:PartB / v.19B no.1 / pp.53-62 / 2012
  • Clustering technology is used to deal efficiently with the many documents retrieved by an information retrieval system, but the accuracy of clustering meets requirements in only some domains. This paper proposes two methods to increase clustering accuracy. First, we define a common-word: a word that is used frequently but should carry low weight during clustering. We propose a method that automatically gathers common-words from the searched documents and calculates their weights. In experiments, the clustering error rate using common-words was reduced by 34% compared with clustering using only a stop-word list. Second, after generating initial clusters from the searched documents with average-link clustering, we propose an algorithm that re-evaluates the similarity between each document and the clusters and reclassifies each document into a more similar cluster. In experiments on the Naver JiSikIn category, the accuracy of the reclassified clusters improved by 1.81% compared with the initial clusters without reclassification.
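The common-word idea can be sketched as a document-frequency weighting over the searched set. This is an assumption-laden illustration, not the paper's algorithm: the 0.8 ratio threshold and the log-IDF weighting are invented stand-ins for whatever weighting the paper actually uses.

```python
# Sketch: a term appearing in most of the searched documents carries little
# information for clustering them, so it is treated as a common-word and its
# weight is zeroed even if it is not in a stop-word list. Threshold assumed.

from math import log

def term_weights(docs, common_ratio=0.8):
    """Log-scaled inverse document frequency over the searched set; terms in
    more than `common_ratio` of the documents get zero weight."""
    n = len(docs)
    df = {}
    for d in docs:
        for t in set(d):
            df[t] = df.get(t, 0) + 1
    return {t: 0.0 if c / n > common_ratio else log(n / c) for t, c in df.items()}

docs = [["game", "review", "score"],
        ["game", "patch", "update"],
        ["game", "score", "team"]]
w = term_weights(docs)
print(w["game"])       # 0.0 -> appears in every document: treated as a common-word
print(w["patch"] > 0)  # True
```

The key difference from a fixed stop-word list is that "game" is demoted only because it is common *within this result set*; in a different query's results it could keep full weight.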

A Study on the Data Collection Methods based Hadoop Distributed Environment (하둡 분산 환경 기반의 데이터 수집 기법 연구)

  • Jin, Go-Whan
    • Journal of the Korea Convergence Society / v.7 no.5 / pp.1-6 / 2016
  • Many studies have recently been carried out on big data utilization and analysis technology, and government agencies and companies are increasingly introducing Hadoop as a platform for processing and analyzing big data. With this growing interest, data collection technology has become a major issue as well, yet it has received far less study than data analysis techniques. This paper therefore builds a Hadoop cluster as a big data analysis platform and collects structured data from relational databases through Apache Sqoop. In addition, it presents a system that uses Apache Flume to collect unstructured data, such as sensor data and web application log files, as streams. Data gathered through this convergence of tools can serve as source material for big data analysis.
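The two collection paths described above are typically driven by commands like the following. These are illustrative fragments only, not the paper's configuration: the connection string, table name, target directory, and agent name are invented, and both commands assume a running Hadoop cluster with Sqoop and Flume installed.

```shell
# Structured data: import a relational table into HDFS with Sqoop
# (hypothetical host, database, table, and paths).
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl --password-file /user/etl/.pw \
  --table orders \
  --target-dir /data/raw/orders

# Unstructured data: start a Flume agent that streams log/sensor events
# into HDFS, as defined in an agent configuration file (hypothetical names).
flume-ng agent --conf conf --conf-file conf/agent.conf --name a1
```

In this division of labor, Sqoop handles batch imports of structured records while Flume handles continuous streams, which matches the structured/unstructured split the abstract describes.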