• Title/Summary/Keyword: Client-cluster

49 search results

The development of the high effective and stoppageless file system for high performance computing (High Performance Computing 환경을 위한 고성능, 무정지 파일시스템 구현)

  • Park, Yeong-Bae;Choe, Seung-Hwan;Lee, Sang-Ho;Kim, Gyeong-Su;Gong, Yong-Jun
    • Proceedings of the Korea Contents Association Conference / 2004.11a / pp.395-401 / 2004
  • In today's network-centric computing and enterprise environments, transmitting data reliably at very high rates has become essential. Client/server-based file systems such as NFS (Network File System) and AFS (Andrew File System) have met many demands so far, but they can no longer satisfy the requirements of today's scalable high-performance computing environments. Not only performance but also redundancy of the data-sharing service has become a serious problem. With NFS, locking and caching issues force the file system to reboot and cause problems when it is used simply with IP takeover for high-availability service. AFS provides file-sharing redundancy, but only when storage and equipment supporting redundancy are in place. Lustre is an open-source cluster file system developed to meet both demands. Lustre consists of three types of subsystems: the MDS (Meta-Data Server), which offers metadata services; OSTs (Object Storage Targets), which provide file I/O; and Lustre clients, which interact with the OSTs and MDS. These subsystems exchange messages to deliver a scalable, high-performance file system service. In this paper, we compare the transfer speed of gigabyte-scale files between Lustre and NFS as the number of concurrent users varies, and we also demonstrate the high availability of the file system by removing one or more OSTs during operation.

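The MDS/OST/client division of labor described above can be sketched in miniature. The classes and round-robin striping below are illustrative assumptions for exposition, not Lustre's actual on-wire protocol:

```python
# Toy model of the Lustre-style split: the metadata server (MDS) only records
# which object storage targets (OSTs) hold a file's stripes; all bulk I/O then
# flows directly between the client and the OSTs, bypassing the MDS.

class MDS:
    """Toy metadata server mapping filenames to their OST layout."""
    def __init__(self):
        self.layouts = {}

    def create(self, name, osts):
        self.layouts[name] = list(osts)

    def lookup(self, name):
        return self.layouts[name]

class OST:
    """Toy object storage target holding stripes keyed by (file, stripe index)."""
    def __init__(self):
        self.objects = {}

    def write(self, name, idx, data):
        self.objects[(name, idx)] = data

    def read(self, name, idx):
        return self.objects[(name, idx)]

def client_write(mds, name, data, stripe_size=4):
    """Stripe data round-robin across the OSTs the MDS lists for this file."""
    targets = mds.lookup(name)
    for idx, start in enumerate(range(0, len(data), stripe_size)):
        targets[idx % len(targets)].write(name, idx, data[start:start + stripe_size])

def client_read(mds, name):
    """Reassemble a file by pulling its stripes back from the OSTs in order."""
    targets = mds.lookup(name)
    chunks, idx = [], 0
    while (name, idx) in targets[idx % len(targets)].objects:
        chunks.append(targets[idx % len(targets)].read(name, idx))
        idx += 1
    return b"".join(chunks)
```

Because the MDS handles only layout lookups, adding OSTs scales aggregate bandwidth; the availability test in the paper (removing OSTs in operation) relies on redundancy that this sketch omits.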

A Fuzzy Technique-based Web Server Performance Improvement Using a Load Balancing Mechanism (퍼지기법에 기초한 로드분배 방식에 의한 웹서버 성능향상)

  • Park, Bum-Joo;Park, Kie-Jin;Kang, Myeong-Koo;Kim, Sung-Soo
    • Journal of KIISE:Computer Systems and Theory / v.35 no.3 / pp.111-119 / 2008
  • This paper combines fuzzy concepts with an existing dynamic performance-isolation technique to improve the response-time performance of a Web server supporting differentiated services. A load-balancing mechanism based on fuzzy control is developed so that ambiguous situations arising from workload estimation in cluster-based Web servers, client request rates, and dynamic request rates can be represented effectively. In addition, we verify that the fuzzy-based performance-isolation technique efficiently improves the performance and robustness of differentiated-service systems by comparing the 95th-percentile response time of the fuzzy-based technique with that of the existing one, which does not use fuzzy concepts.
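The fuzzy load-balancing idea can be sketched briefly. The abstract does not give the paper's membership functions or rule base, so the triangular memberships and rule weights below are assumptions; each server's utilization is fuzzified into low/medium/high degrees and defuzzified into a dispatch weight:

```python
# Illustrative fuzzy dispatcher: fuzzify each server's utilization, then apply
# a weighted-average (center-of-sets) defuzzification to get a dispatch weight.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_weight(utilization):
    """Map a utilization in [0, 1] to a dispatch weight in [0, 1]."""
    low = tri(utilization, -0.5, 0.0, 0.5)   # rule: low load    -> weight 1.0
    med = tri(utilization, 0.0, 0.5, 1.0)    # rule: medium load -> weight 0.5
    high = tri(utilization, 0.5, 1.0, 1.5)   # rule: high load   -> weight 0.1
    numerator = low * 1.0 + med * 0.5 + high * 0.1
    denominator = low + med + high
    return numerator / denominator if denominator else 0.0

def pick_server(utilizations):
    """Send the next request to the server whose fuzzy weight is largest."""
    return max(range(len(utilizations)), key=lambda i: fuzzy_weight(utilizations[i]))
```

Unlike a hard threshold, the weight degrades gradually as load rises, which is how fuzzy control absorbs the ambiguity in workload estimates.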

Traffic Analysis Monitoring System for Web Server Load Balancing (웹서버의 부하균형을 위한 트래픽상황분석 모니터링 시스템)

  • Choi E-Jung;Lee Eun-Seok;Kim Seok-Soo
    • The Journal of the Korea Contents Association / v.5 no.2 / pp.79-85 / 2005
  • To handle clients' requests while multiple servers work seamlessly in a Web server cluster environment, it is vital to implement a router that performs routing using TCP information and the requested target content. The implemented software package measured the packet volume generated by the data generator, the virtual server, and servers 1, 2, and 3, and determined the traffic distribution toward servers 1, 2, and 3. The results show that the round-robin algorithm ensured an even traffic distribution as long as incoming data loads did not differ greatly. Although error levels were high in some cases, they were alleviated by repeated tests over a longer period.

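The round-robin dispatch evaluated above is simple to state precisely; this minimal sketch shows why per-server counts stay equal when loads are similar:

```python
from collections import Counter
from itertools import cycle

def round_robin_dispatch(servers, requests):
    """Assign incoming requests to servers in strict rotation."""
    rotation = cycle(servers)
    return [(request, next(rotation)) for request in requests]
```

Rotation alone equalizes request counts, but not byte counts; that is why the study observed higher error when incoming data loads differed much.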

Trustworthy Mutual Attestation Protocol for Local True Single Sign-On System: Proof of Concept and Performance Evaluation

  • Khattak, Zubair Ahmad;Manan, Jamalul-Lail Ab;Sulaiman, Suziah
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.9 / pp.2405-2423 / 2012
  • In a traditional Single Sign-On (SSO) scheme, the user and the Service Providers (SPs) place their trust in the Identity Provider (IdP) or Authentication Service Provider (ASP) for authentication and correct assertion. However, a better solution is still needed for local/native true SSO to gain user confidence, whereby the trusted entity must play the role of the ASP between distinct SPs. This technical gap has been filled by Trusted Computing (TC), where the remote attestation approach introduced by the Trusted Computing Group (TCG) attests whether a remote platform's integrity is indeed trusted. In this paper, we demonstrate a Trustworthy Mutual Attestation (TMutualA) protocol as a proof-of-concept implementation for local true SSO using the Integrity Measurement Architecture (IMA) with the Trusted Platform Module (TPM). In our proposed protocol, the user and SP platform integrity are checked first (i.e., hardware and software integrity state verification) before access to a protected resource sited at the SP is allowed and a user authentication token is released to the SP. We evaluated the performance of the proposed TMutualA protocol, in particular the client and server attestation time and the round-trip mutual attestation time.
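The mutual-attestation idea, greatly simplified, can be sketched as follows. Real TPM quotes are signed by an Attestation Identity Key over PCR registers; the IMA-style log digest, "golden" reference values, and token format here are illustrative assumptions:

```python
import hashlib
import secrets

def measure(software_stack):
    """Hash an ordered list of measured components, like an IMA-style measurement log digest."""
    digest = hashlib.sha256()
    for component in software_stack:
        digest.update(component.encode())
    return digest.hexdigest()

def mutual_attest(client_stack, sp_stack, golden_client, golden_sp):
    """Each side verifies the other's integrity digest before a token is released."""
    if measure(client_stack) != golden_client:
        return None  # SP rejects the client platform
    if measure(sp_stack) != golden_sp:
        return None  # client rejects the SP platform
    return secrets.token_hex(16)  # authentication token released only on mutual success
```

The key property mirrored from the protocol is ordering: integrity verification of both platforms happens before any authentication token exists.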

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city built to satisfy the human desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT), and it includes a large number of networked video cameras. These cameras, together with sensors, supply the main input data for many U-City services, generating huge amounts of video information, truly big data, around the clock. The U-City is usually required to manipulate this big data in real time, which is not easy at all. Often the accumulated video data must also be analyzed to detect an event or find a person, which requires substantial computational power and usually takes a long time. Current research tries to reduce the processing time of big video data, and cloud computing is a good way to address this matter. Among the many applicable cloud-computing methodologies, MapReduce is interesting and attractive: it has many advantages and is gaining popularity in many areas. As video cameras evolve, their resolution improves sharply, leading to exponential growth of the data produced by networked cameras; handling the output of high-quality cameras means coping with real big data. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities thanks to such methodologies. Because video data are unstructured, good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable.
It consists of a video manager, video monitors, storage for the video images, a storage client, and a "streaming IN" component. The "video monitor" consists of a "video translator" and a "protocol manager", and the storage contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives video data from the networked cameras and delivers it to the "storage client", managing network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component, stores it, and helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol. The "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video, and the "protocol" sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage: Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We propose our own methodology for analyzing video images using MapReduce, presenting and explaining the video-analysis workflow in detail. The performance evaluation showed that our proposed system worked well, and the results are presented with analysis. On our cluster we used compressed 1920×1080 (FHD) video with the H.264 codec and HDFS as video storage, and measured the processing time as a function of the number of frames per mapper. Tracing the optimal split size of the input data and the processing time as a function of the number of nodes, we found that the system performance scales linearly.
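The MapReduce workflow described above can be illustrated in miniature. The per-frame "detector" and the in-memory frame grouping here are stand-ins; the paper's actual mappers run over HDFS splits of H.264 video, which is omitted:

```python
from collections import defaultdict

def map_frame(frame_id, frame_pixels, threshold=128):
    """Mapper: emit (camera, 1) when a frame's mean brightness crosses a threshold,
    standing in for a real event detector."""
    camera, _frame_no = frame_id
    mean = sum(frame_pixels) / len(frame_pixels)
    return [(camera, 1)] if mean > threshold else []

def reduce_counts(mapped):
    """Reducer: sum event counts per camera key."""
    totals = defaultdict(int)
    for pairs in mapped:
        for key, value in pairs:
            totals[key] += value
    return dict(totals)
```

Because each mapper sees only its own group of frames, the "frames per mapper" knob measured in the paper trades per-task overhead against parallelism, which is what the optimal-split-size search explores.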

The Distribution Structure of the Internet Movie and Spatial Clustering of the Internet Movie Industry (인터넷 영화의 유통구조와 인터넷 영화산업의 공간적 집적화)

  • Lee, Hee-Yeon;Lee, Nan-Kyung
    • Journal of the Economic Geographical Society of Korea / v.8 no.1 / pp.107-130 / 2005
  • The purposes of this study were to examine the spatial distribution and locational characteristics of the Internet movie industry, to identify the value chain of the industry and the distribution structure of Internet movies, and to analyze the vertical and horizontal linkages of Internet movie firms and their spatial clustering. Recently the Internet movie industry has developed rapidly thanks to advances in movie-content technology, broadband Internet, the wide expansion of high-speed communication networks, and growing demand for movie content. It was found that 74% of the Internet movie industry was concentrated in Seoul, agglomerated especially in several dongs of Gangnam-gu such as Yeoksam, Nonhyeon, Daechi, and Samseong. Proximity to the same or similar businesses was the primary locational factor for the Internet movie industry, followed by convenience of transportation, the reputation of the place, and proximity to technical-support firms. The industry had a value chain composed of contents suppliers → contents distributors → service providers; however, there was also a complex network of VOD copyright owners, VOD syndicators, and service providers within each category of the value chain. This research clearly revealed that a localized cluster has formed among the movie content providers, technical-support firms, client firms, and cooperative affiliated businesses related to the Internet movie industry. Additionally, a very close network has been established within the cluster, inducing enlargement of the market, decrease of costs, sharing of tacit knowledge, and synergy effects.


A Case of Ependymoma in a Dog; Computed Tomography, Histopathological and Immunohistochemical Findings (개에서 발생한 뇌실막종 증례; 컴퓨터 단층영상, 조직병리학적 그리고 면역조직화학적 소견)

  • Lee, Hee-Chun;Kim, Na-Hyun;Cho, Kyu-Woan;Jung, Hae-Won;Moon, Jong-Hyun;Kim, Ji-Hyun;Sur, Jung-Hyang;Jung, Dong-In
    • Journal of Veterinary Clinics / v.31 no.2 / pp.117-120 / 2014
  • An 11-year-old intact female Maltese was referred with a one-week history of cluster seizure episodes. Based on a brain CT scan, a brain tumor was strongly suspected. The patient was euthanized at the client's request and a necropsy was performed. On gross examination, postmortem coronal sections of the brain showed a relatively well-demarcated, reddish mass inside the left lateral ventricle that compressed adjacent tissues. The tumor mass had two distinct histopathological features: perivascular pseudorosette-like structures and a whorl-like arrangement of fibrillary cells. The immunohistochemical profile showed strong GFAP positivity, moderate S-100 expression, and sparsely dotted Ki-67 staining. Based on the histopathological and immunohistochemical findings, the present case was diagnosed as ependymoma.

A Survey on Home Health Care Needs in Youn-Cheon County in Korea (일개 군지역의 가정간호 요구조사)

  • 한경자;박성애;하양숙;윤순녕;송미순
    • Journal of Korean Academy of Nursing / v.24 no.3 / pp.484-498 / 1994
  • The purpose of this study was to investigate home care needs in a rural county as a basis for developing a Korean home care model. A stratified cluster sampling method was used to select 1,352 households, accounting for 8.8% of the population of Youn-Cheon County. Standard criteria for home care subjects were delineated by five nursing professors representing five different areas of nursing specialty. The criteria were as follows: 1) patients discharged from hospital during the previous week; 2) patients with special medical devices; 3) newborns and their mothers; 4) the chronically ill with poor recovery or poor control of disease; 5) subjects with poor health care behavior or ability; 6) subjects with poor social support and/or family resources; and 7) subjects with health-related educational needs. Three types of questionnaires were developed to screen home care subjects: one for adults, one for infants, and one for the elderly. Different questionnaire items were also developed to evaluate the control and self-care ability of chronically ill subjects. After two days of training in interview methods, 39 interviewers visited individual households. The results showed that 14.1% of adult subjects and 76.5% of infants and children met at least one criterion related to home care need, and 15.69% of adults and 53% of the elderly had at least one chronic illness. The most prevalent chronic illnesses were hypertension, skeletal-neurological disease, and diabetes. Among subjects with home care needs, the most prevalent were those with poor health care behavior (8.89%), with health-related educational needs (8.71%), with poor recovery or control of disease (3.52%), and with poor social support and inadequate family resources (3.19%). Only 0.3%, 0.37%, and 0.11% were recently discharged patients, patients with medical devices, or newborns, respectively.
Thus, the largest home care client group was those needing direct health care and health education. Seventy-five percent of the subjects responded that they would be willing to use and pay for home care service if it were offered in the future. It is suggested that recently discharged patients and patients with special medical devices be cared for by hospital-based home care nurses, while other home care clients can be cared for by community-based home care nurses.

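The stratified cluster sampling used above can be sketched simply. The strata names and cluster sizes below are made up for illustration; in the survey, the sampling drew about 8.8% of households:

```python
import random

def stratified_cluster_sample(strata, fraction, seed=0):
    """From each stratum, draw whole clusters at random until roughly
    `fraction` of that stratum's households are covered."""
    rng = random.Random(seed)
    sampled = {}
    for name, clusters in strata.items():
        total = sum(len(cluster) for cluster in clusters)
        target = fraction * total
        order = clusters[:]
        rng.shuffle(order)
        chosen, covered = [], 0
        for cluster in order:
            if covered >= target:
                break
            chosen.append(cluster)
            covered += len(cluster)
        sampled[name] = chosen
    return sampled
```

Drawing whole clusters keeps interviewer travel practical, which is the usual reason a household survey prefers cluster sampling over a simple random sample.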

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering / v.2 no.2 / pp.81-90 / 2013
  • During the past decade, many changes have been attempted and new technologies are continually being developed in the computing area. The brick walls in computing, especially the power wall, have shifted the computing paradigm from hardware (processors and system architecture) to programming environments and application usage. The high-performance computing (HPC) area in particular has experienced dramatic changes and is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result systems of tens of PetaFLOPS are now prevalent. Korea's ICT is well developed and Korea is considered one of the leading countries in the world, but not in supercomputing. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at 300 TeraFLOPS for bioinformatics applications such as human genome analysis and protein-protein docking. The MAHA system consists of four major parts: computing hardware, file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance per cost, per area, and per watt. To provide high-speed data movement and large capacity, the MAHA file system has an asymmetric cluster architecture consisting of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed for user-friendliness and ease of use through integrated system-management components such as bio-workflow management, integrated cluster management, and heterogeneous resource management. The MAHA system was first installed in December 2011, with a theoretical peak of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes.
The system will be upgraded to 100 TeraFLOPS in January 2013.
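The installed-system figures quoted above imply a sustained-to-peak efficiency that is easy to check with quick arithmetic:

```python
# Quick arithmetic on the reported figures: 50 TeraFLOPS theoretical peak,
# 30.3 TeraFLOPS measured, across 32 computing nodes.

theoretical_tflops = 50.0
measured_tflops = 30.3
nodes = 32

efficiency = measured_tflops / theoretical_tflops   # sustained / peak fraction
per_node_peak = theoretical_tflops / nodes          # peak TeraFLOPS per node

print(f"{efficiency:.1%} of peak, {per_node_peak:.2f} TFLOPS/node peak")
# -> 60.6% of peak, 1.56 TFLOPS/node peak
```

Roughly 60% of peak is plausible for a heterogeneous cluster, where accelerator utilization typically keeps sustained performance well below the combined theoretical peak.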