• Title/Summary/Keyword: Web Cluster

Search Results: 238

Study on An Admission Control Scheme to Improve QoS of Overload Cluster Web Server (클러스터 웹서버 성능개선을 위한 과부하 방지 QoS 보장 수락 제어 기법 연구)

  • Nahm Eui-seok; Kang E.G.; Chung H.S.; Lee J.H.; Hyun D.C.
    • Journal of the Korea Computer Industry Society / v.6 no.3 / pp.391-400 / 2005
  • In recent years, the widespread adoption of computers and high-speed networks has sharply increased the number of Internet users, and Internet usage has grown explosively as well. This enormous increase in traffic places an excessive load on servers and degrades performance across the entire service. It is therefore necessary to keep providing normal service even when a server receives a large volume of traffic. In this paper, an admission control algorithm is used to solve the server overload problem caused by such traffic surges. Unlike content adaptation or load balancing, which passively relieve or distribute load, admission control actively regulates the server's load by placing limits on incoming traffic. When a client issues a connection request, the server's load state is evaluated, and the new connection is accepted and serviced only if the server can accommodate it. [abridged] In addition, a restrictive admission policy reacts quickly to changes in server load and yields improved QoS (Quality of Service), whereas a policy that accepts more service requests reacts more slowly to load changes but gains higher throughput. This shows that QoS and throughput are in a trade-off relationship. It is also possible to apply different admission control policies according to the nature of the requested service, that is, depending on whether a higher level of QoS or a higher level of throughput is the more important guarantee.
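
As a rough illustration of the accept/reject decision described above, the following minimal sketch admits a new connection only while the estimated server load stays below a threshold. It is not the paper's algorithm; the load metric and the `LOAD_THRESHOLD` value are assumptions for illustration.

```python
# Minimal admission-control sketch (illustrative only; not the paper's scheme).
# A new connection is accepted only while the measured server load stays below
# a configurable threshold; otherwise it is rejected to protect QoS.

import os

LOAD_THRESHOLD = 0.8  # assumed acceptance threshold (fraction of capacity)
CPU_COUNT = os.cpu_count() or 1

def server_load() -> float:
    """Return a normalized load estimate in [0, 1] from the 1-minute load average (Unix)."""
    load1, _, _ = os.getloadavg()
    return min(load1 / CPU_COUNT, 1.0)

def admit_request() -> bool:
    """Accept the new connection only if the server is below the threshold."""
    return server_load() < LOAD_THRESHOLD

if __name__ == "__main__":
    if admit_request():
        print("connection accepted")
    else:
        print("connection rejected: server overloaded")
```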

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha; Ramani, Kasarapu
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.87-96 / 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods. This has led to massive volumes of web-based lecture videos. Indexing and retrieval of a lecture video or a lecture video topic has thus proved to be an exceptionally challenging problem. Many techniques reported in the literature are either visual or audio based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work focuses on both components. A framework for automatic topic-based indexing and search based on the innate content of the lecture videos is presented. The text from the slides is extracted using the proposed Merged Bounding Box (MBB) text detector. The audio component text extraction is done using Google Speech Recognition (GSR) technology. This hybrid approach generates the indexing keywords from the merged transcripts of both the video and audio component extractors. The search within the indexed documents is optimized based on Naïve Bayes (NB) Classification and K-Means Clustering models. This optimized search retrieves results by searching only the relevant document cluster in the predefined categories rather than the whole lecture video corpus. The work is carried out on a dataset generated by assigning categories to lecture video transcripts gathered from e-learning portals. The performance of the search is assessed based on accuracy and time taken. Further, the improved accuracy of the proposed indexing technique is compared with the accepted chain indexing technique.
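
As a rough illustration of the clustering-restricted search step (the Naïve Bayes categorization stage is omitted), the sketch below uses TF-IDF vectors and K-Means on a placeholder corpus; none of it reflects the paper's actual dataset or implementation.

```python
# Illustrative sketch: rank a query only within its K-Means cluster of TF-IDF
# vectors instead of scanning the whole transcript corpus (placeholder data).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

transcripts = [
    "gradient descent minimizes a loss function",          # machine learning
    "neural networks are trained with backpropagation",    # machine learning
    "tcp provides reliable ordered byte streams",           # networking
    "routers forward packets using longest prefix match",   # networking
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(transcripts)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def search(query: str, top_k: int = 2):
    q = vectorizer.transform([query])
    cluster = kmeans.predict(q)[0]                  # pick the relevant cluster
    idx = [i for i, c in enumerate(kmeans.labels_) if c == cluster]
    sims = cosine_similarity(q, X[idx]).ravel()     # rank only within that cluster
    ranked = sorted(zip(idx, sims), key=lambda t: -t[1])[:top_k]
    return [(transcripts[i], round(s, 3)) for i, s in ranked]

print(search("how is a neural network trained"))
```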

Co-occurrence Network Analysis of Keywords in Geriatric Frailty

  • Kim, Youngji; Jang, Soong-nang; Lee, Jung Lim
    • Research in Community and Public Health Nursing / v.29 no.4 / pp.429-439 / 2018
  • Purpose: The aim of this study is to identify the core keywords of frailty research over the past 35 years in order to understand the structure of knowledge on frailty. Methods: A total of 10,367 frailty articles published between 1981 and April 2016 were retrieved from Web of Science. Keywords from these articles were extracted using Bibexcel, and social network analysis was conducted on the co-occurrence network using the NetMiner program. Results: The top five keywords with a high frequency of occurrence include 'disability', 'nursing home', 'sarcopenia', 'exercise', and 'dementia'. Keywords were classified by subheadings of MeSH, and the majority of them fell under the healthcare and physical dimensions. The degree centralities of the keywords were ranked in the order of 'long term care' (0.55), 'gait' (0.42), 'physical activity' (0.42), 'quality of life' (0.42), and 'physical performance' (0.38). The betweenness centralities of the keywords were ranked in the order of 'depression' (0.32), 'quality of life' (0.28), 'home care' (0.28), 'geriatric assessment' (0.28), and 'fall' (0.27). The cluster analysis shows that the frailty research field is divided into seven clusters: aging, sarcopenia, inflammation, mortality, frailty index, older people, and physical activity. Conclusion: After reviewing the previous 35 years of research, it was found that only physical frailty and frailty related to medicine have been emphasized. Further research on psychological, cognitive, social, and environmental frailty is needed to understand frailty in a multifaceted and integrative manner.
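
For readers unfamiliar with the method, a keyword co-occurrence network with degree and betweenness centralities can be computed as in the minimal sketch below; the keyword lists are illustrative placeholders, and NetworkX stands in for the NetMiner program used in the study.

```python
# Illustrative co-occurrence network: edges connect keywords that appear in the
# same article; centralities are computed with NetworkX (placeholder data).

import itertools
import networkx as nx

articles_keywords = [
    ["disability", "nursing home", "dementia"],
    ["sarcopenia", "exercise", "physical activity"],
    ["quality of life", "depression", "exercise"],
    ["long term care", "nursing home", "quality of life"],
]

G = nx.Graph()
for keywords in articles_keywords:
    for a, b in itertools.combinations(sorted(set(keywords)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for kw in sorted(G.nodes, key=degree.get, reverse=True)[:5]:
    print(f"{kw}: degree={degree[kw]:.2f}, betweenness={betweenness[kw]:.2f}")
```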

Author Co-citation Analysis for Digital Twin Studies (디지털 트윈 연구의 저자 동시인용 분석)

  • Kim, Sumin; Suh, Chang-Kyo
    • The Journal of Information Systems / v.28 no.3 / pp.39-58 / 2019
  • Purpose A digital twin is a digital replication of a physical system. Gartner identified the digital twin as one of the Gartner Top 10 Strategic Technology Trends for three years starting in 2017. The rapid development of the digital twin market is expected to bring about innovation and change throughout society, and much research has been done recently in academia. In this research, we explore the main research trends in digital twin research. Design/methodology/approach We collected digital twin research from Web of Science and analyzed 804 articles that were published during the time span 2010-2018. A total of 41 key authors were selected based on citation frequency. We created a co-citation matrix for the core authors and performed multivariate analyses such as cluster analysis and multidimensional scaling. We also conducted social network analysis to find the influential researchers in digital twin research. Findings We identified four major sub-areas of digital twin research: "Infrastructure", "Prospects and Challenges", "Security", and "Smart Manufacturing". We also identified the most influential researchers in digital twin research: Lee EA, Rajkumar R, Wan J, Karnouskos S, Kim K, and Cardenas AA. Limitations and suggestions for further research were also discussed as concluding remarks.
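
A minimal sketch of the general technique (not the study's code) follows: a toy author co-citation matrix is converted to dissimilarities, clustered hierarchically, and projected with multidimensional scaling. The author names come from the abstract, but the co-citation counts are placeholders.

```python
# Illustrative author co-citation analysis: hierarchical clustering and MDS on
# a small symmetric co-citation matrix (placeholder counts).

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

authors = ["Lee EA", "Rajkumar R", "Wan J", "Karnouskos S"]
cocitation = np.array([
    [0, 8, 2, 1],
    [8, 0, 3, 1],
    [2, 3, 0, 6],
    [1, 1, 6, 0],
], dtype=float)

# Convert co-citation counts to dissimilarities for clustering and MDS.
dissimilarity = cocitation.max() - cocitation
np.fill_diagonal(dissimilarity, 0)

Z = linkage(squareform(dissimilarity), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

for name, lab, (x, y) in zip(authors, labels, coords):
    print(f"{name}: cluster {lab}, position ({x:.2f}, {y:.2f})")
```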

Particle Swarm Optimization in Gated Recurrent Unit Neural Network for Efficient Workload and Resource Management (효율적인 워크로드 및 리소스 관리를 위한 게이트 순환 신경망 입자군집 최적화)

  • Ullah, Farman; Jadhav, Shivani; Yoon, Su-Kyung; Nah, Jeong Eun
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.45-49 / 2022
  • The fourth industrial revolution, the Internet of Things, and the expansion of online web services have driven exponential growth in the number of cloud data centers (CDC). The cloud is emerging as a new paradigm for delivering Internet-based computing services. Because workloads are dynamic and non-linear and resource availability fluctuates, efficient workload and resource management is a critical problem. In this paper, we propose a particle swarm optimization (PSO) based gated recurrent unit (GRU) neural network for efficiently predicting future CPU and memory usage in cloud data centers. We investigate the hyper-parameters of the GRU to obtain a better model that effectively predicts cloud resources. We use the Google Cluster traces to evaluate the proposed PSO-GRU prediction. The experimental results show the effectiveness of the proposed algorithm.
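
The sketch below illustrates generic PSO over two assumed GRU hyper-parameters (hidden units and learning rate); the objective function is a placeholder standing in for the validation error of a trained GRU, so this is not the paper's PSO-GRU implementation.

```python
# Illustrative PSO sketch for tuning two GRU hyper-parameters. The objective is
# a placeholder; in practice it would train a GRU on CPU/memory usage traces
# and return the validation error.

import numpy as np

rng = np.random.default_rng(0)

def validation_error(hidden_units: float, learning_rate: float) -> float:
    """Placeholder objective standing in for the GRU's validation error."""
    return (hidden_units - 64) ** 2 / 1e4 + (np.log10(learning_rate) + 3) ** 2

# Search space: hidden units in [16, 256], learning rate in [1e-5, 1e-1].
low, high = np.array([16.0, 1e-5]), np.array([256.0, 1e-1])

n_particles, n_iters = 20, 50
pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([validation_error(*p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([validation_error(*p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best hidden units ~ {gbest[0]:.0f}, best learning rate ~ {gbest[1]:.5f}")
```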

User Oriented clustering of news articles using Tweets Heterogeneous Information Network (트위트 이형 정보 망을 이용한 뉴스 기사의 사용자 지향적 클러스터링)

  • Shoaib, Muhammad; Song, Wang-Cheol
    • Journal of Internet Computing and Services / v.14 no.6 / pp.85-94 / 2013
  • With the emergence of the World Wide Web, and in particular Web 2.0, the rapidly growing number of news articles has made it difficult for users to select articles that match their requirements. To overcome this problem, different clustering mechanisms have been proposed to broadly categorize news articles. However, these techniques are entirely machine oriented and lack users' participation in deciding cluster membership. To overcome this issue of zero participation in the clustering of news articles, in this paper we propose a framework that clusters news articles by combining them with the judgments users post on Twitter. We employ Twitter hash-tags for this purpose. Furthermore, we compute users' credibility based on the retweet frequency of their tweets in order to improve the accuracy of the clustering membership function. To test the performance of the proposed methodology, we performed experiments on tweets posted during the 2013 general election in Pakistan. Our results support the claim that better outcomes can be achieved with users' input than with ordinary clustering algorithms.
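
As an illustration of the retweet-based credibility weighting, the sketch below (with placeholder tweets and hashtags, not the election dataset) scores an article's candidate hashtag clusters by the credibility of the users who posted matching tweets.

```python
# Illustrative sketch: weight users' hashtag judgments by a retweet-based
# credibility score before assigning a news article to a cluster
# (tweets and hashtags below are placeholders).

from collections import defaultdict

tweets = [
    {"user": "alice", "hashtags": ["election", "results"], "retweets": 120},
    {"user": "bob",   "hashtags": ["cricket"],             "retweets": 3},
    {"user": "carol", "hashtags": ["election"],            "retweets": 45},
]

# Credibility: a user's share of all retweets in the sample.
total_retweets = sum(t["retweets"] for t in tweets) or 1
credibility = defaultdict(float)
for t in tweets:
    credibility[t["user"]] += t["retweets"] / total_retweets

def cluster_scores(article_hashtags):
    """Sum credibility-weighted votes of tweets sharing a hashtag with the article."""
    scores = defaultdict(float)
    for t in tweets:
        for tag in t["hashtags"]:
            if tag in article_hashtags:
                scores[tag] += credibility[t["user"]]
    return dict(scores)

print(cluster_scores({"election", "results"}))
```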

Design and Implementation of Service based Virtual Screening System in Grids (그리드에서 서비스 기반 가상 탐색 시스템 설계 및 구현)

  • Lee, Hwa-Min; Chin, Sung-Ho; Lee, Jong-Hyuk; Lee, Dae-Won; Park, Seong-Bin; Yu, Heon-Chang
    • Journal of KIISE: Computer Systems and Theory / v.35 no.6 / pp.237-247 / 2008
  • Virtual screening is the process of reducing an unmanageable number of compounds to a limited number of compounds for a target of interest by means of computational techniques such as molecular docking. It is a large-scale scientific application that requires large computing power and data storage capability. Previous applications or software packages for molecular docking, such as AutoDock, FlexX, Glide, DOCK, LigandFit, and ViSION, were developed to run on a supercomputer, a workstation, or a cluster-computer. However, virtual screening on a supercomputer suffers from the high cost of the machine, while virtual screening on a workstation or a cluster-computer requires a long execution time. Thus we propose a service-based virtual screening system using Grid computing technology, which supports large data-intensive operations. We constructed a 3-dimensional chemical molecular database for virtual screening. We designed a resource broker and a data broker to support efficient molecular docking services, and we proposed various services for virtual screening. We implemented the service-based virtual screening system with DOCK 5.0 and the Globus 3.2 toolkit. Our system can reduce the time and cost of drug or new material design.
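
A minimal sketch of the broker idea (purely illustrative; not the paper's resource or data broker) dispatches docking tasks for a small compound list to the currently least-loaded Grid node.

```python
# Illustrative resource-broker sketch: dispatch docking tasks for a compound
# library to the least-loaded Grid node (node names, loads, and compound IDs
# are placeholders).

import heapq

nodes = [("node-a", 0.2), ("node-b", 0.5), ("node-c", 0.1)]  # (name, current load)
compounds = ["ZINC0001", "ZINC0002", "ZINC0003", "ZINC0004", "ZINC0005"]

# Min-heap keyed by load: always dispatch to the currently least-loaded node.
heap = [(load, name) for name, load in nodes]
heapq.heapify(heap)

assignments = []
for compound in compounds:
    load, name = heapq.heappop(heap)
    assignments.append((compound, name))
    heapq.heappush(heap, (load + 0.1, name))  # assume each docking job adds load

for compound, node in assignments:
    print(f"dock {compound} on {node}")
```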

A Study on the Intellectual Structure of Metadata Research by Using Co-word Analysis (동시출현단어 분석에 기반한 메타데이터 분야의 지적구조에 관한 연구)

  • Choi, Ye-Jin; Chung, Yeon-Kyoung
    • Journal of the Korean Society for Information Management / v.33 no.3 / pp.63-83 / 2016
  • As the use of information resources produced in various media and forms has increased, metadata has become increasingly crucial as a tool of information organization for describing those resources. The purposes of this study are to analyze and demonstrate the intellectual structure of the metadata field through co-word analysis. The data set was collected from journals registered in the Core Collection of the Web of Science citation database for the period from January 1, 1998 to July 8, 2016. Bibliographic data from 727 journal articles was collected using a topic category search with the query word 'metadata'. From these 727 articles, 410 articles with author keywords were selected, and after data preprocessing, 1,137 author keywords were extracted. Finally, a total of 37 keywords with a frequency of more than 6 were selected for analysis. In order to demonstrate the intellectual structure of the metadata field, network analysis was conducted. As a result, 2 domains and 9 clusters were derived, intellectual relations among keywords in the metadata field were visualized, and keywords with high global centrality and local centrality were identified. Six clusters from the cluster analysis were shown in a multidimensional scaling map, and the knowledge structure was proposed based on the correlations among the keywords. The results of this study are expected to help readers understand the intellectual structure of the metadata field through visualization and to guide new directions in metadata-related studies.
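
For illustration, a co-word matrix over author keywords with a frequency cut-off can be built as in the sketch below; the keyword lists and the threshold of 2 are placeholders, not the study's data or parameters.

```python
# Illustrative co-word analysis sketch: keep author keywords above a frequency
# threshold and count how often pairs of them occur in the same article.

from collections import Counter
from itertools import combinations

article_keywords = [
    ["metadata", "interoperability", "dublin core"],
    ["metadata", "digital library", "dublin core"],
    ["linked data", "metadata", "interoperability"],
    ["digital library", "metadata"],
]

MIN_FREQ = 2  # assumed cut-off for this toy example
freq = Counter(kw for kws in article_keywords for kw in set(kws))
kept = {kw for kw, n in freq.items() if n >= MIN_FREQ}

cowords = Counter()
for kws in article_keywords:
    for a, b in combinations(sorted(set(kws) & kept), 2):
        cowords[(a, b)] += 1

for (a, b), n in cowords.most_common():
    print(f"{a} -- {b}: {n}")
```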

A Study on Interdisciplinary Structure of Big Data Research with Journal-Level Bibliographic-Coupling Analysis (학술지 단위 서지결합분석을 통한 빅데이터 연구분야의 학제적 구조에 관한 연구)

  • Lee, Boram; Chung, EunKyung
    • Journal of the Korean Society for Information Management / v.33 no.3 / pp.133-154 / 2016
  • An interdisciplinary approach has been recognized as one of the key strategies for addressing the varied and complex research problems of modern science. The purpose of this study is to investigate the interdisciplinary characteristics and structure of the field of big data. Among the 1,083 journals related to the field of big data, 420 journals (38.8%) were assigned multiple Subject Categories (SC) from the Web of Science, and 239 journals (22.1%) were assigned SCs from different fields. These results show that the field of big data has interdisciplinary characteristics. In addition, through a bibliographic coupling network analysis of the top 56 journals, 10 clusters were recognized in the network. Among the 10 clusters, 7 were from the computer science field, focusing on technical aspects such as storing, processing and analyzing data. The cluster analysis also identified multiple research works analyzing and utilizing big data in various fields such as science & technology, engineering, communication, law, geography, bio-engineering, etc. Finally, by measuring three types of centrality (betweenness centrality, nearest centrality, triangle betweenness centrality) of the journals, computer science journals appeared to have strong impact and subject relations to other fields in the network.
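
As a reminder of the underlying measure, journal-level bibliographic coupling counts the cited references two journals share; the sketch below computes it for placeholder reference sets, not the study's 56-journal data.

```python
# Illustrative bibliographic-coupling sketch: the coupling strength of two
# journals is the number of cited references they share (placeholder data).

from itertools import combinations

journal_refs = {
    "Journal A": {"r1", "r2", "r3", "r4"},
    "Journal B": {"r2", "r3", "r5"},
    "Journal C": {"r6"},
}

for a, b in combinations(journal_refs, 2):
    strength = len(journal_refs[a] & journal_refs[b])
    if strength:
        print(f"{a} <-> {b}: coupling strength {strength}")
```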

Design and Implementation of HPC Job Management Framework for Computational Scientific Simulation (계산과학 시뮬레이션을 위한 HPC 작업 관리 프레임워크의 설계 및 구현)

  • Yu, Jung-Lok; Kim, Han-Gi; Byun, Hee-Jung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.554-557 / 2016
  • Recently, supercomputers have been increasingly adopted as computing environments for scientific simulation as well as for education, healthcare and national defence. In particular, supercomputing systems with heterogeneous computing resources are gaining renewed interest as a next-generation problem-solving environment, allowing theoretical and/or experimental research in various fields to be free of time and spatial limits. However, traditional supercomputing services have only been accessible through a simple command-line console, which critically limits the accessibility and usability of heterogeneous computing resources. To address this problem, in this paper we provide the design and implementation of a web-based HPC (High Performance Computing) job management framework for computational scientific simulation. The proposed framework follows highly extensible design principles, providing an abstraction interface for job schedulers (as well as bundled scheduler plug-ins for LoadLeveler, Sun Grid Engine and OpenPBS) in order to easily incorporate a broad spectrum of heterogeneous computing resources such as clusters, computing clouds and grids. We also present a detailed specification of HTTP standard based RESTful endpoints that manage a simulation job's life-cycle, including job creation, submission, control and status monitoring, enabling various third-party applications to be newly created on top of the proposed framework.
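
The scheduler-abstraction idea can be sketched as a common job interface with per-scheduler plug-ins, as below; the class and method names are assumptions for illustration, not the framework's actual API.

```python
# Illustrative sketch of a scheduler-abstraction layer: a common job interface
# with per-scheduler plug-ins (hypothetical names, not the framework's API).

from abc import ABC, abstractmethod

class JobScheduler(ABC):
    """Common interface that each scheduler plug-in would implement."""

    @abstractmethod
    def submit(self, script_path: str) -> str: ...
    @abstractmethod
    def status(self, job_id: str) -> str: ...
    @abstractmethod
    def cancel(self, job_id: str) -> None: ...

class SunGridEnginePlugin(JobScheduler):
    def submit(self, script_path: str) -> str:
        # A real plug-in would shell out to the SGE submit command and parse the job id.
        return "sge-12345"
    def status(self, job_id: str) -> str:
        return "RUNNING"
    def cancel(self, job_id: str) -> None:
        pass

def run_simulation(scheduler: JobScheduler, script: str) -> str:
    job_id = scheduler.submit(script)   # create and submit the job
    return scheduler.status(job_id)     # poll its life-cycle state

print(run_simulation(SunGridEnginePlugin(), "simulation_job.sh"))
```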
