• Title/Abstract/Keyword: Community algorithm

Search results: 191 items

소분자 도킹에서의 평가함수의 개발 동향 (Recent Development of Scoring Functions on Small Molecular Docking)

  • 정환원;조승주
    • 통합자연과학논문집, Vol. 3 No. 1, pp. 49-53, 2010
  • Molecular docking is a critical event in molecular recognition that mostly forms van der Waals complexes. Since the majority of developed drugs are small molecules, docking them into proteins has been a prime concern in the drug discovery community. Because the binding pose space is too vast to cover completely, many search algorithms such as genetic algorithms, Monte Carlo, simulated annealing, and distance geometry have been developed. Properly evaluating the quality of binding is an essential problem. Scoring functions derived from force fields handle ligand binding prediction using potential energies, sometimes in combination with solvation and entropy contributions. Knowledge-based scoring functions are based on atom-pair potentials derived from structural databases: forces and potentials are collected from known protein-ligand complexes to produce a score for their binding affinities (e.g. PME). Empirical scoring functions are derived from training sets of protein-ligand complexes with determined affinity data. Because no single scoring function generally performs better than the others, other approaches have been tried. Although numerous scoring functions have been developed to locate the correct binding poses, deriving an accurate scoring function for general targets remains a major hurdle. Recently, consensus scoring functions and target-specific scoring functions have been studied to overcome the current limitations.
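As a rough illustration of the consensus-scoring idea mentioned in the abstract above (not any particular published function), the sketch below rank-normalizes a few hypothetical per-pose scores and averages the ranks; the score values and the three scoring functions are assumptions made purely for illustration.

```python
import numpy as np

def consensus_rank_score(score_matrix: np.ndarray) -> np.ndarray:
    """Toy consensus scoring: rank poses under each scoring function and
    average the ranks (lower combined rank = better pose).

    score_matrix: shape (n_poses, n_scoring_functions); lower scores are
    assumed to indicate stronger predicted binding.
    """
    # argsort twice converts raw scores into 0-based ranks per column
    ranks = np.argsort(np.argsort(score_matrix, axis=0), axis=0)
    return ranks.mean(axis=1)

# Hypothetical scores for 4 poses under 3 scoring functions
# (force-field-like, knowledge-based-like, empirical-like).
scores = np.array([
    [-7.2, -45.0, -6.8],
    [-6.5, -52.0, -7.1],
    [-8.1, -40.0, -6.2],
    [-5.9, -48.0, -7.5],
])
print(consensus_rank_score(scores))  # pose with the lowest mean rank wins
```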

MIMO 채널에서 LLR 추정을 위한 저 계산량 알고리즘 (Low Computational Algorithm for Estimating LLR in MIMO Channel)

  • 박태두;김민혁;김철승;정지원
    • 한국정보통신학회논문지, Vol. 14 No. 12, pp. 2791-2797, 2010
  • Next-generation wireless communication demands diverse services, high reliability, and fast transmission rates. To meet these requirements, much research has been devoted to combining MIMO systems with LDPC codes. When MIMO and LDPC are combined, soft-decision bits computed from the information received on each channel must be fed into the LDPC decoder. The conventional approach separates the soft-decision bits using all received signals, which requires a large amount of computation. This paper proposes a method that separates the soft-decision bits using candidate vectors, reducing the computational load by up to 61% with no loss in performance.
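The candidate-vector idea above can be pictured with a max-log LLR computation: instead of summing over every possible transmit vector, the LLR of each coded bit is approximated from the best metric within a (possibly reduced) candidate list. The sketch below is a generic illustration under assumed parameters, not the paper's algorithm; the channel matrix, noise variance, and BPSK mapping are hypothetical.

```python
import numpy as np
from itertools import product

def maxlog_llr_from_candidates(y, H, candidates, bit_maps, sigma2):
    """Max-log LLR per coded bit over a (possibly reduced) candidate list.

    y          : received vector (n_rx,)
    H          : channel matrix (n_rx, n_tx)
    candidates : list of candidate transmit vectors, each shape (n_tx,)
    bit_maps   : list of 0/1 bit labels, one per candidate
    sigma2     : noise variance
    """
    metrics = np.array([np.linalg.norm(y - H @ s) ** 2 / sigma2 for s in candidates])
    bits = np.array(bit_maps)                  # (n_cand, n_bits)
    llr = np.empty(bits.shape[1])
    for b in range(bits.shape[1]):
        m0 = metrics[bits[:, b] == 0].min()    # best candidate with bit = 0
        m1 = metrics[bits[:, b] == 1].min()    # best candidate with bit = 1
        llr[b] = m1 - m0                       # max-log approximation
    return llr

# Hypothetical 2x2 BPSK example using the full candidate set; the paper's
# point is that a reduced candidate list gives nearly the same LLRs cheaper.
H = np.array([[0.9, 0.3], [0.2, 1.1]])
bits_list = list(product([0, 1], repeat=2))
cands = [np.array([1 - 2 * b for b in bb], dtype=float) for bb in bits_list]
y = H @ cands[0] + 0.1 * np.random.randn(2)
print(maxlog_llr_from_candidates(y, H, cands, bits_list, sigma2=0.01))
```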

A FRF-based algorithm for damage detection using experimentally collected data

  • Garcia-Palencia, Antonio;Santini-Bell, Erin;Gul, Mustafa;Catbas, Necati
    • Structural Monitoring and Maintenance, Vol. 2 No. 4, pp. 399-418, 2015
  • Automated damage detection through Structural Health Monitoring (SHM) techniques has become an active area of research in the bridge engineering community, but widespread implementation on in-service infrastructure still presents some challenges. In the meantime, visual inspection remains the most common method for condition assessment, even though the collected information is highly subjective and certain types of damage can be overlooked by the inspector. In this article, a Frequency Response Function (FRF)-based model updating algorithm is evaluated using experimentally collected data from the University of Central Florida (UCF)-Benchmark Structure. A protocol for measurement selection and a regularization technique are presented in this work in order to provide the most well-conditioned model updating scenario for the target structure. The proposed technique is composed of two main stages. First, the initial finite element model (FEM) is calibrated through model updating so that it captures the dynamic signature of the UCF Benchmark Structure in its healthy condition. Second, based upon data collected from the damaged condition, the updating process is repeated on the baseline (healthy) FEM. The difference between the updated parameters from the two stages revealed both the location and the extent of damage in a "blind" scenario, without any prior information about the type and location of damage.
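To make the two-stage updating idea concrete, the following minimal sketch calibrates the stiffness parameters of a toy 2-DOF model by minimizing the mismatch between computed and "measured" receptance FRFs with a small Tikhonov-style penalty. The model, parameter values, and regularization weight are assumptions for illustration only; the paper's actual FEM, measurement-selection protocol, and regularization scheme are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def frf(k, omegas):
    """Receptance FRF H(w) = (K - w^2 M)^-1 for a toy undamped 2-DOF chain."""
    m = np.eye(2)                                      # unit masses (assumed)
    K = np.array([[k[0] + k[1], -k[1]], [-k[1], k[1]]])
    return np.array([np.linalg.inv(K - w**2 * m) for w in omegas])

omegas = np.linspace(0.2, 0.9, 25)                     # below first resonance
k_true = np.array([4.0, 2.5])                          # "damaged" stiffnesses
k_base = np.array([5.0, 3.0])                          # baseline (healthy) FEM
H_meas = frf(k_true, omegas)                           # stands in for test data

lam = 1e-3                                             # Tikhonov weight

def residual(k):
    r = (frf(k, omegas) - H_meas).ravel()              # FRF mismatch
    return np.concatenate([r, lam * (k - k_base)])     # + regularization term

k_upd = least_squares(residual, k_base).x
print("updated stiffnesses:", k_upd)                   # change w.r.t. baseline
                                                       # points to the damage
```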

Multi-scale and Interactive Visual Analysis of Public Bicycle System

  • Shi, Xiaoying;Wang, Yang;Lv, Fanshun;Yang, Xiaohang;Fang, Qiming;Zhang, Li
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13 No. 6, pp. 3037-3054, 2019
  • The public bicycle system (PBS) is a newly emerging and popular mode of public transportation. PBS data can be used to analyze human movement patterns. Previous work usually focused on specific scales, ignoring the relationships between different levels of the hierarchy. In this paper, we introduce a multi-scale and interactive visual analytics system to investigate human cycling movement and PBS usage conditions. The system supports level-of-detail explorative analysis of spatio-temporal characteristics in PBS. Visual views are designed at global, regional, and microcosmic scales. For the regional scale, a bicycle network is constructed to model PBS data, and a flow-based community detection algorithm is applied to the bicycle network to determine station clusters. In contrast to the previously used Louvain algorithm, our method avoids producing super-communities and generates better results. We provide two cases to demonstrate how our system can help analysts explore the overall cycling condition in the city and the spatio-temporal aggregation of stations.
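The station-clustering step can be pictured as building a weighted station graph from trip records and running a community detection on it. The sketch below uses networkx's modularity-based greedy_modularity_communities only as a stand-in; the paper's own method is a flow-based algorithm (which it reports avoids super-communities), and the trip records here are hypothetical.

```python
from collections import Counter
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical trip records: (origin station, destination station).
trips = [("A", "B"), ("B", "A"), ("A", "C"), ("C", "A"),
         ("D", "E"), ("E", "D"), ("E", "F"), ("F", "E"),
         ("C", "D")]  # one weak link between the two groups

# Aggregate trips into an undirected, weighted station graph.
G = nx.Graph()
for (u, v), n in Counter(tuple(sorted(t)) for t in trips).items():
    G.add_edge(u, v, weight=n)

# Stand-in community detection (modularity-based), not the paper's
# flow-based algorithm.
clusters = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in clusters])   # expect {A, B, C} and {D, E, F}
```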

이동 분산 시스템에서 상호배제 알고리즘의 설계 (Design of a Mutual Exclusion Algorithm in Mobile Distributed Systems)

  • 박성훈
    • 한국콘텐츠학회논문지, Vol. 6 No. 12, pp. 50-58, 2006
  • The mutual exclusion paradigm can be used as a basic building block for solving practical problems that require exclusive use of a single object, such as group communication, atomic commitment protocols, and replicated data management. The problem has been studied extensively because many distributed protocols require a mutual exclusion protocol. Despite its usefulness, however, it has rarely been addressed in mobile computing environments. This paper describes the mutual exclusion problem in mobile computing systems. The solution presented here is based on a token-based algorithm.


시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법 (A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach)

  • 노상규;박현정;박진수
    • Asia Pacific Journal of Information Systems, Vol. 17 No. 4, pp. 31-59, 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link-analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance increases further if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, link-structure-based ranking seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only a recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, by contrast, encompasses various kinds of classes and properties, and consequently ranking methods used in the WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem for query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in sufficient detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm that can solve the problems identified in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. This approach closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which have not previously been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research; this analysis enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
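As a toy illustration of the weighted link-analysis idea this line of work builds on (not the authors' class-oriented algorithm itself), the sketch below runs a PageRank-style iteration over RDF triples in which each property carries a user-assigned weight; the triples, weights, and damping factor are assumptions for illustration.

```python
import numpy as np

def weighted_pagerank(triples, prop_weight, nodes, d=0.85, iters=100):
    """PageRank-style scores where each RDF triple (s, p, o) passes score
    from subject to object, scaled by a user-assigned property weight.
    Simplified: score leaving sink nodes (no outgoing triples) is dropped.
    """
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)
    W = np.zeros((n, n))
    for s, p, o in triples:
        W[idx[o], idx[s]] += prop_weight.get(p, 1.0)
    out = W.sum(axis=0)
    out[out == 0] = 1.0                   # avoid division by zero for sinks
    M = W / out                           # column-normalized transition weights
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r)
    return dict(zip(nodes, r))

# Hypothetical triples and user-set property weights.
triples = [("alice", "knows", "bob"), ("carol", "knows", "bob"),
           ("bob", "worksFor", "acme"), ("carol", "worksFor", "acme")]
weights = {"knows": 0.5, "worksFor": 2.0}
print(weighted_pagerank(triples, weights, ["alice", "bob", "carol", "acme"]))
```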

공유 네트워크에서 최대 요구대역폭 트리 구축을 위한 효율적인 알고리즘 (An Efficient Algorithm for Constructing a Maximal Request Bandwidth Tree on Public-shared Network)

  • 정균락
    • 한국컴퓨터정보학회논문지, Vol. 20 No. 4, pp. 87-93, 2015
  • Recently, an approach has emerged in which users build a network themselves and share part of the bandwidth of their own APs with others; such a network is called a public-shared network. As an application, a video streaming system using SVC technology over a public-shared network has been proposed. To deliver a video stream from the server to the clients, a tree is constructed whose root is the server, whose internal nodes are shared APs, and whose leaves are the clients. Existing studies have focused on constructing a minimum shared-bandwidth tree that serves all clients while using the smallest total amount of shared AP bandwidth. This paper proves that, given a set of shared APs, the problem of constructing a maximal request-bandwidth tree that maximally satisfies the clients' video stream requests is NP-hard. We also develop an efficient heuristic algorithm for the problem and evaluate its performance through experiments.
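Since the abstract does not spell out the heuristic, the sketch below shows only a generic best-fit greedy assignment of client stream requests to shared-AP bandwidth, as one simple baseline for the maximal request-bandwidth objective; the AP capacities and client demands are hypothetical, and this is not the paper's algorithm.

```python
def greedy_assign(ap_capacity, client_demand):
    """Illustrative greedy (not the paper's heuristic): serve clients in
    decreasing demand order, attaching each to the shared AP that leaves the
    least residual capacity (best fit), to maximize served request bandwidth.
    """
    cap = dict(ap_capacity)
    assignment, served = {}, 0
    for client, demand in sorted(client_demand.items(), key=lambda kv: -kv[1]):
        feasible = [(cap[a] - demand, a) for a in cap if cap[a] >= demand]
        if not feasible:
            continue                      # this client cannot be served
        _, ap = min(feasible)             # best-fit AP
        cap[ap] -= demand
        assignment[client] = ap
        served += demand
    return assignment, served

# Hypothetical shared-AP capacities (Mbps) and client stream requests.
aps = {"AP1": 10, "AP2": 6}
clients = {"c1": 7, "c2": 5, "c3": 4, "c4": 3}
print(greedy_assign(aps, clients))
```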

High-performance computing for SARS-CoV-2 RNAs clustering: a data science-based genomics approach

  • Oujja, Anas;Abid, Mohamed Riduan;Boumhidi, Jaouad;Bourhnane, Safae;Mourhir, Asmaa;Merchant, Fatima;Benhaddou, Driss
    • Genomics & Informatics, Vol. 19 No. 4, pp. 49.1-49.11, 2021
  • Nowadays, genomic data constitutes one of the fastest growing datasets in the world. By 2025, it is expected to become the fourth largest source of Big Data, thus mandating adequate high-performance computing (HPC) platforms for processing it. With the latest unprecedented and unpredictable mutations in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the research community is in crucial need of ICT tools to process SARS-CoV-2 RNA data, e.g., by clustering it, and thus assist in tracking virus mutations and predicting future ones. In this paper, we present an HPC-based SARS-CoV-2 RNA clustering tool. We adopt a data science approach, from data collection, through analysis, to visualization. In the analysis step, we show how our clustering approach leverages HPC and the longest common subsequence (LCS) algorithm. The approach uses the Hadoop MapReduce programming paradigm and adapts the LCS algorithm in order to efficiently compute the length of the LCS for each pair of SARS-CoV-2 RNA sequences, which are extracted from the U.S. National Center for Biotechnology Information (NCBI) Virus repository. The computed LCS lengths are used to measure the dissimilarities between RNA sequences in order to work out existing clusters. In addition, we present a comparative study of the LCS algorithm's performance under variable workloads and different numbers of Hadoop worker nodes.
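The pairwise dissimilarity computation rests on the classic LCS length recurrence; a two-row dynamic-programming version keeps memory linear in the shorter sequence, which matters for long RNA sequences. The sketch below shows that local computation only; the RNA fragments are toy strings, and the Hadoop MapReduce distribution of the pairwise jobs is not shown.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence via the classic DP,
    kept to two rows so long RNA sequences fit in memory."""
    if len(a) < len(b):
        a, b = b, a                     # keep the inner row short
    prev = [0] * (len(b) + 1)
    for x in a:
        curr = [0]
        for j, y in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if x == y else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

# LCS length as a (dis)similarity signal between two toy RNA fragments;
# the paper distributes these pairwise computations with Hadoop MapReduce.
s1, s2 = "AUGGCUACG", "AUGCUACGG"
sim = lcs_length(s1, s2) / max(len(s1), len(s2))
print(lcs_length(s1, s2), 1 - sim)      # LCS length and a simple dissimilarity
```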

재가장애인 사례관리의 욕구사정 정확도 향상을 위한 사정도구 개발과 욕구추출 알고리즘 과정 연구 - 데이터 마이닝 분석기법을 활용하여 - (Development of Needs Assessment tool and Extraction Algorithm Fitting for Individuals in Care Management for the disabled in Home)

  • 김영숙;정국인
    • 한국사회복지학, Vol. 60 No. 2, pp. 155-173, 2008
  • This study developed a needs-based assessment tool that comprehensively evaluates the physical, psychological, and socio-environmental conditions of persons with disabilities living at home in the community, so that services suited to their needs can be provided. Using the developed tool, assessment data were collected from 200 persons with disabilities living at home, and a needs-extraction algorithm for providing services matched to those needs was constructed using decision tree analysis, a data mining technique. The study was conducted over five months, from June to October 2006, and consisted of two main parts: developing the assessment tool and extracting needs with the developed tool. The basic framework of the tool was constructed through a literature review, the subjective-complaint and needs items were developed through focus groups and expert panels, and statistical verification was performed to confirm the tool's validity. As the verification results in Tables 2 and 3 show, the tool secured validity and reliability, and a summary of the needs-extraction algorithm obtained with the tool is presented in Table 5. The assessment tool and algorithm presented in this study can be used as materials for systematic case management by assessing and identifying the objective needs of persons with disabilities living at home.
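A minimal sketch of the decision-tree step, assuming a handful of hypothetical binary assessment items and case-manager labels in place of the paper's 200 assessment records; it only shows how a shallow tree yields readable need-extraction rules.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical assessment items (0/1 answers) and the service-need label a
# case manager assigned; these stand in for the paper's assessment records.
X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0],
              [1, 1, 0], [0, 0, 0], [1, 0, 0], [0, 1, 1]])
y = ["mobility", "mobility", "housework", "counseling",
     "mobility", "counseling", "mobility", "housework"]
items = ["needs_help_moving", "lives_alone", "reports_pain"]

# A shallow decision tree, in the spirit of the paper's decision-tree analysis.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=items))   # readable extraction rules
print(tree.predict([[1, 0, 0]]))                # predicted need for a new case
```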


An Efficient Mutual Exclusion Protocol in a Mobile Computing Environment

  • Park, Sung-Hoon
    • International Journal of Contents, Vol. 2 No. 4, pp. 25-30, 2006
  • The mutual exclusion (MX) paradigm can be used as a building block in many practical problems, such as group communication, atomic commitment, and replicated data management, where the exclusive use of an object might be required. The problem has been widely studied in the research community; one reason for this wide interest is that many distributed protocols need a mutual exclusion protocol. However, despite its usefulness, to our knowledge no work has been devoted to this problem in a mobile computing environment. In this paper, we describe a solution to the mutual exclusion problem for mobile computing systems. This solution is based on a token-based mutual exclusion algorithm.
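A minimal single-process sketch of the token-based idea, assuming a plain FIFO request queue: only the current token holder may enter the critical section, and the token is handed to the head of the queue on release. This is a generic illustration, not the paper's mobile-host protocol.

```python
from collections import deque

class TokenMutex:
    """Simulated token-based mutual exclusion: one token circulates, the
    holder alone may enter the critical section, and waiting sites are
    served in FIFO order when the token is released."""

    def __init__(self, sites, initial_holder):
        self.sites = set(sites)
        self.holder = initial_holder
        self.queue = deque()

    def request(self, site):
        if site != self.holder:
            self.queue.append(site)          # site waits for the token

    def enter(self, site):
        return site == self.holder           # may this site enter the CS?

    def release(self):
        if self.queue:
            self.holder = self.queue.popleft()   # pass token to next waiter

mx = TokenMutex(["h1", "h2", "h3"], initial_holder="h1")
mx.request("h2")
mx.request("h3")
print(mx.enter("h2"))   # False: h2 must wait for the token
mx.release()            # h1 leaves the CS, token goes to h2
print(mx.enter("h2"))   # True
```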
