• Title/Summary/Keyword: 간선 가중치 (edge weight)

Disease related Gene Identification Using Literature and Google data (텍스트마이닝 기법과 구글데이터를 이용한 질병관련 유전자 식별)

  • Kim, Jeong-U;Kim, Hyeon-Jin;Park, Sang-Hyeon
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1084-1087 / 2013
  • Text mining is one of the tools used in the bio field. In this paper, we built a gene network using text mining in order to find disease genes related to prostate cancer. In addition, new weights obtained from Google searches were added to the edges between gene nodes in the network, and the network was reconstructed. Disease genes related to prostate cancer were then extracted from the constructed network based on the weights between nodes. The proposed method successfully built the network and identified disease genes, and it showed higher accuracy than building the network without Google data.
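A minimal sketch of the kind of pipeline this abstract describes: edges from literature co-occurrence are re-weighted with an external search signal and candidate genes are ranked by weighted degree. The gene names, counts, and combination formula below are illustrative assumptions, not the paper's actual data or scoring rule.

```python
# Hedged sketch: re-weighting a text-mined gene co-occurrence network with an
# external (web search) signal and ranking genes by weighted degree.
import networkx as nx

# Hypothetical co-occurrence counts extracted from abstracts (text mining step).
literature_cooccurrence = {("AR", "PTEN"): 12, ("AR", "TP53"): 7, ("PTEN", "TP53"): 5}
# Hypothetical search hit counts for each gene pair (web search step).
web_hits = {("AR", "PTEN"): 45000, ("AR", "TP53"): 30000, ("PTEN", "TP53"): 12000}

G = nx.Graph()
for (g1, g2), count in literature_cooccurrence.items():
    # Combine the two signals into one edge weight; the formula is an assumption.
    combined = count + 0.0001 * web_hits.get((g1, g2), 0)
    G.add_edge(g1, g2, weight=combined)

# Rank candidate disease genes by weighted degree (sum of incident edge weights).
ranking = sorted(G.degree(weight="weight"), key=lambda x: x[1], reverse=True)
print(ranking)
```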

A Disambiguation and Weighting Method using Mutual Information for Query Translation in Korean-to-English Cross-Language IR (한-영 교차언어 정보검색에서 상호정보를 이용한 질의 변환 모호성 해소 및 가중치 부여 방법)

  • Jang, Myung-Gil;Myaeng, Sung-Hyon;Park, Se-Young
    • Annual Conference on Human and Language Technology / 1999.10e / pp.55-62 / 1999
  • In cross-language information retrieval, queries or documents are translated into another language to create a monolingual retrieval setting, and simple, practical query translation is the most common approach. However, query translation with a plain bilingual dictionary suffers a drop of more than 40% in retrieval effectiveness because of translation ambiguity. In this paper, we propose a simple but effective dictionary-based query translation method that resolves this ambiguity using mutual information extracted from a parallel corpus. The method not only disambiguates by selecting the best candidate among the multiple candidates produced by translation ambiguity, but also assigns appropriate weights to the candidate words. We compare the retrieval effectiveness of query translation using a method that applies disambiguation only, by simply selecting the word with the largest mutual information, and a method that applies both disambiguation and weighting by connecting edges in decreasing order of mutual information, in a manner similar to Kruskal's minimum spanning tree construction. In experiments on the TREC-6 cross-language retrieval collection, the proposed query translation method reached 85% of monolingual retrieval performance and 96% of the performance obtained with manual disambiguation.
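As a rough illustration of dictionary-based translation disambiguation with mutual information, the sketch below weights each translation candidate by its MI with the candidates of the other query terms. The query, candidate lists, and MI values are invented; the paper's corpus statistics and exact weighting scheme are not reproduced.

```python
# Hedged sketch: weighting translation candidates by mutual information (MI)
# with the candidates of the other query terms.
query_terms = {
    "은행": ["bank", "banking"],     # hypothetical translation candidates
    "이자": ["interest", "rate"],
}
# Hypothetical mutual information between English candidate pairs.
mi = {("bank", "interest"): 3.2, ("bank", "rate"): 1.1,
      ("banking", "interest"): 0.9, ("banking", "rate"): 0.4}

def score(candidate, others):
    """Sum of MI between one candidate and the candidates of the other query terms."""
    return sum(mi.get((candidate, o), mi.get((o, candidate), 0.0)) for o in others)

weighted_query = {}
for term, candidates in query_terms.items():
    other_cands = [c for t, cs in query_terms.items() if t != term for c in cs]
    scores = {c: score(c, other_cands) for c in candidates}
    total = sum(scores.values()) or 1.0
    # Keep all candidates, but weight each by its relative MI mass.
    weighted_query[term] = {c: s / total for c, s in scores.items()}

print(weighted_query)
```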


DNA Computing Adopting DNA coding Method to solve Traveling Salesman Problem (Traveling Salesman Problem을 해결하기 위한 DNA 코딩 방법을 적용한 DNA 컴퓨팅)

  • Kim, Eun-Gyeong;Yun, Hyo-Gun;Lee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.1 / pp.105-111 / 2004
  • DNA computing has been used to solve the TSP (Traveling Salesman Problem). However, when typical DNA computing is applied to the TSP, it cannot efficiently express the vertices and the weights of the edges between vertices. In this paper, we propose ACO (Algorithm for Code Optimization), which applies a DNA coding method to DNA computing in order to express vertices and edge weights efficiently for the TSP. We applied ACO to the TSP; as a result, ACO could express variable-length DNA codes and the weights between vertices more efficiently than Adleman's DNA computing algorithm. In addition, compared to Adleman's algorithm, ACO reduced the search time and the biological error rate by 50% and found the shortest path in a short time.
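The following toy sketch only illustrates the encoding idea the abstract alludes to, namely that an edge's weight can be reflected in the length of its code so that shorter tours assemble into shorter strands; it is not the proposed ACO algorithm or a wet-lab protocol, and the graph and weights are made up.

```python
# Toy illustration: edge weight -> code length, so the shortest tour assembles
# into the shortest strand. Brute-force search stands in for the DNA chemistry.
from itertools import permutations

cities = ["A", "B", "C", "D"]
weights = {("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
           ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 8}

def w(u, v):
    # Undirected edge weight lookup.
    return weights.get((u, v), weights.get((v, u)))

def edge_code(u, v):
    # Toy encoding: the strand for an edge grows with its weight.
    return "AT" * w(u, v)

def tour_length(order):
    tour = ("A",) + order + ("A",)
    return sum(w(a, b) for a, b in zip(tour, tour[1:]))

best = min(permutations(cities[1:]), key=tour_length)
tour = ("A",) + best + ("A",)
strand = "".join(edge_code(a, b) for a, b in zip(tour, tour[1:]))
print(tour, len(strand))   # the shortest tour yields the shortest assembled strand
```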

Development of an Automatic Program to Analyze Sunspot Groups for Solar Flare Forecasting (태양 플레어 폭발 예보를 위한 흑점군 자동분석 프로그램 개발)

  • Park, Jongyeob;Moon, Yong-Jae;Choi, SeongHwan;Park, Young-Deuk
    • The Bulletin of The Korean Astronomical Society / v.38 no.2 / pp.98-98 / 2013
  • Sunspots, observed in solar active regions mainly as sunspot groups, are one of the key observational targets for forecasting solar eruptive events. Current flare forecasting models use the McIntosh sunspot group classification and are divided into statistical models and machine learning models. A computer calculates the morphological properties of sunspot groups as continuous values, but because of the morphological diversity of sunspot groups these values do not always agree with the McIntosh classification. For this reason, it is necessary to apply the computed morphological properties of sunspot groups directly to forecasting. To detect sunspot groups, we performed hierarchical clustering based on a minimum spanning tree (MST). In graph theory, a minimum spanning tree is a tree of vertices and edges whose total edge weight is minimal. We constructed the MST by taking every sunspot as a vertex and their connections as edges. Hierarchical clustering based on the MST is also well suited to sunspot group detection because its result does not depend on initial values. From this we computed basic morphological properties of each sunspot group (number of spots, area, area ratio, etc.) and, using the MST, the depth and degree of the tree centered on the sunspot with the largest area. The numbers of spots within each group and the group areas obtained by applying this method to SOHO/MDI visible-light solar images from 2003 showed good correlations of 90% and 99%, respectively, with the values produced by NOAA. Through this study, we discuss the morphological properties of sunspot groups together with a way to apply them directly to flare forecasting.
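A minimal sketch of the MST construction and the depth/degree measures mentioned above, assuming hypothetical sunspot positions and areas; the actual SOHO/MDI image processing and group detection pipeline are not shown.

```python
# Hedged sketch: minimum spanning tree over sunspot centers, then depth and
# degree measured from the largest-area sunspot.
import networkx as nx

# Hypothetical sunspots: id -> ((x, y) pixel position, area).
spots = {0: ((10, 12), 500), 1: ((14, 15), 120), 2: ((40, 42), 300), 3: ((43, 40), 80)}

G = nx.Graph()
for i, ((xi, yi), _) in spots.items():
    for j, ((xj, yj), _) in spots.items():
        if i < j:
            # Edge weight = Euclidean distance between spot centers.
            G.add_edge(i, j, weight=((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5)

mst = nx.minimum_spanning_tree(G, weight="weight")

# Root the tree at the sunspot with the largest area, then measure depth and degree.
root = max(spots, key=lambda i: spots[i][1])
depths = nx.single_source_shortest_path_length(mst, root)
print("max depth from largest spot:", max(depths.values()))
print("degree of largest spot:", mst.degree[root])
```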


Semantic Similarity Measures Between Words within a Document using WordNet (워드넷을 이용한 문서내에서 단어 사이의 의미적 유사도 측정)

  • Kang, SeokHoon;Park, JongMin
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.11 / pp.7718-7728 / 2015
  • Semantic similarity between words can be applied in many fields, including computational linguistics, artificial intelligence, and information retrieval. In this paper, we present a weighted method for measuring the semantic similarity between words in a document. The method uses the edge distance and depth in WordNet and calculates similarity on the basis of document information, namely word term frequencies (TF) and word concept frequencies (CF); each word's weight is computed from its TF and CF in the document. The measure combines the edge distance between words, the depth of their subsumer, and the word weights in the document. We compared our scheme with other methods by experiments, and the proposed method outperforms the other similarity measures. Methods based only on simple shortest distance or depth have difficulty representing or combining document information, whereas the proposed measure considers shortest distance, depth, and word information in the document, and thereby improves performance.
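A hedged sketch of a path-and-depth WordNet similarity scaled by a simple document term-frequency weight. The combination formula and the TF values are assumptions for illustration, not the exact measure proposed in the paper.

```python
# Hedged sketch: WordNet edge distance + subsumer depth, scaled by a simple
# document term-frequency weight. Requires nltk and the WordNet corpus.
from nltk.corpus import wordnet as wn

def path_depth_similarity(word1, word2):
    s1, s2 = wn.synsets(word1)[0], wn.synsets(word2)[0]   # first senses, for brevity
    dist = s1.shortest_path_distance(s2)                   # edge distance in WordNet
    if dist is None:
        return 0.0
    lch = s1.lowest_common_hypernyms(s2)                   # common subsumer of the two senses
    depth = lch[0].min_depth() if lch else 0                # depth of the subsumer
    return depth / (depth + dist)                           # deeper subsumer, shorter path -> more similar

def weighted_similarity(word1, word2, tf):
    # Scale by normalized term frequencies in the document (illustrative weighting).
    weight = (tf.get(word1, 1) + tf.get(word2, 1)) / (2 * max(tf.values()))
    return weight * path_depth_similarity(word1, word2)

tf = {"car": 5, "bicycle": 2, "engine": 3}                  # hypothetical counts
print(weighted_similarity("car", "bicycle", tf))
```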

Fast Random Walk with Restart over a Signed Graph (부호 그래프에서의 빠른 랜덤워크 기법)

  • Myung, Jaeseok;Shim, Junho;Suh, Bomil
    • The Journal of Society for e-Business Studies / v.20 no.2 / pp.155-166 / 2015
  • RWR (Random Walk with Restart) is used by many graph-based ranking algorithms, but it does not consider a signed graph in which edges may carry negative weights. In this paper, we apply F. Heider's Balance Theory to RWR over a signed graph and propose a novel variant, Balanced Random Walk (BRW). We apply the proposed technique to the domain of recommender systems and show by experiments that it effectively filters out items that users may dislike. To obtain reasonable performance for BRW in this domain, we modify the existing top-k algorithm BCA and propose a new algorithm, Bicolor-BCA. The proposed algorithm still requires a threshold, and in the experiments we show how threshold values affect both the precision and the performance of the algorithm.
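For reference, the sketch below shows the plain Random Walk with Restart core that BRW builds on, computed by power iteration; the signed-edge, balance-theory extension and the Bicolor-BCA top-k algorithm are not reproduced here, and the graph and restart vector are arbitrary.

```python
# Hedged sketch: plain Random Walk with Restart (RWR) by power iteration.
import numpy as np

# Hypothetical adjacency matrix of a small unsigned 4-node graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=0)                 # column-stochastic transition probabilities

restart = np.array([1.0, 0, 0, 0])    # restart at node 0
c = 0.15                              # restart probability
r = restart.copy()
for _ in range(100):                  # iterate to (approximate) convergence
    r = (1 - c) * P @ r + c * restart
print(r)                              # relevance of each node with respect to node 0
```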

Problem-Independent Gene Reordering for Genetic Algorithms (유전 알고리즘에서의 문제 독립적 유전자 재배열)

  • Kwon Yung-Keun;Kim Yong-Hyuk;Moon Byung-Ro
    • Journal of KIISE:Software and Applications / v.32 no.10 / pp.974-983 / 2005
  • In genetic algorithms with locus-based encoding, static gene reordering places highly related genes close together, which helps the genetic algorithm create and preserve high-quality schemata effectively. In this paper, we propose a static reordering framework for linear locus-based encodings. It differs from existing reordering methods in that it is independent of problem-specific knowledge. The framework builds a complete graph whose edge weights represent the interrelationship between each pair of genes, transforms the graph into an unweighted sparse graph by keeping only the edges with relatively high weights, and finds a gene reordering by graph search. In extensive experiments on several problems, the proposed method shows significant performance improvements over a genetic algorithm that does not rearrange genes.
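A minimal sketch of the reordering pipeline described above: build a complete weighted graph of pairwise gene relationships, keep only the high-weight edges, and derive a linear order by graph search (BFS here). The interaction weights and the threshold are invented.

```python
# Hedged sketch: complete weighted gene graph -> thresholded sparse graph ->
# linear gene order by breadth-first search.
import networkx as nx

genes = ["g0", "g1", "g2", "g3", "g4"]
# Hypothetical pairwise relationship strengths between genes.
interaction = {("g0", "g1"): 0.9, ("g0", "g2"): 0.2, ("g0", "g3"): 0.1,
               ("g0", "g4"): 0.3, ("g1", "g2"): 0.8, ("g1", "g3"): 0.2,
               ("g1", "g4"): 0.1, ("g2", "g3"): 0.7, ("g2", "g4"): 0.2,
               ("g3", "g4"): 0.6}

threshold = 0.5                                   # keep only relatively strong edges
sparse = nx.Graph()
sparse.add_nodes_from(genes)
for (a, b), w in interaction.items():
    if w >= threshold:
        sparse.add_edge(a, b)

# Linear reordering: BFS order places strongly related genes near each other.
order = ["g0"] + [v for _, v in nx.bfs_edges(sparse, "g0")]
order += [g for g in genes if g not in order]     # append any isolated genes
print(order)
```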

Graph-based High-level Motion Segmentation using Normalized Cuts (Normalized Cuts을 이용한 그래프 기반의 하이레벨 모션 분할)

  • Yun, Sung-Ju;Park, An-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications / v.35 no.11 / pp.671-680 / 2008
  • Motion capture devices have been used to produce content such as movies and video games. However, since motion capture devices are expensive and inconvenient to use, motions segmented from captured data are recycled and synthesized for use in other content, and such motions have generally been segmented manually by content producers. Automatic motion segmentation has therefore received a great deal of attention recently. Previous approaches are divided into on-line and off-line methods: on-line approaches segment motions based on similarities between neighboring frames, while off-line approaches segment motions by capturing global characteristics in feature space. In this paper, we propose a graph-based high-level motion segmentation method. Since high-level motions consist of frames repeated within a temporal distance, we consider not only similarities between neighboring frames but also all similarities among frames within that temporal distance. This is achieved by constructing a graph in which each vertex represents a frame and the edges between frames are weighted by their similarity; the normalized cuts algorithm then partitions the constructed graph into several sub-graphs by globally finding minimum cuts. In the experiments, the proposed method showed better performance than a PCA-based method in the on-line setting and a GMM-based method in the off-line setting, as it segments motions globally from a graph built on similarities between neighboring frames as well as among all frames within the temporal distance.
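A small sketch of the graph construction the abstract describes, with frames as vertices and similarity-weighted edges restricted to a temporal window, followed by a normalized-cut style spectral partitioning (scikit-learn's SpectralClustering with a precomputed affinity). The frame features and window size are placeholders.

```python
# Hedged sketch: frame-similarity graph within a temporal window, partitioned by
# spectral clustering (a normalized-cuts style method).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
frames = rng.normal(size=(60, 8))          # hypothetical per-frame motion features
window = 15                                 # temporal distance within which edges exist

n = len(frames)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and abs(i - j) <= window:
            # Gaussian similarity between frame feature vectors.
            W[i, j] = np.exp(-np.linalg.norm(frames[i] - frames[j]) ** 2)

# Spectral clustering on the precomputed affinity matrix approximates normalized cuts.
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            assign_labels="discretize", random_state=0).fit_predict(W)
print(labels)                               # segment label per frame
```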

A Weighted Frequent Graph Pattern Mining Approach considering Length-Decreasing Support Constraints (길이에 따라 감소하는 빈도수 제한조건을 고려한 가중화 그래프 패턴 마이닝 기법)

  • Yun, Unil;Lee, Gangin
    • Journal of Internet Computing and Services / v.15 no.6 / pp.125-132 / 2014
  • Since frequent pattern mining was proposed to search for hidden, useful pattern information in large-scale databases, various mining approaches and applications have been researched. In particular, frequent graph pattern mining was suggested to deal effectively with data that continue to grow more complex, and a variety of efficient graph mining algorithms have been studied. Graph patterns obtained from graph databases have their own importance and characteristics, which differ according to the elements composing them and their lengths. However, traditional frequent graph pattern mining approaches do not consider these issues: they apply a single minimum support threshold regardless of the lengths of the extracted graph patterns and do not use any weight factors of the patterns, so a large number of practically useless graph patterns may be generated. Small graph patterns with a few vertices and edges tend to be interesting when their weighted supports are relatively high, while large ones with many elements can be useful even if their weighted supports are relatively low. For this reason, we propose a weight-based frequent graph pattern mining algorithm that considers length-decreasing support constraints. Comprehensive experimental results show that the proposed method outperforms a state-of-the-art graph mining algorithm in terms of pattern generation, runtime, and memory usage.
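A hedged sketch of what a length-decreasing support constraint can look like: the minimum support threshold decays with pattern length down to a floor, and a pattern is kept when its weighted support clears that threshold. The decay function and numbers are assumptions, not the paper's definitions.

```python
# Hedged sketch: a length-decreasing minimum support threshold and the weighted
# support test it implies. Decay rate, floor, and weights are illustrative.
def min_support(length, base=0.5, floor=0.1, decay=0.08):
    """Minimum support decreases with pattern length, but never drops below a floor."""
    return max(floor, base - decay * (length - 1))

def passes(pattern_length, support, avg_element_weight):
    """A pattern is kept if its weighted support meets the length-dependent threshold."""
    return support * avg_element_weight >= min_support(pattern_length)

# A small pattern needs relatively high weighted support; a large one needs less.
print(passes(pattern_length=2, support=0.40, avg_element_weight=1.0))   # False (0.40 < 0.42)
print(passes(pattern_length=7, support=0.15, avg_element_weight=1.0))   # True  (0.15 >= 0.10)
```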

An Improved Automatic Text Summarization Based on Lexical Chaining Using Semantical Word Relatedness (단어 간 의미적 연관성을 고려한 어휘 체인 기반의 개선된 자동 문서요약 방법)

  • Cha, Jun Seok;Kim, Jeong In;Kim, Jung Min
    • Smart Media Journal / v.6 no.1 / pp.22-29 / 2017
  • Due to the rapid advancement and spread of smart devices, document data on the Internet is increasing sharply, and this growing mass of documents makes it increasingly difficult for users to digest the relevant information. Various studies therefore aim to summarize documents efficiently with automatic summarization programs. This study uses the TextRank algorithm to summarize documents efficiently. TextRank represents sentences or keywords as a graph and estimates the importance of sentences from its vertices and edges, which capture the semantic relations between words and sentences. It extracts high-ranking keywords and, based on those keywords, extracts important sentences. To do so, the algorithm first groups the vocabulary using a specific weighting scheme, selects sentences with higher weight scores, and, from the selected sentences, extracts the important sentences that summarize the document. Our experiments confirmed improved performance over the summarization methods reported in previous research, showing that the algorithm can summarize documents more efficiently.
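A compact sketch of TextRank-style sentence ranking: sentences become vertices, word-overlap similarities become edge weights, and PageRank scores select the summary sentences. The sentences and the overlap measure are simplified placeholders.

```python
# Hedged sketch: TextRank-style extractive summarization via PageRank over a
# sentence-similarity graph.
import networkx as nx

sentences = [
    "Graph based ranking scores sentences by their connections",
    "Sentences sharing many words are strongly connected in the graph",
    "The weather today is sunny and warm",
    "Top ranked sentences form the summary of the document",
]

def overlap(s1, s2):
    # Simple word-overlap similarity between two sentences.
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / (len(a) + len(b))

G = nx.Graph()
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        w = overlap(sentences[i], sentences[j])
        if w > 0:
            G.add_edge(i, j, weight=w)

scores = nx.pagerank(G, weight="weight")     # TextRank = PageRank on the sentence graph
summary = [sentences[i] for i in sorted(scores, key=scores.get, reverse=True)[:2]]
print(summary)
```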