• Title/Summary/Keyword: 그래프 비교하기 (comparing graphs)


Switching Element Disjoint Multicast Scheduling for Avoiding Crosstalk in Photonic Banyan-Type Switching Networks(Part I):Graph Theoretic Analysis of Crosstalk Relationship (광 베니언-형 교환 망에서의 누화를 회피하기 위한 교환소자를 달리하는 멀티캐스트 스케줄링(제1부):누화 관계의 그래프 이론적 분석)

  • Tscha, Yeong-Hwan
    • Journal of KIISE:Information Networking
    • /
    • v.28 no.3
    • /
    • pp.447-453
    • /
    • 2001
  • In this paper, we consider the scheduling of SE (switching element)-disjoint multicasting in photonic Banyan-type switching networks constructed with directional couplers. This ensures that at most one connection holds each SE at a given time; thus, neither crosstalk nor blocking arises in the network. Such multicasting usually takes several routing rounds; hence, it is desirable to keep the number of rounds (i.e., the scheduling length) to a minimum. We first present the necessary and sufficient condition for connections to pass through a common SE (i.e., cause crosstalk) in photonic Banyan-type networks capable of supporting one-to-many connections. By defining a unique splitting of a multicast connection into distinct subconnections, the crosstalk relationship of a set of connections is represented by a graph model. To analyze the worst-case crosstalk, we characterize the upper bound on the degree of the graph. The companion paper (Part II) [14] is devoted to the scheduling algorithm and the upper bound on the scheduling length. A detailed comparison with related results is made.

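The crosstalk relationship described above can be pictured as a conflict graph over connections. The following is a minimal sketch of that idea, not the paper's actual construction: the connection names, SE labels, and greedy-coloring scheduler are illustrative assumptions; Part II [14] gives the real scheduling algorithm and bounds.

```python
from itertools import combinations

def crosstalk_graph(connections):
    """Build a conflict graph: vertices are connections, an edge joins two
    connections whose routing paths share at least one switching element (SE).
    `connections` maps a connection id to the set of SEs its path traverses."""
    edges = set()
    for a, b in combinations(connections, 2):
        if connections[a] & connections[b]:   # shared SE -> potential crosstalk
            edges.add((a, b))
    return edges

def greedy_rounds(connections, edges):
    """Partition connections into SE-disjoint routing rounds by greedy
    graph coloring; each color class can be routed in the same round."""
    adjacent = {c: set() for c in connections}
    for a, b in edges:
        adjacent[a].add(b)
        adjacent[b].add(a)
    color = {}
    for c in sorted(connections, key=lambda c: -len(adjacent[c])):
        used = {color[n] for n in adjacent[c] if n in color}
        color[c] = next(i for i in range(len(connections)) if i not in used)
    rounds = {}
    for c, k in color.items():
        rounds.setdefault(k, []).append(c)
    return list(rounds.values())

# Toy example: SEs are labeled (stage, position); paths are sets of SEs.
conns = {"c1": {(0, 0), (1, 0)}, "c2": {(0, 0), (1, 1)}, "c3": {(0, 1), (1, 1)}}
print(greedy_rounds(conns, crosstalk_graph(conns)))
# -> [['c2'], ['c1', 'c3']]: 'c1' and 'c3' share no SE, so they share a round
```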

Analysis of the Error-Remedial Effect and Change of the Students' Misconception on the Learning of Linear Function (교수학적 처방에 따른 중학생들의 일차함수 오개념의 변화와 그 효과 분석)

  • 이종희;김부미
    • School Mathematics
    • /
    • v.5 no.1
    • /
    • pp.115-133
    • /
    • 2003
  • Investigation of students' mathematical misconceptions is very important for improving school mathematics teaching and as a basis for the curriculum. In this study, we categorize second-grade middle school students' misconceptions in learning linear functions and make a comparative study of the error-remedial effect of students' collaborative learning versus explanatory teaching. We also investigate how students' self-diagnosis and treatment of the misconceptions change and advance through collaborative learning about linear functions. The results show three main kinds of misconceptions in the algebraic setting: (1) misconceptions of linear functions in relation to the number concept, (2) misconceptions of variables, and (3) tenacity of a specific perspective. Misconceptions in the graphical setting are classified into misconceptions of graph interpretation and prediction, and misconceptions of variables as the objects of a function. The two remedies have distinctive effects on the treatment of students' misconceptions under each category. We also find that a misconception can develop into a correct conception as a result of interaction with other students.


Provenance Compression Scheme Considering RDF Graph Patterns (RDF 그래프 패턴을 고려한 프로버넌스 압축 기법)

  • Bok, kyoungsoo;Han, Jieun;Noh, Yeonwoo;Yook, Misun;Lim, Jongtae;Lee, Seok-Hee;Yoo, Jaesoo
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.2
    • /
    • pp.374-386
    • /
    • 2016
  • Provenance is metadata that represents the history or lineage of data in collaborative storage environments. As provenance accrues over time, it can grow to several tens of times the size of the original data, so schemes for efficiently compressing large amounts of provenance are required. In this paper, we propose a provenance compression scheme that considers RDF graph patterns. The proposed scheme represents provenance based on the standard PROV model and encodes it as numeric data through text encoding. We compress provenance and RDF data using graph patterns. Unlike conventional provenance compression techniques, we compress provenance by considering RDF documents on the semantic web. To show the superiority of the proposed scheme, we compare it with an existing scheme in terms of compression ratio and processing time.
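
The abstract mentions text encoding of PROV-based provenance into numeric data and compression over RDF graph patterns. Below is a minimal sketch of that general idea, assuming a simple dictionary encoder and a per-subject predicate-pattern table; the class name, record layout, and example triples are my own illustrations, not the authors' scheme.

```python
from collections import defaultdict

class ProvenanceEncoder:
    """Dictionary-encode RDF terms to numeric ids and register recurring
    predicate patterns once, so repeated provenance structure is stored
    as compact (subject, pattern_id, objects) records."""
    def __init__(self):
        self.term2id = {}          # text term -> numeric id
        self.pattern2id = {}       # tuple of predicate ids -> pattern id

    def _tid(self, term):
        return self.term2id.setdefault(term, len(self.term2id))

    def encode(self, triples):
        """triples: iterable of (subject, predicate, object) strings."""
        by_subject = defaultdict(list)
        for s, p, o in triples:
            by_subject[self._tid(s)].append((self._tid(p), self._tid(o)))
        records = []
        for s, pairs in by_subject.items():
            preds = tuple(p for p, _ in sorted(pairs))
            pid = self.pattern2id.setdefault(preds, len(self.pattern2id))
            objs = [o for _, o in sorted(pairs)]
            records.append((s, pid, objs))     # one record per subject
        return records

enc = ProvenanceEncoder()
triples = [
    ("doc1", "prov:wasGeneratedBy", "act1"),
    ("doc1", "prov:wasAttributedTo", "alice"),
    ("doc2", "prov:wasGeneratedBy", "act2"),
    ("doc2", "prov:wasAttributedTo", "bob"),
]
print(enc.encode(triples))   # both subjects share one predicate pattern id
```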

A Bayesian Sampling Algorithm for Evolving Random Hypergraph Models Representing Higher-Order Correlations (고차상관관계를 표현하는 랜덤 하이퍼그래프 모델 진화를 위한 베이지안 샘플링 알고리즘)

  • Lee, Si-Eun;Lee, In-Hee;Zhang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.3
    • /
    • pp.208-216
    • /
    • 2009
  • A number of estimation of distribution algorithms have been proposed that do not explicitly use the crossover and mutation of traditional genetic algorithms but instead estimate the distribution of the population for more efficient search. Because it is not easy to discover higher-order correlations of variables, in most cases only lower-order correlations are estimated, under various constraints. In this paper, we propose a new estimation of distribution algorithm that represents higher-order correlations of the data and finds the global optimum more efficiently. The proposed algorithm represents the higher-order correlations among variables by building a random hypergraph model composed of hyperedges consisting of variables that are expected to be correlated, and generates the next population by a Bayesian sampling algorithm. Experimental results show that the proposed algorithm can find the global optimum and outperforms the simple genetic algorithm and BOA (Bayesian Optimization Algorithm) on decomposable functions with deceptive building blocks.
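
As a rough illustration of the estimation-of-distribution idea described above, the sketch below replaces crossover/mutation with sampling from joint frequencies observed on variable groups (hyperedges). The function name, the frequency-proportional sampling, and the toy objective are my own simplifications; the paper's Bayesian sampling and hypergraph evolution are more elaborate.

```python
import random
from collections import Counter

def eda_hypergraph(fitness, n_vars, hyperedges, pop_size=60, elite=0.3, gens=50):
    """Estimate a joint distribution on each hyperedge from the elite set and
    sample the next population from those group distributions."""
    pop = [[random.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        selected = pop[: int(pop_size * elite)]
        models = []
        for edge in hyperedges:
            counts = Counter(tuple(ind[i] for i in edge) for ind in selected)
            values, weights = zip(*counts.items())
            models.append((edge, values, weights))
        new_pop = []
        for _ in range(pop_size):
            child = [0] * n_vars
            for edge, values, weights in models:
                # Frequency-proportional sampling of a joint assignment.
                assignment = random.choices(values, weights=weights)[0]
                for i, v in zip(edge, assignment):
                    child[i] = v
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy objective: reward all-ones within each 3-variable block.
blocks = [(0, 1, 2), (3, 4, 5)]
fit = lambda ind: sum(all(ind[i] for i in b) for b in blocks)
print(eda_hypergraph(fit, 6, blocks))
```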

Synthesis and Characterization of HEMA-PCL Macromer Grafted onto Starch (옥수수전분에 HEMA-PCL Macromer를 그래프팅시킨 공중합체의 합성 및 특성)

  • 공원석;진인주;김말남;김수현;윤진산
    • Polymer(Korea)
    • /
    • v.24 no.2
    • /
    • pp.141-148
    • /
    • 2000
  • Polycaprolactone (PCL) was blended with corn starch to produce biodegradable compost films, and the biodegradability and mechanical properties were investigated. As the compatibilizer for the immiscible PCL/starch blend, a 2-hydroxyethyl methacrylate (HEMA)-PCL macromer was grafted onto starch by first grafting HEMA onto starch and then grafting PCL onto HEMA via ring-opening polymerization of ε-caprolactone. When the biodegradability of the PCL-grafted starch-g-HEMA copolymers was compared with that of starch by the modified Sturm test, the graft copolymers degraded at much slower rates due to the presence of the non-degradable HEMA. With the addition of up to 5 wt% of the graft copolymer to the blend, the elongation-at-break of the starch/PCL blend increased substantially, while the tensile strength and modulus did not change much. SEM observation of the blend containing 2 wt% copolymer clearly indicated that the interfacial adhesion between starch and PCL was strengthened by the copolymer.


Efficient Processing of Transitive Closure Queries in Ontology using Graph Labeling (온톨로지에서의 그래프 레이블링을 이용한 효율적인 트랜지티브 클로저 질의 처리)

  • Kim Jongnam;Jung Junwon;Min Kyeung-Sub;Kim Hyoung-Joo
    • Journal of KIISE:Databases
    • /
    • v.32 no.5
    • /
    • pp.526-535
    • /
    • 2005
  • Ontology is a methodology for describing specific concepts and their relationships, and it is considered increasingly important as the semantic web and a variety of knowledge management systems gain attention. An ontology uses the relationships among concepts to represent the concrete semantics of a specific concept. When we want to extract useful information from an ontology, we frequently have to process transitive relationships, because most relationships among concepts are transitive. Technically, processing such transitive closure queries causes costly recursive calls. This paper describes an efficient technique for processing transitive closure queries in ontology. To that end, we examine how current systems approach transitive closure queries and propose a technique based on a graph labeling scheme. Assuming a large ontology, we show that our approach gives relative efficiency in processing transitive closure queries.
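
One common graph labeling scheme for answering ancestor/descendant (transitive closure) queries without recursion is DFS interval labeling. The sketch below shows the idea for a tree-shaped hierarchy only; general ontologies with multiple inheritance need a more elaborate labeling, and the paper's actual scheme is not reproduced here.

```python
def interval_label(children, root):
    """Assign each node a DFS (pre, post) interval."""
    label, counter = {}, [0]
    def dfs(node):
        counter[0] += 1
        pre = counter[0]
        for child in children.get(node, []):
            dfs(child)
        counter[0] += 1
        label[node] = (pre, counter[0])
    dfs(root)
    return label

def is_ancestor(label, a, b):
    """True iff a is an ancestor of (or equal to) b: O(1) interval containment
    instead of a recursive traversal."""
    return label[a][0] <= label[b][0] and label[b][1] <= label[a][1]

tree = {"Thing": ["Agent", "Event"], "Agent": ["Person", "Organization"]}
labels = interval_label(tree, "Thing")
print(is_ancestor(labels, "Thing", "Person"))    # True
print(is_ancestor(labels, "Event", "Person"))    # False
```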

A Task Prioritizing Algorithm Optimized for Task Duplication Based Processor Allocation Method (태스크 복제 기반 프로세서 할당 방법에 최적화된 태스크 우선순위 결정 알고리즘)

  • Song, In-Seong;Yoon, Wan-Oh;Lee, Chang-Ho;Choi, Sang-Bang
    • Journal of Internet Computing and Services
    • /
    • v.12 no.6
    • /
    • pp.1-17
    • /
    • 2011
  • The performance of a DHCS depends on the algorithm that schedules the input DAG. However, as the task scheduling problem in a DHCS is NP-complete, a heuristic approach has to be taken. A task scheduling algorithm consists of a task prioritizing phase and a processor allocation phase, and most studies consider both phases together. In this paper, we focus on the task prioritizing phase and propose the WPD algorithm, which is optimized for task duplication based processor allocation. To evaluate the proposed WPD algorithm, we combined it with the processor allocation phases of the HMPID, HCPFD, and HCT algorithms, which use task duplication based processor allocation. The results show that the WPD algorithm makes better use of task duplication than conventional task prioritizing methods and provides 9.58% better performance than the HCPFD algorithm and 1.31% better than the HCT algorithm.
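
For context, a widely used way to prioritize DAG tasks in list scheduling is the upward rank: a task's priority is its computation cost plus the costliest path to the exit task. The sketch below illustrates that generic idea only; the WPD priority formula proposed in the paper is different and is not reproduced here.

```python
from functools import lru_cache

def upward_rank(succ, comp, comm):
    """succ[t]: list of successor tasks; comp[t]: computation cost of t;
    comm[(t, s)]: communication cost on edge t->s."""
    @lru_cache(maxsize=None)
    def rank(t):
        if not succ.get(t):                      # exit task
            return comp[t]
        return comp[t] + max(comm[(t, s)] + rank(s) for s in succ[t])
    return {t: rank(t) for t in comp}

succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
comp = {"A": 4, "B": 3, "C": 5, "D": 2}
comm = {("A", "B"): 1, ("A", "C"): 2, ("B", "D"): 3, ("C", "D"): 1}
ranks = upward_rank(succ, comp, comm)
priority_order = sorted(ranks, key=ranks.get, reverse=True)
print(priority_order)   # ['A', 'B', 'C', 'D'] (B and C tie at rank 8)
```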

Robust Human Silhouette Extraction Using Graph Cuts (그래프 컷을 이용한 강인한 인체 실루엣 추출)

  • Ahn, Jung-Ho;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.52-58
    • /
    • 2007
  • In this paper we propose a new robust method to extract accurate human silhouettes indoors with an active stereo camera. A prime application is gesture recognition for mobile robots. Segmenting distant moving objects involves many problems, such as low resolution, shadows, poor stereo matching information, and instability of the object and background color distributions. Many object segmentation methods are based on color or stereo information alone, but each by itself is prone to failure. Here, efficient color, stereo, and image segmentation methods are fused to infer object and background areas of high confidence, and the inferred areas are then incorporated into a graph cut to make human silhouette extraction robust and accurate. Experimental results are presented on image sequences taken with a pan-tilt stereo camera. The proposed algorithms are evaluated against ground truth data and shown to outperform methods based on either color/stereo or color/contrast alone.
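
The core graph-cut step can be posed as a min-cut problem with unary (region) and pairwise (smoothness) terms. The sketch below shows that generic formulation on a toy image using networkx; the unary costs, pairwise weight, and 4-neighborhood are illustrative assumptions, not the paper's fused color/stereo models.

```python
import networkx as nx

def segment(cost_fg, cost_bg, pairwise_weight, shape):
    """Binary segmentation as min-cut: pixels left on the source side of the
    cut are labeled foreground."""
    g = nx.DiGraph()
    h, w = shape
    for y in range(h):
        for x in range(w):
            p = (y, x)
            g.add_edge("SRC", p, capacity=cost_bg[p])   # paid if p labeled background
            g.add_edge(p, "SINK", capacity=cost_fg[p])  # paid if p labeled foreground
            for q in [(y, x + 1), (y + 1, x)]:          # right and down neighbors
                if q[0] < h and q[1] < w:
                    g.add_edge(p, q, capacity=pairwise_weight)
                    g.add_edge(q, p, capacity=pairwise_weight)
    _, (src_side, _) = nx.minimum_cut(g, "SRC", "SINK")
    return {p for p in src_side if p != "SRC"}          # foreground pixels

# 1x4 toy image: two pixels clearly foreground, two clearly background.
cost_fg = {(0, 0): 0, (0, 1): 1, (0, 2): 8, (0, 3): 9}
cost_bg = {(0, 0): 9, (0, 1): 8, (0, 2): 1, (0, 3): 0}
print(segment(cost_fg, cost_bg, pairwise_weight=2, shape=(1, 4)))  # {(0, 0), (0, 1)}
```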

An LDPC Code Replication Scheme Suitable for Cloud Computing (클라우드 컴퓨팅에 적합한 LDPC 부호 복제 기법)

  • Kim, Se-Hoe;Lee, Won-Joo;Jeon, Chang-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.2
    • /
    • pp.134-142
    • /
    • 2012
  • This paper analyzes an LDPC code replication method suitable for cloud computing. First, we determine the number of blocks suitable for cloud computing through an analysis of file availability and storage overhead. We also determine the type of LDPC code appropriate for cloud computing through a performance comparison of three types of LDPC codes. Finally, we present a random graph generation method and a method for comparing the performance of each generated LDPC code through iterative decoding. Through simulation, we confirmed that the best graph is left-regular or nearly left-regular, and that its total number of edges is the minimum or close to the minimum.
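
As a small illustration of random Tanner-graph generation with a left-regular degree profile (the property the abstract highlights), the sketch below connects every variable node to exactly dv check nodes; the function, parameters, and sizes are my own assumptions, not the paper's generator or decoder.

```python
import random

def left_regular_tanner(n_var, n_chk, dv, seed=0):
    """Build a left-regular bipartite Tanner graph: each of the n_var variable
    nodes gets exactly dv edges to distinct, randomly chosen check nodes, so
    the total edge count is n_var * dv."""
    rng = random.Random(seed)
    edges = []
    for v in range(n_var):
        checks = rng.sample(range(n_chk), dv)
        edges.extend((v, c) for c in checks)
    return edges

edges = left_regular_tanner(n_var=12, n_chk=6, dv=3)
print(len(edges))                               # 12 * 3 = 36 edges
check_degrees = [sum(1 for _, c in edges if c == k) for k in range(6)]
print(check_degrees)                            # right-side degrees vary
```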

Major gene interactions effect identification on the quality of Hanwoo by radial graph (방사형그래프를 활용한 한우의 품질관련 주요 유전자 상호작용 효과 규명)

  • Lee, Jea-Young;Bae, Jae-Young;Lee, Jin-Mok;Oh, Dong-Yep;Lee, Seong-Won
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.1
    • /
    • pp.151-159
    • /
    • 2013
  • It is well known that human diseases and economic traits of livestock are affected more by gene combination effects than by single gene effects. However, existing methods have disadvantages such as heavy computation, high cost, and long run time. To overcome those drawbacks, SNPHarvester was developed to find the main gene combinations among many genes. In this paper, we use the superior gene combinations related to the quality of Korean beef cattle (Hanwoo) among the SNP sets selected by SNPHarvester, and identify, using a radial graph, the superior genotypes among the selected SNP combinations that can enhance various quality traits of Korean beef.
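
The radial (radar) graph mentioned above can be drawn by placing each genotype combination on its own axis and plotting the mean trait value. The sketch below uses matplotlib with made-up genotype labels and numbers purely for illustration; it is not the paper's Hanwoo data or analysis.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative two-SNP genotype combinations and their mean trait values.
combos = ["AA/GG", "AA/GA", "AA/AA", "AG/GG", "AG/GA", "AG/AA", "GG/GG", "GG/GA"]
mean_trait = [3.2, 2.8, 2.1, 3.9, 4.4, 2.5, 3.0, 2.2]

angles = np.linspace(0, 2 * np.pi, len(combos), endpoint=False)
values = np.array(mean_trait)
# Close the polygon by repeating the first point.
angles = np.concatenate([angles, angles[:1]])
values = np.concatenate([values, values[:1]])

ax = plt.subplot(111, projection="polar")
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.2)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(combos)
ax.set_title("Mean trait value by SNP genotype combination (illustrative)")
plt.show()
```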