• Title/Summary/Keyword: divide and conquer

Search results: 88

Parallel Factorization using Quadratic Sieve Algorithm on SIMD machines (SIMD상에서의 이차선별법을 사용한 병렬 소인수분해 알고리즘)

  • Kim, Yang-Hee
    • The KIPS Transactions:PartA
    • /
    • v.8A no.1
    • /
    • pp.36-41
    • /
    • 2001
  • In this paper, we first design a parallel quadratic sieve algorithm for factoring. We then present a parallel algorithm for factoring a large odd integer that repeatedly applies the parallel quadratic sieve, based on the divide-and-conquer strategy, on SIMD machines with DMM. We show that the algorithm is optimal with respect to the product of running time and number of processors.
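The abstract does not include the algorithm itself, but the relation-collecting stage of a quadratic sieve splits naturally across processors. A minimal sketch follows; the function names, the tiny factor base, and the thread-based parallelism are illustrative assumptions, and the linear-algebra stage that combines relations into a factorization is omitted:

```python
from concurrent.futures import ThreadPoolExecutor
from math import isqrt

def smooth_exponents(v, factor_base):
    """Exponent vector of v over factor_base, or None if v is not smooth."""
    exps = []
    for p in factor_base:
        e = 0
        while v % p == 0:
            v //= p
            e += 1
        exps.append(e)
    return exps if v == 1 else None

def sieve_chunk(n, start, length, factor_base):
    """One processor's share: test Q(x) = (root + x)^2 - n for smoothness."""
    root = isqrt(n) + 1
    relations = []
    for x in range(start, start + length):
        q = (root + x) ** 2 - n
        exps = smooth_exponents(q, factor_base)
        if exps is not None:
            relations.append((root + x, exps))
    return relations

def parallel_relations(n, factor_base, interval, workers=4):
    """Divide the sieving interval among workers; merge their relations."""
    chunk = interval // workers
    with ThreadPoolExecutor(max_workers=workers) as ex:
        futures = [ex.submit(sieve_chunk, n, i * chunk, chunk, factor_base)
                   for i in range(workers)]
        return [rel for f in futures for rel in f.result()]
```

For n = 91 this finds, among others, the relation 10² ≡ 3² (mod 91), from which gcd(10 − 3, 91) = 7 recovers a factor.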


Stabilization Analysis for Switching-Type Fuzzy-Model-Based Controller (스위칭 모드 퍼지 모델 기반 제어기를 위한 안정화 문제 해석)

  • 김주원;주영훈;박진배
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.12a
    • /
    • pp.149-152
    • /
    • 2001
  • This paper discusses a new design technique for switching-type fuzzy-model-based controllers in continuous time and discrete time. The Takagi-Sugeno (T-S) fuzzy system is used to design the switching-type fuzzy-model-based controller. The controller employs a top-down "divide and conquer" approach: a system with many rules is divided into a finite number of subsystems, a partial solution is obtained for each, and the partial solutions are combined into a solution for the overall system. The controller design conditions are given as linear matrix inequalities (LMIs) that guarantee stabilization of the given T-S fuzzy system. Performance comparisons on suitable simulation examples demonstrate the merits of the proposed approach.


A Study of the Building Model for a construction of 3D Virtual City (3D 가상도시 구축을 위한 건물모델 구축 연구)

  • 김성수;임일식;김병국
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2003.04a
    • /
    • pp.328-333
    • /
    • 2003
  • GIS technology is developing very rapidly, from analysis of the real world to the construction of 3D virtual cities. A 3D virtual city is a city realized inside a computer by means of GIS, computer graphics, virtual reality, and database technologies. In this study, 3D building models were constructed from 1/1,000-scale digital maps; in forming each building model, a triangular mesh was generated, and a divide-and-conquer algorithm was used to obtain an optimal mesh. By improving the conventional geometry pipeline for the meshed building models from the digital map, the building models can be rendered quickly. For building facades, images were acquired with a digital camera and rectified by applying a 2D projective transformation.


Developing a distributed conversion algorithm of 3D model data for 3D printers (3D 프린터를 위하여 3D 모델 데이터의 분산 변환 기법 개발)

  • Mo, Junseo;Joo, Woosung;Lee, Kyooyoung;Kim, Sungsuk;Yang, Sun Ok
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.10a
    • /
    • pp.68-70
    • /
    • 2016
  • A 3D printer is a device that creates a three-dimensional object by depositing special material in successive layers. For 3D printing, a 3D model is created and then converted into G-code so that it can be printed on a 3D printer. This paper proposes an algorithm that performs this conversion in a fully distributed manner. To this end, a system consisting of one main node and N worker nodes performs the conversion in two stages following a divide-and-conquer approach. Using an actual implementation of the system, we show how much the conversion time is reduced depending on the factors that affect performance (model size and precision).
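The paper's actual slicing and G-code generation are not given in the abstract; the toy sketch below (all names are assumptions, threads stand in for worker nodes, and the per-layer conversion is a stub) only illustrates the main-node/worker-node division of layers and the ordered merge:

```python
from concurrent.futures import ThreadPoolExecutor

def convert_layers(band_id, triangles, layer_zs):
    """Worker node: emit toy G-code comments for its assigned layer heights."""
    out = []
    for z in layer_zs:
        hits = sum(1 for t in triangles
                   if min(p[2] for p in t) <= z <= max(p[2] for p in t))
        out.append(f"; layer z={z}: {hits} intersected triangles")
    return band_id, out

def distributed_convert(triangles, layer_zs, n_workers=3):
    """Main node: divide the layers into contiguous bands, farm them out,
    then merge the per-band results back in build order."""
    size = -(-len(layer_zs) // n_workers)  # ceiling division
    bands = [layer_zs[i * size:(i + 1) * size] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        futures = [ex.submit(convert_layers, i, triangles, band)
                   for i, band in enumerate(bands)]
        results = sorted(f.result() for f in futures)
    return [line for _, lines in results for line in lines]
```

Because each band is independent, the workers never need to coordinate; only the final merge on the main node preserves the layer order the printer requires.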

A Study on File Search Engine Based on DBMS (DBMS을 활용한 파일 검색엔진 연구)

  • Kim, HyoungSeuk;Yu, Heonchang
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.10a
    • /
    • pp.548-551
    • /
    • 2016
  • Traditional grid-based RDBMSs did not support indexes over unstructured data, and this constraint made them unsuitable as search engines for file documents and unstructured data. Recently, search engines have been built with open-source tools such as Solr and Lucene, but these have drawbacks: linking search results with existing data is not easy, changing their structure is difficult, and accommodating diverse user requirements is hard. In this study, we therefore implemented index optimization for fast search, partition-based divide-and-conquer processing for handling large volumes of data, and a duplexed search-term index. We also built a synonym dictionary and structured the database to support association analysis, maintaining the relationship between search terms and their synonyms, with the goal of developing a search engine more advanced than the open-source alternatives. Experiments were conducted on a sample of more than four million file documents in various formats (MS Office, Hwp, PDF, text).
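As a rough illustration of the partition-based divide-and-conquer idea (not the paper's actual DBMS schema; the names and the hash partitioning are assumptions), a term-partitioned inverted index lets each partition be built and queried independently:

```python
from collections import defaultdict

def build_partitioned_index(docs, n_partitions=4):
    """Divide: hash each term to a partition, so the inverted index is
    built as n_partitions independent sub-indexes."""
    parts = [defaultdict(set) for _ in range(n_partitions)]
    for doc_id, text in docs.items():
        for term in text.lower().split():
            # note: Python salts str hashes per process; a real system
            # would use a stable hash so partitions survive restarts
            parts[hash(term) % n_partitions][term].add(doc_id)
    return parts

def search(parts, term):
    """Conquer: a lookup touches only the partition that owns the term."""
    term = term.lower()
    return parts[hash(term) % len(parts)].get(term, set())
```

Since partitions share nothing, they can be built in parallel and split across machines; a query fans out only when it contains terms owned by different partitions.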

Efficiently Hybrid $MSK_k$ Method for Multiplication in $GF(2^n)$ ($GF(2^n)$ 곱셈을 위한 효율적인 $MSK_k$ 혼합 방법)

  • Ji, Sung-Yeon;Chang, Nam-Su;Kim, Chang-Han;Lim, Jong-In
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.9
    • /
    • pp.1-9
    • /
    • 2007
  • For an efficient implementation of cryptosystems based on arithmetic in a finite field $GF(2^n)$, hardware implementation is an important research topic. To construct a multiplier with low area complexity, divide-and-conquer techniques such as the original Karatsuba-Ofman method and multi-segment Karatsuba methods are useful. Leone proposed an efficient parallel multiplier with low area complexity, and Ernst et al. proposed a multiplier based on a multi-segment Karatsuba method. In [1], the authors proposed new $MSK_5$ and $MSK_7$ methods with low area complexity to improve Ernst's method. In [3], the authors proposed a method that combines $MSK_2$ and $MSK_3$. In this paper we propose an efficient multiplication method that combines $MSK_2$, $MSK_3$, and $MSK_5$. The proposed method saves $116{\cdot}3^l$ gates and $2T_X$ time delay compared with Gather's method at degree $25{\cdot}2^l-2^l$ with $l>0$.
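The 2-segment Karatsuba-Ofman step that these methods build on is easy to sketch for $GF(2)[x]$ with bit-packed polynomials. This is the generic 2-way split, not the paper's combined $MSK_2$/$MSK_3$/$MSK_5$ construction; the names and the recursion cutoff are assumptions:

```python
def gf2_mul_schoolbook(a, b):
    """Carry-less (GF(2)[x]) multiplication of bit-packed polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mul_karatsuba(a, b, cutoff=64):
    """2-segment Karatsuba: 3 half-size multiplications instead of 4.
    Addition in GF(2) is XOR, so the middle term needs no subtractions."""
    n = max(a.bit_length(), b.bit_length())
    if n <= cutoff:
        return gf2_mul_schoolbook(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    lo = gf2_mul_karatsuba(a0, b0)
    hi = gf2_mul_karatsuba(a1, b1)
    mid = gf2_mul_karatsuba(a0 ^ a1, b0 ^ b1) ^ lo ^ hi
    return (hi << (2 * h)) ^ (mid << h) ^ lo
```

Each recursion level replaces four half-size multiplications with three, which is the source of the sub-quadratic gate count the multi-segment variants refine further.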

Travelling Salesman Problem Based on Area Division and Connection Method (외판원 문제의 지역 분할-연결 기법)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.3
    • /
    • pp.211-218
    • /
    • 2015
  • This paper applies a 'divide-and-conquer' algorithm to the travelling salesman problem (TSP). The top 10n edges are selected beforehand from the pool of n(n-1) edges sorted in ascending order of inter-vertex distance. The proposed algorithm first selects the partial paths interconnected by the shortest distance $r_1=d\{v_i,v_j\}$ of each vertex $v_i$ and assigns them as individual regions. For $r_2$, it connects all inter-vertex edges within each region, while inter-region edges are connected in accordance with the connection rule. Finally, for $r_3$, it connects only inter-region edges until a single Hamiltonian cycle is constructed. When tested on TSP-1 (n=26) and TSP-2 (n=42) of real cities and on a randomly constructed TSP-3 (n=50) in the Euclidean plane, the algorithm obtains optimal solutions for the first two and improves on the solution of Valenzuela and Jones for the third. In contrast to the brute-force search, which runs in O(n!) time, the proposed algorithm performs at most 10n connection steps, for a time complexity of $O(n^2)$.
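The exact region-division and connection rules are specific to the paper, but the underlying idea of growing a tour from the shortest inter-vertex edges can be sketched with the classic greedy-edge construction. This is a simplified analogue, not the author's algorithm, and all names are assumptions:

```python
import math

def greedy_edge_tour(points):
    """Build a Hamiltonian cycle by taking edges in ascending length order,
    skipping any edge that would give a vertex degree 3 or close a
    sub-cycle before all n vertices are connected."""
    n = len(points)
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))          # union-find over path fragments
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    degree = [0] * n
    adj = [[] for _ in range(n)]
    chosen = 0
    for _, i, j in edges:
        if degree[i] == 2 or degree[j] == 2:
            continue
        ri, rj = find(i), find(j)
        if ri == rj and chosen < n - 1:
            continue                 # would close a sub-cycle too early
        parent[ri] = rj
        adj[i].append(j); adj[j].append(i)
        degree[i] += 1; degree[j] += 1
        chosen += 1
        if chosen == n:
            break
    tour, prev, cur = [0], None, 0   # walk the single remaining cycle
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        if nxt == 0:
            return tour
        tour.append(nxt)
        prev, cur = cur, nxt
```

Like the paper's method, this only ever considers edges in ascending distance order, so restricting attention to a short prefix of the sorted edge list (the "top 10n") keeps the work near $O(n^2)$ overall.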

Improved Key-Recovery Attacks on HMAC/NMAC-MD4 (HMAC/NMAC-MD4에 대한 향상된 키 복구 공격)

  • Kang, Jin-Keon;Lee, Je-Sang;Sung, Jae-Chul;Hong, Seok-Hie;Ryu, Heui-Su
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.19 no.2
    • /
    • pp.63-74
    • /
    • 2009
  • In 2005, Wang et al. discovered devastating collision attacks on the main hash functions of the MD4 family. After Wang's discovery, many results on the security of existing hash-based cryptographic schemes were presented. At CRYPTO '07, Fouque, Leurent and Nguyen presented full key-recovery attacks on HMAC/NMAC-MD4 and NMAC-MD5 [4]. Such attacks are based on collision attacks on the underlying hash function, and the most expensive stage is the recovery of the outer key. At EUROCRYPT '08, Wang, Ohta and Kunihiro presented an improved outer-key recovery attack on HMAC/NMAC-MD4 using a new near-collision path with high probability [2]. This improves the complexity of the full key-recovery attack on HMAC/NMAC-MD4 proposed by Fouque, Leurent and Nguyen at CRYPTO '07: the number of MAC queries decreases from $2^{88}$ to $2^{72}$, and the number of MD4 computations decreases from $2^{95}$ to $2^{77}$. In this paper, we propose an improved outer key-recovery attack on HMAC/NMAC-MD4 with $2^{77.1246}$ MAC queries and $2^{37}$ MD4 computations, based on the divide-and-conquer paradigm.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, advances in IT and the growing penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing, which means big data analysis will become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each requester of the analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is increasingly expected to be performed by the requesters themselves. Along with this, interest in various kinds of unstructured data is continually increasing; in particular, much attention is focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is regarded as very useful in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to many documents, and it raises a scalability problem: the processing time increases exponentially with the number of analysis targets. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method supports topic modeling over a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling research. In this paper, we propose a topic modeling approach that solves the above two problems.
First of all, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling over the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
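The divide-and-map idea can be sketched in a few lines of pure Python. Here a "topic" is just a top-k term set standing in for a real topic model such as LDA, and all names are assumptions, not the paper's implementation:

```python
from collections import Counter

def top_terms(docs, k=5):
    """Stand-in for a per-partition topic model: the k most frequent terms."""
    counts = Counter(word for doc in docs for word in doc.lower().split())
    return {word for word, _ in counts.most_common(k)}

def map_local_to_global(global_topics, local_topic):
    """Map a local topic to the most similar global (RGS-style) topic,
    measured by Jaccard similarity of their term sets."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(range(len(global_topics)),
               key=lambda i: jaccard(global_topics[i], local_topic))

def divide_and_map(local_sets, global_topics, k=5):
    """Divide: model each local document set independently.
    Conquer: map every local topic onto a global topic index."""
    return [map_local_to_global(global_topics, top_terms(docs, k))
            for docs in local_sets]
```

The accuracy check the paper describes would then compare, per document, the global topic assignment against the mapped local one.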

Computation and Communication Efficient Key Distribution Protocol for Secure Multicast Communication

  • Vijayakumar, P.;Bose, S.;Kannan, A.;Jegatha Deborah, L.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.4
    • /
    • pp.878-894
    • /
    • 2013
  • Secure multimedia multicast applications involve group communications where group membership requires secure, dynamic key generation and updating operations. Such operations usually consume considerable computation time, so a key distribution protocol with reduced computation time is necessary for multicast applications. In this paper, we propose a new key distribution protocol that focuses on two aspects. The first aims to reduce computation complexity by performing fewer multiplication operations, using a ternary-tree approach during key updating; moreover, it optimizes the multiplications themselves by using the existing Karatsuba divide-and-conquer approach for fast multiplication. The second aims to reduce the amount of key-update information communicated to the group members. The proposed algorithm has been evaluated in terms of computation and communication complexity, and a comparative performance analysis of various key distribution protocols is provided. The results show that the proposed algorithm reduces computation and communication time significantly.
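The ternary-tree aspect can be sketched as follows: members sit at the leaves of a complete 3-ary key tree, and a join or leave only re-keys the nodes on the leaf-to-root path, roughly log3(n) keys instead of O(n). The node numbering and names below are illustrative assumptions; the Karatsuba multiplication optimization is a separate, orthogonal step:

```python
def path_to_root(leaf_index, n_members):
    """Key-tree nodes to re-key when the member at leaf_index joins/leaves.
    Nodes are numbered level-order: root is 0, children of i are 3i+1..3i+3."""
    depth = 0
    while 3 ** depth < n_members:       # height of the complete ternary tree
        depth += 1
    first_leaf = (3 ** depth - 1) // 2  # number of nodes above the leaf level
    node = first_leaf + leaf_index
    path = []
    while node > 0:
        path.append(node)
        node = (node - 1) // 3          # parent in level-order numbering
    path.append(0)                      # the group key at the root
    return path
```

For 9 members only 3 keys change per membership event, and for 27 members only 4, versus re-keying every member's key material in a flat scheme.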