• Title/Summary/Keyword: Graph Optimization (그래프최적화)


Co-scheduling Technique of Dataflow Applications with Shared Processor Allocation (프로세서 공유를 이용한 데이터 플로우 어플리케이션의 동시 스케줄링 기법)

  • Kang, Duseok; Kang, Shinhaeng; Yang, Hoeseok; Ha, Soonhoi
    • KIISE Transactions on Computing Practices, v.22 no.1, pp.1-7, 2016
  • When multiple applications run concurrently on a multi-processor system, interference between applications makes it difficult to guarantee real-time constraints. We propose a novel interference analysis technique that allows processors to be shared among dataflow applications while satisfying real-time constraints. Based on the interference analysis, we develop a co-scheduling technique that aims to minimize resource usage. Compared to an existing technique that converts application graphs to real-time tasks, the proposed technique shows better results in terms of resource usage, especially when applied to applications with tight timing constraints.
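
The abstract does not spell out the interference model, but the core idea (share a processor only while every co-runner still meets its deadline) can be illustrated with a toy greedy packer. Everything below — the linear interference term, the field names, the sample applications — is invented for illustration, not the paper's analysis.

```python
def meets_deadline(app, co_runners):
    # Invented linear interference model: each co-runner adds its "demand".
    interference = sum(o["demand"] for o in co_runners)
    return app["base_latency"] + interference <= app["deadline"]

def co_schedule(apps):
    """Greedily share processors; open a new one only when sharing would
    break some application's real-time constraint."""
    processors = []                        # each entry: apps on one processor
    for app in sorted(apps, key=lambda a: a["deadline"]):
        for proc in processors:
            group = proc + [app]
            if all(meets_deadline(a, [x for x in group if x is not a])
                   for a in group):
                proc.append(app)
                break
        else:
            processors.append([app])
    return processors

apps = [
    {"name": "video", "base_latency": 4, "demand": 2, "deadline": 10},
    {"name": "audio", "base_latency": 2, "demand": 1, "deadline": 8},
    {"name": "radar", "base_latency": 6, "demand": 3, "deadline": 7},
]
print([[a["name"] for a in proc] for proc in co_schedule(apps)])
# -> [['radar', 'audio'], ['video']]: two processors instead of three
```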

Improving the Performance of Genetic Algorithms using Gene Reordering (유전자 재배열을 이용한 유전자 알고리즘의 성능향상)

  • Hwang, In-Jae
    • Journal of the Institute of Convergence Signal Processing, v.7 no.4, pp.201-206, 2006
  • Genetic algorithms are known to provide near-optimal solutions for various optimization problems in engineering. In this paper, we study the effect of gene order on the defining length of schemata with high fitness values, and analyze its effect on the performance of genetic algorithms through two well-known problems. A few gene reordering methods are proposed for the graph partitioning and knapsack problems. Experimental results show that genetic algorithms with gene reordering find solutions of better quality than those without it. Finding a proper reordering method for a given problem is therefore important for improving the performance of genetic algorithms.
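
As a loose illustration of the idea (not the paper's actual reordering methods), the sketch below reorders knapsack gene positions by value-to-weight ratio before running a plain GA with one-point crossover, so that promising genes sit close together and high-fitness schemata get shorter defining lengths. All data are invented.

```python
import random

random.seed(1)
items = [(60, 10), (100, 20), (120, 30), (30, 5), (90, 15)]  # (value, weight)
CAPACITY = 50

# Hypothetical reordering: sort gene positions by value/weight ratio so the
# most promising genes are adjacent, shortening schema defining lengths.
order = sorted(range(len(items)), key=lambda i: items[i][0] / items[i][1],
               reverse=True)

def fitness(chrom):
    value = weight = 0
    for gene, idx in zip(chrom, order):   # position k controls item order[k]
        if gene:
            value += items[idx][0]
            weight += items[idx][1]
    return value if weight <= CAPACITY else 0

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)   # one-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in items] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [crossover(random.choice(parents), random.choice(parents))
                     for _ in range(10)]
print(max(fitness(c) for c in pop))       # best packed value found
```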


Join Query Performance Optimization Based on Convergence Indexing Method (융합 인덱싱 방법에 의한 조인 쿼리 성능 최적화)

  • Zhao, Tianyi; Lee, Yong-Ju
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.16 no.1, pp.109-116, 2021
  • Since RDF (Resource Description Framework) triples are modeled as graphs, we cannot directly adopt existing solutions from relational databases and XML technology. In order to store, index, and query Linked Data more efficiently, we propose a convergence indexing method that combines R*-trees and k-dimensional trees. The method uses a hybrid storage system based on HDD (Hard Disk Drive) and SSD (Solid State Drive) devices, and a separate filter-and-refinement index structure that filters out unnecessary data and then refines the intermediate results. We perform performance comparisons based on three standard join retrieval algorithms. The experimental results demonstrate that our method achieves remarkable performance compared to existing methods such as Quad and Darq.
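
The paper's hybrid R*-tree/k-d-tree index and HDD/SSD storage split are not reproduced here; the sketch below only illustrates the general filter-and-refinement pattern it builds on, using SciPy's k-d tree as the cheap filter and an exact distance predicate as the refinement. The random points stand in for encoded triple join keys.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
left = rng.random((1000, 2))    # stand-ins for encoded triple join keys
right = rng.random((1000, 2))

tree = cKDTree(right)
candidates = tree.query_ball_point(left, r=0.1)          # cheap filter step

pairs = [(i, j) for i, cand in enumerate(candidates) for j in cand
         if np.linalg.norm(left[i] - right[j]) <= 0.05]  # exact refinement
print(len(pairs), "pairs survive refinement")
```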

Fabric Mapping and Placement of Field Programmable Stateful Logic Array (Field Programmable Stateful Logic Array 패브릭 매핑 및 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers, v.49 no.12, pp.209-218, 2012
  • Recently, the Field Programmable Stateful Logic Array (FPSLA) was proposed as one of the most promising system integration technologies for extending the life of Moore's law. This work is the first proposal of an FPSLA design automation flow, covering logic synthesis, synchronization, physical mapping, and automatic placement of FPSLA designs. Synchronizing each gate for pipelining determines the x-coordinates of cells and reduces placement to a one-dimensional problem. The objective function and its gradients for the non-linear optimization of net length and placement density have been remodeled for this reduced global placement problem. Also, a recursive algorithm is proposed that legalizes the placement by relaxing the density overflow of bipartite bin groups in a top-down hierarchical fashion. The proposed model and algorithm were implemented and validated on the ACM/SIGDA benchmark designs. The output state of a gate in an FPSLA must be duplicated so that each fanout gate can be connected to a dedicated copy. This property is taken into account by merging the duplicated nets into a hyperedge and then splitting the hyperedge into edges as the optimization progresses, which yields an additional 18.4% cell count reduction in the densest logic stage. The practicality of the FPSLA can be further enhanced primarily by incorporating into logic synthesis a constraint that avoids concentrating the fan-ins of gates on certain logic stages. In addition, an efficient algorithm is needed for the routing problem, which is based on a complicated graph modeling the nanowire crossbar that is trimmed to be embedded into the FPSLA fabric and is therefore asymmetric. These CAD tools can be used to evaluate fabric efficiency during architecture enhancement as well as to automate the design.
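
As a rough illustration of the reduced one-dimensional global placement step (not the paper's actual objective, which also models placement density), the sketch below minimizes a smooth log-sum-exp approximation of each net's span by plain gradient descent. The cell count, nets, and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.uniform(0.0, 10.0, size=8)        # 1-D cell coordinates (x fixed per stage)
nets = [[0, 1, 2], [2, 3], [4, 5, 6, 7]]  # cells connected by each net

def net_length_grad(y, gamma=1.0):
    """Gradient of the summed log-sum-exp approximation of net spans."""
    grad = np.zeros_like(y)
    for net in nets:
        v = y[net]
        e_pos = np.exp(v / gamma)   # smooth max of the net's coordinates
        e_neg = np.exp(-v / gamma)  # smooth min of the net's coordinates
        grad[net] += e_pos / e_pos.sum() - e_neg / e_neg.sum()
    return grad

for _ in range(200):              # plain gradient descent on the smoothed span
    y -= 0.1 * net_length_grad(y)
print(np.round(y, 2))             # cells in each net have been pulled together
```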

The Development of a beam profile monitoring system for improving the beam output characteristics (빔 출력 특성 개선을 위한 빔 프로파일 모니터링 시스템 개발)

  • An, Young-jun; Hur, Min-goo; Yang, Seung-dae; Shin, Dae-seob; Lee, Dong-hoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.19 no.11, pp.2689-2696, 2015
  • Radioactive isotopes manufactured with a cyclotron for radiation diagnosis have a production yield that depends on the size, shape, and uniformity of the proton beam at the irradiated target location. Therefore, in this paper we developed a BPM (Beam Profile Monitor) device capable of measuring the beam cross-section at the cyclotron beam line. The BPM device is remotely controlled through LabVIEW, and the BPM program makes it easy to monitor and display two-dimensional graphs and three-dimensional beam distribution information obtained while scanning a tungsten wire along the X and Y axes. The beam measurement time was confirmed to be 37 seconds at a step motor driving speed of 2000 pps. By readjusting the beam based on the measured distribution information, the beam distribution can be optimized to maximize the RI production yield and contribute to supply stabilization.
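
A minimal sketch of the kind of analysis such a BPM program performs: fitting a Gaussian to one wire-scan axis to estimate the beam centroid and width. The scan data and beam parameters below are invented, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

pos = np.linspace(-10, 10, 41)            # wire position along one axis (mm)
rng = np.random.default_rng(0)
signal = gaussian(pos, 1.0, 0.5, 2.0) + 0.01 * rng.standard_normal(41)

(amp, mu, sigma), _ = curve_fit(gaussian, pos, signal, p0=[1.0, 0.0, 1.0])
print(f"beam centre {mu:.2f} mm, width sigma {abs(sigma):.2f} mm")
```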

A new approach to design isolation valve system to prevent unexpected water quality failures (수질사고 예방형 상수도 관망 밸브 시스템 설계)

  • Park, Kyeongjin; Shin, Geumchae; Lee, Seungyub
    • Journal of Korea Water Resources Association, v.55 no.spc1, pp.1211-1222, 2022
  • Abnormal conditions inevitably occur during the operation of a water distribution system (WDS) and require the isolation of certain areas using isolation valves. In general, the optimal location of isolation valves has been determined by minimizing hydraulic failures, since isolating an area changes the hydraulic state (e.g., flow direction, velocity, and pressure). Water quality failures can also be induced by these hydraulic changes, which have not previously been considered in isolation valve system design. Therefore, this study proposes a new isolation valve system design methodology to prevent unexpected water quality failure events. The methodology uses the flow direction change ratio (FDCR), which accounts for flow direction changes after isolation of an area, as a constraint, while reliability is used as the objective function. The optimal design model was applied to a synthetic grid network and the results were compared with the traditional design approach. Results show that considering the FDCR eliminates flow direction changes, while average pressure, the coefficients of variation of pressure and velocity, and the hydraulic geodesic index (HGI) outperform those of the traditional design approach. The proposed methodology is expected to be a useful approach for minimizing the unexpected consequences of traditional design approaches.
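
The abstract does not give the exact FDCR formula; a plausible reading is the fraction of still-active pipes whose flow direction flips after isolation, sketched below on invented flow data.

```python
def fdcr(flows_before, flows_after):
    """Fraction of pipes whose flow sign flips, ignoring pipes whose flow
    goes to zero (isolated or stagnant). A plausible reading of FDCR,
    not the paper's exact definition."""
    changed = total = 0
    for q0, q1 in zip(flows_before, flows_after):
        if q0 == 0 or q1 == 0:
            continue
        total += 1
        if (q0 > 0) != (q1 > 0):
            changed += 1
    return changed / total if total else 0.0

before = [1.2, -0.4, 0.8, 0.0, 2.1]   # signed pipe flows (m^3/s), invented
after  = [1.0,  0.3, 0.8, 0.0, -1.5]  # flows after isolating a segment
print(fdcr(before, after))            # 0.5: two of four active pipes reversed
```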

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze it. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that the system can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to expand nodes when rapidly increasing data must be distributed across various nodes. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented store with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data rapidly increases, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log analysis results from the MongoDB, Hadoop-based analysis, and MySQL modules, organized by analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insertion performance evaluation of MongoDB for various chunk sizes.
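
A minimal sketch of the MongoDB side of such a pipeline using pymongo, assuming a local MongoDB instance; the connection string, database name, and log fields are placeholders, not the paper's configuration. It shows the two properties the abstract leans on: schema-free inserts for heterogeneous logs, and an aggregation of the kind the log graph generator module could plot.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumes a local instance
logs = client["bank_logs"]["raw"]                   # placeholder names

# Schema-free inserts: records with different shapes share one collection.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "transfer", "amount": 150000},
    {"ts": datetime.now(timezone.utc), "type": "login",
     "ip": "10.0.0.7", "device": "mobile"},
])

# Per-type counts, the kind of aggregate a graph module could plot.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```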

A news visualization based on an algorithm by journalistic values (저널리즘 가치에 기초한 알고리즘을 이용한 뉴스 시각화)

  • Park, Daemin; Kim, Gi-Nam; Kang, Nam-Yong; Suh, Bongwon; Ha, Hyo-Ji; On, Byung-Won
    • Journal of the HCI Society of Korea, v.9 no.2, pp.5-12, 2014
  • There has been widespread criticism of online news services for their bias toward sensational and soft news, so news services based on journalistic values are socially demanded. News source network analysis (NSNA), an algorithm that clusters and weights news sources, quotes, and articles, was suggested in a previous study as a method to emphasize journalistic values such as facts, variety, depth, and criticism. This study proposes 'News Sources', a visualization tool for NSNA. 'News Sources' shows news as bar graphs, weighted by facts and criticism, and arranged by organization and subject. A beta version was designed using KINDS, a news archive of the Korea Press Foundation.
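
NSNA's actual clustering and weighting are not specified in the abstract; the toy scoring below only gestures at the idea of ranking articles by distinct sources and critical quotes, the quantities a bar chart like 'News Sources' could display. All data are invented.

```python
# Invented scoring: more distinct sources and more critical quotes -> higher bar.
articles = [
    {"title": "Policy analysis", "sources": {"ministry", "expert", "union"},
     "critical_quotes": 2},
    {"title": "Celebrity item", "sources": {"agency"}, "critical_quotes": 0},
]

def score(article):
    return len(article["sources"]) + article["critical_quotes"]

for a in sorted(articles, key=score, reverse=True):
    print(f"{a['title']}: {score(a)}")
```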

Digital Logic System Design Based on Directed Cyclic Graph (다이렉트사이클릭그래프에 기초한 디지털논리시스템 설계)

  • Park, Chun-Myoung
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.9 no.1, pp.89-94, 2009
  • This paper proposes algorithms that design digital logic circuits and assign a code to each node of a DCG (Directed Cyclic Graph) of length $\zeta$. Because conventional algorithms have several problems, this paper derives a matrix equation from the DCG of length $\zeta$ and proposes circuit design algorithms based on it. Using the proposed algorithms, circuits that could not be designed with former algorithms become realizable. A comparison between circuits produced by former algorithms and those of this paper verifies that the proposed algorithms yield more optimized circuit designs. The proposed algorithms can design circuits for any DCG whose length is a natural number, which brings the following advantages: a reduction in circuit input/output digits, simplification of the circuit composition, and reductions in computation time and cost. Comparisons and verification of the proposed algorithms are also presented.
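
The paper's matrix formulation is not reproduced in the abstract; the sketch below shows only the elementary step it builds on — assigning minimal-width binary codes to the ζ nodes of a directed cycle and tabulating the next-state transition along it. The cycle length is an arbitrary example.

```python
import math

zeta = 5                                   # cycle length (a natural number)
bits = math.ceil(math.log2(zeta))          # minimal code width for zeta nodes
codes = {node: format(node, f"0{bits}b") for node in range(zeta)}
next_code = {codes[i]: codes[(i + 1) % zeta] for i in range(zeta)}
print(codes)      # node -> binary code
print(next_code)  # current-state code -> next-state code along the cycle
```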


Progressive Reconstruction of 3D Objects from a Single Freehand Line Drawing (Free-Hand 선화로부터 점진적 3차원 물체 복원)

  • 오범수; 김창헌
    • Journal of KIISE: Computer Systems and Theory, v.30 no.3_4, pp.168-185, 2003
  • This paper presents a progressive algorithm that not only narrows down the search domain in the course of face identification but also quickly reconstructs various 3D objects from a sketch drawing. The sketch drawing, an edge-vertex graph without hidden-line removal that serves as input to the reconstruction process, is obtained from an inaccurate freehand sketch of a 3D wireframe object. The algorithm runs in two stages. In the face identification stage, we generate and classify potential faces into implausible, basis, and minimal faces using geometrical and topological constraints to reduce the search space; the proposed algorithm searches the space of minimal faces only, to quickly identify the actual faces of an object. In the object reconstruction stage, we progressively calculate a 3D structure by optimizing the coordinates of the object's vertices according to the sketch order of the faces. The progressive method quickly reconstructs the most plausible 3D object by applying, during optimization, 3D constraints derived from the relationship between the object and the sketch drawing. Furthermore, it allows the designer to change the viewpoint while sketching. The progressive reconstruction algorithm is discussed, and examples from a working implementation are given.
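
The implausible/basis/minimal face classification is the paper's own; the sketch below only shows a plausible starting point under that framing — extracting a cycle basis of the edge-vertex graph (a cube drawing here) as the pool of candidate faces that geometric and topological constraints would then prune.

```python
import networkx as nx

# Edge-vertex graph of a cube drawing: 8 vertices, 12 edges.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0),
              (4, 5), (5, 6), (6, 7), (7, 4),
              (0, 4), (1, 5), (2, 6), (3, 7)])

candidate_faces = nx.cycle_basis(G)   # independent cycles = face candidates
for face in candidate_faces:
    print(face)
```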