• Title/Summary/Keyword: parallel/distributed processing


Priority-based Multi-DNN scheduling framework for autonomous vehicles (자율주행차용 우선순위 기반 다중 DNN 모델 스케줄링 프레임워크)

  • Cho, Ho-Jin;Hong, Sun-Pyo;Kim, Myung-Sun
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.368-376 / 2021
  • With the recent development of deep learning technology, autonomous things technology is attracting attention, and DNNs are widely used in embedded systems such as drones and autonomous vehicles. Embedded systems that can perform large-scale operations and process multiple DNNs for high recognition accuracy without relying on the cloud are now available. Within these systems, DNNs have various levels of priority: DNNs related to the safety-critical applications of autonomous vehicles have the highest priority and must be handled first. In this paper, we propose a priority-based scheduling framework for the case where multiple DNNs are executed simultaneously. Even if a low-priority DNN is already executing, a high-priority DNN can preempt it, guaranteeing the fast response required by the safety-critical applications of autonomous vehicles. Extensive experiments on an actual commercial board show performance improvements of up to 76.6%.
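A minimal sketch of the priority-based preemption mechanism described above, assuming hypothetical task names and a slice-based execution model; this is an illustration of the idea, not the paper's actual framework:

```python
# Hedged sketch: priority-based preemptive scheduling of DNN jobs.
# Task names, slice granularity, and run_slice() are illustrative
# assumptions, not the framework from the paper.
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so heapq never compares tasks

class DNNTask:
    def __init__(self, name, priority, num_slices):
        self.name = name
        self.priority = priority      # lower value = higher priority
        self.next_slice = 0
        self.num_slices = num_slices  # e.g., layer groups acting as preemption points

    def run_slice(self):
        # Placeholder for executing one layer group on the accelerator.
        print(f"{self.name}: slice {self.next_slice}")
        self.next_slice += 1

    def done(self):
        return self.next_slice >= self.num_slices

ready = []  # min-heap keyed by (priority, arrival order)

def submit(task):
    heapq.heappush(ready, (task.priority, next(_counter), task))

def schedule():
    while ready:
        prio, _, task = heapq.heappop(ready)
        while not task.done():
            task.run_slice()
            # Preemption point: yield if a higher-priority task is now waiting.
            if ready and ready[0][0] < prio:
                submit(task)  # re-queue the partially executed task
                break

submit(DNNTask("lane-detection", priority=2, num_slices=3))
submit(DNNTask("pedestrian-detection", priority=0, num_slices=2))  # safety-critical
schedule()
```

The key design point is that preemption happens only at slice boundaries, so a low-priority DNN is interrupted between layer groups rather than mid-kernel.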

The Study of New Reconstruction Method for Brain SPECT on Dual Detector System (Dual detector system에서 Brain SPECT의 new reconstruction method의 연구)

  • Lee, Hyung-Jin;Kim, Su-Mi;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.57-62 / 2009
  • Purpose: Brain SPECT studies are more sensitive to motion than other studies. In particular, when the 1-day subtraction method is applied for Diamox SPECT, a shorter study time is needed to prevent re-examination. New study conditions and a new analysis method on a dual-detector system were required because the triple-head camera at Seoul National University Hospital was to be retired. We therefore tried to increase image quality and give the dual-head system a study time equivalent to the triple-head system by using a new analysis program. Materials and Methods: Using an IEC phantom, we estimated contrast, SNR, and FWHM. With a Hoffman 3D brain phantom, which closely resembles a real brain, we assumed that 5% of the injected dose was distributed in brain tissue. To compare against the existing FBP method, we used a fan-beam collimator, and we applied 15 sec and 25 sec/frame for each SPECT study using LEHR and LEUHR collimators. We used the OSEM2D and Onco-Flash3D reconstruction methods, and also compared reconstructions with and without 5 mm Gaussian post-filtering. Attenuation correction was applied manually. Based on the phantom results, we then performed brain SPECT on a patient injected with 15 mCi of $^{99m}Tc$-HMPAO, and technologists, MDs, and PhDs evaluated the results. Results: With the IEC phantom, reconstruction by Flash3D was better than the existing FBP and OSEM2D methods. According to the evaluation, Flash3D requires 5 mm post-filtering at both 15 sec and 25 sec, and subset 8 with 8 iterations is appropriate for Flash3D. OSEM2D also needs post-filtering, with subset 4 and 8 iterations appropriate for 15 sec, and subset 8 and 12 iterations for 25 sec. Considering the injected dose per patient and the study time, the combination of 15 sec/frame, LEHR collimator, Flash3D with subset 8 and 8 iterations, and 5 mm Gaussian post-filtering is the most appropriate. On the other hand, the LEUHR collimator was not appropriate for the 1-day subtraction Diamox study because of its lower sensitivity. Conclusions: We showed that the dual-head camera achieves the same short study time as the triple-head gamma camera, and that switching from the existing fan-beam collimator to a parallel collimator gives good results. In addition, the resolution and contrast of the new method were better than FBP, and image sensitivity and accuracy can improve because less subjectivity is involved than with the Metz filter of FBP. We expect better image quality and shorter study times for brain SPECT on a dual-detector system.
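A minimal sketch of applying the 5 mm FWHM Gaussian post-filter named in the abstract to a reconstructed volume; the voxel size is an assumed example value, not a parameter reported by the study:

```python
# Hedged sketch: 5 mm FWHM Gaussian post-filtering of a reconstructed volume.
# VOXEL_MM is an illustrative assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM = 5.0
VOXEL_MM = 2.0  # assumed isotropic voxel size

# Standard relation between FWHM and sigma for a Gaussian kernel.
sigma_mm = FWHM_MM / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~2.12 mm
sigma_vox = sigma_mm / VOXEL_MM

volume = np.random.rand(64, 64, 64)  # stand-in for an OSEM-reconstructed volume
smoothed = gaussian_filter(volume, sigma=sigma_vox)
```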


Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis (도로 주행환경 분석을 위한 빅데이터 플랫폼 구축 정보기술 인프라 개발)

  • Jung, In-taek;Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.3 / pp.669-678 / 2018
  • This study developed information technology infrastructures for building a driving-environment analysis platform using various big data, such as vehicle sensing data and public data. First, a small platform server with a parallel structure for distributed big data processing was developed on the hardware side. Next, programs for big data collection/storage, processing/analysis, and information visualization were developed on the software side. The collection S/W was developed as a collection interface using Kafka, Flume, and Sqoop. The storage S/W was divided into the Hadoop distributed file system and Cassandra DB according to how the data are used. The processing S/W performs spatial unit matching and time-interval interpolation/aggregation of the collected data by applying the grid index method. The analysis S/W was developed as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization S/W was developed as a Web GIS engine program for providing and visualizing various driving-environment information. The performance evaluation derived the number of executors, the optimal memory capacity, and the number of cores for the developed server, and its computation performance was superior to that of other cloud computing services.
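A minimal sketch of the grid index method used for spatial unit matching; the grid cell size and coordinates are illustrative assumptions, not values from the study:

```python
# Hedged sketch: grid-index matching of vehicle sensing points to spatial units.
# GRID_DEG and the sample records are illustrative assumptions.
import math
from collections import defaultdict

GRID_DEG = 0.001  # assumed grid cell size in degrees (~100 m)

def cell_of(lat, lon):
    # Map a coordinate to its integer grid-cell key (floor handles negatives).
    return (math.floor(lat / GRID_DEG), math.floor(lon / GRID_DEG))

def build_grid_index(points):
    # points: iterable of (lat, lon, payload) sensing records
    index = defaultdict(list)
    for lat, lon, payload in points:
        index[cell_of(lat, lon)].append(payload)
    return index

index = build_grid_index([
    (37.5665, 126.9780, {"speed": 42}),
    (37.5666, 126.9781, {"speed": 38}),
])
# Records falling in the same cell can then be aggregated per spatial unit.
print({cell: len(recs) for cell, recs in index.items()})
```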

Multi-threaded Web Crawling Design using Queues (큐를 이용한 다중스레드 방식의 웹 크롤링 설계)

  • Kim, Hyo-Jong;Lee, Jun-Yun;Shin, Seung-Soo
    • Journal of Convergence for Information Technology / v.7 no.2 / pp.43-51 / 2017
  • Background/Objectives: The purpose of this study is to design and implement a multi-threaded web crawler using queues that solves the time delay of single-process crawling, the cost increase of parallel processing, and the waste of manpower, by utilizing multiple bots connected over a wide area network. Methods/Statistical analysis: This study designs and analyzes applications that run on independent systems based on a multi-threaded configuration using queues. Findings: We propose a multi-threaded web crawler design using queues. The throughput of web documents can be analyzed per client and per thread according to the given formula, and the efficiency and optimal number of clients can be confirmed by checking the efficiency of each thread. The proposed system is based on distributed processing: clients in independent environments deliver web documents quickly and reliably using queues and threads. Application/Improvements: Rather than a web crawler designed for a particular site, a system is needed that quickly and efficiently navigates and collects various web sites by applying queues and multiple threads to a general-purpose web crawler.
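A minimal sketch of the queue-plus-threads design described above, in which worker threads pull URLs from a shared queue; the seed URLs and thread count are illustrative assumptions:

```python
# Hedged sketch: multi-threaded crawling with a shared work queue.
import queue
import threading
import urllib.request

NUM_THREADS = 4
url_queue = queue.Queue()
results = {}
lock = threading.Lock()

def worker():
    while True:
        url = url_queue.get()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read()
            with lock:
                results[url] = len(body)  # store document size as a stand-in
        except Exception as exc:
            with lock:
                results[url] = exc
        finally:
            url_queue.task_done()  # lets queue.join() know this URL is finished

for seed in ["https://example.com", "https://example.org"]:
    url_queue.put(seed)

for _ in range(NUM_THREADS):
    threading.Thread(target=worker, daemon=True).start()

url_queue.join()  # block until every queued URL has been processed
print(results)
```

The queue decouples URL discovery from fetching, so the same design scales from one client to many independent clients, each running its own thread pool.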

Development of C++ Compiler and Programming Environment (C++컴파일러 및 프로그래밍 환경 개발)

  • Jang, Cheon-Hyeon;O, Se-Man
    • The Transactions of the Korea Information Processing Society / v.4 no.3 / pp.831-845 / 1997
  • In this paper, we propose and develop a compiler and interactive programming environment for C++, which is among the most noteworthy of the object-oriented languages. To develop the C++ compiler, we adopted a front-end/back-end model using the EM virtual machine. In developing the front-end, we formalized the C++ grammar with the context-sensitive tokens that must be handled by the lexical scanner, and designed an AST class library as a hierarchy of AST node classes with well-defined interfaces among them. In developing the back-end, we proposed a model with three major components: code optimizer, code generator, and run-time environment. We emphasized a retargetable back-end that can be systematically reconfigured to generate code for a variety of distinct target computers. We also developed a tree pattern matching algorithm and implemented a target code generator that produces SPARC code. We further proposed the theory and a model for constructing interactive programming environments: to represent language features we adopt the AST as the internal representation and propose an incremental analysis algorithm and visual diagrams. We also studied unparsing schemes, visual diagrams, and graphical user interfaces for generating interactive environments automatically. The results of our research will be very useful for developing compilers and programming environments, and can also be used in compilers for parallel and distributed environments.
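A minimal sketch of tree pattern matching for instruction selection, the technique named for the back-end; the AST node kinds, patterns, and opcodes are hypothetical and do not reproduce the paper's AST class library or SPARC generator:

```python
# Hedged sketch: tree pattern matching over an AST for code generation.
# Node kinds, patterns, and opcodes are illustrative assumptions.
class Node:
    def __init__(self, kind, *children):
        self.kind = kind
        self.children = children

def matches(node, pattern):
    # A pattern is a nested tuple ("kind", child_pattern...); "_" matches any subtree.
    if pattern == "_":
        return True
    kind, *child_patterns = pattern
    return (node.kind == kind
            and len(node.children) == len(child_patterns)
            and all(matches(c, p) for c, p in zip(node.children, child_patterns)))

# Pattern table: the first matching pattern wins, mimicking instruction selection.
PATTERNS = [
    (("add", ("const",), "_"), "addi"),   # add with a constant operand -> immediate form
    (("add", "_", "_"), "add"),
]

def select(node):
    for pattern, opcode in PATTERNS:
        if matches(node, pattern):
            return opcode
    raise ValueError(f"no pattern for {node.kind}")

tree = Node("add", Node("const"), Node("reg"))
print(select(tree))  # -> "addi"
```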


Analysis of Factors for Korean Women's Cancer Screening through Hadoop-Based Public Medical Information Big Data Analysis (Hadoop기반의 공개의료정보 빅 데이터 분석을 통한 한국여성암 검진 요인분석 서비스)

  • Park, Min-hee;Cho, Young-bok;Kim, So Young;Park, Jong-bae;Park, Jong-hyock
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.10 / pp.1277-1286 / 2018
  • In this paper, we provide an Apache Hadoop-based cloud environment with flexible scalability of computing resources for the analysis of public medical information big data. It includes the ability to quickly and flexibly extend storage, memory, and other resources as log data accumulate or grow over time. In addition, when real-time analysis of the accumulated unstructured log data is required, the system adopts a Hadoop-based analysis module to overcome the processing limits of existing analysis tools, providing fast and reliable parallel distributed processing of large amounts of log data. For the big data analysis, frequency analysis and chi-square tests are performed. In addition, multivariate logistic regression at a significance level of 0.05 was performed on the significant variables (p<0.05), for each of the three models.
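A minimal sketch of the statistical steps named above (chi-square test, then multivariate logistic regression keeping variables with p < 0.05), run on a synthetic stand-in dataset; the factor names are assumptions, and the public medical data itself is not reproduced:

```python
# Hedged sketch: chi-square test and multivariate logistic regression on
# synthetic data standing in for the screening dataset.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 500
age_group = rng.integers(0, 3, n)   # assumed categorical factor
income = rng.normal(0, 1, n)        # assumed continuous factor
screened = rng.integers(0, 2, n)    # outcome: cancer screening yes/no

# Chi-square test of independence between a factor and the outcome.
table = np.zeros((3, 2))
for a, s in zip(age_group, screened):
    table[a, s] += 1
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")

# Multivariate logistic regression; variables with p < 0.05 would be retained.
X = sm.add_constant(np.column_stack([age_group, income]))
model = sm.Logit(screened, X).fit(disp=0)
print(model.pvalues)
```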

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.63-77 / 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query, and the most relevant documents do not necessarily appear at the top of the query output order. Also, current search tools cannot retrieve the documents related to a retrieved document from the gigantic number of documents available. The most important problem for many current search systems is to increase the quality of search, which means providing related documents and keeping the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In detail, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the article with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links that researchers do not make explicitly, because it indexes only the links researchers create when they cite other articles, and for the same reason it does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts the subject and predicate of each sentence in a document. A document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. We build a hierarchical graph of a document using this table and then integrate the graphs of multiple documents. From the graph of the entire document set, the area of each document is calculated relative to the integrated documents, and relations among the documents are marked by comparing these areas. We also propose a method for the structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approach with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, which is better by about 15%.
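A minimal sketch of the F-measure used in the evaluation, computed as the harmonic mean of precision and recall; the retrieval sets are illustrative:

```python
# Hedged sketch: F-measure over an illustrative retrieval result.
def f_measure(retrieved, relevant):
    hits = len(retrieved & relevant)
    if hits == 0:
        return 0.0
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return 2 * precision * recall / (precision + recall)

retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d1", "d2", "d5"}
print(f"{f_measure(retrieved, relevant):.2f}")  # harmonic mean of P and R
```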