• Title/Summary/Keyword: 병렬고도처리 (parallel advanced processing)

Search results: 17 (processing time: 0.02 seconds)

Structuring FFT Algorithm for Dataflow Computation (Dataflow 연산에 의한 FFT 앨고리즘의 구성)

  • 이상범;박찬정
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.10 no.4
    • /
    • pp.175-183
    • /
    • 1985
  • Dataflow computers exhibit a high degree of parallelism that cannot be obtained easily with the conventional von Neumann architecture. Since many instructions are ready for execution simultaneously, concurrency can be achieved easily by the multiple processors of a dataflow machine. This paper describes an FFT butterfly algorithm for dataflow computation and evaluates its performance, measured by the speed-up factor, through simulation using a time-acceleration method.
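The paper's dataflow formulation is not given in the abstract, but the butterfly structure it exploits can be sketched as a standard radix-2 decimation-in-time FFT. The key property for dataflow execution is that every butterfly within a stage is independent of the others; the recursive form below is illustrative only, not the paper's implementation:

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT (len(x) must be a power of 2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # One butterfly: a single complex multiply feeding two outputs.
        # Every iteration of this loop is independent of the others, so a
        # dataflow machine can fire all n/2 butterflies of a stage at once.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

For example, `fft([1, 1, 1, 1])` yields `[4, 0, 0, 0]` (a constant input concentrates in the DC bin).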


High Resolution Rainfall Prediction Using Distributed Computing Technology (분산 컴퓨팅 기술을 이용한 고해상도 강수량 예측)

  • Yoon, JunWeon;Song, Ui-Sung
    • Journal of Digital Contents Society
    • /
    • v.17 no.1
    • /
    • pp.51-57
    • /
    • 2016
  • Distributed computing attempts to harness massive computing power by linking the idle resources of a great number of PCs over the Internet, processing a variety of applications in parallel in fields such as biology, climate, cryptology, and astronomy. In this paper, we develop an Internet-based distributed computing environment in which we can analyze a high-resolution rainfall prediction application in the meteorological field. To analyze rainfall forecasts over the Korean peninsula, we use QPM (Quantitative Precipitation Model), a mesoscale forecasting model. A model built on a 27 km grid spacing takes a long time to run, and its efficiency is degraded. On the other hand, a small terrain model based on it that reflects the detailed topography within a 3 km radius makes the computed rainfall distribution over the detailed terrain of an area easy to understand and can improve computational efficiency. The finer the model domain is subdivided, the greater the available parallelism, so efficiency increases linearly with the number of compute nodes. The model domain is decomposed into 20×20 sub-grid work units distributed across networked computing resources.
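The 20×20 sub-grid decomposition described above can be sketched as follows. The grid dimensions and the round-robin assignment to workers are illustrative assumptions, not the paper's actual scheduler:

```python
def decompose(rows, cols, tile=20):
    """Split a rows x cols model grid into tile x tile sub-domain work units."""
    tiles = []
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            # Each work unit is a half-open cell range (r0, r1, c0, c1);
            # edge tiles are clipped to the grid boundary.
            tiles.append((r, min(r + tile, rows), c, min(c + tile, cols)))
    return tiles

def assign(tiles, n_workers):
    """Round-robin distribution of work units to networked compute nodes."""
    buckets = [[] for _ in range(n_workers)]
    for i, t in enumerate(tiles):
        buckets[i % n_workers].append(t)
    return buckets
```

A 60×40 grid yields six 20×20 units; adding workers shrinks each node's share roughly linearly, which is the scaling behavior the abstract claims.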

A Partition Technique of UML-based Software Models for Multi-Processor Embedded Systems (멀티프로세서용 임베디드 시스템을 위한 UML 기반 소프트웨어 모델의 분할 기법)

  • Kim, Jong-Phil;Hong, Jang-Eui
    • The KIPS Transactions:PartD
    • /
    • v.15D no.1
    • /
    • pp.87-98
    • /
    • 2008
  • Along with the demand for powerful processing units in embedded systems, new approaches to embedded software development are also required to support that demand. In order to improve resource utilization and system performance, software modeling techniques have to consider the features of the hardware architecture. This paper proposes a partitioning technique for UML-based software models, which focuses on generating software components that can be allocated onto a multiprocessor architecture. Our partitioning technique first transforms UML models into CBCFGs (Constraint-Based Control Flow Graphs), and then slices the CBCFGs with consideration of parallelism and data dependency. We believe that our proposition has practical applicability in the areas of platform-specific modeling and performance estimation in model-driven embedded software development.
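The abstract does not detail the CBCFG slicing itself. As a rough stand-in, grouping model elements by shared data dependencies (connected components of a dependency graph) illustrates the kind of allocatable partitions meant: elements with no dependency path between them can go to different processors. The grouping rule and node numbering are assumptions, not the paper's algorithm:

```python
from collections import defaultdict

def partitions(n_nodes, dependencies):
    """Group model elements into allocatable partitions: elements that share
    a data dependency stay together; independent groups may be mapped to
    different processors."""
    adj = defaultdict(set)
    for a, b in dependencies:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for start in range(n_nodes):
        if start in seen:
            continue
        # Depth-first walk collecting one connected component.
        stack, comp = [start], []
        seen.add(start)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        groups.append(sorted(comp))
    return groups
```

With dependencies 0→1 and 1→2, elements {0, 1, 2} form one partition while 3 and 4 remain independently allocatable.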

Signal Space Detection for High Data Rate Channels (고속 데이터 전송 채널을 위한 신호공간 검출)

  • Jeon, Taehyun
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.42 no.10 s.340
    • /
    • pp.25-30
    • /
    • 2005
  • This paper generalizes the concept of signal space detection to construct a fixed-delay tree search (FDTS) detector which estimates a block of n channel symbols at a time. This technique is applicable to high-speed implementations. Two approaches are discussed, both of which are based on efficient signal space partitioning. In the first approach, symbol detection is performed based on a multi-class partitioning of the signal space. This approach is a generalization of binary symbol detection based on two-class pattern classification. In the second approach, binary signal detection is combined with a look-ahead technique, resulting in a highly parallel detector architecture.
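A minimal sketch of the multi-class signal-space idea: enumerate the 2^n candidate symbol blocks, map each to its ideal (noiseless) channel output point, and choose the block whose point is nearest the received samples in Euclidean distance. The channel response `h`, bipolar ±1 symbols, and known previous symbols are illustrative assumptions; the paper's look-ahead/parallel variants are not shown:

```python
import itertools

def noiseless(h, prev, bits):
    """Ideal channel output for a candidate block, given the len(h)-1
    previously decided symbols (intersymbol interference memory)."""
    seq = list(prev) + list(bits)
    m = len(prev)
    return [sum(h[j] * seq[m + i - j] for j in range(len(h)))
            for i in range(len(bits))]

def detect_block(received, h, prev, n):
    """Multi-class signal-space detection: pick the block of n symbols whose
    ideal output point is nearest (Euclidean) to the received samples."""
    best, best_d = None, float("inf")
    for bits in itertools.product((-1, 1), repeat=n):
        y = noiseless(h, prev, bits)
        d = sum((r - v) ** 2 for r, v in zip(received, y))
        if d < best_d:
            best, best_d = bits, d
    return best
```

Each candidate block corresponds to one region of the partitioned signal space; hardware implementations evaluate the region boundaries in parallel rather than looping.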

Development of long-term daily high-resolution gridded meteorological data based on deep learning (딥러닝에 기반한 우리나라 장기간 일 단위 고해상도 격자형 기상자료 생산)

  • Yookyung Jeong;Kyuhyun Byu
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.198-198
    • /
    • 2023
  • To establish water resource plans within a basin efficiently, both long-term hydrological modeling and analysis of the hydrological impact of climate change under future climate scenarios are important. This requires the production of high-quality, high-resolution gridded meteorological data based on observations. However, Korea's high-density observation network, composed of the Automated Synoptic Observing System (ASOS) and the Automatic Weather System (AWS), has only been available since 2000, so long-term gridded meteorological data are lacking. To compensate, this study takes a hypothetical premise and aims to produce the long-term, daily, high-resolution gridded meteorological data that could have been generated if the same high-density observation network had existed before 2000. Specifically, gridded meteorological data for the recent and past periods, split at the year 2000, are modeled with a deep learning algorithm to reconstruct the spatial variability and characteristics of the meteorological variables (daily temperature and precipitation) for the past period. To produce the gridded data, K-PRISM, an interpolation method that quantifies the influence of meteorological factors based on the elevation of Korea, is applied to generate two gridded datasets, one from the high-density observation network and one from a low-density network. The low-density dataset is used as input and the high-density dataset as output to train a Long Short-Term Memory (LSTM) model for each grid point, with parallel processing on multiple graphics processing units (GPUs) keeping the computation cost-efficient. Finally, the low-density gridded data for 1973-1999 are fed into the models to produce high-density gridded data for that period. Most of the developed prediction models show NSE values above 0.9. The models developed in this study therefore produce high-quality long-term meteorological data efficiently and accurately, and the data can serve as important input for future analysis of long-term climate trends and variability.
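The NSE figure quoted above is the Nash-Sutcliffe efficiency, computable from observed and simulated series as follows. This is the standard definition, not code from the study:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the model
    predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var
```

An NSE above 0.9, as most of the study's models report, means the squared prediction error is under 10% of the variance of the observations.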


Feasibility Study on Double Path Capacitive Deionization Process for Advanced Wastewater Treatment (이단유로 축전식 탈염공정의 하수고도처리 적용가능성 평가)

  • Cha, Jaehwan;Shin, Kyung-Sook;Lee, Jung-Chul;Park, Seung-Kook;Park, Nam-Su;Song, Eui-Yeol
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.36 no.4
    • /
    • pp.295-302
    • /
    • 2014
  • This study demonstrates a double-path CDI as an alternative advanced wastewater treatment process. While a CDI module typically consists of many pairs of electrodes connected in parallel, the new double-path CDI is designed to have a series flow path by dividing the module into two stages. A CFD model showed that the double path had a uniform flow distribution with higher velocity and less dead zone than the single path. However, the double path was predicted to have a higher pressure drop (0.7 bar) than the single path (0.4 bar). In the unit-cell test, the highest TDS removal efficiencies of the single and double paths were up to 88% and 91%, respectively. The rate of increase in pressure drop with increasing flow rate was higher for the double path than for the single path. At a flow rate of 70 mL/min, the pressure drop of the double path was 1.67 bar, two times higher than that of the single path. When the electrode spacing was increased from 100 to 200 μm, the pressure drop of the double path decreased from 1.67 to 0.87 bar, while there was little difference in TDS removal. When a prototype double-path CDI was operated on sewage water, the TDS, NH₄⁺-N, NO₃⁻-N, and PO₄³⁻-P removal efficiencies were up to 78%, 50%, 93%, and 50%, respectively.

On Developing the Intelligent Control System of a Robot Manipulator by Fusion of Fuzzy Logic and Neural Network (퍼지논리와 신경망 융합에 의한 로보트매니퓰레이터의 지능형제어 시스템 개발)

  • 김용호;전홍태
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.52-64
    • /
    • 1995
  • A robot manipulator is a highly nonlinear, time-varying system, and a lot of control theory has therefore been applied to it. Robot manipulator control has two parts: path planning and path tracking. In this paper we address path tracking and, for this purpose, propose an intelligent controller that combines fuzzy logic and a neural network. Fuzzy logic provides an inference morphology that enables approximate human reasoning to be applied to knowledge-based systems, and also provides the mathematical strength to capture the uncertainties associated with human cognitive processes such as thinking and reasoning. Based on this fuzzy logic, a fuzzy logic controller (FLC) provides a means of converting a linguistic control strategy based on expert knowledge into an automatic control strategy. But constructing a rule base for a nonlinear, time-varying system such as a robot becomes much more complicated because of model uncertainty and parameter variations. To cope with these problems, an auto-tuning method for the fuzzy rule base is required. In this paper, a GA-based fuzzy-neural control system combining fuzzy-neural control theory with the genetic algorithm (GA), which is known to be very effective for optimization problems, is proposed. The effectiveness of the proposed control system is demonstrated by computer simulations using a two-degree-of-freedom robot manipulator.
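The GA-based tuning idea can be sketched on a toy problem: a genetic algorithm evolving the gains of a crude two-rule controller (one gain for small errors, one for large) tracking a first-order plant. The plant, the two-rule controller, and all GA parameters are hypothetical stand-ins for the paper's fuzzy-neural scheme:

```python
import random

def track_cost(k_small, k_big, dt=0.05, steps=100):
    """Integrated squared tracking error of a crude two-rule controller on a
    toy first-order plant x' = -x + u, driven toward a setpoint of 1.0."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x
        # Two-rule policy: a separate gain for the small-error region.
        u = (k_small if abs(e) < 0.5 else k_big) * e
        x += dt * (-x + u)          # forward-Euler plant step
        cost += e * e * dt
    return cost

def tune(pop_size=20, generations=30, seed=0):
    """Minimal elitist GA: evolve the two gains to minimize tracking cost."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 5), rng.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: track_cost(*g))
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # Crossover by averaging, plus small Gaussian mutation.
            children.append(tuple(max(0.0, (x + y) / 2 + rng.gauss(0, 0.2))
                                  for x, y in zip(a, b)))
        pop = parents + children
    return min(pop, key=lambda g: track_cost(*g))
```

Because the fitter half survives every generation, the best cost is non-increasing; the evolved gains comfortably beat a fixed unit gain on this toy plant.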
