• Title/Summary/Keyword: Parallel computation


Distributed Assumption-Based Truth Maintenance System for Scalable Reasoning (대용량 추론을 위한 분산환경에서의 가정기반진리관리시스템)

  • Jagvaral, Batselem;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.43 no.10
    • /
    • pp.1115-1123
    • /
    • 2016
  • An assumption-based truth maintenance system (ATMS) is a tool that maintains the reasoning process of an inference engine. It also supports non-monotonic reasoning based on dependency-directed backtracking. By bookkeeping all reasoning processes, it can quickly check and retract beliefs and efficiently provide solutions for problems with large search spaces. However, the amount of data has grown exponentially in recent years, making it impossible to solve large-scale problems on a single machine. The maintenance process for such problems incurs high computational cost due to large memory overhead. To overcome this drawback, this paper presents an approach to incrementally maintaining the reasoning process of an inference engine on a cluster using Spark. It maintains data dependencies such as assumptions, labels, environments, and justifications in parallel across a cluster of machines and efficiently updates changes in a large volume of inferred data. We deployed the proposed ATMS on a cluster of five machines, conducted OWL/RDFS reasoning over the Lehigh University Benchmark (LUBM) data, and evaluated our system in terms of its performance and functionalities such as assertion, explanation, and retraction. In our experiments, the proposed system performed these operations in a reasonably short time over an 80 GB inferred LUBM2000 dataset.
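As a rough illustration of the distributed bookkeeping described above, the sketch below keeps justifications in a Spark RDD and recomputes the environments (minimal sets of assumptions) under which each derived node holds. All names (`justifications`, `fire`, the fixpoint loop) are hypothetical; the paper's actual data layout and incremental update strategy are not reproduced here.

```python
# A minimal sketch, assuming PySpark; not the paper's actual implementation.
# Justifications are (antecedents, consequent) pairs; an environment is a
# frozenset of assumptions under which a node holds.
from pyspark import SparkContext

sc = SparkContext(appName="atms-sketch")

# Hypothetical justification set: A, B are assumptions; n1, n2 derived nodes.
justifications = sc.parallelize([
    (("A", "B"), "n1"),   # A, B |- n1
    (("n1",), "n2"),      # n1   |- n2
])

# Label table: node -> set of environments.
labels = {"A": {frozenset(["A"])}, "B": {frozenset(["B"])}}

def fire(just, labels):
    """Combine antecedent environments into environments for the consequent."""
    antecedents, consequent = just
    envs = {frozenset()}
    for a in antecedents:
        if a not in labels:
            return []
        envs = {e | f for e in envs for f in labels[a]}
    return [(consequent, e) for e in envs]

# Iterate to a fixpoint, broadcasting the current labels each round.
changed = True
while changed:
    b = sc.broadcast(labels)
    derived = (justifications
               .flatMap(lambda j: fire(j, b.value))
               .groupByKey()
               .mapValues(set)
               .collectAsMap())
    changed = False
    for node, envs in derived.items():
        if envs - labels.get(node, set()):
            labels.setdefault(node, set()).update(envs)
            changed = True

print(labels)  # n1 and n2 hold under the environment {A, B}
```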

Spherical Slepian Harmonic Expression of the Crustal Magnetic Vector and Its Gradient Components (구면 스레피안 함수로 표현된 지각 자기이상값과 구배 성분)

  • Kim, Hyung Rae
    • Economic and Environmental Geology
    • /
    • v.49 no.4
    • /
    • pp.269-280
    • /
    • 2016
  • I present three crustal magnetic anomaly vector components and six gradient components using spherical Slepian functions over a cap of $20^{\circ}$ radius centered on the South Pole. The Swarm mission, launched by the European Space Agency (ESA) in November 2013, placed three satellites into low-Earth orbits: two flying in parallel in the East-West direction and one crossing over at a higher altitude. This orbit configuration makes gradient measurements possible in the North-South and vertical directions as well as in the East-West direction. Gravity satellites such as GRACE and GOCE have already exploited gradient measurements to recover the Earth's gravity field accurately, along with its temporal variation due to subsurface mass changes; magnetic gradients, however, have seen little use since Swarm's launch. A localized magnetic modeling method is useful for regions where data availability is limited or which are of special interest. In particular, computing the localized solutions is much more efficient, and they can represent high-frequency anomaly features with fewer coefficients than global solutions. Moreover, these localized basis functions, obtained by a linear transformation of the spherical harmonic functions, are orthogonal, so they can be used for power spectrum analysis by transforming the global spherical harmonic coefficients. I anticipate scientific and technical progress in localized modeling with the gradient measurements from Swarm, and here I discuss the localized solutions representing the three vector and six gradient anomaly components over the Antarctic area, computed from synthetic data derived from a global spherical harmonic solution for the crustal magnetic anomalies of Swarm measurements.
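For orientation, the sketch below solves the classic spherical-cap concentration problem for zonal (order-0) Slepian functions: the localization kernel is built in spherical harmonic space, and its eigenvectors give the linear transformation from global harmonics to localized basis functions. The bandwidth and quadrature order are illustrative assumptions; this is the generic construction, not the paper's exact procedure.

```python
# A minimal sketch (assumed bandwidth, generic construction): zonal (m = 0)
# spherical Slepian functions concentrated in a polar cap, obtained as
# eigenvectors of the spherical-harmonic-domain localization kernel D.
import numpy as np
from scipy.special import eval_legendre

L = 30                     # bandwidth (max spherical harmonic degree), assumed
theta0 = np.radians(20.0)  # cap radius, matching the paper's 20-degree cap

# Gauss-Legendre quadrature on [cos(theta0), 1].
x, w = np.polynomial.legendre.leggauss(200)
a, b = np.cos(theta0), 1.0
x = 0.5 * (b - a) * x + 0.5 * (b + a)
w = 0.5 * (b - a) * w

# Kernel D[l, l'] = integral over the cap of Y_l0 * Y_l'0 dOmega,
# using orthonormal harmonics Y_l0 = sqrt((2l+1)/(4*pi)) * P_l(cos(theta)).
P = np.array([eval_legendre(l, x) for l in range(L + 1)])
norm = np.sqrt(2.0 * np.arange(L + 1) + 1.0)
D = 0.5 * np.outer(norm, norm) * (P * w) @ P.T

# Eigenvectors = Slepian coefficient vectors; eigenvalues = energy
# concentration ratios inside the cap (near 1 for well-localized functions).
lam, g = np.linalg.eigh(D)
lam, g = lam[::-1], g[:, ::-1]          # sort best-concentrated first
print("top concentration ratios:", np.round(lam[:5], 4))

# Shannon number: expected count of well-concentrated basis functions.
print("Shannon number:", round(np.trace(D), 2))
```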

Improvement of Address Pointer Assignment in DSP Code Generation (DSP용 코드 생성에서 주소 포인터 할당 성능 향상 기법)

  • Lee, Hee-Jin;Lee, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.1
    • /
    • pp.37-47
    • /
    • 2008
  • Exploiting the address generation units typically provided in DSPs plays an important role in DSP code generation, since they perform fast address computation in parallel with the central data path. Offset assignment is the optimization of the memory layout of program variables that takes advantage of the address generation units; it consists of memory layout generation and address pointer assignment steps. In this paper, we propose an effective address pointer assignment method to minimize the number of address calculation instructions in DSP code generation. The proposed approach reduces the time complexity of a conventional address pointer assignment algorithm with fixed memory layouts by breaking minimum-cost nodes. To reduce memory size and processing time, we employ a powerful pruning technique. Moreover, our approach improves the initial solution iteratively by changing the memory layout in each iteration, because the memory layout affects the result of the address pointer assignment algorithm. We applied the proposed approach to about 3,000 access sequences from the OffsetStone benchmarks to demonstrate its effectiveness. Experimental results on these benchmarks show an average improvement of 25.9% in address code over previous work.
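To make the cost model concrete, the sketch below counts the explicit address arithmetic instructions needed to serve a variable access sequence with one address register that supports free post-increment/decrement by 1, the usual setting in offset assignment work. The layout and sequence are made-up examples, not OffsetStone data.

```python
# A minimal sketch (hypothetical example data): cost of serving a variable
# access sequence with a single address register whose post-increment and
# post-decrement by 1 are free; any other move costs one instruction.
def address_cost(layout, access_seq):
    pos = {v: i for i, v in enumerate(layout)}  # variable -> memory offset
    cost, ar = 0, None                          # ar = current register value
    for v in access_seq:
        if ar is not None and abs(pos[v] - ar) > 1:
            cost += 1                           # explicit address load/add
        ar = pos[v]
    return cost

seq = ["a", "b", "a", "c", "d", "c", "b"]
print(address_cost(["a", "b", "c", "d"], seq))  # good layout: cost 1
print(address_cost(["b", "d", "a", "c"], seq))  # worse layout: cost 5
```

This is why the memory layout feeds back into pointer assignment: the same access sequence costs more or fewer instructions depending purely on where the variables are placed.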

Real-time Watermarking Algorithm using Multiresolution Statistics for DWT Image Compressor (DWT기반 영상 압축기의 다해상도의 통계적 특성을 이용한 실시간 워터마킹 알고리즘)

  • 최순영;서영호;유지상;김대경;김동욱
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.13 no.6
    • /
    • pp.33-43
    • /
    • 2003
  • In this paper, we propose a real-time watermarking algorithm that is combined with, and works alongside, a DWT (Discrete Wavelet Transform)-based image compressor. To reduce the amount of computation in selecting the watermarking positions, the proposed algorithm uses a pre-established look-up table of critical values, built statistically by computing the correlation according to the energy values of the corresponding wavelet coefficients. That is, the watermark is embedded into the coefficients whose values are greater than the critical value from the look-up table, which is searched on the basis of the energy values of the corresponding level-1 subband coefficients. The proposed algorithm can therefore operate in real time, because the watermarking proceeds in parallel with compression without affecting the operation of the image compressor. It also mitigates the watermark loss caused by quantization and Huffman coding during image compression, and reduces the impact of watermark insertion on compression efficiency. Visually recognizable patterns such as binary images were used as watermarks. The experimental results showed that the proposed algorithm satisfies robustness and imperceptibility, the major requirements of watermarking.
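The sketch below illustrates the general embedding scheme described above with PyWavelets: a level-1 DWT, a subband energy measure used to derive a threshold, and watermark bits added to coefficients above that threshold. The threshold rule and strength `alpha` are invented placeholders, not the statistically derived look-up table from the paper.

```python
# A minimal sketch (assumed threshold rule and strength), using PyWavelets:
# embed watermark bits into level-1 DWT detail coefficients whose magnitude
# exceeds an energy-dependent critical value.
import numpy as np
import pywt

def embed(image, wm_bits, alpha=2.0):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    energy = np.mean(cH**2 + cV**2 + cD**2)
    # Hypothetical stand-in for the paper's statistically built look-up
    # table: derive a critical value from the level-1 subband energy.
    critical = 0.5 * np.sqrt(energy)
    flat = cH.ravel()                        # view into cH
    idx = np.flatnonzero(np.abs(flat) > critical)[: wm_bits.size]
    flat[idx] += alpha * (2.0 * wm_bits[: idx.size] - 1.0)  # +/- alpha per bit
    return pywt.idwt2((cA, (cH, cV, cD)), "haar"), idx

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
bits = rng.integers(0, 2, 128)               # binary watermark pattern
marked, positions = embed(img, bits)
print(marked.shape, positions.size)
```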

A new warp scheduling technique for improving the performance of GPUs by utilizing MSHR information (GPU 성능 향상을 위한 MSHR 정보 기반 워프 스케줄링 기법)

  • Kim, Gwang Bok;Kim, Jong Myon;Kim, Cheol Hong
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.13 no.3
    • /
    • pp.72-83
    • /
    • 2017
  • GPUs can provide high throughput by hiding latency through executing many warps in parallel. The MSHRs (Miss Status Holding Registers) of the L1 data cache track cache miss requests until the required data is serviced from lower-level memory. In recent GPUs, excessive requests for cache resources cause underutilization of GPU resources due to cache resource reservation failures. In this paper, we propose a new warp scheduling technique to reduce stall cycles under MSHR resource shortage. The cache miss rate of each warp is predicted based on the observation that a warp shows similar cache miss rates over long periods. Warps with low predicted miss rates, or computation-intensive warps, are given high issue priority when the MSHRs are full. Our proposal improves GPU performance by using cache resources more efficiently, based on cache miss rate prediction and monitoring of the MSHR entries. According to our experimental results, reservation fail cycles are reduced by 25.7% and IPC is increased by 6.2% with the proposed scheduling technique compared to a loose round-robin scheduler.
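The selection logic can be sketched as follows: keep a running miss-rate estimate per warp and, whenever the MSHRs are full, prefer warps predicted to miss least. The structures below (the EWMA estimator, the `mshr_full` flag) are illustrative simplifications of the mechanism in the abstract, not the authors' exact design.

```python
# A minimal sketch (illustrative structures only): issue priority that falls
# back to predicted-low-miss-rate warps when the MSHRs are full.
class WarpScheduler:
    def __init__(self, num_warps, decay=0.9):
        self.miss_rate = [0.0] * num_warps  # per-warp EWMA miss-rate estimate
        self.decay = decay
        self.rr = 0                         # round-robin pointer

    def record_access(self, warp, missed):
        # Exponentially weighted moving average of observed misses.
        m = self.miss_rate[warp]
        self.miss_rate[warp] = self.decay * m + (1 - self.decay) * float(missed)

    def pick(self, ready_warps, mshr_full):
        if mshr_full:
            # Prioritize warps least likely to allocate a new MSHR entry.
            return min(ready_warps, key=lambda w: self.miss_rate[w])
        # Otherwise, loose round-robin among ready warps.
        self.rr = (self.rr + 1) % len(self.miss_rate)
        for off in range(len(self.miss_rate)):
            w = (self.rr + off) % len(self.miss_rate)
            if w in ready_warps:
                return w

sched = WarpScheduler(num_warps=4)
sched.record_access(0, missed=True)
sched.record_access(1, missed=False)
print(sched.pick({0, 1}, mshr_full=True))   # -> 1 (lower predicted miss rate)
```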

Dose Verification Study of Brachytherapy Plans Using Monte Carlo Methods and CT Images (CT 영상 및 몬테칼로 계산에 기반한 근접 방사선치료계획의 선량분포 평가 방법 연구)

  • Cheong, Kwang-Ho;Lee, Me-Yeon;Kang, Sei-Kwon;Bae, Hoon-Sik;Park, So-Ah;Kim, Kyoung-Joo;Hwang, Tae-Jin;Oh, Do-Hoon
    • Progress in Medical Physics
    • /
    • v.21 no.3
    • /
    • pp.253-260
    • /
    • 2010
  • Most brachytherapy treatment planning systems employ a dosimetry formalism based on the AAPM TG-43 report, which does not appropriately consider tissue heterogeneity. In this study we aimed to set up a simple Monte Carlo-based intracavitary high-dose-rate brachytherapy (IC-HDRB) plan verification platform, focusing particularly on the robustness of direct Monte Carlo dose calculation using material and density information derived from CT images. CT images of slab phantoms and a uterine cervical cancer patient were used for brachytherapy plans based on the Plato (Nucletron, Netherlands) brachytherapy planning system. Monte Carlo simulations were implemented using the parameters from the Plato system and compared with EBT film dosimetry and conventional dose computations. The EGSnrc-based DOSXYZnrc code was used for the Monte Carlo simulations. Each $^{192}Ir$ source of the afterloader was approximately modeled as a parallelepiped inside the converted CT data set, whose voxel size was $2{\times}2{\times}2\;mm^3$. Brachytherapy dose calculations based on TG-43 showed good agreement with the Monte Carlo results in a homogeneous medium whose density was close to water, but there were significant errors in high-density materials. For the patient case, point A and point B dose differences were less than 3%, while the mean dose discrepancy was as much as 5%. Conventional dose computation methods might underdose the targets by not accounting for the effects of high-density materials. The proposed platform was shown to be feasible and to have good dose calculation accuracy. One should be careful when confirming a plan with a conventional brachytherapy dose computation method; an independent dose verification system such as the one developed in this study might be helpful.
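A common ingredient of such CT-based Monte Carlo pipelines is the conversion of CT numbers to the material and mass density assigned to each voxel. The sketch below shows a generic piecewise-linear HU-to-density ramp with a few material bins; the bin edges and densities are textbook-style placeholders, not the calibration used in the paper.

```python
# A minimal sketch (placeholder calibration, not the paper's): map CT
# Hounsfield units to (material, mass density) per voxel for a Monte Carlo
# dose engine such as DOSXYZnrc.
import numpy as np

# Hypothetical material bins: (upper HU bound, material name).
BINS = [(-950, "AIR"), (-100, "LUNG"), (100, "SOFT_TISSUE"), (3000, "BONE")]

def hu_to_material(hu):
    for upper, name in BINS:
        if hu <= upper:
            return name
    return "BONE"

def hu_to_density(hu):
    # Simple two-segment linear ramp (g/cm^3), illustrative only.
    hu = np.clip(hu, -1000, 3000)
    return np.where(hu < 0, 1.0 + hu / 1000.0, 1.0 + hu / 1500.0)

ct = np.array([[-1000, -800, 0], [40, 300, 1200]])
print(np.vectorize(hu_to_material)(ct))
print(np.round(hu_to_density(ct), 3))
```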

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN DO E and DO F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN DO E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single or a few dedicated programs, so tailoring an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches, even with a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                   MIPS R3000 (regular)   MIPS R3000 (min/max)   ASIC
  6000 inferences  125 s                  49 s                   0.0038 s
  1 inference      20.8 ms                8.2 ms                 6.4 µs
  FLIPS            48                     122                    156,250
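As a software reference point for the max-min mechanism the chips implement, the sketch below runs Mamdani inference over discretized fuzzy sets (64-element arrays, matching the benchmark above) with centroid defuzzification. It is a plain NumPy illustration with an invented rule base, not the ASIC simulator mentioned in the talk.

```python
# A minimal sketch (illustrative rule base): max-min Mamdani inference over
# fuzzy sets discretized as 64-element arrays, with centroid defuzzification.
import numpy as np

N = 64
x = np.linspace(0.0, 1.0, N)

def tri(center, width=0.25):
    """Triangular membership function discretized over x."""
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

# Rule base: IF A and B THEN DO E (two antecedent sets, one consequent set).
rules = [
    (tri(0.2), tri(0.3), tri(0.2)),
    (tri(0.7), tri(0.8), tri(0.9)),
]

def infer(in1, in2):
    agg = np.zeros(N)
    for a, b, e in rules:
        # Firing strength: min over the two matched inputs (fuzzy AND).
        w = min(np.max(np.minimum(a, in1)), np.max(np.minimum(b, in2)))
        # Clip the consequent (Mamdani implication) and aggregate by max.
        agg = np.maximum(agg, np.minimum(w, e))
    return np.sum(x * agg) / np.sum(agg)   # centroid defuzzification

print(round(infer(tri(0.25), tri(0.35)), 3))  # rule 1 dominates: output near 0.2
```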


Development and evaluation of a 2-dimensional land surface flood analysis model using uniform square grid (정형 사각 격자 기반의 2차원 지표면 침수해석 모형 개발 및 평가)

  • Choi, Yun-Seok;Kim, Joo-Hun;Choi, Cheon-Kyu;Kim, Kyung-Tak
    • Journal of Korea Water Resources Association
    • /
    • v.52 no.5
    • /
    • pp.361-372
    • /
    • 2019
  • The purpose of this study is to develop a two-dimensional land surface flood analysis model based on a uniform square grid, using governing equations that omit the convective acceleration term of the momentum equation. The finite volume method and an implicit method were applied for spatial and temporal discretization, and CPU-based parallel computation techniques were applied to reduce the execution time of the model. To verify the developed model, it was compared with an analytical solution, and its behavior was evaluated through numerical experiments on a virtual domain. In addition, inundation analyses were performed at different spatial resolutions for the Janghowon area in Korea and the Sebou River area in Morocco, and the results were compared with those of the CAESAR-LISFLOOD (CLF) model. In the model verification, the simulation results matched the analytical solution well, and the flow analyses in the virtual domain were also judged reasonable. The inundation simulations of the Janghowon and Sebou River areas by this study and by the CLF model were similar to each other, and for the Janghowon area the simulated result was also similar to the flooded area in the flood hazard map. The parts where the two models differed were compared and evaluated for each case. These results suggest that the proposed model can simulate flooding in a floodplain well. However, when performing flood analysis with the model presented in this study, its characteristics and limitations with respect to domain composition, governing equations, and numerical method should be fully considered.
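Dropping the convective acceleration term yields the local inertial (simplified momentum) formulation widely used in raster flood models such as LISFLOOD-FP; the sketch below updates flow and depth on a uniform grid in that spirit. Note that the paper uses an implicit finite volume scheme in two dimensions, so this explicit one-dimensional toy (a Bates-type update with assumed parameters) only illustrates the governing-equation simplification, not the authors' solver.

```python
# A minimal sketch (explicit, 1-D toy; the paper's solver is implicit,
# finite volume, 2-D): local inertial update without convective acceleration,
#   q_new = (q - g*h*dt*dS) / (1 + g*dt*n^2*|q| / h^(7/3)),
# followed by mass conservation for the water depth h.
import numpy as np

g, n = 9.81, 0.03              # gravity, Manning roughness (assumed)
dx, dt = 10.0, 0.5             # grid spacing [m], time step [s]
z = np.zeros(50)               # flat bed elevation
h = np.zeros(50); h[0] = 1.0   # initial water column at the left edge
q = np.zeros(49)               # unit discharge at cell interfaces

for step in range(2000):
    eta = z + h                            # water surface elevation
    hflow = np.maximum(np.maximum(eta[:-1], eta[1:])
                       - np.maximum(z[:-1], z[1:]), 0.0)
    dS = (eta[1:] - eta[:-1]) / dx         # surface slope
    wet = hflow > 1e-6
    q[wet] = (q[wet] - g * hflow[wet] * dt * dS[wet]) / \
             (1.0 + g * dt * n**2 * np.abs(q[wet]) / hflow[wet]**(7.0 / 3.0))
    q[~wet] = 0.0
    h[:-1] -= dt * q / dx                  # continuity: flux out of left cell
    h[1:] += dt * q / dx                   # ... into the right cell
    h = np.maximum(h, 0.0)

print(round(h.sum(), 3))  # total volume is conserved (up to clipping)
```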

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used in the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors of each color are measured, and the result is transferred to the CPU to recognize the vehicle color. The output is categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms of hue-saturation pairs and hue-intensity pairs, with a weighting factor applied to the saturation values. Our algorithm achieves a color recognition rate of 94.67% by using a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors, and by using an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time for generating a feature vector is 0.509 ms for a $150{\times}113$ resolution image. Once the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, which is 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to input images of general objects.
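The histogram features can be sketched as follows: hue-saturation and hue-intensity joint histograms with saturation weighting, compared against per-color references by a simple similarity score. The bin counts, weighting, and scoring rule are assumptions for illustration, and plain NumPy stands in for the GPU comparison.

```python
# A minimal sketch (assumed bin counts and weighting; NumPy stands in for
# the GPU): hue-saturation / hue-intensity histogram features compared
# against per-color reference vectors.
import numpy as np

BINS = 16

def feature_vector(hsv, sat_weight=2.0):
    h, s, v = hsv[..., 0].ravel(), hsv[..., 1].ravel(), hsv[..., 2].ravel()
    # Saturation-weighted hue-saturation histogram plus hue-intensity histogram.
    hs, _, _ = np.histogram2d(h, s, bins=BINS, range=[[0, 1], [0, 1]],
                              weights=1.0 + sat_weight * s)
    hv, _, _ = np.histogram2d(h, v, bins=BINS, range=[[0, 1], [0, 1]])
    f = np.concatenate([hs.ravel(), hv.ravel()])
    return f / f.sum()

def recognize(feature, references):
    # Similarity score: histogram intersection against each color reference.
    scores = {c: np.minimum(feature, r).sum() for c, r in references.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
refs = {c: feature_vector(rng.uniform(size=(32, 32, 3)))
        for c in ["black", "silver", "white", "red", "yellow", "blue", "green"]}
img = rng.uniform(size=(32, 32, 3))      # stand-in for an HSV vehicle image
print(recognize(feature_vector(img), refs))
```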

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to improve the speed of the models that estimate weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) for the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Nonetheless, the server is not dedicated to handling even a single county, so very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. To reduce this overhead, several caching and parallelization techniques were applied, their performance was measured, and their applicability was checked. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than the computation time, suggesting the need for techniques that reduce disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers, but each server must have its own cache of the input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computational loads, such as PRISM.
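Points (1) and (2) amount to caching grid inputs so repeated county-level runs do not re-read the same rasters from disk; a minimal in-process version is sketched below. The file layout and `load_grid` reader are hypothetical, and a real deployment would place such a cache on each worker server, as the paper suggests.

```python
# A minimal sketch (hypothetical file layout and reader): cache daily weather
# grids in memory so per-county calculations do not repeat disk I/O.
from functools import lru_cache
import numpy as np

@lru_cache(maxsize=365)
def load_grid(date, variable):
    # Stand-in for reading a high-resolution raster from disk.
    return np.load(f"grids/{variable}_{date}.npy")

def growing_degree_days(dates, base=10.0):
    """Accumulate GDD over a period; each grid is read from disk only once."""
    gdd = 0.0
    for d in dates:
        tmean = 0.5 * (load_grid(d, "tmin") + load_grid(d, "tmax"))
        gdd = gdd + np.maximum(tmean - base, 0.0)
    return gdd

# Ten counties reuse the same cached grids instead of re-reading them:
# for county in counties:
#     county_gdd = growing_degree_days(dates)[county.mask]
```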