• Title/Summary/Keyword: Complexity of Computation

Real-time Fluid Animation using Particle Dynamics Simulation and Pre-integrated Volume Rendering (입자 동역학 시뮬레이션과 선적분 볼륨 렌더링을 이용한 실시간 유체 애니메이션)

  • Lee Jeongjin;Kang Moon Koo;Kim Dongho;Shin Yeong Gil
    • Journal of KIISE: Computer Systems and Theory / v.32 no.1 / pp.29-38 / 2005
  • The fluid animation procedure consists of physical simulation and visual rendering. In the physical simulation of fluids, the most common practices are the numerical simulation of fluid particles using particle dynamics equations and the continuum analysis of flow via the Navier-Stokes equations. The particle dynamics method is computationally fast, but the resulting fluid motion can be unrealistic. The method using the Navier-Stokes equations, on the contrary, yields lifelike fluid motion when properly conditioned, yet its computational complexity prevents it from being used in real-time applications. Global illumination generally produces premium-quality rendered images, but it is also too slow for real-time use. In this paper, we propose a rapid fluid animation method incorporating an enhanced particle dynamics simulation and a pre-integrated volume rendering technique. The particle dynamics simulation of fluid flow was conducted in real time using the Lennard-Jones model, and the computational efficiency was enhanced so that a small number of particles can represent a significant volume. For real-time rendering, the pre-integrated volume rendering method was used so that fewer slices suffice to construct seamless inter-slice shading. The proposed method simulates and renders fluid motion in real time at an acceptable speed and visual quality.
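
As a concrete illustration of the particle-dynamics side, the sketch below computes Lennard-Jones pair forces and advances particles with a symplectic Euler step. It is a minimal sketch, not the paper's implementation; the values of epsilon, sigma, the cutoff radius, and the time step are illustrative assumptions.

```python
import numpy as np

def lennard_jones_forces(pos, epsilon=1.0, sigma=1.0, cutoff=2.5):
    """Pairwise Lennard-Jones forces for N particles (O(N^2) reference version).

    pos: (N, 3) array of particle positions. Returns an (N, 3) force array.
    """
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = float(np.dot(rij, rij))
            if r2 == 0.0 or r2 > cutoff * cutoff:
                continue
            s2 = sigma * sigma / r2      # (sigma/r)^2
            s6 = s2 ** 3                 # (sigma/r)^6
            # -dV/dr along rij, with V(r) = 4*eps*((s/r)^12 - (s/r)^6):
            # F = 24*eps*(2*(s/r)^12 - (s/r)^6) / r^2 * rij
            f = 24.0 * epsilon * (2.0 * s6 * s6 - s6) / r2
            forces[i] += f * rij
            forces[j] -= f * rij
    return forces

def step(pos, vel, dt=0.005, mass=1.0):
    """One symplectic Euler step: update velocities from forces, then positions."""
    vel = vel + dt * lennard_jones_forces(pos) / mass
    return pos + dt * vel, vel
```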

A Comparison Study with the Variation of Isocenter and Collimator in Stereotactic Radiosurgery (방사선 수술시 Isocenter, 콜리메이터 변수에 따른 선량 분포 비교연구)

  • 오승종;박정훈;곽철은;이형구;최보영;이태규;김문찬;서태석
    • Progress in Medical Physics / v.13 no.3 / pp.129-134 / 2002
  • In stereotactic radiosurgery, treatment is planned so that the prescribed dose is delivered to the tumor to obtain the expected therapeutic effect. Planning for irregular tumor shapes requires long computation times and skilled planners. Although the rapid growth of computing power has led to many proposed computer-based optimization methods, the practical method is still trial-and-error planning. In this study, various beam variables were considered, and tumor shapes were assumed to be ideal cylindrical models. Beam variables that covered the target within the 50% isodose curve were then searched, and the results were compared and analyzed. The beam variables considered were the isocenter separation distance, the number of isocenters, and the collimator size. The dose distributions obtained with these variables were analyzed using dose-volume histograms (DVH) and dose profiles on orthogonal planes. According to the results, using more isocenters than specified does not improve the DVH or dose profile but only increases the complexity of the plan. The best DVH and dose profile were obtained when the isocenter separation was 1.0-1.2 for the same number of isocenters.
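
Since the comparison rests on dose-volume histograms, a cumulative DVH is easy to state concretely. The sketch below is a generic illustration, assuming a 3D dose grid and a boolean target mask; it is not the planning system used in the paper, and the example arrays are synthetic.

```python
import numpy as np

def cumulative_dvh(dose, target_mask, bins=100):
    """Cumulative DVH: for each dose level, the fraction of the target volume
    receiving at least that dose.

    dose: 3D array of dose values (arbitrary units).
    target_mask: boolean array of the same shape selecting target voxels.
    """
    d = dose[target_mask].ravel()
    levels = np.linspace(0.0, d.max(), bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

# Example: fraction of the target covered by the 50% isodose level, the
# coverage criterion the study uses to compare isocenter/collimator settings.
rng = np.random.default_rng(0)
dose = rng.random((32, 32, 32))
mask = np.zeros_like(dose, dtype=bool)
mask[12:20, 12:20, 12:20] = True
levels, vf = cumulative_dvh(dose, mask)
coverage_50 = (dose[mask] >= 0.5 * dose.max()).mean()
```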

Development of an Expert System for Selecting Venture Companies for Investment (투자대상 벤처기업의 선정을 위한 전문가시스템 개발)

  • 김성근;김지혜
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.10a / pp.139-148 / 1999
  • Today, technology-intensive venture companies are attracting intense interest, because a small number of enterprising ventures have shown outstanding performance in technology development and new product launches. In practice, however, the probability of success for these ventures is not high, and the risk is even greater in Korea, where the venture environment is still immature. In such an environment, selecting venture companies for investment is a highly strategic decision. Venture investors generally decide whether to invest based on the business plans and basic information of companies in the industries they are interested in. In reality, however, the information required for this analysis is uncertain, and investors often lack specialized knowledge of the technology fields involved, so the investment decision is a very complex and difficult problem. A systematic approach that effectively supports the selection of venture companies for investment is therefore needed. In particular, expert knowledge and experience concerning the technology trends and levels relevant to the venture business must be provided systematically, and the investor's personal experience and judgment must be directly reflected in the evaluation process. This study presents an expert system approach that organizes expert knowledge and experience and effectively accommodates the investor's personal judgment. To build the expert system, we went through various information-gathering processes: we analyzed the existing literature on venture investment in depth, and through several interviews with professional venture capitalists we identified the key factors and decision processes used in evaluating venture companies. Through this process, we are building an expert system for selecting investment-target venture companies in the information and telecommunications sector, which accounts for 90% of venture investment.

A Distributed Web-DSS Approach for Coordinating Interdepartmental Decisions - Emphasis on Production and Marketing Decision (부서간 의사결정 조정을 위한 분산 웹 의사결정지원시스템에 관한 연구)

  • 이건창;조형래;김진성
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.10a / pp.291-300 / 1999
  • To adapt to a business environment shaped by the rapid development of Internet-based information and communication technology, companies are not only moving their entire management systems onto the Internet but are also evolving into globally distributed organizations. This rapid change in the management environment has created a need for new forms of interdepartmental decision coordination. While many studies have addressed decision coordination in conventional firms, research on decision support for network-type organizations such as global enterprises has mostly been limited to simple group decision support systems or distributed decision support systems. This study therefore proposes a mechanism that efficiently supports the interdepartmental decision coordination arising from Web-based global and distributed management, and implements a prototype system to verify its performance. In particular, we developed and tested a coordination mechanism for the production and marketing departments, which most typically require mutual decision support. As a result, we propose a Web-based distributed decision support system (Web-DSS) built on an improved PROMISE (PROduction and Marketing Interface Support Environment), a coordination mechanism that efficiently supports mutual decision making between the production and marketing departments of a global enterprise.

Integration of Condensation and Mean-shift algorithms for real-time object tracking (실시간 객체 추적을 위한 Condensation 알고리즘과 Mean-shift 알고리즘의 결합)

  • Cho Sang-Hyun;Kang Hang-Bong
    • The KIPS Transactions: Part B / v.12B no.3 s.99 / pp.273-282 / 2005
  • Real-time object tracking is an important field in developing vision applications such as surveillance systems and vision-based navigation. The mean-shift algorithm and the Condensation algorithm are widely used in robust object tracking systems. Since the mean-shift algorithm is easy to implement and computationally efficient, it is widely used, especially in real-time tracking systems. One of its drawbacks is that it always converges to a local maximum, which may not be the global maximum; in a cluttered environment, therefore, the mean-shift algorithm does not perform well. On the other hand, because it maintains multiple hypotheses, the Condensation algorithm is useful for tracking against a cluttered background. However, it requires a complex object model and many hypotheses, so its computational complexity is high and it is not easy to apply in real-time systems. In this paper, by combining the merits of the Condensation algorithm and the mean-shift algorithm, we propose a new model suitable for real-time tracking. Although it uses only a few hypotheses, the proposed method obtains high-likelihood hypotheses by refining them with the mean-shift algorithm. As a result, we obtain better results than either the Condensation algorithm or the mean-shift algorithm alone.
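
The combination the abstract describes can be sketched compactly: propagate a few hypotheses as in Condensation, pull each one uphill with mean-shift on a likelihood image, then weight and resample. This is a minimal sketch under the assumption that the object likelihood is available as a per-pixel weight image; the window size, diffusion noise, and iteration counts are illustrative, not the paper's settings.

```python
import numpy as np

def mean_shift_refine(weights, x, y, win=15, iters=5):
    """Move (x, y) toward the local maximum of a per-pixel likelihood image."""
    h, w = weights.shape
    for _ in range(iters):
        x0, x1 = max(0, int(x) - win), min(w, int(x) + win + 1)
        y0, y1 = max(0, int(y) - win), min(h, int(y) + win + 1)
        patch = weights[y0:y1, x0:x1]
        total = patch.sum()
        if total <= 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        nx, ny = (xs * patch).sum() / total, (ys * patch).sum() / total
        if max(abs(nx - x), abs(ny - y)) < 0.5:  # converged
            return nx, ny
        x, y = nx, ny
    return x, y

def track_step(weights, particles, noise=5.0, rng=None):
    """One Condensation step with mean-shift refinement of every hypothesis.

    particles: (N, 2) array of (x, y) hypotheses; returns resampled particles
    and the highest-likelihood position estimate.
    """
    rng = rng or np.random.default_rng()
    h, w = weights.shape
    particles = particles + rng.normal(0.0, noise, particles.shape)  # diffusion
    refined = np.array([mean_shift_refine(weights, px, py) for px, py in particles])
    xi = np.clip(refined[:, 0].round().astype(int), 0, w - 1)
    yi = np.clip(refined[:, 1].round().astype(int), 0, h - 1)
    wts = weights[yi, xi] + 1e-12
    wts /= wts.sum()
    estimate = refined[np.argmax(wts)]                    # best hypothesis
    idx = rng.choice(len(refined), len(refined), p=wts)   # resampling
    return refined[idx], estimate
```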

A Proposed Algorithm and Sampling Conditions for Nonlinear Analysis of EEG (뇌파의 비선형 분석을 위한 신호추출조건 및 계산 알고리즘)

  • Shin, Chul-Jin;Lee, Kwang-Ho;Choi, Sung-Ku;Yoon, In-Young
    • Sleep Medicine and Psychophysiology / v.6 no.1 / pp.52-60 / 1999
  • Objectives: With the aim of finding appropriate conditions and algorithms for the dimensional analysis of the human EEG, we calculated correlation dimensions under various sampling rates and data acquisition times, and improved the computation algorithm by using bit operations instead of log operations. Methods: EEG signals from 13 scalp leads of one subject were digitized with an A-D converter at 12-bit resolution and a 1000 Hz sampling rate for 32 seconds. From the original data, we made 15 time series with sampling rates of 62.5, 125, 250, 500, and 1000 Hz and data acquisition times of 10, 20, and 30 seconds, respectively. A new algorithm that shortens the calculation time using bit operations, together with the Least Trimmed Squares (LTS) estimator for the optimal slope, was applied to these data. Results: The correlation dimension increased as the data acquisition time became longer. The data sampled at 62.5 Hz showed the highest correlation dimension regardless of acquisition time, while the other sampling rates yielded similar values. Computation with bit operations instead of log operations shortened the calculation time to a statistically significant degree, and the LTS method estimated the slope of the correlation dimension more stably than the least squares estimator. Conclusion: The bit-operation and LTS methods enabled time-saving and efficient calculation of the correlation dimension. In addition, a 20-second time series sampled at 125 Hz was adequate to estimate the dimensional complexity of the human EEG.
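
For reference, the quantity being estimated is the Grassberger-Procaccia correlation dimension: embed the series with a time delay, count pairs of embedded points closer than r, and read D2 off the slope of log C(r) against log r. The sketch below is a generic illustration, not the paper's algorithm (it uses plain least squares for the slope rather than the LTS estimator, and none of the bit-operation speedup); the embedding dimension, delay, and radii grid are assumptions.

```python
import numpy as np

def correlation_sum(series, dim=5, delay=4, n_radii=20):
    """Grassberger-Procaccia correlation sums C(r) for a delay-embedded series."""
    n = len(series) - (dim - 1) * delay
    emb = np.stack([series[i * delay : i * delay + n] for i in range(dim)], axis=1)
    # all pairwise distances between embedded points (O(n^2) memory)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    dists = dists[np.triu_indices(n, k=1)]
    radii = np.logspace(np.log10(dists[dists > 0].min()),
                        np.log10(dists.max()), n_radii)
    c = np.array([(dists < r).mean() for r in radii])
    return radii, c

# D2 is the slope of log C(r) vs log r over the scaling region; the paper
# fits this slope with the more robust Least Trimmed Squares estimator.
rng = np.random.default_rng(0)
r, c = correlation_sum(rng.standard_normal(1000))
valid = c > 0
d2 = np.polyfit(np.log(r[valid]), np.log(c[valid]), 1)[0]
```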

Fast Full Search Block Matching Algorithm Using The Search Region Subsampling and The Difference of Adjacent Pixels (탐색 영역 부표본화 및 이웃 화소간의 차를 이용한 고속 전역 탐색 블록 정합 알고리듬)

  • Cheong, Won-Sik;Lee, Bub-Ki;Lee, Kyeong-Hwan;Choi, Jung-Hyun;Kim, Kyeong-Kyu;Kim, Duk-Gyoo;Lee, Kuhn-Il
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.11 / pp.102-111 / 1999
  • In this paper, we propose a fast full-search block matching algorithm using search region subsampling and the differences between adjacent pixels in the current block. The proposed algorithm calculates a lower bound on the mean absolute difference (MAD) at each search point from the MAD of a neighboring search point and the adjacent-pixel differences in the current block, and then performs block matching only at the search points that require it. Because computing the lower bound at a search point requires the MAD of a neighboring point, the search points are first subsampled by a factor of 4, and the MAD values at the subsampled points are computed by full block matching. The lower bounds at the remaining search points are then calculated from the MAD of the neighboring subsampled point and the adjacent-pixel differences in the current block. Finally, we discard the search points whose MAD lower bound exceeds the reference MAD, defined as the minimum of the MAD values at the subsampled search points, and perform block matching only at the remaining points. In this way, the computational complexity is reduced drastically while the motion-compensated error performance remains identical to that of the full-search block matching algorithm (FSBMA). Experimental results show that the proposed method has much lower computational complexity than the FSBMA while keeping the same motion-compensated error performance.
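
A compact rendering of that two-pass idea is sketched below. It is a hedged reconstruction rather than the authors' code: the per-shift bound (the sum of the mean horizontal and vertical adjacent-pixel differences of the current block) and the stride-2 sampling grid are my assumptions, and the caller is assumed to keep the search window inside the reference frame.

```python
import numpy as np

def mad(block, ref, x, y):
    """Mean absolute difference between the current block and the reference
    window whose top-left corner is (x, y)."""
    h, w = block.shape
    return np.abs(block.astype(np.int32)
                  - ref[y:y + h, x:x + w].astype(np.int32)).mean()

def fast_full_search(block, ref, x0, y0, radius=7):
    """Two-pass full search with MAD lower-bound pruning.

    Pass 1 evaluates MAD exactly on a grid subsampled by a factor of 4
    (stride 2 in x and y). Pass 2 visits the remaining points but skips any
    point whose lower bound (a neighboring exact MAD minus a one-step bound
    derived from the current block's adjacent-pixel differences) cannot beat
    the best MAD found so far.
    """
    b = block.astype(np.int32)
    step = (np.abs(np.diff(b, axis=0)).mean()
            + np.abs(np.diff(b, axis=1)).mean())  # conservative per-shift bound

    exact = {(dx, dy): mad(block, ref, x0 + dx, y0 + dy)
             for dy in range(-radius, radius + 1, 2)
             for dx in range(-radius, radius + 1, 2)}
    best, best_mad = min(exact.items(), key=lambda kv: kv[1])

    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dx, dy) in exact:
                continue
            lb = max(m - step for (nx, ny), m in exact.items()
                     if abs(nx - dx) <= 1 and abs(ny - dy) <= 1)
            if lb >= best_mad:
                continue  # pruned: this point cannot improve on the best MAD
            m = mad(block, ref, x0 + dx, y0 + dy)
            if m < best_mad:
                best, best_mad = (dx, dy), m
    return best, best_mad
```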

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.12 / pp.126-134 / 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most detection methods are complicated to implement in a real-time portable electrocardiograph and require a large amount of computation. R-peak detection requires pre-processing and post-processing related to baseline drift and the removal of powerline noise from the ECG data. Adaptive filter techniques are widely used for R-peak detection, but the R-peak cannot be detected when the input falls below a threshold value, and noise can lead to an erroneous threshold that causes P-peaks and T-peaks to be detected instead. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes baseline drift in the ECG signal using an adaptive filter, which resolves the problems involved in threshold extraction. We also propose a technique to extract the appropriate threshold automatically using the minimum and maximum values of the filtered ECG signal, and a threshold neighborhood search technique to detect the R-peak. Experiments confirmed the improved R-peak detection accuracy of the proposed method, and the reduced amount of computation achieved a detection speed suitable for mobile systems. The experimental results show very high heart-rate detection accuracy and sensitivity (about 100%).
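
The pipeline the abstract outlines (baseline removal, an automatic threshold from the filtered signal's minimum and maximum, then a neighborhood search around each crossing) can be sketched as follows. This is a rough sketch, not the paper's algorithm: a moving-average detrend stands in for the adaptive filter, and the threshold fraction, search window, and refractory gap are illustrative parameters.

```python
import numpy as np

def detect_r_peaks(ecg, fs=250, frac=0.6, search=0.1):
    """R-peak detection sketch: detrend, threshold, then local peak search.

    ecg: 1D array of samples; fs: sampling rate in Hz.
    frac: threshold position between the filtered signal's min and max.
    search: half-width (seconds) of the neighborhood searched per crossing.
    """
    # Baseline removal with a moving-average detrend (a stand-in for the
    # paper's adaptive filter).
    win = int(0.6 * fs)
    baseline = np.convolve(ecg, np.ones(win) / win, mode="same")
    x = ecg - baseline

    # Automatic threshold between the min and max of the filtered signal.
    thr = x.min() + frac * (x.max() - x.min())

    peaks, half = [], int(search * fs)
    above = np.flatnonzero(x > thr)
    while above.size:
        i = above[0]
        seg = slice(max(0, i - half), min(len(x), i + half))
        peaks.append(seg.start + int(np.argmax(x[seg])))  # local maximum
        above = above[above > i + half]                   # refractory gap
    return np.array(peaks)
```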

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. Decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and laborious, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning scenarios: using the ConvNet as a fixed feature extractor, or fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means that a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, since it carries more information about the image; the concatenated representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-layer representation. Moreover, our approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on Caltech-256, 73.1% compared to 69.2% for the FC8 layer on VOC07, and 52.2% compared to 48.7% for the FC7 layer on SUN397. Our approach also achieved superior performance over existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
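
The fixed-feature-extractor pipeline maps naturally onto modern libraries. The sketch below, assuming PyTorch/torchvision and scikit-learn, grabs the FC6/FC7/FC8 activations of a pre-trained AlexNet with forward hooks, concatenates them into the 9192-dimensional representation the abstract describes, and applies PCA before a linear classifier. The 512-component PCA and the LinearSVC are illustrative choices, not the paper's exact configuration.

```python
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def multi_layer_features(images):
    """Concatenate AlexNet FC6/FC7/FC8 activations (4096+4096+1000 = 9192 dims).

    images: float tensor of shape (N, 3, 224, 224), already normalized with
    the ImageNet preprocessing the pre-trained weights expect.
    """
    net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
    grabbed = {}

    def hook(name):
        return lambda module, inp, out: grabbed.__setitem__(name, out.detach())

    # In torchvision's AlexNet, FC6/FC7/FC8 are classifier indices 1, 4, 6.
    for name, idx in (("fc6", 1), ("fc7", 4), ("fc8", 6)):
        net.classifier[idx].register_forward_hook(hook(name))
    with torch.no_grad():
        net(images)
    return torch.cat([grabbed["fc6"], grabbed["fc7"], grabbed["fc8"]], dim=1).numpy()

def fit_classifier(train_imgs, train_labels, n_components=512):
    """PCA removes redundancy across the three layers before a linear
    classifier; 512 components is an illustrative setting."""
    feats = multi_layer_features(train_imgs)
    pca = PCA(n_components=n_components).fit(feats)
    clf = LinearSVC().fit(pca.transform(feats), train_labels)
    return pca, clf
```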