• Title/Summary/Keyword: Optimal computation


Development of State Diagnosis Algorithm for Performance Improvement of PV System (태양광전원의 성능향상을 위한 상태진단 알고리즘 개발)

  • Choi, Sungsik;Kim, Taeyoun;Park, Jaebeom;Kim, Byungki;Rho, Daeseok
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.2 / pp.1036-1043 / 2014
  • The installation of PV systems in power distribution systems is increasing as one of the solutions to environmental pollution and the energy crisis. Because the output efficiency of a PV system declines due to aging and various operational obstacles, technologies for output prediction and state diagnosis of PV modules are required to improve their operating performance. Conventional output prediction methods, which consider various parameters and standard-test-condition values of the PV modules, involve complicated computation procedures and can produce large prediction errors. To overcome these problems, this paper proposes an optimal output prediction algorithm and a state diagnosis algorithm for PV modules based on the least squares method of linear regression analysis. In addition, this paper presents a state diagnosis evaluation system for PV modules built on the proposed algorithms. Simulation results from the proposed evaluation system confirm that the algorithms are a practical tool for state diagnosis of PV modules.
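
A minimal sketch of the least squares prediction and diagnosis idea in Python/NumPy, assuming (hypothetically) that irradiance and module temperature are the regressors and that a large relative residual between measured and predicted output flags a module; the variable names and the 10% threshold are illustrative, not taken from the paper:

```python
import numpy as np

def fit_pv_model(irradiance, temperature, power):
    """Fit power ~ b0 + b1*irradiance + b2*temperature by ordinary least squares."""
    X = np.column_stack([np.ones_like(irradiance), irradiance, temperature])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)
    return coef

def diagnose(coef, irradiance, temperature, power, tol=0.10):
    """Flag samples whose measured power deviates from the prediction by > tol."""
    X = np.column_stack([np.ones_like(irradiance), irradiance, temperature])
    predicted = X @ coef
    rel_err = np.abs(power - predicted) / np.maximum(predicted, 1e-9)
    return rel_err > tol  # True = suspected degradation or fault
```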

Development of Independent Target Approximation by Auto-computation of 3-D Distribution Units for Stereotactic Radiosurgery (정위적 방사선 수술시 3차원적 공간상 단위분포들의 자동계산법에 의한 간접적 병소 근사화 방법의 개발)

  • Choi Kyoung Sik;Oh Seung Jong;Lee Jeong Woo;Kim Jeung Kee;Suh Tae Suk;Choe Bo Young;Kim Moon Chan;Chung Hyun-Tai
    • Progress in Medical Physics / v.16 no.1 / pp.24-31 / 2005
  • Stereotactic radiosurgery (SRS) delivers a high dose of radiation to a small target volume in the brain, generally in a single fraction, while minimizing the dose delivered to the surrounding normal tissue. To automate SRS planning, a new multi-isocenter/shot treatment planning method for linear accelerator (linac) and gamma knife (GK) radiosurgery was developed, based on a physical lattice structure within the target. An optimal radiosurgical plan is governed by many beam parameters in linac- or gamma knife-based radiation therapy. In this work, an isocenter/shot was modeled as a sphere whose diameter equals the circular collimator/helmet hole size, because the width of the 50% isodose level in the dose profile is similar to that size. The computer-aided system first arranges the multiple isocenters/shots automatically, considering two parameters for each isocenter/shot: its position and its collimator/helmet size. Simultaneously, the irregularly shaped target is approximated by cubic structures through computation over voxel units. The resulting plans were evaluated in terms of dose-volume histograms, dose conformity, and dose homogeneity. For irregularly shaped targets, the new method performed optimal multi-isocenter packing in only a few seconds on the computer-aided system. The targets were enclosed by more than the 50% isodose curve, the dose conformity was at generally acceptable levels, and the dose homogeneity was always less than 2.0, satisfying the Radiation Therapy Oncology Group (RTOG) SRS criteria for various targets. In conclusion, this approach based on a physical lattice structure can provide a useful radiosurgical plan without restrictions on tumor shape, for both linac and GK SRS.
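
The automatic arrangement packs spheres (one per isocenter/shot) into a voxelized target. A toy greedy sketch of such packing in Python, assuming a boolean 3-D target mask and a fixed set of available sphere radii (the helmet/collimator sizes); the largest-sphere-first rule and the brute-force candidate search are illustrative stand-ins for the paper's actual lattice-based computation:

```python
import numpy as np

def pack_spheres(target, radii, max_shots=20):
    """Greedily place spheres (isocenters/shots) inside a boolean 3-D target mask."""
    zz, yy, xx = np.indices(target.shape)
    uncovered = target.copy()
    shots = []
    for _ in range(max_shots):
        if not uncovered.any():
            break
        placed = False
        for r in sorted(radii, reverse=True):        # try the largest sphere first
            for c in np.argwhere(uncovered):         # brute-force candidate centers
                ball = (zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 <= r * r
                if np.all(target[ball]):             # sphere must fit fully inside
                    shots.append((tuple(c), r))
                    uncovered &= ~ball
                    placed = True
                    break
            if placed:
                break
        if not placed:                               # no remaining sphere fits
            break
    return shots                                     # [(center voxel, radius), ...]
```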


A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging / v.41 no.3 / pp.234-240 / 2007
  • Purpose: In this study, we propose a block-iterative method for reconstructing Compton-scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, can be applied to image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were run for 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to standard EM with 64 iterations, was approximately 14 times faster in computation time than standard EM. In OSEM, all three subset-selection schemes yielded similar results in both computation time and normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to image reconstruction for the Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while preserving the quality of standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position is the most practical choice.
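
OSEM accelerates EM by applying one multiplicative update per subset of the projection data instead of one per full pass. A minimal sketch for a generic linear system model in Python/NumPy, assuming the bins have already been grouped into subsets (e.g., by scatter angle and/or detector position); the Compton-camera system matrix itself is not modeled here:

```python
import numpy as np

def osem(A, y, subsets, n_iter=4):
    """Ordered-subset EM for an emission model y ~ Poisson(A @ x).

    A:       (n_bins, n_voxels) system matrix.
    y:       (n_bins,) measured counts.
    subsets: list of index arrays partitioning the bins; with 16 subsets,
             4 iterations roughly match 64 iterations of plain EM.
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:                       # one EM-style update per subset
            A_s, y_s = A[s], y[s]
            ratio = y_s / np.maximum(A_s @ x, 1e-12)
            x *= (A_s.T @ ratio) / np.maximum(A_s.sum(axis=0), 1e-12)
    return x
```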

A Proposed Algorithm and Sampling Conditions for Nonlinear Analysis of EEG (뇌파의 비선형 분석을 위한 신호추출조건 및 계산 알고리즘)

  • Shin, Chul-Jin;Lee, Kwang-Ho;Choi, Sung-Ku;Yoon, In-Young
    • Sleep Medicine and Psychophysiology / v.6 no.1 / pp.52-60 / 1999
  • Objectives: With the aim of finding the appropriate conditions and algorithms for dimensional analysis of the human EEG, we calculated correlation dimensions under various conditions of sampling rate and data acquisition time, and improved the computation algorithm by using bit operations instead of log operations. Methods: EEG signals from 13 scalp leads of one subject were digitized with an A-D converter at 12-bit resolution and a 1000 Hz sampling rate for 32 seconds. From the original data, we made 15 time series with sampling rates of 62.5, 125, 250, 500, and 1000 Hz and data acquisition times of 10, 20, and 30 seconds, respectively. A new algorithm that shortens the calculation time using bit operations, together with the Least Trimmed Squares (LTS) estimator for obtaining the optimal slope, was applied to these data. Results: The correlation dimension increased as the data acquisition time became longer. The data with a sampling rate of 62.5 Hz showed the highest correlation dimension regardless of sampling time, whereas the correlation dimensions at the other sampling rates were similar. Computation with bit operations instead of log operations shortened the calculation time to a statistically significant degree, and the LTS method estimated the slope of the correlation dimension more stably than the least squares estimator. Conclusion: Bit operations and the LTS method were successfully utilized for time-saving and efficient calculation of the correlation dimension. In addition, a 20-second time series sampled at 125 Hz was adequate to estimate the dimensional complexity of the human EEG.
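
The bit-operation speedup can be read as replacing a log() call per distance pair with extraction of the float's binary exponent when binning distances for the correlation sum. A minimal sketch of a correlation-dimension estimate in Python/NumPy using this trick via np.frexp; the embedding dimension and delay are illustrative, and the robust LTS slope fit mentioned in the abstract is not reproduced here:

```python
import numpy as np

def correlation_sum(signal, dim=10, delay=1):
    """Grassberger-Procaccia correlation sum of a delay-embedded signal.

    Returns (log r, log C(r)) points whose slope estimates the
    correlation dimension (the paper fits this slope with LTS).
    """
    n = len(signal) - (dim - 1) * delay
    emb = np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])
    # pairwise max-norm distances between embedded points
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    d = d[np.triu_indices(n, k=1)]
    # bit-operation idea: bin by the binary exponent instead of log() per pair
    _, exponents = np.frexp(d[d > 0])
    radius_exp, counts = np.unique(exponents, return_counts=True)
    log_C = np.log(np.cumsum(counts) / d.size)
    return radius_exp * np.log(2.0), log_C      # log r at r = 2**exponent
```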


A Study on Real-time Tracking Method of Horizontal Face Position for Optimal 3D T-DMB Content Service (지상파 DMB 단말에서의 3D 컨텐츠 최적 서비스를 위한 경계 정보 기반 실시간 얼굴 수평 위치 추적 방법에 관한 연구)

  • Kang, Seong-Goo;Lee, Sang-Seop;Yi, June-Ho;Kim, Jung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.88-95 / 2011
  • An embedded mobile device usually has less computing power than a general-purpose computer because of its relatively low system specifications. Consequently, conventional face tracking and detection methods, which require complex algorithms for high recognition rates, are unsuitable for a mobile environment aiming at real-time detection. On the other hand, a real-time tracking and detection algorithm enables a two-way interactive multimedia service between the user and the mobile device, providing far better quality of service than a one-way service. It is therefore necessary to develop a real-time face and eye tracking technique optimized for the mobile environment. For this reason, this paper proposes a method of tracking the horizontal face position of a user on a T-DMB device to enhance the quality of 3D DMB content. The proposed method uses the orientation of edges to estimate the left and right boundaries of the face, and the horizontal position and size of the face are finally determined from the color edge information. The Sobel gradient vector is projected vertically and candidate face boundaries are selected; a smoothing method and a peak-detection method are proposed for the precise decision. Because general face detection algorithms use multi-scale feature vectors, their detection time is too long in a mobile environment, whereas the proposed single-scale detection method can detect the face faster than conventional face detection methods.
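
A simplified Python/OpenCV sketch of the edge-projection idea: the horizontal Sobel gradient responds to the vertical edges at the sides of the face, its column-wise (vertical) projection is smoothed, and the left/right boundaries are taken as peaks of the profile. This assumes a roughly centered face and grayscale input; the paper's color-edge refinement and exact peak-detection rule are not reproduced:

```python
import cv2
import numpy as np

def face_horizontal_bounds(gray, smooth=15):
    """Estimate left/right face boundary columns from projected edge strength."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)      # vertical-edge response
    profile = np.abs(gx).sum(axis=0)                     # vertical projection
    kernel = np.ones(smooth) / smooth
    profile = np.convolve(profile, kernel, mode="same")  # smoothing step
    mid = len(profile) // 2
    left = int(np.argmax(profile[:mid]))                 # strongest peak per side
    right = int(mid + np.argmax(profile[mid:]))
    return left, right
```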

A Study on Parallel Performance Optimization Method for Acceleration of High Resolution SAR Image Processing (고해상도 SAR 영상처리 고속화를 위한 병렬 성능 최적화 기법 연구)

  • Lee, Kyu Beom;Kim, Gyu Bin;An, Sol Bo Reum;Cho, Jin Yeon;Lim, Byoung-Gyun;Kim, Dong-Hyun;Kim, Jeong Ho
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.46 no.6 / pp.503-512 / 2018
  • SAR (Synthetic Aperture Radar) is a technology for acquiring images by processing signals obtained from radar, and demand for high-resolution SAR images is increasing. In this paper, for high-speed processing of high-resolution SAR image data, SAR image processing algorithms are studied to achieve optimal performance on a multi-core computer architecture. The performance deterioration caused by the large amount of input/output data for high-resolution images is reduced by maximizing memory utilization, and the parallelization ratio of the code is increased by using the dynamic scheduling and nested parallelism of OpenMP. As a result, not only is the total computation time reduced, but the upper bound of parallel performance is also raised, and the actual parallel performance on a 10-core system is improved by more than a factor of eight. The results of this study are expected to be used effectively in the development of high-resolution SAR image processing software for multi-core systems with large memory.
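
The paper's speedup comes from OpenMP's dynamic scheduling and nested parallelism in C/C++; since the sketches here use Python, the analogous dynamic-scheduling pattern is shown with a process pool handing out blocks one at a time as workers become free (a stand-in for OpenMP's schedule(dynamic), not the paper's code; the block operation is a placeholder):

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def process_block(block):
    """Placeholder for one block of SAR image-processing work."""
    return np.fft.ifft(np.fft.fft(block, axis=0), axis=0)

def process_image(image, n_blocks=64, workers=10):
    """Split the image into blocks and schedule them dynamically:
    chunksize=1 lets idle workers pull the next block, balancing the
    load when block costs are uneven, as schedule(dynamic) does."""
    blocks = np.array_split(image, n_blocks, axis=0)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_block, blocks, chunksize=1))
    return np.concatenate(results, axis=0)
```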

Precision Verification of New Global Gravitational Model Using GPS/Leveling Data (GPS/Leveling 자료를 이용한 최신 전지구중력장 모델의 정밀도 검증)

  • Baek, Kyeongmin;Kwon, Jay Hyoun;Lee, Jisun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.3 / pp.239-247 / 2013
  • A global gravitational model is essential for constructing a precise geoid model, and it also serves as basic scientific data in geophysics and oceanography. In Korea, EGM2008 has been used since the late 2000s. After the publication of EGM2008, new gravitational models based on GOCE data, such as GOCO02S, GOCO03S, EIGEN-6C, and EIGEN-6C2, were developed. Therefore, the recent models need to be verified to select the optimal one for geoid computation in Korea. In this study, we compared the new GOCE-based models with EGM2008 and verified their precision against NGII (National Geographic Information Institute) GPS/Leveling data. Compared with EGM2008, the EIGEN models differ by about 8 cm, whereas the GOCO models differ by about 70 cm. The reason is that the GOCO models were developed using only satellite data, while EGM2008 also used gravity and altimetry data in addition to satellite data. In comparison with the GPS/Leveling data, EGM2008 showed the best precision, 6.1 cm, over the whole Korean peninsula. New global gravitational models incorporating additional GOCE data will continue to be published, so the precision verification of new models should also continue.
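
The verification compares each model's geoid heights with geometric geoid heights derived from GPS/leveling, where N = h (GPS ellipsoidal height) minus H (leveled orthometric height). A minimal sketch of the comparison statistics in Python/NumPy, with hypothetical input arrays:

```python
import numpy as np

def geoid_precision(h_ellipsoidal, H_orthometric, N_model):
    """Mean, standard deviation, and RMS of (GPS/leveling geoid - model geoid).

    All inputs are arrays over the benchmark points, in meters.
    """
    N_gps = h_ellipsoidal - H_orthometric   # geometric geoid height
    diff = N_gps - N_model
    rms = np.sqrt(np.mean(diff**2))
    return diff.mean(), diff.std(ddof=1), rms
```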

A Study on the Applicability of Deep Learning Algorithm for Detection and Resolving of Occlusion Area (영상 폐색영역 검출 및 해결을 위한 딥러닝 알고리즘 적용 가능성 연구)

  • Bae, Kyoung-Ho;Park, Hong-Gi
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.11 / pp.305-313 / 2019
  • Recently, spatial information has been constructed actively from images obtained by drones. Because occlusion areas occur due to buildings as well as obstacles such as trees, pedestrians, and banners in urban areas, an efficient way to resolve the problem is necessary. Instead of the traditional approach, which replaces the occlusion area with other images obtained from different positions, various models based on deep learning were examined and compared. A comparison of a feature descriptor (HOG), the machine learning-based SVM, and the deep learning-based DNN, CNN, and RNN showed that the CNN is used broadly to detect and classify objects. Until now, many studies have focused on the development and application of individual models, so it has been difficult to select an optimal model. On the other hand, deep learning-based detection and classification techniques are expected to improve, because many researchers are attempting to increase model accuracy as well as reduce computation time. In that case, the procedures for generating spatial information will change so that the occlusion area is detected and replaced with simulated images automatically, and the efficiency of time, cost, and workforce will also improve.

Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis (도로 주행환경 분석을 위한 빅데이터 플랫폼 구축 정보기술 인프라 개발)

  • Jung, In-taek;Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.3 / pp.669-678 / 2018
  • This study developed the information technology infrastructure for building a driving-environment analysis platform using various kinds of big data, such as vehicle sensing data and public data. First, on the hardware side, a small platform server with a parallel structure for distributed big data processing was developed. Next, on the software side, programs for big data collection/storage, processing/analysis, and information visualization were developed. The collection software was developed as a collection interface using Kafka, Flume, and Sqoop. The storage software was divided into the Hadoop distributed file system and Cassandra DB according to how the data are used. The processing software performs spatial-unit matching and time-interval interpolation/aggregation of the collected data by applying a grid index method. The analysis software was developed as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization software was developed as a Web GIS engine program for providing and visualizing various kinds of driving-environment information. The performance evaluation derived the number of executors, the optimal memory capacity, and the number of cores for the development server, and the computation performance was superior to that of other cloud computing environments.
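
The processing step matches records to spatial units with a grid index and aggregates them over time intervals. A minimal sketch of that idea in Python, assuming point records of the form (lon, lat, unix_time, speed); the 0.001-degree cell size and 5-minute slot are illustrative choices, not the platform's actual parameters:

```python
import math
from collections import defaultdict

def grid_key(lon, lat, cell=0.001):
    """Map a coordinate to its grid cell (roughly 100 m at Korean latitudes)."""
    return (math.floor(lon / cell), math.floor(lat / cell))

def aggregate_by_grid_and_time(records, cell=0.001, interval=300):
    """Group (lon, lat, unix_time, speed) records by grid cell and time slot,
    then average the speed per (cell, slot)."""
    buckets = defaultdict(list)
    for lon, lat, t, speed in records:
        buckets[(grid_key(lon, lat, cell), int(t // interval))].append(speed)
    return {key: sum(v) / len(v) for key, v in buckets.items()}
```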

Ground-Roll Suppression of the Land Seismic Data using the Singular Value Decomposition (SVD) (특이값 분해를 이용한 육상 탄성파자료의 그라운드롤 제거)

  • Sa, Jin-Hyeon;Kim, Sung-Soo;Kim, Ji-Soo
    • The Journal of Engineering Geology / v.28 no.3 / pp.465-473 / 2018
  • The application of singular value decomposition (SVD) filtering is examined for attenuating the ground-roll in land seismic data. Prior to the SVD computation, which seeks the singular values containing the highly correlatable reflection energy, processing steps such as automatic gain control, elevation and refraction statics, NMO correction, and residual statics are performed to enhance the horizontal correlation and continuity of the reflections. Optimal parameters for SVD filtering are chosen effectively with a diagnostic display of the inverse-NMO (INMO) corrected CSP (common shot point) gather. On field data dominated by dispersive ground-roll, the continuity of reflection events is improved more by SVD filtering than by f-k filtering, because the ground-roll is eliminated while the low-frequency reflections are preserved. This is well explained by the average amplitude spectra of the f-k and SVD filtered data. The reflectors, including the horizontal layer of the reservoir, appear much clearer as laminated events on the stack section after SVD filtering and the subsequent processing steps of spiking deconvolution and time-variant spectral whitening.
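
After NMO correction flattens the reflections, the horizontally correlatable reflection energy concentrates in the leading eigenimages of the gather, while dispersive ground-roll spreads into the later ones. A minimal sketch of SVD filtering in Python/NumPy; the number of retained singular values is an illustrative parameter, which the paper instead chooses diagnostically from INMO-corrected gathers:

```python
import numpy as np

def svd_filter(gather, keep=3):
    """Keep the first `keep` eigenimages of an NMO-corrected gather.

    gather: 2-D array (n_traces, n_samples) with near-horizontal
    reflections; energy outside the leading singular vectors,
    such as ground-roll, is rejected.
    """
    U, s, Vt = np.linalg.svd(gather, full_matrices=False)
    s_kept = np.zeros_like(s)
    s_kept[:keep] = s[:keep]
    return (U * s_kept) @ Vt   # reconstruct from the retained eigenimages
```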