• Title/Summary/Keyword: Real-Time Computation Methods

Improvement in Computation of ΔV10 Flicker Severity Index Using Intelligent Methods

  • Moallem, Payman;Zargari, Abolfazl;Kiyoumarsi, Arash
    • Journal of Power Electronics
    • /
    • v.11 no.2
    • /
    • pp.228-236
    • /
    • 2011
  • The $\Delta V_{10}$, or 10-Hz flicker index, a common measure of voltage flicker severity in power systems, requires a high computational cost and a large amount of memory. In this paper, a new method for measuring the $\Delta V_{10}$ index, based on the Adaline (adaptive linear neuron) system, the FFT (fast Fourier transform), and the PSO (particle swarm optimization) algorithm, is proposed. In this method, to reduce the sampling frequency, calculations are carried out on the envelope of the power system voltage that contains the flicker component. Extraction of the voltage envelope is implemented by the Adaline system. In addition, to increase the accuracy of computing the flicker components, the PSO algorithm is used to reduce the spectral leakage error in the FFT calculations. The proposed method therefore has a lower computational cost in the FFT computation, due to the smaller sampling window, and requires less memory, since it works on the envelope of the power system voltage. Moreover, it is more accurate because the PSO algorithm determines the flicker frequency and the corresponding amplitude. The sensitivity of the proposed method to drift of the mains frequency is very low. The proposed algorithm is evaluated by simulations, and the validity of the simulations is confirmed by implementing the algorithm on an ARM microcontroller-based digital system. Finally, its operation is evaluated with real-time measurements.
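
A minimal sketch of the envelope-then-FFT idea described above, on a synthetic 50 Hz test signal: the mains carrier is demodulated by a square-law detector with a one-cycle moving average standing in for the paper's Adaline stage, and flicker components are then read from an FFT of the slow envelope. The sampling rate and signal parameters are assumptions; the leakage that appears when the flicker frequency falls between FFT bins is exactly what the paper's PSO step is meant to correct.

```python
import numpy as np

fs = 3200.0                      # sampling frequency [Hz] (assumed)
f0 = 50.0                        # mains frequency [Hz]
t = np.arange(0, 2.0, 1/fs)

# Synthetic test signal: 50 Hz carrier amplitude-modulated by a 10 Hz flicker.
flicker_amp, flicker_freq = 0.02, 10.0
v = (1 + flicker_amp*np.cos(2*np.pi*flicker_freq*t)) * np.cos(2*np.pi*f0*t)

# Envelope by square-law demodulation + one-cycle moving average (a simple
# stand-in for the Adaline stage): v^2 = 0.5*(1+m)^2*(1 + cos(4*pi*f0*t)),
# and averaging over one mains cycle removes the double-frequency ripple.
win = int(fs/f0)
env2 = np.convolve(v**2, np.ones(win)/win, mode='valid')
env = np.sqrt(2*env2)            # recovered modulation envelope 1 + m(t)

# FFT of the envelope: flicker components appear directly at their own
# frequencies. Since 10 Hz rarely lands exactly on a bin, spectral leakage
# biases the amplitude; the paper refines this estimate with PSO.
spec = np.abs(np.fft.rfft(env - env.mean())) * 2/len(env)
freqs = np.fft.rfftfreq(len(env), 1/fs)
k = np.argmax(spec)
print(f"dominant flicker: {freqs[k]:.2f} Hz, amplitude {spec[k]:.4f}")
```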

A Comparative Study of DCT-Based H.263 Quantizers for Computation Reduction (계산량 감축을 위한 DCT-Based H.263 양자화기의 비교 연구)

  • Shin, Kyung-Cheol
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.9 no.3
    • /
    • pp.195-200
    • /
    • 2008
  • To compress moving-picture data effectively, the spatial and temporal redundancy of the input image data must be reduced. Motion estimation/compensation methods effectively reduce temporal redundancy, but they increase computational complexity because of the prediction between frames, so algorithms for computation reduction and real-time processing are needed. This paper presents a quantizer that effectively quantizes DCT coefficients by taking human visual sensitivity into account. At the same transfer speed, the proposed DCT-based H.263 quantizer can transmit more frames than TMN5 and reduces the frame-drop effect. In average PSNR, an objective measure of image quality, the luminance signal showed a difference of -0.3 to +0.65 dB and the chrominance signal showed an improvement of about 1.73 dB compared with TMN5. In computational load, the proposed method achieves a reduction of 30-31% compared with NTSS and 20-21% compared with 4SS.
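
As a hedged illustration of the idea of quantizing DCT coefficients according to human visual sensitivity: higher-frequency coefficients, to which the eye is less sensitive, get coarser steps. The base step size and the weight ramp below are assumptions, not the paper's tables.

```python
import numpy as np

Q_STEP = 8                                   # base quantizer step (assumed)
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
w = 1.0 + 0.25*(u + v)                       # simple HVS-style weight ramp

def quantize(dct_block):
    """Quantize an 8x8 array of DCT coefficients with frequency weighting."""
    return np.round(dct_block / (Q_STEP * w)).astype(int)

def dequantize(levels):
    """Reconstruct approximate coefficients from quantized levels."""
    return levels * (Q_STEP * w)
```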

Design of Collaborative Response Framework Based on the Security Information Sharing in the Inter-domain Environments (도메인간 보안 정보 공유를 통한 협력 대응 프레임워크 설계)

  • Lee, Young-Seok;An, Gae-Il;Kim, Jong-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.3
    • /
    • pp.605-612
    • /
    • 2011
  • Recently, cyber attacks against public communication networks have become more complicated and varied. Moreover, in some cases one country may mount systematic attacks at a national level against another country to steal its confidential information and intellectual property. The issue of cyber attacks is therefore now regarded as a new major threat to national security. The conventional way of operating individual information security systems such as IDS and IPS may not be sufficient to cope with attacks committed by highly motivated attackers with significant resources. In this paper, we discuss the technologies and standardization trends concerning current cyber threats and response methods, and we design a collaborative response framework based on security information sharing in inter-domain environments. A method of computing the network threat level on top of the collaborative response framework is proposed; using it, network threats can be detected quickly and responses executed in real time.
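
The abstract does not give the exact formula, so the following is only a plausible sketch of a network threat-level computation over security events shared between domains; the `SecurityEvent` fields, per-domain trust weights, and scaling are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    domain: str        # reporting domain (e.g., an ISP or agency)
    severity: float    # normalized severity in [0, 1]
    confidence: float  # detector confidence in [0, 1]

def threat_level(events, domain_weight):
    """Aggregate shared events into a single [0, 1] network threat level."""
    if not events:
        return 0.0
    score = sum(domain_weight.get(e.domain, 0.5) * e.severity * e.confidence
                for e in events)
    return min(1.0, 2 * score / len(events))   # clamp; scaling is arbitrary

# Example: two domains report suspicious activity with different confidence.
events = [SecurityEvent("ISP-A", 0.9, 0.8), SecurityEvent("ISP-B", 0.4, 0.6)]
print(threat_level(events, {"ISP-A": 1.0, "ISP-B": 0.7}))
```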

Parallel Structure Design Method for Mass Spring Simulation (질량스프링 시뮬레이션을 위한 병렬 구조 설계 방법)

  • Sung, Nak-Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.55-63
    • /
    • 2019
  • Recently, GPU computing methods have been utilized to improve the performance of physics simulation. In particular, deformable-object simulation, which requires a large amount of computation, needs a GPU-based parallel processing algorithm to guarantee real-time performance. We have studied a parallel structure design method to improve the performance of the mass-spring method, one of the standard ways of implementing deformable-object simulation. We used GLSL, OpenGL's shading language, which allows direct access to the GPU, and implemented a GPGPU environment using the compute shader, an independent pipeline. To verify the effectiveness of the parallel structure design method, the mass-spring system was implemented on both the CPU and the GPU. Experimental results show that the proposed method improves computation speed by about 6,000% compared to the CPU environment. We expect this lightweight simulation technology to be effectively applicable to augmented reality and virtual reality.
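
The update below is a minimal NumPy sketch of the mass-spring time step that such a system maps onto a GLSL compute shader: force accumulation is independent per spring and per particle, which is what makes the method parallelize well on the GPU. The stiffness, damping, and time-step values are assumed, not the paper's.

```python
import numpy as np

def step(pos, vel, springs, rest_len, k=50.0, damping=0.98, m=1.0, dt=1e-3):
    """One explicit-Euler step. pos, vel: (N,3) arrays; springs: (S,2) index
    pairs into pos; rest_len: (S,) rest lengths."""
    forces = np.zeros_like(pos)
    d = pos[springs[:, 1]] - pos[springs[:, 0]]        # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = k * (length - rest_len[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(forces, springs[:, 0],  f)               # accumulate per particle
    np.add.at(forces, springs[:, 1], -f)
    forces[:, 1] -= 9.8 * m                            # gravity on the y-axis
    vel = damping * (vel + dt * forces / m)
    return pos + dt * vel, vel
```

On the GPU, each compute-shader invocation would handle one particle (or one spring), replacing the vectorized NumPy operations above.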

A Camera Tracking System for Post Production of TV Contents (방송 콘텐츠의 후반 제작을 위한 카메라 추적 시스템)

  • Oh, Ju-Hyun;Nam, Seung-Jin;Jeon, Seong-Gyu;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.14 no.6
    • /
    • pp.692-702
    • /
    • 2009
  • Real-time virtual studios, which once ran only on expensive workstations, are now available on personal computers thanks to recent developments in graphics hardware. Nevertheless, in film and TV drama productions graphics are rendered off-line in the post-production stage, because the quality of graphics is still restricted by real-time hardware. Software-based camera tracking methods that take only the source video into account require much computation time and often show unstable results. To overcome this restriction, we propose a system, named POVIS (post virtual imaging system), that stores camera motion data from sensors at shooting time, as common virtual studios do, and uses the data in the post-production stage. For seamless registration of graphics onto the camera video, precise zoom lens calibration must precede post production; a practical method using only two planar patterns is employed in this work. We also present a method for reducing the camera sensor's error due to mechanical mismatch, using the Kalman filter. POVIS was successfully used to track the camera in a documentary production and saved much processing time, whereas conventional methods failed due to a lack of features to track.
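
A minimal sketch of the kind of Kalman filtering used to suppress mechanical jitter in camera sensor readings; the constant-velocity pan-angle model and the noise covariances below are illustrative assumptions, not POVIS internals.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-2, dt=1/30):
    """Filter noisy angle measurements z with a [angle, rate] state model."""
    F = np.array([[1, dt], [0, 1]])          # constant-velocity transition
    H = np.array([[1.0, 0.0]])               # we observe the angle only
    Q, R = q*np.eye(2), np.array([[r]])
    x, P = np.array([z[0], 0.0]), np.eye(2)
    out = []
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q        # predict
        y = zk - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = (P @ H.T) @ np.linalg.inv(S)     # Kalman gain
        x = x + (K @ y)
        P = (np.eye(2) - K @ H) @ P          # update
        out.append(x[0])
    return np.array(out)
```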

Data Association of Robot Localization and Mapping Using Partial Compatibility Test (Partial Compatibility Test 를 이용한 로봇의 위치 추정 및 매핑의 Data Association)

  • Yan, Rui Jun;Choi, Youn Sung;Wu, Jing;Han, Chang Soo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.33 no.2
    • /
    • pp.129-138
    • /
    • 2016
  • This paper presents a natural-corner-based SLAM (simultaneous localization and mapping) method with a robust data association algorithm in a real unknown environment. Corners are extracted from raw laser sensor data and chosen as landmarks for correcting the pose of the mobile robot and building the map. In the proposed data association method, the corners extracted at every step are separated into several groups with small numbers of corners. In each group, the local best matching vector between new corners and stored ones is found by joint compatibility, while the nearest feature for every new corner is checked by individual compatibility. All of these groups, with their local best matching vectors and the nearest-feature candidate of each new corner, are combined by partial compatibility in linear matching time. Finally, SLAM experiments in an indoor environment based on the extracted corners show the good robustness and low computational complexity of the proposed algorithms in comparison with existing methods.
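
The sketch below illustrates the individual-compatibility gate that underlies the grouping described above: a new corner is associated with a stored landmark only if their Mahalanobis distance passes a chi-square gate. The 2-DOF gate value and the shared innovation covariance are illustrative assumptions.

```python
import numpy as np

CHI2_GATE_2DOF = 5.99   # 95% chi-square gate for 2 degrees of freedom

def individually_compatible(z, landmark, S):
    """z, landmark: 2-D corner positions; S: 2x2 innovation covariance."""
    innov = z - landmark
    d2 = innov @ np.linalg.inv(S) @ innov   # squared Mahalanobis distance
    return d2 < CHI2_GATE_2DOF

def nearest_candidates(new_corners, landmarks, S):
    """For each new corner, return the gated nearest landmark index (or None)."""
    S_inv = np.linalg.inv(S)
    matches = []
    for z in new_corners:
        best, best_d2 = None, CHI2_GATE_2DOF
        for j, lm in enumerate(landmarks):
            innov = z - lm
            d2 = innov @ S_inv @ innov
            if d2 < best_d2:
                best, best_d2 = j, d2
        matches.append(best)
    return matches
```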

Effective Background Subtraction in Dynamic Scenes (동적 환경에서의 효과적인 움직이는 객체 추출)

  • Han, Jae-Hyek;Kim, Yong-Jin;Ryu, Sae-Woon;Lee, Sang-Hwa;Park, Jong-Il
    • Proceedings of the Korean HCI Society Conference
    • /
    • 2009.02a
    • /
    • pp.631-636
    • /
    • 2009
  • Foreground segmentation methods have steadily been researched in the field of computer vision. In particular, background subtraction, which extracts a foreground image from the difference between the current frame and a reference image called the "background image", has been widely used in a variety of real-time applications because of its low computation and high quality. However, if the background scene changes dynamically, background subtraction causes many errors. In this paper, we propose an efficient background subtraction method for dynamic environments containing both static and dynamic scenes. The proposed method is a hybrid that uses conventional background subtraction for static scenes and depth information for dynamic scenes. Its validity and efficiency are verified by a demonstration in a dynamic environment in which a video projector projects various images onto the background.
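
A minimal sketch of the hybrid rule, under assumed thresholds: ordinary background differencing where the background is static, and a depth test where it changes (e.g., in projector-lit regions, where pixel colors are unreliable but geometry is not).

```python
import numpy as np

def foreground_mask(frame, bg_img, depth, bg_depth,
                    dynamic, tau_color=25, tau_depth=0.10):
    """All inputs are per-pixel arrays of the same shape; `dynamic` is a
    boolean mask marking regions where the background changes (projected)."""
    color_fg = np.abs(frame.astype(int) - bg_img.astype(int)) > tau_color
    depth_fg = (bg_depth - depth) > tau_depth   # object closer than background
    return np.where(dynamic, depth_fg, color_fg)
```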

Implementation of Image Processing System for the Defect Inspection of Color Polyethylene (칼라팔레트의 불량 식별을 위한 영상처리 시스템 구현)

  • 김경민;박중조;송명현
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.5 no.6
    • /
    • pp.1157-1162
    • /
    • 2001
  • This paper deals with an inspection algorithm using a vision system. One of the major problems that arises during polymer production is detecting defects in the color product (bad pallets); erroneous output can cause substantial production and financial losses, so new methods for real-time inspection of defects are needed. For this reason, we present a vision-system algorithm for the defect inspection of PE color pallets. First, a differential filter is used to detect the edges of objects, and a labelling algorithm is applied for feature extraction. The labelling algorithm, designed for the defect inspection of pallets, separates all of the connected components appearing on the pallets; labelling the connected regions of an image is a fundamental computation in image analysis and machine vision, with a large number of applications. We also developed a vision-processing program in the Windows environment. Simulations and experimental results demonstrate the performance of the proposed algorithm.
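
Connected-component labelling is the core of the defect separation step; the following is a standard two-pass, union-find implementation of the kind applied to a binarized defect image (4-connectivity assumed; the paper does not specify its variant).

```python
import numpy as np

def label_components(binary):
    """binary: 2-D array of 0/1. Returns an int label image (0 = background)."""
    labels = np.zeros(binary.shape, dtype=int)
    parent = [0]                       # union-find forest; 0 is background

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    next_label = 1
    h, w = binary.shape
    for y in range(h):                 # first pass: provisional labels
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y-1, x] if y else 0
            left = labels[y, x-1] if x else 0
            if not up and not left:
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:
                labels[y, x] = min(l for l in (up, left) if l)
                if up and left and up != left:
                    parent[find(max(up, left))] = find(min(up, left))
    for y in range(h):                 # second pass: resolve equivalences
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```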

Voxel-Based Thickness Analysis of Intricate Objects

  • Subburaj, K.;Patil, Sandeep;Ravi, B.
    • International Journal of CAD/CAM
    • /
    • v.6 no.1
    • /
    • pp.105-115
    • /
    • 2006
  • Thickness is a commonly used parameter in product design and manufacture. Its intuitive definition, as the smallest dimension of a cross-section or the minimum distance between two opposite surfaces, is ambiguous for intricate solids, and there is very little reported work on the automatic computation of thickness. We present three generic definitions of thickness: interior thickness for points inside an object, exterior thickness for points on the object surface, and radiographic thickness along a view direction. Methods for computing and displaying the respective thickness values are also presented. The interior thickness distribution is obtained by peeling, or successive skin removal, eventually revealing the object skeleton (similar to the medial axis transformation). Another method involves radiographic scanning along a viewing direction, with minimum, maximum, and total thickness options, displayed on the surface of the object. The algorithms have been implemented using an efficient voxel-based representation that can handle up to one billion voxels (1000 per axis), coupled with a near-real-time display scheme that uses a look-up table based on voxel neighborhood configurations. Three different types of intricate objects, industrial (press cylinder casting), sculpture (Ganesha idol), and medical (pelvic bone), were used to test the algorithms successfully. The results are useful for early evaluation of manufacturability and other lifecycle considerations.
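
The peeling procedure can be sketched directly on a boolean voxel grid: each pass strips the current skin of boundary voxels, and the pass number at which a voxel disappears is its interior depth, so the deepest voxels approximate the skeleton. This is a minimal illustration, not the paper's optimized look-up-table implementation.

```python
import numpy as np

def peel_thickness(solid):
    """solid: 3-D boolean voxel grid. Returns per-voxel peel depth (0 outside)."""
    depth = np.zeros(solid.shape, dtype=int)
    remaining = solid.copy()
    layer = 0
    while remaining.any():
        layer += 1
        # A voxel is on the skin if any 6-neighbour lies outside `remaining`.
        padded = np.pad(remaining, 1, constant_values=False)
        core = (padded[2:, 1:-1, 1:-1] & padded[:-2, 1:-1, 1:-1] &
                padded[1:-1, 2:, 1:-1] & padded[1:-1, :-2, 1:-1] &
                padded[1:-1, 1:-1, 2:] & padded[1:-1, 1:-1, :-2])
        skin = remaining & ~core
        depth[skin] = layer            # record the pass that removed this voxel
        remaining &= core              # peel the skin off
    return depth   # voxels with the largest depth approximate the skeleton
```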

Optimized Structures with Hop Constraints for Web Information Retrieval (Hop 제약조건이 고려된 최적화 웹정보검색)

  • Lee, Woo-Key;Kim, Ki-Baek;Lee, Hwa-Ki
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.33 no.4
    • /
    • pp.63-82
    • /
    • 2008
  • The explosive growth of the Web is creating significant demand for structural analysis of various web objects. The more web objects become available, the more difficult it is for clients (i.e., common web users and web robots) and servers (i.e., web search engines) to retrieve what they really want. We focus on the structure of web objects, introducing optimization models for more convenient and effective information retrieval. For this purpose, we represent web objects and hyperlinks as a directed graph from which optimal structures are derived in the form of rooted directed spanning trees and Top-k trees. Computational experiments were executed on synthetic data as well as on real web sites' domains, in which Lagrangian relaxation approaches exploited the Top-k trees and the hop-constraint resolutions. In the experiments, our methods outperformed conventional approaches, so that a complex web graph can successfully be converted into optimally structured ones within a reasonable amount of computation time.
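
As a hedged sketch of the hop-constraint building block only (not the paper's full Lagrangian relaxation or Top-k machinery): a Bellman-Ford recursion truncated at H rounds yields a rooted tree in which every reachable web object is connected to the root by at most H hops, since each round can extend paths by one edge at most.

```python
import math

def hop_constrained_tree(n, edges, root, H):
    """n: node count; edges: list of (u, v, cost) arcs; returns (parent, dist)
    for a rooted tree whose paths from `root` use at most H hops."""
    dist = [math.inf]*n
    parent = [None]*n
    dist[root] = 0.0
    for _ in range(H):                 # each round allows one additional hop
        updated = False
        new_dist = dist[:]             # freeze distances within a round
        for u, v, c in edges:
            if dist[u] + c < new_dist[v]:
                new_dist[v], parent[v] = dist[u] + c, u
                updated = True
        dist = new_dist
        if not updated:                # converged before using all H hops
            break
    return parent, dist
```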