• Title/Summary/Keyword: video filtering

Search Result 254

Emulation of Anti-alias Filtering in Vision Based Motion Measurement (비전 센서의 앨리어싱 방지 필터링 모방 기법)

  • Kim, Jung-Hyun
    • The Journal of Korea Robotics Society
    • /
    • v.6 no.1
    • /
    • pp.18-26
    • /
    • 2011
  • This paper presents Exposure Controlled Temporal Filtering (ECF), a method applied to visual motion tracking that can cancel the temporal aliasing caused by periodic camera vibrations and illumination fluctuations through control of the exposure time. We first present a theoretical analysis of the exposure-induced image time-integration process and of how it samples periodically fluctuating light impinging on the sensor. Based on this analysis, we develop a simple method to cancel high-frequency vibrations that are temporally aliased onto sampled image sequences and thus onto subsequent motion tracking measurements. Simulations and experiments using the 'Center of Gravity' and Normalized Cross-Correlation motion tracking methods were performed on a microscopic motion tracking system to validate the analytical predictions.
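
A minimal NumPy sketch of the integration-cancellation idea in this abstract, assuming a hypothetical 50 Hz disturbance sampled at 30 fps (these numbers and the simulation itself are illustrative, not the authors' setup): when the exposure time equals one full period of the disturbance, the exposure integration averages the fluctuation out of every sample, whereas a very short exposure lets it alias onto the frame sequence.

```python
# Illustrative simulation of exposure-controlled temporal filtering (assumed
# numbers: 50 Hz disturbance, 30 fps camera; not the authors' implementation).
import numpy as np

f_dist = 50.0            # frequency of the periodic disturbance (Hz), assumed known
frame_rate = 30.0        # camera frame rate (Hz); 50 Hz aliases onto these samples
base, amp = 100.0, 20.0  # mean light intensity and fluctuation amplitude

def sampled_intensity(exposure):
    """Time-average the fluctuating light over each frame's exposure window."""
    frame_starts = np.arange(0.0, 1.0, 1.0 / frame_rate)
    samples = []
    for t0 in frame_starts:
        t = np.linspace(t0, t0 + exposure, 1000)
        light = base + amp * np.sin(2 * np.pi * f_dist * t)
        samples.append(np.trapz(light, t) / exposure)   # exposure integration
    return np.array(samples)

short = sampled_intensity(1e-4)             # very short exposure: aliasing survives
matched = sampled_intensity(1.0 / f_dist)   # exposure = one disturbance period
print("std with short exposure  :", short.std())    # large: aliased fluctuation
print("std with matched exposure:", matched.std())  # ~0: fluctuation averaged out
```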

Reduction Method of Computational Complexity for Image Filtering Utilizing the Factorization Theorem (인수분해 공식을 이용한 영상 필터링 연산량 저감 방법)

  • Jung, Chan-sung;Lee, Jaesung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.354-357
    • /
    • 2013
  • Filtering algorithms are used very frequently in the preprocessing stage of many image processing algorithms in computer vision. Because video signals are two-dimensional, the computational complexity is very high. To reduce this complexity, separable filters and the factorization theorem are applied to the filtering operation. As a result, it is shown that a significant reduction in computational complexity is achieved, although the experimental results may vary slightly depending on the condition of the image.
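
The separable-filtering idea the abstract relies on can be illustrated with a short sketch. This is a generic example with an assumed 5-tap binomial kernel, not the paper's specific factorization: a separable 2-D kernel is applied as two 1-D passes, cutting per-pixel multiplications from K*K to 2K while producing the same output.

```python
# Generic separable-filtering sketch (assumed 5-tap binomial kernel); the same
# output is produced with far fewer multiplications per pixel.
import numpy as np
from scipy.signal import convolve2d

g = np.array([1, 4, 6, 4, 1], dtype=float)
g /= g.sum()                      # 1-D binomial (Gaussian-like) kernel
kernel_2d = np.outer(g, g)        # separable 2-D kernel = outer product

img = np.random.rand(256, 256)

# Direct 2-D filtering: 5 * 5 = 25 multiplications per pixel
direct = convolve2d(img, kernel_2d, mode="same")

# Separable filtering: 5 + 5 = 10 multiplications per pixel
rows = convolve2d(img, g[np.newaxis, :], mode="same")
separable = convolve2d(rows, g[:, np.newaxis], mode="same")

print(np.allclose(direct, separable))   # True: identical result, lower complexity
```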

A Kalman Filter based Video Denoising Method Using Intensity and Structure Tensor

  • Liu, Yu;Zuo, Chenlin;Tan, Xin;Xiao, Huaxin;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.8
    • /
    • pp.2866-2880
    • /
    • 2014
  • We propose a video denoising method based on the Kalman filter to reduce noise in video sequences. First, exploiting the strong spatiotemporal correlation between neighboring frames, motion estimation is performed between the previously denoised frames and the current noisy frame based on intensity and the structure tensor. The current noisy frame is processed in the temporal domain by using the motion estimation result as a parameter of the Kalman filter, while it is also processed in the spatial domain using the Wiener filter. Finally, by weighting the denoised frames from the Kalman and Wiener filtering, a satisfactory result can be obtained. Experimental results show that the performance of our proposed method is competitive when compared with state-of-the-art video denoising algorithms based on both peak signal-to-noise ratio and structural similarity evaluations.
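
A simplified per-pixel sketch of the temporal-Kalman-plus-spatial-Wiener combination described above. Motion estimation is omitted, and the noise variances, window size, and fusion weight are assumed values, so this only illustrates the structure of the method, not the authors' implementation.

```python
# Simplified per-pixel temporal Kalman + spatial Wiener fusion (illustrative
# noise variances, window size, and weight; motion compensation omitted).
import numpy as np
from scipy.signal import wiener

def denoise_frame(noisy, prev_denoised, prev_var,
                  meas_var=25.0, proc_var=4.0, w=0.6):
    """One Kalman step per pixel plus a spatial Wiener pass, blended with weight w."""
    pred = prev_denoised                       # temporal prediction from last estimate
    pred_var = prev_var + proc_var
    gain = pred_var / (pred_var + meas_var)
    temporal = pred + gain * (noisy - pred)    # Kalman update
    new_var = (1.0 - gain) * pred_var

    spatial = wiener(noisy, mysize=5)          # spatial Wiener estimate

    return w * temporal + (1.0 - w) * spatial, new_var

# Usage: carry the denoised frame and its variance forward through the sequence.
frames = [np.random.rand(64, 64) * 255 for _ in range(3)]   # stand-in video
est, var = frames[0], np.full_like(frames[0], 25.0)
for f in frames[1:]:
    est, var = denoise_frame(f, est, var)
```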

Measurement of missing video frames in NPP control room monitoring system using Kalman filter

  • Mrityunjay Chaubey;Lalit Kumar Singh;Manjari Gupta
    • Nuclear Engineering and Technology
    • /
    • v.55 no.1
    • /
    • pp.37-44
    • /
    • 2023
  • Using the Kalman filtering technique, we propose a novel method for estimating the missing video frames to monitor the activities inside the control room of a nuclear power plant (NPP). The purpose of this study is to reinforce the existing security and safety procedures in the control room of an NPP. The NPP control room serves as the nervous system of the plant, with instrumentation and control systems used to monitor and control critical plant parameters. Because the safety and security of the NPP control room are critical, it must be monitored closely by security cameras in order to assess and reduce the onset of any incidents and accidents that could adversely impact the safety of the NPP. However, for a variety of technical and administrative reasons, continuous monitoring may be interrupted. Because of the interruption, one or more frames of the video may be distorted or missing, making it difficult to identify the activity during this time period. This could endanger overall safety. The demonstrated Kalman filter model estimates the value of the missing frame pixel-by-pixel using information from the frame that occurred in the video sequence before it and the frame that will occur in the video sequence after it. The results of the experiment provide evidence of the effectiveness of the algorithm.
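
A minimal pixel-wise sketch of the idea of reconstructing a missing frame from its temporal neighbours: the previous frame serves as the prior and the following frame as the measurement in a single Kalman-style update. The variances (and hence the gain) are illustrative assumptions, not the paper's tuning.

```python
# Pixel-wise sketch: previous frame as prior, next frame as measurement,
# one Kalman-style update per pixel (variances are illustrative assumptions).
import numpy as np

def estimate_missing_frame(prev_frame, next_frame, prior_var=4.0, meas_var=4.0):
    prior = prev_frame.astype(float)            # predict: constant-intensity model
    gain = prior_var / (prior_var + meas_var)   # Kalman gain (scalar here)
    return prior + gain * (next_frame.astype(float) - prior)

prev_f = np.random.randint(0, 256, (48, 64))
next_f = np.random.randint(0, 256, (48, 64))
missing = estimate_missing_frame(prev_f, next_f)  # with equal variances, the average
```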

Segmentation of Objects of Interest for Video Content Analysis (동영상 내용 분석을 위한 관심 객체 추출)

  • Park, So-Jung;Kim, Min-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.8
    • /
    • pp.967-980
    • /
    • 2007
  • Video objects of interest play an important role in representing video content and are useful for improving the performance of video retrieval and compression. An object of interest may be the main object describing the content of a video shot or a core object that the video producer wants to emphasize in the shot. Note that an object that strongly attracts the eye in a video shot is not necessarily an object of interest, and a non-moving object may be an object of interest just as a moving one may. However, it is not easy to define an object of interest clearly, because a procedural description of human interest is difficult. In this paper, a set of four filtering conditions for extracting moving objects of interest is suggested, defined by considering the variation of location, size, and movement pattern of moving objects in a video shot. Non-moving objects of interest are defined by another set of four extraction conditions related to the saliency of color/texture, location, size, and occurrence frequency of static objects in a video shot. In a test with 50 video shots, the segmentation method based on the two sets of conditions extracted the manually chosen moving and non-moving objects of interest with an accuracy of 84%.
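
The rule-based flavour of such filtering conditions can be sketched as a simple predicate over a tracked object. The track fields and thresholds below are hypothetical and only illustrate the style of test the abstract describes (persistence, location variation, and size), not the paper's actual four conditions.

```python
# Hypothetical rule-based filter for moving objects of interest; the track
# fields and thresholds are invented for illustration only.
def is_moving_object_of_interest(track,
                                 min_frames=10,          # must persist in the shot
                                 min_displacement=0.05,  # fraction of frame width
                                 min_area=0.01,          # fraction of frame area
                                 max_area=0.5):
    centers, areas = track["centers"], track["areas"]    # per-frame measurements
    if len(centers) < min_frames:                        # too short-lived
        return False
    dx = max(c[0] for c in centers) - min(c[0] for c in centers)
    dy = max(c[1] for c in centers) - min(c[1] for c in centers)
    if (dx ** 2 + dy ** 2) ** 0.5 < min_displacement:    # barely moves
        return False
    mean_area = sum(areas) / len(areas)                  # size condition
    return min_area <= mean_area <= max_area

track = {"centers": [(0.40 + 0.01 * k, 0.50) for k in range(15)],
         "areas": [0.05] * 15}
print(is_moving_object_of_interest(track))   # True: persistent, moving, mid-sized
```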

Loop-Filtering for Reducing Corner Outliers (모서리 잡음 제거를 위한 Loop 필터링 기법)

  • 홍윤표;전병우
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.217-223
    • /
    • 2004
  • In block-based lossy video compression, severe quantization causes discontinuities along block boundaries, so annoying blocking artifacts are visible in decoded video images. These blocking artifacts significantly decrease the subjective image quality. Many algorithms have been proposed to reduce blocking artifacts in decoded images; however, studies on the so-called corner outlier have been very limited. Corner outliers make image edges look disconnected from those of neighboring blocks across block boundaries. To solve this problem, we propose a corner outlier detection and compensation algorithm applied as loop filtering in the spatial domain. Experimental results show that the proposed method provides much improved subjective image quality.
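
A toy sketch of what corner-outlier compensation can look like at an 8x8 block crossing: if one of the four pixels meeting at the corner deviates strongly from the other three, it is pulled toward their mean. The detection threshold and the blending rule are assumptions for illustration, not the filter proposed in the paper.

```python
# Toy corner-outlier compensation at 8x8 block crossings (threshold and
# blending rule are assumptions, not the paper's filter).
import numpy as np

def compensate_corner_outliers(img, block=8, thresh=30.0):
    out = img.astype(float)
    h, w = out.shape
    for y in range(block, h, block):
        for x in range(block, w, block):
            corners = out[y-1:y+1, x-1:x+1]      # the four pixels at the crossing
            for dy in (0, 1):
                for dx in (0, 1):
                    p = corners[dy, dx]
                    others = np.delete(corners.flatten(), dy * 2 + dx)
                    if abs(p - others.mean()) > thresh:          # isolated outlier
                        out[y - 1 + dy, x - 1 + dx] = 0.5 * (p + others.mean())
    return out

decoded = np.random.randint(0, 256, (64, 64))
cleaned = compensate_corner_outliers(decoded)
```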

Binary Image Based Fast DoG Filter Using Zero-Dimensional Convolution and State Machine LUTs

  • Lee, Seung-Jun;Lee, Kye-Shin;Kim, Byung-Gyu
    • Journal of Multimedia Information System
    • /
    • v.5 no.2
    • /
    • pp.131-138
    • /
    • 2018
  • This work describes a binary image based fast Difference of Gaussian (DoG) filter using zero-dimensional (0-d) convolution and state machine look-up tables (LUTs) for image and video stitching hardware platforms. The proposed approach of using binary images for DoG filtering can significantly reduce the data size compared to conventional gray-scale based DoG filters, yet binary images still preserve the key features of the image such as contours, edges, and corners. Furthermore, the binary image based DoG filtering can be realized with zero-dimensional convolution and state machine LUTs, which eliminates the major portion of the adder and multiplier blocks that are generally used in conventional DoG filter hardware engines. This enables fast computation along with the data size reduction, which can lead to compact, low-power image and video stitching hardware blocks. The proposed DoG filter using binary images has been implemented on an FPGA (Altera DE2-115), and the results have been verified.
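
A high-level software sketch of the binary-image DoG idea: binarize first, then take the Difference of Gaussians on the 1-bit image. The threshold and sigmas are assumed values, and the paper's zero-dimensional convolution / state-machine-LUT hardware realization is not reproduced here.

```python
# Software sketch of a binary-image DoG (illustrative threshold and sigmas;
# the hardware-oriented 0-d convolution / LUT realization is not shown).
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_dog(gray, threshold=128, sigma1=1.0, sigma2=2.0):
    """Binarize first, then take the Difference of Gaussians on the 1-bit image."""
    binary = (gray >= threshold).astype(float)   # binary image keeps contours/edges
    return gaussian_filter(binary, sigma1) - gaussian_filter(binary, sigma2)

gray = np.random.randint(0, 256, (128, 128))
edges = binary_dog(gray)        # strong responses along the binary contours
```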

A Study of Detecting The Fish Robot Position Using The Object Boundary Algorithm (물체 형상인식 알고리즘을 이용한 물고기 로봇 위치 검출에 관한 연구)

  • Amarnath, Varma Angani;Kang, Min Jeong;Shin, Kyoo Jae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.10a
    • /
    • pp.1350-1353
    • /
    • 2015
  • In this paper, we investigate how to detect fish robot objects in an aquarium. We used the fish robot DOMI ver1.0, which was researched and developed as an aquarium underwater robot. The model of the robot fish was analyzed to maximize its momentum, and the body of the robot was designed through analysis of biological fish swimming. We aimed to find and track the position without external equipment by creating a boundary around the fish robot, focusing on detecting the fish robot in the aquarium using a boundary algorithm. To find the object boundary, the video is filtered into picture frames and converted from RGB to gray; the boundary algorithm, a set of equations that computes the boundary of objects, is then applied. These procedures constitute a kind of image processing that can distinguish objects from the background in the captured video frames. Excellent performance was confirmed in field tests of image filtering, object detection, and the boundary algorithm.
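
A rough OpenCV sketch of the processing chain the abstract outlines: read video frames, filter and convert to gray, and extract the largest object boundary as the robot. The file name, Otsu thresholding, and largest-contour criterion are illustrative assumptions rather than the paper's exact boundary algorithm.

```python
# Illustrative OpenCV pipeline: frames -> gray -> filtering -> boundary of the
# largest object ("aquarium.mp4" and the thresholding choices are assumptions).
import cv2

cap = cv2.VideoCapture("aquarium.mp4")                 # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # RGB/BGR to gray
    blur = cv2.GaussianBlur(gray, (5, 5), 0)            # filtering step
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)          # largest boundary = robot
        m = cv2.moments(c)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print("fish robot position:", (cx, cy))     # centroid of the boundary
cap.release()
```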

A Study on Robustness Indicators for Performance Evaluation of Immersive 360-degree Video Filtering (실감형 360도 영상 필터링 성능 평가를 위한 강인성 지표에 관한 연구)

  • Jang, Seyoung;Yoo, Injae;Lee, Jaecheng;Park, Byeongchan;Kim, Youngmo;Kim, Seok-Yoon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.07a
    • /
    • pp.437-438
    • /
    • 2020
  • The domestic immersive content market is showing an average annual growth rate of 42.9% over the previous year and was expected to reach approximately KRW 5.7271 trillion in 2020. In particular, since 2018 the content market has expanded more than the hardware market. As distribution of immersive content has recently begun in earnest, copyright infringement cases have appeared, but they have received little attention amid the broadening of the market base. Considering that the companies producing immersive works are mostly small businesses and that production costs are high, filtering technology, a copyright protection technology, is absolutely required. However, robustness indicators, the criteria for evaluating the performance of filtering technology, have not yet been established. Therefore, in this paper, we propose robustness indicators for immersive 360-degree video content that are not dependent on any specific technology.

Fast Intraframe Coding for High Efficiency Video Coding

  • Huang, Han;Zhao, Yao;Lin, Chunyu;Bai, Huihui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.3
    • /
    • pp.1093-1104
    • /
    • 2014
  • High Efficiency Video Coding (HEVC) is a new video coding standard that provides much better compression efficiency than its predecessor, H.264/AVC. However, it is computationally more intensive due to its flexible quadtree coding unit structure and larger number of prediction modes. In this paper, a fast intraframe coding scheme is proposed for HEVC. First, a fast bottom-up pruning algorithm is designed to skip the mode decision process or reduce the candidate modes at larger coding unit sizes. Then, a low-complexity rough mode decision process is adopted to choose a small candidate set, followed by early DC and Planar mode decision and mode filtering to further reduce the number of candidate modes. The proposed method is evaluated with the HEVC reference software HM8.2. Averaging over 5 classes of HEVC test sequences, a 41.39% encoding time saving is achieved with only a 0.77% bitrate increase.
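
A control-flow sketch of the staged candidate reduction described above. Every helper below is a placeholder stub (random costs) so the flow runs end to end; the candidate-set sizes, early-exit rule, and filtering ratio are invented for illustration and are not the paper's criteria.

```python
# Placeholder control-flow sketch of the staged intra-mode candidate reduction;
# the stub costs and thresholds are invented and do not reflect the paper.
import random

MODES = ["Planar", "DC"] + [f"Angular{i}" for i in range(2, 35)]

def rough_cost(cu, mode):       # stand-in for a cheap (e.g., SATD-based) cost
    return random.random()

def full_rd_cost(cu, mode):     # stand-in for the full rate-distortion cost
    return random.random()

def fast_intra_mode_decision(cu, skip_large_cu=False):
    if skip_large_cu:                          # 1) bottom-up pruning outcome
        return None
    ranked = sorted(MODES, key=lambda m: rough_cost(cu, m))
    candidates = ranked[:8]                    # 2) rough mode decision
    best = candidates[0]
    if best in ("Planar", "DC"):               # 3) early DC / Planar decision
        return best
    candidates = candidates[:4]                # 4) mode filtering
    return min(candidates, key=lambda m: full_rd_cost(cu, m))

print(fast_intra_mode_decision(cu="32x32 luma CU"))
```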