• Title/Summary/Keyword: Video processor

A Hardware Implementation of EGML-based Moving Object Detection Algorithm (EGML 기반 이동 객체 검출 알고리듬의 하드웨어 구현)

  • Kim, Gyeong-hun;An, Hyo-sik;Shin, Kyung-wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.10 / pp.2380-2388 / 2015
  • A hardware implementation of an MOD (moving object detection) algorithm that uses EGML (effective Gaussian mixture learning)-based background subtraction to detect moving objects in video is described. Approximations of the EGML calculations are applied to reduce hardware complexity, and pipelining is adopted to improve operating speed. The MOD processor, designed in Verilog HDL, was verified by FPGA-in-the-loop verification using MATLAB/Simulink. It occupies 2,218 slices on a Virtex5-XC5VSX95T FPGA device, and its throughput is 102 MSamples/s at a 102 MHz clock frequency. Evaluation of the MOD processor on 12 images from the IEEE CDW-2012 dataset shows an average recall of 0.7631, an average precision of 0.7778, and an average F-measure of 0.7535.
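As a rough illustration of the background-subtraction idea behind such an MOD processor (not the authors' EGML hardware or its approximations), the sketch below classifies each pixel against a single-Gaussian background model with a fixed learning rate; the function name, thresholds, and learning rate are assumptions.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.01, k=2.5):
    """Single-Gaussian background subtraction (simplified stand-in for EGML).

    frame, mean, var : float arrays of the same shape (mean/var can be seeded
                       from the first frame)
    alpha            : learning rate
    k                : distance threshold in standard deviations
    Returns (foreground_mask, updated_mean, updated_var).
    """
    diff = frame - mean
    foreground = (diff * diff) > (k * k) * var            # pixel deviates too far
    bg = ~foreground
    # Update the model only where the pixel still looks like background.
    mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * diff * diff, var)
    return foreground, mean, np.maximum(var, 1e-3)        # keep variance positive
```

The reported metrics combine in the usual way: F-measure = 2 · precision · recall / (precision + recall).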

Hardware Design of Super Resolution on Human Faces for Improving Face Recognition Performance of Intelligent Video Surveillance Systems (지능형 영상 보안 시스템의 얼굴 인식 성능 향상을 위한 얼굴 영역 초해상도 하드웨어 설계)

  • Kim, Cho-Rong;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD / v.48 no.9 / pp.22-30 / 2011
  • Recently, the rising demand for intelligent video surveillance has led to high-performance face recognition systems. A solution for the low-resolution images acquired by long-distance cameras is required to overcome the distance limits of existing face recognition systems. For that reason, this paper proposes a hardware design of an image resolution enhancement algorithm for real-time intelligent video surveillance systems. The algorithm synthesizes a high-resolution face image from an input low-resolution image with the help of a large collection of other high-resolution face images, called the training set. When we evaluated the algorithm on a 32-bit RISC microprocessor, the entire operation took about 25 s, which is inappropriate for real-time target applications. Based on this result, we implemented the hardware module and verified it using a Xilinx Virtex-4 and an ARM9-based embedded processor (S3C2440A). The designed hardware can complete the whole operation within 33 ms, so it can handle 30 frames per second. We expect the proposed hardware to be a solution not only for real-time processing in embedded environments but also for easy integration with existing face recognition systems.
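A minimal sketch of the example-based idea the abstract describes (synthesizing a high-resolution face from a low-resolution input with the help of a high-resolution training set). This is not the paper's hardware algorithm; the nearest-neighbour weighting and all names are assumptions made for illustration.

```python
import numpy as np

def example_based_sr(lr_input, lr_train, hr_train, n_neighbors=5):
    """Estimate a high-resolution face from the closest training examples.

    lr_input : (d,) flattened low-resolution face
    lr_train : (N, d) low-resolution versions of the training faces
    hr_train : (N, D) corresponding high-resolution training faces
    """
    dist = np.linalg.norm(lr_train - lr_input, axis=1)
    idx = np.argsort(dist)[:n_neighbors]          # most similar training faces
    w = 1.0 / (dist[idx] + 1e-8)                  # closer examples weigh more
    w /= w.sum()
    return w @ hr_train[idx]                      # weighted blend of HR faces
```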

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.7 no.2 / pp.124-130 / 1998
  • High-speed 3D vision systems are essential for autonomous robot and vehicle control applications. In our study, a stereo vision process has been developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map. This process is implemented on a VME 150/40 Imaging Technology vision system, a modular system composed of a display card, an acquisition card, a 4-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP with a 64×32-bit instruction cache and two 1024×32-bit internal RAMs. Each is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communication between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses. The VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs perform edge detection, one for the right image and one for the left, and the last processor computes the matching and the 3D calculation. With 512×512-pixel images, this sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. Results could surely be improved by using specially suited multiprocessor cards.
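The last step of the stereo pipeline (computing the 3D map from matched edges) reduces to triangulation from disparity. A minimal sketch follows; the focal length and baseline values are assumed parameters, not figures from the paper.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched edge point from its horizontal disparity.

    x_left, x_right : column of the same edge in the left/right image (pixels)
    focal_px        : focal length expressed in pixels
    baseline_m      : distance between the two cameras (metres)
    """
    disparity = x_left - x_right
    if disparity <= 0:
        return float("inf")          # point at (or beyond) infinity, or bad match
    return focal_px * baseline_m / disparity

# Example: f = 800 px, baseline = 0.12 m, disparity = 16 px -> depth of 6 m.
print(depth_from_disparity(340, 324, 800, 0.12))
```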

Design of the Entropy Processor using the Memory Stream Allocation for the Image Processing (메모리 스트림 할당 기법을 이용한 영상처리용 엔트로피 프로세서 설계)

  • Lee, Seon-Keun;Jeong, Woo-Yeol
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.7 no.5 / pp.1017-1026 / 2012
  • With the rapid growth of the IT industry and the diverse media environment of modern society, real-time video applications such as 3D-TV have become very important issues, and high-quality live video applied in fields such as CCTV has become an important performance factor. However, such high-quality video remains vulnerable when carried over insecure channels, and attempts to remove this weakness with various security algorithms are being pursued very actively. Adding extra security technology normally reduces the processing speed of the image processing itself; this study therefore proposes a design that adds security features while still allowing real-time processing of the transmitted video.

Data Input/Output Time Reduction Scheme with the Simultaneous Transmission Method for Multi-participants Video Conference System (다자간 화상회의 시스템에서의 동시 전송방법에 의한 데이터 입출력 시간 단축 방안)

  • 김현기
    • Journal of Korea Multimedia Society / v.3 no.3 / pp.234-240 / 2000
  • In this paper, we propose a method in which a stream of multimedia data is transferred simultaneously to the main memory and to the multimedia processor from the network interface card over a conventional system bus. The proposed method can reduce the input/output time of multimedia data and improve data streaming on the system bus. We also compared the number of system bus accesses, the number of bus cycles, and the data transmission time of the proposed method with those of conventional methods as the number of participants in a multi-party video conference grows. The performance comparison indicates that the number of bus accesses of the proposed method is reduced by 50%, and the total transmission time by as much as 75%, compared with the conventional method, regardless of the number of participants.
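The 50% figure follows from moving each data block over the shared bus once instead of twice (network interface to main memory, then main memory to the multimedia processor). A small back-of-the-envelope sketch under that assumption; it does not attempt to reproduce the paper's full cycle-level model.

```python
def bus_accesses(blocks, simultaneous):
    """Count shared-bus transfers for one stream of data blocks.

    Conventional path: NIC -> main memory, then memory -> multimedia processor,
    i.e. two transfers per block. Simultaneous path: one transfer per block
    reaches both destinations at once.
    """
    return blocks * (1 if simultaneous else 2)

blocks = 1000
print(bus_accesses(blocks, simultaneous=False))  # 2000 transfers (conventional)
print(bus_accesses(blocks, simultaneous=True))   # 1000 transfers, i.e. 50% fewer
```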

Edge Adaptive Color Interpolation for Ultra-Small HD-Grade CMOS Video Sensor in Camera Phones

  • Jang, Won-Woo;Kim, Joo-Hyun;Yang, Hoon-Gee;Lee, Gi-Dong;Kang, Bong-Soon
    • Journal of Information and Communication Convergence Engineering / v.8 no.1 / pp.51-58 / 2010
  • This paper proposes an edge adaptive color interpolation for an ultra-small HD-grade complementary metal-oxide semiconductor (CMOS) video sensor in camera phones that can process 720p/30fps video. Recently proposed methods with good perceptual image quality reconstruct the green component first and then estimate the red/blue components using the reconstructed green and the neighboring red and blue pixels. However, these methods require bulky line-buffer memories to temporarily store the reconstructed green components. The edge adaptive color interpolation method uses seven or nine patterns to calculate the six edge directions. At the same time, the threshold values are adjusted adaptively by the sum of the color values of the selected pixels. The method selects the most suitable pattern using two flowcharts proposed in this paper and then interpolates the missing color values. For verification, we calculated the peak signal-to-noise ratio (PSNR) of test images processed by the proposed algorithm and compared it with the PSNR of existing methods. The proposed color interpolation was also fabricated in a 0.18-µm CMOS flash memory process.
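The PSNR figure of merit used to compare the interpolators is straightforward to compute; a short sketch for 8-bit images (the evaluation images themselves are not reproduced here):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit images, in dB."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak * peak / mse)
```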

Energy-Efficient Multi- Core Scheduling for Real-Time Video Processing (실시간 비디오 처리에 적합한 에너지 효율적인 멀티코어 스케쥴링)

  • Paek, Hyung-Goo;Yeo, Jeong-Mo;Lee, Wan-Yeon
    • Journal of the Korea Society of Computer and Information / v.16 no.6 / pp.11-20 / 2011
  • In this paper, we propose an optimal scheduling scheme that minimizes the energy consumption of a real-time video task on a multi-core platform supporting dynamic voltage and frequency scaling. Exploiting parallel execution on multiple cores for lower energy consumption, the proposed scheme allocates an appropriate number of cores to the task execution, turns off the power of unused cores, and assigns the lowest clock frequency that meets the deadline. Our experiments show that the proposed scheme saves a significant amount of energy, up to 67% and 89% of the energy consumed by two previous methods that execute the task on a single core and on all cores, respectively.
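A minimal sketch of the kind of search such a scheme performs: for each core count, take the lowest DVFS frequency that still meets the deadline, then keep the lowest-energy combination. The cubic dynamic-power model and perfect parallel speed-up are simplifying assumptions, not the paper's exact model.

```python
def pick_cores_and_freq(cycles, deadline_s, max_cores, freqs_hz):
    """Return (cores, frequency) minimising a simple energy model.

    cycles     : total work of the video task, in clock cycles
    deadline_s : deadline for one frame batch, in seconds
    freqs_hz   : discrete DVFS frequencies, ascending
    Assumes the work parallelises evenly and dynamic power scales as f**3.
    """
    best = None
    for m in range(1, max_cores + 1):
        for f in freqs_hz:                                   # lowest feasible f first
            if cycles / (m * f) <= deadline_s:
                energy = m * (f ** 3) * (cycles / (m * f))   # power * time, summed over cores
                if best is None or energy < best[0]:
                    best = (energy, m, f)
                break                                        # higher f only costs more energy
    if best is None:
        raise ValueError("deadline cannot be met")
    return best[1], best[2]

# Hypothetical workload: more cores allow a lower frequency, hence less energy.
print(pick_cores_and_freq(cycles=6e7, deadline_s=1 / 30, max_cores=4,
                          freqs_hz=[0.4e9, 0.8e9, 1.2e9, 1.6e9]))
```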

Scalable Video Coding with Low Complex Wavelet Transform (공간 웨이블릿 변환의 복잡도를 줄인 스케일러블 비디오 부호화에 관한 연구)

  • Park Seong-Ho;Jeong Se-Yoon;Kim Won-Ha
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.3 s.303 / pp.53-62 / 2005
  • In the decoding process of interframe wavelet coding, the wavelet transform requires huge computational complexity. Since the decoder may be used in various devices such as PDAs, notebooks, or PCs, the decoder's complexity should be adapted to the processor's computational power, so a low-complexity codec is also required for scalable video coding. In this paper, we develop a method of controlling and lowering the complexity of the spatial wavelet transform while sustaining the same coding efficiency as the conventional spatial wavelet transform. In addition, the proposed method may alleviate the ringing effect for slowly changing image sequences.
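For reference, the kind of low-complexity transform this line of work targets can be written with the lifting scheme, which needs only integer adds and shifts. Below is a one-dimensional LeGall 5/3 sketch, a common low-complexity choice; it is not necessarily the transform proposed in the paper, and the circular boundary handling is a simplification.

```python
import numpy as np

def legall53_forward(x):
    """One level of the integer 5/3 lifting wavelet transform of a 1-D signal.

    Returns (lowpass, highpass). The length of x is assumed to be even;
    boundaries are handled circularly for brevity.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: high-pass = odd samples minus the average of their neighbours.
    high = odd - ((even + np.roll(even, -1)) >> 1)
    # Update step: low-pass = even samples plus a correction from the high-pass.
    low = even + ((np.roll(high, 1) + high + 2) >> 2)
    return low, high

low, high = legall53_forward([10, 12, 14, 13, 9, 8, 8, 11])
print(low, high)
```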

The Design of Video Compression Browsing for Low Capacity and High Quality (저용량, 고화질 비디오 압축 브라우징에 대한 설계)

  • 강진석;김무영;김장형
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.11a / pp.193-198 / 1999
  • In the 21st century, multimedia systems feel close at hand in everyday life thanks to rapid advances in computer processing power and in high-speed, high-quality communication services. Limited frequency resources can also be used more efficiently thanks to rapid advances in digital video technology, which is considered superior to analogue technology in information engineering. MPEG-2 was introduced for broadcasting applications such as digital TV and features high definition at a very low bit rate. However, because of its heavy computational load it has so far been implemented with expensive dedicated ASIC chips and is not yet in general use. In this research, noting the rapid improvement of PC processor performance relative to price, MPEG-2 was implemented as real-time software. Although a software implementation of MPEG-2 had been considered impossible, this research demonstrates its feasibility; in particular, the motion vector estimation and compensation algorithm, which requires the most operations, was improved so that real-time processing became possible.
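The motion vector and compensation stage mentioned above is, at its core, a block-matching search. A minimal full-search SAD sketch is given below; it illustrates why this stage dominates the computation, and it does not reproduce the speed-up technique developed in the paper.

```python
import numpy as np

def full_search_mv(cur, ref, by, bx, block=16, radius=8):
    """Best motion vector for the block at (by, bx) by exhaustive SAD search.

    cur, ref : current and reference frames as 2-D uint8 numpy arrays
    Returns (dy, dx) minimising the sum of absolute differences.
    """
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                     # candidate block falls outside the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```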

Signal Converter for 3D Video Signal Display (3차원 영상 디스플레이를 위한 신호 변환 장치)

  • 이영훈;임승수
    • Journal of the Korea Society of Computer and Information / v.6 no.3 / pp.11-16 / 2001
  • This paper proposes methods for input systems that can save video inputs from multiple channels in a PC environment. For 3-D display, the input data from 4 channels are sequentially displayed in turn at a speed 4 times faster than that of a single channel. The system consists of 4 cameras, a monitor for monitoring the input process, an input processor, and a monitor that can display data at a speed of 120 fps. This paper also discusses the operation of the system and the requirements of the system. The circuit of the proposed system is verified using PSpice simulation.
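The timing relationship that drives the converter (four 30 fps channels merged into one 120 fps stream) can be illustrated with a trivial sketch; the frame-sequential ordering is an assumption based on the description above, not a detail taken from the paper.

```python
def interleave_channels(channels):
    """Interleave frames of several synchronised channels into one sequence.

    channels : list of per-channel frame lists, all of the same length.
    Four 30 fps channels therefore yield one 120 fps frame-sequential stream.
    """
    return [frame for group in zip(*channels) for frame in group]

# Four channels, two frames each -> output order A0 B0 C0 D0 A1 B1 C1 D1.
cams = [["A0", "A1"], ["B0", "B1"], ["C0", "C1"], ["D0", "D1"]]
print(interleave_channels(cams))
```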
