• Title/Summary/Keyword: Moving object edge


Implementation of a Task Level Pipelined Multicomputer RV860-PIPE for Computer Vision Applications (컴퓨터 비젼 응용을 위한 태스크 레벨 파이프라인 멀티컴퓨터 RV860-PIPE의 구현)

  • Lee, Choong-Hwan;Kim, Jun-Sung;Park, Kyu-Ho
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.38-48
    • /
    • 1996
  • We implemented and evaluated the performance of a task-level pipelined multicomputer, "RV860-PIPE (Realtime Vision i860 system using PIPEline)", for computer vision applications. RV860-PIPE is a message-passing MIMD computer with a ring interconnection network, which is appropriate for vision processing. We designed the node computer of RV860-PIPE around a 64-bit microprocessor so that it offers generality and high processing power for various vision algorithms. Furthermore, to reduce the communication overhead between node computers and between a node computer and the frame grabber, we designed dedicated high-speed communication channels between them. We showed the practical applicability of the implemented system by evaluating the performance of various computer vision applications such as edge detection, real-time moving object tracking, and real-time face recognition.

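The task-level pipelining described above can be illustrated in software. Below is a minimal sketch, assuming three hypothetical stages (smooth, detect_edges, track) chained by queues that stand in for RV860-PIPE's dedicated inter-node channels; it is a conceptual illustration of task-level pipelining, not the authors' node-computer implementation.

```python
# Task-level pipeline sketch: each stage runs as its own process and passes
# frames downstream, so different frames occupy different stages at once.
import multiprocessing as mp

def stage(work_fn, in_q, out_q):
    """Run one pipeline stage: read a frame, process it, pass it on."""
    while True:
        frame = in_q.get()
        if frame is None:          # poison pill ends the pipeline
            out_q.put(None)
            break
        out_q.put(work_fn(frame))

def smooth(frame):       return frame   # placeholder node tasks
def detect_edges(frame): return frame
def track(frame):        return frame

if __name__ == "__main__":
    q0, q1, q2, q3 = (mp.Queue() for _ in range(4))
    stages = [smooth, detect_edges, track]
    queues = [(q0, q1), (q1, q2), (q2, q3)]
    procs = [mp.Process(target=stage, args=(f, i, o))
             for f, (i, o) in zip(stages, queues)]
    for p in procs:
        p.start()
    for frame_id in range(8):      # frames from a frame grabber would go here
        q0.put(frame_id)
    q0.put(None)
    while q3.get() is not None:    # drain results from the last stage
        pass
    for p in procs:
        p.join()
```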

Robust Method of Updating Reference Background Image in Unstable Illumination Condition (불안정한 조명 환경에 강인한 참조 배경 영상의 갱신 기법)

  • Ji, Young-Suk;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.1
    • /
    • pp.91-102
    • /
    • 2010
  • Conventional surveillance and vehicle detection systems have great difficulty finding objects under limited and unstable illumination conditions. This paper proposes a robust method of adaptively updating a reference background image to solve the problems caused by unstable illumination. The first input image is set as the reference background image and is divided into three block categories according to its edge components. A block state analysis, which uses the rate of change of brightness, the stability, the color information, and the edge component of each block, is then applied to the input image. On the reference background image, neighboring blocks having the same state as an updated block are merged into a single block. The proposed method can generate a robust reference background image because it distinguishes moving object areas from unstable illumination, and it updates the reference background image efficiently in terms of both management and processing time. Experiments demonstrate the superiority of the proposed method, which operates in a stable manner in situations where the illumination changes quickly.
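
The block-wise update idea can be sketched as follows, assuming grayscale frames and an illustrative brightness-change threshold; the paper's full block-state analysis (stability, color, and edge components) and its block merging are not reproduced here.

```python
# Blend the new frame into the background only in blocks whose mean
# brightness change is small (treated as illumination drift, not motion).
import numpy as np

def update_background(background, frame, block=16,
                      change_thresh=15.0, alpha=0.05):
    bg = background.astype(np.float32)
    fr = frame.astype(np.float32)
    h, w = bg.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = bg[y:y + block, x:x + block]
            f = fr[y:y + block, x:x + block]
            if abs(f.mean() - b.mean()) < change_thresh:   # stable block
                bg[y:y + block, x:x + block] = (1 - alpha) * b + alpha * f
    return bg.astype(background.dtype)
```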

Depth Extraction of Partially Occluded 3D Objects Using Axially Distributed Stereo Image Sensing

  • Lee, Min-Chul;Inoue, Kotaro;Konishi, Naoki;Lee, Joon-Jae
    • Journal of information and communication convergence engineering
    • /
    • v.13 no.4
    • /
    • pp.275-279
    • /
    • 2015
  • There are several methods for recording three-dimensional (3D) information about objects, such as lens-array-based integral imaging, synthetic aperture integral imaging (SAII), computer-synthesized integral imaging (CSII), axially distributed image sensing (ADS), and axially distributed stereo image sensing (ADSS). The ADSS method is capable of recording partially occluded 3D objects and reconstructing high-resolution slice plane images. In this paper, we present a computational method for depth extraction of partially occluded 3D objects using ADSS. In the proposed method, high-resolution elemental stereo image pairs are recorded by simply moving the stereo camera along the optical axis, and the recorded elemental image pairs are used to reconstruct 3D slice images with a computational reconstruction algorithm. To extract the depth information of a partially occluded 3D object, we utilize edge enhancement and a simple block matching algorithm between the two reconstructed slice images of each pair. To demonstrate the proposed method, we carry out preliminary experiments and present the results.
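
A minimal sketch of the edge-enhancement plus block-matching step on a reconstructed slice image pair is given below, assuming rectified grayscale inputs; the window size, disparity range, and Laplacian edge enhancement are illustrative choices, not the paper's values.

```python
# Edge-enhance both slice images, then match blocks along scanlines with a
# sum-of-absolute-differences (SAD) cost to estimate a disparity (depth) map.
import cv2
import numpy as np

def block_disparity(left, right, block=15, max_disp=64):
    le = cv2.Laplacian(left, cv2.CV_32F)      # simple edge enhancement
    re = cv2.Laplacian(right, cv2.CV_32F)
    h, w = le.shape
    half = block // 2
    disp = np.zeros((h, w), np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = le[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = re[y - half:y + half + 1,
                          x - d - half:x - d + half + 1]
                cost = np.sum(np.abs(patch - cand))   # SAD matching cost
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```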

Content-Based Video Retrieval Algorithms using Spatio-Temporal Information about Moving Objects (객체의 시공간적 움직임 정보를 이용한 내용 기반 비디오 검색 알고리즘)

  • Jeong, Jong-Myeon;Moon, Young-Shik
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.9
    • /
    • pp.631-644
    • /
    • 2002
  • In this paper, efficient algorithms for content-based video retrieval using motion information are proposed, including temporal scale-invariant retrieval and temporal scale-absolute retrieval. In temporal scale-invariant video retrieval, the distance transformation is performed on each trail image in the database. Then, for a given query trail, the pixel values along the query trail are summed in each distance image to compute the average distance between the trails of the query image and the database image, since the intensity of each pixel in a distance image represents the distance from that pixel to the nearest edge pixel. For temporal scale-absolute retrieval, a new coding scheme referred to as the Motion Retrieval Code is proposed. This code is designed to represent object motions in line with the human visual sense so that retrieval performance can be improved. The proposed coding scheme also achieves fast matching, since the similarity between two motion vectors can be computed by simple bit operations. The efficiency of the proposed methods is shown by experimental results.
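
The temporal scale-invariant matching step can be sketched as below, assuming trail images are binary masks with trail pixels set to 255 and the database is a name-to-image dictionary; the Motion Retrieval Code used for scale-absolute retrieval is not shown.

```python
# Rank database trail images by the average distance from the query trail
# to the nearest database-trail pixel, using a distance transform.
import cv2
import numpy as np

def trail_distance(db_trail, query_trail):
    # distanceTransform measures distance to the nearest zero pixel,
    # so invert the mask to put zeros on the trail itself.
    inverted = cv2.bitwise_not(db_trail)
    dist = cv2.distanceTransform(inverted, cv2.DIST_L2, 3)
    ys, xs = np.nonzero(query_trail)
    return float(dist[ys, xs].mean())

def retrieve(database, query_trail, top_k=5):
    scores = [(trail_distance(t, query_trail), name)
              for name, t in database.items()]
    return sorted(scores)[:top_k]
```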

Method of Video Stitching based on Minimal Error Seam (최소 오류 경계를 활용한 동적 물체 기반 동영상 정합 방안)

  • Kang, Jeonho;Kim, Junsik;Kim, Sang-IL;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.24 no.1
    • /
    • pp.142-152
    • /
    • 2019
  • There is growing interest in ultra-high-resolution content that gives a more realistic sense of presence than existing broadcast content. However, providing ultra-high-resolution content with existing broadcast services runs into limitations in the view angle and resolution of the image acquisition device. To solve this problem, much research has been conducted on stitching, an image synthesis method that uses multiple input devices. In this paper, we propose a method of dynamic-object-based video stitching using a minimal error seam in order to overcome the temporal-invariance degradation of moving objects in the stitching process of horizontally oriented videos.
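
A minimal sketch of a minimal-error seam computed by dynamic programming over the overlap of two frames follows, assuming equally sized grayscale overlap regions; the paper's handling of moving objects along the seam is not modeled here.

```python
# Find, for each row, the column of the vertical seam that minimizes the
# accumulated squared difference between the two overlapping images.
import numpy as np

def minimal_error_seam(overlap_a, overlap_b):
    err = (overlap_a.astype(np.float32) - overlap_b.astype(np.float32)) ** 2
    h, w = err.shape
    cost = err.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()   # best predecessor
    seam = np.zeros(h, np.int64)
    seam[-1] = int(cost[-1].argmin())
    for y in range(h - 2, -1, -1):                   # backtrack upward
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(cost[y, lo:hi].argmin())
    return seam
```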

Sensor Fusion Docking System of Drone and Ground Vehicles Using Image Object Detection (영상 객체 검출을 이용한 드론과 지상로봇의 센서 융합 도킹 시스템)

  • Beck, Jong-Hwan;Park, Hee-Su;Oh, Se-Ryeong;Shin, Ji-Hun;Kim, Sang-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.4
    • /
    • pp.217-222
    • /
    • 2017
  • Recent studies of robots for working in dangerous places have been carried out on large unmanned ground vehicles or four-legged robots, which have the advantage of long working time, but these are difficult to apply in practical hazardous fields that require a real-time system with high mobility and the capability of delicate work. This research presents a collaborative docking system of a drone and a ground vehicle that combines an image processing algorithm and laser sensors for effective detection of docking markers, and is thus capable of moving a long distance and performing very delicate work. We propose the sensor fusion docking system of a drone and a ground vehicle, together with two template matching methods appropriate for this application. The system showed a 95% docking success rate in 50 docking attempts.
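
A minimal sketch of marker detection by template matching with OpenCV, assuming a grayscale frame and marker template and an illustrative confidence threshold; the paper's two specific matching methods and the laser-sensor fusion are not reproduced.

```python
# Locate the docking marker in a frame; return its top-left corner or None.
import cv2

def find_marker(frame_gray, template_gray, threshold=0.8):
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)   # best match location
    return max_loc if max_val >= threshold else None
```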

A Study on Correcting Virtual Camera Tracking Data for Digital Compositing (디지털영상 합성을 위한 가상카메라의 트래킹 데이터 보정에 관한 연구)

  • Lee, Junsang;Lee, Imgeun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.11
    • /
    • pp.39-46
    • /
    • 2012
  • The development of the computer has widened the ways in which natural objects and scenes can be expressed. Cutting-edge computer graphics technologies can effectively create any image we can imagine. Although computer graphics plays an important role in film and video production, the state of the domestic content production industry makes it hard to pursue production and research at the same time. In digital compositing, the match moving stage, which composites the captured real sequence with computer graphics imagery, goes through many complicated processes. Camera tracking is the most important issue at this stage; it comprises the estimation of the 3D trajectory and the optical parameters of the real camera. Because the estimation is based only on the captured sequence, many errors make the process difficult. In this paper we propose a method for correcting the tracking data. The proposed method can alleviate unwanted camera shaking and object bouncing effects in the composited scene.
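
A minimal sketch of one possible correction step is shown below: smoothing the estimated per-frame camera positions with a moving average to suppress the jitter that appears as camera shake in the composited scene. The (N, 3) trajectory layout and window size are assumptions, and the paper's correction of optical parameters is not covered.

```python
# Moving-average filter over each coordinate of an (N, 3) camera trajectory.
import numpy as np

def smooth_track(positions, window=9):
    pad = window // 2
    # Edge-pad so the filtered trajectory keeps the original length.
    padded = np.pad(positions, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(window, np.float32) / window
    return np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                     for i in range(positions.shape[1])], axis=1)
```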