• Title/Summary/Keyword: Single camera


Implementation of Real-Time Multi-Camera Video Surveillance System with Automatic Resolution Control Using Motion Detection (움직임 감지를 사용하여 영상 해상도를 자동 제어하는 실시간 다중 카메라 영상 감시 시스템의 구현)

  • Jung, Seulkee;Lee, Jong-Bae;Lee, Seongsoo
    • Journal of IKEEE / v.18 no.4 / pp.612-619 / 2014
  • This paper proposes a real-time multi-camera video surveillance system with automatic resolution control using motion detection. In ordinary times, it acquires 4 channels of QVGA images, merges them into a single VGA image, and transmits it. When motion is detected, it automatically increases the resolution of the motion-occurring channel to VGA and decreases those of the other 3 channels to QQVGA, and these images are then overlaid and transmitted. Thus, it can magnify and monitor the motion-occurring channel while maintaining the transmission bandwidth and still monitoring all other channels. When synthesized in a 0.18 um process, the maximum operating frequency is 110 MHz, which can theoretically support 4 HD cameras.
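
As a rough illustration of the resolution-control idea above (the paper's design is a hardware implementation, not software), the following Python sketch tiles four QVGA channels into one VGA frame in ordinary times and, when motion is detected, magnifies that channel to VGA while overlaying the other three at QQVGA. The frame sizes follow the abstract; the overlay placement and the nearest-neighbour scaler are assumptions for illustration only.

```python
# Software sketch only; the paper's system is a synthesized hardware design.
import numpy as np

QVGA, QQVGA, VGA = (240, 320), (120, 160), (480, 640)

def resize(frame, size):
    """Nearest-neighbour resize; stands in for the system's own scaler."""
    h, w = size
    ys = np.arange(h) * frame.shape[0] // h
    xs = np.arange(w) * frame.shape[1] // w
    return frame[ys][:, xs]

def compose(frames, motion_channel=None):
    """Merge 4 camera frames into a single VGA frame for transmission."""
    out = np.zeros(VGA, dtype=np.uint8)
    if motion_channel is None:
        # Ordinary times: each channel at QVGA in a 2x2 grid.
        for i, f in enumerate(frames):
            r, c = divmod(i, 2)
            out[r*240:(r+1)*240, c*320:(c+1)*320] = resize(f, QVGA)
    else:
        # Motion detected: magnify that channel to full VGA and overlay
        # the other three at QQVGA along the top edge (placement assumed).
        out[:, :] = resize(frames[motion_channel], VGA)
        col = 0
        for i, f in enumerate(frames):
            if i == motion_channel:
                continue
            out[0:120, col*160:(col+1)*160] = resize(f, QQVGA)
            col += 1
    return out

# Example: four synthetic camera frames, motion detected on channel 2.
cams = [np.full((480, 640), 60 * (i + 1), dtype=np.uint8) for i in range(4)]
tx = compose(cams, motion_channel=2)
print(tx.shape)  # (480, 640) -> transmission bandwidth stays constant
```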

Development of 3-D Volume PIV (3차원 Volume PIV의 개발)

  • Choi, Jang-Woon;Nam, Koo-Man;Lee, Young-Ho;Kim, Mi-Young
    • Transactions of the Korean Society of Mechanical Engineers B / v.27 no.6 / pp.726-735 / 2003
  • A process of 3-D particle image velocimetry, called here '3-D volume PIV', was developed for the full-field measurement of 3-D complex flows. The present method includes the coordinate transformation from image to camera, calibration of the camera by a calibrator based on the collinear equation, stereo matching of particles by approximation of the epipolar lines, accurate calculation of 3-D particle positions, identification of velocity vectors by a 3-D cross-correlation equation, removal of error vectors by a statistical method followed by a continuity-equation criterion, and finally 3-D animation as post-processing. In principle, since only two frame images are necessary for a single instantaneous analysis of the 3-D flow field, more effective vectors are obtainable than with previous multi-frame vector algorithms. An experimental system was also used for the application of the proposed method. Three analog CCD cameras and halogen lamp illumination were adopted to capture the wake flow behind a bluff obstacle. Among 200 effective particles in two consecutive frames, an average of 170 vectors was obtained in the present study.
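
To make the cross-correlation step above concrete, here is a minimal NumPy sketch (not the paper's full algorithm) that voxelises the reconstructed 3-D particle positions of two consecutive frames and takes the peak of a direct 3-D cross-correlation as the displacement, i.e. the velocity vector for that interrogation volume. Grid size, voxel pitch, and search range are illustrative assumptions.

```python
import numpy as np

def voxelise(points, shape=(32, 32, 32), pitch=1.0):
    """Mark voxels that contain at least one reconstructed particle centre."""
    grid = np.zeros(shape, dtype=np.float32)
    idx = np.floor(points / pitch).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    grid[tuple(idx[ok].T)] = 1.0
    return grid

def correlation_peak(grid_a, grid_b, max_shift=5):
    """Direct 3-D cross-correlation over a small search range."""
    best, best_shift = -1.0, (0, 0, 0)
    for dz in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(grid_a, (dz, dy, dx), axis=(0, 1, 2))
                score = float(np.sum(shifted * grid_b))
                if score > best:
                    best, best_shift = score, (dz, dy, dx)
    return np.array(best_shift), best

# Example: 200 random particles displaced uniformly by (2, 1, 0) voxels.
rng = np.random.default_rng(0)
frame1 = rng.uniform(5, 27, size=(200, 3))
frame2 = frame1 + np.array([2.0, 1.0, 0.0])
shift, score = correlation_peak(voxelise(frame1), voxelise(frame2))
print("estimated displacement (voxels):", shift)   # -> [2 1 0]
```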

A Study on the Characteristics on Ultra Precision Machining of IR Camera Mirror (적외선 카메라용 반사경의 초정밀 절삭특성에 관한 연구)

  • Kim Gun-Hee;Kim Hyo-Sik;Shin Hyun-Soo;Won Jong-Ho;Yang Sun-Choel
    • Journal of the Korean Society for Precision Engineering / v.23 no.5 s.182 / pp.44-50 / 2006
  • This paper describes the technique of ultra-precision machining for an infrared (IR) camera aspheric mirror. A 200 mm diameter aspheric mirror was fabricated on an SPDTM (Single Point Diamond Turning Machine). Aluminum alloy, used as the mirror substrate, is known to be easily machined but not polishable due to its ductility. For a large aspheric reflector without a polishing process, a surface roughness of 5 nm Ra and a form error of λ/2 (λ = 632.8 nm) over the 200 mm reference curved surface are required. The purpose of this research is to find the optimum machining conditions for cutting a reflector of Al6061-T651 and to apply the SPDTM technique to the manufacturing of ultra-precision optical components, namely the Al-alloy aspheric reflector. The cutting force and surface roughness are measured for each cutting condition (feed rate, depth of cut, and cutting speed) while performing the cutting process on the diamond turning machine. As a result, the surface roughness is best at a feed rate of 1 mm/min, a depth of cut of 4 μm, and a cutting speed of 220 m/min. The primary mirror for the IR camera could be machined on the diamond turning machine with a surface roughness within 0.483 μm Rt on the aspheric surface.
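
For context only (this relation is standard for single-point turning and is not taken from the paper), the theoretical peak-to-valley roughness can be estimated from the feed per revolution f and the tool nose radius r as Rt ≈ f²/(8r); the values below are hypothetical and merely illustrate the order of magnitude involved.

```python
# Standard turning relation, not the paper's measured data.
feed_per_rev_mm = 0.001   # assumed 1 um feed per spindle revolution
nose_radius_mm = 1.0      # assumed 1 mm diamond tool nose radius

rt_mm = feed_per_rev_mm ** 2 / (8 * nose_radius_mm)
print(f"theoretical Rt ~ {rt_mm * 1e6:.3f} nm")  # ~0.125 nm for these assumed values
```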

Efficient Lane Detection for Preceding Vehicle Extraction by Limiting Search Area of Sequential Images (전방의 차량포착을 위한 연속영상의 대상영역을 제한한 효율적인 차선 검출)

  • Han, Sang-Hoon;Cho, Hyung-Je
    • The KIPS Transactions:PartB / v.8B no.6 / pp.705-717 / 2001
  • In this paper, we propose a rapid lane detection method for extracting a preceding vehicle from sequential images captured by a single monocular CCD camera. For each image we detect the positions of the lanes within a limited search area that is unlikely to be occluded, and from them compute the slopes of the detected lanes. We then determine a search area where vehicles are likely to exist and extract the position of the preceding vehicle within that area from its edge components by applying a structured method. To verify the effectiveness of the proposed method, we captured road images with a notebook PC and a PC CCD camera and present results such as lane detection processing time, accuracy, and vehicle detection for those images.
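
The abstract does not detail the "structured method", so the following OpenCV sketch only illustrates the ROI-limited idea: lanes are found with Canny edges and a probabilistic Hough transform restricted to the lower part of the frame, and a vehicle search area is then bounded between the left and right lanes. The ROI fraction, thresholds, and synthetic test image are assumptions.

```python
import cv2
import numpy as np

def detect_lanes(frame_gray, roi_top_frac=0.55):
    """Detect lane segments only in the lower part of the frame."""
    h, w = frame_gray.shape
    y0 = int(h * roi_top_frac)
    edges = cv2.Canny(frame_gray[y0:, :], 50, 150)        # limited search area
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=10)
    lanes = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if x2 == x1:
                continue
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) > 0.3:                          # discard near-horizontal edges
                lanes.append((slope, (x1, y1 + y0), (x2, y2 + y0)))
    return lanes

def vehicle_search_area(lanes, h):
    """Bound the preceding-vehicle search region between left and right lanes."""
    left = [l for l in lanes if l[0] < 0]
    right = [l for l in lanes if l[0] > 0]
    if not left or not right:
        return None
    x_min = min(p[0] for _, p1, p2 in left for p in (p1, p2))
    x_max = max(p[0] for _, p1, p2 in right for p in (p1, p2))
    return (x_min, int(h * 0.4), x_max, h)                # (x1, y1, x2, y2)

# Example on a synthetic road-like image with two painted lanes.
img = np.zeros((480, 640), dtype=np.uint8)
cv2.line(img, (100, 470), (300, 280), 255, 3)             # left lane
cv2.line(img, (540, 470), (340, 280), 255, 3)             # right lane
lanes = detect_lanes(img)
print(len(lanes), vehicle_search_area(lanes, 480))
```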


Evaluation of a Corrected Cam for an Interchangeable Lens with a Distance Window

  • Kim, Jin Woo;Ryu, Jae Myung;Jo, Jae Heung;Kim, Young-Joo
    • Journal of the Optical Society of Korea / v.18 no.1 / pp.23-31 / 2014
  • Recently, the number of camera companies that commercialize interchangeable lens systems such as digital single-lens reflex (DSLR) and compact system camera (CSC) lenses has been gradually increasing. These interchangeable lens lines include various lenses with distinct specifications. Among these specifications, the distance window is the function most preferred by customers. Mechanical manual zoom and manual focus in these high-end camera lenses with a distance window are particularly desirable specifications and are required for product quality. However, the AF lens group is linked to the zoom cam and moves with it. Because the AF lens group also moves with the object distance, the distance window cannot be realized with a zoom locus calculation alone. In this paper, to solve this problem, we suggest an optical calculation method for a corrected AF zoom cam for an interchangeable lens with a distance window to achieve product differentiation, and we analyze the error in the calculation.
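
A corrected cam effectively makes the AF-group position a function of two variables rather than one. The sketch below is not the paper's optical calculation; it only illustrates, with a made-up lookup table and two-stage interpolation, why a zoom locus alone cannot drive a distance window: the AF position must be interpolated over both the zoom cam angle and the object distance.

```python
import numpy as np

zoom_angles = np.array([0.0, 30.0, 60.0, 90.0])           # cam rotation (deg)
object_dists = np.array([0.5, 1.0, 3.0, np.inf])          # metres; inf = infinity focus
# Hypothetical AF-group positions (mm) for each (distance, zoom) pair.
af_table = np.array([
    [2.10, 2.45, 2.90, 3.40],   # 0.5 m
    [1.60, 1.85, 2.20, 2.60],   # 1.0 m
    [1.20, 1.35, 1.55, 1.80],   # 3.0 m
    [1.00, 1.10, 1.25, 1.45],   # infinity
])

def af_position(zoom_deg, dist_m):
    """Two-stage lookup of the corrected-cam AF position."""
    diopt = 1.0 / object_dists                              # infinity maps to 0 dioptres
    d = np.clip(1.0 / dist_m, 0.0, diopt[0])
    # Interpolate along zoom for each tabulated distance, then along distance.
    per_dist = [np.interp(zoom_deg, zoom_angles, row) for row in af_table]
    order = np.argsort(diopt)                               # np.interp needs ascending x
    return float(np.interp(d, diopt[order], np.array(per_dist)[order]))

print(af_position(45.0, 1.5))   # AF position between the 1 m and 3 m loci
```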

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing / v.5 no.2 / pp.96-105 / 2004
  • Recovering a 3D image from 2D requires depth information for each picture element. The manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to estimate the relative depth information of every region from a single-view image sequence with camera translation. The paper is based on the fact that the motion of every point in an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing the motion vectors and then calculates the depth of each region relative to that average. Simulation results show that the estimated depth of regions belonging to near or far objects agrees with the relative depth that humans perceive.
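
The underlying relation is that, under pure camera translation, image motion is inversely proportional to depth, so after rotation/zoom compensation the relative depth of a region can be taken as the ratio of the average frame motion to the region's motion. The NumPy sketch below illustrates this; the block-wise motion field and region labels are synthetic assumptions, not the paper's data.

```python
import numpy as np

def relative_region_depth(motion_vectors, region_labels):
    """motion_vectors: (H, W, 2) compensated vectors; region_labels: (H, W) ints."""
    mag = np.linalg.norm(motion_vectors, axis=2)
    frame_motion = mag.mean()                       # proxy for the average frame depth
    depths = {}
    for label in np.unique(region_labels):
        region_motion = mag[region_labels == label].mean()
        # Larger motion -> nearer object -> smaller relative depth.
        depths[int(label)] = frame_motion / max(region_motion, 1e-6)
    return depths

# Example: region 0 (background, small motion) vs region 1 (near object, large motion).
mv = np.zeros((64, 64, 2))
labels = np.zeros((64, 64), dtype=int)
mv[..., 0] = 1.0                # background moves ~1 px
mv[16:48, 16:48, 0] = 4.0       # near object moves ~4 px
labels[16:48, 16:48] = 1
print(relative_region_depth(mv, labels))   # region 1 gets depth < 1 (nearer than average)
```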


A Framework for Real Time Vehicle Pose Estimation based on synthetic method of obtaining 2D-to-3D Point Correspondence

  • Yun, Sergey;Jeon, Moongu
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.904-907 / 2014
  • In this work we present a robust and fast approach to estimate 3D vehicle pose that can provide results under specific traffic surveillance conditions. These constraints are a single fixed CCTV camera located relatively high above the ground, a pitch axis parallel to the reference plane, and a camera focal length assumed to be known. The benefit of our framework is that it does not require prior training or camera calibration and does not rely heavily on the 3D model shape, as most common techniques do. It also deals with poor shape conditions of the objects, since we focus on low-resolution surveillance scenes. The pose estimation task is formulated as a PnP problem; to solve it we use the well-known "POSIT" algorithm [1]. In order to use this algorithm, at least 4 non-coplanar point correspondences are required. To find them, we propose a set of techniques based on model and scene geometry. Our framework can be applied to real-time video sequences. Results for the estimated vehicle pose are shown on real image scenes.
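
The paper uses POSIT; the sketch below substitutes OpenCV's solvePnP as a stand-in to show the same PnP formulation with a handful of non-coplanar vehicle-model points. The box-like model dimensions, the camera intrinsics, and the synthesised image correspondences are all assumptions for illustration.

```python
import cv2
import numpy as np

# Simplified vehicle model: 6 non-coplanar points (metres) in the model frame.
model_pts = np.array([
    [0.0, 0.0, 0.0],    # front-left bottom
    [1.8, 0.0, 0.0],    # front-right bottom
    [0.0, 0.0, 4.5],    # rear-left bottom
    [1.8, 0.0, 4.5],    # rear-right bottom
    [0.3, 1.4, 1.2],    # roof front-left (breaks coplanarity)
    [1.5, 1.4, 3.2],    # roof rear-right
], dtype=np.float64)

# Assumed intrinsics of the fixed CCTV camera (known focal length).
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)

# Image correspondences would come from the scene-geometry techniques in
# the paper; here they are synthesised from a known ground-truth pose.
rvec_true = np.array([[0.2], [0.5], [0.0]])
tvec_true = np.array([[-1.0], [2.5], [12.0]])
img_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print(ok, rvec.ravel(), tvec.ravel())   # recovers the ground-truth pose
```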

Camera-based Dog Unwanted Behavior Detection (영상 기반 강아지의 이상 행동 탐지)

  • Atif, Othmane;Lee, Jonguk;Park, Daehee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.419-422 / 2019
  • The recent increase in single-person households and family income has led to an increase in the number of pet owners. However, because owners cannot attend to their pets 24 hours a day, pets, and especially dogs, tend to display unwanted behaviors that can be harmful to themselves and their environment when left alone. Therefore, detecting those behaviors when the owner is absent is necessary to suppress them and prevent any damage. In this paper, we propose a camera-based system that detects a set of normal and unwanted behaviors using deep learning algorithms to monitor dogs left alone at home. The frames collected from the camera are arranged into sequences of RGB frames and their corresponding optical-flow sequences, and features are then extracted from each data flow using pre-trained VGG-16 models. The extracted features from each sequence are concatenated and input to a bi-directional LSTM network that classifies the dog's action into one of the targeted classes. The experimental results show that our method achieves good performance, exceeding 0.9 in precision, recall, and F1 score.
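
A minimal PyTorch sketch of the pipeline described above: per-frame features from two VGG-16 streams (RGB and optical flow rendered as 3-channel images), concatenated per time step and classified with a bidirectional LSTM. Layer sizes, sequence length, and class count are assumptions, and weights=None is used here so the sketch runs without downloading weights (the paper uses pre-trained VGG-16 models).

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamBiLSTM(nn.Module):
    def __init__(self, num_classes=5, hidden=256):
        super().__init__()
        # Two VGG-16 backbones; in the paper they are pre-trained.
        self.rgb_cnn = models.vgg16(weights=None).features
        self.flow_cnn = models.vgg16(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)               # 512-d per frame per stream
        self.lstm = nn.LSTM(input_size=1024, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def extract(self, cnn, clip):                          # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.pool(cnn(clip.flatten(0, 1))).flatten(1)   # (B*T, 512)
        return feats.view(b, t, -1)

    def forward(self, rgb, flow):
        x = torch.cat([self.extract(self.rgb_cnn, rgb),
                       self.extract(self.flow_cnn, flow)], dim=2)  # (B, T, 1024)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])                         # class scores per clip

# Example: one clip of 8 frames at 112x112 for each stream.
model = TwoStreamBiLSTM()
rgb = torch.randn(1, 8, 3, 112, 112)
flow = torch.randn(1, 8, 3, 112, 112)
print(model(rgb, flow).shape)    # torch.Size([1, 5])
```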

On low cost model-based monitoring of industrial robotic arms using standard machine vision

  • Karagiannidisa, Aris;Vosniakos, George C.
    • Advances in robotics research / v.1 no.1 / pp.81-99 / 2014
  • This paper contributes towards the development of a computer vision system for telemonitoring of industrial articulated robotic arms. The system aims to provide precise real-time measurements of the joint angles by employing low-cost cameras and visual markers on the body of the robot. To achieve this, a mathematical model connecting image features and joint angles was developed, covering rotation of a single joint whose axis is parallel to the visual projection plane. The feature examined during image processing is the varying area of a given circular target placed on the body of the robot, as registered by the camera during rotation of the arm. In order to distinguish between rotation directions, four targets placed every 90° were used and observed by two cameras at suitable angular distances. The results were deemed acceptable considering the camera cost and the lighting conditions of the workspace. A computational error analysis explored how deviations from the ideal camera positions affect the measurements and led to appropriate corrections. The method is deemed extensible to multiple-joint motion of a known kinematic chain.
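
The geometric core of the method is that a circular target of frontal projected area A0 appears with area A ≈ A0·cos θ when the joint rotates it by θ about an axis parallel to the image plane, so θ ≈ arccos(A/A0). The sketch below shows only this unsigned recovery; resolving the rotation direction is why the paper uses four targets and two cameras, and that part is omitted here. The pixel areas are made up.

```python
import math

def joint_angle_from_area(observed_area_px, reference_area_px):
    """Estimate the unsigned rotation angle (degrees) of a circular target."""
    ratio = max(0.0, min(1.0, observed_area_px / reference_area_px))
    return math.degrees(math.acos(ratio))

# Example: the target registered 9000 px^2 at the reference pose and
# 6364 px^2 after rotation -> about 45 degrees (values are hypothetical).
print(round(joint_angle_from_area(6364.0, 9000.0), 1))
```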

Vehicle-Level Traffic Accident Detection on Vehicle-Mounted Camera Based on Cascade Bi-LSTM

  • Son, Hyeon-Cheol;Kim, Da-Seul;Kim, Sung-Young
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.167-175 / 2020
  • In this paper, we propose a traffic accident detection method for vehicle-mounted cameras. In the proposed method, the minimum bounding box coordinates, the central coordinates on the bird's-eye view, and the motion vectors of each vehicle object, together with the ego-motion of the vehicle equipped with the dash-cam, are extracted from the dash-cam video. Using these four kinds of extracted features as the input of a Bi-LSTM (bidirectional LSTM), the accident probability (score) is predicted. To investigate the effect of each input feature on the probability of an accident, we analyze the detection performance when a single feature is used as input and when a combination of features is used as input, respectively; in these two cases, different detection models are defined and used. The Bi-LSTM is used as a cascade, especially when a combination of the features is used as input. The proposed method shows 76.1% precision and 75.6% recall, which is superior to our previous work.
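
The abstract does not spell out the cascade structure, so the PyTorch sketch below shows one plausible reading only: a first-stage Bi-LSTM per feature stream (bounding box, bird's-eye-view centre, motion vector, ego-motion), whose outputs are concatenated and fed to a second-stage Bi-LSTM that predicts the accident score. All dimensions are assumed.

```python
import torch
import torch.nn as nn

class CascadeBiLSTM(nn.Module):
    def __init__(self, feature_dims=(4, 2, 2, 3), hidden=64):
        super().__init__()
        # One first-stage Bi-LSTM per feature stream.
        self.stage1 = nn.ModuleList(
            nn.LSTM(d, hidden, batch_first=True, bidirectional=True)
            for d in feature_dims)
        # Second stage fuses the concatenated per-stream outputs.
        self.stage2 = nn.LSTM(2 * hidden * len(feature_dims), hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, streams):                  # list of (B, T, d_i) tensors
        outs = [lstm(x)[0] for lstm, x in zip(self.stage1, streams)]
        fused, _ = self.stage2(torch.cat(outs, dim=2))
        return torch.sigmoid(self.head(fused[:, -1]))   # accident probability

# Example: a 30-frame sequence with bounding-box (4), BEV-centre (2),
# motion-vector (2) and ego-motion (3) features per frame.
model = CascadeBiLSTM()
streams = [torch.randn(1, 30, d) for d in (4, 2, 2, 3)]
print(model(streams))    # tensor([[p]]) with 0 < p < 1
```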