• Title/Summary/Keyword: Video sensor

An Efficient Implementation of Key Frame Extraction and Sharing in Android for Wireless Video Sensor Network

  • Kim, Kang-Wook
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.9 / pp.3357-3376 / 2015
  • Wireless sensor networks are an important research topic that has attracted considerable attention in recent years. However, most of that interest has focused on networks that gather scalar data such as temperature, humidity, and vibration. Scalar data are insufficient for applications such as video surveillance, target recognition, and traffic monitoring, whereas camera sensors deployed in a wireless sensor network can collect video data that are rich in visual information. Video sensor networks have therefore continued to gain interest for a wide range of applications in the past few years. How to efficiently store the massive data that reflect the environmental state at different times in a video sensor network, and how to quickly search them for information of interest, remain challenging issues, especially when the sensor network environment is complex. Therefore, in this paper, we propose a fast algorithm for extracting key frames from video and describe the design and implementation of key frame extraction and sharing in Android for a wireless video sensor network.
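
The abstract does not say which criterion the paper's fast key-frame algorithm uses, so the following is only a minimal sketch of one common approach, assuming an OpenCV color-histogram difference test; the function name, threshold, and histogram settings are illustrative, not taken from the paper.

```python
# Hypothetical sketch: key-frame extraction by color-histogram difference.
# Threshold and histogram parameters are illustrative assumptions.
import cv2

def extract_key_frames(video_path, threshold=0.4):
    """Return indices of frames whose histogram differs strongly from the last key frame."""
    cap = cv2.VideoCapture(video_path)
    key_frames, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        # Keep the frame if it differs enough from the previous key frame.
        if prev_hist is None or cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            key_frames.append(idx)
            prev_hist = hist
        idx += 1
    cap.release()
    return key_frames
```

On an Android sensor node, the same idea would typically be applied to decoded or downsampled frames to keep the per-frame cost low before sharing the selected key frames.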

Development of a CMOS Sensor-Based Portable Video Scope and Its Image Processing Application (CMOS 센서를 이용한 휴대용 비디오스코프 및 영상처리 응용환경 개발)

  • 김상진;김기만;강진영;김영욱;백준기
    • Proceedings of the IEEK Conference / 2003.11a / pp.517-520 / 2003
  • Commercial video scopes use a CCD sensor and a frame grabber for image capture and A/D interfacing, but their application is limited by input resolution and high cost. In this paper we introduce a portable video scope that uses a CMOS sensor, a USB pen camera, and a tuner card (a low-cost frame grabber) in place of the conventional CCD sensor and frame grabber. The video scope serves as a link between advancing commercial technology and research, providing cost-effective solutions for educational, engineering, and medical applications across a wide spectrum of needs. The software was first implemented with VFW (Video for Windows), which gave a very low frame rate, and was reimplemented with DirectShow in the second version. The video scope operates on Windows 98, ME, 2000, and XP. Its remaining drawback is a crossover problem in the output images caused by interpolation, which must be rectified for more efficient performance.

Smart Vision Sensor for Satellite Video Surveillance Sensor Network (위성 영상감시 센서망을 위한 스마트 비젼 센서)

  • Kim, Won-Ho;Im, Jae-Yoo
    • Journal of Satellite, Information and Communications / v.10 no.2 / pp.70-74 / 2015
  • In this paper, a satellite-communication-based video surveillance system consisting of ultra-small-aperture terminals with a compact smart vision sensor is proposed. Events such as forest fire, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by intelligent, highly reliable video analysis algorithms. The smart vision sensor must meet requirements of high confidence, hardware endurance, seamless communication, and easy maintenance. To satisfy these requirements, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated into a smart vision sensor-based ultra-small-aperture terminal, and high-performance video analysis and image coding algorithms are embedded. The video analysis functions and their performance were verified, and their practicality confirmed, through computer simulation and tests of a vision sensor prototype.

Layer based Cooperative Relaying Algorithm for Scalable Video Transmission over Wireless Video Sensor Networks (무선 비디오 센서 네트워크에서 스케일러블 비디오 전송을 위한 계층 기반 협업 중계 알고리즘)

  • Ha, Hojin
    • Journal of Korea Society of Digital Industry and Information Management / v.18 no.4 / pp.13-21 / 2022
  • Recently, various schemes for efficient video data transmission over wireless video sensor networks (WVSN) have been studied. In this paper, a layer-based cooperative relaying (LCR) algorithm is proposed to minimize the distortion caused by packet loss in scalable video transmission over a WVSN. The proposed LCR algorithm consists of two modules. First, a parameter-based error propagation metric is proposed to predict, at low complexity, the effect of each scalable layer on video quality degradation. Second, a layer-based cooperative relaying algorithm is proposed that minimizes packet-loss distortion using the proposed error propagation metric together with the channel information of the video sensor node and the relay node. In experiments, the proposed algorithm improved the peak signal-to-noise ratio (PSNR) in various channel environments compared with a previous algorithm (energy-based cooperative relaying, ECR) that does not consider error propagation. The proposed LCR algorithm thus minimizes video quality degradation from packet loss by using both the channel information of the relaying node and the amount of layer-based error propagation in the scalable video.
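
The abstract gives neither the error-propagation metric nor the relay rule in closed form, so the sketch below is only an illustrative interpretation, assuming each scalable layer carries a distortion weight and the relay can forward a limited number of layers; the distortion model and the relay budget are assumptions for the example.

```python
# Illustrative sketch only: pick which scalable layers to send via the relay.
# The distortion model (layer weight x loss probability) and the relay budget
# are assumptions for the example, not the paper's actual metric.

def choose_relay_layers(layer_weights, p_loss_direct, p_loss_relay, relay_budget):
    """
    layer_weights : estimated distortion contribution of each layer
                    (base layer first, enhancement layers after).
    p_loss_direct : packet-loss probability on the direct sensor-to-sink link.
    p_loss_relay  : packet-loss probability via the cooperative relay.
    relay_budget  : maximum number of layers the relay can forward.
    Returns the indices of the layers to route through the relay.
    """
    # Expected distortion reduction if a layer is relayed instead of sent directly.
    gains = [(w * (p_loss_direct - p_loss_relay), i) for i, w in enumerate(layer_weights)]
    gains.sort(reverse=True)                      # largest reduction first
    return sorted(i for g, i in gains[:relay_budget] if g > 0)

# Example: the base layer dominates the distortion weight, so it is relayed first.
print(choose_relay_layers([0.7, 0.2, 0.1], p_loss_direct=0.15, p_loss_relay=0.05, relay_budget=1))
```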

Traffic Estimation Method for Visual Sensor Networks (비쥬얼 센서 네트워크에서 트래픽 예측 방법)

  • Park, Sang-Hyun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.11 no.11 / pp.1069-1076 / 2016
  • Recent developments in visual sensor technology have encouraged various studies on adding imaging capabilities to sensor networks. Video data are much larger than other sensor data, so it is essential to manage the amount of image data efficiently. In this paper, a new method of video traffic estimation is proposed for efficient traffic management in visual sensor networks. The proposed method models the traffic with a first-order autoregressive model, reflecting the characteristics of video traffic acquired from visual sensors, and uses a Kalman filter to estimate the amount of video traffic. The method is computationally simple and therefore well suited to sensor nodes. Experimental results show that, despite its simplicity, it estimates the video traffic with an error of less than 1% of the average.
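
The abstract describes the estimator concretely enough for a small sketch: an AR(1) traffic model tracked by a scalar Kalman filter. The AR coefficient and noise variances below are placeholders, not values from the paper.

```python
# Minimal sketch of the estimator described above: per-interval traffic volume
# is modeled as a first-order autoregressive (AR(1)) process and tracked with a
# scalar Kalman filter. `a`, `q`, and `r` are illustrative placeholders.

def kalman_ar1_estimate(measurements, a=0.9, q=1.0, r=4.0):
    """Return filtered traffic estimates for a sequence of measured volumes."""
    x_est, p_est = measurements[0], 1.0          # initial state and covariance
    estimates = [x_est]
    for z in measurements[1:]:
        # Predict with the AR(1) model: x_k = a * x_{k-1} + w_k
        x_pred = a * x_est
        p_pred = a * a * p_est + q
        # Correct with the new measurement: z_k = x_k + v_k
        k_gain = p_pred / (p_pred + r)
        x_est = x_pred + k_gain * (z - x_pred)
        p_est = (1.0 - k_gain) * p_pred
        estimates.append(x_est)
    return estimates
```

Because the state and measurement are scalars, each update costs only a few multiplications, which is consistent with the paper's claim that the method is simple enough for sensor nodes.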

Dynamic Modeling and Georegistration of Airborne Video Sequences

  • Lee, Changno
    • Korean Journal of Geomatics / v.3 no.1 / pp.23-32 / 2003
  • Rigorous sensor and dynamic modeling techniques are required if spatial information is to be accurately extracted from video imagery. First, a mathematical model for an uncalibrated video camera and a description of a bundle adjustment with added parameters, for purposes of general block triangulation, is presented. This is followed by the application of invariance-based techniques, with constraints, to derive initial approximations for the camera parameters. Finally, dynamic modeling using the Kalman Filter is discussed. The results of various experiments with real video imagery, which apply the developed techniques, are given.

Study on 3 DoF Image and Video Stitching Using Sensed Data

  • Kim, Minwoo;Chun, Jonghoon;Kim, Sang-Kyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4527-4548 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data from inertial sensors to enhance the stitching results. The challenge of image stitching increases when the images are taken with two different mobile phones with no posture calibration. Using the inertial sensor data obtained by the phones, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process. The stitching performance of the conventional feature extraction algorithms (e.g., feature extraction time, number of inlier points, stitching accuracy) is reported, with and without the inertial sensor data. In addition, the stitching accuracy for video data is improved using the same sensed data, with discrete calculation of the homography matrix. Experimental results for stitching accuracy and speed using the sensed data are presented.
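
As a rough illustration of such a pipeline (not the paper's implementation), the sketch below de-rotates one image by the roll angle reported by the phone's inertial sensor before ORB feature matching and homography estimation with OpenCV; the detector choice, match count, angle sign convention, and naive blending are assumptions for the example.

```python
# Illustrative sketch: inertial-sensor roll compensation before feature-based
# stitching. Parameters and the sign of the roll correction are assumptions.
import cv2
import numpy as np

def stitch_with_roll(img_left, img_right, roll_deg_right):
    h, w = img_right.shape[:2]
    # Compensate the roll difference reported by the inertial sensor.
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), roll_deg_right, 1.0)
    img_right = cv2.warpAffine(img_right, rot, (w, h))

    g1 = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]

    # Estimate the homography mapping the right image into the left image's frame.
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    out_w = img_left.shape[1] + w
    pano = cv2.warpPerspective(img_right, H, (out_w, img_left.shape[0]))
    pano[:, :img_left.shape[1]] = img_left   # naive overlay of the left image
    return pano
```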

A Study on Taekwondo Training System using Hybrid Sensing Technique

  • Kwon, Doo Young
    • Journal of Korea Multimedia Society / v.16 no.12 / pp.1439-1445 / 2013
  • We present a Taekwondo training system that uses a hybrid sensing technique combining a body sensor and a visual sensor. A body sensor (accelerometer) captures the rotational and inertial motion data that are important for detecting and evaluating Taekwondo motions, while a visual sensor (camera) captures and records sequential images of the performance. A motion chunk is proposed to structure Taekwondo motions and to design an HMM (Hidden Markov Model) for motion recognition. Trainees can evaluate their trial motions numerically by computing the distance to the standard motion performed by a trainer. For the motion training video, the real-time images captured by the camera are overlaid with visualized body sensor data so that users can see how the rotational and inertial motion data flow.
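
The abstract does not name a library, so the sketch below uses the third-party hmmlearn package purely for illustration: one Gaussian HMM per motion class trained on accelerometer sequences, with a trial motion assigned to the class whose model gives the highest log-likelihood. The state count and data layout are assumptions.

```python
# Illustrative sketch of HMM-based motion recognition on accelerometer data.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_motion_models(training_data, n_states=5):
    """training_data: {motion_name: list of (T_i, 3) accelerometer sequences}."""
    models = {}
    for name, seqs in training_data.items():
        X = np.vstack(seqs)                  # concatenate all sequences
        lengths = [len(s) for s in seqs]     # per-sequence lengths for hmmlearn
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models

def classify_motion(models, sequence):
    """Return the motion class with the highest log-likelihood for `sequence`."""
    return max(models, key=lambda name: models[name].score(sequence))
```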

A study of recognition system to the situation reaction (객체 정보에 대한 데이터베이스 구성 연구)

  • Park, Sangjoon;Lee, Jongchan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.161-162 / 2018
  • In this paper, we consider the development of a database configuration for searching and managing object information obtained from a GPS sensor and a video sensor. We also consider the design of object tracking in the video sensor for the recognized objects.
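
The paper's actual schema is not given in this short abstract; the following is a hypothetical sketch of one table layout that could support searching object information from the GPS and video sensors. All table and column names are invented for the example.

```python
# Hypothetical schema sketch for the object-information database described above.
import sqlite3

conn = sqlite3.connect("object_info.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS detected_object (
    object_id   INTEGER PRIMARY KEY,
    label       TEXT NOT NULL,            -- recognized object class
    first_seen  TEXT NOT NULL             -- ISO-8601 timestamp
);
CREATE TABLE IF NOT EXISTS object_track (
    track_id    INTEGER PRIMARY KEY,
    object_id   INTEGER REFERENCES detected_object(object_id),
    sensor_id   TEXT NOT NULL,            -- video sensor that produced the sample
    ts          TEXT NOT NULL,            -- sample timestamp
    latitude    REAL,                     -- from the GPS sensor
    longitude   REAL,
    frame_x     REAL,                     -- object position in the video frame
    frame_y     REAL
);
""")
conn.commit()
```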

Aerial Video Summarization Approach based on Sensor Operation Mode for Real-time Context Recognition (실시간 상황 인식을 위한 센서 운용 모드 기반 항공 영상 요약 기법)

  • Lee, Jun-Pyo
    • Journal of the Korea Society of Computer and Information / v.20 no.6 / pp.87-97 / 2015
  • Aerial video summarization is not only the key to effectively browsing video within a limited time, but also an embedded cue for efficiently aggregating the situation awareness acquired by an unmanned aerial vehicle. Unlike previous work, we utilize the sensor operation modes of the unmanned aerial vehicle, namely global, local, and focused surveillance modes, to summarize the aerial video accurately while considering the flight and surveillance/reconnaissance environments. In focused mode, we propose a moving-react tracking method that uses partitioned motion vectors and a spatiotemporal saliency map to detect and continuously track the moving object of interest. In our simulations, key frames are correctly detected for aerial video summarization according to the sensor operation mode of the aerial vehicle, and we verify the efficiency of video summarization with the proposed method.
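
As a very reduced illustration (the paper's method combines partitioned motion vectors with a spatiotemporal saliency map), the sketch below keeps frames whose inter-frame motion energy exceeds a per-mode threshold; the thresholds and the motion measure are assumptions for the example.

```python
# Highly simplified sketch of mode-dependent key-frame selection.
import cv2
import numpy as np

MODE_THRESHOLDS = {"global": 18.0, "local": 12.0, "focused": 6.0}  # illustrative values

def summarize(video_path, mode="focused"):
    thr = MODE_THRESHOLDS[mode]
    cap = cv2.VideoCapture(video_path)
    prev_gray, key_frames, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            motion = float(np.mean(cv2.absdiff(gray, prev_gray)))
            if motion > thr:
                key_frames.append(idx)    # strong motion -> keep as key frame
        prev_gray = gray
        idx += 1
    cap.release()
    return key_frames
```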