• Title/Summary/Keyword: Video Sensor Network

An Efficient Implementation of Key Frame Extraction and Sharing in Android for Wireless Video Sensor Network

  • Kim, Kang-Wook
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.9 / pp.3357-3376 / 2015
  • Wireless sensor networks have attracted a great deal of research attention in recent years. However, most of this interest has focused on networks that gather scalar data such as temperature, humidity, and vibration. Scalar data are insufficient for applications such as video surveillance, target recognition, and traffic monitoring. If camera sensors are used in the network to collect video data instead, the network can provide far richer visual information. For this reason, video sensor networks have gained growing interest over the past few years for a wide range of applications. However, how to efficiently store the massive volume of data that reflects the state of the environment at different times, and how to quickly retrieve the information of interest from it, remain challenging research issues, especially when the sensor network environment is complicated. In this paper, we therefore propose a fast algorithm for extracting key frames from video and describe the design and implementation of key frame extraction and sharing in Android for a wireless video sensor network.
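
The abstract does not spell out the extraction algorithm itself. As a rough illustration of how key frames are commonly picked, the sketch below flags a frame as a key frame when its color histogram differs strongly from the previous key frame; the OpenCV calls are standard, but the threshold and histogram settings are hypothetical tuning choices, not values from the paper.

```python
import cv2

def extract_key_frames(video_path, threshold=0.4):
    """Keep frames whose HSV color histogram differs strongly from the last key frame.

    Generic histogram-difference heuristic for illustration only; `threshold`
    and the 32x32 histogram bins are hypothetical parameters.
    """
    cap = cv2.VideoCapture(video_path)
    key_frames, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is None or cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            key_frames.append((idx, frame))   # keep frame index and pixel data
            prev_hist = hist
        idx += 1
    cap.release()
    return key_frames
```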

Smart Vision Sensor for Satellite Video Surveillance Sensor Network (위성 영상감시 센서망을 위한 스마트 비젼 센서)

  • Kim, Won-Ho;Im, Jae-Yoo
    • Journal of Satellite, Information and Communications / v.10 no.2 / pp.70-74 / 2015
  • In this paper, a satellite-communication-based video surveillance system consisting of ultra-small aperture terminals with compact smart vision sensors is proposed. Events such as forest fires, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by intelligent, highly reliable video analysis algorithms. The smart vision sensor must satisfy requirements for high confidence, hardware endurance, seamless communication, and easy maintenance. To meet these requirements, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated into a smart-vision-sensor-based ultra-small aperture terminal, and high-performance video analysis and image coding algorithms are embedded. The video analysis functions and their performance were verified, and their practicality confirmed, through computer simulation and tests of a vision sensor prototype.
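
The analysis algorithms themselves are not given in the abstract. One common building block for the intruder-movement alarm described here is simple frame differencing; the sketch below is a generic version of that idea, with both thresholds as hypothetical parameters rather than values from the paper.

```python
import cv2

def motion_detected(prev_gray, curr_gray, diff_thresh=25, min_changed_pixels=500):
    """Flag motion when enough pixels change between two consecutive grayscale frames.

    Generic frame-differencing heuristic for illustration; both thresholds are
    hypothetical tuning parameters, not the paper's values.
    """
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) >= min_changed_pixels
```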

Traffic Estimation Method for Visual Sensor Networks (비쥬얼 센서 네트워크에서 트래픽 예측 방법)

  • Park, Sang-Hyun
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.11 / pp.1069-1076 / 2016
  • Recent developments in visual sensor technology have encouraged research on adding imaging capabilities to sensor networks. Video data are much larger than other sensor data, so the amount of image data must be managed efficiently. In this paper, a new video traffic estimation method is proposed for efficient traffic management in visual sensor networks. In the proposed method, a first-order autoregressive model is used to model the traffic, taking into account the characteristics of the video traffic generated by visual sensors, and a Kalman filter is used to estimate the traffic volume. The proposed method is computationally simple, so it is well suited to sensor nodes. Experimental results show that, despite its simplicity, the proposed method estimates video traffic accurately, with an error of less than 1% of the average.
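
The abstract names the two ingredients, a first-order autoregressive traffic model and a Kalman filter, but not their parameters. The sketch below shows a minimal one-dimensional version of that combination; the AR coefficient and the noise variances are assumed values for illustration, not the paper's.

```python
def estimate_traffic(observations, a=0.95, q=1.0, r=4.0):
    """Scalar Kalman filter over an AR(1) traffic model.

    State:        x_k = a * x_{k-1} + w_k,  w_k ~ N(0, q)
    Measurement:  z_k = x_k + v_k,          v_k ~ N(0, r)
    a, q, and r are assumed values, not the paper's parameters.
    """
    x, p = float(observations[0]), 1.0      # initial estimate and error covariance
    estimates = [x]
    for z in observations[1:]:
        x_pred = a * x                      # predict the next traffic value
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r)           # Kalman gain
        x = x_pred + k * (z - x_pred)       # correct with the new measurement
        p = (1.0 - k) * p_pred
        estimates.append(x)
    return estimates
```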

Video Ranking Model: a Data-Mining Solution with the Understood User Engagement

  • Chen, Yongyu;Chen, Jianxin;Zhou, Liang;Yan, Ying;Huang, Ruochen;Zhang, Wei
    • Journal of Multimedia Information System / v.1 no.1 / pp.67-75 / 2014
  • As video services grow rapidly, it is important for service providers to offer customized services, and video ranking plays a key role in attracting subscribers. In this paper we propose a weekly video ranking mechanism based on quantified user engagement. Traditional QoE ranking is relatively subjective and is usually obtained by grading, while QoS is relatively objective and is obtained by analyzing quality metrics. The goal of this paper is to establish a ranking mechanism that combines the advantages of both QoS and QoE, using data from a third-party collection platform. We use data-mining methods to classify and analyze the collected data. To make the approach practical, we first group the videos and then use regression and decision trees (CART) to narrow them down to a reasonable number. We then introduce an analytic hierarchy process (AHP) model and use the Elo rating system to improve the fairness of the ranking. Questionnaire results verify that the proposed solution not only simplifies the computation but also increases the credibility of the system.
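
The abstract mentions the Elo rating system but gives no formulas. Below is a minimal sketch of the standard Elo update applied to one pairwise comparison between two videos; the K-factor and the starting rating of 1500 are hypothetical choices, not values from the paper. Sorting videos by their final ratings after many such comparisons would yield a ranking of the kind described above.

```python
def elo_update(rating_a, rating_b, score_a, k=32.0):
    """Standard Elo update for one pairwise comparison between two videos.

    `score_a` is 1.0 if video A is preferred, 0.0 if video B is preferred,
    and 0.5 for a tie. The K-factor of 32 is a hypothetical choice.
    """
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    expected_b = 1.0 - expected_a
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - expected_b)
    return new_a, new_b

# Example: both videos start at 1500; video A is preferred in one comparison.
ratings = elo_update(1500.0, 1500.0, 1.0)
```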

Prioritized Multipath Video Forwarding in WSN

  • Asad Zaidi, Syed Muhammad;Jung, Jieun;Song, Byunghun
    • Journal of Information Processing Systems / v.10 no.2 / pp.176-192 / 2014
  • The realization of Wireless Multimedia Sensor Networks (WMSNs) has been fostered by the availability of low-cost, low-power CMOS devices. However, transmitting bulk video data requires adequate bandwidth, which single-path communication on an intrinsically resource-constrained sensor network cannot guarantee. Moreover, distortion and artifacts in the video data, together with the need to meet delay thresholds, add to the challenge. In this paper, we propose a two-stage Quality of Service (QoS) guaranteeing scheme, called Prioritized Multipath WMSN (PMW), for transmitting H.264-encoded video. Multipath selection based on QoS metrics is done in the first stage, while the second stage further prioritizes the paths so that H.264-encoded video frames are sent on the best available path. PMW uses two composite metrics built from hop count, path energy, BER, and end-to-end delay. A color-coding-assisted network maintenance and failure recovery scheme is also proposed, using (a) a smart greedy mode, (b) a walking-back mode, and (c) path switchover. Moreover, feedback-controlled adaptive video encoding can tune the encoding parameters based on the perceived video quality. Computer simulation using OPNET validates that the proposed scheme significantly outperforms conventional approaches in terms of perceived visual quality and delay.
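
The abstract lists the ingredients of the composite path metrics (hop count, path energy, BER, end-to-end delay) but not how they are combined. The sketch below scores candidate paths with a normalized weighted sum, which is only one plausible combination; the weights, normalization bounds, and dictionary field names are assumptions, not the paper's definitions.

```python
def path_cost(hops, residual_energy, ber, delay_ms,
              weights=(0.25, 0.25, 0.25, 0.25),
              max_hops=20, max_energy=100.0, max_ber=1e-3, max_delay_ms=500.0):
    """Lower cost means a better path; weights and normalization bounds are assumed."""
    w_h, w_e, w_b, w_d = weights
    return (w_h * (hops / max_hops)
            + w_e * (1.0 - residual_energy / max_energy)   # less energy left -> worse
            + w_b * (ber / max_ber)
            + w_d * (delay_ms / max_delay_ms))

def best_path(paths):
    """Pick the lowest-cost path, e.g. for the highest-priority (I) frames."""
    return min(paths, key=lambda p: path_cost(p["hops"], p["energy"], p["ber"], p["delay"]))
```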

Lifetime Maximization of Wireless Video Sensor Network Node by Dynamically Resizing Communication Buffer

  • Choi, Kang-Woo;Yi, Kang;Kyung, Chong Min
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.5149-5167 / 2017
  • Reducing energy consumption in a wireless video sensor network (WVSN) is a crucial problem because of the high video data volume and the severe energy constraints of battery-powered WVSN nodes. In this paper, we present an adaptive dynamic resizing approach for the SRAM communication buffer in a WVSN node that reduces energy consumption and thereby maximizes node lifetime. To reduce the power consumption of the communication part, typically the most energy-consuming component of a WVSN node, the radio should remain turned off while the data buffer is filling as well as during idle periods. Since each radio ON/OFF transition incurs extra energy, the transition frequency should be reduced, which calls for a large buffer. However, a large SRAM buffer itself consumes more energy, because SRAM power consumption is proportional to memory size. Using a power-gating technique, the active buffer size can be adjusted dynamically to follow the optimal value. This paper aims to find the optimal buffer size based on the trade-off between the energy consumed by the communication buffer and that consumed by the radio. We derive a formula relating the control variables, including the active buffer size, to total energy consumption, so that the buffer size minimizing total energy can be determined mathematically for any given conditions. Simulation results show that our approach reduces overall energy consumption by up to 40.48% (26.96% on average) compared to the conventional wireless communication scheme, and extends the lifetime of the WVSN node by 22.17% on average compared to existing approaches.
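
The paper derives its own formula relating the control variables to total energy; that derivation is not in the abstract. The sketch below uses a deliberately simplified stand-in model, in which SRAM energy grows linearly with buffer size and radio ON/OFF transition energy falls as the buffer grows, just to show the shape of the trade-off. The model, its coefficients, and the size limits are all assumptions for illustration.

```python
import math

def optimal_buffer_size(data_bytes, c_sram, e_switch, period_s,
                        min_size=1 << 10, max_size=1 << 20):
    """Minimize a simplified energy model over the active SRAM buffer size B.

    Assumed illustrative model (not the paper's formula):
        E_total(B) = c_sram * B * period_s          # SRAM energy grows with size
                   + e_switch * (data_bytes / B)    # fewer radio ON/OFF cycles as B grows
    The unconstrained minimizer is B* = sqrt(e_switch * data_bytes / (c_sram * period_s)),
    clamped here to the hardware limits.
    """
    b_star = math.sqrt(e_switch * data_bytes / (c_sram * period_s))
    return min(max_size, max(min_size, b_star))
```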

A Novel Approach for Object Detection in Illuminated and Occluded Video Sequences Using Visual Information with Object Feature Estimation

  • Sharma, Kajal
    • IEIE Transactions on Smart Processing and Computing / v.4 no.2 / pp.110-114 / 2015
  • This paper reports a novel object-detection technique for video sequences. The proposed algorithm detects objects in illuminated and occluded videos using object features and a neural network technique, and consists of two functional modules: region-based object feature extraction and continuous detection of objects across video frames using the region features. The scheme is proposed as an enhancement of Lowe's scale-invariant feature transform (SIFT) object detection method and addresses the high computation time of feature generation in SIFT. The improvement is achieved by region-based classification of the features of the objects to be detected; an optimal neural-network-based feature reduction is presented to shrink the object-region feature dataset, with winner-pixel estimation between the frames of the video sequence. Simulation results show that the proposed scheme achieves better overall performance than other object detection techniques, and that region-based feature detection is faster than other recent techniques.
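
The neural-network feature reduction and winner-pixel estimation are specific to the paper, but the region-restricted SIFT step it builds on can be sketched with OpenCV (SIFT ships with OpenCV 4.4+ as cv2.SIFT_create). The region-of-interest handling below is a simplified stand-in, not the paper's extraction module.

```python
import cv2

def region_sift_features(frame_bgr, roi):
    """Compute SIFT keypoints and descriptors inside a single region of interest.

    `roi` is (x, y, w, h). This sketches only the region-restricted feature step;
    the paper's neural-network feature reduction is not reproduced here.
    """
    x, y, w, h = roi
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    patch = gray[y:y + h, x:x + w]
    sift = cv2.SIFT_create()                       # available in OpenCV >= 4.4
    keypoints, descriptors = sift.detectAndCompute(patch, None)
    return keypoints, descriptors
```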

Energy-Aware Video Coding Selection for Solar-Powered Wireless Video Sensor Networks

  • Yi, Jun Min;Noh, Dong Kun;Yoon, Ikjune
    • Journal of the Korea Society of Computer and Information / v.22 no.7 / pp.101-108 / 2017
  • A wireless image sensor node that collects image data for environmental monitoring or surveillance requires a large amount of energy to transmit the resulting video data. Although solar energy can be harvested to relieve this constraint, the collected energy is itself limited, so an efficient energy management scheme for transmitting large amounts of video data is still needed. In this paper, we propose a method that reduces the number of blackout nodes and increases the amount of gathered data by selecting an appropriate video coding method according to the energy state of each node in a solar-powered wireless video sensor network. The scheme allocates the amount of energy that can be used over time so that data can be collected seamlessly day and night, and selects a high-compression coding method when the allocated energy is large and a low-compression coding method when the quota is small. This reduces blackouts at relay nodes and increases the amount of data obtained at the sink node by allowing data to be transmitted continuously. In addition, if the remaining energy is too low for normal operation, the frame rate is lowered to prevent nodes from exhausting their energy. Simulation results show that the proposed scheme suppresses energy exhaustion at relay nodes and collects more data than other schemes.
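
The abstract states the selection policy only qualitatively: high-compression coding when the energy quota is large, low-compression coding when it is small, and a reduced frame rate when energy is insufficient for normal operation. The sketch below encodes that rule directly; the thresholds, mode names, and frame-rate halving are hypothetical choices, not values from the paper.

```python
def select_coding(allocated_energy_j, high_thresh_j=50.0, low_thresh_j=20.0, base_fps=15):
    """Pick a coding mode and frame rate from a node's per-period energy quota.

    Thresholds, mode names, and the frame-rate reduction are hypothetical; the
    abstract only describes the qualitative policy.
    """
    if allocated_energy_j >= high_thresh_j:
        return {"codec": "high_compression", "fps": base_fps}
    if allocated_energy_j >= low_thresh_j:
        return {"codec": "low_compression", "fps": base_fps}
    # not enough energy for normal operation: keep cheap coding, lower the frame rate
    return {"codec": "low_compression", "fps": max(1, base_fps // 2)}
```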

Method of extracting context from media data by using video sharing site

  • Kondoh, Satoshi;Ogawa, Takeshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.709-713 / 2009
  • Recently, much research in the fields of life-logging and sensor networks has applied data acquired from devices such as cameras and RFID tags to context-aware services. A variety of analytical techniques have been proposed to recognize information in the raw data, because video and audio contain far more information than other sensor data. However, because these techniques generally rely on supervised learning, updating a class or adding a new one has required manually re-watching huge amounts of media data to create new supervised data, so in most cases applications could only use recognition functions based on fixed training data. We therefore propose a method of acquiring supervised data from a video sharing site where users comment on individual video scenes; such sites are remarkably popular, so a large number of comments are generated. In the first step of the method, words with a high utility value are extracted by filtering the comments on a video. Second, a time series of feature data is computed by applying various feature-extraction functions to the media data. Finally, the learning system calculates the correlation coefficient between these two kinds of data and stores it in the system's database. Other applications can then obtain a recognition function built on the collective intelligence of Web comments by applying this correlation coefficient to new media data. In addition, flexible recognition that adapts to new objects becomes possible by regularly acquiring and learning from both the media data and the comments on a video sharing site, while reducing manual work. As a result, the system can recognize not only the name of the object in view but also indirect information, such as the impression it gives or the action directed toward it.
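
The final learning step described above boils down to correlating the per-scene occurrence of a filtered word with a media feature computed on the same time axis. A minimal sketch of that step using a Pearson correlation is shown below; aligning both series per scene and treating zero-variance series as uncorrelated are assumptions, not details from the paper.

```python
import numpy as np

def word_feature_correlation(word_counts, feature_values):
    """Pearson correlation between a word's per-scene comment counts and one
    media feature sampled on the same scene/time axis.

    Both inputs are equal-length 1-D sequences; this sketches only the
    correlation step, not the comment filtering or feature extraction.
    """
    w = np.asarray(word_counts, dtype=float)
    f = np.asarray(feature_values, dtype=float)
    if w.std() == 0.0 or f.std() == 0.0:
        return 0.0            # no variation: correlation is undefined, treat as 0
    return float(np.corrcoef(w, f)[0, 1])
```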

Rotational Wireless Video Sensor Networks with Obstacle Avoidance Capability for Improving Disaster Area Coverage

  • Bendimerad, Nawel;Kechar, Bouabdellah
    • Journal of Information Processing Systems / v.11 no.4 / pp.509-527 / 2015
  • Wireless Video Sensor Networks (WVSNs) have become a leading solution for many important applications, such as disaster recovery. When WVSNs are used in disaster scenarios, the main goal is a successful immediate response, including search, location, and rescue operations. Achieving this objective in the presence of obstacles, and with the risk of sensors being damaged by the disaster itself, is a challenging task. In this paper, we propose a fault-tolerance model for WVSNs that supports efficient post-disaster management and assists rescue and preparedness operations. To obtain an overview of the monitored area, we use video sensors with a rotation capability that lets them switch to the direction giving the best multimedia coverage of the disaster area while minimizing the effect of occlusions. By constructing different cover sets based on field-of-view redundancy, we provide robust fault tolerance for the network. We demonstrate through simulation the benefits of our proposal in terms of reliability and high coverage.
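
The cover-set construction and obstacle handling are the paper's contribution and are not detailed in the abstract. The sketch below shows only the basic geometric step of choosing the pan angle that covers the most target points within a sensor's sensing radius and field of view; the occlusion callback, radius, and field-of-view width are assumptions for illustration.

```python
import math

def best_orientation(sensor_xy, targets, candidate_angles_deg,
                     fov_deg=60.0, radius=30.0, is_occluded=lambda s, t: False):
    """Return the pan angle (and covered-target count) that covers the most targets.

    A target counts as covered if it lies within `radius`, inside the field of
    view of the candidate orientation, and is not occluded. `is_occluded` is a
    hypothetical callback standing in for the obstacle test.
    """
    sx, sy = sensor_xy
    best_angle, best_count = None, -1
    for angle in candidate_angles_deg:
        count = 0
        for tx, ty in targets:
            dx, dy = tx - sx, ty - sy
            if math.hypot(dx, dy) > radius:
                continue
            bearing = math.degrees(math.atan2(dy, dx))
            diff = (bearing - angle + 180.0) % 360.0 - 180.0   # signed angular difference
            if abs(diff) <= fov_deg / 2.0 and not is_occluded(sensor_xy, (tx, ty)):
                count += 1
        if count > best_count:
            best_angle, best_count = angle, count
    return best_angle, best_count
```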