• Title/Summary/Keyword: Video technology

Search Results: 2,740

Optimizing the Joint Source/Network Coding for Video Streaming over Multi-hop Wireless Networks

  • Cui, Huali;Qian, Depei;Zhang, Xingjun;You, Ilsun;Dong, Xiaoshe
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.4
    • /
    • pp.800-818
    • /
    • 2013
  • Supporting video streaming over multi-hop wireless networks is particularly challenging due to the time-varying and error-prone characteristics of the wireless channel. In this paper, we propose a joint optimization scheme for video streaming over multi-hop wireless networks. Our coding scheme, called Joint Source/Network Coding (JSNC), combines source coding and network coding to maximize video quality under limited wireless resources and coding constraints. JSNC segments the streaming data into generations at the source node and applies intra-session coding at both the source and intermediate nodes. The generation size and the level of redundancy influence streaming performance significantly and need to be determined carefully. We formulate this as an optimization problem with the objective of minimizing end-to-end distortion by jointly considering the generation size and the coding redundancy. Simulation results demonstrate that, with an appropriate generation size and coding redundancy, the JSNC scheme achieves optimal performance for video streaming over multi-hop wireless networks.
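
The joint choice of generation size and coding redundancy described in this abstract can be illustrated with a toy brute-force search. This is only a sketch under assumed models (a binomial packet-loss model and a made-up distortion function), not the paper's actual formulation:

```python
from math import comb

def delivery_probability(gen_size, redundancy, loss_rate=0.1):
    """Probability that a generation is decodable: at least gen_size of the
    gen_size + redundancy coded packets must arrive (binomial loss model)."""
    n = gen_size + redundancy
    p = 1.0 - loss_rate
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(gen_size, n + 1))

def expected_distortion(gen_size, redundancy, bandwidth=100):
    """Toy end-to-end distortion: losing a generation costs distortion
    proportional to its size; redundancy consumes bandwidth that would
    otherwise carry source data, reducing the achievable source rate."""
    p_ok = delivery_probability(gen_size, redundancy)
    source_rate = bandwidth * gen_size / (gen_size + redundancy)
    loss_term = (1.0 - p_ok) * gen_size   # distortion from lost generations
    rate_term = 1000.0 / source_rate      # distortion from reduced source rate
    return loss_term + rate_term

def optimize_jsnc(max_gen=32, max_red=16):
    """Brute-force search over (generation size, redundancy) pairs."""
    return min(((g, r) for g in range(1, max_gen + 1)
                for r in range(0, max_red + 1)),
               key=lambda gr: expected_distortion(*gr))
```

The search makes the trade-off in the abstract concrete: more redundancy raises the delivery probability but lowers the source rate, so an interior optimum exists.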

ROI-based Encoding using Face Detection and Tracking for mobile video telephony (얼굴 인식과 추적을 이용한 ROI 기반 영상 통화 코덱 설계 및 구현)

  • Lee, You-Sun;Kim, Chang-Hee;Na, Tae-Young;Lim, Jeong-Yeon;Joo, Young-Ho;Kim, Ki-Mun;Byun, Jae-Woan;Kim, Mun-Churl
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.77-78
    • /
    • 2008
  • With the advent of 3G mobile communication services, video telephony has become one of the major services. However, due to narrow channel bandwidth, current video telephony services have not yet reached a satisfactory level of quality. In this paper, we propose an ROI (Region-Of-Interest) based improvement of visual quality for video telephony services with the H.264|MPEG-4 Part 10 (AVC: Advanced Video Coding) codec. To this end, we propose a face detection and tracking method to define the ROI for AVC-based video telephony. Experimental results show that our proposed ROI-based method improves visual quality from both objective and subjective perspectives.

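One common way to realize the ROI idea in this entry is a per-macroblock quantization-parameter (QP) map that spends more bits (lower QP) on the detected face region. The sketch below is hypothetical: the macroblock size follows H.264, but the QP values and the bounding-box format are illustrative assumptions, not the paper's settings:

```python
MB = 16  # H.264 macroblock size in pixels

def roi_qp_map(width, height, face_box, qp_roi=24, qp_bg=36):
    """face_box = (x, y, w, h) in pixels, e.g. from a face detector.
    Returns a 2-D list of QP values, one per macroblock: macroblocks
    overlapping the face get the lower (higher-quality) QP."""
    mbs_x, mbs_y = width // MB, height // MB
    x, y, w, h = face_box
    qp_map = []
    for my in range(mbs_y):
        row = []
        for mx in range(mbs_x):
            px, py = mx * MB, my * MB  # macroblock's top-left pixel
            overlaps = (px < x + w and px + MB > x and
                        py < y + h and py + MB > y)
            row.append(qp_roi if overlaps else qp_bg)
        qp_map.append(row)
    return qp_map
```

An encoder supporting per-macroblock QP can consume such a map directly, shifting bits from the background to the face under a fixed bitrate.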

Predicting Learning Achievements with Indicators of Perceived Affordances Based on Different Levels of Content Complexity in Video-based Learning

  • Dasom KIM;Gyeoun JEONG
    • Educational Technology International
    • /
    • v.25 no.1
    • /
    • pp.27-65
    • /
    • 2024
  • The purpose of this study was to identify differences in learning patterns according to content complexity in video-based learning environments and to derive variables that have an important effect on learning achievement within particular learning contexts. To achieve our aims, we observed and collected data on learners' cognitive processes through perceived affordances, using behavioral logs and eye movements as specific indicators. These two types of reaction data were collected from 67 male and female university students who watched, through the video learning player, two learning videos classified according to their task complexity. The results showed that when the content complexity level was low, learners tended to navigate using other learners' digital logs, whereas when it was high, students tended to control the learning process and directly generate their own logs. In addition, using prediction models derived according to the content complexity level, we identified the important variables influencing learning achievement in the low-complexity group as those related to video playback and annotation, whereas in the high-complexity group the important variables were related to active navigation of the learning video. This study not only applied novel variables in the field of educational technology, but also attempted to provide qualitative observations on the learning process based on a quantitative approach.

Real-time Stabilization Method for Video acquired by Unmanned Aerial Vehicle (무인 항공기 촬영 동영상을 위한 실시간 안정화 기법)

  • Cho, Hyun-Tae;Bae, Hyo-Chul;Kim, Min-Uk;Yoon, Kyoungro
    • Journal of the Semiconductor & Display Technology
    • /
    • v.13 no.1
    • /
    • pp.27-33
    • /
    • 2014
  • Video from a light-weight unmanned aerial vehicle (UAV) is strongly affected by environmental conditions, particularly wind, so the UAV's shaking motion makes the captured video unstable. The objective of this paper is to produce stabilized video by removing the shakiness of video acquired by a UAV. The stabilizer estimates the camera's motion by computing the optical flow between two successive frames. The estimated camera motion contains intended movements as well as unintended shaking; the unintended movements are eliminated by a smoothing process. Experimental results show that our proposed method performs almost as well as other off-line stabilizers. However, estimating the camera's motion, i.e., computing the optical flow, is a bottleneck for real-time stabilization. To solve this problem, we parallelize the stabilizer, producing stabilized video at an average of 30 frames per second. The proposed method can be applied to video acquired by UAVs as well as to shaky video from non-professional users, and to any other field that requires object tracking or accurate image analysis/representation.
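
The estimate-then-smooth pipeline in this abstract can be sketched in a few lines: accumulate per-frame motion (as would be estimated from optical flow) into a camera trajectory, smooth the trajectory with a moving average, and apply the difference as a per-frame correction. This is a minimal one-axis sketch of the smoothing step, with an assumed moving-average window rather than the paper's actual filter:

```python
def smooth(values, radius=2):
    """Moving-average smoothing of a 1-D trajectory."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def stabilizing_corrections(frame_motions, radius=2):
    """frame_motions: per-frame displacement (e.g. dx) between successive
    frames, as estimated from optical flow. Returns the offset to apply to
    each frame so that it follows the smoothed (intended) camera path."""
    trajectory, acc = [], 0.0
    for m in frame_motions:          # integrate motion into a camera path
        acc += m
        trajectory.append(acc)
    smoothed = smooth(trajectory, radius)
    return [s - t for s, t in zip(smoothed, trajectory)]
```

In practice the same correction is computed per axis (x, y, and often rotation), and each frame is warped by its correction before display.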

Review for vision-based structural damage evaluation in disasters focusing on nonlinearity

  • Sifan Wang;Mayuko Nishio
    • Smart Structures and Systems
    • /
    • v.33 no.4
    • /
    • pp.263-279
    • /
    • 2024
  • With the increasing diversity of internet media, video data have become more convenient to obtain and more abundant. Research based on video data has advanced rapidly in recent years owing to advantages such as noncontact, low-cost data acquisition, high spatial resolution, and simultaneity. Additionally, structural nonlinearity extraction has attracted increasing attention as a tool for damage evaluation. This review paper summarizes recent developments and applications of video data-based technology for structural nonlinearity extraction and damage evaluation. The most commonly used object detection image and video databases are first summarized, followed by suggestions for obtaining video data on structural nonlinear damage events. Technologies for linear and nonlinear system identification based on video data are then discussed. In addition, common nonlinear damage types in disaster events and prevalent processing algorithms are reviewed in the section on structural damage evaluation using video data uploaded to online platforms. Finally, potential research directions are proposed to address the weaknesses of current video data-based nonlinear extraction technology, such as its reliance on one-dimensional time-series data and the difficulty of real-time detection, covering nonlinear extraction for spatial data, real-time detection, and visualization.

Gradient Fusion Method for Night Video Enhancement

  • Rao, Yunbo;Zhang, Yuhong;Gou, Jianping
    • ETRI Journal
    • /
    • v.35 no.5
    • /
    • pp.923-926
    • /
    • 2013
  • To address night video enhancement, a novel gradient-domain fusion method is proposed in which gradient-domain frames of the background in daytime video are fused with nighttime video frames. To verify the superiority of the proposed method, it is compared with conventional techniques. The output of our method is shown to offer enhanced visual quality.
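
A simplified one-dimensional sketch can illustrate the fuse-then-integrate principle behind gradient-domain fusion: keep the nighttime gradient where it carries real structure (lights, moving objects), take the daytime background gradient elsewhere, then integrate back to intensities. The real method operates on 2-D frames with a proper gradient-domain reconstruction; the threshold and anchoring below are illustrative assumptions:

```python
def gradient(signal):
    """Forward difference of a 1-D intensity scanline."""
    return [b - a for a, b in zip(signal, signal[1:])]

def fuse_and_integrate(night, day, threshold=5):
    """Fuse a dark nighttime scanline with a daytime background scanline
    in the gradient domain, then integrate back to intensities."""
    g_night, g_day = gradient(night), gradient(day)
    # prefer the nighttime gradient where it is strong (real scene content)
    fused = [gn if abs(gn) >= threshold else gd
             for gn, gd in zip(g_night, g_day)]
    # integrate, anchoring at the nighttime signal's first sample
    out = [float(night[0])]
    for g in fused:
        out.append(out[-1] + g)
    return out
```

The fused result inherits the daytime background's edge structure while preserving strong nighttime features, which is the core intuition of the method.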

Face Detection based on Video Sequence (비디오 영상 기반의 얼굴 검색)

  • Ahn, Hyo-Chang;Rhee, Sang-Burm
    • Journal of the Semiconductor & Display Technology
    • /
    • v.7 no.3
    • /
    • pp.45-49
    • /
    • 2008
  • Face detection and tracking technology for video sequences has developed owing to the commercialization of teleconferencing, telecommunications, face recognition front ends for surveillance systems, and video-phone applications. Complex backgrounds and color distortion caused by luminance conditions have hindered face recognition systems. In this paper, we study face recognition on video sequences. We extract the facial area using the luminance and chrominance components of the $YC_bC_r$ color space. After extracting the facial area, we apply a face recognition system based on our improved algorithm, which combines PCA and LDA. Our proposed algorithm shows a 92% recognition rate, which is more accurate than previous methods that apply PCA alone or a combination of PCA and LDA.

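The facial-area extraction step in this entry relies on thresholding chrominance in $YC_bC_r$ space. A hedged sketch: the conversion below follows ITU-R BT.601, and the Cb/Cr ranges are commonly cited skin-tone bounds, not necessarily the exact values used in the paper:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion (inputs 0-255)."""
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its chrominance falls in the assumed
    skin-tone ranges; luminance (Y) is ignored, which gives some
    robustness to lighting changes."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return (cb_range[0] <= cb <= cb_range[1] and
            cr_range[0] <= cr <= cr_range[1])
```

Running this per pixel yields a binary skin mask; connected regions of the mask are then candidate facial areas for the PCA/LDA recognition stage.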

A New Denoising Method for Time-lapse Video using Background Modeling

  • Park, Sanghyun
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.125-138
    • /
    • 2020
  • Due to the development of camera technology, the cost of producing time-lapse video has been reduced, and time-lapse videos are being applied in many fields. A time-lapse video is created from images captured at long intervals over a long period. In this paper, we propose a method to improve the quality of time-lapse videos monitoring changes in plants. Considering the characteristics of time-lapse video, we propose a method of separating desired from unnecessary objects and removing the unnecessary elements. The characteristic of time-lapse videos that we exploit is that unnecessary elements appear only intermittently in the captured images. In the proposed method, noise is removed by applying a codebook background modeling algorithm that uses this characteristic. Experimental results show that the proposed method finds and removes unnecessary elements in time-lapse videos simply and accurately.
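
The codebook background model mentioned above can be sketched per pixel: each pixel keeps a list of codewords (brightness ranges built up during training), and an observation matching no codeword is treated as an intermittent foreground element to be removed. This is an illustrative single-channel sketch with an assumed tolerance, not the paper's full algorithm:

```python
class PixelCodebook:
    """Per-pixel codebook: a list of [lo, hi] brightness ranges."""

    def __init__(self, tolerance=10):
        self.codewords = []
        self.tolerance = tolerance

    def train(self, value):
        """Extend the matching codeword, or create a new one."""
        for cw in self.codewords:
            if cw[0] - self.tolerance <= value <= cw[1] + self.tolerance:
                cw[0] = min(cw[0], value)
                cw[1] = max(cw[1], value)
                return
        self.codewords.append([value, value])

    def is_background(self, value):
        """A pixel value near any learned codeword is background."""
        return any(cw[0] - self.tolerance <= value <= cw[1] + self.tolerance
                   for cw in self.codewords)
```

In a time-lapse denoiser, pixels flagged as non-background in a frame would be replaced with the modeled background value, erasing intermittent intruders while keeping the slowly changing plant.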

Status and development direction of Virtual Reality Video technology (가상현실 영상 기술의 현황과 발전방향 연구)

  • Liu, Miaoyihai;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.19 no.12
    • /
    • pp.405-411
    • /
    • 2021
  • Virtual reality technology is a new practical technology developed in the 20th century. In recent years, the related industry has been developing rapidly due to the continuous development and improvement of virtual reality (VR) technology, and realistic video content produced with VR technology provides users with a better visual experience. In addition, VR has excellent characteristics in terms of interaction and imagination, so a bright prospect can be expected in the field of video content production. This paper introduces the current display types and technologies for VR video and how users view VR video. In addition, the resolution of past and current VR equipment is compared and analyzed, and the reason resolution affects the VR image is explored. Finally, we present some directions for the future development of VR video that will provide convenience to users.

Implementation of Video Processing Module for Integrated Modular Avionics System (모듈통합형 항공전자시스템을 위한 Video Processing Module 구현)

  • Jeon, Eun-Seon;Kang, Dae-Il;Ban, Chang-Bong;Yang, Seong-Yul
    • Journal of Advanced Navigation Technology
    • /
    • v.18 no.5
    • /
    • pp.437-444
    • /
    • 2014
  • The integrated modular avionics (IMA) system has quite a number of line repalceable moduels (LRMs) in a cabinet. The LRM performs functions like line replaceable units (LRUs) in federated architecture. The video processing module (VPM) acts as a video bus bridge and gateway of ARINC 818 avionics digital video bus (ADVB). The VPM is a LRM in IMA core system. The ARINC 818 video interface and protocol standard was developed for high-bandwidth, low-latency and uncompressed digital video transmission. FPGAs of the VPM include video processing function such as ARINC 818 to DVI, DVI to ARINC 818 convertor, video decoder and overlay. In this paper we explain how to implement VPM's Hardware. Also we show the verification results about VPM functions and IP core performance.