• Title/Summary/Keyword: Video Images

Search Results: 1,466

Review for vision-based structural damage evaluation in disasters focusing on nonlinearity

  • Sifan Wang;Mayuko Nishio
    • Smart Structures and Systems
    • /
    • v.33 no.4
    • /
    • pp.263-279
    • /
    • 2024
  • With the increasing diversity of internet media, video data have become more convenient to obtain and more abundant. Video-data-based research has advanced rapidly in recent years owing to advantages such as noncontact, low-cost data acquisition, high spatial resolution, and simultaneity. Additionally, structural nonlinearity extraction has attracted increasing attention as a tool for damage evaluation. This review paper summarizes recent developments and applications of video-data-based technology for structural nonlinearity extraction and damage evaluation. The most regularly used object-detection image and video databases are first summarized, followed by suggestions for obtaining video data on structural nonlinear damage events. Technologies for linear and nonlinear system identification based on video data are then discussed. In addition, common nonlinear damage types in disaster events and prevalent processing algorithms are reviewed in the section on structural damage evaluation using video data uploaded to online platforms. Finally, potential research directions are discussed to address the weaknesses of current video-based nonlinear extraction technology, such as its reliance on one-dimensional time-series data and the difficulty of real-time detection, covering nonlinear extraction for spatial data, real-time detection, and visualization.

Zoom Lens Distortion Correction Of Video Sequence Using Nonlinear Zoom Lens Distortion Model (비선형 줌-렌즈 왜곡 모델을 이용한 비디오 영상에서의 줌-렌즈 왜곡 보정)

  • Kim, Dae-Hyun;Shin, Hyoung-Chul;Oh, Ju-Hyun;Nam, Seung-Jin;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.14 no.3
    • /
    • pp.299-310
    • /
    • 2009
  • In this paper, we propose a new method to correct zoom-lens distortion in video sequences captured with a zoom lens. First, we define a nonlinear zoom-lens distortion model, parameterized by focal length and lens distortion, which exploits the fact that the lens distortion parameter changes nonlinearly and monotonically as the focal length increases. We then choose sample images from the video sequence and estimate a focal length and a lens distortion parameter for each. Using these estimates, we optimize the zoom-lens distortion model. Once the model is obtained, the distortion parameters of the remaining images can be computed from their focal lengths alone. The proposed method was tested on many real images and videos; accurate distortion parameters were estimated from the zoom-lens distortion model, and distorted images were corrected without visual artifacts.
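
The core idea, fitting the distortion coefficient as a smooth monotonic function of focal length from a few sample frames, can be sketched as follows. This is a minimal illustration assuming a single first-order radial coefficient `k1` and a polynomial model of `k1(f)`; the paper's actual model and parameterization may differ:

```python
import numpy as np

def fit_zoom_distortion_model(focal_lengths, k1_samples, degree=2):
    """Fit a polynomial model k1(f) to distortion coefficients
    estimated at a few sample focal lengths."""
    return np.polyfit(focal_lengths, k1_samples, degree)

def predict_k1(model, focal_length):
    """Distortion coefficient for an arbitrary focal length."""
    return np.polyval(model, focal_length)

def undistort_points(points, k1, center):
    """First-order radial undistortion: x_u = c + (x_d - c) * (1 + k1 * r^2)."""
    d = points - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2)
```

Once the model is fitted from a handful of calibrated sample frames, any frame's distortion coefficient follows from its focal length alone, which is what lets an entire zoomed sequence be corrected without per-frame calibration.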

Quantization Level Selection of Intra-Frame for MPEG-4 Video Encoder (MPEG-4 부호화기에서의 인트라 프레임 양자화 레벨 선정)

  • Kim Jeong Woo;Cho Seong Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.1
    • /
    • pp.9-18
    • /
    • 2005
  • This paper presents a method for calculating the quantization level of the intra-frame in an MPEG-4 video encoder. The intra-frame is essential in that the quality of the whole GOP depends on it: serving as the reference frame within the GOP, it propagates continuously through the other frames. This work proposes how to use the bits assigned to the intra-frame, the complexity of the input images, and the GOP structure to obtain its quantization level. The results show that while existing approaches lose efficiency by using fixed values, or yield different qualities depending on the characteristics of the images, the proposed approach gives steady results across various images. Compared with the Q2 algorithm of the MPEG-4 VM, the approach suggested in this paper gains up to 3.49 dB, with some variation depending on the characteristics of the images.
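
The kind of computation involved, turning a GOP bit budget, frame complexity, and GOP structure into an intra-frame quantization level, can be sketched in the style of classic TM5 rate control. The frame-type weights `kI`/`kP` and the simple bits-inversely-proportional-to-Q rate model below are illustrative assumptions, not the paper's formula:

```python
def intra_quant_level(gop_target_bits, complexity, gop_length,
                      kI=1.0, kP=1.4, qmin=1, qmax=31):
    """Pick the intra-frame quantization level from the GOP bit budget,
    the frame complexity, and the GOP structure (I + P frames)."""
    n_p = gop_length - 1                     # remaining frames in the GOP
    # share of the GOP budget given to the I-frame, weighted by frame type
    weight_sum = kI + kP * n_p
    bits_for_intra = gop_target_bits * kI / weight_sum
    # simple rate model: bits ~ complexity / Q  =>  Q = complexity / bits
    q = complexity / max(bits_for_intra, 1.0)
    return int(min(max(round(q), qmin), qmax))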


Toward a Key-frame Extraction Framework for Video Storyboard Surrogates Based on Users' EEG Signals (이용자 기반의 비디오 키프레임 자동 추출을 위한 뇌파측정기술(EEG) 적용)

  • Kim, Hyun-Hee;Kim, Yong-Ho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.49 no.1
    • /
    • pp.443-464
    • /
    • 2015
  • This study examined the feasibility of using EEG signals and the ERP P3b component for extracting video key-frames based on users' cognitive responses. EEG signals were collected from twenty participants. The study found that the average amplitude of the right parietal lobe is higher than that of the left parietal lobe when relevant images are shown to participants, with a significant difference between the average amplitudes of the two lobes. On the other hand, the average amplitude of the left parietal lobe for non-relevant images is lower than that for relevant images, and there is no significant difference between the average amplitudes of the two lobes for non-relevant images. Additionally, the latency of MGFP1 and channel coherence can also be used as criteria for extracting key-frames.
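
The amplitude comparison behind such a result can be sketched as follows, assuming per-trial epochs time-locked to image onset, a conventional 300-600 ms P3b window, and a paired-t comparison between left- and right-parietal channels. The window and the statistical test are illustrative assumptions, not the paper's exact analysis:

```python
import numpy as np

def p3b_mean_amplitude(epochs, sfreq, window=(0.3, 0.6)):
    """Mean amplitude in the P3b window for each trial.
    epochs: (n_trials, n_samples) array, time-locked to image onset."""
    i0, i1 = (int(t * sfreq) for t in window)
    return epochs[:, i0:i1].mean(axis=1)

def paired_t(left_amps, right_amps):
    """Paired t statistic for left- vs right-parietal amplitudes."""
    d = right_amps - left_amps
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

A large positive t value for relevant images, and a near-zero one for non-relevant images, would correspond to the lobe-asymmetry criterion the study uses to flag key-frames.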

21 Century Video Image Fashion Communication - Focusing on Prada Fashion Animation - (21세기 영상 패션 커뮤니케이션 - 프라다 패션 애니메이션을 중심으로 -)

  • Jang, Ra-Yoon;Yang, Sook-Hi
    • The Research Journal of the Costume Culture
    • /
    • v.18 no.6
    • /
    • pp.1318-1330
    • /
    • 2010
  • The 21st century is the age when a sensational image has more explanatory power, and can deliver a more powerful message, than logical argument. Powerful visual images make a big impact on people throughout the world, overcoming linguistic barriers and even replacing language as a means of communication. In the fashion field, the concept and power of visual images within the new multimedia of the 21st century are becoming increasingly important. In recent years, videos, movies, and animation features have been produced directly to enhance visual effects, and attempts to use these new tools as communication methods are increasing. This study focuses on animation content that luxury international brands have used to overcome the prejudice attached to images emphasizing value, quality, and heritage. Its purpose is to examine the specific character of fashion animation, to give an overview of the concept of 21st-century video fashion communication, and to show how a collection concept, including color and detail, places emphasis on visual images. Analysis of previous research, theoretical study of the literature, and a case study of Prada fashion animation led to the following conclusion: the two Prada fashion animations share realism, dramatic impact, and convergence in their expression methods, and creativeness, hybridity, and a happy ending in their contents. I hope this study prompts interest, from various angles, in communication in the fashion world, a social and cultural phenomenon that changes rapidly.

A study on the sensory elements of the advertising image symbolizing sound (사운드를 심벌화한 광고 영상의 감각요소 연구)

  • Kim, HyungJoon;Chung, JeanHun
    • Journal of Digital Convergence
    • /
    • v.18 no.7
    • /
    • pp.369-374
    • /
    • 2020
  • A variety of sensory elements are used in video advertisements promoting products. Video advertising using visual and auditory elements is a representative means of marketing. An advertising video imprints itself on viewers by continuously or repeatedly exposing visual elements such as a logo, a specific image, or a phrase, and such visual images are an effective way to symbolize a brand image. While the advertising videos of most car brands symbolize visual elements, the advertising video for Kia's K5 symbolized an auditory element, sound, to produce the K5's unique advertising image. In this paper, we compare Kia's K5 advertisement, which symbolizes auditory elements, with the advertisements of other brands, and study the techniques and effects used in its production.

A Bus Data Compression Method on a Phase-Based On-Chip Bus

  • Lee, Jae-Sung
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.12 no.2
    • /
    • pp.117-126
    • /
    • 2012
  • This paper provides a method for the compressed transmission of on-chip bus data. As the data traffic on on-chip buses increases rapidly with growing video resolutions, many video processor chips suffer from a lack of bus bandwidth, and their IP cores have to wait longer for a bus grant. In multimedia data such as images and video, adjacent data signals very often differ little or not at all. Taking advantage of this, the paper develops a simple bus data compression method to improve chip performance and presents its hardware implementation. The method is applied to a Video Codec-1 (VC-1) decoder chip and reduces the processing time of one macro-block by 13.6% and 10.3% for SD and HD videos, respectively.
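
The underlying idea, exploiting the small differences between adjacent multimedia words, can be sketched in software as a simple delta encoder. This is an illustrative sketch of the principle only; the paper's scheme is a hardware design on a phase-based bus, not this exact encoding:

```python
def compress_bus_words(words, delta_bits=8):
    """Difference-based compression of a bus-word stream: emit a short
    signed delta when adjacent words are close, a full word otherwise."""
    limit = 1 << (delta_bits - 1)          # signed delta range
    out, prev = [], 0
    for w in words:
        d = w - prev
        if -limit <= d < limit:
            out.append(('D', d))           # only delta_bits on the wire
        else:
            out.append(('R', w))           # full-width raw word
        prev = w
    return out

def decompress_bus_words(stream):
    words, prev = [], 0
    for tag, v in stream:
        prev = prev + v if tag == 'D' else v
        words.append(prev)
    return words
```

On smooth image data most transfers fall in the delta range, so the effective bus occupancy per word drops, which is the source of the reduced macro-block processing time reported above.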

Content-based Video Information Retrieval and Streaming System using Viewpoint Invariant Regions

  • Park, Jong-an
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.1
    • /
    • pp.43-50
    • /
    • 2009
  • This paper addresses the need to acquire the principal objects, characters, and scenes from a video in order to support image-based queries. The movie frames are divided into frames with 2D representative images called "key frames". Various regions in a key frame are marked as key objects according to their textures and shapes. These key objects serve as a catalogue of regions to be searched and matched in the rest of the movie using viewpoint-invariant region calculation, providing the location, size, and orientation of all the objects occurring in the movie as a set of structures that together form a video profile. The profile records every occurrence of each key object in every frame in which it appears. This information can further ease the streaming of objects over various network-based viewing qualities. Hence, the method provides an effective, reduced-profile approach to automatically logging and viewing information through a query-by-example (QBE) procedure, while also dealing with video streaming issues.


Present Condition and View on the Wireless Communications of Geo-spatial Video System in Subway Trains (대열차 공간 화상설비의 무선설비 현황 및 전망)

  • Kim, Ji-Ho;Lee, Hyang-Beom
    • 한국정보통신설비학회:학술대회논문집
    • /
    • 2007.08a
    • /
    • pp.366-370
    • /
    • 2007
  • This paper discusses the most suitable wireless communications system for subway trains on the condition that a geo-spatial video system (GVS) is implemented in them. A geo-spatial video system for subway trains is a device that wirelessly transfers images, captured by cameras within a subway station building or in and around a subway track, to oncoming trains, allowing the operator in the operating room to monitor the state of the platform and track, the flow of passengers, and the condition of passengers getting on and off. To minimize such problems, secure public safety, and prevent accidents and calamities, geo-spatial video systems for subway trains have been increasingly introduced. The candidate wireless communications systems for a GVS for subway trains include HF (High Frequency), IR (Infra Red), M/W (Micro Wave), and wireless LAN approaches. Each has its own strengths and weaknesses, and different vendors offer different technologies.


Removing Shadows for the Surveillance System Using a Video Camera (비디오 카메라를 이용한 감시 장치에서 그림자의 제거)

  • Kim, Jung-Dae;Do, Yong-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.176-178
    • /
    • 2005
  • In the images of a video camera employed for surveillance, detecting targets by extracting the foreground image is of great importance. The foreground regions detected, however, include not only moving targets but also their shadows. This paper presents a novel technique to detect shadow pixels in the foreground image of a video camera. The image characteristics of the video cameras employed, a web-cam and a CCD, are first analysed in the HSV color space, and a pixel-level shadow detection technique is proposed based on the analysis. Compared with existing techniques, in which unified criteria are applied to all pixels, the proposed technique identifies shadow pixels using the fact that the effect of shadowing on each pixel differs depending on its brightness in the background image. Such an approach can accommodate local features in an image and maintain consistent performance even in a changing environment. In experiments targeting pedestrians, the proposed technique showed better results than an existing technique.
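
A pixel-level HSV shadow test of this family can be sketched as follows: a pixel is a shadow candidate if its value (brightness) drops by a bounded ratio relative to the background while hue and saturation stay close. The brightness-dependent lower bound below gestures at the paper's idea of adapting the criterion to background brightness; all thresholds are illustrative assumptions, not the paper's values:

```python
import numpy as np

def shadow_mask(fg_hsv, bg_hsv, tau_h=10.0, tau_s=40.0):
    """Pixel-level shadow detection in HSV for a foreground frame
    against a background model (OpenCV-style hue range 0..179)."""
    h_f, s_f, v_f = (fg_hsv[..., i].astype(float) for i in range(3))
    h_b, s_b, v_b = (bg_hsv[..., i].astype(float) for i in range(3))
    ratio = v_f / np.maximum(v_b, 1.0)
    # darker background pixels get a wider allowed brightness-drop band
    alpha = np.where(v_b < 128, 0.3, 0.5)
    dh = np.abs(h_f - h_b)
    dh = np.minimum(dh, 180.0 - dh)        # hue is circular
    return (ratio >= alpha) & (ratio <= 0.95) & \
           (np.abs(s_f - s_b) <= tau_s) & (dh <= tau_h)
```

Pixels passing the test are removed from the foreground mask, leaving only the moving targets for the surveillance stage.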
