• Title/Summary/Keyword: Video Images


A Study on the 3D Video Generation Technique using Multi-view and Depth Camera (다시점 카메라 및 depth 카메라를 이용한 3 차원 비디오 생성 기술 연구)

  • Um, Gi-Mun;Chang, Eun-Young;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.549-552
    • /
    • 2005
  • This paper presents a 3D video content generation technique and system that use multi-view images and a depth map. The proposed system uses 3-view video and depth inputs from a 3-view video camera and a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained from the depth camera and the multi-view stereo matching technique. These two maps are fused to obtain a more reliable depth map. The fused depth map is used not only to insert a virtual object into the scene based on the depth key, but also to synthesize virtual-viewpoint images. Preliminary test results are given to show the functionality of the proposed technique.
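As a rough illustration of the fusion and depth-key steps described above, the following Python sketch assumes per-pixel confidence maps for the depth-camera and stereo-matching results and fuses them with a confidence-weighted average; the fusion rule, function names, and the depth-key compositing step are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuse_depth_maps(depth_cam, depth_stereo, conf_cam, conf_stereo):
    """Fuse a depth-camera map with a stereo-matching map by
    confidence-weighted averaging (illustrative rule only)."""
    w = conf_cam + conf_stereo + 1e-6
    return (conf_cam * depth_cam + conf_stereo * depth_stereo) / w

def depth_key_insert(scene, virtual_obj, scene_depth, obj_depth):
    """Insert a virtual object using the fused depth as a key:
    the object is visible wherever it lies closer than the scene."""
    mask = obj_depth < scene_depth
    out = scene.copy()
    out[mask] = virtual_obj[mask]
    return out
```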


Analysis of the Psychological Effects of Exposure to Different Types of Waterscape Facilities for Urban Green Space Planning

  • Jo, Hyun-Ju;Wang, Jie-Ming
    • Journal of Environmental Science International
    • /
    • v.25 no.9
    • /
    • pp.1223-1231
    • /
    • 2016
  • To create urban landscapes that take human emotion into consideration, the present study verified the psychological effects of artificial waterscape facilities on users, as these facilities significantly affect users' psychological comfort. Data were collected using SD scales and the POMS from 60 male and 60 female participants after they watched videos of four waterscape facilities. Participants deemed the video clip of a fountain waterscape to be artificial and linked it with changeable images that increased their vigor. The video clip of the waterfall stimulated various impressions (e.g., vital, liked, active) and changed participants' mood states by increasing vigor and decreasing fatigue. The video clip of the pond yielded familiar impressions, produced fewer free images, and decreased tension among participants. Finally, the video clip of the stream stimulated quiet and comfortable images and reduced negative feelings of anger, confusion, and depression among participants. Furthermore, males experienced more positive effects than females, regardless of the type of waterscape facility. The findings indicate that the four waterscape facilities influenced participants' mood states and that the psychological effects differed by gender. The data suggest that landscape planners need to carefully consider their choice of waterscape facility when designing green spaces so that the space is psychologically comforting to users.

A Method for Object Tracking Based on Background Stabilization (동적 비디오 기반 안정화 및 객체 추적 방법)

  • Jung, Hunjo;Lee, Dongeun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.14 no.1
    • /
    • pp.77-85
    • /
    • 2018
  • This paper proposes a robust digital video stabilization algorithm, based on phase-correlation motion correction, for extracting and tracking an object. The proposed algorithm consists of background stabilization based on motion estimation and extraction of a moving object. Motion vectors are estimated by calculating the phase correlation over a series of frames in eight sub-images located at the corners of the video. The global motion vector is estimated from the multiple local motions of the sub-images, and the image is compensated accordingly. Through the phase-correlation calculations, the background motion between the former frame and the compensated frame, which share the same background, can be subtracted, and the moving objects in the video can be extracted. Computing the phase correlation to track robust motion vectors compensates for disturbances such as translation, rotation, expansion, and shrinking of the video in all directions of the sub-images. Experimental results show that the proposed digital image stabilization algorithm provides continuously stabilized videos and tracks object movements.
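A minimal sketch of translation estimation by phase correlation, the core operation of the stabilization step above. The block size, the use of four corner sub-images instead of the paper's eight, and the simple averaging into a global motion vector are illustrative assumptions.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (dy, dx) translation between two grayscale
    sub-images via phase correlation."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of the correlation back to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

def global_motion(prev_frame, cur_frame, block=64):
    """Estimate global motion from corner sub-images (a simplified
    stand-in for the paper's eight sub-images) and average them."""
    h, w = prev_frame.shape
    corners = [(0, 0), (0, w - block), (h - block, 0), (h - block, w - block)]
    shifts = [phase_correlation(prev_frame[y:y+block, x:x+block],
                                cur_frame[y:y+block, x:x+block])
              for y, x in corners]
    return np.mean(shifts, axis=0)  # (dy, dx) to compensate
```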

Video Image Transmissions over DDS Protocol for Unmanned Air System (DDS 표준 기반 무인기 영상 데이터 전송 연구)

  • Go, Kyung-Min;Kwon, Cheol-Hee;Lee, Jong-Soon;Kim, Young-Taek
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11B
    • /
    • pp.1732-1737
    • /
    • 2010
  • Currently, one of the main purposes of the military in using an Unmanned Air System (UAS) is to perform surveillance and reconnaissance of hostile forces. To carry out this mission, the unmanned aerial vehicle (UAV) transmits video images to the ground control station using ISR devices installed on the UAV. After receiving the images, the ground control station distributes them to various types of users, and in this case it is important to maintain QoS. This paper presents data delivery and QoS management using DDS for UAV video images. The experimental results, based on H.264 and JPEG2000, show that the DDS standard can be applied to video image transmission for a UAS.
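The paper does not give its DDS sample type, so the Python sketch below only illustrates the general pattern of splitting an encoded video frame into fixed-size samples for publication over a DDS topic. The VideoChunk fields and the 60 KB payload limit are hypothetical; in a real deployment each chunk would be published through a DDS data writer configured with appropriate reliability and deadline QoS.

```python
from dataclasses import dataclass

@dataclass
class VideoChunk:
    stream_id: int     # identifies the UAV video stream
    frame_no: int      # frame sequence number
    chunk_no: int      # chunk index within the frame
    total_chunks: int  # chunks per frame, used for reassembly
    payload: bytes     # slice of the H.264 / JPEG2000 bitstream

def split_frame(encoded_frame: bytes, stream_id: int, frame_no: int,
                max_payload: int = 60 * 1024):
    """Split one encoded frame into DDS-sized samples (illustrative)."""
    n = max(1, -(-len(encoded_frame) // max_payload))  # ceiling division
    return [VideoChunk(stream_id, frame_no, i, n,
                       encoded_frame[i * max_payload:(i + 1) * max_payload])
            for i in range(n)]
```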

2D Adjacency Matrix Generation using DCT for UWV contents

  • Li, Xiaorui;Lee, Euisang;Kang, Dongjin;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.11a
    • /
    • pp.39-42
    • /
    • 2016
  • As display devices such as TVs and signage become larger, media types are shifting toward wider views such as UHD, panoramic, and jigsaw-like media. In particular, panoramic and jigsaw-like media are realized by stitching video clips captured by different cameras or devices. In order to stitch those video clips, a 2D adjacency matrix, which describes the spatial relationships among the clips, must be found. The Discrete Cosine Transform (DCT), widely used as a compression transform, converts each frame of a video source from the spatial domain into the frequency domain. Based on these compressed features, the 2D adjacency matrix of the images can be found, so that a spatial map of the images can be built efficiently using DCT. This paper proposes a new method of generating a 2D adjacency matrix using DCT for producing panoramic and jigsaw-like media from various individual video clips.
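A small sketch of the idea, assuming grayscale frames as NumPy arrays: low-frequency DCT coefficients of each clip's border strips serve as signatures, and two clips are marked adjacent when the signatures of facing edges are close. The strip width, coefficient count, distance threshold, and the right/left-only test are illustrative assumptions; the paper's exact matching criterion is not reproduced here.

```python
import numpy as np
from scipy.fft import dct

def edge_signature(frame, side, width=8, coeffs=16):
    """Low-frequency DCT signature of one border strip of a frame."""
    strips = {'left':   frame[:, :width],
              'right':  frame[:, -width:],
              'top':    frame[:width, :],
              'bottom': frame[-width:, :]}
    strip = strips[side].mean(axis=1 if side in ('left', 'right') else 0)
    return dct(strip, norm='ortho')[:coeffs]

def adjacency_matrix(frames, threshold=5.0):
    """Hypothetical right/left adjacency test: clip j is placed to the
    right of clip i when their facing edge signatures are close."""
    n = len(frames)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(edge_signature(frames[i], 'right')
                               - edge_signature(frames[j], 'left'))
            adj[i, j] = d < threshold
    return adj
```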


A New Denoising Method for Time-lapse Video using Background Modeling

  • Park, Sanghyun
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.125-138
    • /
    • 2020
  • Due to the development of camera technology, the cost of producing time-lapse video has been reduced, and time-lapse videos are being applied in many fields. A time-lapse video is created from images captured at long intervals over a long period. In this paper, we propose a method to improve the quality of time-lapse videos that monitor changes in plants. Considering the characteristics of time-lapse video, we propose a method of separating the desired objects from unnecessary ones and removing the unnecessary elements. The characteristic of time-lapse videos that we exploit is that unnecessary elements appear only intermittently in the captured images. In the proposed method, noise is removed by applying a codebook background modeling algorithm that uses this characteristic. Experimental results show that the proposed method is simple and accurate in finding and removing unnecessary elements in time-lapse videos.
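The sketch below shows a heavily simplified per-pixel codebook for this kind of denoising: the background model is just a brightness range learned from training frames, and pixels falling outside it are treated as intermittent, unnecessary elements and replaced by the modeled background. The bound parameter and the min/max range are assumptions; a full codebook algorithm keeps multiple codewords per pixel, which is not reproduced here.

```python
import numpy as np

def build_codebook(frames, bound=10.0):
    """Learn a per-pixel brightness range (a heavily simplified
    stand-in for a full per-pixel codebook) from training frames."""
    stack = np.stack(frames).astype(np.float32)   # shape (T, H, W)
    low = stack.min(axis=0) - bound
    high = stack.max(axis=0) + bound
    background = np.median(stack, axis=0)         # reference background
    return low, high, background

def remove_intermittent_noise(frame, low, high, background):
    """Pixels outside their learned range are treated as intermittent
    elements and replaced with the modeled background value."""
    f = frame.astype(np.float32)
    mask = (f < low) | (f > high)
    out = f.copy()
    out[mask] = background[mask]
    return out.astype(frame.dtype)
```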

Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek;Kim, Seung-Jin;Sang, Hong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998.06b
    • /
    • pp.147-153
    • /
    • 1998
  • This paper provides a method for video augmentation using image interpolation. In computer graphics or augmented reality, 3D information about a model object is usually necessary to generate 2D views of the model, which are then inserted into or overlaid on environmental views or real video frames. However, we do not require a three-dimensional model; instead, images of the model object at a few locations are used to render views according to the motion of the video camera, which is calculated by an SfM algorithm using point matches under a weak-perspective (scaled-orthographic) projection model. Thus, a linear view interpolation algorithm is applied, rather than a 3D ray-tracing method, to obtain views of the model at viewpoints different from those of the model views. In order to obtain novel views that agree with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time, on the basis of 3D information recovered from the video images and the model views, respectively. During the sequence, motion parameters from the video frames are used to compute interpolation parameters, and the rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given. Finally, the limitations of the method and subjects for future research are discussed.
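As a sketch of the linear view interpolation used in place of 3D rendering: under the weak-perspective model, the image positions of matched points in an intermediate view can be taken as a linear blend of their positions in the two model views, with the blending parameter derived from the recovered camera motion. The function names and the interpretation of the parameter t are assumptions, not the paper's formulation.

```python
import numpy as np

def interpolate_view_points(pts_view0, pts_view1, t):
    """Linear view interpolation of matched image points: intermediate
    positions are a blend of the two model views, weighted by t in [0, 1]
    (assumed to come from the recovered camera motion)."""
    return (1.0 - t) * pts_view0 + t * pts_view1

def weak_perspective_project(points_3d, R, scale, t2d):
    """Weak-perspective (scaled-orthographic) projection as used by the
    SfM step: keep the first two rows of R, scale, then translate."""
    return scale * (points_3d @ R[:2].T) + t2d
```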


Statistical Motion Activity Descriptor for Video Retrieval (비디오 검색을 위한 통계적 움직임 활동 기술자)

  • 심동규;정재원;오대일;김해광
    • Journal of Broadcast Engineering
    • /
    • v.5 no.1
    • /
    • pp.2-9
    • /
    • 2000
  • This paper presents a statistical motion activity description method and its use in video retrieval, based on the intensity and direction of motion vectors extracted from a video sequence. Since the proposed method can represent the temporal and spatial cognitive characteristics of an entire video, of several images between key frames, or of images in a certain interval, it can be effectively applied to digital video services such as video retrieval, surveillance, multimedia databases, and broadcast filtering. In the paper, the effectiveness of the proposed algorithm is demonstrated on a large number of shots from the MPEG-7 video dataset.
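A small sketch of one way to build such a descriptor, using dense optical flow as the motion field: the descriptor concatenates intensity statistics of the motion magnitudes with a magnitude-weighted direction histogram, and retrieval compares descriptors by L2 distance. The exact statistics used in the paper are not reproduced; the feature set and bin count here are assumptions.

```python
import cv2
import numpy as np

def motion_activity_descriptor(prev_gray, cur_gray, n_dir_bins=8):
    """Statistical motion activity: magnitude mean/std plus a
    magnitude-weighted direction histogram between two frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    dir_hist, _ = np.histogram(ang, bins=n_dir_bins, range=(0, 2 * np.pi),
                               weights=mag, density=True)
    return np.concatenate(([mag.mean(), mag.std()], dir_hist))

def descriptor_distance(d1, d2):
    """Simple L2 distance for retrieval ranking."""
    return float(np.linalg.norm(d1 - d2))
```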


Registration of Video Avatar by Comparing Real and Synthetic Images (실제와 합성영상의 비교에 의한 비디오 아바타의 정합)

  • Park Moon-Ho;Ko Hee-Dong;Byun Hye-Ran
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.8
    • /
    • pp.477-485
    • /
    • 2006
  • In this paper, a video avatar, made from live video streams captured from a real participant, is used to represent a virtual participant. Using a video avatar to represent participants increases the sense of reality, but correct registration is also an important issue. We configured the real and virtual cameras to have the same characteristics in order to register the video avatar. Comparing real and synthetic images, which is possible because of the similarity between the real and virtual cameras, resolves the registration between the video avatar captured from the real environment and the virtual environment. The degree of misregistration is represented as an energy, which is then minimized to produce seamless registration. Experimental results show that the proposed method can be used effectively for registration of the video avatar.
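A minimal sketch of the registration-by-energy-minimization idea: the energy is the sum of squared differences between the real image and a synthetic image rendered with candidate pose parameters, and a derivative-free optimizer searches for the pose that minimizes it. The pose parameterization, the SSD energy, the Nelder-Mead choice, and render_fn are assumptions standing in for the paper's renderer and energy definition.

```python
import numpy as np
from scipy.optimize import minimize

def registration_energy(params, real_image, render_fn):
    """Energy = sum of squared differences between the real image and
    the synthetic image rendered with the current pose parameters."""
    synthetic = render_fn(params)  # user-supplied renderer (assumed)
    diff = real_image.astype(np.float32) - synthetic.astype(np.float32)
    return float(np.sum(diff * diff))

def register_avatar(real_image, render_fn, init_params):
    """Minimize the energy over the avatar's pose (e.g., x, y, scale)."""
    res = minimize(registration_energy, init_params,
                   args=(real_image, render_fn),
                   method='Nelder-Mead')
    return res.x
```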

3D-Distortion Based Rate Distortion Optimization for Video-Based Point Cloud Compression

  • Yihao Fu;Liquan Shen;Tianyi Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.435-449
    • /
    • 2023
  • The state-of-the-art video-based point cloud compression (V-PCC) compresses 3D point clouds efficiently by projecting points onto 2D images, which are then padded and compressed by High Efficiency Video Coding (HEVC). Pixels in the padded 2D images are classified into three groups: origin pixels, padded pixels, and unoccupied pixels. Origin pixels are generated from the projection of the 3D point cloud; padded pixels and unoccupied pixels are generated by copying values from origin pixels during image padding. Padded pixels, like origin pixels, are reconstructed into 3D space during geometry reconstruction, whereas unoccupied pixels are not reconstructed. The rate-distortion optimization (RDO) used in HEVC mainly aims to balance video distortion against video bitrate. However, traditional RDO is unreliable for padded and unoccupied pixels, which leads to a significant waste of bits in geometry reconstruction. In this paper, we propose a new RDO scheme that takes 3D distortion into account, instead of traditional video distortion, for padded and unoccupied pixels. First, these pixels are classified based on the occupancy map. Second, different strategies are applied to these pixels to calculate their 3D distortions. Finally, the obtained 3D distortions replace the sum of squared errors (SSE) during the full RDO process in intra prediction and inter prediction. The proposed method is applied to geometry frames. Experimental results show that the proposed algorithm achieves average bitrate savings of 31.41% and 6.14% for the D1 metric in the Random Access and All Intra settings, respectively, on geometry videos compared with the V-PCC anchor.
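A rough sketch of the occupancy-aware cost computation, assuming the pixels of a geometry block have already been labeled (0 = unoccupied, 1 = origin, 2 = padded; this three-way labeling, the per-pixel 3D-distortion maps passed in, and the function names are assumptions): origin pixels keep their SSE, while padded and unoccupied pixels contribute precomputed 3D distortions to the Lagrangian cost J = D + λR.

```python
import numpy as np

def rd_cost(distortion, rate, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

def occupancy_aware_cost(orig, recon, labels, d3d_padded, d3d_unoccupied,
                         rate, lam):
    """Mixed distortion for one geometry block: SSE for origin pixels,
    precomputed 3D distortions for padded and unoccupied pixels."""
    sse = (orig.astype(np.float64) - recon.astype(np.float64)) ** 2
    d = (sse[labels == 1].sum()            # origin pixels: keep SSE
         + d3d_padded[labels == 2].sum()   # padded pixels: 3D distortion
         + d3d_unoccupied[labels == 0].sum())  # unoccupied pixels
    return rd_cost(d, rate, lam)
```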