• Title/Summary/Keyword: 3D video system

Design and Implementation of a Video-conference System supporting H.323 and SIP (H.323과 SIP를 지원하는 영상회의 시스템의 설계 및 구현)

  • Seong, Dong-Su
    • The KIPS Transactions:PartD
    • /
    • v.10D no.3
    • /
    • pp.521-530
    • /
    • 2003
  • Various multimedia application services have been developed alongside advances in high-speed networks and computers. Among these services, video conferencing over the Internet is particularly useful and important, and it has been standardized as ITU-T H.323 and IETF SIP. H.323 is currently used in many products, but the two standards will coexist for a long time. Since interoperability between them is needed, H.323-SIP gateways have been developed, and in addition, a video-conference system supporting both standards is required. We have implemented a video-conference terminal that supports both ITU-T H.323 and IETF SIP over the Internet. The implementation was designed to maximize the modules shared by the two protocol stacks, and interoperability with other systems was verified through experiments. The shared-module idea is sketched below.
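
A minimal sketch of the shared-module layout this abstract describes: one media core reused by two signaling front-ends. All class and method names here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical structure: the media path is common; only signaling differs.

class MediaCore:
    """Codec, RTP transport, and A/V capture shared by both stacks."""
    def open_channels(self, remote_ip: str, audio_port: int, video_port: int):
        print(f"RTP audio -> {remote_ip}:{audio_port}, video -> {remote_ip}:{video_port}")

class H323Signaling:
    """H.225/H.245 call setup; hands negotiated ports to the media core."""
    def __init__(self, media: MediaCore):
        self.media = media
    def place_call(self, remote_ip: str):
        # ... Q.931 setup and H.245 capability exchange would happen here ...
        self.media.open_channels(remote_ip, audio_port=5004, video_port=5006)

class SipSignaling:
    """SIP INVITE/SDP offer-answer; reuses the same media core."""
    def __init__(self, media: MediaCore):
        self.media = media
    def place_call(self, remote_ip: str):
        # ... INVITE / 200 OK / ACK with SDP negotiation would happen here ...
        self.media.open_channels(remote_ip, audio_port=5004, video_port=5006)

core = MediaCore()
H323Signaling(core).place_call("192.0.2.10")   # same media path,
SipSignaling(core).place_call("192.0.2.20")    # two signaling protocols
```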

A Study of Video Synchronization Method for Live 3D Stereoscopic Camera (실시간 3D 영상 카메라의 영상 동기화 방법에 관한 연구)

  • Han, Byung-Wan;Lim, Sung-Jun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.6
    • /
    • pp.263-268
    • /
    • 2013
  • A stereoscopic image is produced by three-dimensional image processing that combines the images from a left and a right camera, so synchronizing the two camera inputs is critical. This paper proposes a synchronization method for the two camera input streams. The method is implemented in software so that various video formats can be supported, and it is intended for use in glasses-free (autostereoscopic) display systems that use several cameras. A timestamp-pairing sketch follows.
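
A minimal sketch of timestamp-based frame pairing, one plausible way to synchronize two camera streams as the abstract describes. The tolerance value and the (timestamp, frame) representation are assumptions.

```python
def pair_stereo_frames(left, right, tolerance_s=0.008):
    """left/right: lists of (timestamp, frame) sorted by timestamp.
    Returns (left_frame, right_frame) pairs whose capture times differ
    by less than the tolerance (roughly half a frame period at 30 fps)."""
    pairs, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        dt = left[i][0] - right[j][0]
        if abs(dt) <= tolerance_s:
            pairs.append((left[i][1], right[j][1]))
            i += 1
            j += 1
        elif dt < 0:          # left frame is older; drop it
            i += 1
        else:                 # right frame is older; drop it
            j += 1
    return pairs

# e.g. 30 fps streams with a small constant offset between the cameras
left = [(n / 30.0, f"L{n}") for n in range(5)]
right = [(n / 30.0 + 0.003, f"R{n}") for n in range(5)]
print(pair_stereo_frames(left, right))  # [('L0', 'R0'), ('L1', 'R1'), ...]
```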

A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.2
    • /
    • pp.1118-1133
    • /
    • 2017
  • Computer vision-based human activity recognition (HAR) has attracted wide attention due to its applications in fields such as smart-home healthcare for elderly people. A video-based activity recognition system typically aims to react to people's behavior so that it can proactively assist them with their tasks. This work proposes a novel approach to depth-video-based human activity recognition using joint-based motion features of depth body shapes and a Deep Belief Network (DBN). From the depth video, the body parts are first segmented by a trained random forest. Motion features representing the magnitude and direction of each joint's displacement in the next frame are then extracted, and finally these features are used to train a DBN for later recognition. The proposed HAR approach outperformed conventional approaches on private and public datasets, indicating its suitability for practical applications in smart controlled environments. The joint-motion features are sketched below.
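
A small sketch of per-joint motion features (magnitude and direction of each joint's displacement between consecutive frames), which is the feature type the abstract names. The 3-D joint arrays are assumed inputs; the random-forest segmentation and DBN training stages are omitted.

```python
import numpy as np

def joint_motion_features(joints_t, joints_t1):
    """joints_t, joints_t1: (num_joints, 3) arrays of 3-D joint positions
    in consecutive frames. Returns per-joint magnitudes and unit directions
    concatenated into one flat feature vector."""
    disp = joints_t1 - joints_t                      # (J, 3) displacement
    mag = np.linalg.norm(disp, axis=1)               # (J,) motion magnitude
    # Unit direction vectors; guard against zero-length displacements.
    direction = np.divide(disp, mag[:, None], out=np.zeros_like(disp),
                          where=mag[:, None] > 0)
    return np.concatenate([mag, direction.ravel()])

rng = np.random.default_rng(0)
f0 = rng.normal(size=(20, 3))                    # 20 joints, frame t
f1 = f0 + rng.normal(scale=0.05, size=(20, 3))   # frame t+1
print(joint_motion_features(f0, f1).shape)       # (80,) = 20 mags + 60 dirs
```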

Robust 3D Wavelet Watermarking for Video

  • Jie Yang;Kim, Young-Gon;Lee, Moon-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2002.11a
    • /
    • pp.173-176
    • /
    • 2002
  • This paper proposes a new approach to digital watermarking and secure copyright protection of video, the principal aim being to discourage illicit copying and distribution of copyrighted material. The method is based on the three-dimensional discrete wavelet transform of a video scene. The watermark is copyright information encoded as a spread-spectrum signal to ensure system security, and it is embedded in the 3D DWT magnitude of video chunks. The performance of the technique is evaluated experimentally. A minimal embedding sketch follows.
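
A minimal sketch of spread-spectrum embedding in a 3-D DWT, following the general idea in the abstract. The embedding strength `alpha`, the Haar wavelet, and the choice of the `ddd` subband are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
import pywt

def embed_watermark(video_chunk, key=42, alpha=2.0):
    """video_chunk: (frames, height, width) float array."""
    coeffs = pywt.dwtn(video_chunk, "haar")          # 3-D DWT -> 8 subbands
    rng = np.random.default_rng(key)                 # key seeds the PN sequence
    detail = coeffs["ddd"]                           # high-frequency subband
    pn = rng.choice([-1.0, 1.0], size=detail.shape)  # spread-spectrum signal
    coeffs["ddd"] = detail + alpha * pn              # additive embedding
    return pywt.idwtn(coeffs, "haar"), pn

video = np.random.rand(8, 64, 64)                    # stand-in video chunk
marked, pn = embed_watermark(video)

# Detection: correlate the re-transformed chunk with the PN sequence.
corr = np.sum(pywt.dwtn(marked, "haar")["ddd"] * pn) / pn.size
print(f"detector response: {corr:.2f}")              # ~alpha when mark present
```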

3D Augmented Reality Streaming System Based on a Lamina Display

  • Baek, Hogil;Park, Jinwoo;Kim, Youngrok;Park, Sungwoong;Choi, Hee-Jin;Min, Sung-Wook
    • Current Optics and Photonics
    • /
    • v.5 no.1
    • /
    • pp.32-39
    • /
    • 2021
  • We propose a three-dimensional (3D) streaming system based on a lamina display that can convey field information in real time by creating floating 3D images that satisfy the accommodation cue. The proposed system consists of three main parts: a 3D vision camera unit that captures RGB and depth data in real time, a 3D image engine unit that renders the 3D volume with a fast response time from the RGB and depth data, and an optical floating unit that brings the rendered 3D image out of the system to increase the sense of presence. Furthermore, we devise the streaming method required to implement augmented reality (AR) images using a multilayered image, and the proposed method for real-time, non-face-to-face AR 3D video communication is verified experimentally. A depth-slicing sketch for such a multilayered image appears below.
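
A sketch of slicing an RGB-D frame into a multilayered image, one plausible preprocessing step for a layered (lamina-style) 3-D display as the abstract describes. The layer count and uniform depth binning are assumptions for illustration.

```python
import numpy as np

def depth_slice(rgb, depth, num_layers=8):
    """rgb: (H, W, 3), depth: (H, W) normalized to [0, 1).
    Returns num_layers masked RGB images, ordered near to far."""
    layer_idx = np.clip((depth * num_layers).astype(int), 0, num_layers - 1)
    layers = []
    for k in range(num_layers):
        mask = (layer_idx == k)[..., None]       # pixels in this depth bin
        layers.append(np.where(mask, rgb, 0))    # keep only this slice
    return layers

rgb = np.random.rand(240, 320, 3)                # stand-in RGB frame
depth = np.random.rand(240, 320)                 # stand-in depth map
layers = depth_slice(rgb, depth)
print(len(layers), layers[0].shape)              # 8 (240, 320, 3)
```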

Enhanced 3D Residual Network for Human Fall Detection in Video Surveillance

  • Li, Suyuan;Song, Xin;Cao, Jing;Xu, Siyang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.12
    • /
    • pp.3991-4007
    • /
    • 2022
  • In public healthcare, a computational system that can automatically and efficiently detect and classify falls in a video sequence has significant potential. With the advancement of deep learning, methods that can extract temporal and spatial information have become more widespread. However, the shallow networks usually adopted by traditional 3D CNNs cannot reach the recognition accuracy of deeper networks, and experience with neural networks shows that gradient explosion can occur as layers are added. To address these issues, an enhanced three-dimensional ResNet-based method for fall detection (3D-ERes-FD) is proposed to extract spatio-temporal features directly. In this method, a 50-layer 3D residual network deepens the model to improve fall recognition accuracy, and enhanced residual units with four convolutional layers are developed to reduce the number of parameters while increasing network depth. According to the experimental results, the proposed method outperformed several state-of-the-art methods. A four-convolution residual unit is sketched below.
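
A hedged sketch of a 3-D residual unit with four convolutional layers, mirroring the "enhanced residual unit" idea in the abstract. The channel sizes, kernel shapes, and bottleneck layout are assumptions; the paper's exact unit may differ.

```python
import torch
import torch.nn as nn

class Residual3DUnit(nn.Module):
    def __init__(self, channels, bottleneck=None):
        super().__init__()
        mid = bottleneck or channels // 4        # squeeze to cut parameters
        self.body = nn.Sequential(
            nn.Conv3d(channels, mid, kernel_size=1), nn.BatchNorm3d(mid),
            nn.ReLU(inplace=True),
            # temporal then spatial 3-D convolutions
            nn.Conv3d(mid, mid, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            nn.BatchNorm3d(mid), nn.ReLU(inplace=True),
            nn.Conv3d(mid, mid, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.BatchNorm3d(mid), nn.ReLU(inplace=True),
            nn.Conv3d(mid, channels, kernel_size=1), nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))       # identity skip connection

x = torch.randn(1, 64, 8, 56, 56)                # (N, C, T, H, W) video clip
print(Residual3DUnit(64)(x).shape)               # torch.Size([1, 64, 8, 56, 56])
```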

A 1.485 Gbps Wireless Video Signal Transmission System at 240 GHz (240 GHz, 1.485 Gbps 비디오신호 무선 전송 시스템)

  • Lee, Won-Hui;Chung, Tae-Jin
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.4
    • /
    • pp.105-113
    • /
    • 2010
  • In this paper, a 1.485 Gbps video signal transmission system using a 240 GHz carrier frequency was designed and simulated. A sub-harmonic mixer based on a Schottky barrier diode was simulated in both the transmitter and the receiver. Heterodyne and direct-detection receivers were both simulated for performance comparison. ASK modulation was used in the transmitter and envelope detection in the receiver. The transmitter simulation showed an RF output power of -11.4 dBm (73 μW) for an IF input power of -3 dBm (0.5 mW) at an LO power of 7 dBm (5 mW) in the sub-harmonic mixer, corresponding to a single-sideband (SSB) conversion loss of 8.4 dB. This value is similar to the 8.0 dB (SSB) conversion loss of VDI's commercial model WR3.4SHM (220~325 GHz) at 240 GHz. The combined transmitter and receiver simulations showed that the recovered waveforms agreed well with the transmitted 1.485 Gbps NRZ signal. An ASK/envelope-detection sketch follows.
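
A toy baseband sketch of ASK modulation with envelope detection, the scheme named in the abstract. The sample rate, carrier, and bit rate are scaled-down illustrative assumptions (a real 240 GHz link cannot be sampled like this).

```python
import numpy as np

fs, f_c, bit_rate = 1_000_000, 100_000, 10_000   # Hz; toy values
samples_per_bit = fs // bit_rate

bits = np.random.randint(0, 2, 32)
baseband = np.repeat(bits, samples_per_bit).astype(float)   # NRZ waveform
t = np.arange(baseband.size) / fs
tx = baseband * np.cos(2 * np.pi * f_c * t)      # on-off keyed carrier

# Envelope detection: rectify, then moving-average over one carrier cycle.
rectified = np.abs(tx)
kernel = np.ones(fs // f_c) / (fs // f_c)
envelope = np.convolve(rectified, kernel, mode="same")

# Decide each bit from the mid-bit envelope sample (|cos| averages ~0.64).
mid = np.arange(bits.size) * samples_per_bit + samples_per_bit // 2
recovered = (envelope[mid] > 0.25).astype(int)
print("bit errors:", np.count_nonzero(recovered != bits))   # expect 0
```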

Production of fusion-type realistic contents using 3D motion control technology (3D모션 컨트롤 기술을 이용한 융합형 실감 콘텐츠 제작)

  • Jeong, Sun-Ri;Chang, Seok-Joo
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.4
    • /
    • pp.146-151
    • /
    • 2019
  • In this paper, we developed multi-view video content based on realistic-media technology, along with a pilot production, providing a content production technique that lets users choose a desired viewing direction by offering images from multiple viewpoints. We also created multi-view video content for indirectly experiencing local cultural tourism resources and produced cyber-tour content based on multi-view video. This technology can be used to create interactive 3D realistic content for public education venues such as libraries, kindergartens, elementary and middle schools, senior universities, housewives' classrooms, and lifelong education centers. The domestic VR market is still in its infancy and is expected to develop in combination with the 3D market for games and shopping malls; as domestic educational trends and the demand for a public social education system grow, the market is expected to expand gradually. A simple viewpoint-selection sketch follows.
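
A minimal sketch of viewpoint selection for multi-view content: pick the camera whose viewing angle is closest to the direction the user requests. The evenly spaced circular camera rig is an assumed setup, not the paper's configuration.

```python
import numpy as np

def select_view(user_yaw_deg, camera_yaws_deg):
    """Return the index of the camera nearest the requested yaw,
    using wrap-around angular distance."""
    diff = (np.asarray(camera_yaws_deg) - user_yaw_deg + 180) % 360 - 180
    return int(np.argmin(np.abs(diff)))

cams = [0, 45, 90, 135, 180, 225, 270, 315]      # 8 cameras on a circle
print(select_view(100, cams))   # -> 2 (the 90-degree camera)
print(select_view(350, cams))   # -> 0 (wraps around to the 0-degree camera)
```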

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology
    • /
    • v.7 no.2
    • /
    • pp.124-130
    • /
    • 1998
  • High-speed 3D vision systems are essential for autonomous robot and vehicle control applications. In our study, a stereo vision process has been developed. It consists of three steps: extracting edges in the right and left images, matching corresponding edges, and computing the 3D map. The process is implemented on a VME 150/40 Imaging Technology vision system, a modular system composed of a display card, an acquisition card, a 4-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP, with a 64×32-bit instruction cache and two 1024×32-bit internal RAMs. Each module is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communication between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses; the VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs perform edge detection, one for the right image and one for the left, and the third computes the matching and the 3D calculation. With 512×512-pixel images, the sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. Results could surely be improved with a specially suited multiprocessor card. The three-step pipeline is sketched below.
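
A compact sketch of the three-step stereo pipeline the abstract describes (edge extraction, matching, 3-D map), written with OpenCV rather than the paper's DSP hardware. The focal length, baseline, and the use of Canny plus block matching as a stand-in for edge matching are illustrative assumptions.

```python
import numpy as np
import cv2

def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    # Step 1: edge extraction in both images (one DSP per image in the paper).
    edges_l = cv2.Canny(left_gray, 50, 150)
    edges_r = cv2.Canny(right_gray, 50, 150)

    # Step 2: correspondence; block matching stands in for edge matching.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Step 3: 3-D map via depth = f * B / disparity, kept only at edges.
    valid = (disparity > 0) & (edges_l > 0)
    depth = np.zeros_like(disparity)
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return edges_l, edges_r, depth

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # assumed rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
if left is not None and right is not None:
    _, _, depth = stereo_depth(left, right)
    print("depth range (m):", depth[depth > 0].min(), depth[depth > 0].max())
```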

An Automatic Camera Tracking System for Video Surveillance

  • Lee, Sang-Hwa;Sharma, Siddharth;Lin, Sang-Lin;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.07a
    • /
    • pp.42-45
    • /
    • 2010
  • This paper proposes an intelligent video surveillance system for tracking human objects. The proposed system integrates object extraction, human object recognition, face detection, and camera control. First, objects in the video signal are extracted using background subtraction. Each object region is then examined to decide whether it is human; for this recognition, the region-based shape descriptor of MPEG-7, the angular radial transform (ART), is used to learn the shapes of human bodies. When the object is judged to be a human or something worth investigating, the face region is detected. Finally, the face or object region is tracked in the video, and a pan/tilt/zoom (PTZ) camera follows the moving object using its motion information. Simulations were performed with real CCTV cameras and their communication protocol. According to the experiments, the proposed system tracks a moving object (human) automatically, not only in the image domain but also in real 3-D space. The system reduces the need for human supervisors and improves surveillance efficiency through computer vision techniques. The front end of such a pipeline is sketched below.
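
A hedged sketch of the pipeline's front end: background subtraction to extract moving objects, then face detection inside candidate regions. The MPEG-7 ART human/non-human classifier and the paper's PTZ control protocol are not reproduced here; OpenCV stand-ins are used instead.

```python
import cv2

backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                 # any camera index or video file path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)           # moving-object foreground mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:      # ignore small blobs / noise
            continue
        x, y, w, h = cv2.boundingRect(c)  # candidate object region
        roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(roi, 1.1, 4)
        # A real system would now classify the region (e.g. with an ART
        # shape descriptor) and send pan/tilt/zoom commands to re-center it.
        print(f"object at ({x},{y},{w},{h}), faces found: {len(faces)}")
cap.release()
```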
