• Title/Summary/Keyword: 3D video system

Search Results: 405

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz;Wang, Jing;Fei, Zesong;Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.7
    • /
    • pp.3599-3619
    • /
    • 2019
  • In human activity recognition systems, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods are insufficient for extracting video features and cannot assess how much each component (static and motion) contributes. Our work highlights this problem and proposes a Static-Motion Fused Features Descriptor (SMFD), which intelligently leverages both static and motion features in a single descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only those trajectories located in the central region of the original video frame are kept, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points by using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors and guarantee an equal contribution from all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and produce the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments were conducted on three well-known datasets: UCF101, HMDB51, and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
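
The Cholesky-based fusion step described above can be sketched as whitening: concatenate the two descriptors and transform them with the Cholesky factor of their covariance so that no single component dominates. This is an illustrative sketch under that interpretation, not the paper's exact SMFD procedure; all names and dimensions are assumptions.

```python
import numpy as np

def cholesky_fuse(static_feats, motion_feats, eps=1e-6):
    """Fuse static and motion descriptors by Cholesky whitening.

    Illustrative sketch: concatenate the two descriptors, then whiten
    with the Cholesky factor of the (regularized) covariance so that
    no single component dominates the fused representation.
    """
    X = np.hstack([static_feats, motion_feats])      # (n_samples, d_s + d_m)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    L = np.linalg.cholesky(cov)                      # cov = L @ L.T
    # Whitened features: solve L @ z = (x - mu) for each sample
    Z = np.linalg.solve(L, (X - mu).T).T
    return Z

rng = np.random.default_rng(0)
fused = cholesky_fuse(rng.normal(size=(100, 8)), rng.normal(size=(100, 4)))
print(fused.shape)  # (100, 12)
```

After whitening, the fused features have (approximately) identity covariance, which is one way to read the abstract's "equal contribution of all descriptors".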

Da Vinci Robot-Assisted Pulmonary Lobectomy in Early Stage Lung Cancer - 3 cases report - (조기 폐암에서 다빈치 로봇을 이용한 폐엽절제술 - 3예 보고 -)

  • Haam, Seok-Jin;Lee, Kyo-Joon;Cho, Sang-Ho;Kim, Hyung-Joong;Jeon, Se-Eun;Lee, Doo-Yun
    • Journal of Chest Surgery
    • /
    • v.41 no.5
    • /
    • pp.659-662
    • /
    • 2008
  • Video-assisted pulmonary lobectomy was introduced in the early 1990s by several authors, and the frequency of video-assisted thoracic surgery (VATS) lobectomy for lung cancer has been slowly increasing because of its safety and oncologic acceptability in patients with early stage lung cancer. However, VATS is limited by 2D imaging, an unsteady camera platform, and the limited maneuverability of its instruments. The da Vinci Surgical System was recently introduced to overcome these limitations. It has a 3D endoscopic system with high-resolution, magnified binocular views and EndoWrist instruments. We report three cases of da Vinci robot system-assisted pulmonary lobectomy in patients with early stage lung cancer.

Influence of 3D Characteristics Perception on Presence, and Presence on Visual Fatigue and Perceived Eye Movement (3D 영상 특성 인식이 프레즌스, 그리고 프레즌스가 시각 피로도와 인지된 안구운동에 미치는 영향)

  • Yang, Ho-Cheol;Chung, Dong-Hun
    • Journal of Broadcast Engineering
    • /
    • v.17 no.1
    • /
    • pp.60-72
    • /
    • 2012
  • After the movie "AVATAR" became a cash-cow model for 3D cinema, the profits of 3D movies declined significantly. One reason is the limited understanding of human factors, for instance how viewers become immersed but sometimes fatigued. Although 3D images should be designed with the human visual system, including the eyes, in mind, most communication research has unfortunately ignored human factors. For those reasons, this study observed the effects of 3D video on viewers' psychological responses, particularly perceived eye movement, perceived characteristics, visual fatigue, and presence. With 90 participants, the results show that viewers' perceived characteristics affect their presence. In detail, first, materiality and tangibility are more important factors than clarity in 3D video, which means that when making 3D content or devices, materiality and tangibility should be considered more than any other factor. Second, this study examined whether we perceive our eyes as media; the results show that as viewers' presence level increased, they perceived more eye movement and their perceived visual fatigue decreased. This result means that when we move our eyes, we interact with the surrounding environment, so 3D content needs to provide vivid features to be more interactive. On the other hand, since the level of presence also influences visual fatigue, the two must be balanced in production and playback.

Fast Algorithm for 360-degree Videos Based on the Prediction of CU Depth Range and Fast Mode Decision

  • Zhang, Mengmeng;Zhang, Jing;Liu, Zhi;Mao, Fuqi;Yue, Wen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.6
    • /
    • pp.3165-3181
    • /
    • 2019
  • Spherical videos, also called 360-degree videos, have become increasingly popular due to the rapid development of virtual reality technology. However, the large amount of data in such videos is a huge challenge for existing transmission systems. To use the existing encoding framework, a spherical video must be converted into a 2D image plane using a specific projection format, e.g. the equirectangular projection (ERP) format. The existing High Efficiency Video Coding (HEVC) standard can effectively compress video content, but its enormous computational complexity makes the time spent compressing high-frame-rate, high-resolution 360-degree videos disproportionate to the benefits of compression. Focusing on the ERP format characteristics of 360-degree videos, this work develops a fast decision algorithm that predicts the coding unit (CU) depth interval and adaptively selects the intra prediction mode. The algorithm makes full use of the video characteristics of the ERP format by treating the polar and equatorial areas separately. It sets different reference blocks and decision conditions according to the degree of stretching, which reduces coding time while preserving quality. Compared with the original reference software HM-16.16, the proposed algorithm reduces time consumption by 39.3% in the all-intra configuration, while the BD-rate increases by only 0.84%.
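
The pole/equator split described above can be illustrated with a toy depth-range rule: ERP rows near the poles are heavily stretched and tend to be smooth, so shallow CU depths usually suffice there. The band thresholds and depth bounds below are illustrative assumptions, not the thresholds of the cited algorithm.

```python
def ctu_depth_range(ctu_row, num_rows):
    """Toy CU depth-range predictor for an ERP-projected frame.

    Rows near the poles (top/bottom of the ERP image) are heavily
    stretched and tend to be smooth, so shallow CU depths suffice;
    equatorial rows keep the full HEVC depth range. Thresholds are
    illustrative, not those of the cited algorithm.
    """
    latitude = abs((ctu_row + 0.5) / num_rows - 0.5) * 2.0  # 0 = equator, 1 = pole
    if latitude > 0.8:        # polar band: mostly large, smooth blocks
        return (0, 1)
    elif latitude > 0.5:      # mid band
        return (0, 2)
    return (0, 3)             # equatorial band: full depth range 0..3

print(ctu_depth_range(0, 68))   # top row -> polar band, shallow depths only
print(ctu_depth_range(34, 68))  # equatorial row -> full depth range
```

Skipping deep quadtree recursion for polar CTUs is what yields the encoding-time savings the abstract reports, at a small BD-rate cost.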

Hardware Channel Decoder for Holographic WORM Storage (홀로그래픽 WORM의 하드웨어 채널 디코더)

  • Hwang, Eui-Seok;Yoon, Pil-Sang;Kim, Hak-Sun;Park, Joo-Youn
    • Transactions of the Society of Information Storage Systems
    • /
    • v.1 no.2
    • /
    • pp.155-160
    • /
    • 2005
  • In this paper, a channel decoder that provides reliable data retrieval over the noisy holographic channel has been developed for a holographic WORM (write once, read many) system. It comprises various DSP (digital signal processing) blocks, such as an align-mark detector, an adaptive channel equalizer, a modulation decoder, and an ECC (error correction code) decoder. The DSP schemes are designed to reduce the effect of noise in the holographic WORM (H-WORM) system, particularly in the prototype of Daewoo Electronics (DEPROTO). For real-time data retrieval, the channel decoder was redesigned for FPGA (field programmable gate array) based hardware, where the DSP blocks compute in parallel, with memory buffers between blocks and controllers for driving the FPGA peripherals. As the input source for the experiments, MPEG-2 TS (transport stream) data was used and recorded to the DEPROTO system. During retrieval, the CCD (charge-coupled device), the capture device of DEPROTO, detects the retrieved images and transmits their signals to the FPGA of the hardware channel decoder. Finally, the output data stream of the channel decoder was transferred to an MPEG decoding board for monitoring the video signals. The experimental results showed an error-corrected BER (bit error rate) of less than $10^{-9}$, down from DEPROTO's raw BER of about $10^{-3}$. With the developed hardware channel decoder, real-time video demonstration was possible during the experiments. The operating clock of the FPGA was 60 MHz, fast enough to decode up to 120 mega channel bits per second.
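
The BER figures quoted above (raw $10^{-3}$ vs. error-corrected $10^{-9}$) are simply the fraction of bits that differ between the recorded and retrieved streams. A minimal sketch of that measurement, with made-up toy bit sequences:

```python
def bit_error_rate(sent, received):
    """Fraction of differing bits between two equal-length bit sequences."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

# Toy example: 1 flipped bit out of 10
sent     = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
received = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]
print(bit_error_rate(sent, received))  # 0.1
```

In the paper's setting, the ECC decoder is what takes the channel from the raw BER to the corrected BER; the metric itself stays the same.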


Deep Learning Based Pine Nut Detection in UAV Aerial Video (UAV 항공 영상에서의 딥러닝 기반 잣송이 검출)

  • Kim, Gyu-Min;Park, Sung-Jun;Hwang, Seung-Jun;Kim, Hee Yeong;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.1
    • /
    • pp.115-123
    • /
    • 2021
  • Pine nuts are Korea's representative nut forest product and a profitable crop. However, pine nuts are harvested by climbing the trees themselves, so the risk is high. To solve this problem, it is necessary to harvest pine nuts using a robot or an unmanned aerial vehicle (UAV). In this paper, we propose a deep learning based detection method for harvesting pine nuts from UAV aerial images. For this, video was recorded in a real pine forest using a UAV, and a data augmentation technique was used to supplement the small amount of data. For the 3D detection data, Unity3D was used to model virtual pine nuts and a virtual environment, and labels were acquired using a 3D coordinate-system transformation method. As the deep learning algorithms for detecting the pine nut distribution area and for 2D and 3D detection of pine nut objects, DeepLabV3+, YOLOv4, and CenterNet were used, respectively. As a result of the experiments, the detection rate for the pine nut distribution area was 82.15%, the 2D detection rate was 86.93%, and the 3D detection rate was 59.45%.
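
One common way to turn known 3D positions from a synthetic scene (such as a Unity3D model) into 2D image labels is a pinhole-camera projection: transform the world point into the camera frame, apply the intrinsics, and divide by depth. The abstract does not spell out its transformation, so this is a generic sketch with assumed intrinsics, not the paper's actual labeling pipeline.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D world points to 2D pixel coordinates (pinhole model).

    Hypothetical sketch of mapping synthetic 3D labels into image
    coordinates: X_cam = R @ X + t, then intrinsics K and perspective
    division by depth.
    """
    X = np.asarray(points_3d, dtype=float)           # (n, 3)
    X_cam = X @ R.T + t                              # world -> camera frame
    uvw = X_cam @ K.T                                # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]                  # perspective divide

# Assumed intrinsics for the sketch: 800 px focal length, 640x480 image
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)        # camera aligned with world axes
t = np.zeros(3)      # camera at the origin
pix = project_points([[0.0, 0.0, 4.0]], K, R, t)
print(pix)  # a point on the optical axis lands at the principal point (320, 240)
```

Projecting the corners of each virtual pine nut's 3D bounding box this way yields 2D boxes without any manual annotation, which is the appeal of the synthetic-data approach the abstract describes.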

Object Tracking for a Video Sequence from a Moving Vehicle: A Multi-modal Approach

  • Hwang, Tae-Hyun;Cho, Seong-Ick;Park, Jong-Hyun;Choi, Kyoung-Ho
    • ETRI Journal
    • /
    • v.28 no.3
    • /
    • pp.367-370
    • /
    • 2006
  • This letter presents a multi-modal approach to tracking geographic objects such as buildings and road signs in a video sequence recorded from a moving vehicle. In the proposed approach, photogrammetric techniques are successfully combined with conventional tracking methods. More specifically, photogrammetry combined with positioning technologies is used to obtain 3-D coordinates of chosen geographic objects, providing a search area for conventional feature trackers. In addition, we present an adaptive window decision scheme based on the distance between chosen objects and a moving vehicle. Experimental results are provided to show the robustness of the proposed approach.
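
The distance-based adaptive window idea above can be sketched with a simple rule: an object's projected size shrinks roughly as focal_length × object_size / distance, so nearby objects get a larger search window and distant ones a smaller one. All parameter values below are assumptions for illustration, not values from the letter.

```python
def adaptive_window(distance_m, focal_px=800.0, object_size_m=2.0,
                    min_px=16, max_px=128):
    """Choose a tracking-window side length from object distance.

    Illustrative rule: projected size ~ focal * size / distance,
    clamped to a sane pixel range. All parameters are assumptions
    for the sketch, not values from the letter.
    """
    projected = focal_px * object_size_m / max(distance_m, 1e-6)
    return int(min(max(projected, min_px), max_px))

print(adaptive_window(20.0))   # mid-range object -> 80 px window
print(adaptive_window(200.0))  # distant object -> clamped to min_px
print(adaptive_window(1.0))    # very close object -> clamped to max_px
```

Since the letter combines photogrammetry with GPS-style positioning to get 3D coordinates of buildings and signs, the object-to-vehicle distance needed by such a rule is readily available.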


Implementing Multi-view 360 Video Compression System for Immersive Media (실감형 미디어를 위한 다시점 360 비디오 압축 시스템 구현)

  • Jeong, Jong-Beom;Lee, Soonbin;Jang, Dongmin;Ryu, Il-Woong;Le, Tuan Thanh;Ryu, Jaesung;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.140-143
    • /
    • 2019
  • In this paper, we implement a system that applies inter-view redundancy removal to multi-view 360 video, merges the residual videos into a single picture, and compresses and transmits it, in order to provide high-quality 360 video corresponding to the user's viewpoint. A system for three degrees of freedom plus (3DoF+), which supports user-motion-adaptive 360 video streaming, requires the transmission of multiple high-quality 360 videos captured from multiple viewpoints. As a solution, we describe the implementation of an inter-view redundancy removal technique based on 3D warping and a residual-view merging technique that extracts and merges only the tiles required for video reconstruction. When multi-view 360 video transmission is performed with the proposed system, a BD-rate reduction of up to 20.14% was confirmed compared with transmission using conventional high-efficiency video coding (HEVC).
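
The 20.14% figure above is a Bjøntegaard delta rate: the average bitrate difference between two codecs at equal quality, obtained by fitting log-rate as a function of PSNR and comparing integrals over the overlapping PSNR range. A simplified sketch of that standard procedure, with made-up toy rate points:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Approximate Bjøntegaard delta rate (percent).

    Fits cubic polynomials of log-rate as a function of PSNR for each
    codec and compares their integrals over the overlapping PSNR range.
    Simplified sketch of the standard BD-rate procedure.
    """
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_diff - 1) * 100  # negative => test codec saves bitrate

# Toy curves: the "test" codec needs ~20% less rate at every PSNR point
psnr = [32.0, 35.0, 38.0, 41.0]
anchor = [1000.0, 2000.0, 4000.0, 8000.0]   # kbps, made up
test = [r * 0.8 for r in anchor]
print(round(bd_rate(anchor, psnr, test, psnr), 1))  # about -20.0
```

A result around -20% on such a curve corresponds to the kind of saving the abstract reports against plain HEVC transmission of all views.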


Development of Emotional Messenger for IPTV (IPTV를 위한 감성 메신저의 개발)

  • Sung, Min-Young;Paek, Seon-Uck;Ahn, Seong-Hye;Lee, Jun-Ha
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.12
    • /
    • pp.51-58
    • /
    • 2010
  • In instant messenger environments, recognizing human emotions and automatically representing them with personalized 3D character animations brings affectivity to machine-mediated communication and thus enhances it. This paper describes an emotional messenger system developed for the automated recognition and expression of emotions on IPTVs (Internet Protocol televisions). Aiming for efficient delivery of users' emotions, we propose emotion estimation that assesses the affective content of given text messages, character animation that supports both 3D rendering and video playback, and a smartphone-based input method. Demonstrations and experiments validate the usefulness and performance of the proposed system.
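
Text-based emotion estimation of the kind mentioned above is often bootstrapped with a cue-word lexicon: count emotion-bearing tokens in the message and pick the dominant emotion. The lexicon and labels below are purely illustrative assumptions, not the paper's actual estimator.

```python
# Hypothetical cue-word lexicon; entries and emotion labels are
# illustrative, not taken from the paper.
EMOTION_LEXICON = {
    "happy": "joy", "great": "joy", ":)": "joy",
    "sad": "sadness", "cry": "sadness",
    "angry": "anger", "hate": "anger",
}

def estimate_emotion(message, default="neutral"):
    """Return the emotion whose cue words occur most often in the text."""
    counts = {}
    for token in message.lower().split():
        emotion = EMOTION_LEXICON.get(token)
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    return max(counts, key=counts.get) if counts else default

print(estimate_emotion("I am so happy today :)"))  # joy
print(estimate_emotion("meeting at noon"))         # neutral
```

The estimated label would then select which 3D character animation (or fallback video clip) the messenger plays on the IPTV screen.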

Video Rate Image Signal Processing for Optical Coherence Tomography (광학 영상기를 위한 실시간 영상 신호 처리에 관한 연구)

  • 나지훈;이병하;이창수
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.10 no.3
    • /
    • pp.239-248
    • /
    • 2004
  • Optical coherence tomography(OCT) is high resolution imaging system which can see the cross section of microscopic organs in the living tissue. In this paper, we analyze the relation between the light source and the resolution of modulated signal in Michelson interferometer. We construct 1-D OCT signal processing hardware such as amplifiers, filters, and demodulate electronic signals from the photo detector. In order to get 2-D OCT image, the synchronization among optical delay line, sample stage and A/D converter is dealt with. In experiments, we verify analog and digital signal processing blocks which apply to the stacks of glasses. Finally we aquire high resolution 2-D OCT image with respect to the onion tissue. We expect that this result can be applied to the medical instrument through performance improvement.