• Title/Summary/Keyword: 3D video


3D Video Simulation System Using GPS (GPS를 이용한 3D 영상 구현 시뮬레이션 시스템)

  • Kim, Han-Kil;Joo, Sang-Woong;Kim, Hun-Hee;Jung, Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.4
    • /
    • pp.855-860
    • /
    • 2014
  • Currently, aircraft and automobile training simulators provide a variety of training by creating hypothetical situations on a simulator installed on the ground, and an instructor maximizes the effectiveness of the training by monitoring it and directing the required exercises. When trainees are aboard an actual aircraft or automobile, however, an instructor on the ground cannot monitor them, and assessing the training after it ends is not easy, so it is difficult to provide high-quality education to students. In this paper, we develop a simulation system that collects GPS and other real-time information from the aircraft or automobile, implements a 3D simulation from it, and displays the current position of the aircraft or automobile on screen, so that the control center can monitor the training situation in 3D in real time and save 3D video files for analysis and evaluation after the training ends. (A minimal sketch of the GPS data-collection step follows this entry.)
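
The abstract describes a pipeline of collecting real-time GPS fixes and rendering them in 3D. As an illustration of the data-collection step only, here is a minimal sketch that parses an NMEA 0183 $GPGGA sentence into latitude, longitude, and altitude. NMEA is a common GPS receiver output format, but the paper does not say which protocol its system uses, so treat that as an assumption.

```python
# Minimal sketch: parse an NMEA 0183 GPGGA sentence into latitude,
# longitude and altitude -- the kind of real-time fix a trainee's
# aircraft or automobile would stream to the control center.
# Assumes the receiver emits standard $GPGGA lines (assumption: the
# paper does not specify its receiver protocol).

def nmea_to_degrees(value: str, hemisphere: str) -> float:
    """Convert NMEA ddmm.mmmm (or dddmm.mmmm) to signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[:dot - 2])   # everything before the minutes field
    minutes = float(value[dot - 2:])   # mm.mmmm
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gpgga(sentence: str):
    """Return (lat, lon, altitude_m) from a $GPGGA sentence, or None."""
    fields = sentence.strip().split(",")
    if not fields[0].endswith("GGA") or fields[6] == "0":
        return None                    # not a GGA sentence, or invalid fix
    lat = nmea_to_degrees(fields[2], fields[3])
    lon = nmea_to_degrees(fields[4], fields[5])
    altitude = float(fields[9])        # metres above mean sea level
    return lat, lon, altitude

print(parse_gpgga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
# -> (48.1173, 11.516666..., 545.4)
```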

3D video simulation system using GPS (GPS를 이용한 3D 영상 구현 시뮬레이션 시스템)

  • Joo, Sang-Woong;Kang, Byeong-Jun;Shim, Kyou-Chul;Kim, Kyung-Hwan;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.10a
    • /
    • pp.891-893
    • /
    • 2012
  • Currently, aircraft and automobile training simulators provide a variety of training by creating hypothetical situations on a simulator installed on the ground, and an instructor maximizes the effectiveness of the training by monitoring it and directing the required exercises. When trainees are aboard an actual aircraft or automobile, however, an instructor on the ground cannot monitor them, and assessing the training after it ends is not easy, so it is difficult to provide high-quality education to students. In this paper, we develop simulation software that collects GPS and other real-time information from the aircraft or automobile, implements a 3D simulation from it, and displays the current position of the aircraft or automobile on screen, so that the control center can monitor the training situation in 3D in real time and save 3D video files for analysis and evaluation after the training ends.


Error Resilient Scheme in Video Data Transmission using Information Hiding (정보은닉을 이용한 동영상 데이터의 전송 오류 보정)

  • Bae, Chang-Seok;Choe, Yoon-Sik
    • The KIPS Transactions: Part B
    • /
    • v.10B no.2
    • /
    • pp.189-196
    • /
    • 2003
  • This paper describes an error-resilient video data transmission method using information hiding. To localize transmission errors at the receiver, the video encoder embeds one bit per macroblock during the encoding process. The embedded information is detected during decoding at the receiver, and transmission errors can be localized by comparing it with the original embedded data. The localized transmission errors can then be easily corrected, so degradation in the reconstructed image is alleviated. Furthermore, the embedded information can be applied to protect the intellectual property rights of the video data. Experimental results for three QCIF-sized video sequences of 150 frames each show that, while the degradation caused by embedding the information in the video streams is negligible, the average PSNR of the reconstructed images can be improved by about 5 dB using the embedded information, especially over a noisy channel. Intellectual property information can also be effectively recovered from the reconstructed images. (A sketch of the per-macroblock embedding and error-localization idea follows this entry.)
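
The abstract specifies one embedded bit per macroblock, detected at the receiver to localize errors. The sketch below implements that comparison, using LSB embedding in one pixel per 16×16 macroblock as a stand-in embedding rule (an assumption: the paper embeds during the actual video encoding process, whose rule the abstract does not give).

```python
import numpy as np

MB = 16  # macroblock size in pixels

def embed(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one known bit per macroblock in the LSB of its top-left pixel."""
    marked = frame.copy()
    for i in range(frame.shape[0] // MB):
        for j in range(frame.shape[1] // MB):
            y, x = i * MB, j * MB
            marked[y, x] = (marked[y, x] & 0xFE) | bits[i, j]  # set LSB
    return marked

def localize_errors(received: np.ndarray, bits: np.ndarray) -> list:
    """Return (row, col) of macroblocks whose hidden bit was corrupted."""
    bad = []
    for i in range(received.shape[0] // MB):
        for j in range(received.shape[1] // MB):
            if (received[i * MB, j * MB] & 1) != bits[i, j]:
                bad.append((i, j))
    return bad

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
bits = rng.integers(0, 2, (4, 4), dtype=np.uint8)  # one bit per macroblock
sent = embed(frame, bits)
sent[16, 16] ^= 0x01                 # simulate a channel error in block (1, 1)
print(localize_errors(sent, bits))   # -> [(1, 1)]
```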

The effects of emotional matching between video color-temperature and scent on reality improvement (영상의 색온도와 향의 감성적 일치가 영상실감 향상에 미치는 효과)

  • Lee, Guk-Hee;Li, Hyung-Chul O.;Ahn, ChungHyun;Ki, MyungSeok;Kim, ShinWoo
    • Journal of the HCI Society of Korea
    • /
    • v.10 no.1
    • /
    • pp.29-41
    • /
    • 2015
  • Technologies for video reality (e.g., 3D displays, vibration, surround sound) draw on various sensory inputs, and many are now commercialized. When it comes to using olfaction for video reality, however, there has not been much progress in either practical or academic respects. Because the olfactory sense is tightly associated with human emotion, proper use of this sense should help achieve a high degree of video reality. This research tested whether a scent matched to a video's color temperature improves perceived reality when the video contains no visible object (e.g., coffee or flowers) that suggests a specific smell. To this end, we had participants rate 48 scents on a color-temperature scale from 1,500 K (warm) to 15,000 K (cold) and chose the 8 scents (4 warm, 4 cold) that showed the clearest correspondence with warm or cold color temperatures (Expt. 1). Then, after applying a warm (3,000 K), neutral (6,500 K), or cold (14,000 K) color temperature to images or videos, we presented warm or cold scents to participants while they rated reality improvement on a 7-point scale, depending on the relatedness of scent and color temperature (related, unrelated, neutral) (Expts. 2-3). Participants experienced greater reality when scent and color temperature were related than when they were unrelated or neutral. This research has important practical implications in demonstrating that providing a color-temperature-related scent can improve video reality even when no concrete object suggests specific olfactory information. (A sketch of realizing a given color temperature as an RGB tint follows this entry.)
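
For illustration of how a target color temperature such as 3,000 K or 14,000 K might be realized as an image tint, here is a minimal sketch using Tanner Helland's published curve fit from Kelvin to approximate sRGB values. This is an external approximation, not the paper's method; the paper does not state how its display color temperatures were produced.

```python
# Minimal sketch: approximate an sRGB white point for a given colour
# temperature, so an image can be tinted "warm" (3,000 K) or "cold"
# (14,000 K) as in the experiments. Constants are Tanner Helland's
# curve fit to blackbody data (assumption: not the paper's calibration).
import math

def kelvin_to_rgb(kelvin: float) -> tuple:
    t = kelvin / 100.0
    r = 255.0 if t <= 66 else 329.698727446 * (t - 60) ** -0.1332047592
    if t <= 66:
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        g = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda v: max(0, min(255, int(round(v))))
    return clamp(r), clamp(g), clamp(b)

for k in (3000, 6500, 14000):   # warm / neutral / cold conditions in the paper
    print(k, kelvin_to_rgb(k))  # e.g. 3000 -> (255, 177, 110), a warm tint
```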

A Study on the Utilization of Video Industry Using Virtual Reality (가상현실을 이용한 영상산업 활용에 관한 연구)

  • 백승만
    • Archives of design research
    • /
    • v.15 no.1
    • /
    • pp.163-170
    • /
    • 2002
  • Virtual reality is a technique that lets people experience, in a virtual space, interactions similar to those in the real world. Users participating in a 3D virtual space built with virtual reality techniques can have varied experiences in any desired space without restrictions of time and place, so the technique has been applied in many areas such as the video industry, entertainment simulators, medical treatment, construction, and design. Among these, the video field has been highlighted as a high-value-added industry. This study therefore classifies the video industry into four areas (movies, broadcasting, advertising, and the internet) and examines their characteristics, application cases, and potential for development. In movies, virtual reality techniques are used for special effects; in broadcasting, the introduction of virtual studios and virtual characters provides audiences with diverse graphical virtual worlds. In advertising, inserting 3D advertisements into virtual space can create a synergy effect for audiences. Also, with the emergence of the Virtual Reality Modeling Language (VRML), 3D virtual reality applications on the web such as virtual museums, virtual model houses, virtual home shopping, and entertainment have become possible and increasingly serve as entertainment. Accordingly, this study seeks application methods for virtual reality techniques in the video industry.


A study for video-conference architecture in Virtual Reality based on SVTE (SVTE를 기반으로 한 가상 공간에서의 비디오 컨퍼런스 설계에 관한 연구)

  • Kim, Tae-Hun;Kim, Nam-Hyo;Kim, Sun-Woo;Choi, Yeon-Sung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.219-222
    • /
    • 2005
  • Video conferencing is highly useful for communication and collaboration among groups dispersed in time and space, and it is especially advantageous in the travel time and cost it saves. In this paper, we propose a next-generation video conferencing system that combines 3D video and virtual reality technology based on the SVTE (Shared Virtual Table Environment). We also present the architecture of a video conferencing system that provides, within a shared space, 3D images more realistic than flat 2D screens, and we describe the image processing techniques that must be considered in the proposed system.


MPEG-4 to H.264 Transcoding (MPEG-4에서 H.264로 트랜스코딩)

  • 이성선;이영렬
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.275-282
    • /
    • 2004
  • In this paper, a transcoding method is proposed that transforms an MPEG-4 video bitstream coded at a 30 Hz frame rate into an H.264 video bitstream at a 15 Hz frame rate. The block modes and motion vectors of the MPEG-4 bitstream are reused in H.264 through block mode conversion and motion vector (MV) interpolation. The three proposed MV interpolation methods can be used without performing full motion estimation in H.264, so the transcoder reduces the computation required for motion estimation while providing good H.264 video quality at low bitrates. In the experimental results, the proposed methods achieve a 3.2-4x improvement in computational complexity compared to cascaded pixel-domain transcoding, while the PSNR (peak signal-to-noise ratio) is degraded by 0.2-0.9 dB depending on the video size. (A sketch of chaining motion vectors across a dropped frame follows this entry.)
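
The abstract does not detail the three MV interpolation variants, but the underlying idea of reusing incoming vectors instead of running full motion estimation can be sketched. Assuming every other frame is dropped to go from 30 Hz to 15 Hz, a vector that skips the dropped frame can be approximated by chaining the two MPEG-4 vectors through it (this basic chaining is an illustration, not the paper's exact variants):

```python
# Minimal sketch of motion-vector reuse when halving the frame rate:
# if frame n-1 is dropped, a vector from frame n to frame n-2 is
# approximated by chaining the incoming MPEG-4 vectors through the
# dropped frame instead of re-running motion estimation.

def chain_mv(mv_n_to_n1, mv_field_n1_to_n2, block_xy, block=16):
    """Compose block_xy's vector across a dropped frame.

    mv_n_to_n1        : (dx, dy) of the block in frame n, pointing into n-1
    mv_field_n1_to_n2 : dict mapping (bx, by) block coords in n-1 to (dx, dy)
    """
    x, y = block_xy
    dx1, dy1 = mv_n_to_n1
    # Block in n-1 that the first vector lands on (nearest grid block).
    bx = round((x * block + dx1) / block)
    by = round((y * block + dy1) / block)
    dx2, dy2 = mv_field_n1_to_n2.get((bx, by), (0, 0))
    return dx1 + dx2, dy1 + dy2   # candidate vector n -> n-2

# Block (2, 3) moves (+5, -2) into frame n-1; the block it lands on there
# moved (+4, -1) into frame n-2, so the chained guess is (+9, -3).
field = {(2, 3): (4, -1)}
print(chain_mv((5, -2), field, (2, 3)))   # -> (9, -3)
```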

Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions (화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발)

  • Jin, Yong-Kyu;You, Su-Jeong;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.10 no.3
    • /
    • pp.171-177
    • /
    • 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these benefits at a reasonable cost, this paper presents a telepresence robot system for video communication that delivers the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation, and a speaker's eye gaze, one of the key non-verbal signals in interaction, can also be inferred from his or her head pose. To develop an efficient head-tracking method, a 3D cylinder-like head model is employed and the Harris corner detector is combined with Lucas-Kanade optical flow, which is known to be suitable for extracting the model's 3D motion information. In particular, a skin-color-based face detection algorithm is proposed to achieve robust performance over varying head directions while maintaining reasonable computational cost. The performance of the proposed head-tracking algorithm is verified through experiments on BU's standard data sets. The design of the robot platform is also described, along with supporting systems such as the video transmission and robot control interfaces. (A sketch of the Harris-plus-Lucas-Kanade tracking front end follows this entry.)
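
The tracking front end the abstract names, Harris corners tracked with pyramidal Lucas-Kanade optical flow, maps directly onto standard OpenCV calls. A minimal sketch, omitting the paper's cylinder head model and skin-color face detector (parameter values below are ordinary defaults, not the paper's settings):

```python
import cv2
import numpy as np

def track_features(prev_bgr: np.ndarray, next_bgr: np.ndarray):
    """Detect Harris corners in one frame and track them into the next."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)

    # Harris corners (useHarrisDetector=True switches from Shi-Tomasi).
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7,
                                 useHarrisDetector=True, k=0.04)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))

    # Pyramidal Lucas-Kanade optical flow from prev to next.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None,
                                             winSize=(15, 15), maxLevel=2)
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)

# pts_prev, pts_next = track_features(frame_a, frame_b)
# The per-point displacements (pts_next - pts_prev) are the raw material
# the paper's cylinder model would use to estimate 3D head motion.
```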

Standardization Trends of 3D Video Coding in MPEG (3D 비디오 MPEG 표준화 동향)

  • Um, G.M.;Bang, G.;Hur, N.H.;Kim, J.W.
    • Electronics and Telecommunications Trends
    • /
    • v.24 no.3
    • /
    • pp.61-68
    • /
    • 2009
  • 3DTV broadcasting technology, studied worldwide alongside UHDTV as a next-generation broadcasting technology beyond HDTV, provides viewers with more realistic and immersive 3D video content. Related technology development and standardization are under way: the growth of the stereoscopic 3D film market centered on Hollywood, announcements of 3D-capable displays by many display makers, 3D data and video services on mobile terminals, the 3D@Home standardization effort for 3D video services in the home, research on 3D video content creation and distribution formats centered on 3D4YOU, and 3DV standardization in MPEG for multi-view 3DTV services. This article describes the status of the 3DV coding standardization in progress in the MPEG 3DV group and its main contributed technologies, and discusses the outlook for future 3D video standardization.

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology
    • /
    • v.7 no.2
    • /
    • pp.124-130
    • /
    • 1998
  • High-speed 3D vision systems are essential for autonomous robot and vehicle control applications. In our study, a stereo vision process has been developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map. This process is implemented on a VME 150/40 Imaging Technology vision system, a modular system composed of a display card, an acquisition card, a four-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP, with a 64x32-bit instruction cache and two 1024x32-bit internal RAMs. Each is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communication between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses; the VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs perform edge detection, one for the right image and one for the left, and the third processor computes the matching and the 3D calculation. With 512x512-pixel images, this sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. Results could surely be improved by using specially suited multiprocessor cards. (A sketch of the disparity-to-depth step follows this entry.)
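
The final step of the pipeline, turning a disparity between matched left and right edges into depth, follows standard rectified-stereo geometry, Z = f * B / d. A minimal sketch with placeholder calibration values (the system's actual focal length and baseline are not given in the abstract):

```python
def depth_from_disparity(disparity_px: float,
                         focal_px: float = 800.0,    # focal length in pixels (assumed)
                         baseline_m: float = 0.12):  # camera baseline in metres (assumed)
    """Depth in metres of a point whose left/right x-coordinates differ
    by disparity_px pixels, under rectified stereo geometry Z = f*B/d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# An edge matched at x=310 in the left image and x=295 in the right:
print(depth_from_disparity(310 - 295))   # 800 * 0.12 / 15 = 6.4 m
```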
