• Title/Abstract/Keyword: On-scene time

Search results: 519 (processing time: 0.027 seconds)

주문형 게임 서비스를 위한 장면 기술자 기반 고속 게임 부호화기 (Fast Game Encoder Based on Scene Descriptor for Gaming-on-Demand Service)

  • 전찬웅;조현호;심동규
    • 한국멀티미디어학회논문지 / Vol. 14, No. 7 / pp. 849-857 / 2011
  • A gaming-on-demand service encodes a game running on a server as video, transmits it to a client, and lets the client play the game through video decoding. Serving real-time games to many users over the network requires an ultra-fast game encoder. The method proposed in this paper defines a scene descriptor and feeds it to the game-video encoder as side information, accelerating the encoder by skipping computationally heavy steps such as motion estimation and rate-distortion optimization: the scene descriptor is used directly as the motion vector and also determines the macroblock mode. Compared with x264, an open-source H.264/AVC encoder, the proposed method improved encoding speed by about 192% when x264 was built without assembly code, and by 86% when assembly optimization was enabled for some x264 modules. With this acceleration, the encoder exceeds 60 FPS, making real-time gaming-on-demand feasible.
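
The key to the speed-up is that the game engine already knows how the scene moved, so the encoder need not search for it. A minimal sketch of that idea in Python, using a hypothetical `SceneDescriptor` and a simplified macroblock-mode decision (not the paper's actual encoder):

```python
from dataclasses import dataclass

@dataclass
class SceneDescriptor:
    """Hypothetical side information exported by the game engine."""
    motion: dict        # (mb_x, mb_y) -> (dx, dy) block motion in pixels
    static_blocks: set  # macroblocks known to be unchanged this frame

def encode_macroblock(mb_xy, desc: SceneDescriptor):
    """Pick a macroblock mode without motion search or full RDO."""
    if mb_xy in desc.static_blocks:
        return {"mode": "SKIP"}             # nothing moved: cheapest mode
    if mb_xy in desc.motion:
        return {"mode": "INTER", "mv": desc.motion[mb_xy]}  # engine motion as MV
    # No side information for this block: fall back to intra coding
    # (a real encoder would still search, but only for these few blocks).
    return {"mode": "INTRA"}
```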

소방 방화복 교차오염 저감 및 관리체계 개선을 위한 델파이 연구 (Delphi Study on the Reduction of Cross-contamination and Improvement of Management System on Firefighting Protection Suit)

  • 김수진;함승헌
    • 한국산업보건학회지 / Vol. 32, No. 2 / pp. 182-194 / 2022
  • Objectives: This study evaluates and recommends priorities for policy implementation to improve the management system for the fire protection clothing used by firefighters and to reduce cross-contamination from clothing contaminated at the fire scene. Methods: A panel of seven experts participated in three interviews and two modified Delphi surveys. Based on previous research and the expert interviews, a plan for reducing cross-contamination of fire suits and improving the management system was first derived. Improvement measures were presented in four areas: resources, management, fire-protection-related work, and laws and regulations, and implementation priorities were derived by analyzing the importance and practicality of each policy at the same time. Results: The first priority was education for firefighters on the health effects of pollutants at the disaster scene; the second was adding an SOP for primary on-scene decontamination of personal protective equipment, together with education for fire suppression and rescue workers. The next priorities were improving the management system for personal protective equipment such as fire suits and developing a training course for its systematic operation. Conclusions: These findings could inform mid- to long-term firefighting policies for the systematic operation of, and establishment of a systematic management system for, personal protective equipment such as fire protective suits.
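
For illustration, ranking items by rating importance and practicality at the same time can be sketched as below; the 5-point scale, the equal weighting, and the example item names are assumptions, not the study's actual Delphi analysis:

```python
def prioritize(ratings):
    """ratings: {policy: [(importance, practicality), ...]}, one tuple per panelist."""
    scores = {}
    for policy, votes in ratings.items():
        importance = sum(v[0] for v in votes) / len(votes)
        practicality = sum(v[1] for v in votes) / len(votes)
        scores[policy] = (importance + practicality) / 2  # equal weights: an assumption
    return sorted(scores, key=scores.get, reverse=True)

# Toy 5-point ratings from two hypothetical panelists.
ranked = prioritize({
    "health-effects education": [(5, 4), (5, 5)],
    "on-scene decontamination SOP": [(5, 4), (4, 4)],
    "PPE management system overhaul": [(4, 3), (4, 4)],
})
print(ranked)
```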

무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법 (Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners)

  • 안승욱;최윤근;정명진
    • 로봇학회논문지 / Vol. 7, No. 2 / pp. 92-100 / 2012
  • A map of a complex environment can be generated by a robot carrying sensors. However, a representation built directly from integrated sensor data conveys only spatial occupancy. To execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. The method can classify the various objects found in an urban environment, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify data on-the-fly as it is acquired. To evaluate the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm efficiently and quickly extracts semantic knowledge of a dynamic urban environment.
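
The five-step pipeline translates naturally into code. A minimal sketch assuming the scan (step 1) has already yielded an N×3 point array, with toy thresholds and bounding-box rules standing in for the paper's classifiers:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def understand_scene(points, ground_z=0.3, cluster_eps=0.5):
    """Toy version of steps 2-5: classify points, drop ground, segment, label."""
    # Ground detection and elimination: a flat-ground height threshold
    # (an assumption; the paper's ground model is more robust).
    objects = points[points[:, 2] >= ground_z]
    # Segmentation: Euclidean clustering (DBSCAN stands in here).
    labels = DBSCAN(eps=cluster_eps, min_samples=10).fit_predict(objects)
    results = []
    for k in set(labels) - {-1}:              # -1 marks unclustered noise
        cluster = objects[labels == k]
        dx, dy, dz = cluster.max(axis=0) - cluster.min(axis=0)
        # Object classification from crude bounding-box features.
        if dz > 4.0:
            kind = "building"
        elif dz > 2.5 and max(dx, dy) < 1.0:
            kind = "post"
        elif max(dx, dy) > 1.5:
            kind = "car"
        else:
            kind = "tree"
        results.append((kind, cluster))
    return results
```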

Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • 한국멀티미디어학회논문지 / Vol. 11, No. 6 / pp. 746-754 / 2008
  • A 3D stereoscopic image is conventionally generated by interleaving, in a video editing tool, the scenes rendered from two camera views in a 3D modeling tool such as Autodesk MAX(R) or Autodesk MAYA(R). However, the depth of objects in a static scene and the continuity of the stereo effect under view transformation are not reproduced naturally. This is because, after choosing an arbitrary convergence angle and the distance between the model and the two cameras, the user must render the view from both cameras, so adjusting the camera interval and re-rendering repeatedly takes too much time. Therefore, in this paper we propose a 3D stereoscopic image editing system that solves these problems, and we discuss its inherent limitations. The system generates the views of the two cameras and confirms the stereo effect in real time inside the 3D modeling tool, so the user can intuitively judge the immersion of the 3D stereoscopic image in real time using the stereoscopic preview function.
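
The repeated render-and-check cycle exists because screen parallax depends jointly on the camera interval and the convergence distance. A minimal sketch of the geometry a real-time preview can evaluate per frame, under a simple thin-lens approximation (an illustration, not the paper's implementation):

```python
def screen_parallax(depth, interaxial, convergence_dist, focal=0.05):
    """Approximate horizontal parallax (on the film plane, meters) of a point
    at `depth` for two cameras separated by `interaxial` and converged at
    `convergence_dist`. Positive values appear behind the screen plane."""
    # Disparity of the point minus disparity at the convergence plane.
    return focal * interaxial * (1.0 / convergence_dist - 1.0 / depth)

# Preview-loop idea: recompute while the artist drags the camera interval.
for interaxial in (0.06, 0.08, 0.10):
    p = screen_parallax(depth=10.0, interaxial=interaxial, convergence_dist=5.0)
    print(f"interaxial {interaxial:.2f} m -> parallax {p * 1000:.2f} mm")
```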

도로교통 영상처리를 위한 고속 영상처리시스템의 하드웨어 구현 (An Onboard Image Processing System for Road Images)

  • 이운근;이준웅;조석빈;고덕화;백광렬
    • 제어로봇시스템학회논문지 / Vol. 9, No. 7 / pp. 498-506 / 2003
  • A computer vision system for an intelligent safety vehicle is required to run on small, real-time, special-purpose hardware rather than on a general-purpose computer. In addition, the system should be highly reliable even in adverse road traffic environments. This paper presents the design and implementation of an onboard hardware system for high-speed image processing of road traffic scenes. The system is composed of two main parts: an early processing module on an FPGA and a postprocessing module on a DSP. The early processing module extracts several image primitives, such as the intensity of the gray-level image and edge attributes, in real time; in particular, it is optimized for the Sobel edge operation. The DSP postprocessing module uses the image features from the early processing module for image understanding and analysis of the road traffic scene. The performance of the proposed system is evaluated in a lane-related information extraction experiment, which shows a processing speed of twenty-five 320×240-pixel frames per second.
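
The Sobel operation that the FPGA stage is optimized for is easy to state in software. A minimal reference sketch of the computation (the paper realizes this in FPGA logic, not in Python):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T  # vertical-gradient kernel

def sobel_edges(gray):
    """Gradient magnitude of a gray-level image, as the early stage computes."""
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = gray[y - 1:y + 2, x - 1:x + 2]
            gx = (win * SOBEL_X).sum()
            gy = (win * SOBEL_Y).sum()
            out[y, x] = abs(gx) + abs(gy)  # |gx|+|gy|: the cheap magnitude hardware favors
    return out
```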

애니메이션 분야의 심미적 인식에 의한 동일시와 동기화 연출 (Directed Identification, Synchronization by Aesthetic Recognition of Animation Field)

  • 이현우;류창수
    • 한국멀티미디어학회논문지 / Vol. 25, No. 10 / pp. 1475-1482 / 2022
  • Mickey Mousing, the perfect match between animation sound and image, was long an aesthetic norm in the field of animation, but since the 2000s, works released by producers such as DreamWorks and Pixar have expanded the perfection of synchronization toward irony. This has also influenced the identification system of sentiment. It is time to view these directing attempts as a factor that changed the paradigm of narrative, and related research is needed. In this study, scenes of the case work were analyzed with respect to the synchronization of animation sound and image components and the direction of the boundary between reality and fiction in the recognition of identification. Aesthetic recognition of the work studied is premised on the perception of real time and space, while the audience can apprehend it in a conceptual world as an integrated art through the playful production of fictional time and space. The direct antithesis of synchronization and identification was staged to sustain curiosity about the next scene by alternating selective concealment and disclosure of information, conveying an unfamiliar and heterogeneous feeling to the audience.

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
    • 한국정보컨버전스학회:학술대회논문집 / 한국정보컨버전스학회 2008 International Conference on Information Convergence / pp. 53-56 / 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: 1) perceptual experiments (e.g. perception of expressivity and 3D movements in both the audio and visual channels); 2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head in virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question answering system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components are: RITEL, a question answering system searching raw text, which produces a text (the answer) and attitudinal information; this attitudinal information is then processed to deliver expressive tags; the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio-visual scene. The project also puts substantial effort into realistic visual and audio 3D rendering: a model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move in the virtual scene with realistic 3D visual and audio rendering.
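
The module chain can be pictured with a few stand-in message types. These classes are hypothetical and only mirror the described data flow (answer text plus attitude in, phoneme/viseme/prosody out), not the project's actual network protocol:

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    attitude: str  # attitudinal information produced by RITEL

@dataclass
class SpeechPlan:
    phonemes: list = field(default_factory=list)
    visemes: list = field(default_factory=list)
    prosody: list = field(default_factory=list)
    expressive_tags: list = field(default_factory=list)

def plan_speech(ans: Answer) -> SpeechPlan:
    """Stand-in for the text-processing stage: text -> phoneme/viseme/prosody."""
    phonemes = list(ans.text.lower())  # toy 'phonemization' for illustration
    return SpeechPlan(
        phonemes=phonemes,
        visemes=[f"viseme({p})" for p in phonemes],
        prosody=["neutral"] * len(phonemes),
        expressive_tags=[ans.attitude],
    )

plan = plan_speech(Answer(text="Paris", attitude="confident"))
# Downstream, audio would go to the TTS engine while visemes drive the
# MPEG-4 keypoint animation rendered by VirChor.
```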

비디오 부가데이터 서비스를 위한 지상파 DMB 시스템 개발 (The Development of Terrestrial DMB System for Video Associated Data Services)

  • 김현순;경일수;김상훈;김만식
    • 방송공학회논문지 / Vol. 11, No. 4 / pp. 541-553 / 2006
  • With the launch of full terrestrial DMB (Digital Multimedia Broadcasting) service, various service models are being demanded to create added value beyond high-quality audio and video. This paper describes a terrestrial DMB authoring and transmission system for one such service, the video associated data service. The proposed system conforms to the domestic terrestrial DMB data service standard, the MPEG-4 BIFS (Binary Format for Scene) Core2D scene description profile and graphics profile. The system was designed to provide two essential broadcasting modes, real-time authoring with manual transmission and non-real-time authoring with automatic transmission, and its development focused on producing high-quality content efficiently and transmitting it reliably in synchronization with the video encoder. The performance of the proposed system was verified through interoperability tests with various receivers, and it should serve commercial broadcasting effectively.
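
To illustrate what a scene-description-based data service carries, here is a toy analogue of an authored 2D scene before encoding. The classes are hypothetical and greatly simplified; actual BIFS is a binary-encoded node tree defined by MPEG-4 Systems, not these Python objects:

```python
from dataclasses import dataclass, field

@dataclass
class TextNode:
    string: str
    x: int
    y: int

@dataclass
class Scene2D:
    """Toy analogue of a Core2D scene: a flat list of positioned nodes."""
    width: int = 320
    height: int = 240
    nodes: list = field(default_factory=list)

scene = Scene2D(nodes=[TextNode("Now showing: evening news", 10, 220)])
# An authoring tool edits `scene`, encodes it (BIFS is binary), and the
# transmission server multiplexes it with the video encoder's output so
# receivers render the text/graphics in sync with the program.
```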

VALIDATION OF SEA ICE MOTION DERIVED FROM AMSR-E AND SSM/I DATA USING MODIS DATA

  • Yaguchi, Ryota;Cho, Ko-Hei
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2008 International Symposium on Remote Sensing / pp. 301-304 / 2008
  • Since longer-wavelength microwave radiation can penetrate clouds, satellite passive microwave sensors can observe the sea ice of the entire polar region on a daily basis. It is therefore becoming popular to derive sea ice motion vectors from a pair of passive microwave images observed at an interval of one or a few days. Usually, the accuracy of the derived vectors is validated by comparison with position data from drifting buoys. However, the number of buoys available for validation is always quite limited compared with the large number of vectors derived from satellite images. In this study, sea ice motion vectors automatically derived from pairs of AMSR-E 89 GHz images (IFOV = 3.5 × 5.9 km) by image-to-image cross correlation were validated against sea ice motion vectors manually derived from pairs of cloudless MODIS images (IFOV = 250 × 250 m). Since AMSR-E and MODIS are both on NASA's Aqua satellite, the observation times of the two sensors are the same. The relative errors of the AMSR-E vectors against the MODIS vectors were calculated, and the validation was conducted for 5 scenes. If vectors with a relative error of less than 30% are accepted as correct, 75% to 92% of the AMSR-E vectors derived from a scene were correct. In contrast, the percentage of correct sea ice vectors derived from a pair of SSM/I 85 GHz images (IFOV = 15 × 13 km) observed nearly simultaneously with one of the AMSR-E images was 46%; the accuracy difference between AMSR-E and SSM/I reflects the difference in IFOV. The accuracies of the H and V polarizations differed from scene to scene, which may reflect differences in the sea ice distribution and its snow cover in each scene.
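
Two computations drive the study: deriving displacement vectors by image-to-image cross correlation, and scoring them against reference vectors by relative error. A minimal sketch of both, assuming grayscale numpy arrays and a small search window (the operational processing is more elaborate):

```python
import numpy as np

def track_block(img0, img1, y, x, tpl=8, search=12):
    """Displacement of the template at (y, x) via normalized cross correlation."""
    t = img0[y:y + tpl, x:x + tpl].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-9)
    best, best_dyx = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0:
                continue                      # keep the window inside the image
            w = img1[yy:yy + tpl, xx:xx + tpl].astype(float)
            if w.shape != t.shape:
                continue
            w = (w - w.mean()) / (w.std() + 1e-9)
            ncc = (t * w).mean()              # normalized cross correlation
            if ncc > best:
                best, best_dyx = ncc, (dy, dx)
    return best_dyx

def relative_error(v, v_ref):
    """|v - v_ref| / |v_ref|; the paper counts vectors under 0.3 as correct."""
    v, v_ref = np.asarray(v, float), np.asarray(v_ref, float)
    return np.linalg.norm(v - v_ref) / (np.linalg.norm(v_ref) + 1e-9)
```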

3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정 (Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction)

  • 김주희;김인철
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 4, No. 4 / pp. 187-194 / 2015
  • This paper proposes a visual odometry method that effectively tracks, in real time, the pose of a camera moving through 3D space from RGB-D input images. To exploit the rich information in the color and depth images while keeping the real-time computational load low, the proposed visual odometry uses feature-based sparse odometry computation. To obtain a more accurate estimate, it matches features extracted from the images before and after camera motion, and then iterates an additional inlier-set refinement and odometry refinement over the matched features. Moreover, even when the refined inlier set is not sufficiently large, the final odometry is determined in proportion to the size of the remaining inlier set, which greatly improves the tracking success rate. Experiments on the TUM benchmark dataset and the implementation of a 3D scene reconstruction application confirm the strong performance of the proposed visual odometry method.
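
The match-estimate-prune-refine loop at the heart of the method can be sketched with OpenCV. ORB features and an essential-matrix pose are stand-ins here, since the paper's RGB-D formulation also exploits depth:

```python
import cv2
import numpy as np

def sparse_odometry(img0, img1, K):
    """One visual-odometry step: match features, prune outliers, refine pose."""
    orb = cv2.ORB_create(1000)
    kp0, des0 = orb.detectAndCompute(img0, None)
    kp1, des1 = orb.detectAndCompute(img1, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)
    pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
    pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])
    # Inlier-set refinement: RANSAC on the essential matrix rejects bad matches.
    E, mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    n_inliers, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=mask)
    # Echo of the paper's idea: grade the result by the surviving inlier count
    # instead of declaring outright failure when the set is small.
    confidence = n_inliers / max(len(matches), 1)
    return R, t, confidence
```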