• Title/Summary/Keyword: stream service

Realtime Video Visualization based on 3D GIS (3차원 GIS 기반 실시간 비디오 시각화 기술)

  • Yoon, Chang-Rak;Kim, Hak-Cheol;Kim, Kyung-Ok;Hwang, Chi-Jung
    • Journal of Korea Spatial Information System Society, v.11 no.1, pp.63-70, 2009
  • 3D GIS (Geographic Information System) builds 3D spatial information about real-world terrain, facilities, and other objects, and processes, analyzes, and presents various real-world 3D phenomena together with visualization techniques such as VR (Virtual Reality). It can be applied to areas such as urban management, traffic information, environment management, disaster management, and ocean management systems. In this paper, we propose a video visualization technology based on 3D geographic information that effectively provides real-time information within a 3D geographic information system, and we also present methods for establishing 3D building information data. The proposed system provides real-time video information on top of 3D geographic information by projecting real-time video streams from network video cameras onto 3D geographic objects and texture-mapping the video frames onto terrain, facilities, and other geometry. We also developed a semi-automatic DBM (Digital Building Model) construction technique that uses both aerial imagery and LiDAR data for 3D projective texture mapping. Current 3D geographic information systems provide only static visualization, and the proposed method replaces that static information with live video. The method can support location-based decision-making systems by supplying real-time visualization, and it can further be used to provide intelligent context-aware services based on geographic information. A hedged sketch of the projective texture-mapping step follows below.
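
The paper gives no code; as a hedged illustration only, the NumPy sketch below shows the core of projective texture mapping: a camera's view-projection matrix maps world-space vertices of terrain or building geometry to texture coordinates, which is how a live video frame can be draped onto 3D objects. The matrix and vertex values are placeholders, not data from the paper.

```python
import numpy as np

def projective_texcoords(vertices_world, view_proj):
    """Project world-space vertices through a camera's view-projection
    matrix and return texture coordinates in [0, 1] (projective texture
    mapping, as used to drape a video frame onto terrain or buildings)."""
    n = vertices_world.shape[0]
    homo = np.hstack([vertices_world, np.ones((n, 1))])   # (n, 4) homogeneous points
    clip = homo @ view_proj.T                             # (n, 4) clip-space coordinates
    ndc = clip[:, :2] / clip[:, 3:4]                      # perspective divide -> [-1, 1]
    uv = ndc * 0.5 + 0.5                                  # remap to [0, 1] texture space
    return uv

# Hypothetical example: three vertices of a building facade and a
# placeholder 4x4 view-projection matrix for the network video camera.
vp = np.eye(4)          # a real camera matrix would come from calibration
verts = np.array([[0.0, 0.0, -2.0],
                  [1.0, 0.0, -2.0],
                  [1.0, 1.0, -2.0]])
print(projective_texcoords(verts, vp))
```

In a real renderer this projection would typically run in a vertex or fragment shader, with each decoded video frame uploaded as a texture and points outside the camera frustum discarded.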

Developing of VOC sensor Signal Processing System using Embedded System on the Web Environment (웹 환경에서 임베디드 시스템을 이용한 VOC센서 원격 신호 모니터링 시스템 개발)

  • Park, Jin-Kwan;Lim, Hae-Jin;Nam, Si-Byung
    • Journal of the Korea Academia-Industrial cooperation Society, v.12 no.1, pp.375-383, 2011
  • Recent advances in digital technology and diversified internet services have led to rapid growth in research on monitoring systems that use embedded web servers in USN systems. In USN systems equipped with wireless sensor modules whose sensors need extra heating power to operate properly, excessive power consumption makes the whole system inefficient. In this paper, we develop a remote monitoring system for VOC (Volatile Organic Compounds) sensor signals using an embedded system in a web environment, and we propose a real-time method of processing the sensor data stream delivered over a serial bus from the sensor module in the USN system. The proposed system can monitor harmful gases in real time and can be operated semi-permanently because the sensor module is powered through the serial bus. The harmful gas detected by the VOC sensor module is toluene, and the module is built with FIGARO TGS-2602 VOC sensors. The detected signal is transferred to the embedded web server over an RS-485 serial link. The VOC sensor module and the embedded web server (EMPOS-II) are designed to work together so that harmful gases can be monitored on the web in real time anywhere an internet connection is available. A hedged sketch of the serial-to-web monitoring loop follows below.
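
As a hedged sketch of the monitoring loop described above (not the authors' implementation), the Python fragment below reads values from a serial port with pyserial and serves the latest reading over HTTP, standing in for the RS-485 link and the EMPOS-II embedded web server. The port name, baud rate, and single-value line format are assumptions.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import serial  # pyserial

latest = {"toluene_ppm": None}

def read_sensor(port="/dev/ttyS1", baud=9600):
    """Continuously read ASCII lines from the serial link and keep the
    most recent VOC reading. Port, baud rate, and frame format are assumptions."""
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            line = ser.readline().decode(errors="ignore").strip()
            if line:
                try:
                    latest["toluene_ppm"] = float(line)
                except ValueError:
                    pass  # ignore malformed frames

class Monitor(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return the most recent sensor value as JSON.
        body = json.dumps(latest).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=read_sensor, daemon=True).start()
    HTTPServer(("", 8080), Monitor).serve_forever()
```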

A Real-Time Multiple Circular Buffer Model for Streaming MPEG-4 Media (MPEG-4 미디어 스트리밍에 적합한 실시간형 다중원형버퍼 모델)

  • 신용경;김상욱
    • Journal of KIISE: Computing Practices and Letters, v.9 no.1, pp.13-24, 2003
  • MPEG-4 is a standard for multimedia applications that provides a set of technologies satisfying the needs of authors, service providers, and end users alike. In this paper, we propose a Real-time Multiple Circular Buffer (M4RM buffer) model suitable for efficiently streaming MPEG-4 content. The M4RM buffer creates a buffer structure for each object composing an MPEG-4 content, according to the transferred object information, and handles multiple read/write operations purely by reference. It divides the decoder buffer and the composition buffer described in the standard into frame-sized units to minimize the range of access, and these frame buffer units are allocated according to the object description. The model also handles object synchronization within the buffer and provides APIs for efficient buffer management so that real-time user events can be processed. Performance evaluation shows that the M4RM buffer model reduces the waiting time for a buffer frame, so it can stream MPEG-4 content in real time with a smaller memory block than IM1-2D and Windows Media Player. A hedged sketch of a per-object circular frame buffer follows below.
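
The abstract does not define the M4RM buffer's API, so the sketch below shows only a generic per-object circular frame buffer with blocking read and write in Python; the class and method names are hypothetical, and the real model additionally handles object synchronization and real-time user events.

```python
import threading

class CircularFrameBuffer:
    """A fixed-capacity circular buffer of frames with blocking
    read/write, one instance per media object (hypothetical sketch)."""

    def __init__(self, capacity=32):
        self.frames = [None] * capacity
        self.capacity = capacity
        self.read_idx = 0
        self.write_idx = 0
        self.count = 0
        self.cond = threading.Condition()

    def write(self, frame):
        with self.cond:
            while self.count == self.capacity:   # buffer full: wait for a reader
                self.cond.wait()
            self.frames[self.write_idx] = frame
            self.write_idx = (self.write_idx + 1) % self.capacity
            self.count += 1
            self.cond.notify_all()

    def read(self):
        with self.cond:
            while self.count == 0:               # buffer empty: wait for a writer
                self.cond.wait()
            frame = self.frames[self.read_idx]
            self.read_idx = (self.read_idx + 1) % self.capacity
            self.count -= 1
            self.cond.notify_all()
            return frame

# One buffer per object, e.g. video and audio elementary streams.
buffers = {"video": CircularFrameBuffer(), "audio": CircularFrameBuffer()}
```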

An Efficient Thumbnail Extraction Method in H.264/AVC Bitstreams (H.264/AVC 비트스트림에서 효율적으로 축소 영상을 추출 하는 방법)

  • Yu, Sang-Jun;Yoon, Myung-Keun;Kim, Eun-Seok;Sohn, Chae-Bong;Sim, Dong-Gyu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering, v.13 no.2, pp.222-235, 2008
  • Recently, as high-definition media services such as HDTV and IPTV have grown, fast moving-picture manipulation techniques are needed to meet the requirements of those services. In particular, a fast method for extracting reduced-size images is required for video indexing and video summarization. Conventional DC-image extraction methods, however, cannot be applied to H.264/AVC streams, because a spatial-domain prediction scheme is adopted in H.264/AVC intra mode. In this paper, we propose a theoretical method for extracting a thumbnail image from an H.264/AVC intra frame in the frequency domain. The proposed scheme extracts thumbnails very quickly because all operations are applied directly to transform coefficients: after a general equation for thumbnail extraction is derived for the nine H.264/AVC intra prediction modes, an LUT (Look-Up Table) is designed for each mode. Implementation and performance evaluation show that, while the subjective quality difference between the output of our scheme and a conventional output is negligible, our scheme extracts thumbnails up to 63 % faster. A hedged sketch of the DC-image idea follows below.
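
The proposed method operates directly on H.264/AVC transform coefficients using per-mode LUTs for the nine intra prediction modes; the sketch below only illustrates the underlying DC-image idea on raw pixels (each block mean is proportional to its DC coefficient) and does not model the frequency-domain LUT step.

```python
import numpy as np

def dc_thumbnail(frame, block=4):
    """Reduce a grayscale frame by taking the mean of each block x block
    region. The mean is proportional to the block's transform DC
    coefficient, which is the quantity recovered from intra-coded
    coefficients in the paper (after LUT-based compensation of the nine
    intra prediction modes, which this sketch does not model)."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

# Hypothetical 16x16 test frame -> 4x4 thumbnail.
frame = np.arange(256, dtype=np.float64).reshape(16, 16)
print(dc_thumbnail(frame).shape)   # (4, 4)
```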

An emotional speech synthesis markup language processor for multi-speaker and emotional text-to-speech applications (다음색 감정 음성합성 응용을 위한 감정 SSML 처리기)

  • Ryu, Se-Hui;Cho, Hee;Lee, Ju-Hyun;Hong, Ki-Hyung
    • The Journal of the Acoustical Society of Korea, v.40 no.5, pp.523-529, 2021
  • In this paper, we designed and developed an Emotional Speech Synthesis Markup Language (SSML) processor. Multi-speaker emotional speech synthesis technology that can express multiple voice colors and emotional expressions has been developed, and we designed Emotional SSML by extending SSML to cover multiple voice colors and emotional expressions. The Emotional SSML processor has a graphical user interface and consists of the following four components: first, a multi-speaker emotional text editor that makes it easy to mark specific voice colors and emotions at the desired positions; second, an Emotional SSML document generator that automatically creates an Emotional SSML document from the output of the editor; third, an Emotional SSML parser that parses the Emotional SSML document; and last, a sequencer that controls a multi-speaker, emotional Text-to-Speech (TTS) engine based on the parser's output. Because it is based on SSML, a programming-language- and platform-independent open standard, the Emotional SSML processor can easily be integrated with various speech synthesis engines and facilitates the development of multi-speaker emotional text-to-speech applications. A hedged sketch of parsing such markup follows below.
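
The abstract does not specify the Emotional SSML element or attribute names, so the sketch below parses a hypothetical SSML-like fragment with Python's xml.etree and turns it into a (speaker, emotion, utterance) sequence of the kind a TTS sequencer could consume; the emotion attribute and speaker names are assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical Emotional SSML fragment: the abstract does not specify the
# exact extension syntax, so the `emotion` attribute on <voice> is an assumption.
doc = """<speak>
  <voice name="speakerA" emotion="happy">Good morning!</voice>
  <voice name="speakerB" emotion="sad">The stream is offline today.</voice>
</speak>"""

def parse_emotional_ssml(text):
    """Parse an SSML-like document and return (speaker, emotion, utterance)
    tuples, the kind of sequence a TTS sequencer could play back in order."""
    root = ET.fromstring(text)
    sequence = []
    for voice in root.iter("voice"):
        sequence.append((voice.get("name"),
                         voice.get("emotion", "neutral"),
                         (voice.text or "").strip()))
    return sequence

for speaker, emotion, utterance in parse_emotional_ssml(doc):
    print(f"{speaker} [{emotion}]: {utterance}")
```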