• Title/Summary/Keyword: streaming format (스트리밍 형식)

Design and Implementation of Distributed Object Framework Supporting Audio/Video Streaming (오디오/비디오 스트리밍을 지원하는 분산 객체 프레임 워크 설계 및 구현)

  • Ban, Deok-Hun; Kim, Dong-Seong; Park, Yeon-Sang; Lee, Heon-Ju
    • Journal of KIISE: Computing Practices and Letters, v.5 no.4, pp.440-448, 1999
  • This paper describes the design and implementation of a software framework that supports the processing of real-time stream data, such as audio and video, in a distributed object-oriented computing environment. DAViS (Distributed Object Framework supporting Audio/Video Streaming), proposed in this paper, abstracts the software components involved in processing audio/video data as distributed objects and separates the path over which audio/video data is transmitted between those objects from the path used to exchange control information. Using the services provided by DAViS, writers of distributed applications can express the handling of audio/video data at the same level of abstraction as that provided by existing distributed programming environments. DAViS has an internal structure flexible enough to easily incorporate components that handle new audio/video data formats and to rapidly and naturally accommodate advances in underlying network transport and computer system technology with very little modification.
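A minimal, hypothetical Python sketch of the core idea above: the control path (object method calls such as start/stop) is kept separate from the audio/video data path (a dedicated transport socket). None of the class, file, or address names below come from the paper.

```python
# Hypothetical illustration only: a stream-source object whose methods form the
# control path, while audio/video chunks travel over a separate UDP socket (the
# data path). File name, port, and class names are invented for this sketch.
import socket


class StreamSource:
    def __init__(self, media_file: str, peer_addr: tuple[str, int]):
        self.media_file = media_file
        self.peer_addr = peer_addr                       # address of the sink object
        self.data_sock = socket.socket(socket.AF_INET,   # dedicated data path
                                       socket.SOCK_DGRAM)

    # --- control path: small, infrequent messages (start/stop) ---
    def start(self) -> None:
        with open(self.media_file, "rb") as f:
            while chunk := f.read(1400):                      # ~MTU-sized A/V chunks
                self.data_sock.sendto(chunk, self.peer_addr)  # data path

    def stop(self) -> None:
        self.data_sock.close()


if __name__ == "__main__":
    src = StreamSource("clip.raw", ("127.0.0.1", 5004))  # placeholder media file
    src.start()
    src.stop()
```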

Web-based Text-To-Sign Language Translating System (웹기반 청각장애인용 수화 웹페이지 제작 시스템)

  • Park, Sung-Wook; Wang, Bo-Hyeun
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.3, pp.265-270, 2014
  • Hearing-impaired people have difficulty hearing, so it is also hard for them to learn letters that represent sound and text that conveys complex and abstract concepts. It has therefore been a natural choice for hearing-impaired people to communicate in sign language, which employs facial expressions and hand and body motion. However, the major communication methods in daily life are text and speech, which are major obstacles for hearing-impaired people in accessing information, learning, carrying out intellectual activities, and getting jobs. As delivering information via the internet becomes common, hearing-impaired people experience even more difficulty in accessing information, since the internet represents information mostly in text form; this intensifies the imbalance in information accessibility. This paper reports a web-based text-to-sign-language translating system that helps web designers use sign language in web page design. Because the system is web-based, web designers can use it with nothing more than a common computing environment for internet browsing. The system uses a bulletin board as its user interface: when web designers write paragraphs and post them through the bulletin board to the translating server, the server translates the incoming text to sign language, animates it with a 3D avatar, and records the animation in an MP4 file. The file addresses are fetched by the bulletin board, enabling web designers to embed the translated sign-language files into their web pages using HTML5 or JavaScript. We also analyzed the text used on public-service web pages, identified words missing from the translating system, and added them to improve translation. This addition is expected to encourage wide and easy acceptance of public-service web pages by hearing-impaired people.
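As an illustration of the workflow described above (post text, translate it to sign language, render a 3D-avatar animation to an MP4 file, embed the file in a web page), here is a small hypothetical Python sketch; the function names, the toy lexicon, and the HTML5 snippet are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the post -> translate -> render -> embed workflow.
def translate_to_glosses(text: str) -> list[str]:
    """Stand-in for the text-to-sign-language translator (word -> sign gloss)."""
    lexicon = {"안녕하세요": "HELLO", "감사합니다": "THANK-YOU"}  # toy lexicon
    return [lexicon.get(word, word.upper()) for word in text.split()]


def render_avatar(glosses: list[str], out_path: str) -> str:
    """Stand-in for the 3D-avatar renderer that records the animation as MP4."""
    # A real implementation would drive an avatar rig and encode video here.
    print(f"rendering {' '.join(glosses)} -> {out_path}")
    return out_path


def embed_snippet(mp4_url: str) -> str:
    """HTML5 snippet a web designer could paste into a page."""
    return f'<video src="{mp4_url}" controls></video>'


if __name__ == "__main__":
    post = "안녕하세요 감사합니다"  # text posted to the bulletin board
    mp4 = render_avatar(translate_to_glosses(post), "/media/post-001.mp4")
    print(embed_snippet(mp4))
```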

AUX Model for Storing and Analyzing Associative User Experience Information (연상된 사용자 경험정보 축척 및 분석을 위한 AUX 모델)

  • Ryu, Chun-Yeol; Yang, Hae-Sool
    • The Journal of the Korea Contents Association, v.11 no.12, pp.586-596, 2011
  • In the IT industry, the processing units of IT applications are getting smaller and more efficient, and advances in sensing technology now make a variety of smart functions feasible. Service infrastructures built on efficient, compact mobile devices are being applied to many areas, and such devices are carried by users with the services built in. Studies in the UX (User Experience) field continue to attempt the analysis and prediction of user information with reference to the UI (User Interface). However, a common framework for classifying and storing user information, and a standardized form for it, have not yet been attempted. In this study, we propose the AUX (Associative User Experience) model and a process structure for storing diverse user experience data. The AUX model expresses this diversity of user experience data with an extended E-TCPN model, and the data structure is expressed in XML for applications of the AUX model. This model, together with the separation of the process structure, guarantees specialization, productivity, and flexibility through the human characteristics of users and the independence of the technical process structure. To analyze performance, the AUX model is mapped onto an AUX information processing architecture and the process is expressed with an improved MPP algorithm; a simulation applying MPP traffic allocation to VOD is used for the analysis. The playback deviation of the MPP traffic allocation algorithm with the AUX model applied improved by 10.41% over the case where it was not applied. As a result, playback performance improved by combining AUX information on users' media and content access with dynamic traffic allocation such as MPI and CPI.
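The abstract states that the AUX data structure is expressed in XML but does not reproduce the schema, so the Python sketch below uses an invented element layout purely to illustrate accumulating associated user-experience records in XML.

```python
# Entirely hypothetical AUX-style XML accumulation; element and attribute names
# are invented, since the paper's actual schema is not given in the abstract.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone


def append_aux_record(root: ET.Element, user: str, medium: str, content: str) -> None:
    """Accumulate one user-experience event under the AUX root element."""
    rec = ET.SubElement(root, "experience", user=user,
                        time=datetime.now(timezone.utc).isoformat())
    ET.SubElement(rec, "medium").text = medium    # e.g. VOD, web, mobile app
    ET.SubElement(rec, "content").text = content  # what the user accessed


if __name__ == "__main__":
    aux = ET.Element("aux")
    append_aux_record(aux, "user-01", "VOD", "drama-ep01")
    append_aux_record(aux, "user-01", "web", "news-article-42")
    print(ET.tostring(aux, encoding="unicode"))
```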

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon; Dongha Jo; Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing, v.24 no.1, pp.42-50, 2023
  • The development of technology has brought digital innovation throughout the media industry, including production and editing techniques, and the OTT service and streaming era has diversified the forms in which consumers view content. The convergence of big data and deep learning networks has enabled the automatic generation of text in formats such as news articles, novels, and scripts, but studies that reflect the author's intention and generate contextually smooth stories have been insufficient. In this paper, we describe the flow of pictures in a storyboard using image caption generation techniques, and the automatic generation of scenarios tailored to the story through a language model. Using image captioning based on a CNN and an attention mechanism, we generate sentences describing the pictures on the storyboard and feed the generated sentences into the natural language processing model KoGPT-2 to automatically generate scenarios that meet the planning intention. In this way, scenarios customized to the author's intention and story can be created in large quantities, easing the burden of content creation, and artificial intelligence participates in the overall process of digital content production, activating media intelligence.
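A minimal sketch of the caption-then-generate pipeline described above, assuming an off-the-shelf captioning model (nlpconnect/vit-gpt2-image-captioning) stands in for the paper's CNN-plus-attention captioner and the public skt/kogpt2-base-v2 checkpoint stands in for KoGPT-2; prompt handling and generation settings are assumptions, not the authors' code.

```python
# Caption each storyboard image, then let KoGPT-2 continue the story from the
# concatenated captions. Model choices and decoding parameters are assumptions.
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast, pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
tokenizer = PreTrainedTokenizerFast.from_pretrained("skt/kogpt2-base-v2")
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")


def generate_story(image_paths: list[str], max_new_tokens: int = 128) -> str:
    # 1) Describe each storyboard frame with the captioning model.
    captions = [captioner(path)[0]["generated_text"] for path in image_paths]
    # 2) Use the concatenated captions as the prompt and let the language model
    #    continue the story; re-feeding the output as the next prompt (the
    #    paper's recursive calls) would extend the scenario further.
    prompt = " ".join(captions)
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=max_new_tokens,
                            do_sample=True, top_p=0.92)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate_story(["scene1.png", "scene2.png"]))  # placeholder image paths
```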