• Title/Summary/Keyword: 동영상 관리 시스템 (video management system)

Development of the video-based smart utterance deep analyser (SUDA) application (동영상 기반 자동 발화 심층 분석(SUDA) 어플리케이션 개발)

  • Lee, Soo-Bok;Kwak, Hyo-Jung;Yun, Jae-Min;Shin, Dong-Chun;Sim, Hyun-Sub
    • Phonetics and Speech Sciences / v.12 no.2 / pp.63-72 / 2020
  • This study aims to develop a video-based smart utterance deep analyser (SUDA) application that semi-automatically analyzes the utterances a child and mother produce during interactions over time. SUDA runs on Android phones, iPhones, and tablet PCs, and supports video recording and uploading to a server. User access is divided into three modes: expert mode, general mode, and manager mode. In the expert mode, which is useful for speech and language evaluation, the subject's utterances are analyzed semi-automatically by measuring speech and language factors such as disfluency, morphemes, syllables, words, articulation rate, and response time. In the general mode, the outcome of the utterance analysis is provided in graph form, and the manager mode is accessible only to the administrator controlling the entire system, for tasks such as utterance analysis and video deletion. SUDA reduces clinicians' and researchers' workload by saving time on utterance analysis, and it helps parents easily receive detailed information about their child's speech and language development. Further, this system will contribute to building a longitudinal dataset large enough to explore predictors of stuttering recovery and persistence.
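
As a rough illustration of the kind of measures the expert mode reports, the sketch below computes articulation rate and response time from hypothetical utterance timing records; the field names and formulas are assumptions for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str          # "child" or "mother"
    start: float          # utterance onset in seconds
    end: float            # utterance offset in seconds
    syllables: int        # syllable count of the transcribed utterance

def articulation_rate(u: Utterance) -> float:
    """Syllables per second of speaking time (pauses not subtracted in this sketch)."""
    duration = u.end - u.start
    return u.syllables / duration if duration > 0 else 0.0

def response_time(prev: Utterance, curr: Utterance) -> float:
    """Latency between the end of the partner's turn and the start of the reply."""
    return max(0.0, curr.start - prev.end)

# Example: one mother-child exchange
mother = Utterance("mother", start=0.0, end=1.8, syllables=9)
child = Utterance("child", start=2.3, end=3.5, syllables=5)
print(articulation_rate(child))      # ~4.17 syllables/s
print(response_time(mother, child))  # 0.5 s
```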

Content Based Video Retrieval by Example Considering Context (문맥을 고려한 예제 기반 동영상 검색 알고리즘)

  • 박주현;낭종호;김경수;하명환;정병희
    • Journal of KIISE:Computer Systems and Theory / v.30 no.12 / pp.756-771 / 2003
  • A digital video library system that manages a large amount of multimedia information requires efficient and effective retrieval methods. In this paper, we propose and implement a new video search and retrieval algorithm that compares the query video shot with the video shots in the archive in terms of the foreground object, the background image, the audio, and the context. The foreground object is the region of the video image that changes across the successive frames of the shot, the background image is the remaining region of the video image, and the context is the relationship between the low-level features of adjacent shots. Comparing these features reflects the process of filming a moving picture, and it lets the user submit a query focused on the desired features of the target video clips simply by adjusting their weights in the comparison process. Although the proposed search and retrieval algorithm cannot fully capture the high-level semantics of the submitted query video, it reflects the user's requirements as much as possible by considering the context of video clips and by adjusting the feature weights in the comparison process.
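
A minimal sketch of the weighted comparison idea described above, assuming each shot has already been reduced to foreground, background, audio, and context feature vectors; the distance function and the example weights are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def shot_distance(query: dict, candidate: dict, weights: dict) -> float:
    """Weighted sum of per-feature distances between two shots.

    Each shot is a dict of feature vectors, e.g.
    {"foreground": ..., "background": ..., "audio": ..., "context": ...}.
    """
    total = 0.0
    for name, w in weights.items():
        q, c = np.asarray(query[name]), np.asarray(candidate[name])
        total += w * np.linalg.norm(q - c)   # Euclidean distance per feature
    return total

def retrieve(query: dict, archive: list[dict], weights: dict, top_k: int = 5) -> list[dict]:
    """Rank archived shots by weighted distance to the query shot."""
    ranked = sorted(archive, key=lambda shot: shot_distance(query, shot, weights))
    return ranked[:top_k]

# Users emphasize the features they care about by adjusting the weights,
# e.g. prioritizing the foreground object over audio similarity.
weights = {"foreground": 0.4, "background": 0.3, "audio": 0.1, "context": 0.2}
```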

Design of Moving Picture Retrieval System using Scene Change Technique (장면 전환 기법을 이용한 동영상 검색 시스템 설계)

  • Kim, Jang-Hui;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.3 / pp.8-15 / 2007
  • Efficient processing of multimedia data has recently become important. In particular, retrieval of multimedia information requires both user-interface and retrieval techniques. This paper proposes a new technique that effectively detects cuts in MPEG-compressed video. A cut is a transition point between scenes, and cut detection is the basic first step for video indexing and retrieval. Existing methods tend to report false cuts under screen changes such as fast object motion, camera movement, or a flash, because they compare only the previous frame with the present frame. The proposed technique first detects shots using the DC (direct current) coefficients of the DCT (discrete cosine transform), and the database is composed of these detected shots. Features are then extracted with the HMMD color model and the edge histogram descriptor (EHD) from the MPEG-7 visual descriptors, and matching is performed sequentially by the proposed matching technique. The experiments show that the implemented video segmentation system performs more quickly and precisely than existing techniques.
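
The following sketch illustrates the general idea of shot-boundary detection from DC coefficients, assuming the DC terms of each MPEG frame have already been extracted; the histogram-difference criterion, value range, and threshold are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def dc_histogram(dc_coeffs: np.ndarray, bins: int = 64) -> np.ndarray:
    """Normalized histogram of a frame's DCT DC coefficients (a coarse luminance image)."""
    hist, _ = np.histogram(dc_coeffs, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def detect_cuts(dc_frames: list[np.ndarray], threshold: float = 0.4) -> list[int]:
    """Report frame indices where the DC histogram changes abruptly (candidate cuts)."""
    cuts = []
    prev = dc_histogram(dc_frames[0])
    for i, frame in enumerate(dc_frames[1:], start=1):
        curr = dc_histogram(frame)
        diff = 0.5 * np.abs(curr - prev).sum()   # L1 histogram distance in [0, 1]
        if diff > threshold:
            cuts.append(i)
        prev = curr
    return cuts
```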

H5Station: An Effective HTML5-based Multimedia File Management System (H5Station: HTML5-기반 효과적인 멀티미디어 파일 관리 시스템)

  • Jeong, Da-Eun;Won, Ji-Hye;Kim, Su-Jung;Lee, Jong-Woo
    • Journal of Digital Contents Society / v.13 no.2 / pp.141-150 / 2012
  • As the number of smartphone users grows rapidly, the amount of content they keep on their devices increases, and so does the need for effective and easy management of their files. To satisfy this need, we propose an integrated multimedia file management system, H5Station. Using the H5Station client, users can connect to the H5Station server running on their own PC, manage their multimedia files effectively, and enjoy every file they have. The H5Station client is developed with standard HTML5 technology, so it can run on any HTML5-capable browser. The main functions of H5Station are: configuring user settings, charting the distribution of user files, uploading, downloading, and deleting files, synchronizing file metadata between client and server, playing user-selected files, searching for files, and creating user-customized buttons. Through these functions, users can easily find, play, and manage their multimedia content anywhere and at any time.
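
As a rough sketch of one of the listed functions, the snippet below compares client-side and server-side file metadata to decide what needs syncing; the record fields and the comparison rule are illustrative assumptions, not H5Station's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str
    size: int
    modified: float   # last-modified timestamp (epoch seconds)

def diff_metadata(client: dict[str, FileMeta], server: dict[str, FileMeta]):
    """Return paths missing on the server and paths whose server copy is stale."""
    missing = [p for p in client if p not in server]
    stale = [p for p in client
             if p in server and client[p].modified > server[p].modified]
    return missing, stale
```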

A Window-Based DVS Algorithm for MPEG Player (MPEG 동영상 재생기를 위한 윈도우 기반 동적 전압조절 알고리즘)

  • Seo, Young-Sun;Park, Kyung-Hwan;Baek, Yong-Gyu;Cho, Jin-Sung
    • Journal of KIISE:Computer Systems and Theory / v.35 no.11 / pp.517-526 / 2008
  • As the functionality of portable devices is enhanced and their performance greatly improved, the power dissipation of battery-driven portable devices increases, so efficient power management is needed to reduce their power consumption. In this paper, we propose a window-based DVS (dynamic voltage scaling) algorithm for an MPEG player. The proposed algorithm maintains recent frame information and execution times received from the MPEG player in a window queue and dynamically adjusts the (frequency, voltage) level based on the window queue information. Our algorithm can be implemented as a module in a common multimedia player; we employed the well-known MPlayer for the performance measurements. The experimental results show that the proposed algorithm reduces energy consumption by 56% compared with running at maximal performance.
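
A minimal sketch of a window-based DVS decision, assuming the player reports each frame's decode time into a fixed-size window; the level table and the selection rule are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque

# Hypothetical (frequency MHz, voltage V) levels, fastest first.
LEVELS = [(600, 1.3), (400, 1.1), (300, 1.0), (200, 0.9)]

class WindowDVS:
    def __init__(self, window_size: int = 30, frame_period: float = 1 / 30):
        self.window = deque(maxlen=window_size)   # recent decode times at the current level
        self.frame_period = frame_period          # per-frame deadline (e.g. 30 fps playback)
        self.level = 0                            # start at the fastest level

    def on_frame_decoded(self, decode_time: float) -> tuple[int, float]:
        """Record a decode time and pick the slowest level that still meets the deadline."""
        self.window.append(decode_time)
        worst = max(self.window)
        base_freq = LEVELS[self.level][0]
        best = 0                                  # fall back to the fastest level
        for i, (freq, volt) in enumerate(LEVELS):
            projected = worst * base_freq / freq  # estimated decode time at this level
            if projected <= self.frame_period:
                best = i                          # slowest feasible level saves the most energy
        self.level = best
        return LEVELS[self.level]
```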

An Efficient Car Management System based on an Object-Oriented Modeling using Car Number Recognition and Smart Phone (자동차 번호판 인식 및 스마트폰을 활용한 객체지향 설계 기반의 효율적인 차량 관리 시스템)

  • Jung, Se-Hoon;Kwon, Young-Wook;Sim, Chun-Bo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.7 no.5 / pp.1153-1164 / 2012
  • In this paper, we propose an efficient car management system based on object-oriented modeling that uses license plate recognition and smartphones. The proposed system identifies the number of a vehicle brought in for repair by recognizing its license plate with an IP camera in real time, and the existing repair history of the recognized car is then displayed on a DID screen. In addition, the maintenance process is recorded on video through the IP camera while the mechanic repairs the car, and key frames automatically extracted from the recorded video are sent to the customer, providing vehicle identification and repair-history management functions. Web- and mobile-based graphical user interfaces are provided for user convenience. The modules of the proposed system are designed with fine-grained object-oriented modeling, considering reuse and extensibility after implementation. Car repair centers and maintenance companies can improve their business efficiency, and customers can gain confidence in the requested vehicle repairs.
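
A rough sketch of the automatic key-frame extraction mentioned above, assuming decoded grayscale frames as NumPy arrays; the frame-difference criterion and threshold are illustrative assumptions rather than the system's actual method.

```python
import numpy as np

def extract_key_frames(frames: list[np.ndarray], threshold: float = 30.0) -> list[int]:
    """Keep frames whose mean absolute difference from the last key frame is large."""
    if not frames:
        return []
    key_indices = [0]
    last_key = frames[0].astype(np.float32)
    for i, frame in enumerate(frames[1:], start=1):
        diff = np.mean(np.abs(frame.astype(np.float32) - last_key))
        if diff > threshold:          # scene content changed enough to keep this frame
            key_indices.append(i)
            last_key = frame.astype(np.float32)
    return key_indices
```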

Video Event Analysis and Retrieval System for the KFD Web Database System (KFD 웹 데이터베이스 시스템을 위한 동영상 이벤트 분석 및 검색 시스템)

  • Oh, Seung-Geun;Im, Young-Hee;Chung, Yong-Wha;Chang, Jin-Kyung;Park, Dai-Hee
    • The Journal of the Korea Contents Association / v.10 no.11 / pp.20-29 / 2010
  • A prototype Kinetic Family Drawing (KFD) Web database system has been developed, based on suggestions from family art therapists, with the aim of handling large amounts of assessment data and facilitating effective assessment activities. Unfortunately, such a system has an intrinsic limitation: it fails to capture clients' behaviors, attitudes, facial expressions, voices, and other critical information observed while they are drawing. Accordingly, in this paper we propose an ontology-based video event analysis and retrieval system that enhances the KFD Web database system by using a web camera and a drawing tool. More specifically, the proposed system delivers two kinds of services: a client video retrieval service and a sketch video retrieval service, each accompanied by a summary report of the events and dynamic behaviors that occurred for each family member object. The proposed system supports reinforced KFD assessment by providing quantitative and subjective information on clients' working attitudes, behaviors, and KFD preparation processes.

Classification and Recommendation of Scene Templates for PR Video Making Service based on Strategic Meta Information (홍보동영상 제작 서비스를 위한 전략메타정보 기반 장면템플릿 분류 및 추천)

  • Park, Jongbin;Lee, Han-Duck;Kim, Kyung-Won;Jung, Jong-Jin;Lim, Tae-Beom
    • Journal of Broadcast Engineering / v.20 no.6 / pp.848-861 / 2015
  • In this paper, we introduce a new web-based PR video making service system. Many video editing tools require significant editing skill or a scenario-planning stage even for making a simple PR video, and some users prefer a simple and fast approach over sophisticated and complex functionality. To address this, it is important to provide an easy user interface together with intelligent classification and recommendation. We therefore propose a new template classification and recommendation scheme that uses a topic modeling method. The proposed scheme has the advantage of being able to handle unstructured as well as structured meta information.
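
A minimal sketch of topic-model-based template recommendation, assuming each scene template already has a text description built from its meta information; scikit-learn's LDA is used here as a stand-in, and the similarity rule and example data are illustrative assumptions, not the paper's scheme.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical template descriptions built from their (structured or unstructured) meta information.
templates = [
    "restaurant food menu interior photos opening hours",
    "fitness gym trainer class schedule membership",
    "cafe dessert coffee seasonal menu event",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(templates)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
template_topics = lda.fit_transform(counts)          # topic distribution per template

def recommend(user_text: str, top_k: int = 2) -> list[int]:
    """Rank templates by topic-distribution similarity to the user's PR description."""
    user_topics = lda.transform(vectorizer.transform([user_text]))[0]
    scores = template_topics @ user_topics            # dot product as a simple similarity
    return list(np.argsort(scores)[::-1][:top_k])

print(recommend("small coffee shop promoting a new dessert menu"))
```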

Construction of Low Cost Tiled Display System with Super High Resolution (초고해상도 저가형 타일드 디스플레이 시스템 구축)

  • Kim, Gi-Beom;Kim, Dae-Hyun;Park, Seong-Won;Kim, Myoung-Jun
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.455-462 / 2006
  • In this paper, we performed edge blending using the Pixel Shader technique of programmable GPUs with low-cost consumer projectors, and built a large 5.6 m × 2.4 m tiled display composed of 7 × 4 projectors with a super-high resolution of 6592 × 2784 pixels. As an application for the tiled display, we developed a tiled display management program that lets the whole system operate like a single computer; it handles control of the computers and projectors and the launching and termination of applications. We also developed an image viewer and a video player that can display super-high-resolution images and videos that cannot be played on an ordinary computer, as well as a 3D viewer that allows real-time interaction with 3D models of more than one million polygons.
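
A small sketch of the edge-blending idea: computing per-pixel blend weights for the overlap bands between adjacent projectors. In the paper this runs in a Pixel Shader on the GPU, whereas the NumPy version below, with an assumed overlap width and gamma, is only for illustration.

```python
import numpy as np

def edge_blend_ramp(width: int, overlap: int, gamma: float = 2.2) -> np.ndarray:
    """Per-column intensity weights for one projector tile.

    Pixels in the left/right overlap bands fade smoothly so that two
    overlapping projectors sum to roughly uniform brightness.
    """
    weights = np.ones(width)
    ramp = np.linspace(0.0, 1.0, overlap)
    weights[:overlap] = ramp                      # fade in on the left edge
    weights[-overlap:] = ramp[::-1]               # fade out on the right edge
    return weights ** (1.0 / gamma)               # compensate for display gamma

# Example: a 1024-pixel-wide tile with a 128-pixel overlap on each side.
ramp = edge_blend_ramp(1024, 128)
```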

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho;Oh, Kyeong-Jin;Sean, Vi-Sal;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • In the past, most users simply wanted to watch video content without any specific requirements or purposes, but today users try to learn and discover more about the things that appear in a video while watching it. The demand for finding multimedia or browsing information about objects of interest is therefore spreading with the increasing use of multimedia such as video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of the objects in video content is inevitable, and for this reason many researchers have actively studied methods of annotating the objects that appear in video. In keyword-based annotation, related information about an object appearing in the video content is added directly, and annotation data including all related information about the object must be managed individually; users have to enter all related information themselves. Consequently, when a user browses for information related to an object, only the limited resources that exist in the annotated data can be found, and placing annotations on objects requires a huge amount of user effort. To reduce this workload and minimize the work involved in annotation, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects that appear in video content must all be detected and recognized, which remains a difficult problem for fully automated annotation. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in video content and to access semantic data through Linked Data. The annotation data managed by the annotation server are represented using an ontology so that the information can easily be shared and extended. Since the annotation data do not include all the relevant information about an object, objects that appear in the video content are simply linked to existing objects in Linked Data to obtain all the related information: the annotation server stores only a URI and metadata such as position, time, and size, and when the user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about objects, using the annotation data collaboratively generated by many users, while watching the video. With this system, a query is generated automatically through simple user interaction, all the related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In the future Semantic Web environment, the proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
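
As a rough sketch of the lightweight annotation data described above, the snippet below represents an annotation that stores only a URI plus position/time/size metadata and fetches further detail from Linked Data; the record fields and the DBpedia SPARQL query are illustrative assumptions, not the system's actual schema or endpoint usage.

```python
from dataclasses import dataclass
import requests

@dataclass
class Annotation:
    """Lightweight annotation: only a URI plus where and when the object appears."""
    uri: str          # Linked Data resource identifying the object
    start: float      # appearance start time in seconds
    end: float        # appearance end time in seconds
    x: int            # bounding-box position and size in the frame
    y: int
    width: int
    height: int

def fetch_abstract(annotation: Annotation, lang: str = "en") -> str | None:
    """Pull additional information about the annotated object from DBpedia via SPARQL."""
    query = f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {{
            <{annotation.uri}> dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "{lang}")
        }}
    """
    resp = requests.get(
        "https://dbpedia.org/sparql",
        params={"query": query, "format": "application/sparql-results+json"},
    )
    results = resp.json()["results"]["bindings"]
    return results[0]["abstract"]["value"] if results else None

ann = Annotation("http://dbpedia.org/resource/Eiffel_Tower", 12.0, 18.5, 320, 80, 200, 400)
```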