• Title/Summary/Keyword: Video-integrated


Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun;Lee, Sung-Uk;Lee, Jong-Ha;Kee, Seok-Cheol;Kim, Sang-Ryong
    • Proceedings of the IEEK Conference / 2003.07e / pp.1960-1963 / 2003
  • For applications such as video surveillance and human-computer interfaces, we propose an efficiently integrated method to detect and track faces. Various visual cues are combined in the algorithm: motion, skin color, global appearance, and facial pattern detection. ICA (Independent Component Analysis)-SVM (Support Vector Machine)-based pattern detection is performed on the candidate region extracted from motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the rate and accuracy of detection. Experimental results show that our detection rate is 91% with very few false alarms, running at about 4 frames per second for 640 by 480 pixel images on a Pentium IV 1 GHz.

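The cue-combination pipeline described above maps naturally onto a few OpenCV primitives. The sketch below is a minimal illustration, not the authors' implementation: the ICA-SVM pattern detector is not publicly available, so OpenCV's stock Haar cascade stands in for the final verification stage, and the skin-color range and thresholds are placeholder values.

```python
import cv2
import numpy as np

# Stock Haar cascade stands in for the paper's ICA-SVM detector (assumption).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def candidate_mask(prev_gray, gray, bgr):
    # Motion cue: thresholded absolute frame difference.
    motion = cv2.absdiff(prev_gray, gray)
    _, motion = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)
    # Skin-color cue: a coarse HSV range (illustrative values only).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    # Candidate region = motion AND skin, grown a little by dilation.
    mask = cv2.bitwise_and(motion, skin)
    return cv2.dilate(mask, np.ones((15, 15), np.uint8))

def detect_faces(prev_gray, gray, bgr):
    mask = candidate_mask(prev_gray, gray, bgr)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                       # ignore tiny candidate regions
            continue
        roi = gray[y:y + h, x:x + w]
        for fx, fy, fw, fh in face_cascade.detectMultiScale(roi, 1.1, 4):
            boxes.append((x + fx, y + fy, fw, fh))
    return boxes
```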

Level of Complete Knowledge on Five Moments of Hand Hygiene among Nurses Working at Integrated Nursing Care Service Wards (간호간병통합서비스 병동 간호사의 손위생 시점에 대한 완전지식 수준)

  • Kim, Eunhee;Jeong, Ihn Sook
    • Journal of Korean Academy of Nursing / v.51 no.4 / pp.454-464 / 2021
  • Purpose: This study aimed to identify the level of complete knowledge about hand hygiene indications among nurses working at integrated nursing care service wards. Methods: A total of 127 nurses in eight integrated nursing care service wards completed structured sheets while observing a video based on six scenarios developed by the research team. Complete knowledge level was calculated as the percentage (%) of participants who responded correctly to all questions. Complete knowledge levels for each scenario were calculated and compared across general characteristics using the chi-squared test or Wilcoxon rank-sum test. Results: The complete knowledge level for each scenario ranged from 7.9% (scenario 6) to 42.5% (scenarios 4 and 5), and no one had complete knowledge of all scenarios. Only 3.1% of participants demonstrated complete knowledge in more than four scenarios, and 26.0% had complete knowledge of four or more hand hygiene moments. Complete knowledge level per scenario did not differ depending on work experience at hospitals and study wards, or prior hand hygiene training in the last year. Conclusion: As the complete knowledge level regarding hand hygiene moments is very low, it is suggested that regular hand hygiene training should be provided to nurses using video media that reflect real nursing tasks, so that they can acquire complete knowledge of when hand hygiene is and is not needed during complex nursing work situations.
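
As a minimal illustration of the Methods described above, the sketch below computes a per-scenario "complete knowledge" percentage and compares two groups with a chi-squared test; the response matrix and group labels are made-up placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 1 = all questions in a scenario answered correctly, 0 = otherwise (placeholder data).
answers = np.array([[1, 0, 1, 1, 0],      # participant 1, scenarios 1..5
                    [0, 0, 1, 0, 0],
                    [1, 1, 1, 1, 1],
                    [0, 1, 0, 1, 0]])
trained = np.array([1, 0, 1, 0])          # prior hand-hygiene training (1 = yes)

# Complete knowledge level per scenario: % of participants with all-correct answers.
print("complete knowledge per scenario (%):", answers.mean(axis=0) * 100)

# Compare complete knowledge on scenario 1 between trained and untrained groups.
table = [[(answers[trained == g, 0] == 1).sum(),
          (answers[trained == g, 0] == 0).sum()] for g in (1, 0)]
chi2, p, _, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")
```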

Visual Communications Over Broadband Packet Network (광대역 패킷 망에서의 영상통신)

  • 이상훈
    • The Journal of Korean Institute of Communications and Information Sciences / v.14 no.5 / pp.521-530 / 1989
  • Broadband ATM (Asynchronous Transfer Mode) networking techniques based on lightwave technology and high-speed integrated circuits appear to be the transport technology of choice for broadband ISDN. Among other problems, the issue of video transport over broadband packet (ATM) networks still requires further investigation. In this paper, the problems of transporting video signals over a broadband packet network are investigated together with possible solutions. In particular, clock recovery, packet loss compensation, and a transport technique based on a hierarchical video coding scheme are described in detail. This allows efficient bandwidth sharing with minimal degradation in video quality.

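The hierarchical-coding idea in the abstract can be illustrated with a toy simulation: base-layer cells are carried with priority and assumed delivered, enhancement-layer cells may be dropped under congestion, and quality degrades gracefully. The layer split and loss model below are arbitrary assumptions, not the paper's transport scheme; only the 48-byte cell payload matches ATM.

```python
import random

def packetize(frame_bytes, base_ratio=0.4, cell_payload=48):
    # Split one coded frame into a protected base layer and a droppable
    # enhancement layer, then cut both into fixed-size cells (48-byte ATM payloads).
    split = int(len(frame_bytes) * base_ratio)
    base, enh = frame_bytes[:split], frame_bytes[split:]
    cells = [("base", base[i:i + cell_payload]) for i in range(0, len(base), cell_payload)]
    cells += [("enh", enh[i:i + cell_payload]) for i in range(0, len(enh), cell_payload)]
    return cells

def transmit(cells, enh_loss_prob=0.1):
    # Base cells are sent with high priority and assumed delivered;
    # enhancement cells may be discarded by the network under congestion.
    return [c for c in cells if c[0] == "base" or random.random() > enh_loss_prob]

frame = bytes(20000)                               # stand-in for one coded frame
received = transmit(packetize(frame))
delivered = sum(len(payload) for _, payload in received)
print(f"delivered {delivered}/{len(frame)} bytes; the base layer is always intact")
```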

Required Video Analytics and Event Processing Scenario at Large Scale Urban Transit Surveillance System (도시철도 종합감시시스템에서 요구되는 객체인식 기능 및 시나리오)

  • Park, Kwang-Young;Park, Goo-Man
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.11 no.3 / pp.63-69 / 2012
  • In this paper, we introduce the design of an intelligent surveillance camera system and typical event-processing scenarios for urban transit. To analyze video, we studied events that frequently occur in surveillance camera systems. Event-processing scenarios are designed for seven representative situations in urban transit (designated area intrusion, object abandonment, object removal in a designated area, object tracking, loitering, and congestion measurement). Our system is optimized for low hardware complexity, real-time processing, and scenario-dependent solutions.
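
One of the listed scenarios, designated-area intrusion, reduces to a point-in-polygon test on tracked object positions. The sketch below assumes a hypothetical tracker output of (track_id, x, y) foot points and a hand-drawn restricted polygon; it is not the paper's implementation.

```python
def point_in_polygon(x, y, poly):
    # Standard ray-casting test: count crossings of a ray to the right of (x, y).
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Restricted zone near the platform edge (made-up pixel coordinates).
RESTRICTED = [(100, 400), (300, 400), (300, 600), (100, 600)]

def intrusion_events(tracks):
    """tracks: iterable of (track_id, x, y) foot positions for one frame."""
    return [tid for tid, x, y in tracks if point_in_polygon(x, y, RESTRICTED)]

print(intrusion_events([(1, 150, 450), (2, 50, 50)]))   # -> [1]
```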

Design of Video Quality Assurance and Integrated Quality Management System using No Reference QoE (비 참조 QoE를 이용한 영상품질 측정 및 통합품질 관리 시스템의 설계)

  • Kim, Sang-Soo;Park, Dong-Soo
    • The Journal of Information Technology / v.12 no.3 / pp.49-57 / 2009
  • This paper provides perceptual metrics for video quality based on properties of the human visual system, and for audio quality based on human audition. All metrics work without reference signals, allowing non-intrusive, in-service measurement. A simple and easy-to-learn user interface displays the metrics and saves them in popular file formats such as CSV. The proposed method enables varied and accurate measurement of multimedia-service video quality, and it can be applied to establish service guidelines as well as the measurement methods and systems needed to standardize high-quality video services.

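The abstract does not specify the perceptual metrics, so the sketch below substitutes two generic no-reference proxies, variance-of-Laplacian sharpness and a simple 8x8 blockiness estimate, computed per frame and written to CSV as the abstract mentions. The file names are placeholders and the proxies are textbook stand-ins, not the paper's metrics.

```python
import csv
import cv2
import numpy as np

def sharpness(gray):
    # Variance of the Laplacian: a common no-reference sharpness proxy.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def blockiness(gray, block=8):
    # Mean absolute luminance jump across vertical 8x8 block boundaries.
    g = gray.astype(np.float64)
    right = g[:, block::block]                         # first column of each block
    left = g[:, block - 1::block][:, :right.shape[1]]  # last column of the block before
    return float(np.abs(right - left).mean())

cap = cv2.VideoCapture("input.mp4")                    # placeholder file name
with open("quality.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "sharpness", "blockiness"])
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        writer.writerow([idx, sharpness(gray), blockiness(gray)])
        idx += 1
cap.release()
```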

Enhancing Video Storyboarding with Artificial Intelligence: An Integrated Approach Using ChatGPT and Midjourney within AiSAC

  • Sukchang Lee
    • International Journal of Advanced Culture Technology / v.11 no.3 / pp.253-259 / 2023
  • The incorporation of AI into video storyboard creation has increased recently. Traditionally, producing storyboards requires significant time, cost, and specialized expertise; integrating AI can amplify the efficiency of storyboard creation and enhance storytelling. In Korea, AiSAC stands at the forefront of AI-driven storyboard platforms, with the capability to generate realistic images built on open-dataset foundations. Yet a notable limitation is the difficulty of intricately conveying a director's vision within the storyboard. To address this challenge, we propose applying the image-generation features of ChatGPT and Midjourney to AiSAC. Through this research, we aim to enhance the efficiency of storyboard production and to refine the intricacy of expression, thereby facilitating advancements in the video production process.
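
The AiSAC integration itself is not public, so the sketch below only illustrates the general two-step workflow the abstract implies: expand a director's scene note into a detailed image prompt with a chat model, then generate a storyboard frame from it. The OpenAI client stands in for both ChatGPT and Midjourney (which has no public API), and the model names and prompt wording are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def storyboard_prompt(scene_note: str) -> str:
    # Step 1: expand the director's short scene note into a detailed
    # visual description suitable for an image generator.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite the scene note as one detailed storyboard "
                        "image prompt: camera angle, lighting, mood, composition."},
            {"role": "user", "content": scene_note},
        ],
    )
    return chat.choices[0].message.content

def storyboard_frame(scene_note: str) -> str:
    # Step 2: generate the frame from the expanded prompt.
    image = client.images.generate(model="dall-e-3",
                                   prompt=storyboard_prompt(scene_note),
                                   size="1024x1024")
    return image.data[0].url

print(storyboard_frame("Night alley, detective finds a dropped phone."))
```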

Design and Implementation of Content-based Video Database using an Integrated Video Indexing Method (통합된 비디오 인덱싱 방법을 이용한 내용기반 비디오 데이타베이스의 설계 및 구현)

  • Lee, Tae-Dong;Kim, Min-Koo
    • Journal of KIISE:Computing Practices and Letters / v.7 no.6 / pp.661-683 / 2001
  • With the rapid increase in the use of digital video information in recent years, it has become more important to manage video databases efficiently. The development of high-speed data networks and digital techniques has given rise to new multimedia applications, such as internet broadcasting and Video On Demand (VOD), that combine video data processing and computing. A video database should be constructed for fast, efficient search and should extract accurate feature information from video whose characteristics are increasingly massive and complex. There are essential differences between video databases and traditional databases; these differences raise new issues in video search and data modeling, and lead us to consider new database generation methods and efficient video retrieval methods. In this paper, we propose a construction and generation method for a content-based video database that can accumulate the meaningful structure of video together with prior production information, and using this method we implemented a video database that can produce new content for internet broadcasting centered on the video database. For this purpose, we propose a video indexing method that integrates annotation-based retrieval and content-based retrieval, extracting and retrieving feature information from the video data by using the relationship between the meaningful structure and the prior production information during video parsing and representative key-frame extraction. The integrated video indexing method improves the performance of video content retrieval because it simultaneously uses content-based metadata, represented at the low level of video, and annotation-based metadata, expressed at the high level, where feature information is difficult to extract.

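A minimal sketch of what an "integrated index" record could look like: each keyframe carries both content-based (low-level feature) metadata and annotation-based (production) metadata, so one query can filter on both at once. The schema and the brightness heuristic are illustrative assumptions, not the paper's data model.

```python
from dataclasses import dataclass, field

@dataclass
class KeyFrame:
    shot_id: int
    timestamp: float                     # seconds from the start of the video
    color_histogram: list[float]         # content-based, low-level feature
    annotations: dict[str, str] = field(default_factory=dict)  # annotation-based, high-level

@dataclass
class VideoRecord:
    title: str
    production_info: dict[str, str]      # prior production information
    keyframes: list[KeyFrame] = field(default_factory=list)

    def search(self, keyword: str, min_bright_mass: float) -> list[KeyFrame]:
        # Integrated retrieval: an annotation keyword AND a content-based filter
        # (here, enough mass in the brightest histogram bins; a stand-in feature test).
        return [kf for kf in self.keyframes
                if keyword in " ".join(kf.annotations.values())
                and sum(kf.color_histogram[-8:]) >= min_bright_mass]

video = VideoRecord("news-2001-06-01", {"producer": "demo-producer", "genre": "news"})
video.keyframes.append(KeyFrame(1, 12.5, [0.0] * 24 + [0.05] * 8,
                                {"scene": "anchor desk, studio"}))
print(video.search("studio", min_bright_mass=0.2))
```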

Methods for Video Caption Extraction and Extracted Caption Image Enhancement (영화 비디오 자막 추출 및 추출된 자막 이미지 향상 방법)

  • Kim, So-Myung;Kwak, Sang-Shin;Choi, Yeong-Woo;Chung, Kyu-Sik
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.235-247 / 2002
  • For efficient indexing and retrieval of digital video data, research on video caption extraction and recognition is required. This paper proposes methods for extracting artificial captions from video data and enhancing their image quality for accurate Hangul and English character recognition. In the proposed methods, we first find the locations of the beginning and ending frames of the same caption content and combine the multiple frames in each group by a logical operation to remove background noise. During this process, an evaluation is performed to detect integrated results that contain different caption images. After the multiple video frames are integrated, four image enhancement techniques are applied: resolution enhancement, contrast enhancement, stroke-based binarization, and morphological smoothing. Applying these operations to the video frames improves the image quality even for phonemes with complex strokes. Finding the beginning and ending locations of frames with the same caption content can also be used effectively for digital video indexing and browsing. We tested the proposed methods on video caption images from films containing both Hangul and English characters and obtained improved character recognition results.
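
The frame-integration and enhancement steps described above translate into a short OpenCV pipeline: threshold each frame that carries the same caption, AND-combine the masks to suppress the moving background, then upscale, binarize with Otsu, and smooth morphologically. The thresholds and the choice of a bitwise AND are illustrative assumptions, not the paper's exact parameters.

```python
import cv2
import numpy as np

def integrate_caption_frames(gray_frames):
    # Bright caption pixels persist across frames; the moving background does not,
    # so an AND over per-frame masks keeps the caption and drops the noise.
    masks = [cv2.threshold(g, 180, 255, cv2.THRESH_BINARY)[1] for g in gray_frames]
    combined = masks[0]
    for m in masks[1:]:
        combined = cv2.bitwise_and(combined, m)
    return combined

def enhance_caption(mask):
    # Resolution enhancement, then binarization and morphological smoothing.
    big = cv2.resize(mask, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    _, binary = cv2.threshold(big, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Placeholder input: five grayscale crops of the same caption region.
frames = [np.random.randint(0, 255, (80, 320), np.uint8) for _ in range(5)]
caption_mask = enhance_caption(integrate_caption_frames(frames))
print(caption_mask.shape)   # (160, 640)
```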

LED Driver Solution for Backlighting large TFT-LCD Panels with Adaptive Power Control & Video Synchronization

  • Dhayagude, Tushar;Dilip, S;Santo, Hendrik;Vi, Kien;Chen, Sean;Kim, Min-Jong;Schindler, Matt;Ghoman, Ran
    • Korean Information Display Society: Conference Proceedings / 2008.10a / pp.1487-1490 / 2008
  • mSilica developed a scalable integrated circuit solution for driving multiple arrays of LEDs to backlight TFT-LCD panels. The drivers incorporate adaptive power control of the DC-DC power supply powering the LEDs to improve efficiency, while synchronizing PWM dimming with the video timing signals VSYNC and HSYNC to reduce motion blur.

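A toy numeric sketch of the two ideas in the abstract: adaptive power control raises the DC-DC rail only as high as the worst-case LED string requires (forward voltage plus current-sink headroom), and PWM dimming is phase-aligned to the VSYNC period. All voltages, string counts, and timing values below are made up for illustration.

```python
def adaptive_rail_voltage(string_forward_voltages, headroom=0.5):
    # Raise the boost output only as high as the neediest LED string requires,
    # so the linear current sinks dissipate minimal excess power.
    return round(max(string_forward_voltages) + headroom, 1)

def pwm_on_window(vsync_period_ms, duty, phase=0.0):
    # Dimming pulse aligned to the video frame: (start offset, width) in ms.
    return phase * vsync_period_ms, duty * vsync_period_ms

strings = [31.8, 32.4, 31.5, 32.1]        # e.g. 10 LEDs per string, ~3.2 V each
print("boost output:", adaptive_rail_voltage(strings), "V")
print("PWM window at 60 Hz, 40% duty:", pwm_on_window(1000 / 60, 0.40))
```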

Integrated Management System for Vehicle CCTV Video Using Reverse Tunneling (리버스 터널링을 이용한 차량용 CCTV 영상 통합 관리 시스템)

  • Yang, Sun-Jin;Park, Jae-Pyo;Yang, Seung-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.5 / pp.19-24 / 2019
  • The development of ICT has had a huge impact on the existing closed CCTV security equipment market. With the importance of video data particularly highlighted in areas such as self-driving cars, unmanned vehicles, and smart cities, various technologies using video are emerging. In this paper, we propose a method to transmit video and metadata as part of smart-city integration and to address the traffic, environmental, and security problems of urban life by utilizing the metadata, rather than using CCTV video for simple recording purposes; a reverse-tunneling technique is designed and implemented as a method for accessing vehicle CCTV video from remote locations. Integrated management of vehicle CCTV video and metadata, which have previously been used only for limited purposes in closed environments, will enable the real-time operation of the integrated centers that smart cities require, covering tasks such as vehicle status checks, road conditions, and facility management.
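
The reverse-tunneling idea in the abstract, in which the in-vehicle unit opens an outbound connection so a remote center can reach a camera sitting behind NAT or a mobile network, can be sketched with an ordinary SSH reverse port forward. The relay host, port numbers, and the use of SSH itself are placeholders for illustration, not the paper's protocol.

```python
import subprocess

RELAY_HOST = "relay.example.com"   # hypothetical relay operated by the integrated center
RELAY_PORT = 19005                 # relay-side port assigned to this vehicle
LOCAL_RTSP = 554                   # in-vehicle CCTV RTSP service

def open_reverse_tunnel():
    # The vehicle initiates the connection: ssh -R asks the relay to forward
    # RELAY_PORT back through this outbound link to the local camera port.
    # -N means no remote command, just keep the forwarding alive.
    cmd = ["ssh", "-N",
           "-R", f"{RELAY_PORT}:localhost:{LOCAL_RTSP}",
           f"vehicle@{RELAY_HOST}"]
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    tunnel = open_reverse_tunnel()
    # Operators can reach rtsp://RELAY_HOST:RELAY_PORT/ once the relay's sshd
    # permits non-loopback remote forwards (GatewayPorts yes).
    tunnel.wait()
```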