• Title/Summary/Keyword: Shot Detection

Effective Shot Boundary Detection Using Multiple Sliding Windows (다중 슬라이딩 윈도우들을 이용한 효과적인 샷 경계 검출 방법)

  • Min, Hyun-Seok;Jin, Sung-Ho;Ro, Yong-Man
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.15-18 / 2006
  • The shot is the minimum unit for segmenting a video by content, so shot boundary detection is an essential step in analyzing video content information. Shot transitions fall into two types: abrupt transitions and gradual transitions. Because a gradual transition unfolds over several frames, existing shot boundary detection algorithms detect it by comparing frames separated by a fixed interval using a sliding window. The conventional sliding-window method uses only a single window of fixed size to detect gradual transitions. In that case, if the sliding window is shorter than the gradual transition, the transition is missed; if the window is longer than a shot, the shot itself is falsely detected as a gradual transition. To address these problems, this paper proposes a shot boundary detection method that applies multiple sliding windows of different sizes together with an adaptive threshold.

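As a rough illustration of the multi-window idea in the entry above, the sketch below compares frames separated by several window lengths and flags spans whose histogram change exceeds an adaptive threshold. The histogram distance, the window sizes, and the mean-plus-k-sigma threshold rule are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def hist_diff(f1, f2, bins=32):
    """Chi-square-like distance between the grayscale histograms of two frames."""
    h1, _ = np.histogram(f1, bins=bins, range=(0, 255), density=True)
    h2, _ = np.histogram(f2, bins=bins, range=(0, 255), density=True)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9))

def detect_gradual_transitions(frames, windows=(4, 8, 16), k=3.0):
    """Flag frame indices where any sliding window sees a large cumulative change.

    frames  : sequence of grayscale frames (2-D uint8 arrays)
    windows : candidate window lengths in frames; several sizes cover
              gradual transitions of different durations
    k       : adaptive-threshold factor (mean + k * std of the differences)
    """
    n = len(frames)
    boundaries = set()
    for w in windows:
        if n <= w:
            continue
        # difference between the frames at the two ends of each window position
        d = np.array([hist_diff(frames[i], frames[i + w]) for i in range(n - w)])
        # adaptive threshold from the statistics of this window size
        thr = d.mean() + k * d.std()
        boundaries.update(int(i) + w // 2 for i in np.flatnonzero(d > thr))
    return sorted(boundaries)
```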

Classification of C.elegans Behavioral Phenotypes Using Shape Information (형태적 특징 정보를 이용한 C.Elegans의 개체 분류)

  • Jeon, Mi-Ra;Nah, Won;Hong, Seung-Bum;Baek, Joong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.7C / pp.712-718 / 2003
  • C. elegans is often used to study gene function, but it is difficult to distinguish C. elegans mutants by human observation. To solve this problem, a system that can classify the mutant types automatically using computer vision is being studied. In previous work [1], we described the preprocessing method for the automated classification system. In this paper, we introduce shape features that can be extracted from an acquired image. We divide the features into two categories, related to the size and the posture of the worm, and describe each feature mathematically. We validate the shape information experimentally and use a hierarchical clustering algorithm for classification. The results show that the four mutant types used in the experiment can be classified with a success rate of over 90%.
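
The classification step described above can be pictured with a minimal sketch: hierarchical (agglomerative) clustering applied to per-worm shape feature vectors. The feature names, the toy values, and the Ward linkage below are assumptions made purely for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Hypothetical shape features per worm: [length, width, area, eccentricity]
features = np.array([
    [1.10, 0.08, 0.070, 0.97],
    [1.05, 0.09, 0.072, 0.96],
    [0.70, 0.12, 0.065, 0.88],
    [0.72, 0.11, 0.066, 0.89],
])

# Standardize so that no single feature dominates the distance metric
X = zscore(features, axis=0)

# Agglomerative (hierarchical) clustering with Ward linkage
Z = linkage(X, method="ward")

# Cut the dendrogram into a fixed number of clusters (e.g. mutant types)
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```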

Video Indexing using Motion vector and brightness features (움직임 벡터와 빛의 특징을 이용한 비디오 인덱스)

  • 이재현;조진선
    • Journal of the Korea Society of Computer and Information / v.3 no.4 / pp.27-34 / 1998
  • In this paper we present a method for automatic video indexing and retrieval based on motion vectors and brightness. We extract a representative frame (R-frame) from each shot and compute motion-vector and brightness-based features. For each R-frame we compute the optical flow field and derive motion-vector features from it; a block matching algorithm (BMA) is used to find the motion vectors, and the brightness features are related to cut detection using a brightness histogram. A video database provides content-based access to video, which is achieved by organizing or indexing the video data on a set of features. In this paper the feature index is based on a B+ search tree consisting of internal and leaf nodes stored on a direct-access storage device. The paper also defines the problem of video indexing based on video data models.

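A minimal sketch of the block matching algorithm (BMA) mentioned above, assuming an exhaustive search with a sum-of-absolute-differences criterion; the block size and search range are illustrative choices, not the paper's parameters.

```python
import numpy as np

def block_matching(prev, curr, block=16, search=8):
    """Estimate motion vectors between two frames by exhaustive block matching.

    prev, curr : consecutive grayscale frames as 2-D float arrays
    block      : block size in pixels
    search     : search range (+/- pixels) around each block
    Returns an array of (dy, dx) vectors, one per block of the current frame.
    """
    H, W = prev.shape
    vectors = []
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            ref = curr[y:y + block, x:x + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        cand = prev[yy:yy + block, xx:xx + block]
                        sad = np.abs(ref - cand).sum()  # sum of absolute differences
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            vectors.append(best_v)
    return np.array(vectors)
```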

The Influence of Topic Exploration and Topic Relevance On Amplitudes of Endogenous ERP Components in Real-Time Video Watching (실시간 동영상 시청시 주제탐색조건과 주제관련성이 내재적 유발전위 활성에 미치는 영향)

  • Kim, Yong Ho;Kim, Hyun Hee
    • Journal of Korea Multimedia Society / v.22 no.8 / pp.874-886 / 2019
  • To delve into the semantic gap problem of automatic video summarization, we focused on endogenous ERP responses at around 400 ms and 600 ms after the onset of an audio-visual stimulus. Our experiment included two factors: the topic exploration condition (Topic Given vs. Topic Exploring) as a between-subject factor and the topic relevance of the shots (Topic-Relevant vs. Topic-Irrelevant) as a within-subject factor. In the Topic Given condition, 22 subjects were shown 6 short historical documentaries with their video titles and written summaries, while in the Topic Exploring condition, 25 subjects were asked instead to explore the topics of the same videos with no given information. EEG data were gathered while the subjects watched the videos in real time. It was hypothesized that the cognitive activity of exploring the topic of a video while watching individual shots increases the amplitude of the endogenous ERP at around 600 ms after the onset of topic-relevant shots, and that the amplitude of the endogenous ERP at around 400 ms after the onset of topic-irrelevant shots is lower in the Topic Given condition than in the Topic Exploring condition. A repeated-measures MANOVA test showed that both hypotheses were supported.

Pose Estimation and Image Matching for Tidy-up Task using a Robot Arm (로봇 팔을 활용한 정리작업을 위한 물체 자세추정 및 이미지 매칭)

  • Piao, Jinglan;Jo, HyunJun;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.16 no.4 / pp.299-305 / 2021
  • In this study, the robotic tidy-up task is to arrange the current environment exactly like a target image. To perform a tidy-up task with a robot, it is necessary to estimate the poses of various objects and to classify them. Pose estimation usually requires the CAD model of an object, but such models are not available for most objects in daily life. Therefore, this study proposes an algorithm that uses point clouds and PCA to estimate the poses of objects in cluttered environments without the help of CAD models. In addition, objects are usually detected using a deep learning-based object detector, but this approach can recognize only the learned objects and may take a long time to train. This study therefore proposes an image matching method based on few-shot learning and a Siamese network. Experiments showed that the proposed method can be effectively applied to the robotic tidy-up system, achieving a success rate of 85% in the tidy-up task.
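
The CAD-model-free pose estimation described above can be sketched roughly as PCA on a segmented point cloud: the centroid gives the position and the principal axes give an orientation. This is only an illustration of the general idea, not the authors' exact algorithm.

```python
import numpy as np

def estimate_pose_pca(points):
    """Estimate a rough object pose from a segmented point cloud using PCA.

    points : (N, 3) array of 3-D points belonging to a single object
    Returns (centroid, axes) where the columns of axes are the principal
    directions of the cloud, ordered from largest to smallest variance.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Eigen-decomposition of the covariance matrix gives the principal axes
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]        # sort by descending variance
    axes = eigvecs[:, order]
    # Enforce a right-handed frame so the result is a valid rotation matrix
    if np.linalg.det(axes) < 0:
        axes[:, -1] *= -1
    return centroid, axes
```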

Development of Gravitational Wave Detection Technology at KASI (한국천문연구원의 중력파 검출기술 개발)

  • Lee, Sungho;Kim, Chang-Hee;Park, June Gyu;Kim, Yunjong;Jeong, Ueejeong;Je, Soonkyu;Seong, Hyeon Cheol;Han, Jeong-Yeol;Ra, Young-Sik;Gwak, Geunhee;Yoon, Youngdo
    • The Bulletin of The Korean Astronomical Society / v.46 no.1 / pp.37.1-37.1 / 2021
  • For the first time in Korea, we are developing technology for gravitational wave (GW) detectors as a major R&D program. Our main research target is quantum noise reduction technology, which can enhance the sensitivity of a GW detector beyond the limit set by classical physics. The technique of generating a squeezed vacuum state of light (SQZ) can suppress the quantum noise (shot noise at higher frequencies and radiation pressure noise at lower frequencies) of laser-interferometer GW detectors. Squeezing technology has recently come into use in GW detectors and is becoming a necessary and key component. Our ultimate goal is to participate in and contribute to international collaborations for the upgrade of existing GW detectors and the construction of next-generation GW detectors. This presentation summarizes our results in 2020 and our plans for the upcoming years. Technical details will be presented in other family talks.


Named Entity Detection Using Generative AI for Personal Information-Specific Named Entity Annotation Conversation Dataset (개인정보 특화 개체명 주석 대화 데이터셋 기반 생성AI 활용 개체명 탐지)

  • Yejee Kang;Li Fei;Yeonji Jang;Seoyoon Park;Hansaem Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.499-504 / 2023
  • In this study, to improve the efficiency of accurate personal-information detection and de-identification at a time when the risk of leakage and misuse of sensitive personal information is rising, we developed a named-entity scheme specialized for personal-information categories. We built 4,981 sets of dialogue data annotated with the personal-information tag set and performed personal-information named-entity detection experiments using a generative AI model. For the experiments, optimal prompts were designed and the detection results were evaluated through few-shot learning. A comparative analysis of the constructed dataset and an English-based personal-information annotation dataset showed higher detection performance for unique identification numbers on the dataset built in this study, demonstrating the necessity and quality of the dataset.

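A minimal sketch of few-shot prompting for personal-information entity detection, in the spirit of the entry above. The tag names, example utterances, and output format are hypothetical, and the resulting prompt string would be sent to whichever generative model is used.

```python
# Hypothetical few-shot examples; the paper's actual tag set and prompt
# design are not reproduced here.
FEW_SHOT_EXAMPLES = [
    ("My phone number is 010-1234-5678.",
     '[{"entity": "PHONE_NUMBER", "text": "010-1234-5678"}]'),
    ("I live at 12 Example Street, Seoul.",
     '[{"entity": "ADDRESS", "text": "12 Example Street, Seoul"}]'),
]

def build_few_shot_prompt(utterance: str) -> str:
    """Assemble a few-shot prompt that asks a generative model to return
    personal-information entities as a JSON list."""
    lines = ["Extract personal-information entities from the utterance and "
             "answer with a JSON list of {entity, text} objects.", ""]
    for text, answer in FEW_SHOT_EXAMPLES:
        lines += [f"Utterance: {text}", f"Entities: {answer}", ""]
    lines += [f"Utterance: {utterance}", "Entities:"]
    return "\n".join(lines)

print(build_few_shot_prompt("Please send it to hong@example.com."))
```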

A Case Study of Sea Bottom Detection Within the Expected Range and Swell Effect Correction for the Noisy High-resolution Air-gun Seismic Data Acquired off Yeosu (잡음이 포함된 여수근해 고해상 에어건 탄성파 탐사자료에 대한 예상 범위에서의 해저면 선정 및 너울영향 보정 사례)

  • Lee, Ho-Young
    • Geophysics and Geophysical Exploration / v.22 no.3 / pp.116-131 / 2019
  • In order to obtain high-quality, high-resolution marine seismic data, the survey needs to be carried out in very calm sea conditions. However, surveys are often performed with slight waves, which degrades the data quality. In this case, it is possible to improve the quality of the seismic data by detecting the exact location of the sea bottom signal and automatically eliminating the influence of waves or swells during data processing. However, if noise is present or the sea bottom signal is weakened by sea waves, sea bottom detection errors are likely to occur. In this study, we applied a method that reduces such errors by estimating the sea bottom location, setting a narrow detection range, and detecting the sea bottom within this range. The expected sea bottom location was calculated for each channel of the multi-channel data from previously detected sea bottom locations, and the expected location of each channel was also compared with and verified against those of the other channels in a shot gather. When this method was applied to the noisy 8-channel high-resolution air-gun seismic data acquired off Yeosu, errors caused by picking strong noise before the sea bottom or strong subsurface reflections after the sea bottom signal were remarkably reduced, and it was possible to produce a high-quality seismic section with a swell-effect correction of about 2.5 m.
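
The expected-range picking strategy described above might look roughly like the sketch below: predict the next sea-bottom sample from previous picks, search only a narrow window around that prediction, and fall back to the prediction when the signal is too weak. The prediction rule, window width, and amplitude threshold are illustrative assumptions, and the cross-channel verification within a shot gather is omitted.

```python
import numpy as np

def detect_sea_bottom(traces, prev_picks, half_range=10, threshold=0.3):
    """Pick the sea-bottom sample of each trace inside a narrow expected range.

    traces     : (n_traces, n_samples) array of amplitudes for one channel
    prev_picks : sea-bottom sample indices picked on earlier shots, used to
                 predict where the next pick should lie
    half_range : half-width (in samples) of the search window around the prediction
    threshold  : minimum absolute amplitude accepted as the sea-bottom arrival
    """
    picks = []
    expected = int(round(np.median(prev_picks)))     # predicted location
    for trace in traces:
        lo = max(0, expected - half_range)
        hi = min(len(trace), expected + half_range)
        window = np.abs(trace[lo:hi])
        i = int(np.argmax(window))
        if window[i] >= threshold:                   # confident pick inside the range
            pick = lo + i
        else:                                        # weak signal: keep the prediction
            pick = expected
        picks.append(pick)
        expected = pick                              # update the running expectation
    return np.array(picks)
```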

Monte Carlo Simulation based Optimal Aiming Point Computation Against Multiple Soft Targets on Ground (몬테칼로 시뮬레이션 기반의 다수 지상 연성표적에 대한 최적 조준점 산출)

  • Kim, Jong-Hwan;Ahn, Nam-Su
    • Journal of the Korea Society for Simulation / v.29 no.1 / pp.47-55 / 2020
  • This paper presents real-time autonomous computation of the number of shots and their aiming points against multiple soft targets on the ground by applying an unsupervised learning method, k-means clustering, together with Monte Carlo simulation. For this computation, a 100 × 200 m virtual battlefield is created in which an augmented enemy infantry platoon attacks, defends, and is scattered, and a virtual weapon with a lethal range of 15 m is modeled. To determine the damage state of the enemy unit (no damage, light wound, heavy wound, or death), Monte Carlo simulation is performed with the Carlton damage function applied to model the damage effect on the soft targets. In addition, to achieve damage effectiveness on the enemy units in line with the commander's intent, the optimal number of shots and the aiming point locations are calculated in less than 0.4 seconds by applying k-means clustering and repeated Monte Carlo simulation. It is hoped that this study will help develop a system that reduces the decision time of the 'detection-decision-shoot' process in battalion-scale combat units operating the Dronebot combat system.
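
A rough sketch of the computation described above: cluster the target positions with k-means to obtain one aim point per shot, then estimate the expected damage by Monte Carlo simulation of impact dispersion. The Gaussian-shaped damage function below stands in for the Carlton function used in the paper, and the dispersion value is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical scattered soft-target positions on a 100 m x 200 m field
targets = rng.uniform([0.0, 0.0], [100.0, 200.0], size=(40, 2))

def damage_probability(dist, lethal_radius=15.0):
    """Gaussian-shaped damage function (stand-in for the Carlton function)."""
    return np.exp(-0.5 * (dist / lethal_radius) ** 2)

def expected_casualties(aim_points, targets, dispersion=5.0, trials=1000):
    """Monte Carlo estimate of the expected number of incapacitated targets."""
    total = 0.0
    for _ in range(trials):
        # simulate ballistic dispersion of each impact around its aim point
        impacts = aim_points + rng.normal(0.0, dispersion, size=aim_points.shape)
        dists = np.linalg.norm(targets[:, None, :] - impacts[None, :, :], axis=2)
        # probability that each target is damaged by at least one shot
        p_hit = 1.0 - np.prod(1.0 - damage_probability(dists), axis=1)
        total += p_hit.sum()
    return total / trials

# Aim points = k-means centroids of the target positions, one per shot
for shots in (2, 3, 4):
    aims = KMeans(n_clusters=shots, n_init=10, random_state=0).fit(targets).cluster_centers_
    print(shots, "shots ->", round(expected_casualties(aims, targets), 1), "expected casualties")
```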

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin;Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.28-39 / 2011
  • While broadband multimedia technologies have been developing, the commercial market for digital contents has also been spreading widely. Above all, the digital cartoon market, such as internet cartoons, has grown rapidly, so video cartooning has been continuously researched because cartoons are scarce and lack variety. Until now, video cartooning research has focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is applied as a service. In this paper, we propose a new automatic frame extraction method for a video cartooning system. First, we separate the video and audio of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparing it with already trained audio data using a GMM classifier, so that speech regions can be identified. For the video, we extract candidate frames using a general scene change detection method such as the histogram method, and then select meaningful frames for the cartoon by applying face detection to the extracted frames. Finally, scene transition frames that contain faces within the speech regions are extracted automatically, yielding frames suitable for movie cartooning over continuous intervals of the time domain.
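
The audio branch of the pipeline above can be sketched as follows: per-frame MFCC and zero-crossing-rate features, one GMM trained per class (speech, music, speech+music), and each segment assigned to the class with the highest likelihood. librosa and scikit-learn are assumed here for convenience; the feature dimensions and mixture size are illustrative, not the paper's settings.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def frame_features(y, sr):
    """Per-frame MFCC + zero-crossing-rate features for an audio signal."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
    zcr = librosa.feature.zero_crossing_rate(y)          # (1, n_frames)
    n = min(mfcc.shape[1], zcr.shape[1])
    return np.vstack([mfcc[:, :n], zcr[:, :n]]).T        # (n_frames, 14)

def train_class_gmms(training_clips, sr, n_components=8):
    """Fit one GMM per audio class, e.g. 'speech', 'music', 'speech+music'.

    training_clips : dict mapping class label -> list of 1-D audio arrays
    """
    return {label: GaussianMixture(n_components=n_components, random_state=0)
                   .fit(np.vstack([frame_features(y, sr) for y in clips]))
            for label, clips in training_clips.items()}

def classify_segment(y, sr, gmms):
    """Assign a segment to the class whose GMM gives the highest mean log-likelihood."""
    X = frame_features(y, sr)
    return max(gmms, key=lambda label: gmms[label].score(X))
```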