• Title/Summary/Keyword: Semantic Event Detection


Video Event Detection according to Generating of Semantic Unit based on Moving Object (객체 움직임의 의미적 단위 생성을 통한 비디오 이벤트 검출)

  • Shin, Ju-Hyun; Baek, Sun-Kyoung; Kim, Pan-Koo
    • Journal of Korea Multimedia Society, v.11 no.2, pp.143-152, 2008
  • Many researchers are currently studying methodologies for expressing events to support semantic retrieval of video data. However, most existing work still relies either on annotation-based retrieval, in which each item is described by manual annotations, or on content-based retrieval over low-level features. We therefore propose a method that creates semantic motion units and extracts events through those units, enabling more semantic retrieval than existing approaches. First, we classify object motions by event unit. Second, we define a semantic unit for each classified motion. To apply these units to event extraction, we create rules that map low-level features to the units, from which semantic events can be retrieved at the level of a video shot. In an evaluation of semantic event extraction from video footage, the method achieved a precision of approximately 80%. (An illustrative sketch follows this entry.)

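The rule-based mapping from low-level motion features to semantic motion units, and then to a shot-level event, as described in the abstract above, could look roughly like the following sketch. It is not the authors' implementation: the feature fields, thresholds, unit names and event rules are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class MotionFeature:
    """Hypothetical low-level motion features for one tracked object in a shot."""
    speed: float          # average displacement per frame (pixels)
    direction_var: float  # variance of the motion direction (radians^2)
    duration: float       # seconds the object is tracked

def semantic_unit(f: MotionFeature) -> str:
    """Map low-level motion features to a semantic motion unit via simple rules."""
    if f.speed < 1.0:
        return "stop"
    if f.direction_var > 1.5:
        return "wander"
    if f.speed > 20.0:
        return "run"
    return "walk"

def shot_event(units: list[str]) -> str:
    """Combine the per-object units of one shot into a shot-level event label."""
    if "run" in units and "stop" in units:
        return "chase_and_stop"
    if all(u == "walk" for u in units):
        return "group_walking"
    return "generic_motion"

# Example: two tracked objects in one shot
shot = [MotionFeature(25.0, 0.2, 3.0), MotionFeature(0.5, 0.1, 3.0)]
print(shot_event([semantic_unit(f) for f in shot]))  # -> chase_and_stop
```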

Semantic Event Detection in Golf Video Using Hidden Markov Model (은닉 마코프 모델을 이용한 골프 비디오의 시멘틱 이벤트 검출)

  • Kim Cheon Seog; Choo Jin Ho; Bae Tae Meon; Jin Sung Ho; Ro Yong Man
    • Journal of Korea Multimedia Society, v.7 no.11, pp.1540-1549, 2004
  • In this paper, we propose an algorithm to detect semantic events in golf video using a Hidden Markov Model (HMM). The purpose is to identify and classify golf events to facilitate highlight-based video indexing and summarization. We first define four semantic events and then design an HMM whose states correspond to each event. We also use ten visual features based on MPEG-7 visual descriptors to estimate the HMM parameters for each event. Experimental results show that the proposed algorithm provides reasonable detection performance for a variety of golf events. (An illustrative sketch follows this entry.)

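A minimal sketch of the general approach, an HMM whose hidden states correspond to semantic events, decoded over per-shot feature vectors, is shown below using the hmmlearn library. The four event names, the random 10-dimensional features and the training sequences are placeholders, not the paper's MPEG-7 descriptors or data.

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

EVENTS = ["drive", "approach", "putt", "bunker"]  # hypothetical semantic events

# Placeholder training data: one 10-dimensional feature vector per shot,
# standing in for the MPEG-7 visual descriptors used in the paper.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))
lengths = [50, 50, 50, 50]  # four training sequences of 50 shots each

# One hidden state per semantic event, Gaussian emissions over the features.
model = hmm.GaussianHMM(n_components=len(EVENTS), covariance_type="diag", n_iter=50)
model.fit(X_train, lengths)

# Decode a new sequence of shots into the most likely event per shot.
X_new = rng.normal(size=(20, 10))
states = model.predict(X_new)
print([EVENTS[s] for s in states])
```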

Semantic Ontology Speech Recognition Performance Improvement using ERB Filter (ERB 필터를 이용한 시맨틱 온톨로지 음성 인식 성능 향상)

  • Lee, Jong-Sub
    • Journal of Digital Convergence, v.12 no.10, pp.265-270, 2014
  • Existing speech recognition algorithms have trouble distinguishing word order, voice detection becomes inaccurate when the noise environment changes, and retrieval systems return results that mismatch the user's request because keywords carry multiple meanings. This article proposes an event-based semantic ontology inference model in which speech recognition features are extracted with an ERB (equivalent rectangular bandwidth) filter. The proposed model was evaluated on train-station and train noise: noise removal was performed on signals at SNRs of -10 dB and -5 dB, and the distortion measures confirmed improvements of 2.17 dB and 1.31 dB, respectively. (An illustrative sketch follows this entry.)
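
A rough sketch of ERB-based feature extraction is given below: band centre frequencies are spaced on the ERB-rate scale (Glasberg-Moore formula) and per-band log-energies are summed from an FFT. This approximates the idea only; the band count, frame length and the crude rectangular bands are assumptions, not the paper's filter design.

```python
import numpy as np

def erb(f_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) at centre frequency f (Glasberg & Moore)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_scale(f_hz: float) -> float:
    """Map a frequency in Hz to the ERB-rate scale."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_center_freqs(fmin: float, fmax: float, n_bands: int) -> np.ndarray:
    """n_bands centre frequencies equally spaced on the ERB-rate scale."""
    e = np.linspace(erb_scale(fmin), erb_scale(fmax), n_bands)
    return (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

def erb_band_energies(frame: np.ndarray, sr: int, n_bands: int = 32) -> np.ndarray:
    """Crude ERB-band log-energies: sum |FFT|^2 within +-0.5 ERB of each centre."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    centers = erb_center_freqs(50.0, sr / 2.0, n_bands)
    energies = np.empty(n_bands)
    for i, fc in enumerate(centers):
        half_bw = erb(fc) / 2.0
        mask = (freqs >= fc - half_bw) & (freqs <= fc + half_bw)
        energies[i] = spec[mask].sum()
    return np.log(energies + 1e-10)  # log-energies as recognition features

# Example: features for one 25 ms frame of a 16 kHz signal
sr = 16000
frame = np.random.default_rng(0).normal(size=int(0.025 * sr))
print(erb_band_energies(frame, sr).shape)  # (32,)
```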

Analysis Framework using Process Mining for Block Movement Process in Shipyards (조선 산업에서 프로세스 마이닝을 이용한 블록 이동 프로세스 분석 프레임워크 개발)

  • Lee, Dongha; Bae, Hyerim
    • Journal of Korean Institute of Industrial Engineers, v.39 no.6, pp.577-586, 2013
  • In a shipyard, it is hard to predict block movement because of the uncertainty that accumulates over the long period of shipbuilding operations. For this reason, block movement is rarely scheduled, while main operations such as assembly, outfitting and painting are scheduled properly. Nonetheless, the high operating costs of block movement compel task managers to attempt its management. To resolve this dilemma, this paper proposes a new block movement analysis framework consisting of the following operations: understanding the entire process, log clustering to obtain manageable processes, discovering the process model, and detecting exceptional processes. The proposed framework applies fuzzy mining and trace clustering, two process mining techniques, to find the main processes and define process models easily. We also propose additional methodologies, including adjustment of the semantic expression level of process instances to obtain an interpretable process model, definition of each cluster's process model, and detection of exceptional processes. The effectiveness of the proposed framework was verified in a case study using real-world event logs generated from the Block Process Monitoring System (BPMS). (An illustrative sketch follows this entry.)
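
One ingredient of such a framework, trace clustering of block-movement event logs, could be prototyped as in the sketch below. The traces, activity names and the choice of agglomerative clustering over activity-frequency profiles are illustrative assumptions, not the specific techniques evaluated in the paper.

```python
from collections import Counter

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical block-movement traces: each trace is the sequence of areas a block visits.
traces = [
    ["cutting", "assembly", "outfitting", "painting", "erection"],
    ["cutting", "assembly", "painting", "erection"],
    ["cutting", "assembly", "stockyard", "outfitting", "painting", "erection"],
    ["cutting", "stockyard", "assembly", "painting", "stockyard", "erection"],
]

# Represent each trace by its activity-frequency profile (a "bag of activities").
activities = sorted({a for t in traces for a in t})
profiles = np.array([[Counter(t)[a] for a in activities] for t in traces], dtype=float)

# Hierarchical agglomerative clustering, cut into 2 groups of similar movement behaviour.
Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(range(len(traces)), labels)))  # trace index -> cluster id
```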

Detection of Syntactic and Semantic Anomaly in Korean Sentences: an ERP study (언어이해과정에서의 구문/의미요소 분리에 대한 ERP특성연구)

  • 김충명; 이경민
    • Proceedings of the Korean Society for Cognitive Science Conference, 2000.05a, pp.61-67, 2000
  • This study detected event-related potentials (ERPs) while participants read Korean sentences presented as text and judged two kinds of violations, morphosyntactic errors and argument-selection errors that violate subcategorization requirements in semantic argument binding, in order to examine the time course of the sentence comprehension process. Statistical analysis of each violation type across participants confirmed the ERP components reported in previous studies, from syntactic to semantic violation detection (ELAN, N400, P600), and also revealed characteristics specific to Korean sentence comprehension. These results support an individual characterization of the various grammatical violations that occur during silent reading, as well as a set of hypotheses about how they interrelate within the grammatical framework. Clarifying the neural mechanisms underlying sentence comprehension should also add a physiological foundation to the prospect of modelling human intelligence, and may serve as a further criterion for relating language comprehension to the topography of brain mechanisms.


SEMANTIC EVENT DETECTION FOR CONTENT-BASED HIGHLIGHT SUMMARY (내용 기반 하이라이트 요약을 위한 의미 있는 이벤트 검출)

  • Kim, Cheon-Seog; Bae, Beet-Nara; Thanh, Nguyen-Ngoc; Ro, Yong-Man
    • Proceedings of the Korea Information Processing Society Conference, 2002.11a, pp.73-76, 2002
  • This paper discusses a content-based method for detecting meaningful events for video highlight summarization. The proposed method consists of five stages, including video parsing, and multiple descriptors are used to extract low-level features and to detect events accurately. Computational complexity is reduced by using, for feature extraction, only those shots and key frames that provide hints for event detection. Each shot is assigned an element by a predefined inference method, and an event is composed by integrating the meanings of these shots. (An illustrative sketch follows this entry.)

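The idea of labelling each shot by a predefined inference step and then integrating consecutive labels into a single event might be sketched as follows; the descriptor fields, labels and rules are invented for illustration and are not the paper's five-stage pipeline.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class Shot:
    """Hypothetical per-shot descriptor values used as hints for event detection."""
    start: float            # seconds
    end: float
    motion_activity: float  # MPEG-7-style motion activity cue, 0..1
    dominant_green: float   # fraction of green pixels (field cue), 0..1

def label_shot(s: Shot) -> str:
    """Predefined inference: assign each shot a semantic element."""
    if s.dominant_green > 0.5 and s.motion_activity > 0.6:
        return "play"
    if s.motion_activity < 0.2:
        return "idle"
    return "other"

def merge_events(shots: list[Shot]) -> list[tuple[str, float, float]]:
    """Integrate consecutive shots with the same label into one event interval."""
    labelled = [(label_shot(s), s) for s in shots]
    events = []
    for label, group in groupby(labelled, key=lambda x: x[0]):
        group = list(group)
        events.append((label, group[0][1].start, group[-1][1].end))
    return events

shots = [Shot(0, 4, 0.7, 0.8), Shot(4, 9, 0.8, 0.6), Shot(9, 12, 0.1, 0.3)]
print(merge_events(shots))  # [('play', 0, 9), ('idle', 9, 12)]
```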

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook; Kim, Tae-Hwan; Choi, Joong-Min
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.47-60, 2012
  • Video data is unstructured and complex in form. As efficient management and retrieval of video data become more important, studies on video parsing based on the visual features contained in video content aim to reconstruct video data into a meaningful structure. Early work on video parsing focused on splitting video into shots, but shot boundaries are defined as physical boundaries, and detecting them does not consider the semantic associations within the video. Recently, studies that use clustering methods to organize semantically associated shots into video scenes, defined by semantic boundaries, have been actively pursued. Previous work on video scene detection applies clustering algorithms based on a similarity measure between shots that depends mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object appears. To solve these problems, this paper proposes the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which detects video scenes by clustering similar shots that make up the same event, based on visual features including the color histogram, corner edges and the object color histogram. SDCEO is notable in that it uses an edge feature together with the color feature and, as a result, effectively detects gradual as well as abrupt transitions. SDCEO consists of a Shot Bound Identifier and a Video Scene Detector. The Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO uses the color histogram, which records the percentage of each quantized color among all pixels in a frame and is chosen for its good performance as reported in other work on content-based image and video analysis; associated sequential frames are joined into shot boundaries by measuring the similarity of their color histograms. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature, comparing the corner edges of the last frame of the previous shot boundary and the first frame of the next. In the Key-frame Extraction step, SDCEO compares each frame with all frames in the same shot boundary using the histogram Euclidean distance and selects the frame most similar to all the others as the key frame. The Video Scene Detector then clusters associated shots that make up the same event using hierarchical agglomerative clustering over visual features including the color histogram and the object color histogram, and organizes the final video scenes by repeated clustering until the similarity distance between shot boundaries falls below a threshold h.
    We construct a prototype of SDCEO and carry out experiments on manually constructed baseline data; the results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory. (An illustrative sketch follows this entry.)
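
A compressed sketch of two SDCEO ingredients named above, histogram-based shot-boundary detection and agglomerative clustering of shots into scenes, is given below. The bin count, distance thresholds and synthetic frames are illustrative assumptions rather than the paper's configuration, and the corner-edge and object-color-histogram steps are omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def color_histogram(frame: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalised joint RGB histogram of a frame (H x W x 3, uint8)."""
    h, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins, bins, bins),
                          range=[(0, 256)] * 3)
    return (h / h.sum()).ravel()

def shot_boundaries(frames: list, threshold: float = 0.3) -> list:
    """Cut a shot wherever the histogram distance between consecutive frames is large."""
    cuts = [0]
    hists = [color_histogram(f) for f in frames]
    for i in range(1, len(frames)):
        if np.linalg.norm(hists[i] - hists[i - 1]) > threshold:
            cuts.append(i)
    return cuts  # index of the first frame of each shot

def cluster_shots_into_scenes(key_hists: np.ndarray, distance: float = 0.3) -> np.ndarray:
    """Group shots into scenes by agglomerative clustering over key-frame histograms."""
    Z = linkage(key_hists, method="average")
    return fcluster(Z, t=distance, criterion="distance")  # scene id per shot

# Example with synthetic frames: 10 dark frames followed by 10 bright frames.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 60, (32, 32, 3), dtype=np.uint8) for _ in range(10)]
bright = [rng.integers(180, 255, (32, 32, 3), dtype=np.uint8) for _ in range(10)]
print(shot_boundaries(dark + bright))  # -> [0, 10]

key_hists = np.array([color_histogram(dark[0]), color_histogram(dark[5]),
                      color_histogram(bright[0])])
print(cluster_shots_into_scenes(key_hists))  # e.g. [1 1 2]: the dark shots share a scene
```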

Semantic Event Detection and Summary for TV Golf Program Using MPEG-7 Descriptors (MPEG-7 기술자를 이용한 TV 골프 프로그램의 이벤트검출 및 요약)

  • 김천석; 이희경; 남제호; 강경옥; 노용만
    • Journal of Broadcast Engineering, v.7 no.2, pp.96-106, 2002
  • We introduce a novel scheme to characterize and index events in TV golf programs using MPEG-7 descriptors. Our goal is to identify and localize the golf events of interest to facilitate highlight-based video indexing and summarization. In particular, we analyze multiple low-level visual features with a domain-specific model to create a perceptual relation for identifying semantically meaningful, high-level events. Furthermore, we summarize a TV golf program with TV-Anytime segmentation metadata, a standard form of XML-based metadata description, in which the golf events are represented by temporally localized segments and segment groups of highlights. Experimental results show that the proposed technique provides reasonable performance for identifying a variety of golf events. (An illustrative sketch follows this entry.)
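
The final summarization step, collecting temporally localised event segments into a highlight segment group in the spirit of TV-Anytime segmentation metadata, might be represented as in the sketch below; the data model is a simplified invention for illustration, not the TV-Anytime schema.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A temporally localised event in the programme (times in seconds)."""
    event: str
    start: float
    end: float

@dataclass
class SegmentGroup:
    """A named group of segments, loosely mirroring a TV-Anytime segment group."""
    title: str
    segments: list = field(default_factory=list)

def build_highlights(segments: list, events_of_interest: set) -> SegmentGroup:
    """Collect the detected golf events of interest into one highlight group."""
    picked = [s for s in segments if s.event in events_of_interest]
    return SegmentGroup("Golf highlights", sorted(picked, key=lambda s: s.start))

detected = [Segment("drive", 12.0, 20.0), Segment("crowd", 20.0, 25.0),
            Segment("putt", 310.0, 318.0)]
highlights = build_highlights(detected, {"drive", "putt"})
print([(s.event, s.start, s.end) for s in highlights.segments])
```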