• Title/Summary/Keyword: Event Detection (이벤트 검출)


Frequency-Cepstral Features for Bag of Words Based Acoustic Context Awareness (Bag of Words 기반 음향 상황 인지를 위한 주파수-캡스트럴 특징)

  • Park, Sang-Wook; Choi, Woo-Hyun; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.33 no.4 / pp.248-254 / 2014
  • Among acoustic signal analysis tasks, acoustic context awareness is one of the most formidable in terms of complexity, since it requires sophisticated understanding of individual acoustic events. Conventional context awareness methods employ individual acoustic event detection or recognition to generate a decision about the impending context. However, this approach may perform poorly in practical situations because events can occur simultaneously and acoustically similar events are difficult to distinguish from each other. In particular, the babble noise occurring in a bus or subway environment may confuse the context awareness task, since babbling sounds similar in any environment. Therefore, this paper proposes a frequency-cepstral feature vector to mitigate this confusion in the binary situation awareness decision: bus or metro. Using a Support Vector Machine (SVM) as the classifier, the proposed feature vector scheme is shown to outperform the conventional scheme.
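
The pipeline described above (frame-level cepstral features, a bag-of-words representation, and an SVM classifier) can be sketched as below. This is a minimal illustration assuming MFCCs as a stand-in for the paper's frequency-cepstral features and using the librosa and scikit-learn APIs; the codebook size, kernel, and the hypothetical train_paths/train_labels inputs are not from the paper.

```python
# Minimal bag-of-words acoustic classification sketch (assumed, not the authors' exact setup).
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def frame_features(wav_path, sr=16000, n_mfcc=13):
    """Return per-frame cepstral features (frames x coefficients)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

def bow_histogram(frames, codebook):
    """Quantize frames against a learned codebook and return a normalized histogram."""
    words = codebook.predict(frames)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train(train_paths, train_labels, n_words=64):
    """train_paths / train_labels ("bus" or "metro") are hypothetical inputs."""
    all_frames = np.vstack([frame_features(p) for p in train_paths])
    codebook = KMeans(n_clusters=n_words, random_state=0).fit(all_frames)
    X = np.array([bow_histogram(frame_features(p), codebook) for p in train_paths])
    clf = SVC(kernel="rbf").fit(X, train_labels)
    return codebook, clf
```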

Development of simultaneous detection method for living modified cotton varieties MON757, MON88702, COT67B, and GHB811 (유전자변형 면화 MON757, MON88702, COT67B, GHB811의 동시검출법 개발)

  • Il Ryong Kim; Min-A Seol; A-Mi Yoon; Jung Ro Lee; Wonkyun Choi
    • Korean Journal of Environmental Biology / v.39 no.4 / pp.415-422 / 2021
  • Cotton is an important fiber crop, and its seeds are used as feed for dairy cattle. Crop biotechnology has been used to improve agronomic traits and quality in the agricultural industry. The frequent unintentional release of living modified (LM) cotton into the environment in South Korea is attributed to its increased use in the food, feed, and processing industries. To identify and monitor LM cotton, a method for detecting the LM cotton varieties approved in South Korea is required. In this study, we developed a method for the simultaneous detection of four LM cotton varieties: MON757, MON88702, COT67B, and GHB811. The genetic information of each LM event was obtained from the European Commission Joint Research Centre and the Animal and Plant Quarantine Agency. We designed event-specific primers to develop a multiplex PCR method for LM cotton and confirmed the specific amplification. Using a specificity assay, a random reference material (RM) mixture analysis, and the limit of detection (LOD), we verified the accuracy and specificity of the multiplex PCR method. Our results demonstrate that the method detects each event, and its specificity was validated against other LM RMs. The efficiency of the multiplex PCR was further verified using a random RM mixture. Based on the LOD, the method identified 25 ng of template DNA in a single reaction. In summary, we developed a multiplex PCR method for the simultaneous detection of four LM cotton varieties, with possible application in LM volunteer analysis.

Retrieval of Player Event in Golf Videos Using Spoken Content Analysis (음성정보 내용분석을 통한 골프 동영상에서의 선수별 이벤트 구간 검색)

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.28 no.7 / pp.674-679 / 2009
  • This paper proposes a method of player event retrieval that combines two functions: detection of the player name in speech information and detection of sound events in the audio information of golf videos. The system consists of an indexing module and a retrieval module. At indexing time, audio segmentation and noise reduction are applied to the audio stream demultiplexed from the golf videos. The noise-reduced speech is then fed into a speech recognizer, which outputs spoken descriptors. The player names and sound events are indexed by these spoken descriptors. At search time, the text query is converted into phoneme sequences. The lists for each query term are retrieved through a description matcher to identify full and partial phrase hits. For the retrieval of player names, this paper compares the results of word-based, phoneme-based, and hybrid approaches.
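
The full/partial phrase matching over phoneme sequences can be illustrated with a small sketch. It assumes the spoken descriptors are already available as phoneme lists; the function name, the scoring (longest contiguous match), and the example segment are illustrative assumptions, not the paper's description matcher.

```python
# Toy phoneme-level matcher: find the best full or partial hit of a query
# phoneme sequence inside one indexed spoken descriptor.
def phoneme_match(query_phones, indexed_phones):
    """Return (start, matched_length); a full hit has matched_length == len(query_phones)."""
    best = (-1, 0)
    q, n = len(query_phones), len(indexed_phones)
    for start in range(n):
        length = 0
        while (length < q and start + length < n
               and query_phones[length] == indexed_phones[start + length]):
            length += 1
        if length > best[1]:
            best = (start, length)
    return best

# Example with a hypothetical commentary segment and the query phonemes for "Tiger".
segment = "dh ax b ao l hh ih t b ay t ay g er w uh dz".split()
print(phoneme_match("t ay g er".split(), segment))  # -> (10, 4), a full hit
```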

The Design of a Complex Event Model for Effective Service Monitoring in Enterprise Systems (엔터프라이즈 시스템에서 효과적인 서비스 모니터링을 위한 복합 이벤트 모델의 설계)

  • Kum, Deuk-Kyu; Lee, Nam-Yong
    • The KIPS Transactions: Part D / v.18D no.4 / pp.261-274 / 2011
  • In today's competitive business environment, every enterprise has to be agile and flexible. To this end, run-time monitoring of the services an enterprise provides, and early decision making based on it, become a core competence of the enterprise. In addition, techniques that filter meaningful data from the innumerable events generated in enterprise systems are needed. However, existing studies either discover service faults by monitoring through the API of a BPEL engine or middleware, or merely process simple events based on low-level events, and are therefore limited in the useful business information they can provide. In this paper, an extended complex event model based on situation detection is presented, which can provide more valuable and useful business information. Concretely, an event processing architecture for enterprise systems is first proposed, and an event meta-model suited to the proposed architecture is defined. Based on the defined meta-model, the syntax and semantics of the constructs of our event processing language, including various progressive event operators, complex event patterns, and keys, are presented. In addition, an event context mechanism is proposed to analyze events in finer detail. Finally, application studies show the applicability of this work, and the merits of the proposed event model are demonstrated through comparison with other event models.
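
To make the notion of a complex event pattern concrete, the sketch below matches a keyed sequence of low-level events inside a time window. It is a minimal illustration in Python; the Event fields, the example pattern, and the window semantics are assumptions for exposition and do not reproduce the constructs of the proposed event processing language.

```python
# Minimal keyed-sequence complex event detector (illustrative, not the paper's model).
from dataclasses import dataclass, field

@dataclass
class Event:
    etype: str        # e.g. "OrderPlaced", "PaymentFailed"
    key: str          # correlation key, e.g. an order id
    timestamp: float  # seconds

@dataclass
class SequenceDetector:
    pattern: tuple    # ordered event types to match per key
    window: float     # max seconds between first and last event of a match
    partial: dict = field(default_factory=dict)  # key -> partially matched events

    def on_event(self, ev):
        """Feed one event; return the matched sequence when the pattern completes."""
        seq = self.partial.setdefault(ev.key, [])
        # Drop stale partial matches that fell out of the time window.
        if seq and ev.timestamp - seq[0].timestamp > self.window:
            seq.clear()
        if ev.etype == self.pattern[len(seq)]:
            seq.append(ev)
            if len(seq) == len(self.pattern):
                return self.partial.pop(ev.key)  # complex event detected
        return None

det = SequenceDetector(("OrderPlaced", "PaymentFailed", "PaymentFailed"), window=60.0)
for e in [Event("OrderPlaced", "o-1", 0.0),
          Event("PaymentFailed", "o-1", 10.0),
          Event("PaymentFailed", "o-1", 20.0)]:
    if det.on_event(e):
        print("complex event detected for key", e.key)
```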

Soccer Video Highlight Building Algorithm using Structural Characteristics of Broadcasted Sports Video (스포츠 중계 방송의 구조적 특성을 이용한 축구동영상 하이라이트 생성 알고리즘)

  • 김재홍; 낭종호; 하명환; 정병희; 김경수
    • Journal of KIISE: Software and Applications / v.30 no.7_8 / pp.727-743 / 2003
  • This paper proposes an automatic highlight building algorithm for soccer video that exploits a structural characteristic of broadcasted sports video: an interesting (or important) event (such as a goal or foul) is followed by a replay shot surrounded by gradual shot change effects such as wipes. This shot editing rule is used to analyze the structure of broadcasted soccer video and to extract the shots involving important events in order to build a highlight. The algorithm first uses the spatio-temporal image of the video to detect wipe transition effects and zoom out/in shot changes, which are then used to detect the replay shots. However, using the spatio-temporal image alone to detect wipe transitions requires too much computation, and the algorithm must be changed whenever the wipe pattern changes. To solve these problems, a two-pass detection algorithm and a pixel sub-sampling technique are proposed in this paper. Furthermore, to detect the zoom out/in shot changes and replay shots more precisely, the green-area ratio and the motion energy are also computed in the proposed scheme. Finally, highlight shots composed of event and player shots are extracted using the pre-detected replay shots and zoom out/in shot change points. The proposed algorithm will be useful for web or broadcasting services that require abstracted soccer video.
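
Two of the low-level cues mentioned above, the green-area ratio and the motion energy, can be computed roughly as in the sketch below. The HSV thresholds for "field green" and the use of OpenCV are illustrative assumptions, not the paper's calibrated values.

```python
# Rough per-frame cues for soccer video analysis (assumed thresholds, not the paper's).
import cv2
import numpy as np

def green_area_ratio(frame_bgr, lo=(35, 60, 60), hi=(85, 255, 255)):
    """Fraction of pixels whose HSV values fall inside an assumed 'field green' range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    return float(np.count_nonzero(mask)) / mask.size

def motion_energy(prev_gray, curr_gray):
    """Mean absolute intensity difference between consecutive grayscale frames."""
    return float(np.mean(cv2.absdiff(prev_gray, curr_gray)))

# In the spirit of the abstract, shots around detected wipe transitions whose
# green-area ratio and motion energy change markedly would be candidate
# replay / zoom out-in segments.
```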

Head Mouse System Based on A Gyro and Opto Sensors (각속도 및 광센서를 이용한 헤드 마우스)

  • Park, Min-Je; Yoo, Jae-Ha; Kim, Soo-Chan
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.4 / pp.70-76 / 2009
  • We propose a device that controls a computer mouse with only head movements and eye blinks, so that people disabled by car or other accidents can use a computer. The mouse position is estimated from a gyro sensor that measures head movements, and mouse events such as click/double-click are generated from opto sensors that detect eye blinks. The sensors are mounted on goggles so as not to obstruct the visual field. In the experiment evaluating spatial movements and event detection, there was no difference in movement speed between our device and a general mouse, but the proposed mouse required 3~4 times longer because of its lower accuracy. Using a non-linear relative pointing method with dead zones, we eliminated the cumbersome need to periodically cancel the accumulated error and made the mouse intuitive to control. The event detection circuitry for the optical sensors was designed to remove the influence of ambient light changes, so it was not affected by changes in the external light source.
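
The non-linear relative pointing rule with a dead zone can be sketched as a simple mapping from gyro angular rate to a cursor step. The threshold, gain, and exponent below are illustrative assumptions, not the values used in the paper.

```python
# Non-linear relative pointing with a dead zone (illustrative parameters).
def angular_rate_to_delta(omega_dps, dead_zone=2.0, gain=0.6, exponent=1.5):
    """Convert one axis of angular rate (deg/s) to a relative cursor step (px)."""
    if abs(omega_dps) < dead_zone:
        return 0.0  # ignore small drift so gyro error does not accumulate
    magnitude = abs(omega_dps) - dead_zone
    step = gain * (magnitude ** exponent)  # slow for fine pointing, fast for large moves
    return step if omega_dps > 0 else -step

# Example: a slow head turn barely moves the cursor, a fast turn moves it far.
print(angular_rate_to_delta(3.0), angular_rate_to_delta(40.0))
```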

Human Activity Recognition using Model-based Gaze Direction Estimation (모델 기반의 시선 방향 추정을 이용한 사람 행동 인식)

  • Jung, Do-Joon; Yoon, Jeong-Oh
    • Journal of Korea Society of Industrial Information Systems / v.16 no.4 / pp.9-18 / 2011
  • In this paper, we propose a method that recognizes human activity in an indoor environment using model-based gaze direction estimation. The method consists of two steps. First, we detect a head region and estimate its gaze direction as prior information for the human activity recognition. We use color and shape information to detect the head region, and a Bayesian network model representing the relationship between the head and the face to estimate the gaze direction. Second, we recognize the events and scenarios that describe the human activity. We use changes of human state for event recognition and a rule-based method that combines events with constraints for scenario recognition. We define four types of scenarios related to gaze direction. We show the performance of the gaze direction estimation and the human activity recognition through experimental results.
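
The rule-based part of the second step can be illustrated with a small sketch: events are emitted from changes of the tracked state, and a scenario fires when a rule over those events (here, with a gaze constraint) is satisfied. The state fields, event names, and the single example rule are hypothetical, not the paper's scenario definitions.

```python
# Toy rule-based event and scenario recognition (hypothetical states and rule).
def state_change_events(prev_state, curr_state):
    """Emit simple events when tracked state attributes change."""
    events = []
    if prev_state["moving"] and not curr_state["moving"]:
        events.append("stop")
    if not prev_state["moving"] and curr_state["moving"]:
        events.append("start_moving")
    if prev_state["gaze"] != curr_state["gaze"]:
        events.append(f"gaze_to_{curr_state['gaze']}")
    return events

def recognize_scenario(event_log):
    """Example rule: 'inspect object' = stop, then gaze turns to the object soon after."""
    for i, ev in enumerate(event_log):
        if ev == "stop" and "gaze_to_object" in event_log[i + 1:i + 4]:
            return "inspect_object"
    return None

prev = {"moving": True, "gaze": "front"}
curr = {"moving": False, "gaze": "object"}
log = state_change_events(prev, curr)   # ['stop', 'gaze_to_object']
print(recognize_scenario(log))          # -> 'inspect_object'
```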