• Title/Summary/Keyword: 장면 분석 (Scene Analysis)

Search Results: 507

Consistency of ANS Responses Induced by Emotions and Emotion-Specific ANS Responses (정서에 의해 유발된 자율신경계 반응의 일관성 및 정서-특정적 자율신경계 반응 패턴 확인)

  • 이경화;장은혜;석지아;손진훈;방석원;김경환;이미희
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2001.11a / pp.104-111 / 2001
  • Many studies have recently been conducted on adults concerning the relationship between emotion and physiological (autonomic nervous system, ANS) responses. In this study, the same participants took part in repeated experiments over a period of time to examine the consistency of ANS responses to emotions (happiness, sadness, anger, fear, disgust) and emotion-specific ANS response patterns. Prior to the main experiment, a set of emotion-eliciting stimuli and an emotion rating scale for assessing psychological responses to emotion were developed. Each stimulus set consisted of five video clips, one scene of 2-4 minutes per emotion. Through a pilot experiment, four sets with appropriateness and effectiveness above 70% were selected and used in the main experiment. The main experiment was conducted four times with 12 male and female university students. After viewing each emotional scene, participants rated the elicited emotion. The measured ANS variables were ECG, PPG, EDA, and SKT. The results showed that, in the psychological ratings, the stimulus sets had appropriateness and effectiveness above 75%. Analysis of the physiological responses (ECG, EDA) showed that ANS responses to emotions were consistent across sessions and that each emotion had a specific physiological response pattern.


2D-3D Conversion Method Based on Scene Space Reconstruction (장면의 공간 재구성 기법을 이용한 2D-3D 변환 방법)

  • Kim, Myungha;Hong, Hyunki
    • The Journal of the Korea Contents Association / v.14 no.7 / pp.1-9 / 2014
  • Previous 2D-3D conversion methods that generate 3D stereo images from a 2D sequence involve labor-intensive procedures in their production pipelines. This paper presents an efficient 2D-3D conversion system based on reconstructing the scene structure from an image sequence. The proposed system reconstructs a scene space and produces 3D stereo images with texture re-projection. Experimental results show that the proposed method can generate precise 3D content based on scene structure information. Using the proposed reconstruction tool, the stereographer can collaborate efficiently with other workers in the production pipeline for 3D content production.
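
The abstract above describes producing stereo views by re-projecting texture from a reconstructed scene space. As a rough illustration only, the sketch below synthesizes a left/right pair from a single view plus a per-pixel depth map (a stand-in for the reconstructed geometry); the disparity model, baseline, focal length, and function names are assumptions, not the paper's pipeline.

```python
# Minimal sketch: synthesize a stereo pair from one view plus per-pixel depth.
# The disparity model and all constants are illustrative assumptions.
import numpy as np

def reproject_stereo(image, depth, baseline=0.06, focal=700.0):
    """Shift pixels horizontally by a disparity derived from depth."""
    h, w = depth.shape
    disparity = (baseline * focal) / np.maximum(depth, 1e-6)  # in pixels
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(round(disparity[y, x] / 2))
            if 0 <= x - d < w:
                left[y, x - d] = image[y, x]
            if 0 <= x + d < w:
                right[y, x + d] = image[y, x]
    # Disocclusion holes would need inpainting in a real pipeline.
    return left, right
```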

Effective Scene Change Detection Method for Multimedia Data as Video Images using Mean Squared Error (평균오차를 이용한 멀티미디어 동영상 데이터를 위한 효율적인 장면전환 검출)

  • Jung, Chang-Ryul;Koh, Jin-Gwang;Lee, Joon
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.6 / pp.951-957 / 2002
  • When retrieving large volumes of video image data, it is necessary to provide summarized frame lists for indexing and for replaying at the exact point the user wants to retrieve. We apply the Mean Squared Error method to pixel values sampled along the diagonal of each frame. The RGB values of the pixels extracted from each frame are stored in matrix form, and a frame is reported as a scene change point if the compared values of two frames meet a certain condition. We also implement the algorithm and provide a way to grasp the entire structure of a video and its scene change points. Finally, we analyze the results and show that our method performs better than the compared methods.
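
As a rough illustration of the approach described above, the sketch below samples RGB values along each frame's diagonal, computes the mean squared error between consecutive frames, and reports a scene change when it exceeds a threshold; the threshold value and all names are assumptions rather than the paper's exact settings.

```python
# Minimal sketch of MSE-based cut detection on diagonal pixel samples.
# The threshold and function names are illustrative assumptions.
import numpy as np

def diagonal_signature(frame):
    """Collect RGB values along the (stepped) main diagonal of the frame."""
    h, w, _ = frame.shape
    n = min(h, w)
    idx = np.arange(n)
    return frame[idx * (h // n), idx * (w // n)].astype(np.float64)

def detect_scene_changes(frames, threshold=900.0):
    """Indices where the MSE between consecutive diagonal signatures is large."""
    cuts = []
    prev = diagonal_signature(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = diagonal_signature(frame)
        if np.mean((cur - prev) ** 2) > threshold:
            cuts.append(i)
        prev = cur
    return cuts
```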

Automatic Parsing of MPEG-Compressed Video (MPEG 압축된 비디오의 자동 분할 기법)

  • Kim, Ga-Hyeon;Mun, Yeong-Sik
    • The Transactions of the Korea Information Processing Society / v.6 no.4 / pp.868-876 / 1999
  • In this paper, an efficient automatic video parsing technique for MPEG-compressed video, which is fundamental for content-based indexing, is described. The proposed method detects scene changes regardless of the IPB picture composition. To detect abrupt changes, difference measures based on the DC coefficients in I pictures and the macroblock reference features in P and B pictures are utilized. For gradual scene changes, we use the macroblock reference information in P and B pictures. Scene change detection can be handled efficiently by extracting only the necessary data without fully decoding the MPEG sequence. The performance of the proposed algorithm is analyzed in terms of precision and recall, and the experimental results verify the effectiveness of the method for detecting scene changes in various MPEG sequences.
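
A minimal sketch of the I-picture part of the measure described above: compare DC-coefficient "thumbnails" of consecutive I pictures without full decoding. Extracting the DC coefficients from the bitstream (and the P/B macroblock-reference handling) is outside this sketch, and the threshold and names are assumptions.

```python
# Minimal sketch of an I-picture difference measure on DC-coefficient images.
# DC extraction from the MPEG bitstream is assumed to be done elsewhere;
# the threshold is an illustrative assumption.
import numpy as np

def dc_difference(dc_prev, dc_curr):
    """Mean absolute difference between two DC-coefficient images."""
    return np.mean(np.abs(dc_curr.astype(np.float64) - dc_prev.astype(np.float64)))

def detect_cuts_from_dc(dc_images, threshold=20.0):
    """Flag an abrupt change between I pictures whose DC images differ strongly."""
    return [i for i in range(1, len(dc_images))
            if dc_difference(dc_images[i - 1], dc_images[i]) > threshold]
```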


Speech Segmentation using Weighted Cross-correlation in CASA System (계산적 청각 장면 분석 시스템에서 가중치 상호상관계수를 이용한 음성 분리)

  • Kim, JungHo;Kang, ChulHo
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.5 / pp.188-194 / 2014
  • The feature extraction mechanism of a CASA (Computational Auditory Scene Analysis) system uses time continuity and frequency-channel similarity to compose a correlogram of auditory elements. In segmentation, a binary mask is built with a cross-correlation function, where a mask value of 1 (speech) indicates the same periodicity and synchronization. However, when there is a delay between autocorrelation signals with the same periodicity, they are still judged to be speech, which is a drawback. In this paper, we propose an algorithm that improves the discrimination of channel similarity by using a weighted cross-correlation in segmentation. We evaluated the speech segregation performance of the CASA system in background-noise environments (siren, machine, white, car, crowd) at SNRs of 5 dB and 0 dB, and compared the proposed algorithm with the conventional one. The proposed algorithm improved performance by 2.75 dB at an SNR of 5 dB and by 4.84 dB at an SNR of 0 dB.
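
The sketch below illustrates the general idea of judging channel similarity from a weighted cross-correlation of adjacent channels' normalized autocorrelations; the particular weighting (a decaying lag weight) and the threshold are assumptions, not the paper's formulation.

```python
# Minimal sketch: channel-similarity mask from a weighted cross-correlation of
# adjacent channels' normalized autocorrelations. The lag weighting and the
# 0.95 threshold are assumptions for illustration only.
import numpy as np

def normalized_autocorr(x, max_lag):
    x = x - np.mean(x)
    ac = np.array([np.sum(x[:len(x) - k] * x[k:]) for k in range(max_lag)])
    return ac / (ac[0] + 1e-12)

def weighted_cross_correlation(a, b, weights):
    """Weighted correlation coefficient between two autocorrelation curves."""
    num = np.sum(weights * a * b)
    den = np.sqrt(np.sum(weights * a * a) * np.sum(weights * b * b)) + 1e-12
    return num / den

def channel_similarity_mask(channels, max_lag=128, threshold=0.95):
    """Mark adjacent channel pairs whose weighted correlation is high."""
    weights = np.exp(-np.arange(max_lag) / max_lag)  # de-emphasise long lags
    acs = [normalized_autocorr(ch, max_lag) for ch in channels]
    return [weighted_cross_correlation(acs[i], acs[i + 1], weights) > threshold
            for i in range(len(acs) - 1)]
```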

Face Detection Method Based on Color Constancy and Geometrical Analysis (색 항등성과 기하학적 분석 기반 얼굴 검출 기법)

  • Lee, Woo-Ram;Hwang, Dong-Guk;Jun, Byoung-Min
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.59-66 / 2011
  • In this paper, we propose a face detection method based on color constancy and geometrical analysis. To handle the varying colors of skin under scene illuminants, a color constancy method is applied to the input image, and geometrical analysis is used to detect face regions. First, face and hair candidates are extracted from the color-corrected image and classified by geometrical criteria. Face candidates whose intersection with hair candidates exceeds a certain size are then selected as faces. The Caltech Face DB was used to compare the performance of our method, and performance under scene illuminants was evaluated with images containing illumination effects. The experimental results show that the proposed face detection method is applicable to various facial images because of its high true-positive and low false-negative ratios.
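
As a rough illustration of the first two stages described above, the sketch below applies gray-world color constancy and then a crude RGB skin rule to obtain face-candidate pixels; the choice of gray world, the skin thresholds, and the names are assumptions, and the paper's geometric face/hair analysis is not reproduced.

```python
# Minimal sketch: gray-world color constancy followed by a crude skin-color
# candidate mask. The gray-world choice and the RGB skin rules are assumptions;
# the geometric face/hair analysis of the paper is not included.
import numpy as np

def gray_world(image):
    """Scale each channel so its mean matches the global mean."""
    img = image.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / (means + 1e-6)), 0, 255).astype(np.uint8)

def skin_candidates(image):
    """Very rough RGB skin rule applied after color correction."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)
```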

A Study on Frequency, Type, and Context of Violence in School-Life Webtoon (학원물 웹툰에 나타난 폭력의 양태와 맥락에 대한 내용분석)

  • Kim, Youn-jong;Mun, Anna
    • The Journal of the Korea Contents Association / v.20 no.1 / pp.245-258 / 2020
  • The study analyzed the frequency, type, and context of violence in 10 school-life webtoons published on the Korean portal site Naver. Content analysis showed 2.15 PATs (Perpetrator-Action-Target) per episode. As for the types of violence, physical violence accounted for 73.2 percent of PATs. As for character traits, 53.6 percent of those who committed violence were portrayed as good-looking; 35.9 percent of perpetrators were heroes and 37.3 percent were villains. Cases in which the perpetrator and the target were friends accounted for 60.8 percent. Cases in which violent scenes were trivialized through excessive expressions, overtures, and speech balloons accounted for 66.7 percent. The most common motive for violence was serving the interests and beliefs of individuals or groups (29.4 percent), followed by fun (20.9 percent). Cases with no punishment or compensation for the violence accounted for 79.9 percent.

Implementation of Video-Forensic System for Extraction of Violent Scene in Elevator (엘리베이터 내의 폭행 추출을 위한 영상포렌식 시스템 구현)

  • Shin, Kwang-Seong;Shin, Seong-Yoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.10 / pp.2427-2432 / 2014
  • The Color-$\chi^2$ histogram is used as a method for scene change detection. It extracts violent scenes in an elevator and can therefore be used for real-time surveillance of criminal acts; the extracted scenes can also be used to secure evidence discovered afterwards and to support the analysis process. Video forensics is defined as research on methods to efficiently analyze evidence from crime-related visual images in the field of digital forensics. The color-histogram difference method computes the difference of the RGB histograms of two frames. Our paper uses the Color-$\chi^2$ histogram, which combines the merits of the color histogram and the $\chi^2$ histogram, to efficiently extract violent scenes in an elevator, and applies a threshold to the Color-$\chi^2$ histogram to find key frames. To increase the probability of discerning whether a scene is genuinely violent, we use statistical judgments over 20 sample videos.
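
A minimal sketch of the histogram-difference core described above: a chi-square distance between per-channel color histograms of consecutive frames, with frames above a threshold flagged as key frames; the bin count and threshold are illustrative assumptions.

```python
# Minimal sketch: chi-square distance between per-channel color histograms of
# consecutive frames, flagging key frames above a threshold. Bin count and
# threshold are illustrative assumptions.
import numpy as np

def color_histogram(frame, bins=16):
    """Concatenated, normalized per-channel histograms."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def chi_square_distance(h1, h2):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))

def key_frames(frames, threshold=0.3):
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if chi_square_distance(hists[i - 1], hists[i]) > threshold]
```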

A Case Study of Fluid Simulation in the Film 'Sector 7' (사례연구: 영화 '7광구'의 유체 시뮬레이션)

  • Kim, Sun-Tae;Lee, Jeong-Hyun;Kim, Dae-yeong;Park, Yeong-Su;Jang, Seong-Ho;Hong, Jeong-Mo
    • Journal of the Korea Computer Graphics Society / v.18 no.3 / pp.17-27 / 2012
  • In this paper, we describe a case study of the film 'Sector 7', which was produced with fluid simulation technologies. For the CG scenes in the movie that include highly detailed fluid motion, we used the smoothed particle hydrodynamics (SPH) technique to express the subtle movement of seawater from a crashed huge tank, and a hybrid simulation method of particles and level sets to describe water bursting from a submarine's broken canopy. We also used the detonation shock dynamics (DSD) technique for detailed flame simulations to produce a burning monster, the film's main character; here, the divergence-free vortex particle method was applied to preserve the incompressibility of the fluid. In addition, we used an upsampling method for more efficient production. As a result, we were able to produce high-quality visual effects using domestic technologies.
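
For readers unfamiliar with SPH, the sketch below shows the standard density summation with the poly6 kernel, only to illustrate the particle-based approach named in the abstract; the particle mass, smoothing radius, and brute-force neighbor search are simplifications, not the production setup used for the film.

```python
# Minimal sketch of the SPH density summation with the standard poly6 kernel.
# Particle mass, smoothing radius, and the O(N^2) neighbor search are
# illustrative simplifications only.
import numpy as np

def sph_density(positions, mass=0.02, h=0.1):
    """Density at every particle: rho_i = sum_j m * W_poly6(|x_i - x_j|, h)."""
    poly6 = 315.0 / (64.0 * np.pi * h ** 9)
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = np.sum(diff ** 2, axis=-1)
    w = np.where(r2 < h * h, poly6 * (h * h - r2) ** 3, 0.0)
    return mass * np.sum(w, axis=1)
```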

Design and Implementation of the Perception Mechanism for the Agent in the Virtual World (가상 세계 거주자의 지각 메커니즘 설계 및 구현)

  • Park, Jae-Woo;Jung, Geun-Jae;Park, Jong-Hee
    • The Journal of the Korea Contents Association / v.11 no.8 / pp.1-13 / 2011
  • In order to create an intelligent autonomous agent in a virtual world, we need a sophisticated design for perception, recognition, judgement, and behavior. We develop the perception and recognition functions for such an autonomous agent. Our perception mechanism identifies lines based on differences in color, the primitive visible data, and exploits those lines to grasp shapes and regions in the scene. We develop an inference algorithm that can recover the original shape from a damaged or partially hidden shape using its characteristics from the ontology, so that the perceived shape can be recognized intelligently. Several individually recognized 2D shapes and their spatial relations form 3D shapes, and those 3D shapes in turn constitute a scene. Each 3D shape occupies its respective region, and an agent analyzes the associated objects and relevant scenes to recognize things and phenomena. We also develop a mechanism by which an agent uses this recognition function to accumulate and use her knowledge of the scene in its historical context. We implement these functions for an example situation to demonstrate their sophistication and realism.
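
As a rough illustration of the first perception step described above (identifying lines from color differences), the sketch below marks pixels whose color difference to a neighboring pixel exceeds a threshold; the distance metric and threshold are assumptions, and the ontology-based shape inference is not covered.

```python
# Minimal sketch: candidate line (edge) pixels from color differences between
# neighboring pixels. The Euclidean RGB distance and threshold are assumptions.
import numpy as np

def color_edge_map(image, threshold=30.0):
    """Boolean map of pixels with a strong color difference to a neighbor."""
    img = image.astype(np.float64)
    dx = np.linalg.norm(img[:, 1:] - img[:, :-1], axis=-1)
    dy = np.linalg.norm(img[1:, :] - img[:-1, :], axis=-1)
    edges = np.zeros(img.shape[:2], dtype=bool)
    edges[:, :-1] |= dx > threshold
    edges[:-1, :] |= dy > threshold
    return edges
```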