• Title/Summary/Keyword: 시각장면 (visual scene)


Dynamic Scene Management for Hi-Fi Out-of-Core Terrain Rendering (고 충실도의 대규모 지형 렌더링을 위한 장면처리 기법)

  • 김상희;원광연
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.10d
    • /
    • pp.430-432
    • /
    • 2002
  • Advances in satellite technology are accelerating the generation of large-scale, high-resolution terrain data, and the demand for realistic terrain depiction keeps growing, so high-fidelity rendering techniques that process massive terrain data efficiently are essential. Based on a quadtree structure of terrain cells, this study proposes efficient scene-management techniques: fast view-range culling that takes terrain characteristics into account, continuous level-of-detail and geomorphing techniques that minimize visual popping, a frame-uniformity technique that renders to a constant polygon count, and texture management.

  • PDF
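The quadtree culling and continuous-LOD selection the abstract describes can be sketched roughly as follows. The node layout, the 0.5 refinement ratio, and the radius-based cull are illustrative assumptions, not the paper's actual data structures, and geomorphing between levels is omitted:

```python
import math

# Hypothetical quadtree of square terrain cells; each non-leaf splits into 4 children.
class Node:
    def __init__(self, x, z, size, depth, max_depth):
        self.x, self.z, self.size = x, z, size
        self.children = [] if depth == max_depth else [
            Node(x + dx * size / 2, z + dz * size / 2, size / 2, depth + 1, max_depth)
            for dx in (0, 1) for dz in (0, 1)
        ]

def in_view(node, cam_x, cam_z, view_radius):
    # Crude view-range cull: keep cells whose center lies within the view radius
    # (padded by the cell size so border cells are not dropped).
    cx, cz = node.x + node.size / 2, node.z + node.size / 2
    return math.hypot(cx - cam_x, cz - cam_z) <= view_radius + node.size

def select_lod(node, cam_x, cam_z, view_radius, out):
    """Collect the cells to render: refine near the camera, coarsen far away."""
    if not in_view(node, cam_x, cam_z, view_radius):
        return  # culled
    cx, cz = node.x + node.size / 2, node.z + node.size / 2
    dist = math.hypot(cx - cam_x, cz - cam_z)
    # Refine while the cell is large relative to its distance (continuous LOD);
    # the 0.5 ratio is an arbitrary stand-in for a screen-space-error bound.
    if node.children and node.size / max(dist, 1e-6) > 0.5:
        for child in node.children:
            select_lod(child, cam_x, cam_z, view_radius, out)
    else:
        out.append(node)

root = Node(0, 0, 1024, 0, max_depth=5)
cells = []
select_lod(root, 10, 10, 600, cells)  # camera near the origin
```

In the paper's out-of-core setting, the selected cells would then be streamed from disk, with geomorphing between LOD levels to suppress popping and a polygon budget enforced for frame uniformity.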

Listenable Explanation for Heatmap in Acoustic Scene Classification (음향 장면 분류에서 히트맵 청취 분석)

  • Suh, Sangwon;Park, Sooyoung;Jeong, Youngho;Lee, Taejin
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.727-731
    • /
    • 2020
  • Analyzing why a neural network made a given prediction is necessary before the model can be trusted. In computer vision, interpretation methods have accordingly been proposed that visualize, as saliency maps or heatmaps, the evidence behind a model's prediction. In audio, however, visual interpretation on a spectrogram is not intuitive, and it is hard to understand which actual sounds drove the decision. This study therefore proposes a listening-based analysis system for heatmaps and, through listening experiments on the heatmaps of an acoustic scene classification model, examines whether human-understandable explanations of the network's predictions can be provided.

  • PDF
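The idea of making a heatmap "listenable" can be illustrated with a small sketch: weight the spectrogram by the heatmap and resynthesize the audio, so only the time-frequency regions the model attended to remain audible. The tone-plus-noise input and the band-shaped heatmap below are stand-ins; a real heatmap would come from an interpretation method such as Grad-CAM, upsampled to the spectrogram's resolution:

```python
import numpy as np
from scipy.signal import stft, istft

np.random.seed(0)

# Toy input: 1 s of a 440 Hz tone plus noise at 16 kHz (stand-in for a scene recording).
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(fs)

f, frames, Z = stft(x, fs=fs, nperseg=512)

# Hypothetical heatmap: relevance in [0, 1] per time-frequency bin, here simply
# highlighting the band around 440 Hz.
heat = np.zeros_like(np.abs(Z))
heat[(f > 300) & (f < 600), :] = 1.0

# "Listenable explanation": keep only the heatmap-weighted part of the
# spectrogram and resynthesize it, so a human can hear what the model used.
_, x_explained = istft(Z * heat, fs=fs, nperseg=512)
```

Playing `x_explained` back would let a listener judge whether the highlighted evidence is the sound they would themselves use to classify the scene.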

The neural mechanism of distributed and focused attention and their relation to statistical representation of visual displays (분산주의와 초점주의의 신경기제 및 시각 통계표상과의 관계)

  • Chong, Sang-Chul;Joo, Sung-Jun
    • Korean Journal of Cognitive Science
    • /
    • v.18 no.4
    • /
    • pp.399-415
    • /
    • 2007
  • Many objects are always present in a visual scene. Since the visual system has limited capacity to process multiple stimuli at a time, coping with this informational overload is one of the important problems in visual perception. This study investigated the suppressive interactions among multiple stimuli when attention was directed either to one of the stimuli or to all of them. The results indicate that suppressive interactions among multiple circles were reduced in V4 when subjects paid attention to one of the four locations, as compared to the unattended condition. However, suppressive interactions were not reduced when subjects paid attention to all four items as a set in order to compute their mean size. These results suggest that whereas focused attention serves to filter out irrelevant information, distributed attention provides an average representation of multiple stimuli.

  • PDF

Detecting Dissolve Cut for Multidimensional Analysis in an MPEG compressed domain : Using DCT-R of I, P Frames (MPEG의 다차원 분석을 통한 디졸브 구간 검출 : I, P프레임의 DCT-R값을 이용)

  • Heo, Jung;Park, Sang-Sung;Jang, Dong-Sik
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.3
    • /
    • pp.34-40
    • /
    • 2003
  • This paper presents a method for detecting dissolve shots in video scene-change detection within an MPEG compressed domain. The proposed algorithm uses the color-R DCT coefficients of I- and P-frames for fast operation, accurate detection, and a minimal decoding process in MPEG sequences, and detects dissolve shots through a three-dimensional visualization and analysis of the image so that a computer can recognize scene changes as easily and accurately as a human does. First, color-R DCT coefficients are obtained for 8*8 blocks and the features are summed row by row. Second, a four-step analysis is performed on the differences of these sums over the frame sequence. Experimental results showed that, by performing the four-step analysis, the algorithm achieves better detection performance, such as precision and recall, than the existing method that uses an average over the whole DC image. The algorithm has the advantages of speed, simplicity, and accuracy, and requires less storage.

  • PDF
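A toy version of the "sum the coefficients, then analyze the differences" idea: synthetic per-frame feature sums stand in for real DCT-R coefficients decoded from an MPEG stream, and a single monotone-run check stands in for the paper's four-step analysis. The intuition is that a dissolve produces a sustained run of small, same-signed differences, whereas a hard cut is a single large spike:

```python
import numpy as np

# Toy per-frame features: the summed R-channel DCT coefficients per frame.
# Frames 0-9 are shot A, frames 10-19 dissolve linearly into shot B, 20-29 are shot B.
feat = np.concatenate([np.full(10, 100.0),
                       np.linspace(100.0, 300.0, 10),
                       np.full(10, 300.0)])

def detect_dissolve(feat, min_len=5):
    """Return (start, end) frame ranges whose first differences stay small but
    consistently positive for at least min_len frames. (A full detector would
    also handle decreasing sums; the 1.0 threshold is arbitrary for the toy data.)"""
    d = np.diff(feat)
    runs, start = [], None
    for i, v in enumerate(d):
        if v > 1.0:                 # steadily increasing sums
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(d) - start >= min_len:
        runs.append((start, len(d)))
    return runs
```

On the toy sequence, `detect_dissolve(feat)` flags frames 10-19, the linear blend between the two shots.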

A Study on Frame Directing of 'Mononoke Princess' ('원령공주'의 장면 연출에 관한 연구 - 등장인물의 동태를 중심으로 -)

  • 오정석;윤호창
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.05a
    • /
    • pp.139-145
    • /
    • 2003
  • When a director composes a motion frame with a camera angle, he brings a framing intention to each frame of the animation. The characters who appear in a scene are the elements that most faithfully embody the writer's intended motion, and their movement plays an important role in understanding the work. In "Princess Mononoke", we study the characteristics of the characters' motion by analyzing the structure of psychological conflict in the animation.

  • PDF

Edit Method Using Representative Frame on Video (비디오에서의 대표 프레임을 이용한 편집기법)

  • 유현수;이지현
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.11a
    • /
    • pp.420-423
    • /
    • 1999
  • In this paper, we propose a method for obtaining information efficiently through easy and rapid editing and retrieval of video data. To support this, candidate representative frames are extracted using an existing scene-change detection method, the user selects representative frames for video segmentation as desired, and visualization indexing methods supported by logical links then enable users to freely merge and split each scene.

  • PDF

Detection of ROIs using the Bottom-Up Saliency Model for Selective Visual Attention (관심영역 검출을 위한 상향식 현저함 모델 기반의 선택적 주의 집중 연구)

  • Kim, Jong-Bae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.11a
    • /
    • pp.314-317
    • /
    • 2011
  • This paper proposes a method that automatically detects visually salient regions in an input image using a bottom-up saliency model. Like the human visual system, the proposed method interprets a scene from the spatial distribution of visual information without prior knowledge, applying the bottom-up saliency model to the input image to detect object regions of interest. As suggested by Treisman's feature-integration theory, visual attention concentrates on regions with a pronounced contrast in visual information, so regions of interest can be distinguished from the background. A three-dimensional saliency map is first generated from the input image through the saliency model, and an adaptive thresholding method is then applied to the map to detect the actual regions of interest. Applied to region-of-interest segmentation, the proposed method achieved a segmentation accuracy of about 88% and a precision of about 89%, showing that it is applicable to interest-based image segmentation systems.
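The adaptive-threshold step on the saliency map can be sketched as follows. The Gaussian-blob map and the factor of 2 on the mean are illustrative assumptions (a 2x-mean cut-off is a common adaptive choice in the saliency literature), not necessarily the threshold used in the paper:

```python
import numpy as np

np.random.seed(0)

# Toy saliency map: a bright Gaussian blob (the "salient object") on a dim background.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
saliency = np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / (2 * 6.0 ** 2))
saliency += 0.05 * np.random.rand(h, w)   # background clutter

# Adaptive threshold: a multiple of the map's mean rather than a fixed cut-off,
# so the threshold tracks the overall contrast of each image.
thresh = 2.0 * saliency.mean()
roi_mask = saliency >= thresh             # binary region-of-interest mask
```

Because the threshold scales with the map's mean, a low-contrast scene yields a proportionally lower cut-off instead of an empty mask, which is the point of making it adaptive.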

The Study on the Lighting Directing of Animation - Focusing on the Emotional Vocabulary that Appears in the 3D Animation Scene (애니메이션의 조명 연출에 대한 연구 - 3D 애니메이션 장면에서 나타나는 정서적 어휘를 중심으로)

  • Lee, Jong Han
    • Cartoon and Animation Studies
    • /
    • s.36
    • /
    • pp.349-374
    • /
    • 2014
  • Light is a language. Directors must configure scene components effectively so that each scene conveys their intention appropriately. Character acting, the layout of props, and scene lighting all enter into the scene composition, helping audiences understand the narrative of a work and the emotion its producer wants to convey. In particular, expressing emotion with light, by adjusting its color and contrast, lets the audience concentrate on the work and understand it naturally. This lighting technique appears clearly on early English theater stages and in Rembrandt's paintings. Properly dividing and controlling light dramatically heightens the aesthetic elements of a work and can express diverse emotions such as anxiety and fear; lighting can therefore evolve with the director's intent for the work. Through its basic functions of making objects recognizable and visualizing space and time, and through artistic methods, light deepens the audience's emotional involvement in a production. This study examines the role of lighting and how it is used to express emotion appropriate to the narrative of a work. Building on the previous study "Lighting Research of 3D Animated Film Applying Light Features to Express Emotion," we combine emotional vocabulary with emotion theory to classify the emotional language applicable to 3D animation, and then select the most emotional scenes from 3D animations to analyze how lighting was used to express emotion, showing the role of light through lighting methods matched to an emotional language. We expect research on directing light in 3D animation to express emotion to continue from this work.

Hydrodynamic scene separation from video imagery of ocean wave using autoencoder (오토인코더를 이용한 파랑 비디오 영상에서의 수리동역학적 장면 분리 연구)

  • Kim, Taekyung;Kim, Jaeil;Kim, Jinah
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2019
  • In this paper, we propose a hydrodynamic scene separation method for wave propagation from video imagery using an autoencoder. In coastal areas, image-analysis methods such as particle tracking and optical flow on video imagery are usually applied to measure ocean waves, owing to the difficulty of direct wave observation with sensors. However, external factors such as ambient light and weather conditions considerably hamper accurate wave analysis in coastal video imagery. The proposed method extracts hydrodynamic scenes by separating only the wave motions, minimizing the effect of ambient light during wave propagation. We visually confirmed that the hydrodynamic scenes are separated reasonably well from the ambient light and backgrounds in two video datasets acquired from a real beach and from wave-flume experiments. In addition, the latent representation of the original video imagery, learned by the variational autoencoder, was dominated by ambient light and backgrounds, while the hydrodynamic scenes of wave propagation were expressed well independently of these external factors.
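Since a linear autoencoder with a narrow bottleneck learns the same subspace as PCA, the separation idea can be sketched with PCA on a toy video in which a global lighting drift is superimposed on a traveling wave. The frame sizes, amplitudes, and the rank-1 "ambient" model are all illustrative assumptions; the paper's actual model is a (nonlinear) variational autoencoder trained on real footage:

```python
import numpy as np

np.random.seed(0)

# Toy "video": 50 frames of 32x32 pixels. Each frame is a global ambient-light
# level (slowly drifting over time) plus a traveling sinusoidal wave pattern.
T, H, W = 50, 32, 32
x = np.arange(W)
ambient = 0.5 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, T))        # lighting drift
frames = np.stack([a + 0.2 * np.sin(0.5 * x - 0.4 * t)[None, :].repeat(H, 0)
                   for t, a in zip(range(T), ambient)])

# PCA as a stand-in for a linear autoencoder: the dominant principal component
# of the frame matrix captures the global lighting variation, because it carries
# more energy than the wave pattern in this toy setup.
flat = frames.reshape(T, -1)
mean = flat.mean(axis=0)
u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
ambient_recon = mean + np.outer(u[:, 0] * s[0], vt[0])   # lighting component
hydro = (flat - ambient_recon).reshape(T, H, W)          # separated wave motion
```

The residual `hydro` keeps the traveling wave's spatial structure while its per-frame mean is nearly flat, mirroring the paper's finding that the latent code absorbs ambient light while the hydrodynamic scene carries the wave motion.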

Image Based Human Action Recognition System to Support the Blind (시각장애인 보조를 위한 영상기반 휴먼 행동 인식 시스템)

  • Ko, ByoungChul;Hwang, Mincheol;Nam, Jae-Yeal
    • Journal of KIISE
    • /
    • v.42 no.1
    • /
    • pp.138-143
    • /
    • 2015
  • In this paper, we develop a novel human action recognition system, based on communication between an ear-mounted Bluetooth camera and an action-recognition server, to aid scene recognition for the blind. First, when a blind user captures an image of a specific location with the ear-mounted camera, the captured image is transmitted to the recognition server via a smartphone synchronized with the camera. The recognition server sequentially performs human detection, object detection, and action recognition by analyzing human poses. The recognized action information is sent back to the smartphone, and the user hears it through text-to-speech (TTS). Experimental results with the proposed system showed 60.7% action-recognition performance on test data captured in indoor and outdoor environments.