• Title/Summary/Keyword: Scene Flow


Optical Flow Estimation Using the Hierarchical Hopfield Neural Networks (계층적 Hopfield 신경 회로망을 이용한 Optical Flow 추정)

  • 김문갑;진성일
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.3
    • /
    • pp.48-56
    • /
    • 1995
  • This paper presents a method for efficient optical flow estimation for dynamic scene analysis using hierarchical Hopfield neural networks. Given two consecutive images, Zhou and Chellappa proposed a Hopfield neural network for computing the optical flow. The major problem with their algorithm is that the network contains a self-feedback term, which forces one to check the energy change at every iteration and to accept an update only when a lower energy level is guaranteed; this is both undesirable and inefficient in a Hopfield network implementation. Another problem is that the model cannot compute the optical flow accurately when the disparities of the moving objects are large. This paper addresses both problems by modifying the structure of the network to satisfy the convergence condition of the Hopfield model and by introducing a hierarchical algorithm that enables the computation of optical flow even in the presence of large disparities.
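The central fix described in this abstract, removing the self-feedback term so the network meets the Hopfield convergence condition, can be illustrated with a minimal sketch. The toy weights and states below are illustrative only, not the paper's optical flow network: with a symmetric weight matrix and zero diagonal, asynchronous updates never increase the energy, so no per-iteration energy check is needed.

```python
# Minimal sketch (not the paper's network): a discrete Hopfield model with a
# symmetric weight matrix and zero diagonal (no self-feedback), the condition
# under which asynchronous updates never increase the energy
# E = -1/2 * sum_ij w[i][j] * s[i] * s[j].
def energy(w, s):
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def async_update(w, s):
    """One asynchronous sweep: each unit follows the sign of its input field."""
    for i in range(len(s)):
        h = sum(w[i][j] * s[j] for j in range(len(s)) if j != i)
        s[i] = 1 if h >= 0 else -1
    return s

w = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]          # symmetric, zero diagonal
s = [1, -1, -1]
e_before = energy(w, s)
e_after = energy(w, async_update(w, s))
# With zero self-feedback, e_after <= e_before is guaranteed, so the costly
# per-iteration energy check criticized in the abstract becomes unnecessary.
```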


Creating Simultaneous Story Arcs Using Constraint Based Narrative Structure (제약 조건 기반 서술구조를 이용한 동시 진행 이야기의 생성)

  • Moon, Sung-Hyun;Kim, Seok-Kyoo;Hong, Euy-Seok;Han, Sang-Yong
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.5
    • /
    • pp.107-114
    • /
    • 2010
  • A nonlinear story is generated through interaction with users in an interactive storytelling system. In a play or movie, the audience watches one scene at a time and must wait for the current scene to end before the next begins. In the real world, however, various events can happen simultaneously at different places, and events performed by other characters may dramatically affect the flow of the story. This paper proposes a constraint-based narrative structure for creating such stories, called "Simultaneous Story Arcs", and a "Multi Viewpoint" mechanism for simultaneously directing the stories unfolding in each place.

ESTIMATION OF PEDESTRIAN FLOW SPEED IN SURVEILLANCE VIDEOS

  • Lee, Gwang-Gook;Ka, Kee-Hwan;Kim, Whoi-Yul
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.330-333
    • /
    • 2009
  • This paper proposes a method to estimate the flow speed of pedestrians in surveillance videos. In the proposed method, the average moving speed of pedestrians is measured by estimating the size of the real-world motion from the observed motion vectors. For this purpose, pixel-to-meter conversion factors are calculated from the camera geometry. The height information, which is lost through camera projection, is predicted statistically from simulation experiments. Compared to previous work on flow speed estimation, our method can be applied to various camera views because it separates the scene parameters explicitly. Experiments were performed on both simulated image sequences and real video. On the simulated videos, the proposed method estimated the flow speed with an average error of about 0.1 m/s, and it also showed promising results on the real video.
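The pixel-to-meter conversion from camera geometry described here can be sketched with a flat-ground pinhole model. The geometry, function names, and parameter values below are assumptions for illustration, not the paper's exact formulation.

```python
import math

# Illustrative sketch: convert an image-plane motion vector into a real-world
# ground speed using a flat-ground pinhole camera model.
# H: camera height (m), tilt: downward tilt from horizontal (rad),
# f: focal length (px), cy: principal point row, fps: frame rate.
def ground_distance(row, H, tilt, f, cy):
    """Horizontal ground distance (m) imaged at a given pixel row."""
    return H * math.tan(math.pi / 2 - tilt - math.atan((row - cy) / f))

def meters_per_pixel(row, H, tilt, f, cy):
    """Local pixel-to-meter conversion factor, by finite difference over one row."""
    return abs(ground_distance(row + 0.5, H, tilt, f, cy)
               - ground_distance(row - 0.5, H, tilt, f, cy))

def flow_speed(pixel_displacement, row, H, tilt, f, cy, fps):
    """Pedestrian speed in m/s from a per-frame pixel displacement magnitude."""
    return pixel_displacement * meters_per_pixel(row, H, tilt, f, cy) * fps

# Hypothetical camera setup: 6 m high, tilted 30 degrees down, 25 fps.
speed = flow_speed(pixel_displacement=2.0, row=400, H=6.0,
                   tilt=math.radians(30), f=800.0, cy=240, fps=25)
```

Because the conversion factor depends on the pixel row, the same pixel displacement maps to a larger real-world speed farther from the camera, which is why separating these scene parameters lets the method generalize across camera views.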


Visualization using Emotion Information in Movie Script (영화 스크립트 내 감정 정보를 이용한 시각화)

  • Kim, Jinsu
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.11
    • /
    • pp.69-74
    • /
    • 2018
  • Through the convergence of Internet technology and various information technologies, it is possible to collect and process vast amounts of information and to exchange knowledge according to each user's personal preference. In particular, users tend to prefer engaging content that matches their preferences through the flow of emotional changes contained in film media. Based on the information presented in the script, users seek to visualize the flow of emotion across the whole film, the flow of emotion within a specific scene, or a specific scene itself, in order to understand it more quickly. In this paper, raw data are obtained from a movie web page and transformed into a standardized scenario format through a refinement process. The refined data are then converted into an XML document so that various pieces of information can be obtained easily, and each paragraph is fed into an emotion prediction system to predict its emotion. We propose a system that mixes the predicted emotion flow with the amount of information contained in the script, so that the user can easily understand the changes in the emotional states of the characters, in the whole script or in a specific part, across the various emotions of interest.
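The pipeline this abstract describes, a refined script stored as XML with per-paragraph emotion prediction feeding a per-scene emotion flow, might be sketched as follows. The `<scene>`/`<paragraph>` tags and the keyword-based predictor are hypothetical stand-ins, not the paper's actual schema or prediction system.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Assumed toy schema for the refined scenario document.
SCRIPT_XML = """
<script>
  <scene id="1">
    <paragraph>She smiles and laughs at the joke.</paragraph>
    <paragraph>He storms out, furious.</paragraph>
  </scene>
  <scene id="2">
    <paragraph>They weep quietly at the grave.</paragraph>
  </scene>
</script>
"""

KEYWORDS = {"joy": ["smile", "laugh"], "anger": ["furious", "storm"],
            "sadness": ["weep", "grave"]}

def predict_emotion(text):
    """Toy stand-in for the emotion prediction system: keyword matching."""
    text = text.lower()
    scores = Counter({emo: sum(kw in text for kw in kws)
                      for emo, kws in KEYWORDS.items()})
    return scores.most_common(1)[0][0]

def emotion_flow(xml_text):
    """Return [(scene_id, [emotion per paragraph])] ready for visualization."""
    root = ET.fromstring(xml_text)
    return [(scene.get("id"),
             [predict_emotion(p.text) for p in scene.findall("paragraph")])
            for scene in root.findall("scene")]

flow = emotion_flow(SCRIPT_XML)
```

Plotting the resulting per-scene emotion sequence over scene order would give the whole-film emotion flow the abstract refers to; restricting to one scene gives the scene-level view.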

Estimation of Moving Information for Tracking of Moving Objects

  • Park, Jong-An;Kang, Sung-Kwan;Jeong, Sang-Hwa
    • Journal of Mechanical Science and Technology
    • /
    • v.15 no.3
    • /
    • pp.300-308
    • /
    • 2001
  • Tracking moving objects within video streams is a complex and time-consuming process, and a large number of moving objects increases the computation time further, causing real-time processing problems. Changes in the environment also cause errors in the estimated tracking information. In this paper, we present a new method for tracking moving objects using optical flow motion analysis. Optical flow represents an important family of visual information processing techniques in computer vision. Segmenting an optical flow field into coherent motion groups and estimating each underlying motion are very challenging tasks when the flow field is projected from a scene containing several independently moving objects, and the problem is further complicated when the optical flow data are noisy and partially incorrect. Optical flow estimation based on regularization is an iterative method that is very sensitive to noisy data, so we use the Combinatorial Hough Transform (CHT) and voting accumulation to find the optimal constraint lines, and logical operations to decrease the computation time. Optical flow vectors of the moving objects are extracted, and the motion information of the objects is computed from them. Simulation results on noisy test images show that the proposed method finds better flow vectors and estimates the motion information of objects in real-time video streams more correctly.
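The idea of recovering motion by voting for constraint lines can be illustrated with a toy sketch. Each pixel's gradient constraint Ix*u + Iy*v + It = 0 is a line in (u, v) velocity space; accumulating votes along these lines and taking the peak recovers the dominant motion. The grid resolution and tolerance below are assumptions, and this is the plain gradient-constraint form rather than the paper's full CHT with logical operations.

```python
# Hough-style voting in velocity space: every (Ix, Iy, It) triple votes for all
# quantized (u, v) cells lying on its constraint line Ix*u + Iy*v + It = 0.
def hough_flow(constraints, u_range, v_range, tol=0.5):
    votes = {}
    for ix, iy, it in constraints:
        for u in u_range:
            for v in v_range:
                if abs(ix * u + iy * v + it) <= tol:   # (u, v) lies on the line
                    votes[(u, v)] = votes.get((u, v), 0) + 1
    return max(votes, key=votes.get)                   # peak = dominant motion

# Synthetic constraints generated from a true motion (u, v) = (2, -1):
# It = -(Ix*u + Iy*v) for a few gradient directions.
true_u, true_v = 2, -1
grads = [(1, 0), (0, 1), (1, 1), (2, -1)]
cons = [(ix, iy, -(ix * true_u + iy * true_v)) for ix, iy in grads]
grid = range(-3, 4)
u, v = hough_flow(cons, grid, grid)
```

Because each constraint votes along an entire line, a few noisy or incorrect triples only add scattered votes and rarely shift the peak, which is the robustness argument behind voting accumulation.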


Optical Flow Measurement Based on Boolean Edge Detection and Hough Transform

  • Chang, Min-Hyuk;Kim, Il-Jung;Park, Jong-An
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.1
    • /
    • pp.119-126
    • /
    • 2003
  • The problem of tracking moving objects in a video stream is discussed in this paper. We review the popular optical flow technique for moving object detection: optical flow finds the velocity vector at each pixel in the entire video scene, but optical-flow-based methods require complex computations and are sensitive to noise. In this paper, we propose a new method based on the Hough transform and voting accumulation to improve accuracy and reduce computation time. Further, we apply a Boolean-based edge detector, which provides accurate and very thin edges; edge detection and segmentation are used to extract the moving objects in the image sequences and to reduce the computation time of the CHT. The difference of two edge maps with thin edges gives better localization of moving objects. The simulation results show that the proposed method improves the accuracy of the computed optical flow vectors and extracts the moving objects' information more accurately. The edge detection and segmentation process accurately finds the locations and areas of the real moving objects, which makes extracting the motion information easy and accurate, and the CHT- and voting-accumulation-based optical flow measures the flow vectors, including the direction of the moving objects, accurately.

Frame-Layer H.264 Rate Control for Scene-Change Video at Low Bit Rate (저 비트율 장면 전환 영상에 대한 향상된 H.264 프레임 단위 데이터율 제어 알고리즘)

  • Lee, Chang-Hyun;Jung, Yun-Ho;Kim, Jae-Seok
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.11
    • /
    • pp.127-136
    • /
    • 2007
  • An abrupt scene-change frame is one that is hardly correlated with the previous frames. In that case, because an intra-coded frame has less distortion than an inter-coded one, almost all macroblocks are encoded in intra mode. This breaks the rate control flow and increases the number of bits used. Since the reference software for H.264 takes no special action for scene-change frames, several studies have tried to solve the problem using the quadratic R-D model. However, since this model is more suitable for inter frames, the existing schemes are unsuitable for computing the QP of a scene-change intra frame. In this paper, an improved rate control scheme that accounts for the characteristics of intra coding is proposed for scene-change frames. The proposed scheme was validated on 16 test sequences. The results showed that the proposed scheme performs better than the existing H.264 rate control schemes: the PSNR was improved by an average of 0.4-0.6 dB and a maximum of 1.1-1.6 dB, and the PSNR fluctuation was also improved by an average of 18.6 %.
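The quadratic R-D model this abstract refers to relates target bits R, residual complexity MAD, and quantization step Q as R = c1*MAD/Q + c2*MAD/Q^2. A minimal sketch of solving it for Q follows; the coefficient and input values are illustrative, not taken from the paper.

```python
import math

# Quadratic rate model: target_bits = c1*MAD/Q + c2*MAD/Q^2.
# Multiplying by Q^2 gives  -R*Q^2 + c1*MAD*Q + c2*MAD = 0,
# a quadratic in Q solved with the standard formula.
def solve_q(target_bits, mad, c1, c2):
    """Return the positive root Q of the quadratic rate model."""
    a = -target_bits
    b = c1 * mad
    c = c2 * mad
    disc = b * b - 4 * a * c
    return (-b - math.sqrt(disc)) / (2 * a)   # positive root since a < 0

# Illustrative numbers: 8000 target bits, MAD of 4, assumed coefficients.
q = solve_q(target_bits=8000.0, mad=4.0, c1=500.0, c2=1500.0)
```

Substituting the returned Q back into c1*MAD/Q + c2*MAD/Q^2 reproduces the bit target, which is the sanity check used when fitting the model. The abstract's point is that this inter-frame model fits intra-coded scene-change frames poorly, motivating the intra-aware scheme.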

Dense Optical flow based Moving Object Detection at Dynamic Scenes (동적 배경에서의 고밀도 광류 기반 이동 객체 검출)

  • Lim, Hyojin;Choi, Yeongyu;Nguyen Khac, Cuong;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.5
    • /
    • pp.277-285
    • /
    • 2016
  • Moving object detection has been an emerging research field for various advanced driver assistance systems (ADAS) and surveillance systems. In this paper, we propose two optical-flow-based moving object detection methods for dynamic scenes. Both methods consist of three successive steps: pre-processing, foreground segmentation, and post-processing. The two methods share the same pre-processing and post-processing steps but differ in the foreground segmentation step. Pre-processing mainly computes an optical flow map in which each pixel holds the amplitude of its motion vector: dense optical flow is estimated using the Farneback technique, and the motion amplitude, normalized to the range 0 to 255, is assigned to each pixel of the map. In the foreground segmentation step, moving objects and background are classified using the optical flow map. Here we propose two algorithms: one is Gaussian mixture model (GMM) based background subtraction applied to the optical flow map; the other is adaptive-thresholding-based foreground segmentation, which classifies each pixel as object or background while updating the threshold value column by column. Through simulations, we show that both optical-flow-based methods achieve sufficiently good object detection performance in dynamic scenes.
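The column-by-column adaptive thresholding described for the second method could be sketched as follows. The running-average update rule below is an assumption for illustration; the abstract does not specify the paper's exact rule.

```python
# Sketch of adaptive-thresholding foreground segmentation on a 0-255
# flow-magnitude map: classify each pixel as object (1) or background (0)
# with a threshold that is updated column by column.
def adaptive_threshold(flow_map, init_thresh=32.0, alpha=0.7):
    rows, cols = len(flow_map), len(flow_map[0])
    mask = [[0] * cols for _ in range(rows)]
    thresh = init_thresh
    for c in range(cols):
        col = [flow_map[r][c] for r in range(rows)]
        for r in range(rows):
            mask[r][c] = 1 if col[r] > thresh else 0
        # Assumed update: blend the previous threshold with a value derived
        # from the current column's mean magnitude.
        thresh = alpha * thresh + (1 - alpha) * (sum(col) / rows + init_thresh)
    return mask

# Toy 3x4 magnitude map: a fast-moving object occupies the last two columns.
fmap = [[2, 3, 200, 210],
        [1, 2, 190, 205],
        [3, 1, 180, 200]]
mask = adaptive_threshold(fmap)
```

In a real pipeline the input map would come from dense Farneback flow with magnitudes normalized to 0-255, as the pre-processing step above describes.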

Study of User Reuse Intention for Gamified Interactive Movies upon Flow Experience

  • Han, Zhe;Lee, Hyun-Seok
    • Journal of Multimedia Information System
    • /
    • v.7 no.4
    • /
    • pp.281-293
    • /
    • 2020
  • As Christine Daley suggests, the "interaction-image" is typical of the age of "Cinema 3.0", which integrates the interactivity of game art and blurs the boundary between producers and consumers. Users are allowed to participate actively in the scene as "players" and to manage the tempo of the story to some extent, which makes them willing to watch interactive movies repeatedly, trying different options to unlock more branch storylines. Accordingly, this paper aims to analyze the contributing factors and effect mechanism behind users' intention to reuse gamified interactive movies, and to offer concrete suggestions for improving reuse intention from the perspectives of interactive film production and operation. Integrating Flow theory with the Technology Acceptance Model (TAM), and separating the intrinsic and extrinsic motivations of the key factors based on the Stimulus-Organism-Response (S-O-R) framework, the research builds an empirical model of users' reuse intention covering cognition, design, attitude, and emotional experience, and conducts an empirical analysis of 425 valid samples using SPSS 22 and Amos 23. The results show that user satisfaction and flow experience strongly affect users' reuse intention, and that perceived usefulness, perceived ease of use, perceived enjoyment, remote perception, interactivity, and flow experience have a significant positive influence on user satisfaction.

Hydrodynamic scene separation from video imagery of ocean wave using autoencoder (오토인코더를 이용한 파랑 비디오 영상에서의 수리동역학적 장면 분리 연구)

  • Kim, Taekyung;Kim, Jaeil;Kim, Jinah
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2019
  • In this paper, we propose a method for separating the hydrodynamic scene of wave propagation from video imagery using an autoencoder. In coastal areas, image analysis methods such as particle tracking and optical flow on video imagery are usually applied to measure ocean waves, owing to the difficulty of direct wave observation with sensors. However, external factors such as ambient light and weather conditions considerably hamper accurate wave analysis in coastal video imagery. The proposed method extracts hydrodynamic scenes by separating only the wave motions, minimizing the effect of ambient light during wave propagation. We visually confirmed that hydrodynamic scenes are separated reasonably well from ambient light and backgrounds in two video datasets, acquired from a real beach and from wave flume experiments. In addition, the latent representation of the original video imagery, obtained through representation learning with a variational autoencoder, was dominated by ambient light and backgrounds, while the hydrodynamic scenes of wave propagation were expressed well independently of these external factors.