• Title/Summary/Keyword: GrabCut (그랩-컷)

Search results: 4 (processing time: 0.016 seconds)

Authoring of Dynamic Information in Augmented Reality Using Video Object Definition (비디오 객체 정의에 의한 동적 증강 정보 저작)

  • Nam, Yang-Hee;Lee, Seo-Jin
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Inserting dynamic objects into augmented reality generally requires modeling or animation tools, a process that demands high expertise and effort. This paper proposes a video-object-based authoring method that enables augmentation with dynamic video objects without such a process. An integrated grab-cut and grow-cut method extracts the initial region of the target object from existing video clips, and a snap-cut method is then applied to track the object's boundary across frames, so that the real world can be augmented with continuous motion frames. Experiments show video cut-out and authoring results achieved with only a few menu selections and simple corrective sketches.
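The grow-cut step mentioned above can be sketched as a cellular automaton in which labelled seed pixels repeatedly "attack" their neighbours. The snippet below is a minimal single-channel sketch of the generic grow-cut rule, not the authors' implementation; all function and parameter names are illustrative:

```python
import numpy as np

def growcut(image, labels, strength, iterations=20):
    """Single-channel grow-cut sketch: labelled seed pixels 'attack' their
    4-neighbours; an attack succeeds when the attacker's strength,
    attenuated by colour difference, exceeds the defender's strength."""
    img = image.astype(float)
    lab = labels.copy()
    st = strength.astype(float)
    max_diff = img.max() - img.min() or 1.0  # avoid division by zero
    h, w = img.shape
    for _ in range(iterations):
        new_lab, new_st = lab.copy(), st.copy()
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # attenuation g is 1 for identical colours, 0 for maximal difference
                        g = 1.0 - abs(img[ny, nx] - img[y, x]) / max_diff
                        attack = g * st[ny, nx]
                        if attack > new_st[y, x]:
                            new_st[y, x] = attack
                            new_lab[y, x] = lab[ny, nx]
        lab, st = new_lab, new_st
    return lab
```

With one foreground seed and one background seed, the labels flood each homogeneous region but stop at strong colour boundaries, which is the behaviour the authoring method relies on for the initial object region.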

A Study on Facial Wrinkle Detection using Active Appearance Models (AAM을 이용한 얼굴 주름 검출에 관한 연구)

  • Lee, Sang-Bum;Kim, Tae-Mook
    • Journal of Digital Convergence
    • /
    • v.12 no.7
    • /
    • pp.239-245
    • /
    • 2014
  • In this paper, a weighted wrinkle detection method is proposed based on analysis of overall facial features such as face contour, face size, eyes, and ears. First, the main facial elements are detected from the input images with the AAM method, which combines shape-based and appearance-based models; these are used to learn a facial model and to match faces in new images against the learned model. Second, the face and background are separated in the image. Four points with the highest likelihood of wrinkling are then selected from the face and assigned high wrinkle weights. Finally, wrinkles are detected by applying the Canny edge algorithm to the weighted points of interest. The proposed algorithm was tested on various images, and the experiments show excellent face and wrinkle detection results in most of them.
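As a rough illustration of weighting edge responses at selected points of interest, the sketch below uses a plain Sobel gradient magnitude as a simplified stand-in for the Canny detector the paper applies; the function names and the multiplicative weighting scheme are assumptions for illustration, not the authors' code:

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    gray = gray.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def weighted_edges(gray, weight, threshold=100.0):
    """Scale the edge response by per-pixel wrinkle weights before
    thresholding, so edges in high-weight regions survive more easily."""
    mag = sobel_magnitude(gray)
    inner = weight[1:-1, 1:-1]  # align weights with the valid (cropped) region
    return (mag * inner) >= threshold
```

The effect is that the same raw edge strength passes the threshold inside a high-weight wrinkle region but is suppressed elsewhere, mirroring the paper's idea of concentrating detection at the four weighted points.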

IR Image Segmentation using GrabCut (GrabCut을 이용한 IR 영상 분할)

  • Lee, Hee-Yul;Lee, Eun-Young;Gu, Eun-Hye;Choi, Il;Choi, Byung-Jae;Ryu, Gang-Soo;Park, Kil-Houm
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.2
    • /
    • pp.260-267
    • /
    • 2011
  • This paper proposes a method for segmenting objects from the background in IR (infrared) images based on the GrabCut algorithm. The GrabCut algorithm needs a window encompassing the object of interest, a step normally performed by the user; to apply it to object recognition in image sequences, however, the window location must be determined automatically. For this, we adopt Otsu's algorithm to coarsely segment the unknown objects of interest in an image, after which the window is located automatically by blob analysis. The GrabCut algorithm requires the probability distributions of both the candidate object region and the background region closely surrounding it in order to estimate Gaussian mixture models (GMMs) of the object and the background. The probability distribution of the background is computed from a background window that has the same number of pixels as the candidate object region. Experiments on various IR images show that the proposed method properly segments the object of interest in IR image sequences. To evaluate the performance of the proposed segmentation method, we compare it with other segmentation methods.
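The automatic window-initialisation idea (Otsu's threshold followed by blob analysis) can be sketched in pure NumPy as follows. This is a generic illustration, not the authors' implementation, and the simple flood-fill bounding box stands in for full blob analysis:

```python
import numpy as np
from collections import deque

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximising the between-class
    variance of the foreground/background split."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = (np.arange(256) * hist).sum()
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                      # mean of class below threshold
        m1 = (sum_all - sum0) / (total - w0)  # mean of class above
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def largest_blob_bbox(binary):
    """4-connected flood fill; returns (top, left, bottom, right) of the
    largest foreground blob, i.e. the candidate GrabCut window."""
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    best = None
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                q = deque([(sy, sx)]); seen[sy, sx] = True
                ys, xs = [sy], [sx]
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            ys.append(ny); xs.append(nx)
                            q.append((ny, nx))
                if best is None or len(ys) > best[0]:
                    best = (len(ys), (min(ys), min(xs), max(ys), max(xs)))
    return None if best is None else best[1]
```

In the paper's pipeline, the resulting bounding box would replace the user-drawn window that GrabCut normally requires, so the segmentation can run unattended over an image sequence.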

Poisson Video Composition Using Shape Matching (형태 정합을 이용한 포아송 동영상 합성)

  • Heo, Gyeongyong;Choi, Hun;Kim, Jihong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.4
    • /
    • pp.617-623
    • /
    • 2018
  • In this paper, we propose a novel seamless video composition method based on shape matching and the Poisson equation. The method consists of a video segmentation process and a video blending process. In the segmentation process, the user first sets a trimap for the first frame, and the grab-cut algorithm is then performed. Next, since segmentation performance may degrade when the color, brightness, and texture of the object and the background are similar, the object region segmented in the current frame is corrected through shape matching between the objects of the current and previous frames. In the blending process, the object of the source video and the background of the target video are blended seamlessly using the Poisson equation, with the object placed along a movement path set by the user. Simulation results show that the proposed method performs better in both the naturalness of the composite video and computation time.
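The Poisson blending step can be illustrated with the textbook discrete formulation: solve the Poisson equation inside the composited region, using the source image's Laplacian as the guidance field and the target image as the Dirichlet boundary condition. The Jacobi solver below is a minimal single-channel sketch of that formulation (real implementations use much faster solvers), not the paper's code:

```python
import numpy as np

def poisson_blend(source, target, mask, iterations=500):
    """Seamless-cloning sketch: inside `mask`, iterate Jacobi updates for
    the discrete Poisson equation with the source Laplacian as guidance
    and the surrounding target pixels as the boundary condition."""
    src = source.astype(float)
    out = target.astype(float).copy()
    inside = mask.astype(bool)
    h, w = out.shape
    for _ in range(iterations):
        new = out.copy()
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if inside[y, x]:
                    # 4-neighbour sums of the current solution and of the source
                    nb = out[y-1, x] + out[y+1, x] + out[y, x-1] + out[y, x+1]
                    nb_src = src[y-1, x] + src[y+1, x] + src[y, x-1] + src[y, x+1]
                    lap = 4 * src[y, x] - nb_src  # guidance: source Laplacian
                    new[y, x] = (nb + lap) / 4.0
        out = new
    return out
```

Because only the source's gradients are carried over, the pasted object inherits the target's tones at the seam, which is what makes the composite look natural even when the two videos differ in colour and brightness.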