• Title/Summary/Keyword: Non-photorealistic Rendering

85 search results

Exaggerated Cartooning using a Reference Image (참조 이미지를 이용한 과장된 카투닝)

  • Han, Myoung-Hun; Seo, Sang-Hyun; Ryoo, Seung-Taek; Yoon, Kyung-Hyun
    • Journal of the Korea Computer Graphics Society, v.17 no.1, pp.33-38, 2011
  • This paper proposes an image cartooning method that produces cartoon-like images of a target using reference images. We deform the target image toward pre-defined reference images: feature points are extracted from the target with an Active Appearance Model (AAM), and a warping function built from the corresponding feature points of the target and the reference is applied to the target. Simplified cartoon-like images are then created by abstracting the deformed target image, drawing its edges, and quantizing the luminance of the abstracted image. When an exaggerated cartoon image is used as the reference, the two main concepts of cartooning (exaggeration and simplification) are both embodied in this method. Varying the warping strength and the reference image yields a variety of results. (A hedged sketch of the feature-based warping step follows this entry.)
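The paper's exact AAM and warping functions are not reproduced here. As a rough stand-in, the sketch below warps an image so that target landmarks move toward (exaggerated) reference landmarks, interpolating the landmark displacements with a thin-plate-spline radial basis function and resampling with OpenCV's remap. The landmark arrays `src_pts` and `dst_pts` are assumed to come from any face landmark detector; the function name and all details are illustrative.

```python
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_to_reference(img, src_pts, dst_pts):
    """Warp img so that landmarks src_pts (Nx2 x/y) move toward dst_pts."""
    h, w = img.shape[:2]
    # Backward mapping: for every output pixel, decide where to sample the
    # source image by interpolating the landmark displacements.
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    grid = np.stack([grid_x.ravel(), grid_y.ravel()], axis=1).astype(np.float64)
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    rbf = RBFInterpolator(dst, src - dst, kernel="thin_plate_spline")
    disp = rbf(grid)                                   # per-pixel displacement
    map_x = (grid[:, 0] + disp[:, 0]).reshape(h, w).astype(np.float32)
    map_y = (grid[:, 1] + disp[:, 1]).reshape(h, w).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REFLECT)
```

Stronger exaggeration can be approximated by scaling the displacement (e.g., blending `dst = src + alpha * (dst - src)`) before building the interpolator.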

Stylized Facial Illustration (스타일화된 얼굴 일러스트레이션)

  • Son, Min-Jung; Cho, Sung-Hyun; Lee, Seung-Wook; Koo, Bon-Ki; Lee, Seung-Yong
    • Journal of the Korea Computer Graphics Society, v.14 no.2, pp.27-33, 2008
  • We propose a stylized facial illustration method that expresses the important features of a human face picture in a highly abstracted yet effective way. The method first detects facial components such as the eyes and their associated regions in the input image, and then uses the detection results to render a stylized portrait. The illustration stage consists of two key components plus extras: a tonal illustration component that draws simple tones, a line illustration component that draws a set of lines, and additional components for hair, clothes, etc. The goal is to illustrate the features of the subject effectively in a highly abstracted way, like a hand-drawn painting; to achieve this, the method adopts an oriental black-ink painting style, which expresses objects through empty space and simple elements such as abstracted lines. (A minimal line-pass sketch follows this entry.)
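The paper's tonal and line illustration components are not public. As a loose stand-in for the line component only, the sketch below draws sparse black, ink-like lines on a white canvas by thresholding a difference-of-Gaussians edge response; OpenCV is assumed, and all parameter values are illustrative rather than the paper's.

```python
import cv2
import numpy as np

def ink_lines(img_bgr, sigma=1.0, k=1.6, thresh=6.0):
    """Draw sparse black lines on a white canvas from a DoG edge response."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    g1 = cv2.GaussianBlur(gray, (0, 0), sigma)        # fine-scale blur
    g2 = cv2.GaussianBlur(gray, (0, 0), sigma * k)    # coarse-scale blur
    dog = g1 - g2                                     # difference of Gaussians
    canvas = np.full_like(gray, 255.0)                # empty space stays white
    canvas[np.abs(dog) > thresh] = 0.0                # strong responses become ink
    return canvas.astype(np.uint8)
```

A tonal pass in the spirit of the abstract could be layered underneath by posterizing the luminance into two or three flat tones before compositing the lines.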

Visualizing Motion Data as Sequential Images in Comic Book Layout (만화책 형식 동작 데이터 시각화)

  • Lee, Kang-Hoon
    • Journal of the Korea Computer Graphics Society, v.15 no.4, pp.31-40, 2009
  • Captured motion data is widely used today in a variety of areas, including film production, game development, sports analysis, and medical rehabilitation. The ability to analyze and process motion data has advanced rapidly over the last few decades. However, it is still difficult for users to quickly understand the contents of motion data consisting of a series of time-varying poses. One typical approach is to visualize consecutive poses in sequence while adjusting the three-dimensional view, which is often time-consuming and laborious, especially when users need to repeatedly control time and view in order to search for desired motions. We present a method of visualizing motion data as a sequence of images in a comic book layout so that users can rapidly understand the overall flow of the motion data and easily identify the motions they are looking for. The usefulness of our approach is demonstrated by visualizing various kinds of motion data, including locomotion, boxing, and interaction with environments. (A hedged layout sketch follows this entry.)
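The paper's layout algorithm is not reproduced here; the sketch below only illustrates the general idea under simple assumptions: key poses are picked greedily whenever the pose moves far enough from the last selected one, then tiled into a fixed-column grid of panels with matplotlib. The synthetic stick-figure data, function names, and thresholds are purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def select_key_poses(poses, min_dist=1.0):
    """Greedy keyframe selection by Euclidean pose distance."""
    keys = [0]
    for i in range(1, len(poses)):
        if np.linalg.norm(poses[i] - poses[keys[-1]]) > min_dist:
            keys.append(i)
    return keys

def comic_layout(poses, bones, keys, cols=3):
    """Tile the selected poses into a comic-like grid of panels."""
    rows = int(np.ceil(len(keys) / cols))
    fig, axes = plt.subplots(rows, cols, figsize=(3 * cols, 3 * rows))
    flat = np.atleast_1d(axes).ravel()
    for ax, k in zip(flat, keys):
        for a, b in bones:                            # draw the stick figure
            ax.plot(*zip(poses[k][a], poses[k][b]), "k-")
        ax.set_title(f"frame {k}")
    for ax in flat:
        ax.set_aspect("equal")
        ax.axis("off")
    fig.tight_layout()
    return fig

# Synthetic example: a 5-joint figure whose arm swings over 120 frames.
t = np.linspace(0, 2 * np.pi, 120)
poses = np.zeros((120, 5, 2))
poses[:, 1] = [0, 1]                                  # hip -> chest
poses[:, 2] = [0, 2]                                  # chest -> head
poses[:, 3, 0], poses[:, 3, 1] = np.cos(t), 1 + np.sin(t)   # swinging arm
poses[:, 4] = [0.5, -1]                               # a leg
bones = [(0, 1), (1, 2), (1, 3), (0, 4)]
comic_layout(poses, bones, select_key_poses(poses))
plt.show()
```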

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin; Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association, v.11 no.12, pp.28-39, 2011
  • As broadband multimedia technologies have developed, the commercial market for digital content has also spread widely. In particular, the digital cartoon market (e.g., internet cartoons) has grown rapidly, so video cartooning has been continuously researched to address the shortage and limited variety of cartoons. Until now, video cartooning systems have focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is deployed as a service. In this paper, we propose a new automatic frame extraction method for video cartooning systems. First, we separate the video and audio streams of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparison with pre-trained audio data using a GMM classifier, which lets us identify the speech segments. On the video side, we extract frames using a general scene change detection method such as the histogram method, and then keep the frames that are meaningful for the cartoon by running face detection on the extracted frames. Finally, scene transition frames that contain a face and fall within a speech segment are extracted automatically, yielding frames over a continuous span of time that are suitable for movie cartooning. (A hedged sketch of the scene change and face filtering steps follows this entry.)
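Below is a minimal sketch of the histogram-based scene change detection and face filtering described above, assuming OpenCV and its bundled Haar cascade. The audio stage (MFCC/ZCR features with a GMM classifier) is omitted, and a full pipeline would additionally intersect the selected frames with the detected speech segments. The function name and threshold values are illustrative, not the paper's.

```python
import cv2

def candidate_frames(video_path, hist_thresh=0.4, sample_step=5):
    """Indices of frames where the color histogram changes sharply
    and at least one face is detected."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    prev_hist, selected, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_step == 0:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                # Bhattacharyya distance: near 0 = similar, near 1 = scene cut.
                d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
                if d > hist_thresh:
                    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
                    if len(faces) > 0:
                        selected.append(idx)
            prev_hist = hist
        idx += 1
    cap.release()
    return selected
```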

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok; Kim, Jinmo
    • Journal of the Korea Computer Graphics Society, v.23 no.2, pp.11-22, 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system built on image processing algorithms. IMToon allows general users to easily and efficiently produce the frames that make up an image-based cartoon. The authoring system consists of two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through image-based cartoon shading and outline drawing steps. Image-based cartoon shading takes images of the desired scenes from the user, separates the brightness information from the color model of the input images, simplifies it into a desired number of shading steps, and recreates the result as a cartoon-style image. The final cartoon-style image is then produced by the outline drawing step, in which outlines obtained through edge detection are applied to the shaded image. The interactive story editor is used to enter text balloons and subtitles in a dialog structure, producing a finished scene of a story-driven cartoon such as a webtoon or comic book. In addition, the cartoon effector is extended from still images to video. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system. (A hedged sketch of the shading-and-outline step follows this entry.)
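Below is a minimal sketch of the shading-and-outline idea described above, assuming OpenCV: the brightness channel is separated in the Lab color model, quantized into a few flat shading steps, and outlines from Canny edge detection are drawn on top. The choice of Lab, the Canny detector, and all parameter values are assumptions for illustration, not necessarily the paper's.

```python
import cv2
import numpy as np

def cartoonize(img_bgr, levels=4, edge_lo=80, edge_hi=160):
    """Quantize brightness into `levels` flat steps and overlay dark outlines."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)
    # Simplify the brightness channel into a small number of shading steps.
    step = 256 // levels
    L_q = ((L.astype(np.int32) // step) * step + step // 2)
    L_q = L_q.clip(0, 255).astype(np.uint8)
    shaded = cv2.cvtColor(cv2.merge([L_q, a, b]), cv2.COLOR_LAB2BGR)
    # Outline drawing: edges detected on the original brightness channel.
    edges = cv2.Canny(L, edge_lo, edge_hi)
    shaded[edges > 0] = (0, 0, 0)
    return shaded
```

Applying the same function frame by frame to a decoded video stream would correspond, at a sketch level, to the video extension mentioned in the abstract.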