• Title/Summary/Keyword: Non-Photorealistic

Cartoon Character Rendering based on Shading Capture of Concept Drawing (원화의 음영 캡쳐 기반 카툰 캐릭터 렌더링)

  • Byun, Hae-Won; Jung, Hye-Moon
    • Journal of Korea Multimedia Society, v.14 no.8, pp.1082-1093, 2011
  • Traditional cartoon-character rendering cannot properly reproduce the feel of the original concept drawings. In this paper, we propose a capture technique that extracts a toon shading model from concept drawings, and with it we build a novel system for rendering 3D cartoon characters. The benefits of this system are that it cartoonizes a 3D character according to saliency, emphasizing the character's form, and that it supports a sketch-based user interface with which artists can edit shading in post-production. To this end, we generate textures automatically with an RGB color-sorting algorithm that analyzes the color distribution and proportions of a selected region. In the cartoon rendering process, we use saliency as a measure of the visual importance of each area of the 3D mesh, and we present a novel cartoon rendering algorithm based on mesh saliency. For fine adjustment of shading style, we propose a user interface that allows artists to freely add and delete shading on a 3D model. Finally, we show the usefulness of the proposed system through a user evaluation.
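
The shading-capture idea lends itself to a short illustration. Below is a minimal sketch, not the authors' implementation: it assumes a user-selected region of the concept drawing is given as an array of RGB pixels, sorts them by luminance into a few tone bands (a stand-in for the paper's RGB color-sorting step), and uses the resulting ramp as a discrete toon-shading lookup. All function names are hypothetical.

```python
import numpy as np

def capture_tone_ramp(region_pixels: np.ndarray, num_bands: int = 3) -> np.ndarray:
    """Sort the RGB pixels of a selected concept-drawing region by luminance
    and average them into a few tone bands. Returns (num_bands, 3), dark to bright."""
    luminance = region_pixels @ np.array([0.299, 0.587, 0.114])
    order = np.argsort(luminance)
    bands = np.array_split(region_pixels[order], num_bands)
    return np.stack([band.mean(axis=0) for band in bands])

def toon_shade(normals: np.ndarray, light_dir: np.ndarray, ramp: np.ndarray) -> np.ndarray:
    """Shade per-pixel normals with the captured ramp: the diffuse term
    n.l selects one of the discrete tone bands."""
    light = light_dir / np.linalg.norm(light_dir)
    ndotl = np.clip(normals @ light, 0.0, 1.0)
    band = np.minimum((ndotl * len(ramp)).astype(int), len(ramp) - 1)
    return ramp[band]
```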

A Study on Aerial Perspective on Painterly Rendering (회화적 렌더링에서의 대기원근법의 표현에 관한 연구)

  • Jang, Jae-Ni; Ryoo, Seung-Taek; Seo, Sang-Hyun; Lee, Ho-Chang; Yoon, Kyung-Hyun
    • Journal of Korea Multimedia Society, v.13 no.10, pp.1474-1486, 2010
  • In this paper, we propose an algorithm that reproduces, in painterly rendering, the distance-depiction technique of real painting known as aerial perspective. Aerial perspective depicts the attenuation of light in the atmosphere; the scattering effect varies with distance, altitude, and atmospheric density. To reflect these properties, we use the depth information corresponding to an input image together with user-defined parameters, so the user can adjust the strength of the effect. We compute the distance and altitude of every pixel from the depth information and shot parameters, and control the scattering effect through expression parameters. Additionally, we accentuate occluding edges detected from the depth information to clarify the sense of distance between foreground and background. We apply our algorithm to various landscape scenes and generate results with a stronger sense of distance than existing work.
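
As a rough illustration of depth-driven atmospheric attenuation, the hedged sketch below blends each pixel toward an atmosphere color with a Beer-Lambert-style exponential falloff, weakened by a crude altitude proxy. The parameter names and the row-based altitude estimate are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def aerial_perspective(image, depth, sky_color=(0.72, 0.78, 0.85),
                       density=0.8, altitude_falloff=0.5, horizon=0.5):
    """Blend each pixel toward the atmosphere color using exponential
    attenuation of viewing distance; rows above the horizon are treated
    as higher altitude, where the air is thinner. All knobs are
    user-adjustable expression parameters."""
    h = image.shape[0]
    rows = np.linspace(0.0, 1.0, h)[:, None]                # 0 = top of frame
    altitude = np.clip(horizon - rows, 0.0, 1.0)            # crude altitude proxy
    sigma = density * np.exp(-altitude_falloff * altitude)  # thinner air higher up
    transmittance = np.exp(-sigma * depth)[..., None]       # Beer-Lambert falloff
    sky = np.asarray(sky_color)
    return transmittance * image + (1.0 - transmittance) * sky
```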

Research on the Production of Analog Pen Techniques in Smart Media: Focusing on the Application "Sketch Plus" (스마트 미디어에서의 아날로그 펜화기법 제작 연구 -어플리케이션 "스케치 플러스"를 중심으로)

  • Yoon, Dong-Joon; Oh, Seung-Hwan
    • Journal of Digital Convergence, v.14 no.12, pp.413-421, 2016
  • This research, centered on the development of the iPhone app 'Sketch Plus', concerns the reproduction of pen-drawing techniques on smart media. Our goal is to develop and reproduce pen techniques on a surreal-rendering basis and, in the process, to describe how design perspectives and algorithms are fused. The research covers the concept of surreal rendering, a technique that mimics traditional art forms, and suggests 15 pen techniques and ways to present them, based on an analysis of previous research on smart media. We describe how hatching patterns are generated and reused to resolve the lag that occurs when reproducing pen techniques within the limitations of smart devices, and we organize the conversion into pen patterns in four steps: rough sketch, contrast, pattern application and mimicry, and coloring. We hope this research on reproducing analog pen techniques can serve as an example for production based on fused surreal rendering.
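
To make the hatching-pattern step concrete, here is a minimal sketch of tone-driven cross-hatching: darker pixels switch on more stroke directions, and the four stroke layers are precomputed masks that could be cached and reused across frames (one plausible reading of the reuse strategy mentioned above). It is illustrative only; the app's 15 pen techniques are certainly richer.

```python
import numpy as np

def hatch(gray: np.ndarray, spacing: int = 6, thickness: int = 1) -> np.ndarray:
    """Map tone to overlapping hatching layers: darker pixels receive more
    stroke directions (vertical, horizontal, 45, 135 degrees), mimicking
    pen cross-hatching. `gray` is a float image in [0, 1]."""
    h, w = gray.shape
    y, x = np.mgrid[0:h, 0:w]
    layers = [x % spacing < thickness,            # vertical strokes
              y % spacing < thickness,            # horizontal strokes
              (x + y) % spacing < thickness,      # 45-degree strokes
              (x - y) % spacing < thickness]      # 135-degree strokes
    out = np.ones_like(gray, dtype=float)
    # tone thresholds: each darker band switches on one more stroke layer
    for layer, thresh in zip(layers, [0.8, 0.6, 0.4, 0.2]):
        out[np.logical_and(gray < thresh, layer)] = 0.0
    return out
```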

A Stylized Font Rendering System for Black/White Comic Book Generation (흑백 만화 제작을 위한 스타일 폰트 설계 시스템)

  • Lee, Jeong-Won; Ryu, Dong-Sung; Park, Soo-Hyun; Cho, Hwan-Gue
    • The KIPS Transactions: Part A, v.15A no.2, pp.75-86, 2008
  • Black/white comic rendering is one research topic in the field of non-photorealistic rendering (NPR). Black/white comics are still produced manually, which requires a great deal of time and handwork. We therefore propose the COmics Rendering system on VIdeo Stream (CORVIS), which transforms video streams into black/white comic cuts. Stylized fonts, one form of comic expression, can render onomatopoeic and mimetic words with exaggeration, but current comic generation systems provide only limited stylized-font effects. This paper proposes a model for stylized fonts that expresses various effects, including geometric deformations. With it we can place stylized fonts over still cuts of movies and background textures on the cuts of plain black/white comics. The final output of our system is of high enough quality to compare with manually drawn black/white comics.
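
As an example of the geometric deformations such a stylized-font model might include, the sketch below applies a sine-wave displacement and an emphasis-driven scaling to glyph outline points. Both functions are hypothetical stand-ins, not CORVIS code.

```python
import numpy as np

def wave_deform(points: np.ndarray, amplitude=3.0, wavelength=40.0, phase=0.0):
    """Displace glyph-outline points (N, 2) with a sine wave along x,
    a simple geometric deformation for exaggerated sound effects."""
    out = points.astype(float).copy()
    out[:, 1] += amplitude * np.sin(2 * np.pi * out[:, 0] / wavelength + phase)
    return out

def scale_by_emphasis(points: np.ndarray, emphasis: float, center: np.ndarray):
    """Scale an outline about its center; louder onomatopoeia gets bigger type."""
    return center + (points - center) * emphasis
```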

Visualizing Motion Data as Sequential Images in Comic Book Layout (만화책 형식 동작 데이터 시각화)

  • Lee, Kang-Hoon
    • Journal of the Korea Computer Graphics Society, v.15 no.4, pp.31-40, 2009
  • Captured motion data is widely used today in a variety of areas including film production, game development, sports analysis, and medical rehabilitation. The ability to analyze and process motion data has increased rapidly over the last decades. However, it is still difficult for users to quickly understand the contents of motion data consisting of a series of time-varying poses. One typical approach is to visualize consecutive poses in sequence while adjusting the three-dimensional view, which is often time-consuming and laborious, especially when users must repeatedly control time and view to search for desired motions. We present a method of visualizing motion data as a sequence of images in comic book layout, so that users can rapidly understand the overall flow of the data and easily identify the motions they want. The usefulness of our approach is demonstrated by visualizing various kinds of motion data, including locomotion, boxing, and interaction with environments.
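
A minimal sketch of the two ingredients such a visualization needs, keyframe selection and panel layout, might look as follows; the greedy farthest-pose selection and the fixed grid are assumptions for illustration, not the paper's method.

```python
import numpy as np

def pick_keyframes(poses: np.ndarray, num_panels: int) -> list:
    """Greedily pick frames whose pose vectors (F, D) differ most from the
    previously chosen one, so each comic panel shows a visibly new moment."""
    chosen = [0]
    while len(chosen) < num_panels:
        dists = np.linalg.norm(poses - poses[chosen[-1]], axis=1)
        dists[chosen] = -1.0             # never re-pick a chosen frame
        chosen.append(int(dists.argmax()))
    return sorted(chosen)

def comic_layout(num_panels: int, cols: int = 3, panel_w=220, panel_h=160, gutter=12):
    """Left-to-right, top-to-bottom panel rectangles (x, y, w, h)."""
    return [((i % cols) * (panel_w + gutter), (i // cols) * (panel_h + gutter),
             panel_w, panel_h) for i in range(num_panels)]
```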

A Study on Pointillistic Rendering Based on User Defined Palette (사용자 정의 팔레트에 기반한 점묘화 렌더링에 관한 연구)

  • Seo, Sang-Hyun; Yoon, Kyung-Hyun
    • Journal of Korea Multimedia Society, v.11 no.4, pp.554-565, 2008
  • The French Neo-Impressionist painter Georges Seurat introduced pointillism under the theory that the individual pigment colors on the canvas are reconstructed on the human retina. Pointillism is a painting technique in which many small brush strokes combine to form a picture on the canvas: seen from afar, the individual stroke colors become indistinguishable and are perceived as an intermixed color, a phenomenon called juxtaposed color mixture. In this paper, we present a painterly rendering method for generating pointillist images. To express the countless separate dots seen in pointillist works, we propose a hierarchical point structure built with the Wang Tile method. A user-defined palette is also constructed, modeled on the way a Neo-Impressionist painter works with his palette. Finally, based on this palette, we introduce a probabilistic algorithm that divides the colors of the image (sampled through the hierarchical point structure) into juxtaposed colors. The hierarchical point set produced by the juxtaposed color division algorithm is then converted into brush strokes.
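
The probabilistic juxtaposed-color division can be sketched compactly: each sampled color is split between its two nearest palette entries with probabilities inversely proportional to distance, so the original color is reconstructed by optical mixture. This is an assumed reading of the algorithm, with hypothetical names.

```python
import numpy as np

def juxtapose(point_colors: np.ndarray, palette: np.ndarray,
              rng: np.random.Generator) -> np.ndarray:
    """Assign each sampled color (N, 3) one of its two nearest palette
    colors, choosing the closer one with proportionally higher probability,
    so that the dots optically mix back to the original color."""
    # distances from every point color to every palette entry: (N, P)
    d = np.linalg.norm(point_colors[:, None, :] - palette[None, :, :], axis=2)
    nearest2 = np.argsort(d, axis=1)[:, :2]
    d0 = np.take_along_axis(d, nearest2[:, :1], axis=1)[:, 0]
    d1 = np.take_along_axis(d, nearest2[:, 1:2], axis=1)[:, 0]
    p_first = d1 / (d0 + d1 + 1e-9)        # closer color is more likely
    pick = (rng.random(len(point_colors)) > p_first).astype(int)
    return palette[nearest2[np.arange(len(point_colors)), pick]]
```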

Artificial Neural Network Method Based on Convolution to Efficiently Extract the DoF Embodied in Images

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information, v.26 no.3, pp.51-57, 2021
  • In this paper, we propose a method to find the depth of field (DoF), i.e. the regions of an image blurred by camera focusing and defocusing, using an efficient convolutional neural network. Our approach uses an RGB channel-based cross-correlation filter to classify the DoF regions of the image efficiently and to build training data for the convolutional neural network. Each training pair consists of an image and its DoF weight map. The DoF weight maps extracted by the cross-correlation filters are smoothed to increase the convergence rate in the network training stage. The DoF weight image obtained at test time stably locates the DoF regions in the input image. As a result, the proposed method can be used in various settings, such as non-photorealistic rendering (NPR) and object detection, by treating the DoF region as the user's region of interest (ROI).
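
The paper builds its training targets with an RGB cross-correlation filter; as a loose stand-in, the sketch below estimates a DoF weight map from local Laplacian energy (in-focus regions carry more high-frequency energy) and then smooths it, mirroring the pre-smoothing of training targets described above. This is an assumption-laden illustration, not the authors' filter.

```python
import numpy as np

def dof_weight_map(gray: np.ndarray, win: int = 7) -> np.ndarray:
    """Estimate per-pixel focus from local Laplacian energy: in-focus areas
    have a strong high-frequency response, out-of-focus areas do not.
    Uses wrap-around borders via np.roll, acceptable for a sketch."""
    lap = (-4.0 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    energy = lap ** 2
    # smooth with a separable box filter (the paper smooths its targets too)
    kernel = np.ones(win) / win
    for axis in (0, 1):
        energy = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, energy)
    return energy / (energy.max() + 1e-9)   # 1 = sharp, 0 = fully defocused
```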

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin; Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association, v.11 no.12, pp.28-39, 2011
  • As broadband multimedia technologies have developed, the commercial market for digital content has also spread widely. Above all, the digital cartoon market, such as internet cartoons, has grown rapidly, and video cartooning has been researched continuously to address the shortage and limited variety of cartoons. Until now, video cartooning research has focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is applied as a service. In this paper, we propose a new automatic frame extraction method for video cartooning systems. First, we separate the video and audio tracks of a movie and extract feature parameters such as MFCC and ZCR from the audio. The audio signal is classified as speech, music, or speech+music by comparing it with pre-trained audio data using a GMM classifier, which allows us to delimit speech regions. For the video, we extract frames using a general scene-change detection method such as the histogram method, and select meaningful frames from among them using face detection. Scene-transition frames that contain faces within speech regions are then extracted automatically, yielding frames suitable for movie cartooning over continuous intervals of the time domain.
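
The histogram-based scene-change step mentioned above is simple enough to sketch: consecutive frames whose gray-level histograms differ by more than a threshold are marked as transitions. The threshold and bin count are illustrative choices, not values from the paper.

```python
import numpy as np

def histogram(frame: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized gray-level histogram of one uint8 frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def scene_changes(frames: list, threshold: float = 0.4) -> list:
    """Mark frame i as a scene change when its histogram differs from the
    previous frame's by more than `threshold` (L1 distance in [0, 2])."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(frames))
            if np.abs(hists[i] - hists[i - 1]).sum() > threshold]
```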

Extended Cartoon Rendering using 3D Texture (3차원 텍스처를 이용한 카툰 렌더링의 만화적 스타일 다양화)

  • Byun, Hae-Won; Jung, Hye-Moon
    • The Journal of the Korea Contents Association, v.11 no.8, pp.123-133, 2011
  • In this paper, we propose a new toon shading method that renders 3D objects in a cartoon style using a 3D texture. Conventional toon shading with a 1D texture produces shading tone by computing the relative orientation of the light vector and the surface normal. A 1D texture alone, however, cannot express the varied tone changes that arise under different viewing conditions, so Barla et al. replaced the 1D texture with a 2D texture whose second dimension corresponds to view-dependent effects such as level of abstraction and depth of field. The proposed scheme extends the 2D texture to a 3D texture by adding a dimension for geometric information of the 3D object, such as curvature, saliency, or coordinates. This approach supports two kinds of extensions for diversifying cartoon styles. First, we support a 'shape exaggeration effect' that emphasizes silhouettes or highlights according to the object's geometric information. Second, we incorporate 'cartoon-specific effects', such as the screen tones and out-of-focus looks that frequently appear in comics. We demonstrate the effectiveness of our approach with examples of 3D objects rendered in various cartoon styles.
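
The core of the method is a three-coordinate texture fetch. A minimal sketch, with nearest-neighbor lookup standing in for the interpolated sampling a real shader would use:

```python
import numpy as np

def toon_lookup(tex3d: np.ndarray, ndotl: np.ndarray,
                view_effect: np.ndarray, geometry: np.ndarray) -> np.ndarray:
    """Index a tone volume with three per-pixel coordinates in [0, 1]:
    diffuse tone (n.l), a view-dependent axis (e.g. distance-driven
    abstraction), and a geometric axis (e.g. curvature or saliency)."""
    d0, d1, d2 = tex3d.shape[:3]
    i = np.clip((ndotl * d0).astype(int), 0, d0 - 1)
    j = np.clip((view_effect * d1).astype(int), 0, d1 - 1)
    k = np.clip((geometry * d2).astype(int), 0, d2 - 1)
    return tex3d[i, j, k]    # nearest-neighbor fetch; real shaders interpolate
```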

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok; Kim, Jinmo
    • Journal of the Korea Computer Graphics Society, v.23 no.2, pp.11-22, 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system built on image processing algorithms. IMToon allows general users to produce the frames that make up a cartoon easily and efficiently from images. The authoring system consists of two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through image-based cartoon shading and outline drawing steps. Image-based cartoon shading takes images of the desired scenes from users, separates the brightness information from the color model of the input images, simplifies it into a shading range with the desired number of steps, and recreates the result as a cartoon-style image. The final cartoon-style image is then completed in the outline drawing step, which applies the outlines of the shaded image via edge detection. The interactive story editor is used to enter speech balloons and captions in a dialog structure, producing a finished scene of a story-driven cartoon such as a webtoon or comic book. In addition, the cartoon effector is extended from still images to video. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
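
A hedged sketch of the cartoon-effector pipeline, quantizing brightness into a few shading bands and overlaying outlines where the luminance gradient is strong (a gradient-magnitude stand-in for the paper's edge-detection step):

```python
import numpy as np

def cartoon_effect(rgb: np.ndarray, levels: int = 4, edge_thresh: float = 0.1):
    """Simplify a float RGB image in [0, 1] into `levels` shading bands
    and draw black outlines at strong luminance edges."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    bands = np.floor(luma * levels) / levels + 0.5 / levels   # band centers
    # rescale color so it follows the simplified shading
    shaded = rgb * (bands / (luma + 1e-9))[..., None]
    gy, gx = np.gradient(luma)
    edges = np.hypot(gx, gy) > edge_thresh
    shaded[edges] = 0.0                                       # black outlines
    return np.clip(shaded, 0.0, 1.0)
```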