• Title/Summary/Keyword: 세일리언시 (saliency)


Saliency Detection Using Entropy Weight and Weber's Law (엔트로피 가중치와 웨버 법칙을 이용한 세일리언시 검출)

  • Lee, Ho Sang; Moon, Sang Whan; Eom, Il Kyu
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.1, pp.88-95, 2017
  • In this paper, we present a saliency detection method that uses entropy weights and Weber contrast in the wavelet transform domain. Our method follows the structure of conventional algorithms that combine a local bottom-up approach with a global top-down approach. First, we perform a multi-level wavelet transform on CIE Lab color images and obtain the global saliency by adding local Weber contrasts to the corresponding low-frequency wavelet coefficients. Next, the local saliency is obtained by applying a Gaussian filter weighted by the entropy of the wavelet high-frequency subbands. The final saliency map is produced by non-linearly combining the local and global saliencies. To evaluate the proposed method, we perform computer simulations on two image databases. The simulation results show that the proposed method outperforms the conventional algorithms.
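A minimal sketch of the pipeline this abstract describes, not the authors' code: a global map from Weber contrast added to the low-frequency band, a local map from entropy-weighted, Gaussian-blurred high-frequency subbands, and a non-linear fusion. The wavelet basis, window sizes, the exact Weber contrast formula, and the product-style fusion are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter, uniform_filter

def weber_contrast(channel, win=9, eps=1e-6):
    """Local Weber contrast: |I - I_background| / I_background."""
    background = uniform_filter(channel, size=win)
    return np.abs(channel - background) / (background + eps)

def subband_entropy(subband, bins=64):
    """Shannon entropy of the subband's coefficient-magnitude histogram."""
    hist, _ = np.histogram(np.abs(subband), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def saliency_map(channel, level=3, sigma=3.0):
    """channel: one 2D CIE Lab channel as a float array."""
    H, W = channel.shape
    coeffs = pywt.wavedec2(channel, 'db4', level=level)
    approx, details = coeffs[0], coeffs[1:]

    # Global saliency: Weber contrast added to the low-frequency band,
    # then upsampled back to the image size.
    global_small = approx + weber_contrast(approx)
    global_sal = np.kron(global_small, np.ones((2 ** level, 2 ** level)))[:H, :W]

    # Local saliency: Gaussian-filtered high-frequency energy, weighted by
    # the entropy of each subband (an assumption about the weighting form).
    local_sal = np.zeros((H, W))
    for (cH, cV, cD) in details:
        for sb in (cH, cV, cD):
            fy = -(-H // sb.shape[0])          # ceil division
            fx = -(-W // sb.shape[1])
            up = np.kron(np.abs(sb), np.ones((fy, fx)))[:H, :W]
            local_sal += subband_entropy(sb) * gaussian_filter(up, sigma)

    # Non-linear combination (here: elementwise product), then normalization.
    fused = global_sal * local_sal
    return (fused - fused.min()) / (np.ptp(fused) + 1e-6)
```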

A Study on Saliency-based Stroke LOD for Painterly Rendering (회화적 렌더링을 위한 세일리언시 기반의 스트로크 단계별 세부묘사 제어에 관한 연구)

  • Lee, Ho-Chang; Seo, Sang-Hyun; Yoon, Kyung-Hyun
    • Journal of KIISE: Computer Systems and Theory, v.36 no.3, pp.199-209, 2009
  • In this paper, we suggest a stroke level of detail (LOD) based on saliency density. In painterly rendering, stroke LOD has the advantage of drawing the observer's attention to the main object and improving the accuracy of expression. For stroke LOD, it is necessary to distinguish detailed areas from abstracted areas. We divide the image into areas on the basis of the saliency distribution, and the level of detailed expression is controlled by the saliency information. An area with a high saliency distribution is regarded as a major subject that the artist tries to express, so it is described in detail, while an area with a low saliency distribution is described abstractly. Each divided area is assigned an abstraction level, and by applying brushes whose sizes are appropriate to each level, the areas that need detailed expression can be rendered differently from those that should be expressed abstractly.
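A hedged illustration of the core idea above: regions with high mean saliency get small brushes (fine detail), low-saliency regions get large brushes (abstraction). The thresholds and brush radii are illustrative assumptions, not values from the paper.

```python
import numpy as np

def brush_size_for_region(saliency_map, region_mask,
                          levels=((0.66, 4), (0.33, 8), (0.0, 16))):
    """Return a brush radius for one region given its mean saliency.

    `levels` maps a minimum mean-saliency value to a brush radius;
    higher saliency -> smaller brush -> more detailed strokes.
    """
    mean_sal = float(saliency_map[region_mask].mean())
    for threshold, radius in levels:
        if mean_sal >= threshold:
            return radius
    return levels[-1][1]

# Example: a toy saliency map with one salient centre region.
sal = np.zeros((100, 100))
sal[40:60, 40:60] = 0.9
centre = np.zeros_like(sal, dtype=bool)
centre[40:60, 40:60] = True
print(brush_size_for_region(sal, centre))    # small brush -> detailed strokes
print(brush_size_for_region(sal, ~centre))   # large brush -> abstracted strokes
```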

Saliency Detection using Mutual Information of Wavelet Subbands (웨이블릿 부밴드의 상호 정보량을 이용한 세일리언시 검출)

  • Moon, Sang Whan; Lee, Ho Sang; Moon, Yong Ho; Eom, Il Kyu
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.6, pp.72-79, 2017
  • In this paper, we present a new saliency detection algorithm using the mutual information of wavelet subbands. Our method constructs intermediate saliency maps by applying a power operation and Gaussian blurring to the high-frequency wavelet coefficients. After combining the three intermediate saliency maps according to the direction of the wavelet subbands, we find the main directional component using an entropy measure. The mutual information of each subband is then computed with respect to the subband having the minimum entropy. The final saliency map is obtained by a Minkowski sum based on weights calculated from the mutual information. In experiments on the CAT2000 and ECSSD databases, our method showed good detection results in terms of ROC and AUC with low computation time compared with conventional methods.
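An illustrative sketch of the steps named in this abstract, not the authors' implementation: per-direction intermediate maps from high-frequency coefficients, entropy to pick a reference direction, mutual information against that reference as fusion weights, and a Minkowski-style combination. The histogram sizes, power exponent, and Minkowski order p are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def entropy(x, bins=64):
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=64):
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def saliency(channel, power=2.0, sigma=3.0, p=2.0):
    """channel: 2D float image; returns a normalized saliency map (subband size)."""
    _, (cH, cV, cD) = pywt.dwt2(channel, 'db4')
    # Intermediate map per direction: power operation + Gaussian blur.
    maps = [gaussian_filter(np.abs(sb) ** power, sigma) for sb in (cH, cV, cD)]
    # Reference direction = the subband map with minimum entropy.
    ref = int(np.argmin([entropy(m) for m in maps]))
    # Mutual information of each map with the reference map gives its weight.
    weights = np.array([mutual_information(m, maps[ref]) + 1e-6 for m in maps])
    weights /= weights.sum()
    # Minkowski-sum style fusion of the weighted directional maps.
    fused = sum(w * m ** p for w, m in zip(weights, maps)) ** (1.0 / p)
    return (fused - fused.min()) / (np.ptp(fused) + 1e-6)
```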

Cartoon Character Rendering based on Shading Capture of Concept Drawing (원화의 음영 캡쳐 기반 카툰 캐릭터 렌더링)

  • Byun, Hae-Won; Jung, Hye-Moon
    • Journal of Korea Multimedia Society, v.14 no.8, pp.1082-1093, 2011
  • Traditional rendering of cartoon characters cannot properly reproduce the feeling of the concept drawings. In this paper, we propose a capture technique that obtains a toon shading model from concept drawings and, with it, a new system for rendering 3D cartoon characters. The benefits of this system are that it cartoonizes the 3D character according to saliency, emphasizing the character's form, and that it supports a sketch-based user interface with which artists can edit the shading in post-production. To this end, we generate textures automatically with an RGB color sorting algorithm that analyzes the color distribution and proportions of a selected region. In the cartoon rendering process, we use saliency as a measure of the visual importance of each area of the 3D mesh and provide a cartoon rendering algorithm based on the saliency of the mesh. For fine adjustment of the shading style, we propose a user interface that allows artists to freely add and delete shading on the 3D model. Finally, this paper shows the usefulness of the proposed system through a user evaluation.
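A simplified, hedged sketch of saliency-driven toon shading as described above: per-vertex diffuse shading is quantized into more tone bands where mesh saliency is high and fewer bands where it is low, so salient parts keep more shading detail. The Lambertian shading model and band counts are assumptions; the paper's capture-based texture is not reproduced here.

```python
import numpy as np

def toon_shade(normals, saliency, light_dir, min_bands=2, max_bands=6):
    """normals: (N, 3) unit vertex normals; saliency: (N,) values in [0, 1]."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)
    diffuse = np.clip(normals @ light_dir, 0.0, 1.0)       # Lambert term
    # More bands (finer tone steps) for more salient vertices.
    bands = np.round(min_bands + saliency * (max_bands - min_bands)).astype(int)
    q = np.minimum(np.floor(diffuse * bands), bands - 1)   # quantize per vertex
    return q / np.maximum(bands - 1, 1)

# Example: two vertices with the same normal but different saliency; the
# salient vertex keeps a mid tone, the non-salient one snaps to a coarse band.
normals = np.array([[0.0, 0.8352, 0.55], [0.0, 0.8352, 0.55]])
print(toon_shade(normals, np.array([1.0, 0.0]), light_dir=[0, 0, 1]))
```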

Efficient Image Segmentation Algorithm Based on Improved Saliency Map and Superpixel (향상된 세일리언시 맵과 슈퍼픽셀 기반의 효과적인 영상 분할)

  • Nam, Jae-Hyun; Kim, Byung-Gyu
    • Journal of Korea Multimedia Society, v.19 no.7, pp.1116-1126, 2016
  • Image segmentation is widely used in the pre-processing stage of image analysis, so its accuracy is important for the performance of an image-based analysis system. We propose an efficient image segmentation method that includes a filtering process for superpixels, improved saliency map information, and a merging process. The proposed algorithm removes superpixels that are too small or dissimilar, based on a comparison of the areas of the smoothed superpixels, so that the generated superpixels remain of similar size. In addition, applying a bilateral filter to an existing saliency map, which represents human visual attention, improves the separation between objects and background. Finally, a segmentation result is obtained by the suggested merging process without any prior knowledge or information. The performance of the proposed algorithm is verified experimentally.
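A rough sketch of the pipeline this abstract outlines; the library calls, filter parameters, and the simple threshold-based merge are assumptions rather than the authors' implementation. It uses SLIC superpixels, a bilateral filter to sharpen the object/background separation in a given saliency map, and a per-superpixel saliency threshold to produce a foreground mask.

```python
import numpy as np
import cv2
from skimage.segmentation import slic

def segment(image_rgb, saliency, n_segments=300, sal_thresh=0.5):
    """image_rgb: HxWx3 uint8 image; saliency: HxW float map in [0, 1]."""
    labels = slic(image_rgb, n_segments=n_segments, compactness=10,
                  start_label=0)
    # Edge-preserving smoothing of the saliency map (d, sigmaColor, sigmaSpace).
    sal = cv2.bilateralFilter(saliency.astype(np.float32), 9, 0.1, 7)
    # Merge step: mark a superpixel as foreground if its mean saliency is high.
    fg = np.zeros(labels.shape, dtype=np.uint8)
    for lab in np.unique(labels):
        mask = labels == lab
        if sal[mask].mean() > sal_thresh:
            fg[mask] = 1
    return fg
```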

Extended Cartoon Rendering using 3D Texture (3차원 텍스처를 이용한 카툰 렌더링의 만화적 스타일 다양화)

  • Byun, Hae-Won; Jung, Hye-Moon
    • The Journal of the Korea Contents Association, v.11 no.8, pp.123-133, 2011
  • In this paper, we propose a new method for toon shading that uses a 3D texture to render 3D objects in a cartoon style. Conventional toon shading with a 1D texture determines the shading tone from the relative position and orientation of the light vector and the surface normal. A 1D texture alone cannot express the varied tone changes that arise under different viewing conditions. Barla et al. therefore replaced the 1D texture with a 2D texture whose second dimension corresponds to view-dependent effects such as level of abstraction and depth of field. The proposed scheme extends the 2D texture to a 3D texture by adding a dimension that encodes geometric information of the 3D object, such as curvature, saliency, or coordinates. This approach supports two kinds of extensions for diversifying the cartoon style. First, we support a "shape exaggeration effect" that emphasizes silhouettes or highlights according to the geometric information of the 3D object. Second, we incorporate "cartoon-specific effects", such as screen tone and out-of-focus effects, which frequently appear in cartoons. We demonstrate the effectiveness of our approach through examples in which a number of 3D objects are rendered in various cartoon styles.
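A hedged sketch of the 3D-texture idea: the toon colour is looked up with three coordinates instead of one. The axis meanings follow the abstract (tone from light vs. normal, a view-dependent coordinate, and a geometric coordinate such as curvature or saliency); the nearest-neighbour lookup and the toy texture contents below are assumptions.

```python
import numpy as np

def make_toon_texture_3d(tones=4, views=8, geoms=8):
    """Toy 3D toon texture: banded tone that is exaggerated as the geometric
    coordinate (e.g. high saliency or curvature) increases."""
    t = np.linspace(0, 1, 256)
    tone = np.minimum(np.floor(t * tones), tones - 1) / (tones - 1)  # banded tone
    tex = np.empty((256, views, geoms))
    for v in range(views):
        for g in range(geoms):
            emphasis = 1.0 + 0.5 * g / (geoms - 1)     # darken/exaggerate shape
            tex[:, v, g] = tone ** emphasis
    return tex

def shade(tex, n_dot_l, view_coord, geom_coord):
    """Nearest-neighbour lookup into the 3D toon texture (inputs in [0, 1])."""
    i = int(np.clip(n_dot_l, 0, 1) * (tex.shape[0] - 1))
    j = int(np.clip(view_coord, 0, 1) * (tex.shape[1] - 1))
    k = int(np.clip(geom_coord, 0, 1) * (tex.shape[2] - 1))
    return tex[i, j, k]

tex = make_toon_texture_3d()
print(shade(tex, n_dot_l=0.8, view_coord=0.2, geom_coord=0.9))
```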

Cel Shading for Apparent Shape (명확한 형태 표현을 위한 셀 쉐이딩)

  • Chung, Jae-Min; Seo, Sang-Hyun; Park, Young-Sup; Yoon, Kyung-Hyun
    • Journal of the Korea Computer Graphics Society, v.14 no.4, pp.19-25, 2008
  • In this paper, we present a new cel shading technique that uses a local light to increase local contrast for a clearer depiction of shape. To convey both detail and overall shape, we employ a virtual local light that represents the cubic effect and local shape. Moreover, we control the increase of local contrast using curvature as a measure of complexity and importance, so that the depicted local shape is adapted to the features of each area. Our technique depicts shape well regardless of where the global light is placed.
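A small sketch of the idea above, under stated assumptions (Lambertian lighting terms and a simple curvature weight; the paper's exact contrast-control terms are not reproduced): a virtual local light adds contrast on top of the global light, and curvature scales how strongly that local term is applied before cel quantization.

```python
import numpy as np

def cel_shade(normals, curvature, global_light, local_light,
              bands=3, local_strength=0.5):
    """normals: (N, 3) unit normals; curvature: (N,) values normalized to [0, 1]."""
    gl = np.asarray(global_light, dtype=float)
    gl /= np.linalg.norm(gl)
    ll = np.asarray(local_light, dtype=float)
    ll /= np.linalg.norm(ll)
    g = np.clip(normals @ gl, 0, 1)            # global Lambert term
    l = np.clip(normals @ ll, 0, 1)            # virtual local-light term
    # Curvature-weighted local contrast: complex (high-curvature) areas get a
    # stronger local-light contribution, flat areas keep the global tone.
    shade = g + local_strength * curvature * (l - g)
    # Quantize into cel bands.
    q = np.minimum(np.floor(np.clip(shade, 0, 1) * bands), bands - 1)
    return q / (bands - 1)
```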


Generating Extreme Close-up Shot Dataset Based On ROI Detection For Classifying Shots Using Artificial Neural Network (인공신경망을 이용한 샷 사이즈 분류를 위한 ROI 탐지 기반의 익스트림 클로즈업 샷 데이터 셋 생성)

  • Kang, Dongwann; Lim, Yang-mi
    • Journal of Broadcast Engineering, v.24 no.6, pp.983-991, 2019
  • This study aims to analyze movies, which contain various stories, according to the size of their shots. To achieve this, the dataset must be classified by shot size, such as extreme close-up shots, close-up shots, medium shots, full shots, and long shots. However, because typical video storytelling consists mainly of close-up, medium, full, and long shots, it is not easy to construct an appropriate dataset of extreme close-up shots. To solve this, we propose an image cropping method based on region of interest (ROI) detection. In this paper, we use face detection and saliency detection to estimate the ROI. By cropping the ROI of close-up images, we generate extreme close-up images. The dataset enriched by the proposed method is used to build a model for classifying shots by size. This study can help analyze the emotional changes of characters in video stories and predict how the composition of a story changes over time. If AI is used more actively in entertainment fields in the future, it is expected to influence the automatic adjustment and creation of characters, dialogue, and image editing.
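A hedged sketch of the ROI-crop idea: detect a face region in a close-up frame and crop tightly around it to synthesize an extreme close-up. The Haar cascade, margin factor, and output size are assumptions; the paper also uses saliency detection as a second ROI cue, which is omitted here for brevity.

```python
import cv2

def extreme_closeup(image_bgr, margin=0.15, out_size=(224, 224)):
    """Crop an extreme close-up around the largest detected face, or return
    None if no face is found (a saliency-based ROI could be used as fallback)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face box
    # Expand the box slightly, clamp to the image, and crop.
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1 = min(x + w + dx, image_bgr.shape[1])
    y1 = min(y + h + dy, image_bgr.shape[0])
    return cv2.resize(image_bgr[y0:y1, x0:x1], out_size)
```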