• Title/Summary/Keyword: 컬러보간 (color interpolation)

Information Hiding Method based on Interpolation using Max Difference of RGB Pixel for Color Images (컬러 영상의 RGB 화소 최대차분 기반 보간법을 이용한 정보은닉 기법)

  • Lee, Joon-Ho;Kim, Pyung-Han;Jung, Ki-Hyun;Yoo, Kee-Young
    • Journal of Korea Multimedia Society / v.20 no.4 / pp.629-639 / 2017
  • Interpolation-based information hiding methods are widely used to achieve information security. Conventional interpolation methods embed a secret bit stream into the image using neighboring pixel values and simple calculations such as averaging. However, these methods are restricted to grayscale images and are not appropriate for color images such as military images, whose characteristics they do not consider. In this paper, a new information hiding method based on interpolation using the RGB pixel values of a color image is proposed, and its effectiveness is analyzed through experiments.
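
  The abstract does not spell out the max-difference embedding rule, but the interpolation-based hiding idea it builds on can be sketched. Below is a minimal Python sketch of neighbor-mean interpolation with difference-driven embedding, a common baseline in this line of work; the function names, the 2x enlargement, and the floor(log2(d)) capacity rule are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def nmi_enlarge(img: np.ndarray) -> np.ndarray:
    """Enlarge one channel to (2h-1) x (2w-1) with neighbor-mean interpolation."""
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=np.int32)
    out[::2, ::2] = img                                  # keep original samples
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) // 2     # horizontal means
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) // 2     # vertical means
    out[1::2, 1::2] = (img[:-1, :-1]                     # diagonal: mean of the
                       + out[::2, 1::2][:-1, :]          # original and the two
                       + out[1::2, ::2][:, :-1]) // 3    # interpolated neighbors
    return out

def embed(cover: np.ndarray, bits: str) -> np.ndarray:
    """Hide a bit string in the interpolated pixels; capacity per pixel comes
    from the difference to the nearest original sample (clipping omitted)."""
    stego, k = nmi_enlarge(cover), 0
    for i in range(stego.shape[0]):
        for j in range(stego.shape[1]):
            if (i % 2 == 0 and j % 2 == 0) or k >= len(bits):
                continue                                 # original pixel, or done
            d = abs(int(stego[i, j]) - int(stego[i - i % 2, j - j % 2]))
            n = int(np.log2(d)) if d > 1 else 0          # embeddable bits here
            if n > 0:
                chunk = bits[k:k + n]
                stego[i, j] += int(chunk, 2)             # add the secret value
                k += len(chunk)
    return stego
```

  Extraction would re-run the interpolation on the preserved original samples and read the added values back; the paper applies the idea per RGB channel using the maximum channel difference.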

Edge-adaptive demosaicking method for complementary color filter array of digital video cameras (디지털 비디오 카메라용 보색 필터를 위한 에지 적응적 색상 보간 방법)

  • Han, Young-Seok;Kang, Hee;Kang, Moon-Gi
    • Journal of Broadcast Engineering / v.13 no.1 / pp.174-184 / 2008
  • The complementary color filter array (CCFA) is widely used in consumer-level digital video cameras, since it not only has high sensitivity and a good signal-to-noise ratio in low-light conditions but is also compatible with the interlaced scanning used in broadcast systems. However, full-color images obtained from a CCFA suffer from color artifacts such as false color and zipper effects. These artifacts can be removed with edge-adaptive demosaicking (ECD) approaches, which are generally used with the primary color filter array (PCFA). Unfortunately, the unique array pattern of the CCFA makes it difficult to adopt ECD approaches directly. Therefore, applying ECD approaches suited to the CCFA is one of the major issues in reconstructing full-color images. In this paper, we propose a new ECD algorithm for the CCFA. To estimate edge directions precisely and enhance the quality of the reconstructed image, a function of spatial variances is used as a weight, and new color conversion matrices are presented to account for various edge directions. Experimental results indicate that the proposed algorithm outperforms the conventional method with respect to both objective and subjective criteria.
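
  As a rough illustration of the variance-weighted, edge-adaptive interpolation the abstract describes, here is a sketch for a primary-color (Bayer) array; the paper's CCFA-specific pattern and color conversion matrices are not given in the abstract, so the neighborhoods and weight function below are assumptions.

```python
import numpy as np

def edge_adaptive_green(img, i, j):
    """Variance-weighted green estimate at a non-green site of a Bayer-style
    array (requires 2 <= i < h-2 and 2 <= j < w-2)."""
    gh = (img[i, j - 1] + img[i, j + 1]) / 2.0           # horizontal candidate
    gv = (img[i - 1, j] + img[i + 1, j]) / 2.0           # vertical candidate
    var_h = np.var([img[i, j - 2], img[i, j - 1], img[i, j + 1], img[i, j + 2]])
    var_v = np.var([img[i - 2, j], img[i - 1, j], img[i + 1, j], img[i + 2, j]])
    wh, wv = 1.0 / (var_h + 1e-6), 1.0 / (var_v + 1e-6)  # flatter direction wins
    return (wh * gh + wv * gv) / (wh + wv)
```

  Interpolating along the direction with the lower spatial variance avoids averaging across an edge, which is what produces zipper artifacts in non-adaptive demosaicking.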

Super Resolution Algorithm Based on Edge Map Interpolation and Improved Fast Back Projection Method in Mobile Devices (모바일 환경을 위해 에지맵 보간과 개선된 고속 Back Projection 기법을 이용한 Super Resolution 알고리즘)

  • Lee, Doo-Hee;Park, Dae-Hyun;Kim, Yoon
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.103-108 / 2012
  • Recently, as high-performance mobile devices and multimedia content have become widespread, Super Resolution (SR) techniques, which reconstruct low-resolution images into high-resolution images, are becoming important. On mobile devices, SR algorithms must take the computational load and memory into account because resources are restricted. In this paper, we propose a new single-frame fast SR technique suitable for mobile devices. To prevent color distortion, we convert the RGB color domain to the HSV color domain and process the brightness information V (Value), reflecting the characteristics of human visual perception. First, the low-resolution image is enlarged by the improved fast back projection, which also accounts for noise elimination. At the same time, a reliable edge map is extracted using LoG (Laplacian of Gaussian) filtering. Finally, the high-resolution image is reconstructed using the edge information and the improved back projection result. The proposed technique effectively removes the unnatural artifacts generated during super-resolution restoration, and edge information that could otherwise be lost is recovered and emphasized. The experimental results indicate that the proposed algorithm provides better performance than conventional back projection and interpolation methods.
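
  A minimal sketch of the two ingredients named in the abstract, plain iterative back projection and a LoG edge map, is shown below; the paper's improved fast variant and its edge-guided fusion step are not reproduced, and the blur kernel, scale factor, and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace, zoom

def back_projection_sr(lr, scale=2, iters=10):
    """Plain iterative back projection: push the low-resolution residual
    back into the high-resolution estimate (assumes even image sizes)."""
    hr = zoom(lr.astype(float), scale, order=1)          # initial upscale
    for _ in range(iters):
        simulated = zoom(gaussian_filter(hr, 1.0), 1.0 / scale, order=1)
        hr += zoom(lr - simulated, scale, order=1)       # back-project the error
    return hr

def log_edge_map(v_channel, sigma=1.5, thresh=0.02):
    """Edge map of the HSV V channel via Laplacian-of-Gaussian filtering."""
    return np.abs(gaussian_laplace(v_channel.astype(float), sigma)) > thresh
```

  In the paper's pipeline the edge map then guides where the back-projected result is sharpened or smoothed, keeping the iteration count low enough for mobile hardware.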

Reconstruction of the Lost Hair Depth for 3D Human Actor Modeling (3차원 배우 모델링을 위한 깊이 영상의 손실된 머리카락 영역 복원)

  • Cho, Ji-Ho;Chang, In-Yeop;Lee, Kwan-H.
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.1-9 / 2007
  • In this paper, we propose a technique for reconstructing the lost hair region for 3D human actor modeling. An active depth sensor system can capture both color and geometry information of an object simultaneously in real time. However, it cannot acquire regions whose surfaces are shiny or dark. Therefore, to obtain a natural 3D human model, lost regions in the depth image, especially the human hair region, should be recovered. The recovery uses both the color and depth images. We first identify the hair region from the color image. After the boundary of the hair region is estimated, its interior is estimated using an interpolation technique and a closing operation. A 3D mesh model is then generated through a series of operations including adaptive sampling, triangulation, mesh smoothing, and texture mapping. The proposed method generates the recovered 3D mesh stream automatically. The final 3D human model supports view interaction or haptic interaction by the user in a realistic broadcasting system.
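
  The recovery step, interpolation plus a closing operation inside the hair mask, might look roughly like the sketch below; the nearest-valid fill stands in for the paper's exact interpolation, and the hair mask is assumed to come from the color image.

```python
import numpy as np
from scipy.ndimage import binary_closing, distance_transform_edt

def fill_hair_depth(depth, hair_mask):
    """Fill unmeasured depth inside the hair region: close the lost mask,
    then copy depth from each pixel's nearest measured neighbor."""
    lost = binary_closing(hair_mask & (depth == 0), iterations=2)
    idx = distance_transform_edt(depth == 0, return_distances=False,
                                 return_indices=True)    # nearest valid pixel
    filled = depth.copy()
    filled[lost] = depth[tuple(i[lost] for i in idx)]
    return filled
```

  The filled depth map then feeds the meshing pipeline (adaptive sampling, triangulation, smoothing, texture mapping) described in the abstract.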

3D Facial Model Expression Creation with Head Motion (얼굴 움직임이 결합된 3차원 얼굴 모델의 표정 생성)

  • Kwon, Oh-Ryun;Chun, Jun-Chul;Min, Kyong-Pil
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.1012-1018 / 2007
  • In this paper, we propose a vision-based system for automatically generating expressions on a 3D face model. Previous work on 3D facial animation has focused on generating facial expressions while excluding the motion estimation that represents head movement, and facial motion estimation and expression control have been studied independently. The proposed expression generation system consists of three main parts: face detection, facial motion estimation, and expression control. Face detection comprises facial candidate region detection and face region detection: the candidate region is detected with an HT color model, and the face region is then extracted from it via PCA transformation and template matching. Facial motion estimation and facial expression control are performed on the detected face region. The head motion is estimated using the projection of a 3D cylinder model and the LK algorithm, and the estimated result is applied to the 3D face model; image correction makes the motion estimation robust. To generate expressions on the face model, a feature-point-based method is applied using 12 facial feature points. Feature points around the eyebrows, eyes, and mouth are detected using the structural information of the face and template matching, and tracked with the LK algorithm. Since the tracked feature-point locations combine head motion information with expression information, a geometric transformation is used to obtain the animation parameters, namely the feature-point displacements that would be observed if the face were frontal. The control points of the face model are moved according to the animation parameters, and the surrounding vertices are deformed by RBF interpolation. Facial expressions are generated from the deformed face model, and by applying the motion estimation result to the model, expressions of a 3D face model combined with head motion information are produced.
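
  The vertex deformation step, moving control points and interpolating the surrounding vertices with an RBF, can be sketched as follows; the Gaussian kernel and its width are assumptions, since the abstract only states that RBF interpolation is used.

```python
import numpy as np

def rbf_deform(vertices, ctrl_pts, ctrl_disp, sigma=0.05):
    """Displace mesh vertices by RBF interpolation of the control-point
    displacements (exact at the control points)."""
    kd = np.linalg.norm(ctrl_pts[:, None] - ctrl_pts[None, :], axis=2)
    K = np.exp(-(kd / sigma) ** 2)                        # control-point kernel
    coeffs = np.linalg.solve(K + 1e-9 * np.eye(len(ctrl_pts)), ctrl_disp)
    d = np.linalg.norm(vertices[:, None] - ctrl_pts[None, :], axis=2)
    return vertices + np.exp(-(d / sigma) ** 2) @ coeffs  # interpolated shift
```

  Solving the small linear system at the control points keeps their displacements exact while the kernel spreads the deformation smoothly to nearby vertices.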

Hierarchical Non-Rigid Registration by Bodily Tissue-based Segmentation : Application to the Visible Human Cross-sectional Color Images and CT Legs Images (조직 기반 계층적 non-rigid 정합: Visible Human 컬러 단면 영상과 CT 다리 영상에 적용)

  • Kim, Gye-Hyun;Lee, Ho;Kim, Dong-Sung;Kang, Heung-Sik
    • Journal of Biomedical Engineering Research / v.24 no.4 / pp.259-266 / 2003
  • Non-rigid registration between images of different modalities with shape deformation can be used for diagnosis and study in inter-patient image registration, longitudinal intra-patient registration, and registration between a patient image and an atlas image. This paper proposes a hierarchical registration method using bodily-tissue-based segmentation for registration between color images and CT images of the Visible Human leg areas. The cross-sectional color images and the axial CT images are each segmented into three distinct bodily tissue regions: fat, muscle, and bone. Each region is registered separately and hierarchically. Bounding boxes containing the bodily tissue regions in the different modalities are registered first. Then, the boundaries of the regions are registered globally within the search space. Local boundary segments of the regions are further registered for non-rigid registration of the sampled boundary points. Non-rigid registration parameters for the unsampled points are interpolated linearly. This hierarchical approach enables the method to register images efficiently. Moreover, registering visibly distinct bodily tissue regions yields accurate and robust results at region boundaries and inside the regions.
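
  The final step, linear interpolation of the non-rigid parameters at unsampled boundary points, might look like this minimal sketch; treating the boundary as a closed contour via the period argument is an assumption.

```python
import numpy as np

def interp_boundary_params(sample_idx, sample_disp, n_points):
    """Linearly interpolate (dx, dy) registration parameters from sampled
    boundary points to every point on a closed boundary of n_points."""
    all_idx = np.arange(n_points)
    dx = np.interp(all_idx, sample_idx, sample_disp[:, 0], period=n_points)
    dy = np.interp(all_idx, sample_idx, sample_disp[:, 1], period=n_points)
    return np.column_stack([dx, dy])
```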

The YIQ Model of Computed Tomography Color Image Variable Block with Fractal Image Coding (전산화단층촬영 칼라영상의 YIQ모델을 가변블록 이용한 프랙탈 영상 부호화)

  • Park, Jae-Hong;Park, Cheol-Woo
    • Journal of the Korean Society of Radiology / v.10 no.4 / pp.263-270 / 2016
  • This paper suggests techniques to reduce the coding time, which is a problem in traditional fractal compression, and to improve the fidelity of reconstructed images by determining fractal coefficients through adaptive selection of the block approximation formula. First, to reduce coding time, we construct a linear list of domain blocks characterized by their luminance and variance, and block searching time is controlled according to a first permissible threshold value. Next, a three-level block partition is employed for range blocks that cannot find a domain block with a satisfactory approximation error at the minimum partition level. The techniques were applied to 24-bpp color image compression. The results show that the proposed encoding method caused almost no loss of color image quality, and the YIQ-based coding achieved a compression rate and image quality as good as those of RGB-based coding.
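
  For reference, the RGB-to-YIQ conversion underlying the paper's color model is standard; the sketch below uses the NTSC matrix, while the paper's variable-block fractal coder itself is not reproduced.

```python
import numpy as np

# NTSC RGB -> YIQ matrix: Y carries luminance, I/Q carry chrominance,
# which tolerates coarser fractal approximation than the RGB planes do.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """Convert an HxWx3 RGB image with values in [0, 1] to YIQ planes."""
    return rgb @ RGB2YIQ.T
```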

Multi License Plate Recognition System using High Resolution 360° Omnidirectional IP Camera (고해상도 360° 전방위 IP 카메라를 이용한 다중 번호판 인식 시스템)

  • Ra, Seung-Tak;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE / v.21 no.4 / pp.412-415 / 2017
  • In this paper, we propose a multi license plate recognition system using a high-resolution 360° omnidirectional IP camera. The proposed system consists of a planar division part for the 360° circular image and a multi license plate recognition part. The planar division part converts the 360° circular image into a planar image with enhanced quality through circular image acquisition, circular image segmentation, conversion to a plane image, pixel correction using color interpolation, color correction, and edge correction in the high-resolution 360° omnidirectional IP camera. The multi license plate recognition part recognizes multiple plates in the planar image through candidate plate region extraction, normalization and restoration of the candidate regions, and plate number and character recognition using a neural network. To evaluate the proposed system, experiments were conducted with a specialist operator of intelligent parking control systems, and a high plate recognition rate of 97.8% was confirmed.
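
  The planar division part, unwarping the circular omnidirectional image to a panorama with interpolated pixel values, might look like the following sketch; the center, radii, and bilinear sampling are assumptions, since the camera's exact correction pipeline is not described in the abstract.

```python
import numpy as np

def unwrap_circular(img, cx, cy, r_in, r_out, out_w, out_h):
    """Unwarp a circular omnidirectional image (HxWx3) to a panorama,
    sampling source pixels with bilinear interpolation."""
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r = np.linspace(r_in, r_out, out_h)
    t, rr = np.meshgrid(theta, r)
    x, y = cx + rr * np.cos(t), cy + rr * np.sin(t)      # source coordinates
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = (x - x0)[..., None], (y - y0)[..., None]
    p = lambda yy, xx: img[np.clip(yy, 0, img.shape[0] - 1),
                           np.clip(xx, 0, img.shape[1] - 1)]
    top = p(y0, x0) * (1 - fx) + p(y0, x0 + 1) * fx      # blend along x
    bot = p(y0 + 1, x0) * (1 - fx) + p(y0 + 1, x0 + 1) * fx
    return (top * (1 - fy) + bot * fy).astype(img.dtype) # blend along y
```

  Bilinear sampling here plays the role of the "pixel correction using color interpolation" step: each panorama pixel is a weighted blend of the four nearest source pixels rather than a nearest-neighbor copy.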

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that includes 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning, we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are changed by use of a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video images.
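
  The feature tracking this pipeline relies on, pyramidal Lucas-Kanade optical flow, is available in OpenCV; the sketch below shows a typical call, with the window size and pyramid depth as assumed parameters.

```python
import cv2
import numpy as np

def track_features(prev_gray, next_gray, pts):
    """Track facial feature points between frames with pyramidal
    Lucas-Kanade optical flow; returns tracked points and a success mask."""
    pts = np.float32(pts).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return next_pts.reshape(-1, 2)[ok], ok
```

  The tracked displacements mix head motion with expression; the paper removes the rigid component via the cylinder-model pose so that only the frontal-pose expression displacements drive the animation parameters.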