• Title/Summary/Keyword: RGB 영상 (RGB image)

Search Result 716, Processing Time 0.027 seconds

Color Image Watermarking Using Human Visual System (인간시각시스템을 고려한 칼라 영상 워터마킹)

  • Lee, Joo-Shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.6 no.2 / pp.65-70 / 2013
  • In this paper, we propose a color image watermarking method based on the human visual system. The watermark is embedded by transforming the color image from RGB coordinates into HSI coordinates, exploiting the fact that chromatic components are less sensitive to the human eye than achromatic components. The watermark is embedded in the frequency domain of the chromatic channels using the discrete cosine transform, and is extracted from the watermarked image using the inverse discrete cosine transform. To verify the proposed method, a standard image and a fingerprint image are used as the original image and the watermark image, respectively. Simulation results show satisfactory invisibility and robustness against attacks such as image compression.
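The DCT-domain embedding described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mid-band coefficient positions, the strength `alpha`, and the difference-based extraction are all assumptions, and the RGB-to-HSI conversion step is omitted for brevity.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def embed_watermark(channel, bits, alpha=8.0):
    """Additively embed watermark bits into mid-band 2-D DCT coefficients."""
    n = channel.shape[0]
    D = dct_matrix(n)
    coeffs = D @ channel @ D.T                 # forward 2-D DCT
    for idx, b in enumerate(bits):             # assumed mid-band positions
        coeffs[idx + 1, n - 2 - idx] += alpha * (1 if b else -1)
    return D.T @ coeffs @ D                    # inverse 2-D DCT

def extract_watermark(marked, original, n_bits):
    """Recover bits from the sign of the coefficient differences."""
    n = original.shape[0]
    D = dct_matrix(n)
    diff = D @ (marked - original) @ D.T
    return [bool(diff[i + 1, n - 2 - i] > 0) for i in range(n_bits)]
```

Because the DCT is linear, the coefficient differences recover the embedded signs exactly in the absence of attacks; robustness to compression comes from choosing `alpha` large enough to survive quantization.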

Algorithm of Face Region Detection in the TV Color Background Image (TV컬러 배경영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.15 no.4 / pp.672-679 / 2011
  • In this paper, a skin-color-based face region detection algorithm for TV images is proposed. First, a reference image is set from sampled skin color, and candidate face regions are extracted using the Euclidean distance between it and the pixels of the TV image. The eye image is detected using the mean and standard deviation of the color-difference component between Y and C after converting the RGB color model to the CMY color model. The lip image is detected using the Q component after converting the RGB color model to the YIQ color space. The face region is then extracted through knowledge-based logical operations on the eye and lip images. To verify the proposed method, experiments were performed on frontal color images captured from TV. Experimental results show that the face region can be detected irrespective of the location and size of the human face.
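The first step, selecting candidate skin pixels by Euclidean distance to a sampled reference color, can be sketched as below; the reference color and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def skin_candidates(image, reference_rgb, threshold=60.0):
    """Mark pixels whose RGB Euclidean distance to the sampled
    skin colour falls below a threshold (assumed value)."""
    diff = image.astype(float) - np.asarray(reference_rgb, dtype=float)
    return np.linalg.norm(diff, axis=-1) < threshold
```

The resulting binary mask would then be combined with the eye and lip detections through the knowledge-based logical operations mentioned above.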

Enhancement of Atmospherically Degraded Images Using Color Analysis (영상의 색상분석을 사용한 대기 열화 영상의 가시성 향상)

  • Yoon, In-Hye;Kim, Dong-Gyun;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.1 / pp.67-72 / 2012
  • In this paper, we present an enhancement method for atmospherically degraded images using atmospheric light and transmission estimated through color analysis. We first generate a normalized image using the maximum value of each RGB color channel. The atmospheric light is then estimated for each RGB channel by computing the reflectance of the image. We also generate a transmission map using gamma coefficients derived from the Y channel of the image. The estimated atmospheric light and transmission significantly enhance the visibility of the image. The proposed algorithm removes atmospheric degradation better than existing techniques because its color analysis prevents the color distortion that is a common problem of those techniques. Experimental results demonstrate that the proposed algorithm improves visibility by removing fog, smoke, and dust.
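The recovery implied by the abstract follows the standard haze model I = J·t + A·(1 - t), inverted as J = (I - A)/t + A. The sketch below mirrors that structure; the per-channel maximum as the airlight estimate and the gamma-mapped Y channel as the transmission are simplified stand-ins for the paper's color analysis.

```python
import numpy as np

def dehaze(image, gamma=0.8, t_min=0.1):
    """Invert I = J*t + A*(1-t) with assumed estimates of A and t."""
    img = image.astype(float) / 255.0
    A = img.reshape(-1, 3).max(axis=0)               # per-channel airlight (assumption)
    y = img @ np.array([0.299, 0.587, 0.114])        # luminance (Y) channel
    t = np.clip(1.0 - 0.9 * y ** gamma, t_min, 1.0)  # hypothetical transmission map
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

Clamping the transmission at `t_min` avoids amplifying noise in densely degraded regions, which is the usual failure mode of this inversion.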

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to fusing two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and the objects in the surrounding scene, while the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; driver assistance systems, robotics, and other systems requiring visual information processing may find this work useful. Since the LIDAR provides only depth values, processing is required to generate a depthmap that corresponds to the RGB image. Experimental results are provided to validate the proposed approach.
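The registration step, projecting LIDAR points into the camera image through extrinsics (R, t) and intrinsics K, can be sketched with the standard pinhole model as follows; in any real setup the calibration values would come from a separate calibration procedure, so the ones used here are assumptions.

```python
import numpy as np

def lidar_to_depthmap(points, K, R, t, width, height):
    """Project Nx3 LIDAR points into the image plane to form a sparse depthmap."""
    cam = R @ points.T + t[:, None]        # LIDAR frame -> camera frame
    depth = cam[2]
    valid = depth > 0                      # keep points in front of the camera
    uv = K @ cam[:, valid]
    u = np.round(uv[0] / uv[2]).astype(int)
    v = np.round(uv[1] / uv[2]).astype(int)
    depthmap = np.zeros((height, width))
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depthmap[v[inside], u[inside]] = depth[valid][inside]
    return depthmap
```

Because the two sensors have different FOVs, only LIDAR points falling inside the camera frustum contribute; the rest are discarded by the `valid` and `inside` masks.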

Robust Watermarking toward Compression Attack in Color Image (압축공격에 강인한 칼라영상의 워터마킹)

  • Kim Yoon-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.3 / pp.616-621 / 2005
  • In this paper, a digital watermarking algorithm based on the human visual system and a transform domain is presented. First, the original image is separated into RGB channels, and the watermark is embedded into the DCT coefficients so as to account for contrast sensitivity and texture degree. In preprocessing, a DCT-domain transform is applied, and a binary image with a visually recognizable pattern is used as the watermark. Experimental results show that the proposed algorithm is robust and imperceptible under destructive attacks such as JPEG compression.

Design and Implementation of JPEG Image Display Board Using FPGA (FPGA를 이용한 JPEG Image Display Board 설계 및 구현)

  • Kwon Byong-Heon;Seo Burm-Suk
    • Journal of Digital Contents Society / v.6 no.3 / pp.169-174 / 2005
  • In this paper, we propose an efficient design and implementation of a JPEG image display board that can display JPEG images on a TV. NAND flash memory is used to store the compressed JPEG bit stream, and a video encoder is used to display the decoded JPEG image on the TV. The decoded YCbCr data is also converted to RGB in order to superimpose characters on the JPEG image. The designed board is implemented using an FPGA.
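The YCbCr-to-RGB stage mentioned above is a fixed matrix conversion. A common full-range BT.601 variant is shown below; the board may instead use the studio-range equations or a hardware fixed-point approximation.

```python
def ycbcr_to_rgb(y, cb, cr):
    """Full-range BT.601 YCbCr -> RGB conversion with clamping to 0..255."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)
```

For example, a neutral sample (Y=128, Cb=Cr=128) maps to mid-gray (128, 128, 128), which is a quick sanity check for a hardware implementation.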

Image Retrieval System based on RGB Array and Color Gray-Level (RGB 배열과 칼라 그레이-레벨에 기반한 영상검색 시스템)

  • Kim, Tae-Ohk;Kim, Hyung-Bum;Choung, Young-Chul;Rhee, Seung-Hak;Park, Jong-An
    • Proceedings of the IEEK Conference / 2006.06a / pp.273-274 / 2006
  • Much research has been conducted on techniques that use color information in color-based image retrieval. This paper proposes an image retrieval system that uses color information together with gray-level (intensity) features. For each pixel of a color image, the R, G, and B values are arranged in order of magnitude, and the color gray level is computed and quantized. Experiments show that using these color features enables retrieval that is robust to enlargement, reduction, and rotation of the image.
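The per-pixel feature described above, the size order of the R, G, B values plus a quantized gray level, can be sketched as follows; the channel mean used for the gray level and the number of quantization levels are assumptions.

```python
import numpy as np

def rgb_order_graylevel(image, levels=8):
    """Per pixel: channel indices sorted largest-first, and a quantised gray level."""
    img = image.astype(float)
    order = np.argsort(-img, axis=-1)              # e.g. R > G > B -> [0, 1, 2]
    gray = img.mean(axis=-1)                       # gray level (assumed: channel mean)
    q = np.minimum((gray / 256.0 * levels).astype(int), levels - 1)
    return order, q
```

Because the per-pixel channel ordering is unchanged by geometric transforms of the image, it supports the scale- and rotation-robust retrieval reported in the abstract.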

Robust Real-Time Visual Odometry Estimation from RGB-D Images (RGB-D 영상을 이용한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, Hye-Suk;Kim, Dong-Ha;Kim, In-Cheol
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.825-828 / 2014
  • In this paper, we propose a visual odometer that can efficiently compute real-time odometry from RGB-D input images in order to track the real-time pose of a camera moving with six degrees of freedom in 3-D space. To make full use of the rich information in the color and depth images while keeping the real-time computational load low, the proposed visual odometer uses a sparse, feature-based odometry computation. To improve accuracy, the system is designed to repeatedly alternate an additional inlier-set refinement step on the matched features with an odometry refinement step that uses them. Various performance analysis experiments on the TUM benchmark dataset confirm the high performance of the proposed visual odometer.
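The core refinement loop, alternating rigid-motion estimation on the current inlier set with re-selection of inliers, can be sketched for matched 3-D feature points as follows. The transform estimator is the standard SVD-based least-squares (Kabsch) solution; the iteration count and inlier threshold are assumed values, not the paper's.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def refine_odometry(src, dst, iters=3, thresh=0.05):
    """Alternate transform estimation with inlier re-selection."""
    inliers = np.ones(len(src), dtype=bool)
    for _ in range(iters):
        R, t = estimate_rigid_transform(src[inliers], dst[inliers])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
    return R, t
```

Keeping the odometry sparse, i.e. estimating from matched keypoints only rather than from dense pixels, is what keeps the per-frame cost compatible with real-time tracking.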

Image Generator Design for OLED Panel Test (OLED 패널 테스트를 위한 영상 발생기 설계)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE / v.24 no.1 / pp.25-32 / 2020
  • In this paper, we propose an image generator for OLED panel testing that can compensate color coordinates and luminance using panel defect inspection and optical measurement while displaying images on the OLED panel. The proposed image generator consists of two processes: image generation, and compensation of color coordinates and luminance using optical measurement. In the image generation process, the panel information needed to drive the panel is received, and the image is output by adjusting the output settings of the image generator according to that information. The image is output in digital RGB format. The pattern generation algorithm inside the image generator outputs color and gray image data by transmitting color data on a 24-bit data line, based on a synchronization signal matched to the resolution of the panel. In the compensation process, an image is output to the OLED panel by the image generator, and the portions where the color coordinates and luminance measured by an optical module differ from the reference data are compensated. To evaluate the accuracy of the proposed image generator, a Xilinx Spartan-6 series XC6SLX25-FG484 FPGA was used, with ISE 14.5 as the design tool. For the image generation process, the target setting values and the simulated digital RGB output measured with an oscilloscope were confirmed to match. Compensation of color coordinates and luminance using optical measurement showed accuracy within the error rate specified by the panel manufacturer.

A Reference Frame Selection Method Using RGB Vector and Object Feature Information of Immersive 360° Media (실감형 360도 미디어의 RGB 벡터 및 객체 특징정보를 이용한 대표 프레임 선정 방법)

  • Park, Byeongchan;Yoo, Injae;Lee, Jaechung;Jang, Seyoung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.4 / pp.1050-1057 / 2020
  • Immersive 360-degree media suffers from slow video recognition when processed by conventional methods, because various rendering methods are used and the video is larger, with higher quality and far greater volume than existing video. In addition, due to the characteristics of immersive 360-degree media, in most cases only one scene is captured with a camera fixed in a specific place, so it is unnecessary to extract feature information from every scene. In this paper, we propose a reference frame selection method for immersive 360-degree media and describe its application to copyright protection technology. The proposed method performs three pre-processing steps: frame extraction from the immersive 360-degree media, frame downsizing, and spherical rendering. In the rendering process, the video is divided into 16 frames and captured. In the central part, where most of the object information lies, objects are extracted using a per-pixel RGB vector and deep learning, and a reference frame is selected using the object feature information.
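The pre-processing steps above can be sketched as follows: block-average downsizing of each extracted frame, then scoring the central region of each of the 16 captured frames to pick a reference. The variance-based score below is purely an illustrative stand-in for the paper's deep-learning object features.

```python
import numpy as np

def downsize(frame, factor=4):
    """Block-average downsizing of an HxWx3 frame."""
    h, w, c = frame.shape
    return frame[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def select_reference_frame(frames):
    """Pick the frame whose central region is richest in colour content,
    scored by per-pixel RGB variance (stand-in for object features)."""
    scores = []
    for f in frames:
        h, w, _ = f.shape
        centre = f[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        scores.append(centre.reshape(-1, 3).var(axis=0).sum())
    return int(np.argmax(scores))
```

Scoring only the central region matches the observation that, with a fixed camera, the informative objects concentrate there, so extracting features from every scene is unnecessary.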