• Title/Summary/Keyword: color gradient (색상기울기)

Search results: 46

Color-Depth Combined Semantic Image Segmentation Method (색상과 깊이정보를 융합한 의미론적 영상 분할 방법)

  • Kim, Man-Joung;Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.3 / pp.687-696 / 2014
  • This paper presents a semantic object extraction method using a user's stroke input, color, and depth information. It is assumed that a semantically meaningful object is enclosed by a few user strokes and has similar depths throughout. In the proposed method, the region of interest (ROI) is determined from the stroke input, and the semantically meaningful object is extracted using color and depth information. Specifically, the proposed method consists of two steps. The first step is over-segmentation inside the ROI using color and depth information. The second step is object extraction, in which over-segmented regions are classified as object or background according to the depth of each region. For the over-segmentation step, we propose a new marker extraction method with two components: an adaptive thresholding scheme that maximizes the number of segmented regions, and an adaptive weighting scheme for the color and depth components in computing the morphological gradients required for marker extraction. For object extraction, over-segmented regions are classified as object or background, proceeding from the boundary regions toward the inner regions, by comparing the average depth of each region to the average depth of all regions already classified as object. Experimental results demonstrate that the proposed method yields reasonable object extraction results.
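The weighted color-plus-depth morphological gradient used in the marker extraction step can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 3x3 window and the fixed weight `alpha` are assumptions (the paper makes the weighting adaptive):

```python
def morph_gradient(img):
    """Morphological gradient: local max minus local min over a 3x3 window."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = max(window) - min(window)
    return out

def combined_gradient(color, depth, alpha=0.5):
    """Blend color and depth gradients; alpha is an assumed weight here,
    adaptively chosen in the paper."""
    gc, gd = morph_gradient(color), morph_gradient(depth)
    return [[alpha * c + (1 - alpha) * d for c, d in zip(rc, rd)]
            for rc, rd in zip(gc, gd)]
```

Markers would then be placed in flat (low-gradient) areas of this combined map before watershed-style over-segmentation.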

Estimation of the Medium Transmission Using Graph-based Image Segmentation and Visibility Restoration (그래프 기반 영역 분할 방법을 이용한 매체 전달량 계산과 가시성 복원)

  • Kim, Sang-Kyoon;Park, Jong-Hyun;Park, Soon-Young
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.4 / pp.163-170 / 2013
  • In general, images of outdoor scenes often contain degradation due to dust, water drops, haze, fog, smoke, and so on, which causes contrast reduction and color fading. Haze removal is not an easy problem because of the inherent ambiguity between the haze and the underlying scene. We therefore propose a novel method for single-scene dehazing that uses graph-based region segmentation with a gradient value as the cost function. We segment the scene into different regions according to depth-related information and then estimate the global atmospheric light. The medium transmission can be estimated directly from the threshold function of the graph-based segmentation algorithm. After estimating the medium transmission, we can restore the haze-free scene. We evaluated the degree of visibility restoration of the proposed method against existing methods by computing the gradient of the edges between the restored and original scenes. Results on a variety of outdoor hazy scenes demonstrate the strong haze removal and enhanced image quality of the proposed method.
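The restoration step relies on the standard haze model I = J·t + A·(1 − t): once the medium transmission t and the atmospheric light A are estimated, the scene radiance J is recovered by inverting it. A minimal per-pixel sketch, where the clamp `t0` is a conventional assumption to avoid noise amplification (not a value from this paper):

```python
def dehaze_pixel(I, A, t, t0=0.1):
    """Invert the haze model I = J*t + A*(1 - t) to recover radiance J.
    I and A are RGB tuples in [0, 1]; t is the estimated transmission."""
    t = max(t, t0)  # clamp: a tiny t would blow up sensor noise
    return tuple((c - a) / t + a for c, a in zip(I, A))
```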

Real-Time Virtual-View Image Synthesis Algorithm Using Kinect Camera (키넥트 카메라를 이용한 실시간 가상 시점 영상 생성 기법)

  • Lee, Gyu-Cheol;Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.5 / pp.409-419 / 2013
  • Kinect, released by Microsoft in November 2010, is a motion-sensing camera for the Xbox 360 that provides depth and color images. However, because it uses an infrared pattern, the Kinect camera generates holes and noise around object boundaries in the obtained images, and a boundary flickering phenomenon occurs. We therefore propose a real-time virtual-view video synthesis algorithm that produces a high-quality virtual view by solving these problems. In the proposed algorithm, holes around the boundary are filled using the joint bilateral filter. The color image is converted into an intensity image, and flickering pixels are found by analyzing the variation of the intensity and depth images. Boundary flickering is then reduced by replacing the values of flickering pixels with the maximum pixel value of the previous depth image, and virtual views are generated by applying a 3D warping technique. Holes outside the occlusion region are filled with the center pixel value of the most reliable block after the final block reliability is computed using a block-based gradient search algorithm with block reliability. The experimental results show that the proposed algorithm generates the virtual-view image in real time.
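The idea of joint bilateral hole filling is that weights come from spatial distance and *color* similarity, so filled depth edges follow color edges. A 1-D sketch under assumed parameters (kernel radius, sigmas, and the convention that 0 marks a depth hole are all illustrative):

```python
import math

def joint_bilateral(depth, color, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Fill/smooth a 1-D depth line using the color line as guidance."""
    out = []
    n = len(depth)
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            if depth[j] == 0:      # 0 marks a hole; never use it as a source
                continue
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((color[i] - color[j]) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            vsum += w * depth[j]
        out.append(vsum / wsum if wsum > 0 else 0.0)
    return out
```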

Automatic Depth Generation Using Laws' Texture Filter (로스 텍스처 필터 기반 영상의 자동 깊이 생성 기법)

  • Jo, Cheol-Yong;Kim, Je-Dong;Jang, Sung-Eun;Choi, Chang-Yeol;Kim, Man-Bae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.11a / pp.87-90 / 2009
  • Extracting depth information from an image is a very difficult problem. It requires analyzing various types of image structures and, in many cases, relies on subjective judgment. This paper proposes a method for automatically generating depth from a still image based on Laws' texture filters. Laws' texture filters have been used to obtain 3D depth in monocular vision; to predict depth from a real 2D image, texture variation, texture gradient, and color are exploited. From $1{\times}5$ vectors, about twenty $5{\times}5$ convolution filters are obtained by convolution, and applying these filters to the image yields the Laws energy. The computed energy is converted into a depth map, feature points are extracted from the depth map, and a triangular depth mesh is obtained from the feature points using Delaunay triangulation. To evaluate the generated depth map, the 3D structure of the image was analyzed while changing the camera viewpoint, and stereoscopic images were generated and assessed through 3D stereoscopic viewing. The experiments confirmed that the proposed Laws-texture-filter-based depth generation method achieves good results.
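As a sketch of how the 2-D Laws masks arise: each $5{\times}5$ mask is the outer product of two of the five classic $1{\times}5$ vectors (level, edge, spot, ripple, wave), giving 25 masks, of which roughly twenty are typically kept after dropping or averaging symmetric pairs. The vector values below are the standard Laws definitions, not taken from this paper:

```python
# Laws' 1x5 basis vectors: Level, Edge, Spot, Ripple, Wave
L5 = [1, 4, 6, 4, 1]
E5 = [-1, -2, 0, 2, 1]
S5 = [-1, 0, 2, 0, -1]
R5 = [1, -4, 6, -4, 1]
W5 = [-1, 2, 0, -2, 1]

def laws_mask(v, h):
    """5x5 Laws mask: outer product of a vertical and a horizontal vector."""
    return [[a * b for b in h] for a in v]

# All 25 vertical x horizontal combinations
masks = [laws_mask(v, h) for v in (L5, E5, S5, R5, W5)
                         for h in (L5, E5, S5, R5, W5)]
```

Convolving the image with each mask and taking a local sum of absolute responses gives the per-pixel Laws energy used as a texture cue.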


Study of the coloration factors of tinted lenses (착색렌즈의 색비율 정량화에 관한 연구)

  • Lim, Yong-Mu;Shim, Moon-Sik;Jung, Ju-Hyun
    • Journal of Korean Ophthalmic Optics Society / v.6 no.2 / pp.37-46 / 2001
  • In this study, we compared the properties of tinted and colored lenses with the standards in terms of ordinary optical properties, coloration, UV, IR, and luminous transmittance, color acceptance for traffic signals, and chromaticity. The UV, IR, and luminous transmittances were analyzed in the range of 175 nm to 3000 nm using a UV-Vis-IR spectrophotometer and were used in synthesizing the spectra of various color lenses. An empirical function was constructed from the slope of the transmittance change as a function of coloration time. With the requirements of the US standard (ANSI Z-80.3), we could predict the ordinary optical properties and color acceptance for traffic signals of various color lenses.


Graphic Hardware Based Visualization of Three Dimensional Object Boundaries in Volume Data Set Using Three Dimensional Textures (그래픽 하드웨어기반의 3차원 질감을 사용한 볼륨 데이터의 3차원 객체 경계 가시화)

  • Kim, Hong-Jae;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.11 no.5 / pp.623-632 / 2008
  • In this paper, we used a color transfer function and an opacity transfer function to visualize the internal 3D objects of volume data. In a transfer function, assigning values between boundaries is generally ambiguous. We focused on extracting boundary features in order to segment the objects to be volume-rendered. Accordingly, we extracted an image gradient feature in the spatial domain and created a multi-dimensional transfer function exploiting the efficiency of the GPU. Using these functions, we obtained good results in visualizing object boundaries with graphics-hardware-based 3D texture mapping.
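A multi-dimensional transfer function of the kind described maps the pair (scalar value, gradient magnitude) to opacity, so that boundaries (high gradient) near a value of interest are emphasized. A minimal sketch; the tent shape and every parameter here are illustrative assumptions, not the paper's function:

```python
def opacity_2d(value, gradient, v_center=0.5, v_width=0.1, g_scale=1.0):
    """2-D transfer function: opacity peaks near v_center and grows with
    gradient magnitude, highlighting object boundaries in a volume."""
    value_term = max(0.0, 1.0 - abs(value - v_center) / v_width)  # tent
    return min(1.0, value_term * gradient * g_scale)
```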


Disparity Estimation Method using Smooth Filtering based Adaptive Weighting (평활화 필터 기반 적응적 가중치를 이용한 변위 추정 방법)

  • Mun, Ji-Hun;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.92-93 / 2016
  • Various cost computation functions and cost aggregation methods have been developed to estimate accurate disparity information. In this paper, the gradients of the left and right images and the SAD (Sum of Absolute Differences) are used for cost computation, and guided image filtering is used for cost aggregation. The result of guided image filtering varies greatly with the choice of guidance image; when the original input image used for stereo matching serves as the guidance image, filtering can preserve boundary regions because it contains accurate pixel values. However, guided filtering considers only the variance of the predefined neighborhood distance and the color difference in the guidance image, so it is highly dependent on the chosen parameter values. To reduce this parameter dependence and improve boundary accuracy, we first extract boundary regions using a smoothing filter. When boundaries are extracted from the original input image, much texture information inside objects is also extracted, whereas using a smoothing filter extracts only the true boundary information. A high weight is applied only to the extracted boundary regions, which is then combined with conventional guided image filtering to aggregate the final cost. Using the proposed method, we obtained a final disparity map with improved boundary accuracy.
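The gradient-plus-SAD matching cost can be sketched for a single image row as below. The truncation thresholds and the blend weight `alpha` are common illustrative choices, not the paper's values, and a real implementation would aggregate these per-pixel costs with the guided filter:

```python
def matching_cost(left, right, x, d, alpha=0.9, tau_c=7, tau_g=2):
    """Truncated SAD + gradient cost for pixel x at disparity d (1-D rows)."""
    xr = x - d  # corresponding pixel in the right image
    c_sad = min(abs(left[x] - right[xr]), tau_c)
    # central-difference gradients, clamped at the row ends
    gl = left[min(x + 1, len(left) - 1)] - left[max(x - 1, 0)]
    gr = right[min(xr + 1, len(right) - 1)] - right[max(xr - 1, 0)]
    c_grad = min(abs(gl - gr), tau_g)
    return (1 - alpha) * c_sad + alpha * c_grad
```

The correct disparity yields a low cost because both the intensities and the local gradients line up.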


Automatic Source Classification Algorithm using Mean-Shift Clustering and stepwise merging in Color Image (컬러영상에서 Mean-Shift 군집화와 단계별 병합 방법을 이용한 자동 원료 선별 알고리즘)

  • Kim, Sang-Jun;Jang, JiHyeon;Ko, ByoungChul
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1597-1599 / 2015
  • This paper proposes a Mean-Shift clustering algorithm and a stepwise merging method for raw-material images captured with a color CCD camera, in order to separate good products from defective ones among raw materials such as grains and ores. First, the background is removed from a training image of the raw material, and a foreground map is obtained using morphology based on the color distribution of the image. The Mean-Shift clustering algorithm is applied to the foreground map to divide the image into N clusters, and similar clusters are merged step by step by comparing positional proximity and representative-color similarity. Each merged raw-material object is represented by 2D color distributions over RG/GB/BR so that the correlations between image channels are reflected. From the 2D color distribution of each object, the principal-component slope of the distribution and distribution ellipses are generated. The per-object distribution ellipses serve as thresholds for detecting good and defective products in test images. Experiments on various raw-material images show that, compared with existing sorting methods, the proposed method requires less manual adjustment and yields accurate sorting results.
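The core Mean-Shift step repeatedly moves a point to the mean of its neighbors within a bandwidth until it settles on a density mode. A 1-D sketch with a flat kernel (the bandwidth and convergence tolerance are illustrative assumptions; the paper works in image/color space):

```python
def mean_shift_1d(points, start, bandwidth=2.0, iters=50):
    """Shift `start` to the mean of in-bandwidth neighbors until it converges."""
    x = start
    for _ in range(iters):
        neigh = [p for p in points if abs(p - x) <= bandwidth]
        new_x = sum(neigh) / len(neigh)
        if abs(new_x - x) < 1e-6:  # converged to a mode
            break
        x = new_x
    return x
```

Points that converge to the same mode form one cluster; the stepwise merging described above would then fuse modes that are close in position and representative color.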

The Fabrication of HCD Ion Plating Apparatus and XPS Analysis on the Fine Color Changes of TiN Films on Stainless Steel (HCD 이온플레이팅 장치 제작 및 Stainless Steel 위에 TiN 박막의 미세색상변화에 따른 XPS분석)

  • Park, Moon Chan;Lee, Jong Geun;Choi, Kwang Ho;Cha, Jung Won;Kim, Eung Soon;Park, Jin Hong
    • Journal of Korean Ophthalmic Optics Society / v.15 no.4 / pp.361-366 / 2010
  • Purpose: An HCD ion plating apparatus using the hollow cathode discharge method was fabricated, TiN films were deposited on stainless steel with this apparatus under increasing $N_2$ gas flow, and the fine color changes of the TiN films were analyzed. Methods: A spectroradiometer and a spectrophotometer were used to optically observe the fine color changes of the TiN thin films, and XPS was used to analyze the composition of the TiN thin films with increasing $N_2$ gas flow. Results: The color coordinate of the TiN thin film at an $N_2$ gas flow of 120 sccm was (0.382, 0.372), a mixed color of gold and silver, and the (x, y) coordinate increased with increasing $N_2$ gas flow, indicating a deep gold color. The slopes of the reflectances at 550 nm increased with increasing $N_2$ gas flow. From the Ti scans using XPS, the peak heights at 455 eV, attributed to the TiN composition, increased with increasing $N_2$ gas flow, while the peak heights at 459 eV, from the $TiO_2$ composition, decreased. Conclusions: The mixed silver-gold color of the TiN film at 120 sccm $N_2$ gas flow was attributed to TiC, $N_2$, and TiN on the surface and TiN and $N_2$ inside the film, and the color of the TiN films changed to a deep gold with increasing $N_2$ gas flow owing to the increasing TiN composition.

A Robust License Plate Extraction Method for Low Quality Images (저화질 영상에서 강건한 번호판 추출 방법)

  • Lee, Yong-Woo;Kim, Hyun-Soo;Kang, Woo-Yun;Kim, Gyeong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.2 / pp.8-17 / 2008
  • This paper proposes a robust license plate extraction method for images taken under unconstrained environments. Using the color and edge information in a complementary fashion allows the proposed method to cope not only with various lighting conditions but also with blocking artifacts frequently observed in compressed images. Computational complexity is significantly reduced by applying the Hough transform for skew-angle estimation, and the subsequent de-skewing procedure, only to the candidate regions. The true plate region is determined from the candidates using clues including the aspect ratio, the number of zero crossings along vertical scan lines, and the number of connected components. The performance of the proposed method is evaluated on compressed images collected under various realistic circumstances. The experimental results show a correct license plate extraction rate of 94.9%.
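The zero-crossing clue exploits the fact that a scan line across a plate alternates rapidly between dark characters and a bright background. A minimal sketch of counting sign changes along a thresholded scan line (the threshold value is an assumption, not from the paper):

```python
def zero_crossings(scanline, threshold=0):
    """Count sign changes of a scan line binarized against a threshold.
    Plate regions, with densely packed characters, yield many crossings."""
    signs = [1 if v > threshold else -1 for v in scanline]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```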