• Title/Summary/Keyword: color depth


Synthesis and Application of Color Depth Black Disperse Dyes for PET Fabric (PET 직물용 심색성 분산염료의 합성과 Black 염색)

  • Kim, Hye-Jin;Kim, Jae-Ho;Kim, Dong-Uk;Hong, Seung-Pyo;Kim, Sang-Jin;Kim, Hee-Dong;Kim, Hyun-Ah;Huh, Man-Woo
    • Textile Coloration and Finishing
    • /
    • v.26 no.4
    • /
    • pp.290-296
    • /
    • 2014
  • To produce a black disperse dye with high heat resistance and color depth for polyester (PET), an orange disperse dye was designed and synthesized from pyridine-based derivatives to obtain high heat resistance. The blue disperse dye adopts a heterocyclic structure for a high molar extinction coefficient and long-wavelength absorption. The synthesized disperse dyes were micronized to a particle size of 0.7 μm. The mixing condition for black color, using commercial Disperse Violet 93, is 30% blue dye, 21% red dye, and 21% orange dye. PET fabric dyed with the synthesized dyes shows quite good color fastness to sublimation (grade 3-4) and excellent rubbing, washing, and light fastness (grade 4-5).

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect (키넥트 깊이 정보와 DSLR을 이용한 스테레오스코픽 비디오 합성)

  • Kwon, Soon-Chul;Kang, Won-Young;Jeong, Yeong-Hu;Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.10
    • /
    • pp.920-927
    • /
    • 2013
  • The chroma key technique, which composites images by separating an object from a background of a specific color, imposes restrictions on color and space. In particular, unlike general chroma keying, image composition for stereo 3D display requires a natural composition method in 3D space. This study composites images in 3D space using a depth keying method based on high-resolution depth information. A high-resolution depth map was obtained through camera calibration between the DSLR and the Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values. The object was converted into a point cloud in 3D space after being separated from its background according to depth information. Finally, the 3D virtual background and the object were composited, and stereo 3D images were rendered and played using a virtual camera.
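
The depth keying idea described above amounts to thresholding a registered depth map instead of a background color. A minimal sketch (not the paper's implementation; the array shapes, units, and threshold values are assumptions):

```python
import numpy as np

def depth_key(color, depth, near, far):
    """Separate a foreground object from its background by depth range.

    color: (H, W, 3) RGB image; depth: (H, W) depth map in the same
    coordinate frame (i.e. after calibration/registration).
    Pixels whose depth lies inside [near, far] are kept as foreground.
    """
    mask = (depth >= near) & (depth <= far)   # foreground mask
    fg = np.where(mask[..., None], color, 0)  # zero out background pixels
    return fg, mask

# Toy example: a 2x2 image with two "near" pixels.
color = np.full((2, 2, 3), 200, dtype=np.uint8)
depth = np.array([[0.8, 2.5],
                  [3.0, 0.9]])               # metres (assumed)
fg, mask = depth_key(color, depth, near=0.5, far=1.5)
```

Unlike a color key, the mask here is independent of the object's colors, which is why no green screen is needed.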

Color-Depth Combined Semantic Image Segmentation Method (색상과 깊이정보를 융합한 의미론적 영상 분할 방법)

  • Kim, Man-Joung;Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.3
    • /
    • pp.687-696
    • /
    • 2014
  • This paper presents a semantic object extraction method using a user's stroke input together with color and depth information. It is assumed that a semantically meaningful object is enclosed by a few user strokes and has similar depths across the object. In the proposed method, the region of interest (ROI) is determined from the stroke input, and the semantically meaningful object is extracted using color and depth information. Specifically, the method consists of two steps. The first step is over-segmentation inside the ROI using color and depth information. The second step is object extraction, in which the over-segmented regions are classified into object and background regions according to the depth of each region. For the over-segmentation step, we propose a new marker extraction method with two components: an adaptive thresholding scheme that maximizes the number of segmented regions, and an adaptive weighting scheme for the color and depth components in the computation of the morphological gradients required for marker extraction. For object extraction, we classify the over-segmented regions into object and background regions in order from the boundary regions to the inner regions, comparing the average depth of each region with the average depth of all regions already classified as object. Experimental results demonstrate that the proposed method yields reasonable object extraction results.
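
The second step above, classifying over-segmented regions by comparing each region's average depth with the running average of the regions already labeled as object, can be sketched as follows (an illustrative simplification; the region ordering, seed depth, and tolerance are assumptions, not values from the paper):

```python
def classify_regions(region_depths, seed_depth, tol):
    """Label over-segmented regions as object (True) or background (False).

    region_depths: average depths, ordered from boundary to inner regions.
    seed_depth: average depth of the initial object region (e.g. under
    the user's strokes). A region joins the object if its depth is within
    `tol` of the running object average, which is then updated.
    """
    object_depths = [seed_depth]
    labels = []
    for d in region_depths:
        avg = sum(object_depths) / len(object_depths)
        if abs(d - avg) <= tol:
            labels.append(True)
            object_depths.append(d)   # update the running object average
        else:
            labels.append(False)
    return labels

# Regions at depths 1.0 and 1.1 join the object seeded near depth 1.0;
# the region at depth 3.0 is classified as background.
labels = classify_regions([1.0, 3.0, 1.1], seed_depth=1.0, tol=0.3)
```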

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.8-17
    • /
    • 2011
  • Recently, fusion camera systems consisting of depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (TOF) depth sensor. The physical limitations of depth sensors usually produce low-resolution images compared to the corresponding color images. Therefore, a pre-processing module, comprising camera calibration, three-dimensional warping, and hole filling, is necessary to generate a high-resolution depth map aligned with the image plane of the color image. However, the result of this pre-processing step is usually inaccurate due to errors from camera calibration and depth measurement. In this paper, we therefore present a depth map upsampling method robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map. Then, a detailed depth map is generated by a modified kernel regression method that excludes depth values with low confidence. The proposed algorithm guarantees high-quality results in the presence of camera calibration errors. Experimental comparison with other data fusion techniques shows the superiority of the proposed method.
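
The confidence-gated kernel regression described above can be sketched in one dimension: each output depth is a kernel-weighted average of nearby samples, with low-confidence samples excluded entirely (a minimal illustration; the Gaussian kernel, bandwidth, and confidence threshold are assumptions, not the paper's modified kernel):

```python
import math

def kernel_regress(x, samples, confidences, bandwidth, conf_min=0.5):
    """Estimate depth at position x from (position, depth) samples.

    Samples whose confidence falls below `conf_min` are excluded, so a
    miscalibrated or badly measured depth cannot pull the estimate.
    Remaining samples are weighted by a Gaussian kernel of the distance.
    """
    num = den = 0.0
    for (pos, depth), c in zip(samples, confidences):
        if c < conf_min:
            continue                      # drop low-confidence depth
        w = math.exp(-((x - pos) ** 2) / (2 * bandwidth ** 2))
        num += w * depth
        den += w
    return num / den if den > 0 else None

# Two trusted samples at depth 1.0 and one untrusted outlier at 9.0:
samples = [(0.0, 1.0), (1.0, 9.0), (2.0, 1.0)]
conf = [0.9, 0.1, 0.9]
d = kernel_regress(1.0, samples, conf, bandwidth=1.0)
```

Without the confidence gate, the outlier at position 1.0 would dominate the estimate at that position; with it, the estimate comes from the trusted neighbors only.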

Direct Depth and Color-based Environment Modeling and Mobile Robot Navigation (스테레오 비전 센서의 깊이 및 색상 정보를 이용한 환경 모델링 기반의 이동로봇 주행기술)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.3
    • /
    • pp.194-202
    • /
    • 2008
  • This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environment modeling, we directly use the depth and color information of image pixels as visual features. Furthermore, only the depth and color information along the horizontal centerline of the image, through which the optical axis passes, is used. The advantage of this approach is that a measure between the model and the sensing data can easily be built on the horizontal centerline alone, since the vertical working volume between model and sensing data can change with robot motion. The map of the indoor environment is therefore a compact and efficient representation. Based on such nodes and the sensing data, we also suggest a method for estimating the mobile robot's position with a random-sampling stochastic algorithm. Basic real-world experiments show that the proposed method can serve as an effective visual navigation algorithm.
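
The centerline feature idea above amounts to keeping only the image row the optical axis passes through, which shrinks each node's model from a full image to a single row (a minimal sketch; the array shapes are assumptions):

```python
import numpy as np

def centerline_features(color, depth):
    """Extract depth and color along the horizontal centerline.

    color: (H, W, 3) image; depth: (H, W) map from the stereo sensor.
    Returns the single row at H // 2, the compact per-node feature
    that the mapping and localization steps compare against.
    """
    row = color.shape[0] // 2
    return color[row], depth[row]

# Toy 4x5 frame: the feature is row 2 of each array.
color = np.zeros((4, 5, 3), dtype=np.uint8)
depth = np.arange(20, dtype=float).reshape(4, 5)
c_line, d_line = centerline_features(color, depth)
```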


m-Aramid Films in Diverse Coagulants

  • Kim, Ji-Young;Jung, Ji-Won;Kim, Sam-Soo;Lee, Jae-Woong
    • Textile Coloration and Finishing
    • /
    • v.21 no.4
    • /
    • pp.63-67
    • /
    • 2009
  • m-Aramid, dissolved in N,N-dimethylacetamide (DMAc), was coagulated in different coagulants such as water, methanol, ethanol, propanol, and butanol. Various concentrations and temperatures of the coagulants were also used to evaluate the dyeing properties of the coagulated m-aramid films. Field-emission scanning electron microscopy (FE-SEM) was employed to investigate the surface morphology of the m-aramid films. Wide-angle X-ray diffraction (WAXD) was conducted to measure the change in crystallinity of the m-aramid fibers and films. The WAXD patterns showed that the crystallinity of the m-aramid fibers was reduced after film formation. In addition, color depth (K/S value) was measured, and the results revealed that the film coagulated in water possessed a fairly enhanced color depth.
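
The K/S value mentioned above is conventionally computed from measured reflectance via the Kubelka-Munk relation, K/S = (1 − R)² / 2R. A short sketch of that standard formula (the relation is general colorimetry practice, not specific to this paper; the sample reflectances are made up):

```python
def k_over_s(reflectance):
    """Kubelka-Munk color depth from reflectance R (0 < R <= 1).

    Lower reflectance at the wavelength of maximum absorption means a
    deeper shade, hence a higher K/S value.
    """
    if not 0 < reflectance <= 1:
        raise ValueError("reflectance must be in (0, 1]")
    return (1 - reflectance) ** 2 / (2 * reflectance)

# A deeply dyed film reflecting 5% vs a paler one reflecting 20%:
deep = k_over_s(0.05)   # (0.95)^2 / 0.10 = 9.025
pale = k_over_s(0.20)   # (0.80)^2 / 0.40 = 1.6
```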

A Color Navigation System for Effective Perceived Structure: Focused on Hierarchical Menu Structure in Small Display (지각된 정보구조의 효과적 형성을 위한 색공간 네비게이션 시스템 연구 - 작은 디스플레이 화면상의 위계적 정보구조를 중심으로 -)

  • 경소영;박경욱;박준아;김진우
    • Archives of design research
    • /
    • v.15 no.3
    • /
    • pp.167-180
    • /
    • 2002
  • This study investigates effective ways to help users form a correct mental model of a hierarchical information space (HIS) on a small display. The focus is the effect of color cues on understanding the structure and navigating the information space. The concept of a color space (CS) corresponds well to the HIS: one color has a unique position in the CS just as a piece of information does in the HIS. In this study, we empirically examined two types of color cue, namely categorical cues and depth cues. Hue was used as a categorical cue and tone as a depth cue. In our experiment, we evaluated the effectiveness of the color cues in a mobile internet system. Subjects were asked to perform four searching tasks and four comparison tasks. The results reveal that the categorical cues significantly improve the user's mental model while decreasing navigation performance, whereas the depth cues neither aid understanding of the HIS nor improve navigation performance. The study concludes with its limitations and directions for future work.
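
The hue-as-category, tone-as-depth scheme above can be sketched by mapping a menu node's top-level branch and hierarchy depth to an HSV color (an illustrative mapping; the hue spacing and value decay are assumptions, not the study's actual palette):

```python
import colorsys

def node_color(branch, depth, n_branches=4, max_depth=4):
    """Color cue for a node in a hierarchical menu.

    Hue encodes the top-level branch (categorical cue); tone, here the
    HSV value channel, darkens with depth (depth cue).
    """
    hue = branch / n_branches                 # distinct hue per category
    value = 1.0 - 0.6 * (depth / max_depth)   # deeper -> darker tone
    return colorsys.hsv_to_rgb(hue, 0.8, value)

root = node_color(branch=0, depth=0)   # bright color for the top level
leaf = node_color(branch=0, depth=4)   # same hue, darker tone
```

Two nodes in the same branch thus share a hue regardless of depth, while their tone reveals how deep in the hierarchy each one sits.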


Iterative Deep Convolutional Grid Warping Network for Joint Depth Upsampling (반복적인 격자 워핑 기법을 이용한 깊이 영상 초해상화 기술)

  • Kim, Dongsin;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.965-972
    • /
    • 2020
  • Depth maps carry distance information about objects and play an important role in organizing 3D information. Color and depth images are often obtained simultaneously; however, depth images have lower resolution than color images due to hardware limitations. It is therefore useful to upsample depth maps to the same resolution as color images. In this paper, we propose a novel method to upsample a depth map by shifting pixel positions instead of compensating pixel values. The approach moves pixel positions around an edge toward the center of the edge, and this process is carried out in several steps to restore a blurred depth map. Experimental results show that the proposed method improves both quantitative and visual quality compared to existing methods.
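
The position-shifting idea above can be sketched on a 1-D depth profile: samples inside a blurred transition move toward the edge center, sharpening the edge without altering any depth value (a much-simplified illustration; the neighbor rule and step size are assumptions, not the paper's warping network):

```python
def warp_step(positions, values, step=0.4):
    """One grid-warping iteration on a 1-D depth profile.

    Each interior sample moves toward its steeper side, so positions in
    a blurred transition region crowd toward the edge center and the
    edge becomes sharper; the depth values themselves are left untouched.
    """
    new_pos = list(positions)
    for i in range(1, len(values) - 1):
        left = abs(values[i] - values[i - 1])
        right = abs(values[i + 1] - values[i])
        if right > left:
            new_pos[i] += step * (positions[i + 1] - positions[i])
        elif left > right:
            new_pos[i] += step * (positions[i - 1] - positions[i])
    return new_pos

positions = [0.0, 1.0, 2.0, 3.0, 4.0]
values = [0.0, 1.0, 4.0, 7.0, 8.0]   # blurred step edge centered at index 2
warped = warp_step(positions, values)
```

After one iteration the samples at positions 1 and 3 have moved inward toward the edge center at position 2; repeating the step makes the transition progressively steeper.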

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, virtual view generation methods using depth data have been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual views. Much related work enhances depth using a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution cameras at the sides. Since depth data is needed for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate views. To realize a fast capturing system, we implemented the proposed system with multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
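
The joint bilateral filtering step above, smoothing warped depth while using the color image to preserve object boundaries, can be sketched in one dimension (an illustrative simplification; the Gaussian kernels and their widths are assumptions, not the paper's parameters):

```python
import math

def joint_bilateral_1d(depth, color, sigma_s=1.0, sigma_r=20.0, radius=2):
    """Joint (cross) bilateral filter on a 1-D depth signal.

    Weights combine spatial distance and *color* difference, so depth is
    smoothed within a uniformly colored object but not across the color
    edge that marks an object boundary.
    """
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            ws = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
            wr = math.exp(-((color[i] - color[j]) ** 2) / (2 * sigma_r ** 2))
            num += ws * wr * depth[j]
            den += ws * wr
        out.append(num / den)
    return out

# Warped depth edge misaligned with the color edge by one pixel:
depth = [1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
color = [10, 10, 10, 200, 200, 200]   # true object boundary after index 2
filtered = joint_bilateral_1d(depth, color)
```

The misaligned depth value at index 2 is pulled toward the depths on its own side of the color edge, while values well inside each color region stay put; this is how the filter snaps warped depth boundaries back onto color boundaries.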

Dense-Depth Map Estimation with LiDAR Depth Map and Optical Images based on Self-Organizing Map (라이다 깊이 맵과 이미지를 사용한 자기 조직화 지도 기반의 고밀도 깊이 맵 생성 방법)

  • Choi, Hansol;Lee, Jongseok;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.283-295
    • /
    • 2021
  • This paper proposes a method for generating a dense depth map from a color image and a LiDAR-based sparse depth map, using a self-organizing map. The proposed depth map upsampling method consists of an initial depth prediction step for areas not covered by the LiDAR and a depth filtering step. In the initial depth prediction step, stereo matching is performed on two color images to predict initial depth values. In the depth map filtering step, to reduce the error of the predicted initial depth values, a self-organizing map technique is applied to each predicted depth pixel using the measured depth pixels around it. In the self-organizing map process, a weight is determined by the distance between the predicted depth pixel and a measured depth pixel and by the difference between the color values corresponding to the two pixels. For performance comparison, we compared the proposed method with the bilateral filter and k-nearest-neighbor approaches widely used for depth map upsampling. Compared to the bilateral filter and the k-nearest neighbor, the proposed method reduced MAE by about 6.4% and 8.6%, and RMSE by about 10.8% and 14.3%, respectively.
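
The self-organization step above, nudging a predicted depth toward nearby measured depths with weights that fall off with both spatial distance and color difference, can be sketched as follows (a much-simplified illustration; the Gaussian weight form, learning rate, and sample values are assumptions, not the paper's formulation):

```python
import math

def refine_depth(pred_depth, pred_pos, pred_color, measured,
                 sigma_d=2.0, sigma_c=25.0, rate=0.5):
    """One SOM-style update of a predicted depth pixel.

    measured: list of (position, color, depth) for nearby LiDAR pixels.
    Each measured pixel pulls the prediction toward its depth with a
    weight that decays with spatial distance and color difference, so a
    LiDAR point on the other side of a color edge has little influence.
    """
    d = pred_depth
    for pos, col, depth in measured:
        dist2 = (pred_pos[0] - pos[0]) ** 2 + (pred_pos[1] - pos[1]) ** 2
        cdiff2 = (pred_color - col) ** 2
        w = (math.exp(-dist2 / (2 * sigma_d ** 2))
             * math.exp(-cdiff2 / (2 * sigma_c ** 2)))
        d += rate * w * (depth - d)   # pull toward this measurement
    return d

# A same-colored LiDAR point nearby pulls the prediction toward 2.0;
# a differently colored point at depth 9.0 barely moves it.
d = refine_depth(5.0, (0, 0), 100,
                 [((1, 0), 100, 2.0), ((1, 1), 240, 9.0)])
```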