• Title/Summary/Keyword: RGB Vector

Search results: 63

Tip-over Terrain Detection Method based on the Support Inscribed Circle of a Mobile Robot (지지내접원을 이용한 이동 로봇의 전복 지형 검출 기법)

  • Lee, Sungmin;Park, Jungkil;Park, Jaebyung
    • Journal of Institute of Control, Robotics and Systems, v.20 no.10, pp.1057-1062, 2014
  • This paper proposes a tip-over detection method for a mobile robot using a support inscribed circle, defined as the inscribed circle of the support polygon. The support polygon, defined by the contact points between the robot and the terrain, is often used to analyze tip-over. For a robot moving on uneven terrain, if the intersection between the line extended from the robot's COG along gravity and the terrain lies inside the support polygon, tip-over will not occur; if the intersection lies outside, tip-over will occur. The terrain is detected using an RGB-D sensor and locally modeled as a plane, so the normal vector can be obtained at each point on the terrain. The support polygon and the terrain's normal vector are used to detect tip-over. However, tip-over cannot be detected in advance, since the support polygon depends on the orientation of the robot. Thus, the support polygon is approximated by its inscribed circle so that tip-over can be detected regardless of the robot's orientation. To verify the effectiveness of the proposed method, experiments were carried out using a 4-wheeled robot, ERP-42, with the Xtion RGB-D sensor.
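
The decision rule described above reduces to a point-in-circle test: project the COG along gravity onto the locally fitted terrain plane and check whether the intersection falls inside the support inscribed circle. A minimal sketch, assuming simple plane and circle conventions (not the paper's actual implementation):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gravity_intersection(cog, plane_point, plane_normal, gravity=(0.0, 0.0, -1.0)):
    """Intersect the line from the COG along the gravity direction
    with the locally fitted terrain plane."""
    t = dot(plane_normal, [p - c for p, c in zip(plane_point, cog)]) / dot(plane_normal, gravity)
    return tuple(c + t * g for c, g in zip(cog, gravity))

def tip_over_predicted(cog, plane_point, plane_normal, circle_center, circle_radius):
    """Tip-over is predicted when the intersection point lies outside the
    support inscribed circle, independent of the robot's orientation."""
    p = gravity_intersection(cog, plane_point, plane_normal)
    return math.dist(p, circle_center) > circle_radius
```

Approximating the polygon by its inscribed circle makes the test conservative but orientation-independent, which is what allows detection in advance.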

Vehicle Color Recognition Using Neural-Network (신경회로망을 이용한 차량의 색상 인식)

  • Kim, Tae-hyung;Lee, Jung-hwa;Cha, Eui-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2009.10a, pp.731-734, 2009
  • In this paper, we propose a method for recognizing the color of a vehicle in an image containing a vehicle. A color feature vector of the vehicle is extracted from the image, and the vehicle color is recognized by a multi-layer perceptron trained with the backpropagation learning algorithm. The feature vector used as the input of the neural network consists of color features from the RGB and HSI color models. The vehicle color is classified into the seven colors most commonly found among vehicles: white, silver, black, red, yellow, blue, and green. The color recognition performance of the proposed method was evaluated experimentally on images containing vehicles.
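
The RGB+HSI feature construction could look like the sketch below: mean RGB plus mean HSI over the vehicle region gives a 6-dimensional input vector. The abstract does not specify the exact features, so this construction is an assumption:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (0..1) to the HSI color model."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.acos(max(-1.0, min(1.0, num / den)))
    if b > g:
        h = 2.0 * math.pi - h
    return h, s, i

def color_feature_vector(pixels):
    """Mean RGB plus mean HSI over the vehicle region: an assumed
    6-dimensional color feature that could feed a multi-layer perceptron."""
    n = len(pixels)
    mean_rgb = [sum(p[c] for p in pixels) / n for c in range(3)]
    hsi = [rgb_to_hsi(*p) for p in pixels]
    mean_hsi = [sum(q[c] for q in hsi) / n for c in range(3)]
    return mean_rgb + mean_hsi
```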


Development of Tongue Diagnosis System Using ASM and SVM (ASM과 SVM을 이용한 설진 시스템 개발)

  • Park, Jin-Woong;Kang, Sun-Kyung;Kim, Young-Un;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information, v.18 no.4, pp.45-55, 2013
  • In this study, we propose a tongue diagnosis system which detects the tongue in a face image, divides the tongue area into six regions, and generates the tongue fur ratio of each region. To detect the tongue area in the face image, we use the ASM, one of the active shape models. The detected tongue area is divided into six regions, and the distribution of tongue coating over the six regions is examined by an SVM. For the SVM, we use a 3-dimensional vector calculated by PCA from a 12-dimensional vector consisting of RGB, HSV, Lab, and Luv components. As a result, we stably detected the tongue area using the ASM, and found that PCA and the SVM helped to raise the tongue coating detection ratio.

Cloud Masked Daily Vegetation Index (구름 제거한 일별 식생지수)

  • Kang, Yong-Q.
    • Proceedings of the KSRS Conference, 2009.03a, pp.82-86, 2009
  • The normalized difference vegetation index (NDVI), computed from the reflectance of the near-infrared (NIR) and red bands in remote sensing, is underestimated in areas contaminated by clouds. A representative conventional method for overcoming cloud contamination is the MVC (Maximum Value Composite) method, which takes the maximum vegetation index over a period of about half a month. However, the MVC method cannot capture short-term variations of the vegetation index, and areas continuously covered by clouds over a long period still yield erroneous vegetation index values. We developed the CIM (Color Index Manipulation) algorithm, a new method that masks clouds in snapshot imagery using visible RGB data. With this algorithm, the vegetation index can be computed for the uncontaminated parts of a snapshot image while excluding the cloud-contaminated areas. The three components of the normalized color index (NCI) derived from the RGB data are represented as coordinates on three axes spaced 120° apart, and the vector sum of the three values is used to identify clouds; the CIM method can distinguish thick clouds from thin clouds in satellite imagery. This cloud-masking technique was applied to MODIS snapshot imagery to compute daily vegetation index data for the Korean Peninsula, and the seasonal variation of the vegetation index over the peninsula was examined from several years of daily data.
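
The 120°-axis construction can be illustrated as follows: each normalized color component is placed on one of three symmetric axes, and the magnitude of their vector sum measures chromaticity, which is near zero for gray cloud pixels (R ≈ G ≈ B). The abstract does not give the exact NCI formula, so the normalization and thresholds below are assumptions:

```python
import math

def nci_vector_sum(r, g, b):
    """Place three normalized color components on axes spaced 120 degrees
    apart and return the magnitude of their vector sum. NOTE: the simple
    normalization (component / total) is an assumption; the paper's exact
    NCI definition may differ."""
    total = r + g + b
    if total == 0:
        return 0.0
    comps = [r / total, g / total, b / total]
    angles = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]
    x = sum(c * math.cos(a) for c, a in zip(comps, angles))
    y = sum(c * math.sin(a) for c, a in zip(comps, angles))
    return math.hypot(x, y)

def looks_like_cloud(r, g, b, chroma_thresh=0.05, bright_thresh=0.7):
    """Bright, nearly achromatic pixels are flagged as cloud candidates
    (threshold values are illustrative)."""
    brightness = (r + g + b) / 3.0
    return brightness > bright_thresh and nci_vector_sum(r, g, b) < chroma_thresh
```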


Integrated Navigation Algorithm using Velocity Incremental Vector Approach with ORB-SLAM and Inertial Measurement (속도증분벡터를 활용한 ORB-SLAM 및 관성항법 결합 알고리즘 연구)

  • Kim, Yeonjo;Son, Hyunjin;Lee, Young Jae;Sung, Sangkyung
    • The Transactions of The Korean Institute of Electrical Engineers, v.68 no.1, pp.189-198, 2019
  • In recent years, visual-inertial odometry (VIO) algorithms have been extensively studied for indoor/urban environments because they are more robust to dynamic scenes and environment changes. In this paper, we propose a loosely coupled (LC) VIO algorithm that utilizes the velocity vectors from both visual odometry (VO) and an inertial measurement unit (IMU) as the measurement of an extended Kalman filter. Our approach improves the estimation performance of the filter without adding extra sensors, while maintaining a simple integration framework that treats the VO as a black box. For the VO algorithm, we employed the fundamental part of ORB-SLAM, which uses ORB features. We performed an outdoor experiment using an RGB-D camera to evaluate the accuracy of the presented algorithm, and also evaluated it on a public dataset for comparison with other visual navigation systems.
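
The loosely coupled fusion can be illustrated with a single-axis Kalman filter step: the state propagates with the IMU acceleration, then is corrected by the VO velocity treated as a black-box measurement. This is a minimal scalar sketch, not the paper's filter; the noise variances are illustrative assumptions:

```python
def propagate(v, p_var, accel, dt, q_var):
    """Time update: integrate IMU acceleration, inflate the uncertainty."""
    return v + accel * dt, p_var + q_var

def update_with_vo(v, p_var, vo_velocity, r_var):
    """Measurement update: correct the velocity estimate with the velocity
    reported by the visual odometry front end (treated as a black box)."""
    k = p_var / (p_var + r_var)          # Kalman gain
    v_new = v + k * (vo_velocity - v)
    return v_new, (1.0 - k) * p_var
```

Because only velocities cross the interface, the VO internals (feature tracking, keyframes) never enter the filter model, which is the appeal of the loosely coupled design.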

Mobile Object Tracking Algorithm Using Particle Filter (Particle filter를 이용한 이동 물체 추적 알고리즘)

  • Kim, Se-Jin;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems, v.19 no.4, pp.586-591, 2009
  • In this paper, we propose a mobile object tracking algorithm based on feature vectors and a particle filter. First, we detect the movement area of the mobile object using the RGB color model and extract feature vectors from the input image using the KLT algorithm; the first set of feature vectors is obtained by matching the extracted feature vectors to the detected movement area. Second, we detect the new movement area of the mobile object using the RGB and HSI color models, refine the feature vectors with the snake algorithm, and obtain the second set of feature vectors by applying them to the new movement area. The tracking algorithm is then designed by applying the second set of feature vectors to a particle filter. Finally, we validate the applicability of the proposed method through experiments in a complex environment.
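
The particle filter stage can be sketched as the standard predict-weight-resample cycle. The 1-D random-walk motion model and Gaussian likelihood below are illustrative assumptions, not the paper's models:

```python
import math
import random

def particle_filter_step(particles, weights, measurement, motion_std=1.0, meas_std=2.0):
    """One predict-weight-resample cycle of a 1-D particle filter.
    Particles diffuse under a random-walk motion model and are weighted
    by their agreement with the measurement."""
    # Predict: random-walk motion model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: multinomial resampling proportional to weight.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

In the paper's setting the measurement would come from the matched feature vectors rather than a scalar position, but the cycle is the same.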

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea, v.31 no.4, pp.207-213, 2018
  • This paper investigates the applicability of the Microsoft Kinect®, an RGB-depth camera, to implementing a 3D image and spatial information for sensing a target. The relationship between the image of the Kinect camera and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projected images are converted to a 3D image, resulting in spatial information based on the depth and RGB information. The measurement is verified through comparison with the length and location of the 2D images of the target structure.
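
The pixel-to-space relationship described above follows the standard pinhole back-projection: given a pixel, its depth, and the intrinsic parameters, the 3D camera-frame point is recovered, and the extrinsics map it between the two cameras. A minimal sketch with illustrative parameter values (not the paper's calibration results):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with measured depth to a 3-D point in the
    camera frame using the pinhole intrinsics (focal lengths fx, fy and
    principal point cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def transform(point, rotation, translation):
    """Apply the extrinsic rotation matrix and translation vector to map
    a point from one Kinect's frame into the other's."""
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )
```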

A Study on Vehicle License Plate Recognition System (차량 번호판 인식 시스템에 관한 연구)

  • 한수환;우영운;박성대
    • Proceedings of the Korea Multimedia Society Conference, 2002.05c, pp.346-351, 2002
  • In this study, a vehicle license plate recognition system was constructed using the DCT (Discrete Cosine Transform) coefficients of the character regions extracted from the license plate and an LVQ (Learning Vector Quantization) neural network. The license plate region is extracted using the RGB color information of the input vehicle image, and the character regions are extracted by combining the histogram of the extracted plate with the relative position information of the characters. Feature vectors obtained by applying the DCT to the gray-level images of the extracted character regions are used as the input of the LVQ neural network to perform recognition. To verify the proposed system, experiments on 109 passenger vehicle images captured in various environments showed relatively high plate region extraction and recognition rates.
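
The DCT feature extraction can be sketched with a naive 2-D DCT-II over a character block, keeping the low-frequency coefficients as the feature vector. The block size and number of retained coefficients below are assumptions, not the paper's settings:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square gray-level block."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n)
            )
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def dct_feature_vector(block, k=4):
    """Keep the top-left k x k low-frequency coefficients as the
    character feature vector fed to the LVQ network."""
    coeffs = dct2(block)
    return [coeffs[u][v] for u in range(k) for v in range(k)]
```

Low-frequency DCT coefficients concentrate the character's shape energy, which is why a short vector suffices as a classifier input.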


Segmentation of the Lip Region by Color Gamut Compression and Feature Projection (색역 압축과 특징치 투영을 이용한 입술영역 분할)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society, v.21 no.11, pp.1279-1287, 2018
  • In this paper, a new color coordinate conversion, a modified CIEXYZ derived from RGB, is proposed to compress the color gamut. The proposed segmentation includes principal component analysis for the optimal projection of a feature vector onto a one-dimensional feature. The final step of the lip segmentation is Otsu's threshold for a two-class problem. The performance of the proposed method was better than that of conventional methods, especially for the chromatic feature.
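
The final two-class step is Otsu's method on the one-dimensional projected feature: pick the threshold that maximizes between-class variance. A minimal sketch (the histogram binning is an implementation assumption):

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes between-class
    variance for a two-class (lip / non-lip) split of a 1-D feature."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return lo + (best_t + 1) * width     # bin upper edge in feature units
```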

Color Image Segmentation Using Adaptive Quantization and Sequential Region-Merging Method (적응적 양자화와 순차적 병합 기법을 사용한 컬러 영상 분할)

  • Kwak, Nae-Joung;Kim, Young-Gil;Kwon, Dong-Jin;Ahn, Jae-Hyeong
    • Journal of Korea Multimedia Society, v.8 no.4, pp.473-481, 2005
  • In this paper, we propose an image segmentation method that preserves object boundaries by adapting the number of quantized colors and merging regions using adaptive threshold values. First, the proposed method quantizes the original image by vector quantization, with the number of quantized colors determined for each image using the PSNR. We obtain initial regions from the quantized image, merge them step by step in the CIE Lab and RGB color spaces, and segment the image into semantic regions. In each merging step, the color distance between adjacent regions is used as the similarity measure. The threshold values for region merging are determined adaptively from the global mean of the color difference between the original image and its split regions and the mean of its variations. If the image segmented in the RGB color space does not split into semantic objects, the image is merged again in the CIE Lab color space as post-processing; whether post-processing is applied is determined by the color distance between the initial regions of the image and the RGB-segmented image. Experimental results show that the proposed method splits the original image into its main objects while preserving the boundaries of the segmented regions, and provides better results on objective measures than the conventional method.
