• Title/Summary/Keyword: RGB Vector


Development of High-resolution 3-D PIV Algorithm by Cross-correlation (고해상도 3차원 상호상관 PIV 알고리듬 개발)

  • Kim, Mi-Young;Choi, Jang-Woon;Lee, Hyun;Lee, Young-Ho
    • Proceedings of the KSME Conference / 2001.11b / pp.410-416 / 2001
  • An algorithm of 3-D particle image velocimetry (3D-PIV) was developed for the measurement of the 3-D velocity field of complex flows. The measurement system consists of two or three CCD cameras and one RGB image grabber. In this study, stereo photogrammetry was applied for the 3-D matching of tracer particles, and the epipolar line was used to detect each stereo pair. 3-D CFD data were used to evaluate the algorithm. The 3-D position data of the first and second frames were used to find velocity vectors, and the continuity equation was applied to extract error vectors. The algorithm produced error vectors at a rate of about 0.13%. On a Pentium III 450 MHz processor, the cross-correlation calculation for 1,500 particles took about 1 minute. A minimal sketch of the epipolar matching step follows this entry.

  • PDF
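
The epipolar-line matching step can be illustrated with a minimal sketch (Python, not the authors' code): a particle in the second image is accepted as the stereo pair of a particle in the first image if it lies close enough to the epipolar line induced by a fundamental matrix. The function name, the toy fundamental matrix, and the pixel tolerance below are illustrative assumptions.

```python
import numpy as np

def epipolar_pair(p1, candidates, F, tol=1.5):
    """Pick the candidate in image 2 closest to the epipolar line of p1.

    p1         : (x, y) particle centre in image 1 (pixels)
    candidates : (N, 2) array of particle centres in image 2
    F          : 3x3 fundamental matrix relating image 1 to image 2
    tol        : maximum point-to-line distance in pixels (assumed value)
    """
    x1 = np.array([p1[0], p1[1], 1.0])
    a, b, c = F @ x1                      # epipolar line ax + by + c = 0 in image 2
    d = np.abs(candidates @ np.array([a, b]) + c) / np.hypot(a, b)
    best = int(np.argmin(d))
    return best if d[best] < tol else None  # None -> no admissible stereo pair

# Toy usage: a rectified stereo pair, where epipolar lines are horizontal rows
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
cands = np.array([[118.0, 81.0], [300.0, 40.0]])
print(epipolar_pair((120.0, 80.0), cands, F))   # -> 0 (first candidate lies on the line)
```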

Face Feature Extraction Method Through Stereo Image's Matching Value (스테레오 영상의 정합값을 통한 얼굴특징 추출 방법)

  • Kim, Sang-Myung;Park, Chang-Han;Namkung, Jae-Chan
    • Journal of Korea Multimedia Society / v.8 no.4 / pp.461-472 / 2005
  • In this paper, we propose a face feature extraction algorithm based on stereo image matching values. The proposed algorithm detects the face region by converting the skin-color information from the RGB color space to the YCbCr color space (a sketch of this step follows the entry). By applying an eye template to the extracted face region, geometrical feature vectors describing the distance and slope between the eyes, nose, and mouth are extracted. The proposed method extracts the features of the eyes, nose, and mouth from the stereo matching values as well as from 2-D feature information. In the experiments, the proposed algorithm shows a consistency rate of 73% at distances within about 1 m and a consistency rate of 52% at distances beyond about 1 m.

  • PDF
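
The skin-color face detection step can be illustrated with a short sketch; the RGB-to-YCbCr conversion follows ITU-R BT.601, and the Cb/Cr thresholds are commonly used values, not the ones reported in the paper.

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Return a boolean skin mask from an RGB uint8 image of shape (H, W, 3).

    Uses the ITU-R BT.601 RGB -> YCbCr conversion and a commonly used Cb/Cr
    box threshold; the exact thresholds of the paper are not given, so the
    ranges below are illustrative assumptions.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Toy usage: a 1x2 image with one skin-like pixel and one blue pixel
img = np.array([[[200, 140, 120], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask_ycbcr(img))   # -> [[ True False]]
```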

Automatic Generation of Pointillist Representation-like Image from Natural Image (자연 화상에서 점묘화풍 화상으로의 자동생성)

  • Do, Hyeon-Suk;Jo, Pyeong-Dong;Choe, Yeong-Jin
    • The Transactions of the Korea Information Processing Society / v.2 no.1 / pp.130-136 / 1995
  • This paper describes the development of tools that automatically generate pointillist representation-like images by computer. Pointillist effects in the generated images are produced in the following steps. First, the position of each brush stroke is determined from the gradient vector so that the brush touches look more natural. Second, pointillist coloring is obtained by changing the saturation and value computed from the RGB components of the image (a sketch of this coloring step follows the entry). Our approach combines image processing techniques with computer graphics techniques to obtain more faithful pointillist representation-like images, and a couple of sample images are presented to show its effectiveness.

  • PDF
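
A minimal sketch of the pointillist coloring step, assuming the saturation/value change is a small random perturbation applied per brush stroke; the magnitude and distribution of the change are not specified in the abstract and are assumed here.

```python
import colorsys
import random

def pointillist_color(rgb, ds=0.15, dv=0.15, rng=random):
    """Perturb saturation and value of one brush-stroke colour.

    rgb     : (r, g, b) components in [0, 1]
    ds, dv  : maximum random shift of saturation / value (assumed amounts;
              the paper does not state how large the change is)
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(1.0, max(0.0, s + rng.uniform(-ds, ds)))
    v = min(1.0, max(0.0, v + rng.uniform(-dv, dv)))
    return colorsys.hsv_to_rgb(h, s, v)

random.seed(0)
print(pointillist_color((0.4, 0.6, 0.2)))   # a slightly shifted green
```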

An Object Recognition Method Based on Depth Information for an Indoor Mobile Robot (실내 이동로봇을 위한 거리 정보 기반 물체 인식 방법)

  • Park, Jungkil;Park, Jaebyung
    • Journal of Institute of Control, Robotics and Systems / v.21 no.10 / pp.958-964 / 2015
  • In this paper, an object recognition method based on depth information from an RGB-D camera, the Xtion, is proposed for an indoor mobile robot. First, the RANdom SAmple Consensus (RANSAC) algorithm is applied to the point cloud obtained from the RGB-D camera to detect and remove the floor points (a sketch of this step follows the entry). Next, the remaining point cloud is partitioned by the k-means clustering method into each object's point cloud, and the normal vector of each point is obtained by using a k-d tree search. The obtained normal vectors are classified by a trained multi-layer perceptron into 18 classes and used as features for object recognition. To distinguish one object from another, the similarity between them is measured by the Levenshtein distance. To verify the effectiveness and feasibility of the proposed object recognition method, experiments are carried out with several similar boxes.

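The floor-removal step can be sketched with a small RANSAC plane fit in plain NumPy; the iteration count and inlier distance below are assumed values, and the authors most likely relied on an existing point cloud library rather than code like this.

```python
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.02, rng=None):
    """Fit a plane to a point cloud with RANSAC and return the inlier mask.

    points      : (N, 3) array of XYZ points (metres)
    dist_thresh : inlier distance to the plane in metres (assumed value)
    """
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = np.abs((points - sample[0]) @ normal)
        mask = d < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Toy usage: 200 floor points near z = 0 plus 50 points of an object above it
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(0, 2, 200), rng.uniform(0, 2, 200), rng.normal(0, 0.005, 200)]
box = np.c_[rng.uniform(0.5, 0.8, 50), rng.uniform(0.5, 0.8, 50), rng.uniform(0.2, 0.5, 50)]
cloud = np.vstack([floor, box])
floor_mask = ransac_plane(cloud)
objects = cloud[~floor_mask]                  # the cloud with the floor removed
print(len(objects))                           # roughly 50
```
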
An Efficient Color Edge Detection Using the Mahalanobis Distance

  • Khongkraphan, Kittiya
    • Journal of Information Processing Systems / v.10 no.4 / pp.589-601 / 2014
  • The performance of edge detection often relies on its ability to correctly determine the dissimilarity of connected pixels. For grayscale images, the dissimilarity of two pixels is estimated by the scalar difference of their intensities; for color images, it is estimated by the vector difference (color distance) of the three color components. A color distance is typically measured by the Euclidean distance in the RGB color space. However, the RGB space is not suitable for edge detection, since its color components do not coincide with the information human perception uses to separate objects from backgrounds. In this paper, we propose a novel method for color edge detection that takes advantage of the HSV color space and the Mahalanobis distance (a sketch of the distance measure follows the entry). The HSV space models colors in a manner similar to human perception. The Mahalanobis distance considers hue, saturation, and lightness independently and gives them different degrees of contribution to the measurement of color distances. Therefore, our method is robust against changes of lightness as compared with previous approaches. Furthermore, we introduce a noise-resistant technique for determining image gradients. Various experiments on simulated and real-world images show that our approach outperforms several existing methods, especially when the images vary in lightness or are corrupted by noise.

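A minimal sketch of a Mahalanobis-style color distance in HSV, assuming a diagonal covariance with hand-picked per-channel variances; the paper determines the channel contributions itself, so the numbers below are only illustrative.

```python
import numpy as np

def mahalanobis_hsv(p, q, var=(0.02, 0.05, 0.20)):
    """Mahalanobis-style distance between two HSV pixels.

    p, q : (h, s, v) with all components in [0, 1]
    var  : per-channel variances for hue, saturation, and value; a diagonal
           covariance with these illustrative values is assumed here, while
           the paper estimates the contributions from the image itself.
    """
    dh = abs(p[0] - q[0])
    dh = min(dh, 1.0 - dh)            # hue is circular, take the shorter arc
    diff = np.array([dh, p[1] - q[1], p[2] - q[2]])
    return float(np.sqrt(np.sum(diff ** 2 / np.asarray(var))))

# Two greens that differ mainly in lightness score a small distance, because
# the large value-variance down-weights lightness changes; a hue change scores higher.
print(mahalanobis_hsv((0.33, 0.80, 0.40), (0.33, 0.80, 0.70)))   # ~0.67
print(mahalanobis_hsv((0.33, 0.80, 0.40), (0.00, 0.80, 0.40)))   # ~2.33
```
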
Development of a Single-Arm Robotic System for Unloading Boxes in Cargo Truck (간선화물의 상자 하차를 위한 외팔 로봇 시스템 개발)

  • Jung, Eui-Jung;Park, Sungho;Kang, Jin Kyu;Son, So Eun;Cho, Gun Rae;Lee, Youngho
    • The Journal of Korea Robotics Society / v.17 no.4 / pp.417-424 / 2022
  • In this paper, the developed trunk-cargo unloading automation system is introduced, and the RGB-D sensor-based box-loading-situation recognition method and the unloading plan applied to this system are presented. First, it is necessary to recognize the positions of the boxes in a truck. To do this, we apply CNN-based YOLO, which can recognize objects in RGB images in real time. Then, the normal vector at the center of each box is obtained using the depth image to reduce misrecognition in regions other than the boxes, and the inner wall of the truck is removed from the image. A method of classifying the boxes into layers according to their distance, using the recognized depth information, is then suggested. Given the coordinates of the boxes on the nearest layer, a method of generating the optimal path for taking out the boxes in the shortest time is introduced (a simplified sketch of the layer grouping and ordering follows the entry). In addition, kinematic analysis is performed to move the conveyor to the position of the box to be taken out of the truck, and kinematic analysis is also performed to control the robot arm that takes out the boxes. Finally, the effectiveness of the developed system and algorithm is demonstrated on a test bed.

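A simplified sketch of grouping detected boxes into depth layers and ordering the nearest layer for unloading; the layer tolerance and the top-to-bottom, left-to-right order used as a stand-in for the paper's optimal-path rule are assumptions.

```python
def group_layers(boxes, tol=0.15):
    """Group detected boxes into layers by their depth from the sensor.

    boxes : list of dicts with 'x', 'y', 'z' centre coordinates in metres
            (z = depth from the sensor); tol is the assumed layer thickness.
    Returns the boxes of the nearest layer, ordered top-to-bottom then
    left-to-right as a simple stand-in for the paper's optimal unloading path.
    """
    boxes = sorted(boxes, key=lambda b: b["z"])
    nearest_z = boxes[0]["z"]
    layer = [b for b in boxes if b["z"] - nearest_z < tol]
    return sorted(layer, key=lambda b: (-b["y"], b["x"]))

# Toy usage: three boxes on the front layer, one further back in the truck
boxes = [{"x": 0.2, "y": 1.2, "z": 1.0}, {"x": -0.3, "y": 1.2, "z": 1.05},
         {"x": 0.0, "y": 0.4, "z": 1.02}, {"x": 0.1, "y": 1.2, "z": 1.6}]
for b in group_layers(boxes):
    print(b)          # the two upper front boxes first, the lower one last
```
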
Automatic Stöckigt Sizing Test Using Hue Value Variation of a Droplet

  • Kim, Jae-Ok;Kim, Chul-Hwan;Lee, Young-Min;Kim, Gyeong-Yun;Shin, Tae-Gi;Park, Chong-Yawl
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference / 2006.06b / pp.227-230 / 2006
  • The Stöckigt sizing test, one of the most commonly used sizing tests, is easily influenced by individual testers' bias in recognizing the red coloration. The test therefore had to be modified to improve its reliability and reproducibility by automating the recognition of the coloration during testing. To achieve this, all measured variables occurring during the Stöckigt test were first analyzed and then reflected in the new automatic system. Second, the most important principle applied was to transform the RGB values of the droplet image into hue (H), saturation (S), and value (V), because RGB cannot be used as a color standard owing to its peculiarity of being strongly affected by the observer's point of view. The droplet color therefore had to be separated into three distinct factors, the HSV values, in order to allow linear analysis of the droplet color. When the average values of these vectors calculated during the color variation from yellow to brown were plotted against time, it was possible to determine, by differentiating the curve, the time at which the hue value, the most sensitive factor among HSV, exceeds its critical point (a sketch of this step follows the entry). The specific time consumed up to the critical point was then regarded as the Stöckigt sizing degree. The conventional method took more time to recognize the end point of coloration than the automatic method, and the error ranges of the conventional sizing degrees at specific AKD addition points were wider than those of the automatic method.

  • PDF
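
A minimal sketch of the hue-derivative criterion: the hue of the droplet is tracked over time and the sizing degree is read off where the derivative first exceeds a critical slope. The critical value used below is an assumption; the paper determines it experimentally.

```python
import numpy as np

def sizing_degree(times, hues, crit=0.02):
    """Return the time at which the hue derivative first exceeds `crit`.

    times : sample times in seconds
    hues  : hue of the droplet at each sample, in [0, 1]
    crit  : critical slope marking the yellow-to-brown colour change
            (an assumed value; the paper determines it experimentally)
    """
    dh_dt = np.gradient(np.asarray(hues, float), np.asarray(times, float))
    above = np.nonzero(np.abs(dh_dt) > crit)[0]
    return times[above[0]] if above.size else None

# Toy usage: hue drifts slowly, then changes quickly around t = 6 s
t = np.arange(0, 10, 1.0)
h = np.array([0.14, 0.14, 0.139, 0.138, 0.137, 0.135, 0.10, 0.07, 0.06, 0.06])
print(sizing_degree(t, h))   # -> 6.0 (onset of the rapid hue change)
```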

Development and Application of High-resolution 3-D Volume PIV System by Cross-Correlation (해상도 3차원 상호상관 Volume PIV 시스템 개발 및 적용)

  • Kim Mi-Young;Choi Jang-Woon;Lee Hyun;Lee Young-Ho
    • Proceedings of the KSME Conference / 2002.08a / pp.507-510 / 2002
  • An algorithm of 3-D particle image velocimetry (3D-PIV) was developed for the measurement of the 3-D velocity field of complex flows. The measurement system consists of two or three CCD cameras and one RGB image grabber. The flow field size is 1500 × 100 × 180 (mm), the tracer particles are Nylon 12 (1 mm), and the illuminator is a halogen-type lamp (100 W). Stereo photogrammetry is adopted for the three-dimensional geometrical measurement of the tracer particles. For stereo-pair matching, the camera parameters should be decided in advance by a camera calibration; the parameters are calculated from the collinearity equation. In order to calculate a particle's 3-D position based on stereo photogrammetry, the eleven parameters of each camera should be obtained by calibration, and the epipolar line is used for stereo-pair matching. The 3-D position of a particle is calculated from the camera parameters, the centers of projection of the cameras, and the photographic coordinates of the particle, based on the collinearity condition (a triangulation sketch follows the entry). The 3-D position data of the first and second frames are used to find velocity vectors, and the continuity equation is applied to extract error vectors. This study also developed various 3D-PIV animation techniques.

  • PDF
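
The collinearity-based 3-D positioning can be sketched as a standard two-view DLT triangulation from each camera's 3×4 projection matrix (the eleven calibration parameters, fixed up to scale); this is a generic sketch, not the authors' implementation.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3-D point from two camera projection matrices.

    P1, P2   : 3x4 projection matrices (the 11 DLT calibration parameters
               of each camera, fixed up to scale)
    uv1, uv2 : (u, v) photographic coordinates of the same particle in each view
    """
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])      # collinearity written as linear rows
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                   # dehomogenise to (X, Y, Z)

# Toy usage: two ideal pinhole cameras 0.2 m apart looking down the z axis
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
point = np.array([0.05, 0.02, 1.0, 1.0])
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, uv1, uv2))      # ~ [0.05 0.02 1.0]
```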

Three-dimensional human activity recognition by forming a movement polygon using posture skeletal data from depth sensor

  • Vishwakarma, Dinesh Kumar;Jain, Konark
    • ETRI Journal / v.44 no.2 / pp.286-299 / 2022
  • Human activity recognition in real time is a challenging task. Recently, a plethora of studies has been proposed using deep learning architectures. The implementation of these architectures requires high computing power and a massive database. In contrast, handcrafted-feature-based machine learning models need less computing power and are very accurate when the features are effectively extracted. In this study, we propose a handcrafted model based on three-dimensional sequential skeleton data. The movement of the human body skeleton over a frame is computed through the joint positions in the frame. The joints of these skeletal frames are projected into two-dimensional space, forming a "movement polygon." These polygons are further transformed into a one-dimensional space by computing amplitudes at different angles from the centroid of the polygons, and the feature vector is formed by sampling these amplitudes at different angles (a sketch of this feature construction follows the entry). The performance of the algorithm is evaluated using a support vector machine on four public datasets: MSR Action3D, Berkeley MHAD, TST Fall Detection, and NTU-RGB+D; the highest accuracies achieved on these datasets are 94.13%, 93.34%, 95.7%, and 86.8%, respectively. These accuracies compare favorably with similar state-of-the-art methods and show superior performance.

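A minimal sketch of the movement-polygon amplitude features: project the joints to 2-D, take distances from the polygon centroid, and sample the maximum distance per angular bin; the number of bins is an assumed value.

```python
import numpy as np

def polygon_amplitudes(joints_2d, n_angles=36):
    """Amplitude of a movement polygon sampled at fixed angles.

    joints_2d : (J, 2) array of joint positions projected into 2-D
    n_angles  : number of sampled angles (an assumed value; the paper
                tunes this sampling rate)
    Returns a length-n_angles feature vector: for each angular bin, the
    largest distance from the centroid to a joint falling in that bin.
    """
    centroid = joints_2d.mean(axis=0)
    rel = joints_2d - centroid
    radii = np.hypot(rel[:, 0], rel[:, 1])
    angles = np.arctan2(rel[:, 1], rel[:, 0])           # in (-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    feats = np.zeros(n_angles)
    np.maximum.at(feats, bins, radii)                   # max radius per bin
    return feats

# Toy usage: five joints of one skeletal frame projected to the image plane
joints = np.array([[0.0, 1.7], [0.0, 1.4], [-0.4, 1.3], [0.4, 1.3], [0.0, 0.9]])
print(polygon_amplitudes(joints, n_angles=8))
```
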
Traffic Sign Recognition using SVM and Decision Tree for Poor Driving Environment (SVM과 의사결정트리를 이용한 열악한 환경에서의 교통표지판 인식 알고리즘)

  • Jo, Young-Bae;Na, Won-Seob;Eom, Sung-Je;Jeong, Yong-Jin
    • Journal of IKEEE / v.18 no.4 / pp.485-494 / 2014
  • Traffic Sign Recognition (TSR) is an important element of an Advanced Driver Assistance System (ADAS). However, many studies related to TSR address only normal daytime environments, because a sign's characteristic colors do not appear in poor environments such as nighttime, snow, rain, or fog. In this paper, we propose a new TSR algorithm based on machine learning for daytime as well as poor environments. In poor environments, traditional methods that rely on the RGB color region do not perform well. We therefore extract sign characteristics using HOG feature extraction and detect signs using a Support Vector Machine (SVM); a detected sign is then recognized by a decision tree based on 25 reference points in a normalized RGB system (a sketch of the HOG-SVM detection stage follows the entry). The detection rate of the proposed system is 96.4% and the recognition rate is 94% when applied in poor environments. The testing was performed on an Intel i5 processor at 3.4 GHz using Full HD resolution images. As a result, the proposed algorithm shows that machine-learning-based detection and recognition methods can be used efficiently for TSR even in poor driving environments.
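
The detection stage can be sketched as a HOG descriptor fed to a linear SVM, here with scikit-image and scikit-learn on synthetic circular patches; the window size, HOG parameters, and training data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Train and apply a HOG + linear-SVM sign detector on fixed-size candidate windows.
# The 64x64 window, HOG parameters, and the synthetic disc/background patches are
# illustrative assumptions; the paper trains on real traffic-sign images.

def hog_features(window):
    """HOG descriptor of a 64x64 grayscale window (values in [0, 1])."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
disc = ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2).astype(float)   # circular sign-like shape
signs = [np.clip(disc + rng.normal(0, 0.05, (64, 64)), 0, 1) for _ in range(20)]
background = [np.clip(rng.normal(0.5, 0.05, (64, 64)), 0, 1) for _ in range(20)]
X = np.array([hog_features(w) for w in signs + background])
y = np.array([1] * 20 + [0] * 20)

clf = LinearSVC(C=1.0).fit(X, y)                 # the detection-stage SVM

candidate = np.clip(disc + rng.normal(0, 0.05, (64, 64)), 0, 1)
print(clf.predict([hog_features(candidate)]))    # -> [1], classified as a sign
```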