• Title/Summary/Keyword: Vision data


Implementation of Visual Data Compressor for Vision Sensor of Mobile Robot (이동로봇의 시각센서를 위한 동영상 압축기 구현)

  • Kim Hyung O;Cho Kyoung Su;Baek Moon Yeal;Kee Chang Doo
    • Journal of the Korean Society for Precision Engineering, v.22 no.9 s.174, pp.99-106, 2005
  • In recent years, vision sensors have been widely used on mobile robots for navigation and exploration. However, the analog transmission of visual data commonly used in this area has disadvantages, including susceptibility to noise and difficulties in data storage; the large amount of data also makes it hard to apply on a mobile robot. In this paper, a digital video compression technique based on MPEG-4 is proposed as a substitute for the analog approach, using the DWT (Discrete Wavelet Transform) instead of the DCT (Discrete Cosine Transform). TI's DSP chip, the TMS320C6711, is used for the image encoder, and the performance of the proposed method is evaluated in terms of PSNR (Peak Signal-to-Noise Ratio), QP (Quantization Parameter), and bitrate.
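
As a rough illustration of the DWT-in-place-of-DCT idea described in this abstract, the following Python sketch quantizes 2-D wavelet coefficients with a uniform step and reports PSNR. It uses PyWavelets and NumPy; the wavelet, quantization step, and frame size are assumptions for illustration, not the paper's TMS320C6711 encoder.

```python
# Minimal sketch of DWT-based image coding with a QP-like quantization step
# and PSNR measurement. Illustrates the general idea only.
import numpy as np
import pywt

def compress_dwt(img, q_step=16, wavelet="haar", level=2):
    """Quantize multilevel 2-D DWT coefficients with a uniform step."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    quantized = [np.round(coeffs[0] / q_step)]
    for (cH, cV, cD) in coeffs[1:]:
        quantized.append(tuple(np.round(c / q_step) for c in (cH, cV, cD)))
    return quantized

def decompress_dwt(quantized, q_step=16, wavelet="haar"):
    coeffs = [quantized[0] * q_step]
    coeffs += [tuple(c * q_step for c in band) for band in quantized[1:]]
    return pywt.waverec2(coeffs, wavelet)

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

frame = np.random.randint(0, 256, (128, 128))   # stand-in for a camera frame
recon = decompress_dwt(compress_dwt(frame), q_step=16)[:128, :128]
print(f"PSNR: {psnr(frame, recon):.1f} dB")
```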

Real-Time Pipe Fault Detection System Using Computer Vision

  • Kim Hyoung-Seok;Lee Byung-Ryong
    • International Journal of Precision Engineering and Manufacturing, v.7 no.1, pp.30-34, 2006
  • Recently, there has been an increasing demand for computer-vision-based inspection and measurement systems as part of factory automation equipment. In general, it is almost impossible to check every part coming from a part-feeding system by manual inspection alone because of time limitations; manual inspection is therefore usually applied only to sampled parts, and it neither guarantees consistent measuring accuracy nor reduces working time. Thus, to improve inspection speed and accuracy, a computer-aided measuring and analysis method is highly needed. In this paper, a computer-vision-based pipe inspection system is proposed, in which the front- and side-view profiles of three different kinds of pipes coming from a forming line are acquired by computer vision. Edge detection is performed with the Laplace operator, and, to reduce processing time, a modified Hough transform combined with a clustering method is used for straight-line detection. The center points and diameters of the inner and outer circles are then found to determine the eccentricity of the parts. An inspection system has also been built so that the data and images of faulty parts are stored as files and transferred to a server.
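
A minimal OpenCV sketch of the processing chain named in this abstract (Laplace edge detection, Hough-based line detection, and circle fitting for eccentricity) is given below. The thresholds and parameters are placeholders, and the probabilistic Hough transform stands in for the paper's modified, clustered Hough transform.

```python
# Rough illustration of the described inspection steps; values are not from the paper.
import cv2
import numpy as np

def inspect_pipe(gray):
    # Edge detection with the Laplace operator.
    edges = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_16S, ksize=3))
    _, edges = cv2.threshold(edges, 40, 255, cv2.THRESH_BINARY)

    # Straight-line detection (probabilistic Hough transform used here as a stand-in).
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)

    # Inner/outer circle detection to get centers and diameters.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                               param1=120, param2=40, minRadius=5, maxRadius=0)
    eccentricity = None
    if circles is not None and len(circles[0]) >= 2:
        (x1, y1, _), (x2, y2, _) = circles[0][:2]
        # Offset between the inner and outer circle centers.
        eccentricity = float(np.hypot(x1 - x2, y1 - y2))
    return lines, circles, eccentricity
```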

Feature Extraction for Vision Based Micromanipulation

  • Jang, Min-Soo;Lee, Seok-Joo;Park, Gwi-Tae
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회:학술대회논문집), 2002.10a, pp.41.5-41, 2002
  • This paper presents a feature extraction algorithm for vision-based micromanipulation. To guarantee accurate micromanipulation, most micromanipulation systems use a vision sensor. Vision data from an optical microscope or a high-magnification lens carry a vast amount of information; however, characteristics of micro images such as emphasized contours, texture, and noise make it difficult to apply macro image-processing algorithms to them. Grasping-point extraction is a very important task in micromanipulation, because inaccurate grasping points can break the micro gripper or cause micro objects to be missed. To solve those problems and extract grasping points for micromanipulation...
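
Since the abstract is truncated, the sketch below only illustrates one generic way to pick two opposing grasp candidates on a segmented object contour (via a fitted rotated rectangle in OpenCV 4); it is not the paper's algorithm.

```python
# Illustrative grasp-point candidates across the short axis of a fitted box.
import cv2
import numpy as np

def candidate_grasp_points(binary_mask):
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)        # largest object
    (cx, cy), (w, h), angle = cv2.minAreaRect(target)  # fitted rotated box
    # Grasp across the shorter side of the box, through its center
    # (assuming OpenCV reports `angle` for the `w` side of the rectangle).
    theta = np.deg2rad(angle if w <= h else angle + 90.0)
    half = min(w, h) / 2.0
    p1 = (cx + half * np.cos(theta), cy + half * np.sin(theta))
    p2 = (cx - half * np.cos(theta), cy - half * np.sin(theta))
    return p1, p2

mask = np.zeros((200, 200), np.uint8)
cv2.ellipse(mask, (100, 100), (60, 20), 30, 0, 360, 255, -1)  # fake micro object
print(candidate_grasp_points(mask))
```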


Lightening of Human Pose Estimation Algorithm Using MobileViT and Transfer Learning

  • Kunwoo Kim;Jonghyun Hong;Jonghyuk Park
    • Journal of the Korea Society of Computer and Information, v.28 no.9, pp.17-25, 2023
  • In this paper, we propose a model that can perform human pose estimation with fewer parameters and faster inference through a MobileViT-based backbone. The base model achieves its light weight through a structure that combines features of convolutional neural networks with those of the Vision Transformer. The Transformer, the key mechanism in this study, has become increasingly influential as Transformer-based models outperform convolutional neural network-based models in computer vision. Likewise, in human pose estimation, the Vision Transformer-based ViTPose holds the best performance on all major benchmarks such as COCO, OCHuman, and MPII. However, because the Vision Transformer has a heavy model structure with a large number of parameters and requires a relatively large amount of computation, training it is costly for users. Accordingly, the base model compensates for the Vision Transformer's weak inductive bias, which otherwise demands heavy computation, by adding local representation through a convolutional neural network structure. Finally, the proposed model obtains a mean average precision of 0.694 on the MS COCO benchmark with 3.28 GFLOPs and 9.72 million parameters, which are about 1/5 and 1/9 of those of ViTPose, respectively.
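
A hedged sketch of the general idea, a MobileViT backbone with a small heatmap head plus a parameter count, is shown below. The timm model name (`mobilevit_s`) and the deconvolution head are assumptions for illustration and do not reproduce the authors' exact architecture or its 9.72 M-parameter figure.

```python
# MobileViT backbone (timm) + simple heatmap head; illustrative only.
import timm
import torch
import torch.nn as nn

class MobileViTPose(nn.Module):
    def __init__(self, num_joints=17):
        super().__init__()
        # features_only exposes intermediate feature maps of the backbone.
        self.backbone = timm.create_model("mobilevit_s", pretrained=False,
                                          features_only=True)
        channels = self.backbone.feature_info.channels()[-1]
        self.head = nn.Sequential(                 # upsample to joint heatmaps
            nn.ConvTranspose2d(channels, 256, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_joints, kernel_size=1),
        )

    def forward(self, x):
        return self.head(self.backbone(x)[-1])     # per-joint heatmaps

model = MobileViTPose()
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.2f} M")
print(model(torch.randn(1, 3, 256, 192)).shape)    # roughly (1, 17, 32, 24)
```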

A study on the Vision-related Knowledge and Behaviors of the 1st and 2nd graders of Primary School and their Parents in a City (일 초등학교 1·2학년 아동과 학부모의 시력관련 지식 및 행태에 관한 조사)

  • Kim, Seol-Yi;Kang, Hae-Young
    • Journal of the Korean Society of School Health, v.15 no.1, pp.141-150, 2002
  • The purpose of this study was to investigate the visual acuity and the degree of vision-related knowledge and behaviors of 1st- and 2nd-grade primary school children and their parents in a city. The research design was a descriptive study, and the subjects were 579 pupils and their parents in Namwon City, Chonbuk Province. Children's vision screening was conducted with Han's test by the author, a school nurse, according to the guidelines. The data were analyzed by frequency, percentage, mean, standard deviation, t-test, ANOVA, Pearson's correlation coefficient, and the $\chi^2$-test with the SAS program. The subnormal visual acuity group (SVAG) comprised 17.3% of the children and was larger among girls and 2nd graders, but the differences were not statistically significant. The mean score of vision-related knowledge in children was 6.8 out of 10 points, and that of vision-related behaviors was 23.5 out of 33 points. The mean knowledge scores were significantly higher in 2nd graders (p= .02), in girls (p= .02), and in the SVAG (p= .01), and the group with high knowledge scores also showed significantly higher behavior scores (p= .001). The mean score of vision-related knowledge in parents was 6.4 out of 10 points, and that of vision-related behaviors was 28.4 out of 33 points. Parents with high knowledge scores likewise showed significantly higher behavior scores (p= .003). Since the SVAG was larger among 2nd graders and high vision-related knowledge was associated with better vision-related behaviors in both children and their parents, the author suggests a school-based visual health program for them.
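
For readers who want to reproduce the kinds of analyses listed (t-test, ANOVA, Pearson correlation, chi-square), the sketch below runs them in SciPy/pandas on synthetic placeholder data instead of SAS; the column names and values are hypothetical, not the study's data.

```python
# Illustrative statistical tests on made-up survey-like data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 579                                        # sample size reported above
df = pd.DataFrame({
    "grade": rng.choice([1, 2], n),
    "sex": rng.choice(["boy", "girl"], n),
    "svag": rng.choice([0, 1], n, p=[0.827, 0.173]),   # subnormal acuity flag
    "knowledge": rng.integers(0, 11, n),
    "behavior": rng.integers(0, 34, n),
})

t, p_t = stats.ttest_ind(df.loc[df.sex == "girl", "knowledge"],
                         df.loc[df.sex == "boy", "knowledge"])          # t-test
f, p_f = stats.f_oneway(*[g["knowledge"] for _, g in df.groupby("grade")])  # ANOVA
r, p_r = stats.pearsonr(df["knowledge"], df["behavior"])                # Pearson's r
chi2, p_c, dof, _ = stats.chi2_contingency(pd.crosstab(df["svag"], df["grade"]))
print(round(p_t, 3), round(p_f, 3), round(p_r, 3), round(p_c, 3))
```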

An Weldability Estimation of Laser Welded Specimens (레이저 용접물의 용접성 평가)

  • Lee, Jeong-Ick
    • Transactions of the Korean Society of Machine Tool Engineers, v.16 no.1, pp.60-68, 2007
  • A laser vision sensor was used to estimate the weldability of the front bead after high-speed butt laser welding under arbitrary conditions. A real-time GUI (Graphic User Interface) system for weldability assessment was developed on the basis of reference texts and field quality levels. From the bead imperfections, the absolute positions of defects, and the defect-intensity index of the front bead with respect to the formability reference, a weldability estimate and a defect-intensity index for the back bead are produced by a back-propagation neural network. Comparing the back-bead data measured by the laser vision sensor with the values predicted by the neural network shows similar results. Finally, given knowledge of the welding conditions on a production line, the weldability of the back bead can be estimated from the front-bead data alone, without a laser vision sensor or welding-inspection experts, and the results can furthermore serve as the final inspection data for the back bead.
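
A minimal sketch of the back-propagation step described above, a small multilayer perceptron mapping front-bead measurements to a back-bead defect-intensity index, is given below using scikit-learn. The feature layout and data are hypothetical placeholders.

```python
# Small MLP regressor standing in for the paper's back-propagation network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder columns: e.g. front-bead width, height, defect position, defect index.
X = np.random.rand(200, 4)
y = 0.5 * X[:, 0] + 0.3 * X[:, 3] + 0.05 * np.random.randn(200)  # back-bead index

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("R^2 on held-out data:", round(net.score(X_test, y_test), 3))
```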

Enhancing Occlusion Robustness for Vision-based Construction Worker Detection Using Data Augmentation

  • Kim, Yoojun;Kim, Hyunjun;Sim, Sunghan;Ham, Youngjib
    • International conference on construction engineering and project management, 2022.06a, pp.904-911, 2022
  • Occlusion is one of the most challenging problems for computer-vision-based construction monitoring. Due to the intrinsic dynamics of construction scenes, vision-based technologies inevitably suffer from occlusions. Previous researchers have proposed occlusion-handling methods that leverage prior information from sequential images; however, these methods cannot be employed for construction object detection in non-sequential images. As an alternative, this study proposes a data augmentation-based framework that can enhance detection performance under occlusion. The proposed approach is specially designed for rebar occlusions, a distinctive type of occlusion that frequently occurs during construction worker detection. In the proposed method, artificial rebars are synthetically generated to emulate possible rebar occlusions on construction sites, which enables the model to be trained on a variety of occluded images and thereby improves detection performance without requiring sequential information. The effectiveness of the proposed method is validated by showing that it outperforms the baseline model trained without augmentation. The outcomes demonstrate the great potential of data augmentation techniques for occlusion handling, which can be readily applied to typical object detectors without changing their model architecture.
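
The sketch below illustrates the augmentation idea in its simplest form: pasting synthetic rebar-like bars over a training image to emulate rebar occlusion. The bar spacing, width, and color are made-up parameters, not those used in the paper.

```python
# Synthetic rebar-style occlusion augmentation (illustrative parameters).
import numpy as np

def add_synthetic_rebar(image, spacing=40, bar_width=6, color=(90, 60, 50)):
    """Overlay a vertical grid of bar-shaped occluders on an HxWx3 image."""
    occluded = image.copy()
    h, w = image.shape[:2]
    offset = np.random.randint(0, spacing)        # randomize grid position
    for x in range(offset, w, spacing):
        occluded[:, x:x + bar_width] = color      # vertical bar
    return occluded

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
augmented = add_synthetic_rebar(img)
```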


A Study on the Determination of 3-D Object's Position Based on Computer Vision Method (컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구)

  • 김경석
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.8 no.6, pp.26-34, 1999
  • This study presents an alternative method for determining an object's position based on a computer vision method. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The model involves bilinear six-view parameters, which are estimated using the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated for each independent camera, the position of an unknown object is obtained using a sequential estimation scheme that uses data of the unknown points in the 2-D image plane of each camera. This vision control method is robust and reliable, overcoming difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and CCD camera. Finally, the developed vision control method was tested experimentally by determining object positions in space using the computer vision system. The results show that the presented method is precise and compatible.
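
As a generic stand-in for the camera-space estimation described above, the following sketch recovers a 3-D point from its 2-D image coordinates in two cameras by linear least squares (DLT-style triangulation). The projection matrices and image coordinates are hypothetical placeholders for the paper's six-view-parameter calibration, not its actual model.

```python
# Generic two-camera triangulation by linear least squares (illustrative only).
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Solve A.X = 0 for the homogeneous 3-D point X."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0],
                   v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical 3x4 projection matrices from a prior calibration step.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
print(triangulate(P1, P2, uv1=(0.10, 0.05), uv2=(0.08, 0.05)))  # ~ (1.0, 0.5, 10.0)
```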


A Study on Efficient Image Processing and CAD-Vision System Interface (효율적인 화상자료 처리와 시각 시스템과 CAD시스템의 인터페이스에 관한 연구)

  • Park, Jin-Woo;Kim, Ki-Dong
    • Journal of Korean Institute of Industrial Engineers, v.18 no.2, pp.11-22, 1992
  • Up to now, most research on production automation has concentrated on local automation, e.g. CAD, CAM, and robotics. However, to achieve total automation, it is necessary to link these local modules, such as CAD and CAM, into a unified and integrated system. One such missing link is between CAD and the computer vision system, and this paper is an attempt to bridge that gap. We propose algorithms that carry out edge detection, thinning, and pruning on image data of manufactured parts, which are obtained from a video camera and then transmitted to the computer. We also propose a feature extraction and surface determination algorithm that extracts information from the image data; the extracted information is compatible with IGES CAD data. In addition, we suggest a methodology to reduce the search effort over CAD databases, based on a graph sub-matching algorithm on the GEFG (Generalized Edge Face Graph) representation of each part.
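
A brief scikit-image sketch of the pre-processing chain (edge detection, thinning, and a crude pruning of small fragments) is given below; the thresholds are illustrative, and the IGES feature extraction and graph-matching steps are not shown.

```python
# Edge detection -> thinning -> crude pruning, using scikit-image (illustrative).
import numpy as np
from skimage import filters, morphology

def edge_skeleton(gray, min_branch_px=10):
    edges = filters.sobel(gray) > 0.1            # edge detection
    thin = morphology.skeletonize(edges)         # thinning to 1-px curves
    # Pruning: drop tiny connected fragments that act like spurs/noise.
    pruned = morphology.remove_small_objects(thin, min_size=min_branch_px)
    return pruned

gray = np.random.rand(128, 128)                  # stand-in for a part image
skeleton = edge_skeleton(gray)
```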


Development of an algorithm for solving correspondence problem in stereo vision (스테레오 비젼에서 대응문제 해결을 위한 알고리즘의 개발)

  • Im, Hyuck-Jin;Gweon, Dae-Gab
    • Journal of the Korean Society for Precision Engineering, v.10 no.1, pp.77-88, 1993
  • In this paper, we propose a stereo vision system that solves the correspondence problem under large disparity and sudden environmental changes, which result from the small distance between the camera and the working objects. First, a specific feature is decomposed into predefined elementary features, which are then combined into coded data for solving the correspondence problem. We use a neural network to extract the elementary features from the specific feature and to provide adaptability to noise and some changes in shape. A Fourier transform and a log-polar mapping are used to obtain neural network input data that are invariant to shift, scale, and rotation. Finally, we use an associative memory to obtain the coded data of the specific feature from the combination of elementary features. Even for specific features with some variation in shape, we could obtain satisfactory 3-dimensional data from the corresponding codes.
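
The invariance trick described here can be sketched as follows: the FFT magnitude removes translation, and a log-polar mapping turns rotation and scale into shifts before the data are fed to a network. The output size and normalization below are assumptions for illustration; the network and associative memory are omitted.

```python
# Shift/scale/rotation-tolerant descriptor: |FFT| followed by log-polar mapping.
import cv2
import numpy as np

def invariant_descriptor(patch, out_size=(64, 64)):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))    # shift-invariant
    spectrum = np.log1p(spectrum).astype(np.float32)          # compress range
    center = (spectrum.shape[1] / 2, spectrum.shape[0] / 2)
    log_polar = cv2.warpPolar(spectrum, out_size, center,
                              maxRadius=min(center),
                              flags=cv2.WARP_POLAR_LOG)
    return log_polar / (log_polar.max() + 1e-8)               # feed to the net

patch = np.random.rand(128, 128).astype(np.float32)
desc = invariant_descriptor(patch)
```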
