• Title/Abstract/Keyword: Vision Model

Search results: 1,322 (processing time: 0.029 seconds)

반도체 칩의 높이 측정을 위한 스테레오 비전의 측정값 조정 알고리즘 (Adjustment Algorithms for the Measured Data of Stereo Vision Methods for Measuring the Height of Semiconductor Chips)

  • 김영두;조태훈
    • 반도체디스플레이기술학회지 / Vol.10 No.2 / pp.97-102 / 2011
  • Many 2D vision algorithms have been applied to inspection. However, these 2D algorithms are limited in inspection applications that require 3D information, such as the height of semiconductor chips. Stereo vision is a well-known method for measuring the distance from the camera to the object, but it is difficult to apply directly to inspection because of its measurement error. In this paper, we propose two adjustment methods that reduce the error of the height data measured by stereo vision: a weight-value-based model that minimizes the mean squared error, and a simpler average-value-based model. The effectiveness of these algorithms is demonstrated through experiments measuring the height of semiconductor chips.
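
The entry above contrasts a weight-value-based adjustment, which minimizes the mean squared error of the measured heights, with a simpler average-value-based adjustment. A minimal sketch of the two ideas is given below; the inverse-variance weighting, the function names, and the sample numbers are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def average_adjustment(heights):
    """Average-value-based model: simple mean of repeated stereo height measurements."""
    return float(np.mean(heights))

def weighted_adjustment(heights, variances):
    """Weight-value-based model, assumed here to use inverse-variance weights,
    which minimizes the mean squared error of the combined estimate."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(heights, dtype=float)) / np.sum(w))

# Hypothetical repeated stereo measurements of one chip's height (mm) and their variances.
heights = [0.512, 0.498, 0.535, 0.505]
variances = [0.0004, 0.0001, 0.0009, 0.0002]

print("average model :", round(average_adjustment(heights), 4))
print("weighted model:", round(weighted_adjustment(heights, variances), 4))
```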

3-Dimension 영상을 이용한 카메라 초점측정 및 동일축 이동 모델의 영상 정합 (Camera Focal Length Measuring Method and 3-Dimension Image Correspondence of the Axial Motion Model on Stereo Computer Vision)

  • 정기룡
    • 한국항해학회지 / Vol.16 No.3 / pp.77-85 / 1992
  • The camera arrangement used for depth recovery and image correspondence is very important in computer vision. The two conventional camera arrangements for stereo computer vision are the lateral model and the axial-motion model. In this paper, using the axial-motion stereo camera model, an algorithm for measuring the camera focal length and exploiting surface smoothness through the radiance-irradiance relation is proposed for 3-dimensional image correspondence in stereo computer vision. By applying this algorithm, the camera focal length can be measured precisely, and the resolution of 3-dimensional image correspondence is improved compared with that of the axial-motion model without the radiance-irradiance relation.

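In the axial-motion arrangement referenced above, the camera translates along its own optical axis, and depth follows from how a point's radial image distance changes between the two views (from the pinhole relation r = f*R/Z). The sketch below works out that geometry only; it is not the paper's algorithm, which additionally uses the radiance-irradiance relation, and all numbers are illustrative.

```python
def depth_from_axial_motion(r_far, r_near, b):
    """Depth of a scene point from the first (farther) camera position.

    r_far, r_near: radial image distances of the point from the optical axis
                   before and after the camera moves forward by b along that axis.
    Follows from the pinhole relation r = f*R/Z with Z_near = Z_far - b.
    """
    return b * r_near / (r_near - r_far)

def focal_length_from_known_point(r_img, R_world, Z):
    """Focal length (in pixels) from one reference point with known radial
    offset R_world and known depth Z, using r = f*R/Z."""
    return r_img * Z / R_world

# Illustrative numbers only.
b = 50.0                      # axial camera displacement (mm)
r_far, r_near = 12.0, 12.8    # radial image distances (pixels)

Z = depth_from_axial_motion(r_far, r_near, b)
print("depth at first position:", Z, "mm")
print("focal length:", focal_length_from_known_point(r_far, R_world=20.0, Z=Z), "px")
```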

Detection of Traditional Costumes: A Computer Vision Approach

  • Marwa Chacha Andrea;Mi Jin Noh;Choong Kwon Lee
    • 스마트미디어저널 / Vol.12 No.11 / pp.125-133 / 2023
  • Traditional attire has assumed a pivotal role within the contemporary fashion industry. The objective of this study is to construct a computer vision model tailored to the recognition of traditional costumes originating from five distinct countries, namely India, Korea, Japan, Tanzania, and Vietnam. Leveraging a dataset comprising 1,608 images, we proceeded to train the cutting-edge computer vision model YOLOv8. The model yielded an impressive overall mean average precision (MAP) of 96%. Notably, the Indian sari exhibited a remarkable MAP of 99%, the Tanzanian kitenge 98%, the Japanese kimono 92%, the Korean hanbok 89%, and the Vietnamese ao dai 83%. Furthermore, the model demonstrated a commendable overall box precision score of 94.7% and a recall rate of 84.3%. Within the realm of the fashion industry, this model possesses considerable utility for trend projection and the facilitation of personalized recommendation systems.
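
Training a detector such as YOLOv8 on a labeled costume dataset like the one described above can be sketched with the Ultralytics API; the dataset YAML, class list, and hyperparameters below are assumptions, not the study's actual configuration.

```python
# Minimal YOLOv8 training sketch using the Ultralytics package (pip install ultralytics).
# The dataset file "costumes.yaml" is hypothetical; it would list the image folders and
# the five classes (e.g. sari, hanbok, kimono, kitenge, ao_dai) in Ultralytics format.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # start from a pretrained checkpoint
model.train(data="costumes.yaml", epochs=100, imgsz=640)

metrics = model.val()                          # reports precision, recall, mAP50, mAP50-95
print(metrics.box.map50)                       # overall mean average precision at IoU 0.5

results = model("test_image.jpg")              # run detection on a hypothetical image
results[0].show()
```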

Compensation of Installation Errors in a Laser Vision System and Dimensional Inspection of Automobile Chassis

  • Barkovski Igor Dunin;Samuel G.L.;Yang Seung-Han
    • Journal of Mechanical Science and Technology / Vol.20 No.4 / pp.437-446 / 2006
  • Laser vision inspection systems are becoming popular for automated inspection of manufactured components. The performance of such systems can be enhanced by improving the accuracy of the hardware and the robustness of the software used in the system. This paper presents a new approach for enhancing the capability of a laser vision system by applying hardware compensation and using efficient analysis software. A 3D geometrical model is developed to study and compensate for possible distortions in the installation of the gantry robot on which the vision system is mounted. Appropriate compensation, based on the parameters of the 3D model, is applied to the inspection data obtained from the laser vision system. The laser vision system is used for dimensional inspection of the car chassis sub-frame and the lower-arm assembly module. An algorithm based on simplex search techniques is used to analyze the compensated inspection data. The details of the 3D model, the parameters used for compensation, the measurement data obtained from the system, the search algorithm used for analysis, and the results obtained are presented in this paper. The results show that, by applying compensation and using appropriate analysis algorithms, the error in evaluating the inspection data can be significantly reduced, lowering the risk of rejecting good parts.
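
The compensation-plus-simplex-search idea above can be illustrated generically: apply a small rigid-body correction to the measured points and fit its parameters with the Nelder-Mead simplex method. The error model, parameterization, and data below are assumptions, not the paper's 3D model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def apply_rigid(points, params):
    """Apply a small rigid-body transform (rx, ry, rz in rad; tx, ty, tz) to 3D points."""
    rx, ry, rz, tx, ty, tz = params
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    return points @ R.T + np.array([tx, ty, tz])

def compensation_error(params, measured, reference):
    """Sum of squared distances between compensated measurements and the nominal geometry."""
    return np.sum((apply_rigid(measured, params) - reference) ** 2)

# Hypothetical nominal (CAD) points and measurements distorted by an installation error.
reference = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [100.0, 50.0, 0.0], [0.0, 50.0, 20.0]])
measured = apply_rigid(reference, [0.01, -0.005, 0.02, 1.5, -0.8, 0.3])   # simulated distortion

# Fit the compensating transform with the Nelder-Mead simplex method.
result = minimize(compensation_error, x0=np.zeros(6), args=(measured, reference),
                  method="Nelder-Mead", options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
print("compensation parameters:", np.round(result.x, 4))
print("residual after compensation:", compensation_error(result.x, measured, reference))
```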

불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구 (A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance)

  • 정완식;김경석;신광수;주철;김재확;윤현권
    • 한국정밀공학회지 / Vol.16 No.12 / pp.87-98 / 1999
  • This study focuses on the development of a real-time estimation model and a vision control method, together with experimental tests. The proposed method provides a kind of adaptability not otherwise available, in that the relationship between the camera-space locations of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done with an estimation model that generalizes the known manipulator kinematics to accommodate unknown relative camera position and orientation as well as uncertainty of the manipulator. The vision control method is robust and reliable, overcoming the difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, the ability of the real-time vision control method to control the manipulator's position is demonstrated by performing thin-rod placement in space with a two-cue test model, completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.

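The camera-space idea above can be illustrated with a toy estimation step: fit, by least squares, view parameters that map kinematics-predicted cue positions into observed camera-space coordinates, and refit as new samples arrive. The affine form and the data below are assumptions for illustration; the paper's estimation model is more general.

```python
import numpy as np

def estimate_view_parameters(p_nominal, xy_observed):
    """Least-squares fit of an affine map [x, y] = C @ [X, Y, Z, 1] relating
    kinematics-predicted cue positions to their observed camera-space locations.
    (An illustrative stand-in for the paper's estimation model.)"""
    A = np.hstack([p_nominal, np.ones((len(p_nominal), 1))])    # N x 4
    C, *_ = np.linalg.lstsq(A, xy_observed, rcond=None)         # 4 x 2
    return C.T                                                  # 2 x 4

def predict_camera_space(C, p_nominal):
    A = np.hstack([p_nominal, np.ones((len(p_nominal), 1))])
    return A @ C.T

# Hypothetical samples gathered while the arm moves: nominal 3D cue positions from the
# manipulator kinematics and the matching pixel coordinates from the uncalibrated camera.
p_nominal = np.array([[0.30, 0.10, 0.50], [0.40, 0.20, 0.50],
                      [0.35, 0.15, 0.60], [0.25, 0.05, 0.55]])
xy_observed = np.array([[312.0, 240.5], [355.1, 221.0], [330.4, 200.2], [298.7, 252.9]])

C = estimate_view_parameters(p_nominal, xy_observed)
print(predict_camera_space(C, p_nominal[:1]))   # reproject the first cue as a sanity check
```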

점 배치 작업 시 제시된 로봇 비젼 제어알고리즘의 가중행렬의 영향에 관한 연구 (A Study on the Effect of Weighting Matrix of Robot Vision Control Algorithm in Robot Point Placement Task)

  • 손재경;장완식;성윤경
    • 한국정밀공학회지 / Vol.29 No.9 / pp.986-994 / 2012
  • This paper is concerned with the application of a vision control algorithm with a weighting matrix to a robot point-placement task. The proposed vision control algorithm involves four models: the robot kinematic model, the vision system model, the parameter estimation scheme, and the robot joint angle estimation scheme. The proposed algorithm enables the robot to move actively even when the relative position between the camera and the robot and the camera's focal length are unknown. The parameter estimation scheme and the joint angle estimation scheme take the form of nonlinear equations; in particular, the joint angle estimation model includes several restrictive conditions. In this study, a weighting matrix that varies the weighting near the target was applied to the parameter estimation scheme, and the study investigates how this change of the weighting matrix affects the presented vision control algorithm. Finally, the effect of the weighting matrix on the robot vision control algorithm is demonstrated experimentally by performing the robot point-placement task.
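
A generic way to realize a weighting matrix in a nonlinear parameter estimation scheme is weighted nonlinear least squares, where each residual is scaled by a weight that varies with distance to the target. The sketch below illustrates only that mechanism; the Gaussian weighting profile, the model, and the data are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, x):
    """Illustrative nonlinear measurement model (not the paper's vision model)."""
    a, b, c = params
    return a * np.exp(-b * x) + c

def weighted_residuals(params, x, y, weights):
    """Residuals scaled by sqrt(weights); least_squares then minimizes sum(w * r^2)."""
    return np.sqrt(weights) * (model(params, x) - y)

# Hypothetical samples along the approach to a target located at x = 0.
x = np.linspace(0.0, 2.0, 30)
true_params = (1.5, 1.2, 0.3)
rng = np.random.default_rng(0)
y = model(true_params, x) + 0.02 * rng.standard_normal(x.size)

# Weighting matrix (diagonal here): samples closer to the target x = 0 get larger weight.
weights = np.exp(-(x / 0.5) ** 2) + 0.05

fit = least_squares(weighted_residuals, x0=[1.0, 1.0, 0.0], args=(x, y, weights))
print("estimated parameters:", np.round(fit.x, 3))
```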

컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구 (A Study on the Determination of 3-D Object's Position Based on Computer Vision Method)

  • 김경석
    • 한국생산제조학회지 / Vol.8 No.6 / pp.26-34 / 1999
  • This study presents an alternative method for determining an object's position, based on a computer vision method. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The developed model involves the bilinear six-view parameters, which are estimated using the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated independently for each camera, the position of an unknown object is determined using a sequential estimation scheme that uses the data of the unknown points in each camera's 2-D image plane. This vision control method is robust and reliable, overcoming the difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and the CCD camera. Finally, the developed vision control method is tested experimentally by determining object positions in space using the computer vision system. The results show that the presented method is precise and applicable.

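Once view parameters have been estimated independently for each camera, an unknown point's position can be recovered by stacking each camera's projection relations and solving the resulting linear system in a least-squares sense, which mirrors the sequential estimation idea above. The sketch below uses a simple affine camera form as an illustrative stand-in for the bilinear six-view-parameter model, and all numbers are illustrative.

```python
import numpy as np

def locate_point(view_params, observations):
    """Recover the 3D position of a point seen by several cameras.

    view_params : list of 2x4 arrays C_k mapping [X, Y, Z, 1] to camera-space (x, y)
                  (illustrative affine stand-in for the six-view-parameter model).
    observations: list of observed (x, y) for the same point in each camera.
    Stacks two equations per camera and solves the linear system by least squares.
    """
    A_rows, b_rows = [], []
    for C, (x, y) in zip(view_params, observations):
        A_rows.append(C[:, :3])                  # coefficients of X, Y, Z
        b_rows.append(np.array([x, y]) - C[:, 3])
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical view parameters for two cameras and the matching image observations.
C1 = np.array([[800.0, 0.0, -50.0, 320.0], [0.0, 790.0, -40.0, 240.0]])
C2 = np.array([[760.0, 60.0, -45.0, 300.0], [-30.0, 800.0, -35.0, 250.0]])
p_true = np.array([0.2, 0.1, 0.4])
obs1 = C1 @ np.append(p_true, 1.0)
obs2 = C2 @ np.append(p_true, 1.0)

print(locate_point([C1, C2], [obs1, obs2]))      # should recover p_true
```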

3D object recognition using the CAD model and stereo vision

  • Kim, Sung-Il;Choi, Sung-Jun;Won, Sang-Chul
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2003 / pp.669-672 / 2003
  • 3D object recognition is difficult but important in computer vision. The key is to understand the relationship between a geometric structure in three dimensions and its image projection. Most 3D recognition systems construct models either manually or by training on the pose and orientation of the objects, but neither approach is satisfactory. In this paper, we focus on a commercial CAD model as a third type of model building for vision. The models are expressed as Initial Graphics Exchange Specification (IGES) output and reconstructed in a pinhole camera coordinate frame.

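The recognition models above come from CAD data exported in IGES and reconstructed in a pinhole camera frame. The sketch below shows only the pinhole step, projecting CAD vertices under an assumed camera pose and intrinsics; parsing IGES geometry is not shown, and all values are illustrative.

```python
import numpy as np

def project_pinhole(vertices, K, R, t):
    """Project 3D CAD vertices (world frame) into the image with a pinhole camera.

    K : 3x3 intrinsic matrix; R, t : world-to-camera rotation and translation.
    Returns pixel coordinates (N x 2); points are assumed to lie in front of the camera.
    """
    cam = (R @ vertices.T).T + t                 # world -> camera frame
    uvw = (K @ cam.T).T                          # perspective projection
    return uvw[:, :2] / uvw[:, 2:3]

# Illustrative CAD vertices (e.g. a unit-cube corner set that could come from an IGES file).
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                     [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([-0.5, -0.5, 4.0])                  # place the cube in front of the camera

print(project_pinhole(vertices, K, R, t))
```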

로봇 착유시스템의 3차원 유두위치인식을 위한 스테레오비젼 시스템 (A Stereo-Vision System for 3D Position Recognition of Cow Teats on Robot Milking System)

  • 김웅;민병로;이대원
    • Journal of Biosystems Engineering / Vol.32 No.1 / pp.44-49 / 2007
  • A stereo vision system using two monochromatic cameras was developed for a robot milking system (RMS). An algorithm for inverse perspective transformation was developed to acquire 3-D information for all teats. To verify the performance of the algorithm in the stereo vision system, indoor tests were carried out using a test board and model teats. A real cow and a model cow were used to measure distance errors. The maximum distance errors for the test board, model teats, and real teats were 0.5 mm, 4.9 mm, and 6 mm, respectively, and the average distance errors for the model teats and real teats were 2.9 mm and 4.43 mm, respectively. It was therefore concluded that the algorithm is sufficient for application to the RMS.
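
A generic two-view reconstruction of a 3D point, in the spirit of the inverse perspective transformation above, can be sketched with linear (DLT-style) triangulation from two calibrated cameras. The projection matrices and coordinates below are illustrative assumptions, not the system's calibration.

```python
import numpy as np

def triangulate(P1, P2, xy1, xy2):
    """Linear (DLT-style) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices; xy1, xy2 : matching pixel coordinates.
    Builds the standard homogeneous system and solves it by SVD.
    """
    A = np.vstack([
        xy1[0] * P1[2] - P1[0],
        xy1[1] * P1[2] - P1[1],
        xy2[0] * P2[2] - P2[0],
        xy2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative stereo rig: identical intrinsics, second camera shifted 60 mm along x.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])

teat = np.array([10.0, -20.0, 500.0])            # hypothetical 3D teat position (mm)
xy1 = P1 @ np.append(teat, 1.0); xy1 = xy1[:2] / xy1[2]
xy2 = P2 @ np.append(teat, 1.0); xy2 = xy2[:2] / xy2[2]

print(triangulate(P1, P2, xy1, xy2))             # should recover the teat position
```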

스테레오 비젼 및 영상복원 과정의 통합을 위한 확률 모형 (Stochastic Model for Unification of Stereo Vision and Image Restoration)

  • 우운택;정홍
    • 전자공학회논문지B / Vol.29B No.9 / pp.37-49 / 1992
  • The standard definition of computational vision is a set of inverse problems: recovering surfaces from images. Thus most early vision problems share the characteristic of being ill-posed. The main idea for solving ill-posed problems is to restrict the class of admissible solutions by introducing suitable a priori knowledge. Standard regularization methods lead to satisfactory solutions of early vision problems but cannot deal effectively and directly with a few general problems, such as discontinuities and the fusion of information from multiple modules. In this paper, we discuss the limitations of standard regularization theory and present a new stochastic method. We outline a rigorous approach, based on Bayes estimation and a Markov random field (MRF) model, that overcomes part of the ill-posedness of image restoration, edge detection, and stereo vision problems and deals with these issues effectively. This result gives hope that the framework could be useful in the solution of other vision problems.

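In the Bayes/MRF formulation referred to above, the restored image (or disparity field) is taken as the maximum a posteriori estimate; with a Gibbs-distributed MRF prior this is equivalent to minimizing an energy made of a data term, a smoothness term, and a discontinuity (line-process) penalty. The generic form below is a sketch of that framework, not the specific energy used in the paper.

```latex
p(f \mid g) \;\propto\; p(g \mid f)\, p(f), \qquad
f^{*} \;=\; \arg\max_{f} \, p(g \mid f)\, p(f)
\;\;\Longleftrightarrow\;\;
(f^{*}, l^{*}) \;=\; \arg\min_{f,\, l} \, E(f, l)

E(f, l) \;=\; \sum_{i} \bigl(g_i - (Hf)_i\bigr)^{2}
\;+\; \lambda \sum_{\langle i, j\rangle} (f_i - f_j)^{2}\,\bigl(1 - l_{ij}\bigr)
\;+\; \gamma \sum_{\langle i, j\rangle} l_{ij}
```

Here g is the observed data, H a degradation (or matching) operator, f the field to be recovered, l_{ij} ∈ {0, 1} a line process marking a discontinuity between neighboring sites i and j, and λ, γ positive weights.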