• Title/Summary/Keyword: vision model

Search Results: 1,349

Optimal 3D Grasp Planning for unknown objects (임의 물체에 대한 최적 3차원 Grasp Planning)

  • 이현기;최상균;이상릉
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.05a
    • /
    • pp.462-465
    • /
    • 2002
  • This paper deals with the problem of synthesizing stable and optimal grasps of unknown objects with a 3-finger hand. Previous robot grasp research has mainly analyzed either unknown 2D objects sensed by a vision sensor or known 3D objects such as cylinders and hexahedra. Extending that work, this paper proposes an algorithm to analyze grasps of unknown 3D objects sensed by a vision sensor. This is achieved in two steps. The first step builds a 3D geometric model of the unknown object by stereo matching, a 3D computer vision technique. The second step finds the optimal grasping points. For this step we choose the 3-finger hand because it has the characteristics of a multi-finger hand and is easy to model. To find the optimal grasping points, a genetic algorithm is used, and an objective function minimizing the admissible fingertip force applied to the object is formulated. The algorithm is verified by computer simulation, in which the optimal grasping points of known objects at different angles are checked.

  • PDF
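
The entry above searches for grasp points with a genetic algorithm but does not spell out its operators. Below is a minimal, generic GA sketch in that spirit: candidates are triples of contact-point indices on a measured point cloud, and the fitness is a made-up contact-spread heuristic standing in for the paper's admissible-fingertip-force objective. All names and parameters are illustrative.

```python
import random

def fitness(grasp, points):
    """Hypothetical objective: prefer 3 contacts spread widely apart."""
    a, b, c = (points[i] for i in grasp)
    def d(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    return d(a, b) + d(b, c) + d(c, a)   # larger spread = better grasp proxy

def evolve(points, pop_size=50, generations=100):
    n = len(points)
    pop = [random.sample(range(n), 3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: -fitness(g, points))
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = list(dict.fromkeys(p1[:2] + p2))[:3]   # crossover
            if random.random() < 0.1:                      # mutation
                child[random.randrange(3)] = random.randrange(n)
            if len(set(child)) == 3:
                children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, points))

cloud = [(random.random(), random.random(), random.random()) for _ in range(200)]
print(evolve(cloud))   # indices of three widely spread contact points
```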

Development of a Robot arm capable of recognizing 3-D object using stereo vision

  • Kim, Sungjin;Park, Seungjun;Park, Hongphyo;Sangchul Won
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.128.6-128
    • /
    • 2001
  • In this paper, we present a methodology of sensing and control for a robot system designed to grasp an object and move it to a target point. A stereo vision system is employed to compute a depth map representing the distance from the camera. In the stereo vision system we use a center-referenced projection to represent the discrete match space for stereo correspondence. This center-referenced disparity space contains new occlusion points in addition to the match points, which we exploit to create a concise representation of correspondence and occlusion. From the depth map we then find the target object's pose and position in 3D space using model-based recognition.

  • PDF
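
As context for the depth-map step described above, here is a baseline stereo sketch using OpenCV block matching. The paper's center-referenced disparity space with explicit occlusion points is not modeled here; the file names, focal length, and baseline are assumed values.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

focal_px = 700.0     # focal length in pixels (assumed)
baseline_m = 0.12    # camera baseline in meters (assumed)
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d
```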

Determination of Object Position Using Robot Vision (로보트 비전을 이용한 대상물체의 위치 결정에 관한 연구)

  • Park, K.T.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.13 no.9
    • /
    • pp.104-113
    • /
    • 1996
  • In a robot system, manipulation requires information about the task and about the objects to be handled, which can take a variety of positions and orientations. In current industrial robot systems, determining the position and orientation of objects in industrial environments is a major problem. In order to pick up an object, the robot needs information about the position and orientation of the object, and about the relation between the object and the gripper. When sensing is performed with a pinhole-model camera, the mathematical relationship between object points and their images is expressed in terms of perspective, i.e., central projection. In this paper, a new approach to determining the supporting points related to the position and orientation of an object using a robot vision system is developed and verified in an experimental setup. The results will be useful for industrial, agricultural, and autonomous robots.

  • PDF
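
The abstract above relies on the pinhole-camera (central projection) relationship between object points and image points. A minimal sketch of that mapping, with illustrative intrinsic parameters:

```python
import numpy as np

# A 3D point in the camera frame maps to pixel coordinates through the
# intrinsic matrix K. The values of fx, fy, cx, cy below are made up.
K = np.array([[800.0,   0.0, 320.0],    # fx, skew, cx
              [  0.0, 800.0, 240.0],    # fy, cy
              [  0.0,   0.0,   1.0]])

def project(point_cam):
    """Project a 3D point (camera frame, Z > 0) to pixel coordinates."""
    uvw = K @ np.asarray(point_cam, dtype=float)
    return uvw[:2] / uvw[2]            # perspective divide

print(project([0.1, -0.05, 2.0]))      # -> [360. 220.]
```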

Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.11 no.1
    • /
    • pp.31-40
    • /
    • 2010
  • For weapon cueing and Head-Mounted Displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve performance in vision processing, we separate structure estimation from motion estimation. The structure estimation tracks features that are part of the helmet model structure in the scene, and the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested using synthetic and real data, and the results show that the sensor fusion is successful.
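
The fusion filter above is an extended Kalman filter. Here is a bare-bones, generic EKF step, not the paper's helmet-tracker formulation; the state, models, and noise values in the example are placeholders.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict with the (nonlinear) motion model f and its Jacobian F.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update with the measurement z through model h and its Jacobian H.
    H_k = H(x_pred)
    y = z - h(x_pred)                          # innovation
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Toy example: 1D constant-velocity state [position, velocity], position measured.
f = lambda x, u: np.array([x[0] + 0.01 * x[1], x[1]])
F = lambda x, u: np.array([[1.0, 0.01], [0.0, 1.0]])
h = lambda x: x[:1]
H = lambda x: np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, None, np.array([0.05]), f, F, h, H,
                1e-4 * np.eye(2), 1e-2 * np.eye(1))
```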

F-Hessian SIFT-Based Railroad Level-Crossing Vision System (F-Hessian SIFT기반의 철도건널목 영상 감시 시스템)

  • Lim, Hyung-Sup;Yoon, Hak-Sun;Kim, Chel-Huan;Ryu, Deung-Ryeol;Cho, Hwang;Lee, Key-Seo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.2
    • /
    • pp.138-144
    • /
    • 2010
  • This paper presents an experimental analysis of an F-Hessian SIFT-based railroad level-crossing safety vision system. The region of surveillance, regions of interest, and data matching based on extracted feature points were examined under laboratory conditions using a small-scale model rig. The real-time system was evaluated using the F-Hessian-based SIFT feature tracking method and other common algorithms.
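
A small sketch of the feature detection and matching underlying the system above. OpenCV's SURF, whose detector is the Fast-Hessian the title refers to, lives in the non-free module, so plain SIFT stands in here; the file names are placeholders.

```python
import cv2

ref = cv2.imread("crossing_reference.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("crossing_current.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(cur, None)

# Lowe's ratio test keeps only distinctive matches.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches in the region of interest")
```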

Learning Orientation Factors Affecting Company Innovation and Innovation Capability: Textile versus Non-textile Manufacturers

  • Yoh, Eun-Ah
    • International Journal of Human Ecology
    • /
    • v.10 no.1
    • /
    • pp.1-11
    • /
    • 2009
  • The effects of learning orientation on company innovation and innovation capability are explored based on survey data collected from 154 small and medium-sized manufacturing firms. The theoretical links between learning orientation and company innovation, as well as innovation capability, are investigated in four research models that compare textile and non-textile manufacturing firms. Learning orientation has a significant effect on company innovation and innovation capability in the model test. However, some of the three segmented factors of learning orientation (commitment to learning, shared vision, and open-mindedness) have no significant effect on company innovation and innovation capability. Company innovation and innovation capability of textile manufacturing firms are predicted by commitment to learning and shared vision, whereas those of non-textile firms are determined by shared vision and open-mindedness. These differences show that firms may need to weight distinctive aspects of learning orientation according to their business category in order to enhance company innovation.

Selection and Allocation of Point Data with Wavelet Transform in Reverse Engineering (역공학에서 웨이브렛 변환을 이용한 점 데이터의 선택과 할당)

  • Ko, Tae-Jo;Kim, Hee-Sool
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.9
    • /
    • pp.158-165
    • /
    • 2000
  • Reverse engineering reproduces products by directly extracting geometric information from physical objects such as clay models and wooden mock-ups. The fundamental task in reverse engineering is to acquire the geometric data needed to model the objects. This research proposes a novel method for data acquisition aimed at unmanned, fast, and precise measurement. It is realized by fusing a CCD camera using a structured light beam with a touch trigger sensor. The vision system provides global information about the object, but since the amount of vision data is very large, the number of points and their allocation to the touch sensor are critical for productivity. We therefore applied a wavelet transform to reduce the number of data points and to allocate the positions of the touch probe. Simulated and experimental results show that this method is good enough for data reduction.

  • PDF
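
To illustrate the wavelet-based reduction idea above, the sketch below decomposes a dense 1D profile, discards small detail coefficients, and reconstructs it; where large coefficients survive is where a touch probe would be allocated. The signal, wavelet, and threshold are assumptions, not the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets

# A dense measured profile (synthetic stand-in for vision data).
profile = np.sin(np.linspace(0, 4 * np.pi, 512)) + 0.01 * np.random.randn(512)

coeffs = pywt.wavedec(profile, "db4", level=4)
threshold = 0.05
slim = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard") for c in coeffs[1:]]
reduced = pywt.waverec(slim, "db4")

kept = sum(int(np.count_nonzero(c)) for c in slim)
total = sum(c.size for c in coeffs)
print(f"kept {kept}/{total} coefficients")
```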

High Accuracy Vision-Based Positioning Method at an Intersection

  • Manh, Cuong Nguyen;Lee, Jaesung
    • Journal of information and communication convergence engineering
    • /
    • v.16 no.2
    • /
    • pp.114-124
    • /
    • 2018
  • This paper illustrates a vision-based vehicle positioning method at an intersection to support C-ITS. It removes minor shadows that cause the merging problem by simply eliminating the fractional parts of a quotient image. In order to separate occlusions, it first performs a distance transform to analyze the content of a single foreground object and find seeds, each of which represents one vehicle. It then applies the watershed to find the natural border between two cars. In addition, a general vehicle model and a corresponding space estimation method are proposed. For performance evaluation, the corresponding ground-truth data are read and compared with the vision-based detected data, and two criteria, IOU and DEER, are defined to measure the accuracy of the extracted data. The evaluation shows an average IOU of 0.65 with a hit ratio of 97%, and an average DEER of 0.0467, which corresponds to a positioning error of 32.7 centimeters.
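
The occlusion-separation step above follows the standard distance-transform-plus-watershed recipe. A sketch of that pipeline with OpenCV (not the paper's code; the input mask file is a placeholder, assumed to be a binary foreground blob containing two touching vehicles):

```python
import cv2
import numpy as np

mask = cv2.imread("foreground_mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Peaks of the distance transform act as one seed per vehicle.
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
_, seeds = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
seeds = seeds.astype(np.uint8)

n_labels, markers = cv2.connectedComponents(seeds)
markers = markers + 1                     # reserve 0 for the unknown region
unknown = cv2.subtract(mask, seeds)       # foreground not yet assigned a seed
markers[unknown == 255] = 0

color = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
cv2.watershed(color, markers)             # borders between cars get label -1
```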

Design of Clustering CoaT Vision Model Based on Transformer (Transformer 기반의 Clustering CoaT 모델 설계)

  • Bang, Ji-Hyeon;Park, Jun;Jung, Se-Hoon;Sim, Chun-Bo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.05a
    • /
    • pp.546-548
    • /
    • 2022
  • Recently, research that introduces Transformers into computer vision has been actively pursued. Because these models use the Transformer architecture almost unchanged, they scale well and have shown excellent performance in large-scale training. However, vision models based on the Transformer lack inductive bias and therefore require large amounts of data and time to train. For this reason, many improved Vision Transformer models are currently being studied. In this paper, we likewise propose a Clustering CoaT model that addresses the problems of the Vision Transformer.
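
For context on the Vision Transformer baseline this entry improves on, here is a minimal ViT-style classifier in PyTorch. The patch size, dimensions, and depth are arbitrary, and this is not the proposed Clustering CoaT architecture.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, img=32, patch=4, dim=64, depth=2, heads=4, classes=10):
        super().__init__()
        # Non-overlapping patch embedding via a strided convolution.
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n = (img // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n, dim))  # learned positions
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                             # x: (B, 3, 32, 32)
        x = self.embed(x).flatten(2).transpose(1, 2)  # (B, patches, dim)
        x = self.encoder(x + self.pos)
        return self.head(x.mean(dim=1))               # mean-pool instead of CLS

logits = TinyViT()(torch.randn(2, 3, 32, 32))         # -> shape (2, 10)
```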

Automatic indoor progress monitoring using BIM and computer vision

  • Deng, Yichuan;Hong, Hao;Luo, Han;Deng, Hui
    • International conference on construction engineering and project management
    • /
    • 2017.10a
    • /
    • pp.252-259
    • /
    • 2017
  • The existing manual method for recording the actual progress of a construction site has drawbacks: it relies heavily on the experience of professional engineers and is labor-intensive, time-consuming, and error-prone. A method integrating computer vision and BIM (Building Information Modeling) is presented for automatic indoor progress monitoring. The developed method can accurately calculate the engineering quantity of target components in time-lapse images. First, sample images of on-site targets are collected to train a classifier. After the construction images are processed by edge detection and the classifier, a voting algorithm based on geometry and vector operations delineates the target contour. Then, according to the camera calibration principle, image pixel coordinates are converted into real-world coordinates, which are corrected with the help of the geometric information in the BIM model. Finally, the actual engineering quantity is calculated.

  • PDF
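
The pixel-to-world conversion described above can be sketched with a plane homography: four pixel locations with known floor-plane coordinates (e.g. taken from the BIM model) define a mapping for every other pixel on that plane. All coordinates below are invented for illustration.

```python
import cv2
import numpy as np

# Four image corners of a floor region and their known site coordinates.
img_pts = np.float32([[102, 540], [880, 531], [760, 190], [215, 196]])
world_pts = np.float32([[0.0, 0.0], [6.0, 0.0], [6.0, 9.0], [0.0, 9.0]])  # meters

H = cv2.getPerspectiveTransform(img_pts, world_pts)

pixel = np.float32([[[450, 360]]])       # shape (1, 1, 2) as OpenCV expects
world = cv2.perspectiveTransform(pixel, H)
print(world.ravel())                     # approximate (X, Y) in meters
```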