• Title/Summary/Keyword: Vision Model

Search results: 1,314

Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei; Guan, Fang-li; Xu, Ai-jun
    • Journal of Information Processing Systems / v.16 no.1 / pp.155-170 / 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance of an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for image points with the same abscissa, the ordinates are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it. The vertical distance of the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range is derived from the depth and this vertical distance. Experimental results show that ranging by this method achieves higher accuracy than methods based on binocular vision systems. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m, and 1.71% when it is 3-10 m. Compared with other methods based on monocular vision systems, this method does not need calibration before ranging and avoids the error caused by data fitting.
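The depth-extraction idea above can be illustrated with a short sketch: fit the assumed linear relation between image ordinate and imaging angle from a few conjugate points, then convert the angle into a ground distance using the camera height. All numbers below (camera height, reference points) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Camera height above the horizontal plane, in metres (assumed for illustration).
H = 1.5

# Conjugate reference points: image ordinate (pixel row) -> known imaging angle (rad).
ref_ordinates = np.array([400.0, 600.0, 800.0])
ref_angles = np.array([0.35, 0.52, 0.70])

# Assume angle = a * y + b, as in the linear depth extraction model, and fit a, b.
a, b = np.polyfit(ref_ordinates, ref_angles, 1)

def depth_from_ordinate(y):
    """Depth of a ground-plane point whose image ordinate is y."""
    angle = a * y + b            # imaging angle of the point
    return H / np.tan(angle)     # horizontal distance from the camera

print(round(depth_from_ordinate(700.0), 2))
```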

Vision-based Autonomous Semantic Map Building and Robot Localization (영상 기반 자율적인 Semantic Map 제작과 로봇 위치 지정)

  • Lim, Joung-Hoon; Jeong, Seung-Do; Suh, Il-Hong; Choi, Byung-Uk
    • Proceedings of the KIEE Conference / 2005.10b / pp.86-88 / 2005
  • An autonomous semantic-map building method is proposed, with the robot localized in the semantic map. Our semantic map is organized by objects represented as SIFT features, and vision-based relative localization is employed as a process model to implement extended Kalman filters. Thus, we expect robust SLAM performance even under poor conditions in which localization cannot be achieved by classical odometry-based SLAM.
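As a rough illustration of the estimation machinery mentioned above, the sketch below shows one extended Kalman filter predict/update cycle in which vision-based relative localization supplies the motion input and a SIFT-identified object serves as a range/bearing landmark. The state layout, noise matrices, and measurement model are simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """Predict robot pose [x, y, theta] from a relative motion u = (dx, dy, dtheta)."""
    c, s = np.cos(x[2]), np.sin(x[2])
    x_pred = x + np.array([c * u[0] - s * u[1], s * u[0] + c * u[1], u[2]])
    F = np.array([[1.0, 0.0, -s * u[0] - c * u[1]],
                  [0.0, 1.0,  c * u[0] - s * u[1]],
                  [0.0, 0.0, 1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Correct the pose with a range/bearing measurement z of a known object landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    Hm = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                   [ dy / q,          -dx / q,         -1.0]])
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing error
    return x + K @ innov, (np.eye(3) - K @ Hm) @ P
```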

Development of a Pig's Weight Estimating System Using Computer Vision (컴퓨터 시각을 이용한 돼지 무게 예측시스템의 개발)

  • 엄천일; 정종훈
    • Journal of Biosystems Engineering / v.29 no.3 / pp.275-280 / 2004
  • The main objective of this study was to develop and evaluate a model for estimating pig weight using computer vision, to improve management in swine farms in Korea. This research was carried out in two steps: 1) finding a model that relates the projection area to the weight of a pig; 2) implementing the model in a computer vision system, consisting mainly of a monochrome CCD camera, a frame grabber, and a computer, for estimating the weight of pigs in a non-contact, real-time manner. The model was developed under the important assumption that there were no observable genetic differences among the pigs. The main results were: 1) the relationship between the projection area and the weight of a pig was W = 0.0569 × A - 32.585 (R² = 0.953), where W is the weight in kg and A is the projection area of a pig in cm²; 2) the model could estimate the weight of pigs with an error of less than 3.5%.
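The reported regression is simple enough to use directly; a minimal helper is shown below (the example area value is made up for illustration).

```python
def estimate_pig_weight(area_cm2):
    """Estimate live weight (kg) from the top-view projection area (cm^2),
    using the regression W = 0.0569 * A - 32.585 reported above."""
    return 0.0569 * area_cm2 - 32.585

# Example: a projection area of 2500 cm^2 gives roughly 109.7 kg.
print(estimate_pig_weight(2500.0))
```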

A Study on the Rigid Body Placement Task of a Robot System Based on the Computer Vision System (컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구)

  • 장완식; 유창규; 신광수; 김호윤
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.1114-1119 / 1995
  • This paper presents the development of an estimation model and a control method based on a new computer vision scheme. The proposed control method uses a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed based on a model that generalizes known 4-axis SCARA robot kinematics to accommodate unknown relative camera position and orientation. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iterative method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid body placement task. The results show that the control scheme is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as assembly and welding.
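The sequential estimation described above amounts to iteratively solving a nonlinear least-squares problem that maps unknown parameters (camera view parameters, then joint angles) to observed image-plane coordinates. The generic Gauss-Newton loop below sketches that iteration; the residual function, the six-parameter camera model, and the SCARA kinematics are deliberately left as placeholders rather than reproduced from the paper.

```python
import numpy as np

def gauss_newton(residual_fn, x0, n_iter=20, eps=1e-6):
    """Minimize the sum of squared residuals with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):                      # numeric Jacobian, column by column
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual_fn(x + dx) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-9:              # converged
            break
    return x

# Usage sketch: residual_fn would compare image points predicted from the camera
# model and robot kinematics with the points actually observed in each camera.
```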

A Study on Rigid Body Placement Task Based on Robot Vision System (로봇 비젼시스템을 이용한 강체 배치 실험에 대한 연구)

  • 장완식; 신광수; 안철봉
    • Journal of the Korean Society for Precision Engineering / v.15 no.11 / pp.100-107 / 1998
  • This paper presents the development of an estimation model and a control method based on a new robot vision scheme. The proposed control method uses a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed based on a model that generalizes known 4-axis SCARA robot kinematics to accommodate unknown relative camera position and orientation. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iterative method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid body placement task. The results show that the control scheme is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as assembly and welding.

Suboptimal video coding for machines method based on selective activation of in-loop filter

  • Ayoung Kim; Eun-Vin An; Soon-heung Jung; Hyon-Gon Choo; Jeongil Seo; Kwang-deok Seo
    • ETRI Journal / v.46 no.3 / pp.538-549 / 2024
  • A conventional codec aims to increase the compression efficiency for transmission and storage while maintaining video quality. However, as the number of platforms using machine vision rapidly increases, a codec that increases the compression efficiency and maintains the accuracy of machine vision tasks must be devised. Hence, the Moving Picture Experts Group created a standardization process for video coding for machines (VCM) to reduce bitrates while maintaining the accuracy of machine vision tasks. In particular, in-loop filters have been developed for improving the subjective quality and machine vision task accuracy. However, the high computational complexity of in-loop filters limits the development of a high-performance VCM architecture. We analyze the effect of an in-loop filter on the VCM performance and propose a suboptimal VCM method based on the selective activation of in-loop filters. The proposed method reduces the computation time for video coding by approximately 5% when using the enhanced compression model and 2% when employing a Versatile Video Coding test model while maintaining the machine vision accuracy and compression efficiency of the VCM architecture.
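The abstract does not spell out the selection rule, but the general idea of trading in-loop filter complexity against machine-task benefit can be sketched as a configuration choice. Everything below (filter names, costs, the greedy rule) is a hypothetical illustration, not the authors' method or any codec's API.

```python
from dataclasses import dataclass

@dataclass
class InLoopFilterConfig:
    deblocking: bool = False
    sao: bool = False
    alf: bool = False

def select_filters(task_gain, time_budget_ms, filter_cost_ms):
    """Greedily enable filters, cheapest first, while they fit the complexity budget
    and the estimated machine-task accuracy gain justifies filtering at all."""
    cfg = InLoopFilterConfig()
    remaining = time_budget_ms
    for name, cost in sorted(filter_cost_ms.items(), key=lambda kv: kv[1]):
        if task_gain > 0 and cost <= remaining:
            setattr(cfg, name, True)
            remaining -= cost
    return cfg

# Example with made-up per-frame costs (milliseconds).
print(select_filters(task_gain=0.4, time_budget_ms=3.0,
                     filter_cost_ms={"deblocking": 1.0, "sao": 1.5, "alf": 4.0}))
```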

Water Demand and Supply Stability Analysis Using Shared Vision Model (Shared Vision 모형을 이용한 용수수급의 안정성 분석)

  • Jeong, Sang-Man; Lee, Joo-Heon; Ahn, Joong-Kun
    • Journal of Korea Water Resources Association / v.37 no.7 / pp.569-579 / 2004
  • Recently, extreme droughts have often occurred due to global warming and serious climate change. In addition, water pollution in developed areas, opposition from residents of upstream areas, and water concessions among local governments drive the national demand for cleaner water resources from upstream, undeveloped areas. This also raises the need for better water supply management. Therefore, as water demand changes rapidly in the metropolitan areas, the water supply capability of the North Han River basin dams should be appropriately investigated. In this study, we developed a simulation system in the STELLA software environment, a shared vision model, to analyze the possibility of stable water supply from the North Han River basin dams. Three different rules are applied to this model by dividing the water level into minimum (Rule 1), medium (Rule 2), and maximum (Rule 3). Using these rules, changes in safe yield are analyzed with respect to the reservoir rule curves and hydropower releases.
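The rule-based operation described above can be mimicked with a very small mass-balance loop: the release applied in each period depends on which of three storage zones (Rule 1 = minimum, Rule 2 = medium, Rule 3 = maximum water level) the reservoir currently occupies. The zone boundaries, release fractions, and units below are illustrative placeholders, not the study's calibrated rules.

```python
def simulate_supply(storage, inflows, demand, capacity, zone_fracs=(0.3, 0.6)):
    """Simple monthly mass balance with a three-zone release rule."""
    releases = []
    for q_in in inflows:
        frac = storage / capacity
        if frac < zone_fracs[0]:      # Rule 1: low storage, curtail supply
            target = 0.7 * demand
        elif frac < zone_fracs[1]:    # Rule 2: medium storage, meet demand
            target = demand
        else:                         # Rule 3: high storage, demand plus hydropower release
            target = 1.2 * demand
        release = min(target, storage + q_in)               # cannot release more than is available
        storage = min(storage + q_in - release, capacity)   # spill above capacity is ignored here
        releases.append(release)
    return releases, storage

# Example run with made-up numbers (volumes in million m^3 per month).
print(simulate_supply(storage=600.0, inflows=[120.0, 80.0, 40.0],
                      demand=100.0, capacity=1000.0))
```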

A Study on Visual Feedback Control of a Dual Arm Robot with Eight Joints

  • Lee, Woo-Song; Kim, Hong-Rae; Kim, Young-Tae; Jung, Dong-Yean; Han, Sung-Hyun
    • Institute of Control, Robotics and Systems Conference Proceedings / 2005.06a / pp.610-615 / 2005
  • Visual servoing is the fusion of results from many elemental areas, including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. It has much in common with research into active vision and structure from motion, but is quite different from the often-described use of vision in hierarchical task-level robot control systems. In this paper, we present a new approach to visual feedback control using image-based visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. Stereo vision enables us to calculate an exact image Jacobian not only around a desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without such a priori knowledge as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. This paper describes a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results and compared with a conventional method on a dual-arm robot made by Samsung Electronics Co., Ltd.
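A compact way to see the image-based control law referred to above: the camera velocity command is v = -λ J⁺ (s - s*), where J is the image Jacobian (interaction matrix) of the tracked features. The sketch below uses the standard point-feature interaction matrix; with stereo, the depth Z entering it can be measured from disparity rather than assumed. The gains and feature values are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_step(s, s_star, J, gain=0.5):
    """6-DOF camera velocity twist driving the feature error (s - s*) to zero."""
    return -gain * np.linalg.pinv(J) @ (s - s_star)

# Example with two point features (stacked 4x1 error, 4x6 Jacobian).
J = np.vstack([interaction_matrix(0.1, 0.0, 1.2), interaction_matrix(-0.1, 0.05, 1.4)])
s = np.array([0.10, 0.00, -0.10, 0.05])
s_star = np.array([0.05, 0.02, -0.05, 0.02])
print(ibvs_step(s, s_star, J))
```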

ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue; Cho, Young Im
    • Journal of the Korea Society of Computer and Information / v.24 no.6 / pp.21-28 / 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, they have been deployed in all fields of computer vision. Action recognition, as an important branch of human perception and computer vision research, has attracted more and more attention. Action recognition is a challenging task due to the special complexity of human movement; the same movement may vary between individuals. Human actions exist as continuous image frames in video, so action recognition requires more computational power than processing static images, and the simple use of a CNN cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it more effective to focus on motion features and improves performance. It also intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose a 3D dense convolutional network based on an attention mechanism (ADD-Net) for recognizing human motion behavior in video.
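As a rough sketch of the kind of building block implied above, the snippet below combines a 3-D convolution, a dense (concatenating) connection, and a simple channel-attention gate. The layer sizes and the form of attention are assumptions for illustration, not the ADD-Net architecture itself.

```python
import torch
import torch.nn as nn

class AttnDense3DBlock(nn.Module):
    """3-D conv block with channel attention and a dense concatenation."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm3d(channels)
        # Channel attention: squeeze over (T, H, W), then gate the feature maps.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, T, H, W)
        feat = torch.relu(self.bn(self.conv(x)))
        gated = feat * self.attn(feat)          # emphasize informative channels
        return torch.cat([x, gated], dim=1)     # dense connection doubles the channels

# Example: a clip of 8 frames with 16 feature channels at 32x32 resolution.
block = AttnDense3DBlock(channels=16)
clip = torch.randn(2, 16, 8, 32, 32)
print(block(clip).shape)                        # torch.Size([2, 32, 8, 32, 32])
```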

A Study on the Point Placement Task of Robot System Based on the Vision System (비젼시스템을 이용한 로봇시스템의 점배치실험에 관한 연구)

  • Jang, Wan-Shik; You, Chang-gyou
    • Journal of the Korean Society for Precision Engineering / v.13 no.8 / pp.175-183 / 1996
  • This paper presents a three-dimensional robot task using the vision control method. A minimum of two cameras is required to place points on the end effectors of n-degree-of-freedom manipulators relative to other bodies. This is accomplished using a sequential estimation scheme that permits placement of these points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed based on a model that generalizes known three-axis manipulator kinematics to accommodate unknown relative camera position and orientation. This model uses six uncertainty-of-view parameters estimated by an iterative method.
