• Title/Summary/Keyword: Vision-based

3,459 search results

A Study on the Rigid Body Placement Task of a Robot System Based on a Computer Vision System (컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구)

  • 장완식;유창규;신광수;김호윤
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1114-1119
    • /
    • 1995
  • This paper presents the development of an estimation model and a control method based on a new computer vision approach. The proposed control method uses a sequential estimation scheme that permits placement of a rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed from a model that generalizes the known 4-axis SCARA robot kinematics to accommodate unknown relative camera position and orientation. Based on the parameters estimated for each camera, the robot joint angles are estimated by an iterative method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid body placement task. The results show that the control scheme is precise and robust. This feature can open the door to a range of multi-axis robot applications such as assembly and welding.

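The iterative joint-angle estimation mentioned in this abstract can be illustrated with a generic Gauss-Newton loop. The sketch below is not the authors' SCARA formulation; it assumes a simplified planar 2-link arm, a trivial scaled projection (`project_to_image`), and illustrative link lengths, purely to show how joint angles can be iterated until the projected tool point reaches a target image coordinate.

```python
import numpy as np

# Minimal sketch (not the paper's model): iteratively estimate joint angles
# so that the projected tool point reaches a target image coordinate.
# A planar 2-link arm and a scaled-orthographic "camera" are assumed.

L1, L2 = 0.30, 0.25          # assumed link lengths [m]
SCALE = 800.0                # assumed pixels-per-meter camera scale

def project_to_image(theta):
    """Forward kinematics of a 2-link arm followed by a simple projection."""
    t1, t2 = theta
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return SCALE * np.array([x, y])          # image coordinates [px]

def estimate_joint_angles(target_px, theta0, iters=50, eps=1e-6):
    """Gauss-Newton iteration with a finite-difference Jacobian."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        r = project_to_image(theta) - target_px        # image-space residual
        J = np.zeros((2, 2))
        for j in range(2):                             # numerical Jacobian
            d = np.zeros(2); d[j] = eps
            J[:, j] = (project_to_image(theta + d) - project_to_image(theta)) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta += step
        if np.linalg.norm(step) < 1e-9:
            break
    return theta

target = project_to_image(np.array([0.6, -0.4]))          # synthetic target
print(estimate_joint_angles(target, theta0=[0.1, 0.1]))   # converges to ~[0.6, -0.4]
```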

Vision Based Map-Building Using Singular Value Decomposition Method for a Mobile Robot in Uncertain Environment

  • Park, Kwang-Ho;Kim, Hyung-O;Kee, Chang-Doo;Na, Seung-Yu
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.101.1-101
    • /
    • 2001
  • This paper describes grid mapping for a vision-based mobile robot in an uncertain indoor environment. Map building is a prerequisite for navigation of a mobile robot, and the problem of feature correspondence across two images is well known to be of crucial importance for vision-based mapping. We use a stereo matching algorithm obtained by singular value decomposition of an appropriate correspondence strength matrix. This correspondence strength is a correlation weight over local measurements that quantifies the similarity between features. The visual range data from the reconstructed disparity image form an occupancy grid representation. The occupancy map is a grid-based map in which each cell has a value indicating the probability at that location ...

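SVD of a correspondence strength matrix is commonly used for feature pairing in the style of Scott and Longuet-Higgins. The sketch below is a generic illustration under that assumption, not the authors' exact stereo formulation: it builds a Gaussian proximity matrix, replaces its singular values with ones, and accepts pairs that dominate both their row and their column.

```python
import numpy as np

def svd_feature_matching(pts_a, pts_b, sigma=10.0):
    """Pair two feature sets via SVD of a Gaussian proximity (correspondence
    strength) matrix, in the spirit of Scott & Longuet-Higgins."""
    # Proximity matrix: G[i, j] = exp(-d_ij^2 / (2 sigma^2))
    d2 = ((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))

    # Replace the singular values with ones to "orthogonalize" G.
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt

    # Accept pairs that are maximal in both their row and their column.
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        if i == int(np.argmax(P[:, j])):
            matches.append((i, j))
    return matches

# Toy usage: the second point set is a jittered copy of the first.
rng = np.random.default_rng(0)
a = rng.uniform(0, 100, size=(8, 2))
b = a + rng.normal(0, 1.0, size=a.shape)
print(svd_feature_matching(a, b, sigma=10.0))
```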

Three-Dimensional Pose Estimation of Neighbor Mobile Robots in Formation System Based on the Vision System (비전시스템 기반 군집주행 이동로봇들의 삼차원 위치 및 자세 추정)

  • Kwon, Ji-Wook;Park, Mun-Soo;Chwa, Dong-Kyoung;Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.12
    • /
    • pp.1223-1231
    • /
    • 2009
  • We derive a systematic, iterative calibration algorithm and a position and pose estimation algorithm for mobile robots in a vision-based formation system. In addition, we develop a coordinate matching algorithm that computes the matched ordering between extracted image coordinates and object coordinates, enabling non-interactive calibration and pose estimation. Based on the calibration results, we also develop a camera simulator to confirm the calibration and to compare simulation results with experimental results for position and pose estimation.
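The abstract does not detail how pose is recovered once image and object coordinates are matched. As a generic stand-in for that step, the sketch below estimates a camera-relative pose from assumed 2D-3D correspondences with OpenCV's standard `solvePnP`; the object points, image points, and intrinsics are illustrative values, not the paper's data.

```python
import numpy as np
import cv2

# Generic pose recovery from matched 3D object points and 2D image points.
# This is a standard PnP solution, not the authors' specific algorithm.

object_pts = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.1, 0], [0, 0.1, 0]],
                      dtype=np.float64)            # assumed marker corners [m]
image_pts = np.array([[320, 240], [400, 242], [402, 205], [321, 204]],
                     dtype=np.float64)             # assumed detections [px]

K = np.array([[700.0, 0.0, 320.0],                 # assumed camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                                 # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                         # rotation of the target
print("R =\n", R, "\nt =", tvec.ravel())
```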

Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.415-422
    • /
    • 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected by color and consecutive motion information. Then, fingertips are detected by a novel scale-invariant angle detection based on a variable k-cosine. Fingertip tracking is implemented by detected region-based tracking. By analyzing the contour of the tracked fingertip, fingertip parameters, such as position, thickness, and direction, are calculated. Finger actions, such as moving, clicking, and pointing, are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection can correctly detect fingertips, and that the recognized actions can be used for the interface with electronic appliances.
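The k-cosine idea behind the fingertip detector can be shown generically: at a contour point, the angle between the vectors to the points k steps backward and forward is small at sharp convexities such as fingertips. The sketch below uses a fixed k and a synthetic contour as assumptions; it is a simplified illustration, not the paper's scale-invariant variable-k method.

```python
import numpy as np

def k_cosine(contour, k=15):
    """Cosine of the angle at each contour point between the vectors to the
    points k steps backward and k steps forward (closed contour assumed)."""
    n = len(contour)
    cosines = np.empty(n)
    for i in range(n):
        a = contour[(i - k) % n] - contour[i]
        b = contour[(i + k) % n] - contour[i]
        cosines[i] = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return cosines

def fingertip_candidates(contour, k=15, angle_deg=60.0):
    """Indices whose k-angle is sharper than the threshold (fingertip-like)."""
    cosines = k_cosine(contour, k)
    return np.where(cosines > np.cos(np.radians(angle_deg)))[0]

# Toy contour: a sharp spike on a circle stands in for a fingertip.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.stack([100 * np.cos(t), 100 * np.sin(t)], axis=1)
contour[0] *= 2.0                                  # sharpened point at index 0
print(fingertip_candidates(contour, k=15))         # reports the spike index
```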

F-Hessian SIFT-Based Railroad Level-Crossing Vision System (F-Hessian SIFT기반의 철도건널목 영상 감시 시스템)

  • Lim, Hyung-Sup;Yoon, Hak-Sun;Kim, Chel-Huan;Ryu, Deung-Ryeol;Cho, Hwang;Lee, Key-Seo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.2
    • /
    • pp.138-144
    • /
    • 2010
  • This paper presents an experimental analysis of an F-Hessian SIFT-based railroad level-crossing safety vision system. The region of surveillance, regions of interest, and data matching based on extracted feature points were examined under laboratory conditions on a small-scale model rig. Real-time operation was evaluated using SIFT based on the F-Hessian feature tracking method and other common algorithms.
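Feature extraction and matching of the kind described here can be sketched with OpenCV. The example below uses the standard SIFT detector and a ratio-test matcher as a stand-in for the paper's F-Hessian variant; the file names "reference.png" and "frame.png" are placeholders, not assets from the paper.

```python
import cv2

# Generic SIFT keypoint matching between a reference view of the level
# crossing and a live frame; the image file names are placeholders.
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_frm, des_frm = sift.detectAndCompute(frame, None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_ref, des_frm, k=2)
        if m.distance < 0.75 * n.distance]

print(f"{len(good)} good matches out of {len(kp_ref)} reference keypoints")
```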

A Study on the Rigid Body Placement Task Based on a Robot Vision System (로봇 비젼시스템을 이용한 강체 배치 실험에 대한 연구)

  • 장완식;신광수;안철봉
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.11
    • /
    • pp.100-107
    • /
    • 1998
  • This paper presents the development of an estimation model and a control method based on a new robot vision approach. The proposed control method uses a sequential estimation scheme that permits placement of a rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed from a model that generalizes the known 4-axis SCARA robot kinematics to accommodate unknown relative camera position and orientation. Based on the parameters estimated for each camera, the robot joint angles are estimated by an iterative method. The method is experimentally tested in two ways: an estimation model test and a three-dimensional rigid body placement task. The results show that the control scheme is precise and robust. This feature can open the door to a range of multi-axis robot applications such as assembly and welding.

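The abstract's "sequential estimation scheme" for the six model parameters is not spelled out. A recursive least-squares update is one standard way such a sequential scheme can be realized; the sketch below applies it to a generic linear-in-parameters observation model, with all symbols and dimensions chosen purely for illustration rather than taken from the paper.

```python
import numpy as np

class RecursiveLeastSquares:
    """Sequential (recursive) least-squares estimator for a model y = h.x,
    updating the parameter estimate one measurement at a time."""
    def __init__(self, n_params, p0=1e3):
        self.x = np.zeros(n_params)          # parameter estimate
        self.P = np.eye(n_params) * p0       # estimate covariance

    def update(self, h, y):
        h = np.asarray(h, dtype=float)
        k = self.P @ h / (1.0 + h @ self.P @ h)      # gain
        self.x = self.x + k * (y - h @ self.x)       # innovation update
        self.P = self.P - np.outer(k, h @ self.P)    # covariance update
        return self.x

# Toy usage: recover 6 illustrative "camera model" parameters from noisy
# linear observations, one sample at a time.
rng = np.random.default_rng(1)
true_params = rng.normal(size=6)
rls = RecursiveLeastSquares(n_params=6)
for _ in range(200):
    h = rng.normal(size=6)                           # regressor for one sample
    y = h @ true_params + rng.normal(scale=0.01)     # noisy observation
    est = rls.update(h, y)
print(np.round(est - true_params, 3))                # close to zeros
```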

A Study on Creative Industry Development Vision based on Digital Contents (지식창출형 콘텐츠 기반 창조산업 육성방안)

  • Noh, Si-Choon;Bang, Kee-Chun
    • Journal of Digital Convergence
    • /
    • v.10 no.2
    • /
    • pp.47-53
    • /
    • 2012
  • Amid efforts to overcome the economic crisis, development of the digital content industry has been pursued intensively at home and abroad, because the country's future lies in this industry. A vision for the digital content industry and a model for entering the knowledge-based global digital content market are therefore required. The purpose of this research is to derive alternatives for national industrial development that will lay the groundwork for the digital content industry. First, a SWOT analysis of the digital content industry was performed to derive a Korea-specific model. As a result, measures for advancing the digital content industry are presented in stages, as a long-term vision and as specific goals. In the age of u-media convergence, the government, corporations, and consumers of the content market form a virtuous-cycle distribution structure, and synergistic effects are obtained when each of these actors is active. Above all, securing internal and external growth based on the content industry is key. A series of policy measures is required so that the vision for digital content can serve as a role model for the growth momentum of national and social development.

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.8
    • /
    • pp.868-874
    • /
    • 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM, based on extracting obstacle features with Lucas-Kanade optical flow (LKOF) motion detection from images obtained through fish-eye lenses mounted on the robot. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or mirror, but they enable real-time image processing for mobile robots because all information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacles, which yields faster processing. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-facing fish-eye lenses. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, we estimate the robot position with an Extended Kalman Filter based on the obstacle positions obtained by LKOF and create a map. The reliability of the mapping algorithm using motion estimation based on fisheye images is confirmed by comparing maps obtained with the proposed algorithm against real maps.
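The LKOF stage can be sketched with OpenCV's pyramidal Lucas-Kanade tracker. The example below tracks corner features between two consecutive frames and reports their motion vectors; it illustrates only this one stage, not the full SLAM pipeline, and "prev.png" / "next.png" are placeholder file names.

```python
import cv2
import numpy as np

# Generic Lucas-Kanade optical flow between two consecutive frames;
# the image file names are placeholders.
prev_img = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("next.png", cv2.IMREAD_GRAYSCALE)

# Detect good features to track in the previous frame.
p0 = cv2.goodFeaturesToTrack(prev_img, maxCorners=200,
                             qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade tracking into the next frame.
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, p0, None,
                                         winSize=(21, 21), maxLevel=3)

# Motion vectors of successfully tracked points (candidate obstacle motion).
flow = (p1 - p0)[status.ravel() == 1]
print("mean motion vector:", np.round(flow.reshape(-1, 2).mean(axis=0), 2))
```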

Effect of Artificially Decreased Visual Acuity upon Eye-Hand Coordination using Lee-Ryan Eye-Hand Coordination Test (Lee-Ryan Eye-Hand Coordination Test를 이용한 인위적 시력저하가 눈-손 협응능력에 미치는 영향)

  • Lee, Ki-Seok
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.19 no.3
    • /
    • pp.371-376
    • /
    • 2014
  • Purpose: The aim of this study was to explore the effect of artificially decreased visual acuity in subjects with normal vision on eye-hand coordination (EHC), using the recently reported Lee-Ryan Eye-Hand Coordination Test. Methods: Eleven adults with normal vision, aged $29.46{\pm}5.94$ years, participated in this study. Moderate refractive amblyopic vision at near was artificially induced in the non-dominant eye by adding a plus lens; the subjects performed the EHC tasks under this condition and repeated the test with normal vision two weeks later. To investigate EHC ability, seven tasks of different difficulty levels in the Lee-Ryan EHC Test were selected, and EHC was compared and analyzed in terms of two variables: time taken and number of errors. Results: In time taken, subjects with artificially decreased vision took more time than with normal vision under monocular conditions (p=0.013), while under binocular conditions they completed the tasks faster with decreased vision than with normal vision (p=0.001). In the number of errors, subjects with decreased vision made more mistakes (p<0.001), as with time taken, whereas there was no difference between monocular and binocular viewing conditions with decreased vision. Conclusions: Unlike previous EHC tests, which have limitations for application, deficits in EHC can be screened with the Lee-Ryan EHC Test, which was developed on a simple computer-based system. Therefore, further studies on deficits in visual function such as amblyopia are expected to be carried out in clinics as well as in research.

The Components of the Child-care Teachers' Professional Vision Through the Video-based Learning Community: Focusing on Selective Attention and Knowledge-based Reasoning (비디오 활용 학습공동체를 통해 나타난 보육교사의 전문적 시각의 구성 요소: 선택적 주의와 인지 기반 추론을 중심으로)

  • Kim, Soo Jung;Lee, Young Shin;Lee, Min Joo
    • Korean Journal of Child Education & Care
    • /
    • v.19 no.1
    • /
    • pp.27-43
    • /
    • 2019
  • Objective: The purpose of this study was to investigate how child-care teachers experience professional vision development by participating in video clubs with their peers while watching videos of their own interactions with children in the classroom. Methods: We selected three child-care teachers in a day care center in the Seoul area and conducted a qualitative case study. The video clubs were designed to support the quality of teacher-child interaction by developing the teachers' professional vision, and they used self-reflection and cooperative reflection as the main educational methods. Results: Through participation in four video club sessions, the teachers experienced a change in what they attended to while watching scenes of their own interactions and had opportunities for educational (knowledge-based) reasoning. In particular, the teachers came to pay attention to the teacher's intentions, the teacher's decision-making process, and the child's intentions. In addition, the teachers experienced educational interpretation based on children's thinking and interests, and reasoning through reflective thinking about the results of their teaching behavior. This change in professional vision was made possible by mutual scaffolding through cooperative reflection among the participating teachers. Conclusion/Implications: Based on these results, we discuss the importance of developing child-care teachers' professional vision and the effectiveness of video clubs for supporting that development.