• Title/Summary/Keyword: vision-based technology


A Study on the Vision Sensor Using Scanning Beam for Welding Process Automation (용접자동화를 위한 주사빔을 이용한 시각센서에 관한 연구)

  • You, Won-Sang;Na, Suck-Joo
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.20 no.3
    • /
    • pp.891-900
    • /
    • 1996
  • The vision sensor, which is based on the optical triangulation principle with a laser as an auxiliary light source, can detect not only the seam position but also the shape of the seam. In this study, a vision sensor using a scanning laser beam was investigated. To design a vision sensor that considers the reflectivity of the sensing object and satisfies the desired resolution and measuring range, the equation of the focused laser beam with a Gaussian irradiance profile was formulated first, the image forming sequence second, and the relation between a displacement on the measuring surface and the corresponding displacement in the camera plane third. From these, the focused beam diameter over the measuring range could be determined and the influence of the relative location between the laser and the camera plane could be estimated. The measuring range and resolution of a vision sensor based on the Scheimpflug condition could also be calculated. From these results a vision sensor was developed, and an adequate calibration technique was proposed. An image processing algorithm that can recognize the center of the joint and its shape information was also developed. Using the developed vision sensor and image processing algorithm, the shape information of vee, butt, and lap joints was extracted.
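
The two relations the abstract formulates, triangulation depth and the focused Gaussian beam diameter, can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (pinhole camera, laser parallel to the optical axis); the function names and units are not from the paper:

```python
import math

def triangulate_depth(baseline_mm, focal_mm, spot_offset_mm):
    """Optical triangulation: a laser spot imaged at offset d on the
    sensor maps to depth z = b*f/d for baseline b and focal length f
    (simplified geometry, illustrative only)."""
    return baseline_mm * focal_mm / spot_offset_mm

def gaussian_beam_diameter(waist_mm, wavelength_mm, z_mm):
    """Diameter 2*w(z) of a focused Gaussian beam at distance z from
    the waist, via the Rayleigh range z_R = pi*w0^2/lambda."""
    z_r = math.pi * waist_mm ** 2 / wavelength_mm
    return 2.0 * waist_mm * math.sqrt(1.0 + (z_mm / z_r) ** 2)
```

At the waist (z = 0) the diameter reduces to 2*w0, and it grows monotonically with |z|, which is what bounds the usable measuring range for a desired resolution.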

Image Enhanced Machine Vision System for Smart Factory

  • Kim, ByungJoo
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.2
    • /
    • pp.7-13
    • /
    • 2021
  • Machine vision is a technology that enables a computer to recognize and assess objects as a person would. In recent years, as advanced technologies such as optical systems, artificial intelligence, and big data have been incorporated, conventional machine vision systems have achieved more accurate quality inspection and increased manufacturing efficiency. In machine vision systems using deep learning, the quality of the input image is very important. However, most images obtained in the industrial field for quality inspection typically contain noise, and this noise is a major factor limiting the performance of the machine vision system. Therefore, to improve the performance of the machine vision system, it is necessary to eliminate the noise in the image, and much research has been done on image denoising. In this paper, we propose an autoencoder-based machine vision system to eliminate noise in the image. In experiments, the proposed model showed better denoising and image reconstruction capability than the basic autoencoder model on the MNIST and Fashion-MNIST data sets.
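
The noise model and the reconstruction metric behind such a denoising evaluation can be illustrated without the autoencoder itself. A minimal sketch, assuming grayscale images flattened to lists of floats in [0, 1] (a convention chosen here, not taken from the paper):

```python
import math
import random

def add_gaussian_noise(image, sigma, seed=0):
    """Corrupt a [0,1] grayscale image with zero-mean Gaussian noise,
    clipping back into range -- the kind of degradation a denoising
    autoencoder is trained to undo."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in image]

def psnr(clean, restored):
    """Peak signal-to-noise ratio in dB between two [0,1] images;
    higher means a more faithful reconstruction."""
    mse = sum((a - b) ** 2 for a, b in zip(clean, restored)) / len(clean)
    return float("inf") if mse == 0 else 10.0 * math.log10(1.0 / mse)
```

Comparing PSNR of the noisy input against PSNR of the autoencoder's output is one common way to quantify the denoising gain the abstract reports.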

Analysis of Requirements for Night Vision Imaging System (야시조명계통 요구도 분석)

  • Kwon, Jong-Kwang;Lee, Dae-Yearl;Kim, Whan-Woo
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.10 no.3
    • /
    • pp.51-61
    • /
    • 2007
  • This paper concerns the requirements analysis for a night vision imaging system (NVIS), whose purpose is to intensify the available nighttime near-infrared (IR) radiation sufficiently to be seen by the human eye on a miniature green phosphor screen. The requirements for NVIS include NVIS radiance (NR), chromaticity, and daylight legibility/readability. The NR is a quantitative measure of the night vision goggle (NVG) compatibility of a light source as viewed through the goggles. The chromaticity is the quality of a color as determined by its purity and dominant wavelength. The daylight legibility/readability is the degree to which words are readable based on appearance, together with a measure of an instrument's ability to display incremental changes in its output value. In this paper, the requirements of NR, chromaticity, and daylight legibility/readability for Type I and Class B/C NVIS are analyzed, and the rationale for those requirements is given.
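
NVIS radiance is conventionally computed by weighting a source's spectral radiance with the goggle's relative spectral response and integrating over wavelength. A hedged sketch of that computation (trapezoidal integration over sampled spectra; array names and sampling are illustrative, not from the paper):

```python
def nvis_radiance(wavelengths_nm, spectral_radiance, nvg_response):
    """NVIS radiance: integral of spectral radiance weighted by the
    NVG relative spectral response, approximated with the
    trapezoidal rule over the sampled wavelength grid."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dw = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = spectral_radiance[i] * nvg_response[i]
        f1 = spectral_radiance[i + 1] * nvg_response[i + 1]
        total += 0.5 * (f0 + f1) * dw
    return total
```

A cockpit light source is NVG-compatible when this weighted integral stays below the limit specified for its NVIS type and class.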

Design of an Intelligent Robot Control System Using Neural Network (신경회로망을 이용한 지능형 로봇 제어 시스템 설계)

  • Jung, Dong-Yeon;Seo, Woon-Hak;Han, Sung-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.279-279
    • /
    • 2000
  • In this paper, we propose a new approach to the design of a robot vision system to develop technology for the automatic testing and assembly of precision mechanical and electronic parts for factory automation. For real-time implementation of the automatic assembly tasks in complex processes, we developed an intelligent control algorithm based on neural network control theory to enhance precise motion control. The automatic test tasks were implemented with a real-time vision algorithm based on TMS320C31 DSPs, which correctly distinguishes acceptable items from defective ones through pattern recognition of parts. Finally, the performance of the proposed robot vision system is illustrated by experiments on a model of the fifth of twelve cells for automatic testing and assembly at S company.
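
The accept/reject decision in such an automatic test cell reduces to comparing vision-measured features of each part against a template within tolerance. A minimal sketch of that logic (feature vectors and tolerance values are hypothetical, not from the paper):

```python
def inspect_part(measured, template, tolerance):
    """Accept a part when every vision-measured feature deviates from
    the template by no more than `tolerance`; otherwise flag it as
    defective (illustrative pass/fail rule)."""
    return all(abs(m - t) <= tolerance for m, t in zip(measured, template))
```

In a real cell the feature vector would come from the DSP vision pipeline and the tolerance from the part specification.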

Inspection Algorithm for Screw Head Forming Punch Using Machine Vision (머신비전을 이용한 나사 머리 성형 펀치의 검사 알고리즘)

  • Jeong, Ku Hyeon;Chung, Seong Youb
    • Journal of Institute of Convergence Technology
    • /
    • v.3 no.2
    • /
    • pp.31-37
    • /
    • 2013
  • This paper proposes a vision-based inspection algorithm for the punch used when forming the heads of small screws. To maintain good punch quality, precise inspection of its dimensions and of the depth of the punch head is important. A CCD camera and dome illumination are used to measure the dimensions, and a structured line laser is used to measure the depth of the punch head. The resolution and visible area depend on the setup between the laser and the camera, which is determined using CAD-based simulation. The proposed method was successfully evaluated in experiments on a #2 punch.
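
With a line laser inclined to the camera's optical axis, a depth change shifts the imaged laser line, and the laser-camera angle sets the depth resolution per pixel. A hedged sketch of that conversion (parameter names and the simple geometry are assumptions, not the paper's calibration):

```python
import math

def depth_from_line_shift(shift_px, pixel_size_mm, magnification, angle_deg):
    """Convert the imaged laser-line shift (pixels) into a depth change
    for a line laser inclined at `angle_deg` to the camera axis:
    back-project the shift to object space, then divide by tan(angle)."""
    shift_mm = shift_px * pixel_size_mm / magnification
    return shift_mm / math.tan(math.radians(angle_deg))
```

A steeper laser angle gives finer depth resolution but a smaller visible area, which is the trade-off the CAD-based simulation in the paper is used to balance.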

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, kie-jeong
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.16 no.4
    • /
    • pp.581-589
    • /
    • 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of the leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and position and attitude are estimated from the image using the KLT feature point tracker and the POSIT algorithm. To verify the feasibility of this vision processing algorithm, a field test was performed using two light sport aircraft; the experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation flight technology was carried out using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies according to user commands, the follower tracks the leader using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader aircraft better than guidance without it, with absolute average relative-position errors of 2.88 m on the X-axis, 2.09 m on the Y-axis, and 0.44 m on the Z-axis.
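
The follower's PI control law can be sketched generically: the vision-measured relative-position error is driven to zero by a proportional term plus an integrated term. This is a textbook discrete PI form, not the paper's gains or axis layout:

```python
class PIController:
    """Discrete PI law u = Kp*e + Ki*sum(e*dt), applied per axis to
    null the vision-measured relative-position error to the leader.
    Gains and sample time here are placeholders."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        # Accumulate the integral of the error, then combine both terms.
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral
```

The integral term removes steady-state offset in the formation spacing that a purely proportional law would leave.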

A Real-Time NDGPS/INS Navigation System Based on Artificial Vision for Helicopter (인공시계기반 헬기용 3차원 항법시스템 구성)

  • Kim, Jae-Hyung;Lyou, Joon;Kwak, Hwy-Kuen
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.11 no.3
    • /
    • pp.30-39
    • /
    • 2008
  • An artificial-vision-aided NDGPS/INS system was developed and tested in the dynamic environments of ground and flight vehicles to evaluate overall system performance. The results show significant advantages in position accuracy and situational awareness: the accuracy of the NDGPS/INS integration meets CAT-I precision approach and landing requirements, and we confirm that the proposed system is effective enough to improve flight safety through artificial vision. The system design, software algorithm, and flight test results are presented in detail.

Intelligent Pattern Recognition Algorithms based on Dust, Vision and Activity Sensors for User Unusual Event Detection

  • Song, Jung-Eun;Jung, Ju-Ho;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.8
    • /
    • pp.95-103
    • /
    • 2019
  • According to Statistics Korea in 2017, the ten leading causes of death include cardiac disorders and self-injury. For these causes, urgent assistance is highly required when people do not move for a certain period of time. We propose an unusual-event detection algorithm to identify abnormal user behaviors using dust, vision, and activity sensors in the home. Vision sensors can detect personalized activity behaviors within CCTV range in the house. The pattern algorithm using the dust sensor classifies user movements and dust-generating daily behaviors in indoor areas. The accelerometer in a smartphone is suitable for identifying the activity behaviors of mobile users. We evaluated the proposed pattern algorithms and the fusion method in several scenarios.
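
The core trigger in such a system, "no movement for a certain period of time", can be sketched as a fusion rule over timestamps from all three sensors. A minimal illustration (the timestamp representation and threshold are assumptions, not the paper's design):

```python
def detect_inactivity(activity_timestamps, now_s, threshold_s):
    """Flag an unusual event when no sensor (dust, vision, or
    accelerometer) has reported user activity within `threshold_s`
    seconds of the current time `now_s`."""
    last = max(activity_timestamps) if activity_timestamps else float("-inf")
    return (now_s - last) > threshold_s
```

Merging timestamps from all sensors before applying the threshold is what lets one sensor cover another's blind spot, e.g. movement outside CCTV range still registering on the dust or accelerometer channel.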

Integrated System for Autonomous Proximity Operations and Docking

  • Lee, Dae-Ro;Pernicka, Henry
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.12 no.1
    • /
    • pp.43-56
    • /
    • 2011
  • An integrated guidance, navigation, and control (GNC) system for autonomous proximity operations and docking of two spacecraft was developed. The position maneuvers were determined through the integration of the state-dependent Riccati equation, formulated from nonlinear relative motion dynamics, with relative navigation using rendezvous laser vision (Lidar) and a vision sensor system. In the vision sensor system, a switch between sensors was made along the approach phase in order to provide continuously effective navigation. As an extension of the rendezvous laser vision system, an automated terminal guidance scheme based on the Clohessy-Wiltshire state transition matrix was used to formulate a "V-bar hopping approach" reference trajectory. A proximity operations strategy was then adapted from the approach strategy used with the Automated Transfer Vehicle. The attitude maneuvers, determined from a linear quadratic Gaussian-type controller including quaternion-based attitude estimation using star trackers or a vision sensor system, provided precise attitude control and robustness under uncertainties in the moments of inertia and external disturbances. These functions were integrated into an autonomous GNC system that can perform proximity operations and meet all conditions for successful docking, and a six-degree-of-freedom simulation was used to demonstrate the effectiveness of the integrated system.
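
The Clohessy-Wiltshire state transition underlying the terminal guidance has a well-known closed form. A sketch of the in-plane part (x radial, y along-track, n the chief's mean motion); this is the standard textbook solution, not the paper's specific implementation:

```python
import math

def cw_state_transition(x0, y0, vx0, vy0, n, t):
    """Propagate in-plane relative motion with the closed-form
    Clohessy-Wiltshire solution: x radial, y along-track,
    n = chief mean motion (rad/s), t = elapsed time (s)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t) * x0 + y0
         + (2 / n) * (c - 1) * vx0 + (4 * s / n - 3 * t) * vy0)
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    return x, y, vx, vy
```

Inverting the position rows of this transition for a desired arrival point yields the impulsive delta-v of each hop in a V-bar hopping approach.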

A Study on the Robot Vision Control Schemes of N-R and EKF Methods for Tracking the Moving Targets (이동 타겟 추적을 위한 N-R과 EKF방법의 로봇비젼제어기법에 관한 연구)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.23 no.5
    • /
    • pp.485-497
    • /
    • 2014
  • This paper presents robot vision control schemes based on the Newton-Raphson (N-R) and the Extended Kalman Filter (EKF) methods for tracking moving targets. The vision system model used in this study involves six camera parameters: some account for the uncertainty of the camera's orientation and focal length, and the others for the unknown relative position between the camera and the robot. Both the N-R and EKF methods are employed to estimate the six camera parameters. Based on the six parameters estimated using three cameras, the robot's joint angles are computed with respect to the moving targets using both methods. The two robot vision control schemes are tested experimentally by tracking a moving target, and the results are compared to evaluate the strengths and weaknesses of each scheme.
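
The Newton-Raphson iteration at the heart of the batch-style estimator can be sketched in its scalar form; the paper's version operates on a six-parameter vector with a Jacobian, but the update rule is the same shape. A minimal illustration (function names are placeholders):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson: iterate x <- x - f(x)/f'(x) until the
    step falls below `tol`. The vector form used for camera-parameter
    estimation replaces f'(x) with the residual Jacobian."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

The practical contrast the paper draws is that N-R re-solves the residual equations at each sample, while the EKF propagates and corrects a running estimate, trading convergence behavior against computational cost.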