• Title/Summary/Keyword: Machine vision camera


Quality Inspection and Sorting in Eggs by Machine Vision

  • Cho, Han-Keun;Yang Kwon
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.834-841
    • /
    • 1996
  • Egg production in Korea is becoming automated as farms grow to a large scale. Although many operations in egg production have been automated, the detection of shell defects such as holes and cracks is still regarded as a critical problem. A computer vision system was built to generate images of a single, stationary egg. The system includes a CCD camera, a frame grabber board, a personal computer (IBM PC AT 486), and an incandescent back-lighting system. Image processing algorithms were developed to inspect the egg shell and to sort eggs. The gray level and the area of dark spots in the egg image were used as criteria to detect holes, and the area and roundness of dark spots were used to detect cracks. For a sample of 300 eggs, the system correctly determined the presence of a defect 97.5% of the time. The weights of eggs were found to be linearly related to both the projected area and the perimeter of the egg viewed from above, and these two values were used as criteria to sort eggs. Grading accuracy was 96.7% compared with weighing on an electronic scale.
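The hole/crack criteria above (gray level and area of dark spots for holes, area and roundness for cracks) can be sketched with standard image processing tools. Below is a minimal OpenCV illustration; the threshold values, the roundness cutoff, and the weight-regression coefficients are placeholders, not the values used in the paper.

```python
# Sketch of the dark-spot criteria described above (thresholds are illustrative).
import cv2
import numpy as np

def classify_egg(gray_egg_image,
                 hole_gray_max=60, hole_area_min=40,      # hypothetical thresholds
                 crack_area_min=25, crack_roundness_max=0.35):
    """Label an egg image as 'hole', 'crack', or 'ok' from its dark spots."""
    # Back-lit eggs appear bright; holes and cracks show up as dark spots.
    _, dark = cv2.threshold(gray_egg_image, hole_gray_max, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if perimeter == 0:
            continue
        roundness = 4.0 * np.pi * area / (perimeter ** 2)   # 1.0 = perfect circle
        if area >= hole_area_min and roundness > crack_roundness_max:
            return "hole"       # large, compact dark spot
        if area >= crack_area_min and roundness <= crack_roundness_max:
            return "crack"      # elongated dark spot
    return "ok"

def estimate_weight(projected_area, perimeter, a=0.0, b=0.0, c=0.0):
    # The paper reports weight is linearly related to projected area and perimeter;
    # a, b, c would be fitted by regression on a calibration set (values not given).
    return a * projected_area + b * perimeter + c
```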


A Study on Real-time Control of Bead Height and Joint Tracking (비드 높이 및 조인트 추적의 실시간 제어 연구)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.16 no.6
    • /
    • pp.71-78
    • /
    • 2007
  • There have been continuous efforts to automate welding processes. This automation falls into two categories: weld seam tracking and weld quality evaluation. Recently, attempts to achieve these two functions simultaneously have been increasing. In the study presented in this paper, a vision sensor was built and used to measure the three-dimensional geometry of the bead in real time. Because welding is a nonlinear process, a fuzzy controller was designed for its control. With this controller, an adaptive control system is proposed that acquires the bead height and the coordinates of points on the bead along the horizontal fillet joint, performs seam tracking with those data, and at the same time controls the bead geometry to a uniform shape. A communication system that enables communication with the industrial robot was designed to control the bead geometry and to track the weld seam. Experiments were carried out with varied offset angles from the pre-taught weld path, and the results showed that the adaptive system performs favorably.
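The abstract describes a fuzzy controller that maps bead-height error to a process correction. A minimal Mamdani-style sketch of that idea follows; the membership functions, rule outputs, and the choice of travel speed as the corrected variable are illustrative assumptions, since the paper's actual fuzzy sets are not given in the abstract.

```python
# Minimal fuzzy bead-height correction sketch (all numbers are placeholders).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def bead_height_correction(height_error_mm):
    # Fuzzify the error (measured bead height minus target height).
    neg  = tri(height_error_mm, -2.0, -1.0, 0.0)   # bead too low
    zero = tri(height_error_mm, -1.0,  0.0, 1.0)   # about right
    pos  = tri(height_error_mm,  0.0,  1.0, 2.0)   # bead too high

    # Rule base: too low -> slow the travel speed (deposit more metal),
    #            too high -> speed up, about right -> keep speed.
    # Singleton rule outputs in mm/s of travel-speed change.
    rules = [(neg, -0.5), (zero, 0.0), (pos, +0.5)]

    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

# Example: bead 0.8 mm higher than target -> positive speed correction.
print(bead_height_correction(0.8))
```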

Road Surface Marking Detection for Sensor Fusion-based Positioning System (센서 융합 기반 정밀 측위를 위한 노면 표시 검출)

  • Kim, Dongsuk;Jung, Hogi
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.7
    • /
    • pp.107-116
    • /
    • 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system that consists of a low-cost GPS (Global Positioning System), INS (Inertial Navigation System), EDM (Extended Digital Map), and vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. The lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM. The RSM detection generates candidates in these regions and classifies their types. The proposed system focuses on detecting RSM without false detections and on real-time operation. To ensure real-time operation, the gating region for lane marking detection is varied and the detection method is switched by an FSM (Finite State Machine) that models the driving situation. A single template matching scheme is used to extract features for both lane marking detection and RSM detection, and it is implemented efficiently with a horizontal integral image. Further, multiple-step verification is performed to minimize false detections.
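The horizontal integral image mentioned above is a row-wise cumulative sum that makes the sum of any horizontal run of pixels available in constant time, which is what keeps the single template matching cheap. A small NumPy sketch follows; the bright-bar feature and its parameters are illustrative, not the paper's exact template.

```python
# Horizontal integral image: constant-time row-segment sums for marking detection.
import numpy as np

def horizontal_integral_image(gray):
    """H[y, x] = sum of gray[y, 0..x-1]; shape (rows, cols + 1)."""
    H = np.zeros((gray.shape[0], gray.shape[1] + 1), dtype=np.int64)
    H[:, 1:] = np.cumsum(gray, axis=1)
    return H

def row_segment_sum(H, y, x0, x1):
    """Sum of gray[y, x0:x1] in O(1)."""
    return H[y, x1] - H[y, x0]

def bright_bar_response(H, y, x, mark_width, margin):
    """Bright-bar feature at (y, x): marking interior minus its two margins.
    A high value suggests a lane/road-surface marking of the given width."""
    inner = row_segment_sum(H, y, x, x + mark_width)
    left  = row_segment_sum(H, y, x - margin, x)
    right = row_segment_sum(H, y, x + mark_width, x + mark_width + margin)
    return inner / mark_width - (left + right) / (2 * margin)
```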

Vision-based garbage dumping action detection for real-world surveillance platform

  • Yun, Kimin;Kwon, Yongjin;Oh, Sungchan;Moon, Jinyoung;Park, Jongyoul
    • ETRI Journal
    • /
    • v.41 no.4
    • /
    • pp.494-505
    • /
    • 2019
  • In this paper, we propose a new framework for detecting the unauthorized dumping of garbage in real-world surveillance camera footage. Although several action/behavior recognition methods have been investigated, these studies are hardly applicable to real-world scenarios because they mainly focus on well-refined datasets. Because dumping actions in the real world take a variety of forms, building a new method to detect the actions, rather than exploiting previous approaches, is a better strategy. We detect the dumping action from the change in the relation between a person and the object they are holding. To find the person-held object of indefinite form, we use a background subtraction algorithm and human joint estimation. The person-held object is then tracked, and a relation model between the joints and the object is built. Finally, the dumping action is detected through a voting-based decision module. In the experiments, we show the effectiveness of the proposed method by testing on real-world videos containing various dumping actions. In addition, the proposed framework is implemented in a real-time monitoring system through a fast online algorithm.
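A rough sketch of the pipeline stages named in the abstract (background subtraction for the held object, a person-object relation check, and a voting-based decision) is given below. Pose estimation and object tracking are abstracted into function arguments, and all thresholds and the voting-window length are placeholders rather than the paper's settings.

```python
# Background subtraction + sliding-window voting sketch for "dumping" detection.
from collections import deque
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
votes = deque(maxlen=30)   # roughly one second of per-frame decisions at 30 fps

def frame_vote(frame, hand_xy, object_box, separation_px=80):
    """Per-frame evidence: a foreground object that has moved away from the hand."""
    fg = subtractor.apply(frame)
    x, y, w, h = object_box
    object_is_foreground = np.count_nonzero(fg[y:y + h, x:x + w]) > 0.3 * w * h
    obj_center = np.array([x + w / 2.0, y + h / 2.0])
    released = np.linalg.norm(obj_center - np.asarray(hand_xy)) > separation_px
    return object_is_foreground and released

def dumping_detected(frame, hand_xy, object_box, min_votes=20):
    votes.append(frame_vote(frame, hand_xy, object_box))
    return sum(votes) >= min_votes
```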

Primer Coating Inspection System Development for Automotive Windshield Assembly Automation Facilities (자동차 글라스 조립 자동화설비를 위한 프라이머 도포검사 비전시스템 개발)

  • Ju-Young Kim;Soon-Ho Yang;Min-Kyu Kim
    • Journal of Sensor Science and Technology
    • /
    • v.32 no.2
    • /
    • pp.124-130
    • /
    • 2023
  • The introduction of flexible production systems in domestic and foreign automotive design parts assembly has increased demand for automation and manpower reduction. Consequently, a shift is under way toward a hybrid production method in which multiple vehicle models are assembled on a single assembly line. The automotive glass mounting system involves a complex configuration of multiple robots, 3D vision sensors, mounting positions, and correction software, so the assembly process for these automobile parts is difficult and requires automation. This study presents a primer lighting setup and inspection algorithm that are robust to the assembly environment of real automotive design parts, using high-power 'ㄷ'-shaped inclined LED lighting. In addition, a 2D camera-based primer coating inspection system, the core technology of the glass mounting system, was developed. A primer application demo line applicable to an actual automobile production line was built with the proposed high-power lighting and algorithm, and the coating inspection performance was verified on this demo system. Experimental results confirmed that the performance of the proposed system exceeds the level required to satisfy automotive requirements.
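As a rough illustration of a 2D-camera primer coating check, the sketch below binarizes the dark primer bead against the brightly lit glass and verifies the bead width at sample points along a nominal application path. The path representation, width limits, and threshold are hypothetical; the abstract does not disclose the actual inspection criteria.

```python
# Illustrative primer-bead width check along a nominal application path.
import cv2
import numpy as np

def inspect_primer(gray, path_points, min_width_px=8, max_width_px=40, dark_thresh=70):
    """Return (ok, widths) for a dark primer bead imaged under bright lighting.
    path_points: (x, y) sample points along the taught application path, where the
    bead is assumed to run roughly horizontally (a simplifying assumption)."""
    _, primer = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    widths = []
    for (x, y) in path_points:
        window = primer[max(0, y - 50):y + 50, x]          # slice across the bead
        widths.append(int(np.count_nonzero(window)))        # bead thickness in pixels
    ok = all(min_width_px <= w <= max_width_px for w in widths)
    return ok, widths
```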

Parking Lot Vehicle Counting Using a Deep Convolutional Neural Network (Deep Convolutional Neural Network를 이용한 주차장 차량 계수 시스템)

  • Lim, Kuoy Suong;Kwon, Jang woo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.17 no.5
    • /
    • pp.173-187
    • /
    • 2018
  • This paper proposes a computer vision and deep learning-based technique for vehicle counting with a surveillance camera as one part of a parking lot management system. We apply the You Only Look Once version 2 (YOLOv2) detector and propose a deep convolutional neural network (CNN) based on YOLOv2 with a modified architecture and two models. The effectiveness of the proposed architecture is illustrated using the publicly available Udacity self-driving-car dataset. After training and testing, our proposed architecture with the new models obtains 64.30% mean average precision on the detection of cars, trucks, and pedestrians, a better performance than the original YOLOv2 architecture, which achieved only 47.89%.
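Counting vehicles on top of a YOLO-style detector reduces to filtering detections by class and confidence and suppressing duplicates. The sketch below assumes a detector that already returns (class, confidence, box) tuples; the class names and thresholds are illustrative, not the paper's.

```python
# Vehicle counting from generic detector output, with OpenCV NMS to drop duplicates.
import cv2

VEHICLE_CLASSES = {"car", "truck"}

def count_vehicles(detections, conf_thresh=0.5, nms_thresh=0.45):
    """detections: list of (class_name, confidence, [x, y, w, h]) from a detector."""
    boxes, scores = [], []
    for cls, conf, box in detections:
        if cls in VEHICLE_CLASSES and conf >= conf_thresh:
            boxes.append(box)
            scores.append(float(conf))
    if not boxes:
        return 0
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return len(keep)
```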

Collision Avoidance for Indoor Mobile Robotics using Stereo Vision Sensor (스테레오 비전 센서를 이용한 실내 모바일 로봇 충돌 회피)

  • Kwon, Ki-Hyeon;Nam, Si-Byung;Lee, Se-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.5
    • /
    • pp.2400-2405
    • /
    • 2013
  • We detect obstacles for a UGV (unmanned ground vehicle) from a compound image generated by a stereo vision sensor by masking the depth image with the color image. The stereo vision sensor gathers distance information through its stereo camera. The obstacle information from the depth compound image can be sent to the mobile robot, which can then localize itself in the indoor area. We test the performance of the mobile robot in terms of the distance between the obstacle and the robot's position, and also evaluate the color, depth, and compound images respectively. Moreover, we measure the performance in terms of the number of frames per second processed by the host machine. The results show that the compound image gives improved performance in both distance estimation and frame rate.
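The compound image described above can be approximated by computing a disparity map, converting it to depth, and masking the color image where obstacles fall within range. The sketch below uses OpenCV's block matcher; the focal length, baseline, and range threshold are placeholders for a calibrated stereo rig.

```python
# Depth-masked "compound image" sketch using OpenCV stereo block matching.
import cv2
import numpy as np

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def compound_obstacle_image(left_gray, right_gray, left_color,
                            focal_px=700.0, baseline_m=0.12, max_range_m=1.5):
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    with np.errstate(divide="ignore"):
        depth_m = (focal_px * baseline_m) / disparity       # invalid where disparity <= 0
    near = (disparity > 0) & (depth_m < max_range_m)         # obstacle candidates
    compound = np.zeros_like(left_color)
    compound[near] = left_color[near]                         # keep color only near obstacles
    return compound, depth_m
```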

Classification between Intentional and Natural Blinks in Infrared Vision Based Eye Tracking System

  • Kim, Song-Yi;Noh, Sue-Jin;Kim, Jin-Man;Whang, Min-Cheol;Lee, Eui-Chul
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.601-607
    • /
    • 2012
  • Objective: The aim of this study is to classify intentional and natural blinks in a vision-based eye tracking system. By implementing this classification method, we expect that an eye tracking method can be designed that performs well for both navigation and selection interactions. Background: Eye tracking is currently widely used to increase user immersion and interest by supporting natural user interfaces. Although conventional eye tracking systems handle navigation interaction well by tracking pupil movement, there is no breakthrough selection interaction method. Method: To determine the classification threshold between intentional and natural blinks, we performed an experiment capturing eye images that included intentional and natural blinks from 12 subjects. By analyzing successive eye images, two features were collected: eye-closed duration and pupil size variation after eye opening. The classification threshold was then determined by SVM (Support Vector Machine) training. Results: Experimental results showed that the average detection accuracy for intentional blinks was 97.4% in a wearable eye tracking environment. The detection accuracy in a non-wearable camera environment was 92.9% with the same SVM classifier. Conclusion: By combining the two features using an SVM, we could implement an accurate selection interaction method in a vision-based eye tracking system. Application: The results of this research may help improve the efficiency and usability of vision-based eye tracking by supporting a reliable selection interaction scheme.
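The two features named in the abstract (eye-closed duration and pupil size variation after opening) feed a binary SVM. A minimal scikit-learn sketch follows; the feature values are synthetic placeholders and the linear kernel is an assumption, since the paper's data and kernel settings are not reproduced here.

```python
# Two-feature SVM sketch: intentional vs. natural blink (synthetic data).
import numpy as np
from sklearn.svm import SVC

# Each row: [eye_closed_duration_sec, pupil_size_variation_ratio]
X = np.array([[0.15, 0.05], [0.12, 0.08], [0.60, 0.30],
              [0.75, 0.25], [0.18, 0.06], [0.55, 0.35]])
y = np.array([0, 0, 1, 1, 0, 1])    # 0 = natural blink, 1 = intentional blink

clf = SVC(kernel="linear").fit(X, y)

def classify_blink(closed_duration_sec, pupil_variation_ratio):
    label = clf.predict([[closed_duration_sec, pupil_variation_ratio]])[0]
    return "intentional" if label == 1 else "natural"

print(classify_blink(0.65, 0.28))   # expected: "intentional"
```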

A Study on the Improvement of Human Operators' Performance in Detection of External Defects in Visual Inspection (품질 검사자의 외관검사 검출력 향상방안에 관한 연구)

  • Han, Sung-Jae;Ham, Dong-Han
    • Journal of the Korea Safety Management & Science
    • /
    • v.21 no.4
    • /
    • pp.67-74
    • /
    • 2019
  • Visual inspection is regarded as one of the critical activities for quality control in a manufacturing company. It is thus important to improve the performance of detecting a defective part or product. There are three possible working modes for visual inspection: fully automatic (by automatic machines), fully manual (by human operators), and semi-automatic (by collaboration between human operators and automatic machines). Most current studies on visual inspection have focused on improving automatic detection performance by developing better automatic machines using computer vision technologies. However, there are still a range of situations in which human operators must conduct visual inspection, with or without automatic machines, and in these situations the human operators' performance is significant to successful quality control. Visual inspection of components assembled into a mobile camera module is one such situation. This study aims to investigate human performance issues in visual inspection of these components, paying particular attention to human errors. For this purpose, an Abstraction Hierarchy-based work domain modeling method was applied to examine a range of direct or indirect factors related to human errors, and their relationships, in the visual inspection of the components. Although this study was conducted in the context of manufacturing mobile camera modules, the proposed method can easily be generalized to other industries.

Surface Inspection Algorithm using Oriented Bounding Box (회전 윤곽 상자를 이용한 표면 검사 알고리즘)

  • Hwang, Myun Joong;Chung, Seong Youb
    • Journal of Institute of Convergence Technology
    • /
    • v.6 no.1
    • /
    • pp.23-26
    • /
    • 2016
  • DC motor shafts have several types of defects, such as double cuts, deep scratches on the surface, and errors in diameter and length. The deep scratches are caused by collisions with other shafts, so they are long and thin, but their orientations are random. If the smallest enclosing box, i.e. the oriented bounding box of a group of defect points, is found, the size of the corresponding defect can be modeled as its diagonal length. This paper proposes a surface inspection algorithm for DC motor shafts using the oriented bounding box. To evaluate the proposed algorithm, a test bed was built with a line scan CCD camera (4096 pixels/line) and a two-roller mechanism to rotate the shaft. The experimental results on images pre-processed with a contrast stretching algorithm show that the proposed algorithm successfully finds 150 surface defects, and its computation time (0.291 ms) is well within the 4-second requirement.
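The oriented-bounding-box measure described above can be sketched with OpenCV's minAreaRect: fit the smallest rotated rectangle to the defect points and take its diagonal as the defect size, which stays meaningful for long, thin scratches at arbitrary orientation. The rejection threshold below is an illustrative placeholder.

```python
# Defect sizing via the oriented bounding box (smallest rotated enclosing rectangle).
import cv2
import numpy as np

def defect_size_from_points(defect_points):
    """defect_points: Nx2 array of (x, y) pixel coordinates of one defect blob."""
    pts = np.asarray(defect_points, dtype=np.float32)
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)   # center, size, rotation of the box
    return float(np.hypot(w, h))                      # diagonal length in pixels

def is_rejectable(defect_points, max_diag_px=30.0):
    """Reject the shaft if any defect's oriented-box diagonal exceeds the limit."""
    return defect_size_from_points(defect_points) > max_diag_px
```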