Vision Processing Techniques


Vision Sensing for the Ego-Lane Detection of a Vehicle (자동차의 자기 주행차선 검출을 위한 시각 센싱)

  • Kim, Dong-Uk;Do, Yongtae
    • Journal of Sensor Science and Technology, v.27 no.2, pp.137-141, 2018
  • Detecting the ego-lane of a vehicle (the lane on which the vehicle is currently running) is one of the basic techniques for a smart car. Vision sensing is a widely used method for ego-lane detection. Existing studies usually find road lane lines by detecting edge pixels in the image from a vehicle camera and then connecting the edge pixels using the Hough transform. However, this approach requires rather long processing time, and too many straight lines are often detected, resulting in false detections under various road conditions. In this paper, we find the lane lines by scanning only a limited number of horizontal lines within a small image region of interest. The horizontal image line scan replaces the edge-detection process of existing methods. Automatic thresholding and spatiotemporal filtering procedures are also proposed to make our method reliable. In experiments using real road images taken under different conditions, the proposed method achieved a high success rate.
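The horizontal line-scan idea can be sketched as follows: instead of edge-detecting the whole frame, each scanned image row is thresholded and runs of bright pixels are reported as lane-marking candidates. The mean-plus-margin threshold, the minimum run width, and the sample row below are illustrative assumptions standing in for the paper's automatic thresholding, not the authors' exact procedure.

```python
def scan_row_for_lane_marks(row, margin=40, min_width=2):
    """Return (start, end) pixel spans of bright runs in one image row.

    Assumed stand-in for the paper's horizontal line scan: the threshold
    is the row mean plus a fixed margin, not the authors' exact rule.
    """
    thresh = sum(row) / len(row) + margin
    spans, start = [], None
    for i, v in enumerate(row):
        if v > thresh and start is None:
            start = i                      # a bright run begins
        elif v <= thresh and start is not None:
            if i - start >= min_width:     # keep runs wide enough to be a marking
                spans.append((start, i))
            start = None
    if start is not None and len(row) - start >= min_width:
        spans.append((start, len(row)))    # run reaches the row's end
    return spans

# A dark road row with two bright lane markings (hypothetical data).
row = [30] * 10 + [220] * 4 + [30] * 20 + [210] * 4 + [30] * 10
print(scan_row_for_lane_marks(row))  # → [(10, 14), (34, 38)]
```

Scanning a handful of such rows inside the region of interest, rather than edge-detecting the full image, is what removes the Hough-transform step.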

The Application of the Welding Joint Tracking System (용접 이음 추적시스템의 응용)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers, v.16 no.2, pp.92-99, 2007
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing autonomous welding systems is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only for the correction of pre-taught welding paths in a single pass. In this paper, newly developed vision processing techniques are presented in detail, and their application in welding fabrication is covered. Software for the joint tracking system is finally proposed.

Development of a System for Glass Thickness Measurement (비접촉 유리 두께 측정 장치 개발)

  • Park, Jae-Beom;Lee, Eung-Suk;Lee, Min-Ki;Lee, Jong-Gun
    • Transactions of the Korean Society of Mechanical Engineers A, v.33 no.5, pp.529-535, 2009
  • This paper describes a device that measures glass thickness in real time using machine vision and image processing techniques. Today, machine vision can inspect faster and more precisely than the human eye. The presented system has the advantages of continuous measurement, flexibility, and good accuracy. The system consists of a laser diode, a CCD camera, and a PC. The camera, located on the opposite side of the incident beam, measures the distance between the two laser beams reflected from the top and bottom surfaces of the glass. A binarization algorithm is applied to convert and analyze the camera image on the PC, and the laser spot coordinates obtained by a border-tracing algorithm are used to find the center of each beam circle. The measured results were compared with a micrometer and showed 0.002 mm accuracy. Finally, we discuss how to minimize the errors caused by the glass wedge angle and the angular error of the moving stage.
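The core measurement can be sketched in one dimension: binarize an intensity profile across the two reflected spots, take the intensity-weighted centroid of each bright run, and convert the spot spacing to thickness with a calibration factor. The threshold, the `mm_per_pixel` scale, and the `geometry_factor` (which would fold in the incidence angle and the glass's refractive index) are assumed calibration constants, not values from the paper.

```python
def beam_centers(profile, thresh=128):
    """Centroids of bright runs in a 1-D intensity profile (pixel units).

    Minimal stand-in for the paper's binarization + border tracing:
    each above-threshold run is one laser spot, located at the
    intensity-weighted mean position of the run.
    """
    centers, run = [], []
    for i, v in enumerate(profile + [0]):      # sentinel closes a final run
        if v > thresh:
            run.append((i, v))
        elif run:
            total = sum(w for _, w in run)
            centers.append(sum(i * w for i, w in run) / total)
            run = []
    return centers

def glass_thickness(profile, mm_per_pixel, geometry_factor):
    """Thickness = spot spacing * calibration (both factors assumed known)."""
    c = beam_centers(profile)
    assert len(c) == 2, "expected reflections from top and bottom surfaces"
    return (c[1] - c[0]) * mm_per_pixel * geometry_factor

# Hypothetical profile: two spots centred at pixels 11 and 34.
profile = [0] * 10 + [200] * 3 + [0] * 20 + [180] * 3 + [0] * 10
print(beam_centers(profile))  # → [11.0, 34.0]
```

In the real system the centroid would be computed in 2-D over the traced spot border, but the spacing-times-calibration structure is the same.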

A Path tracking algorithm and a VRML image overlay method (VRML과 영상오버레이를 이용한 로봇의 경로추적)

  • Sohn, Eun-Ho;Zhang, Yuanliang;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the IEEK Conference, 2006.06a, pp.907-908, 2006
  • We describe a method for localizing a mobile robot in its working environment using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks in the environment using image processing and neural-network pattern-matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid with the VRML scene. This paper describes how the self-positioning is realized and shows the overlay of the 2-D and VRML scenes. The method successfully defines a robot's path.
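The abstract does not name the localization algorithm, so as a generic illustration of landmark-based self-positioning, the sketch below trilaterates a 2-D position from measured distances to two known landmarks by intersecting the two range circles; the landmark coordinates and ranges are hypothetical.

```python
import math

def trilaterate(l1, r1, l2, r2):
    """2-D position candidates from distances to two known landmarks.

    Generic stand-in for the paper's unspecified 'well-known
    localization algorithm': intersect the circles of radius r1, r2
    centred on landmarks l1, l2; two mirror-image candidates result,
    to be disambiguated by heading or a third landmark.
    """
    dx, dy = l2[0] - l1[0], l2[1] - l1[1]
    d = math.hypot(dx, dy)                       # landmark separation
    a = (r1**2 - r2**2 + d**2) / (2 * d)         # distance from l1 along the baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))        # offset from the baseline
    px, py = l1[0] + a * dx / d, l1[1] + a * dy / d
    return [(px - h * dy / d, py + h * dx / d),
            (px + h * dy / d, py - h * dx / d)]

# Robot truly at (1, 2); landmarks at (0, 0) and (4, 0).
cands = trilaterate((0, 0), math.hypot(1, 2), (4, 0), math.hypot(3, 2))
print(cands)  # one candidate is (1.0, 2.0), the other its mirror (1.0, -2.0)
```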

Sorting for Plastic Bottles Recycling using Machine Vision Methods

  • SanaSadat Mirahsani;Sasan Ghasemipour;AmirAbbas Motamedi
    • International Journal of Computer Science & Network Security, v.24 no.6, pp.89-98, 2024
  • Due to population growth and the consequent increase in plastic waste, recovering this portion of the waste stream is an undeniable necessity. Moreover, if plastic recycling is placed in a systematic, controlled process, it can create jobs and help maintain environmental health. Waste collection has become a major problem in many large cities because of inadequate planning in the face of growing waste volumes, driven by population concentration and changing consumption patterns. Today, waste management is no longer limited to waste collection; collection is only one of its important areas, alongside training, segregation, recycling, and processing. In this study, a systematic machine vision method for sorting plastic bottles of different colors for recycling purposes is proposed. Image classification and segmentation techniques are presented to improve the performance of plastic bottle classification. Evaluation of the proposed method and comparison with previous works showed its good performance.
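Color-based sorting of this kind typically maps each bottle's dominant color into a bin label, for example via hue. The sketch below is a minimal rule-based version: the hue bin edges and the low-saturation "clear/white" rule are illustrative choices, not the paper's trained classifier.

```python
import colorsys

def bottle_color(r, g, b, sat_min=0.25):
    """Classify a bottle's mean RGB color into a recycling-bin label.

    Minimal sketch of color-based sorting; bin edges are assumptions.
    Low saturation means the bottle is effectively colorless.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < sat_min:
        return "clear/white"
    deg = h * 360
    if deg < 20 or deg >= 330:
        return "red"
    if deg < 70:
        return "yellow"
    if deg < 170:
        return "green"
    if deg < 270:
        return "blue"
    return "red"   # magenta-ish hues grouped with red

print(bottle_color(40, 90, 200))    # → blue
print(bottle_color(230, 230, 225))  # → clear/white
```

In practice this rule would be applied to the mean color of each segmented bottle region rather than a single pixel.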

A Study on Lane Sensing System Using Stereo Vision Sensors (스테레오 비전센서를 이용한 차선감지 시스템 연구)

  • Huh, Kun-Soo;Park, Jae-Sik;Rhee, Kwang-Woon;Park, Jae-Hak
    • Transactions of the Korean Society of Mechanical Engineers A, v.28 no.3, pp.230-237, 2004
  • Lane sensing techniques based on vision sensors are regarded as promising because they require little infrastructure on the highway other than clear lane markers. However, they require more intelligent processing algorithms in the vehicle to generate the previewed roadway from the vision images. In this paper, a lane sensing algorithm using vision sensors is developed to improve sensing robustness. A parallel stereo camera is utilized to regenerate the 3-dimensional road geometry. The lane geometry models are derived such that their parameters represent the road curvature, lateral offset, and heading angle, respectively. The parameters of the lane geometry models are estimated by a Kalman filter and utilized to reconstruct the lane geometry in the global coordinate frame. The inverse perspective mapping from the image plane to the global coordinate frame accounts for roll and pitch motions of the vehicle so that the mapping error is minimized during acceleration, braking, or steering. The proposed sensing system has been built and implemented on a 1/10-scale model car.
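The Kalman-filter estimation step can be illustrated in its simplest form. The paper estimates curvature, lateral offset, and heading angle jointly with a vector filter; the scalar random-walk version below tracks a single parameter (say, lateral offset) and shows only the predict/update cycle. The noise values `q` and `r` and the measurement sequence are made up for illustration.

```python
def kalman_scalar(z_seq, q=1e-3, r=0.05, x0=0.0, p0=1.0):
    """1-D Kalman filter for one lane parameter, e.g. lateral offset.

    Scalar sketch of the paper's vector filter: a random-walk process
    model with process noise q and measurement noise r (assumed values).
    Returns the filtered estimate after each measurement.
    """
    x, p, out = x0, p0, []
    for z in z_seq:
        p = p + q                    # predict: random-walk model
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

# Noisy lateral-offset measurements around a true offset of 0.30 m.
est = kalman_scalar([0.35, 0.27, 0.31, 0.33, 0.29, 0.30])
print(est[-1])  # settles near 0.30
```

The full system replaces the scalars with a state vector (curvature, offset, heading) and matrix-valued noise, but the predict/gain/update structure is identical.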

Car detection area segmentation using deep learning system

  • Dong-Jin Kwon;Sang-hoon Lee
    • International journal of advanced smart convergence, v.12 no.4, pp.182-189, 2023
  • Recently, object detection and segmentation have emerged as crucial technologies widely utilized in fields such as autonomous driving, surveillance, and image editing. This paper proposes a program that uses the Qt framework to perform real-time object detection and precise instance segmentation by integrating YOLO (You Only Look Once) and Mask R-CNN. The system provides users with a versatile image editing environment, offering features such as selecting specific modes, drawing masks, inspecting detailed image information, and employing various image processing techniques, including those based on deep learning. The program leverages the efficiency of YOLO to enable fast and accurate object detection, providing bounding-box information. Additionally, it performs precise segmentation using Mask R-CNN, allowing users to accurately distinguish and edit objects within images. The Qt interface provides an intuitive, user-friendly environment for program control, enhancing accessibility. Through experiments and evaluations, the proposed system has been demonstrated to be effective in various scenarios. The program offers convenient yet powerful image processing and editing capabilities to both beginners and experts, smoothly integrating computer vision technology. This paper contributes to the growth of the computer vision application field and shows the potential of integrating various image processing algorithms on a user-friendly platform.
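Combining a box detector with an instance segmenter usually means matching YOLO's boxes to Mask R-CNN's instances, and the standard matching score is intersection over union (IoU). The sketch below computes IoU for `(x1, y1, x2, y2)` boxes; the box format and any matching threshold are common conventions, not details taken from this paper.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2).

    Standard metric for matching detections from two models; boxes
    with no overlap score 0, identical boxes score 1.
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # → 0.3333333333333333 (2/6)
```

A typical pipeline greedily pairs each YOLO box with the Mask R-CNN instance whose mask bounding box gives the highest IoU above some threshold (0.5 is a common choice).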

Machine Vision Technique for Rapid Measurement of Soybean Seed Vigor

  • Lee, Hoonsoo;Huy, Tran Quoc;Park, Eunsoo;Bae, Hyung-Jin;Baek, Insuck;Kim, Moon S.;Mo, Changyeun;Cho, Byoung-Kwan
    • Journal of Biosystems Engineering, v.42 no.3, pp.227-233, 2017
  • Purpose: Morphological properties of soybean roots are important indicators of the vigor of the seed, which determines the survival rate of the seedlings grown. The current vigor test for soybean seeds is manual measurement with the human eye. This study describes an application of a machine vision technique for rapid measurement of soybean seed vigor to replace the time-consuming and labor-intensive conventional method. Methods: A CCD camera was used to obtain color images of seeds during germination. Image processing techniques were used to obtain root segmentation. The various morphological parameters, such as primary root length, total root length, total surface area, average diameter, and branching points of roots were calculated from a root skeleton image using a customized pixel-based image processing algorithm. Results: The measurement accuracy of the machine vision system ranged from 92.6% to 98.8%, with accuracies of 96.2% for primary root length and 96.4% for total root length, compared to manual measurement. The correlation coefficient for each measurement was 0.999 with a standard error of prediction of 1.16 mm for primary root length and 0.97 mm for total root length. Conclusions: The developed machine vision system showed good performance for the morphological measurement of soybean roots. This image analysis algorithm, combined with a simple color camera, can be used as an alternative to the conventional seed vigor test method.
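Pixel-based length measurement on a root skeleton typically sums step lengths along the traced pixel path: orthogonal steps count one pixel, diagonal steps count sqrt(2) pixels, and a calibration factor converts pixels to millimetres. The path and the `mm_per_pixel` scale below are hypothetical, not the paper's calibration.

```python
import math

def root_length(path, mm_per_pixel=0.1):
    """Length of a root traced as a pixel path on a skeleton image.

    Sketch of pixel-based length measurement: orthogonal steps are
    1 px, diagonal steps sqrt(2) px; the scale factor is assumed.
    """
    px = 0.0
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        px += math.sqrt(2) if r0 != r1 and c0 != c1 else 1.0
    return px * mm_per_pixel

# A short skeleton path: two straight steps, then two diagonal steps.
length = root_length([(0, 0), (1, 0), (2, 0), (3, 1), (4, 2)])
print(length)  # (2 + 2*sqrt(2)) px * 0.1 mm/px
```

Branching points fall out of the same skeleton representation: they are the pixels with three or more skeleton neighbours.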

A Survey of Face Recognition Techniques

  • Jafri, Rabia;Arabnia, Hamid R.
    • Journal of Information Processing Systems, v.5 no.2, pp.41-68, 2009
  • Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.

A Novel Approach for Object Detection in Illuminated and Occluded Video Sequences Using Visual Information with Object Feature Estimation

  • Sharma, Kajal
    • IEIE Transactions on Smart Processing and Computing, v.4 no.2, pp.110-114, 2015
  • This paper reports a novel object-detection technique for video sequences. The proposed algorithm detects objects in illuminated and occluded videos using object features and a neural-network technique. It consists of two functional modules: region-based object feature extraction, and continuous detection of objects in video sequences using the region features. The scheme is proposed as an enhancement of Lowe's scale-invariant feature transform (SIFT) object-detection method and addresses the high computation time of feature generation in SIFT. The improvement is achieved by region-based feature classification of the objects to be detected; an optimal neural-network-based feature reduction is presented to reduce the object region feature dataset, with winner-pixel estimation between frames of the video sequence. Simulation results show that the proposed scheme achieves better overall performance than other object-detection techniques, and the region-based feature detection is faster than other recent techniques.