• Title/Summary/Keyword: artificial vision

A vision-based system for inspection of expansion joints in concrete pavement

  • Jung Hee Lee; Ibragimov Eldor; Heungbae Gil; Jong-Jae Lee
    • Smart Structures and Systems / v.32 no.5 / pp.309-318 / 2023
  • The appropriate maintenance of highway roads is critical for the safe operation of road networks and reduces maintenance costs. Multiple methods have been developed to investigate road surfaces for various types of damage, such as cracks and potholes. Like road surface damage, the condition of expansion joints in concrete pavement is important for avoiding unexpected hazardous situations. Thus, in this study, a new vision-based system is proposed for autonomous expansion joint monitoring. The system consists of three key parts: (1) a camera-mounted vehicle, (2) indication marks on the expansion joints, and (3) a deep learning-based automatic evaluation algorithm. Paired marks placed on the expansion joints in a concrete pavement allow the joints to be detected automatically. An inspection vehicle is equipped with an action camera that acquires images of the expansion joints in the road. A You Only Look Once (YOLO) network automatically detects the marked expansion joints with an accuracy of 95%. The width of each detected expansion joint is then calculated using an image processing algorithm, and based on the calculated width the joint is classified into one of two types: normal or dangerous. The obtained results demonstrate that the proposed system is efficient in terms of both speed and accuracy.
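
The width-measurement and normal/dangerous decision lends itself to a short sketch. Below is a minimal, hypothetical version in Python with OpenCV: the Otsu thresholding, the pixel-to-millimeter scale, and the 50 mm danger threshold are illustrative assumptions, not the paper's calibrated values, and the YOLO detector itself is not reproduced.

```python
import cv2
import numpy as np

def joint_width_mm(joint_roi: np.ndarray, mm_per_px: float) -> float:
    """Estimate the gap width inside a detected expansion-joint ROI (BGR image)."""
    gray = cv2.cvtColor(joint_roi, cv2.COLOR_BGR2GRAY)
    # The joint gap appears dark against the concrete surface, so Otsu's
    # threshold with inversion isolates it as foreground.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dark-pixel count per row approximates the gap width along each scanline.
    widths = (mask > 0).sum(axis=1)
    return float(np.median(widths[widths > 0])) * mm_per_px

def classify_joint(width_mm: float, danger_mm: float = 50.0) -> str:
    # danger_mm is an assumed threshold, not the paper's calibrated value.
    return "dangerous" if width_mm > danger_mm else "normal"
```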

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.67-72 / 2023
  • In the past decade, autonomous vehicle systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on social and road safety and on the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in the case of autonomous vehicles, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as classifying them at short and long distances. This paper presents object classification using the fusion of CNN-based vision and LiDAR data in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LiDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with the red-green-blue (RGB) data and fed into a deep CNN. The proposed method can obtain an informative feature representation for object classification in an autonomous vehicle environment using the integrated vision and LiDAR data, and is adopted to guarantee both classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
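
The early-fusion step described above can be sketched briefly. The PyTorch snippet below stacks an upsampled depth map with the RGB channels to form a four-channel CNN input; the tiny network layout is illustrative only, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGBDFusionNet(nn.Module):
    """Toy early-fusion classifier: RGB (3ch) + upsampled depth (1ch) -> CNN."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, rgb: torch.Tensor, coarse_depth: torch.Tensor):
        # Upsample the coarse LiDAR-derived depth map to the RGB resolution.
        depth = F.interpolate(coarse_depth, size=rgb.shape[-2:],
                              mode="bilinear", align_corners=False)
        x = torch.cat([rgb, depth], dim=1)       # B x 4 x H x W fused input
        x = self.features(x).mean(dim=(2, 3))    # global average pooling
        return self.head(x)
```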

Development of the Container Damage Inspection System (컨테이너 파손 검사장치의 개발)

  • Oh Jae Ho; Hong Seong Woo; Choi Gyu Jong; Kim Myong Ho; Ahn Doo Sung
    • Journal of the Korean Society for Precision Engineering / v.22 no.1 / pp.82-88 / 2005
  • Damage inspection of container surfaces is performed by expert inspectors at the container terminal gate of a harbor. In this paper, we substitute the experts' capability with a damage inspection system based on an artificial intelligence control algorithm and a vision system, which improves the work environment and effectively decreases inspection time and cost. First, using six CCD cameras attached to the terminal gate, the whole container is captured in partial views, triggered by eleven sensors aligned along the entering direction of the container. The captured partial images are inspected by a fuzzy system in which the experts' knowledge is embedded. Next, the partial images are composed into a complete container image using the correlation coefficient method. The complete container image is saved for resolving problems that may arise later. The effectiveness of the proposed system was verified through a field test.
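
The image-composition step can be illustrated with a small sketch: two adjacent partial views are joined at the offset that maximizes the normalized correlation coefficient. The strip width is a hypothetical parameter, and the fuzzy inspection stage is not reproduced here.

```python
import cv2
import numpy as np

def stitch_pair(left: np.ndarray, right: np.ndarray, strip: int = 40) -> np.ndarray:
    """Join two same-height grayscale views; the overlap lies on left's right edge."""
    template = right[:, :strip]                   # leading strip of the next view
    scores = cv2.matchTemplate(left, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)      # location of the best match
    x = max_loc[0]                                # horizontal offset of the overlap
    return np.hstack([left[:, :x], right])        # keep left up to x, append right
```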

Multiple Templates and Weighted Correlation Coefficient-based Object Detection and Tracking for Underwater Robots (수중 로봇을 위한 다중 템플릿 및 가중치 상관 계수 기반의 물체 인식 및 추종)

  • Kim, Dong-Hoon; Lee, Dong-Hwa; Myung, Hyun; Choi, Hyun-Taek
    • The Journal of Korea Robotics Society / v.7 no.2 / pp.142-149 / 2012
  • Cameras have limitations of poor visibility in underwater environments due to limited light sources and the medium noise of the environment. However, their usefulness at close range has been proven in many studies, especially for navigation. Thus, in this paper, vision-based object detection and tracking techniques using artificial objects for underwater robots have been studied. We employed template matching and mean shift algorithms as the object detection and tracking methods. We also propose adaptive-threshold-based and color-region-aided approaches built on a weighted correlation coefficient to enhance object detection performance under various illumination conditions. The color information is incorporated into the template-matched area, and the features of the template are used to robustly calculate the correlation coefficients. The objects are then recognized using a multi-template matching approach. Finally, water basin experiments were conducted to demonstrate the performance of the proposed techniques using the underwater robot platform yShark, made by KORDI.
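
The weighted correlation coefficient at the core of the method can be sketched as a weighted Pearson correlation between a template and a candidate patch. The per-pixel weight map below is a stand-in; the paper's exact color-region-aided weighting is not reproduced.

```python
import numpy as np

def weighted_corrcoef(patch: np.ndarray, template: np.ndarray,
                      w: np.ndarray) -> float:
    """Weighted Pearson correlation; patch, template, w are same-shape arrays."""
    w = w / w.sum()                                  # normalize the weights
    mp = (w * patch).sum()                           # weighted means
    mt = (w * template).sum()
    cov = (w * (patch - mp) * (template - mt)).sum() # weighted covariance
    var_p = (w * (patch - mp) ** 2).sum()
    var_t = (w * (template - mt) ** 2).sum()
    return float(cov / np.sqrt(var_p * var_t + 1e-12))
```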

Control Strategy for Obstacle Avoidance of an Agricultural Robot (농용 로봇의 장애물 회피알고리즘)

  • 류관희; 김기영; 박정인; 류영선
    • Journal of Biosystems Engineering / v.25 no.2 / pp.141-150 / 2000
  • This study was carried out to develop a control strategy for a fruit-harvesting redundant robot. A method was proposed for generating a safe trajectory in 3D (three-dimensional) space that avoids collisions with obstacles such as branches or immature fruits, using the artificial potential field technique and a virtual plane concept. A method of setting reference velocity vectors to follow the trajectory and avoid obstacles in 3D space was also proposed. The developed methods were verified with computer simulations and with actual robot tests. For the actual robot tests, a machine vision system was used to detect fruits and obstacles. Results showed that the developed control method could reduce the occurrences of the robot manipulator entering the possible collision distance. With 10 virtual obstacles generated randomly in 3D space, the maximum rates of the manipulator coming within the possible collision distance of 0.03 m from the obstacles were 8% with 5 degrees of freedom (DOF), 8% with 6 DOF, and 4% with 7 DOF, respectively.
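
The reference-velocity construction can be sketched with the standard artificial potential field formulation: attraction toward the goal plus repulsion from obstacles inside an influence radius. The gains below are illustrative, not the paper's tuned values; the 0.03 m influence radius mirrors the collision distance quoted above.

```python
import numpy as np

def reference_velocity(p, goal, obstacles, k_att=1.0, k_rep=1e-4, rho0=0.03):
    """p, goal: 3-vectors (m); obstacles: iterable of 3-vectors (m)."""
    p = np.asarray(p, dtype=float)
    v = k_att * (np.asarray(goal, dtype=float) - p)   # attraction to the fruit
    for obs in obstacles:
        d = p - np.asarray(obs, dtype=float)          # vector away from obstacle
        rho = np.linalg.norm(d)
        if 0.0 < rho < rho0:                          # inside the influence radius
            # Standard repulsive-potential gradient, pushing away from obstacle.
            v += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (d / rho)
    return v
```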

Extraction of Geometric and Color Features in the Tobacco-leaf by Computer Vision (컴퓨터 시각에 의한 잎담배의 외형 및 색 특징 추출)

  • Cho, H.K.; Song, H.K.
    • Journal of Biosystems Engineering / v.19 no.4 / pp.380-396 / 1994
  • A personal computer-based color machine vision system with a video camera and a fluorescent lighting system was used to generate images of stationary tobacco leaves. Image processing algorithms were developed to extract both the geometric and the color features of the leaves. The geometric features include area, perimeter, centroid, roundness, and complex ratio. A color calibration scheme was developed to convert measured pixel values into standard color units using both statistics and an artificial neural network algorithm; an improved backpropagation algorithm showed a smaller sum of squared errors than multiple linear regression. The color features provide not only quality-evaluation quantities but also accurate color measurement. These quality features would be useful in grading tobacco automatically, and the system would also be useful in measuring visual features of other agricultural products.
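
Most of the geometric features named above follow directly from a contour of the segmented leaf. The sketch below computes area, perimeter, centroid, and roundness with OpenCV, assuming a binary leaf mask is already available; the segmentation and the color-calibration network are not reproduced.

```python
import cv2
import numpy as np

def leaf_geometry(mask: np.ndarray) -> dict:
    """mask: uint8 binary image with the leaf as the largest foreground blob."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)            # largest blob = the leaf
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    m = cv2.moments(c)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    roundness = 4.0 * np.pi * area / perimeter**2     # equals 1.0 for a circle
    return {"area": area, "perimeter": perimeter,
            "centroid": centroid, "roundness": roundness}
```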

Accurate Vehicle Positioning on a Numerical Map

  • Laneurit Jean; Chapuis Roland; Chausse Frédéric
    • International Journal of Control, Automation, and Systems / v.3 no.1 / pp.15-31 / 2005
  • Road safety is now an important research field, and one of its principal topics is vehicle localization in the road network. This article presents a multi-sensor fusion approach able to locate a vehicle with decimeter precision. The information used in this method comes from the following sensors: a low-cost GPS, a digital camera, an odometer, and a steering angle sensor. Taking into account a complete model of the errors on GPS data (position bias and non-white errors), as well as the data provided by an original approach coupling a vision algorithm with a precise numerical map, allows us to reach this precision.
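
Between GPS and vision corrections, the odometer and steering angle sensor named above support dead-reckoning prediction of the pose. The sketch below uses a standard bicycle model; the wheelbase value is illustrative, and the paper's full error model and fusion filter are not reproduced.

```python
import numpy as np

def predict_pose(x, y, heading, ds, steer, wheelbase=2.7):
    """Advance the pose by a traveled distance ds (m, from the odometer)
    at steering angle steer (rad), using a kinematic bicycle model."""
    heading += ds * np.tan(steer) / wheelbase   # yaw change over the arc
    x += ds * np.cos(heading)                   # move along the new heading
    y += ds * np.sin(heading)
    return x, y, heading
```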

Vision-based AGV Parking System (비젼 기반의 무인이송차량 정차 시스템)

  • Park, Young-Su; Park, Jee-Hoon; Lee, Je-Won; Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.473-479 / 2009
  • This paper proposes an efficient method to bring an automated guided vehicle (AGV) into a specific parking position using an artificial visual landmark and a vision-based algorithm. The landmark has corner features and an HSI color arrangement for robustness against illumination variation, and is attached to the left of a parking spot under a crane. For parking, the AGV detects the landmark with a CCD camera fixed to the AGV, using the Harris corner detector and matching the descriptors of the corner features. After detecting the landmark, the AGV tracks it using the pyramidal Lucas-Kanade feature tracker and a refinement process. The AGV then decreases its speed and aligns its longitudinal position with the center of the landmark. Experiments showed that the AGV parked accurately at the parking spot with a small standard deviation of error under both bright and dark illumination.
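
The detect-then-track pipeline maps directly onto OpenCV primitives. A minimal sketch, assuming grayscale frames and Harris-scored corners, is given below; the landmark descriptor matching and the refinement process are not reproduced.

```python
import cv2
import numpy as np

def detect_corners(gray: np.ndarray) -> np.ndarray:
    """Harris-scored corners in the landmark region (float32, N x 1 x 2)."""
    return cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                   minDistance=7, useHarrisDetector=True, k=0.04)

def track(prev_gray: np.ndarray, next_gray: np.ndarray,
          pts: np.ndarray) -> np.ndarray:
    """Pyramidal Lucas-Kanade tracking of corners into the next frame."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    return nxt[status.ravel() == 1]   # keep only successfully tracked points
```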

Dividing Occluded Humans Based on an Artificial Neural Network for the Vision of a Surveillance Robot (감시용 로봇의 시각을 위한 인공 신경망 기반 겹친 사람의 구분)

  • Do, Yong-Tae
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.505-510 / 2009
  • In recent years the space where a robot works has been expanding into the human space, unlike traditional industrial robots that work only at fixed positions apart from humans. In this situation a human may be the owner of a robot or the target of a robotic application. This paper deals with the latter case: when a robot vision system is employed to monitor humans for a surveillance application, each person in a scene needs to be identified. Humans, however, often move together, and occlusions between them occur frequently. Although this problem has not been seriously tackled in the relevant literature, it brings difficulty into later image analysis steps such as tracking and scene understanding. In this paper, a probabilistic neural network is employed to learn the patterns of the best dividing position along the top pixels of an image region of partly occluded people. As the method uses only shape information from an image, it is simple and can be implemented in real time.
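
The shape feature the network learns from is the column-wise top-pixel profile of the foreground region. A minimal sketch of extracting that profile from a binary mask is shown below; the probabilistic neural network itself is not reproduced.

```python
import numpy as np

def top_profile(mask: np.ndarray) -> np.ndarray:
    """For each column of a binary mask, the row index of the highest
    foreground pixel (the mask height if the column is empty)."""
    h, w = mask.shape
    prof = np.full(w, h, dtype=int)
    for col in range(w):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:
            prof[col] = rows[0]       # smallest row index = topmost pixel
    return prof
```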

Analysis of the effect of class classification learning on the saliency map of Self-Supervised Transformer (클래스분류 학습이 Self-Supervised Transformer의 saliency map에 미치는 영향 분석)

  • Kim, JaeWook; Kim, Hyeoncheol
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.67-70 / 2022
  • As Transformer models, first actively adopted in the NLP field, have begun to be applied in the vision field, they have been overcoming the stagnant performance of existing CNN-based models and improving results in areas such as object detection and segmentation. In addition, a ViT (Vision Transformer) model trained by self-supervised learning on images alone, without label data, makes it possible to extract a saliency map that detects the regions of the important objects contained in an image; as a result, research on object detection and semantic segmentation through self-supervised learning of ViT is actively underway. In this paper, we attach a classifier to a ViT model and compare and analyze, through visualization, the saliency maps of a model trained in the ordinary supervised way and a model transfer-learned from self-supervised pretrained weights. Through this, we confirm the effect that class-classification-based transfer learning has on the Transformer's saliency map.
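
The saliency maps being compared come from the CLS token's attention over image patches. The sketch below shows, with a toy single-head attention layer, how that attention row is reshaped into a patch-grid saliency map; the DINO-style self-supervised weights and the classifier-finetuned weights compared in the paper are not reproduced.

```python
import torch
import torch.nn.functional as F

def cls_saliency(tokens: torch.Tensor, Wq: torch.Tensor, Wk: torch.Tensor,
                 grid: int) -> torch.Tensor:
    """tokens: (1+N, d) with the CLS token first; returns a grid x grid map."""
    q = tokens[:1] @ Wq                        # query for the CLS token, (1, d)
    k = tokens @ Wk                            # keys for all tokens, (1+N, d)
    attn = F.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
    return attn[0, 1:].reshape(grid, grid)     # drop CLS-to-CLS, map to patches

# Toy usage for a 14x14 patch grid with 64-dim tokens (random weights):
d, n = 64, 14 * 14
tokens = torch.randn(1 + n, d)
Wq, Wk = torch.randn(d, d), torch.randn(d, d)
saliency = cls_saliency(tokens, Wq, Wk, grid=14)  # higher value = more attended
```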
