• Title/Abstract/Keyword: computer vision systems

Search results: 599

Image Enhanced Machine Vision System for Smart Factory

  • Kim, ByungJoo
    • International Journal of Internet, Broadcasting and Communication / Vol.13 No.2 / pp.7-13 / 2021
  • Machine vision is a technology that enables a computer to recognize and judge objects as a person would. In recent years, as advanced technologies such as optical systems, artificial intelligence, and big data have been incorporated into conventional machine vision systems, quality inspection has become more accurate and manufacturing efficiency has increased. In machine vision systems based on deep learning, the quality of the input image is very important; however, most images acquired in industrial settings for quality inspection contain noise, and this noise is a major factor limiting system performance. Therefore, to improve the performance of a machine vision system, the noise in the image must be removed, and a great deal of research has been devoted to image denoising. In this paper, we propose an autoencoder-based machine vision system that eliminates noise in the image. In experiments, the proposed model showed better denoising and image reconstruction capability than a basic autoencoder on the MNIST and Fashion-MNIST data sets.
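As a point of reference for this approach, the sketch below trains a basic denoising autoencoder on MNIST with synthetic Gaussian noise. It is an illustrative, assumption-laden example (the architecture, noise level, and training settings are chosen for brevity), not the model described in the paper.

# Minimal denoising-autoencoder sketch in Keras (illustrative only).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST and add synthetic Gaussian noise to simulate noisy inspection images.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
x_train_noisy = np.clip(x_train + 0.3 * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + 0.3 * np.random.normal(size=x_test.shape), 0.0, 1.0)

# Simple fully connected encoder/decoder, trained to map noisy inputs to clean images.
inputs = layers.Input(shape=(28, 28))
x = layers.Flatten()(inputs)
x = layers.Dense(128, activation="relu")(x)
encoded = layers.Dense(32, activation="relu")(x)
x = layers.Dense(128, activation="relu")(encoded)
x = layers.Dense(28 * 28, activation="sigmoid")(x)
outputs = layers.Reshape((28, 28))(x)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train_noisy, x_train, epochs=5, batch_size=256,
                validation_data=(x_test_noisy, x_test))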

Development and testing of a composite system for bridge health monitoring utilising computer vision and deep learning

  • Lydon, Darragh;Taylor, S.E.;Lydon, Myra;Martinez del Rincon, Jesus;Hester, David
    • Smart Structures and Systems / Vol.24 No.6 / pp.723-732 / 2019
  • Globally, road transport networks are subjected to continuous stress from increasing loading and environmental effects. As roads are the most popular means of transport in the UK, the condition of this civil infrastructure is a key indicator of economic growth and productivity. Structural Health Monitoring (SHM) systems can provide valuable insight into the true condition of our aging infrastructure. In particular, monitoring the displacement of a bridge structure under live loading can provide an accurate descriptor of bridge condition. In the past, bridge weigh-in-motion (B-WIM) systems have been used to collect traffic data and hence provide an indicator of bridge condition; however, the use of such systems can be restricted by bridge type, access issues, and cost limitations. This research provides a non-contact, low-cost, AI-based solution for vehicle classification and associated bridge displacement using computer vision methods. Convolutional neural networks (CNNs) were adapted to develop the QUBYOLO vehicle classification method from recorded traffic images. This vehicle classification was then accurately related to the corresponding bridge response obtained under live loading using non-contact methods. The successful identification of multiple vehicle types during field testing has shown that QUBYOLO is suitable for the fine-grained vehicle classification required to identify the load applied to a bridge structure. The process of displacement analysis and vehicle classification for load identification used in this research adds to the body of knowledge on the monitoring of existing bridge structures, particularly long-span bridges, and establishes the significant potential of computer vision and deep learning to provide dependable results on the real response of our infrastructure to existing and potentially increased loading.
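For context, non-contact displacement measurement of this kind is often demonstrated with simple target tracking. The sketch below uses OpenCV template matching on a hypothetical bridge video ("bridge.mp4" and the tracked region are assumptions); it is a generic illustration, not the QUBYOLO pipeline or the displacement method reported in the paper.

# Generic non-contact displacement tracking via normalized cross-correlation
# template matching in OpenCV (illustrative only).
import cv2

cap = cv2.VideoCapture("bridge.mp4")          # hypothetical recording of a bridge target
ok, first = cap.read()
assert ok, "could not read video"
gray0 = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

x, y, w, h = 300, 200, 60, 60                 # hypothetical region around a tracked target
template = gray0[y:y + h, x:x + w]

displacements = []                            # vertical displacement in pixels per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)     # best-match location of the target
    displacements.append(max_loc[1] - y)      # pixel shift; convert to mm with a scale factor

cap.release()
print("peak vertical displacement (pixels):", max(displacements, default=0))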

Development of Pipe Fault Inspection System using Computer Vision

  • 박찬호;양순용;안경관;오현옥;이병룡
    • Journal of Institute of Control, Robotics and Systems / Vol.9 No.10 / pp.822-831 / 2003
  • A computer-vision-based pipe-inspection algorithm is developed. The algorithm uses a modified Hough transformation and a line-scanning approach to identify the edge lines and the radius in the pipe image, from which the eccentricity and dimensions of the pipe end are calculated. Line and circle detection is performed by applying the Laplace operator to input images acquired from the front and side cameras. To minimize memory usage and processing time, a clustering method combined with the modified Hough transformation is introduced for line detection. The inner and outer radii of the pipe are calculated by the proposed line-scanning method, which scans several lines along the X and Y axes and calculates the eccentricity of the inner and outer circles, so that pipes with a defective end shape can be classified and removed.
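As a rough illustration of circle detection on a pipe-end image, the sketch below uses OpenCV's built-in gradient-based Hough circle detector in place of the paper's modified Hough transformation and line-scanning method; the file name and parameter values are assumptions.

# Illustrative circle detection and eccentricity estimate on a pipe-end image.
import cv2
import numpy as np

img = cv2.imread("pipe_end.png", cv2.IMREAD_GRAYSCALE)   # hypothetical front-camera image
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Detect candidate circles (inner and outer pipe walls).
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=100, param2=40, minRadius=20, maxRadius=200)

if circles is not None:
    circles = np.round(circles[0]).astype(int)
    circles = circles[circles[:, 2].argsort()]            # sort by radius
    (xi, yi, ri), (xo, yo, ro) = circles[0], circles[-1]  # smallest ~ inner, largest ~ outer
    eccentricity = np.hypot(xo - xi, yo - yi)              # centre offset in pixels
    print(f"inner r={ri}, outer r={ro}, centre offset={eccentricity:.1f}px")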

Unusual Motion Detection for Vision-Based Driver Assistance

  • Fu, Li-Hua;Wu, Wei-Dong;Zhang, Yu;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.15 No.1 / pp.27-34 / 2015
  • For a vision-based driver assistance system, unusual motion detection is one of the important means of preventing accidents. In this paper, we propose a real-time unusual-motion-detection model that consists of two stages: salient region detection and unusual motion detection. In the salient-region-detection stage, we present an improved temporal attention model. In the unusual-motion-detection stage, three factors (speed, motion direction, and distance) are extracted for detecting unusual motion. A series of experimental results demonstrates the proposed method and shows the feasibility of the model.
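For illustration, the sketch below extracts per-pixel speed and direction with dense Farneback optical flow and flags frames whose motion deviates strongly from the median. It is a generic stand-in for the motion factors mentioned above, not the paper's temporal attention model; the video file and thresholds are assumptions.

# Dense optical flow as a simple source of speed/direction motion factors.
import cv2
import numpy as np

cap = cv2.VideoCapture("dashcam.mp4")        # hypothetical driver-assistance video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed, direction = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Flag pixels whose speed deviates strongly from the frame's median as "unusual".
    unusual = speed > (np.median(speed) + 3 * speed.std())
    if unusual.mean() > 0.01:                # more than 1% of pixels move unusually fast
        print("possible unusual motion in this frame")
    prev_gray = gray

cap.release()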

Detection of Traditional Costumes: A Computer Vision Approach

  • Marwa Chacha Andrea;Mi Jin Noh;Choong Kwon Lee
    • Smart Media Journal / Vol.12 No.11 / pp.125-133 / 2023
  • Traditional attire has assumed a pivotal role within the contemporary fashion industry. The objective of this study is to construct a computer vision model tailored to the recognition of traditional costumes originating from five distinct countries, namely India, Korea, Japan, Tanzania, and Vietnam. Leveraging a dataset comprising 1,608 images, we trained the state-of-the-art computer vision model YOLOv8. The model yielded an impressive overall mean average precision (mAP) of 96%. Notably, the Indian sari exhibited a remarkable mAP of 99%, the Tanzanian kitenge 98%, the Japanese kimono 92%, the Korean hanbok 89%, and the Vietnamese ao dai 83%. Furthermore, the model demonstrated a commendable overall box precision of 94.7% and a recall of 84.3%. Within the fashion industry, this model possesses considerable utility for trend projection and for personalized recommendation systems.
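A minimal YOLOv8 fine-tuning sketch with the Ultralytics API is shown below; the dataset YAML, class list, and training settings are assumptions, since the paper's exact configuration is not given in the abstract.

# Minimal YOLOv8 fine-tuning sketch (illustrative only).
from ultralytics import YOLO

# "costumes.yaml" is a hypothetical dataset file listing train/val image folders and
# five classes: sari, hanbok, kimono, kitenge, ao_dai.
model = YOLO("yolov8n.pt")                     # pretrained nano model as a starting point
results = model.train(data="costumes.yaml", epochs=50, imgsz=640)

# Evaluate and run inference on a hypothetical test image.
metrics = model.val()                          # reports precision, recall, mAP50, mAP50-95
predictions = model("test_images/hanbok_01.jpg")
predictions[0].show()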

Neural computer systems

  • 김성수;우광방
    • Institute of Control, Robotics and Systems: Conference Proceedings / Proceedings of the 1988 Korea Automatic Control Conference (Domestic Session); KEPCO Training Institute, Seoul; 21-22 Oct. 1988 / pp.552-555 / 1988
  • In this paper, the authors introduce the concepts of neural computer systems, which have been studied for over 25 years in other countries, and illustrate the neural network models suggested by researchers. Our fundamental hypothesis is that these models are applicable to the construction of artificial neural systems, including neural computers. We therefore regard neural computer systems as abstract computer systems that are based on the computational properties of the human brain and are particularly well suited to problems in vision and language understanding.


Visual Servoing of a Mobile Manipulator Based on Stereo Vision

  • 이현정;박민규;이민철
    • Journal of Institute of Control, Robotics and Systems / Vol.11 No.5 / pp.411-417 / 2005
  • In this study, a stereo vision system is applied to a mobile manipulator so that it can perform tasks effectively. The robot can recognize a target and compute the position of the target using the stereo vision system. While a monocular vision system needs additional properties of a target, such as its geometric shape, a stereo vision system enables the robot to find the position of a target without such information. Many algorithms have been studied and developed for object recognition; however, most of these approaches are computationally complex and therefore inadequate for real-time visual servoing. Color information is useful for simple recognition in real-time visual servoing. This paper addresses color-based object recognition, a stereo matching method that reduces calculation time, recovery of the 3D position, and the resulting visual servoing.
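For illustration, the sketch below performs the kind of lightweight color-based target recognition suited to real-time visual servoing: it thresholds a color range in HSV space and reports the target centroid. The color range and image file are assumptions, not the paper's values.

# Simple color-based target recognition in HSV space (illustrative only).
import cv2
import numpy as np

frame = cv2.imread("left_camera.png")               # hypothetical left-camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Threshold a red-ish target (two hue bands because red wraps around 180).
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Take the largest blob as the target and report its image-plane centroid.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    target = max(contours, key=cv2.contourArea)
    m = cv2.moments(target)
    if m["m00"]:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"target centroid in left image: ({cx:.1f}, {cy:.1f})")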

A Prototype for Stereo Vision Systems using OpenCV

  • 이정수;정새암;김준성
    • Institute of Electronics Engineers of Korea: Conference Proceedings / 2008 Summer Conference / pp.763-764 / 2008
  • Sensing is an important part of a smart home system. Vision sensors are a type of passive sensor and are not sensitive to noise. In this paper, we implement a prototype for stereo vision systems using OpenCV, an open-source computer vision library originally developed by Intel Corporation. The prototype will be used for comparing the performance of various stereo algorithms and for developing a stereo vision smart camera.
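A minimal OpenCV stereo sketch is shown below: it computes a disparity map with block matching from a rectified image pair and converts it to depth. File names, matcher parameters, and calibration values are assumptions, since the prototype is not described at this level of detail.

# Block-matching stereo with OpenCV (illustrative only).
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; compute() returns fixed-point disparities.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# With focal length f (pixels) and baseline B (metres), depth = f * B / disparity.
f, B = 700.0, 0.12                                  # hypothetical calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]

disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("disparity.png", disp_vis)
print("median depth of valid pixels (m):", float(np.median(depth[valid])))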


The improvement of MIRAGE I robot system

  • 한국현;서보익;오세종
    • Institute of Control, Robotics and Systems: Conference Proceedings / Proceedings of the 1997 Korea Automatic Control Conference; KEPCO Seoul Training Institute; 17-18 Oct. 1997 / pp.605-607 / 1997
  • Depending on how the robots are controlled, the robot systems of the teams that participate in MIROSOT can be divided into three categories: the remote brainless system, the vision-based system, and the robot-based system. The MIRAGE I robot control system uses the last of these, the robot-based system, in which the host computer with the vision system transmits only the locations of the ball and the robots. Based on this robot control method, we took part in MIROSOT '96 and MIROSOT '97.


A Study on Image Annotation Automation Process using SHAP for Defect Detection

  • 정진형;심현수;김용수
    • Journal of Society of Korea Industrial and Systems Engineering / Vol.46 No.1 / pp.76-83 / 2023
  • Recently, the development of computer vision with deep learning has made object detection using images applicable to diverse fields such as medical care, manufacturing, and transportation. The manufacturing industry is saving time and money by applying computer vision technology to detect defects or issues that may occur during the manufacturing and inspection process. Computer vision technology requires annotations of the collected images together with their location information. However, manually labeling large numbers of images is time-consuming and expensive, and the results can vary among workers, which may degrade annotation quality and lead to inaccurate performance. This paper proposes a process that can automatically collect annotations and location information for images using eXplainable AI, without manual annotation. If applied to the manufacturing industry, this process is expected to save the time and cost of collecting image annotations while yielding relatively high-quality annotation information.
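As a rough sketch of how an explanation map could be turned into an annotation, the code below computes SHAP attributions for an image with shap.GradientExplainer and thresholds the strongest pixels into a bounding box. The classifier, file names, threshold, and the assumed list-per-class return format of shap_values are all assumptions, not the paper's pipeline.

# Turning a SHAP attribution map into a bounding-box annotation (illustrative only).
import numpy as np
import shap
import tensorflow as tf

model = tf.keras.models.load_model("defect_classifier.h5")   # hypothetical trained classifier
background = np.load("background_images.npy")                 # hypothetical reference samples
images = np.load("unlabeled_images.npy")                      # images to annotate, shape (N, H, W, C)

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(images[:1])               # attribution per class

sv = shap_values[0]                      # first class (list-per-class return format assumed)
attr = np.abs(sv[0]).sum(axis=-1)        # collapse channels into an (H, W) importance map
mask = attr > np.quantile(attr, 0.99)    # keep the strongest 1% of pixels
ys, xs = np.where(mask)
if ys.size:
    print("auto-annotation bbox (x_min, y_min, x_max, y_max):",
          (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))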