• Title/Summary/Keyword: computer vision systems


Diabetic Retinopathy Grading in Ultra-widefield Fundus Images Using Deep Learning (딥 러닝을 사용한 초광각 망막 이미지에서 당뇨망막증의 등급 평가)

  • Van-Nguyen Pham;Kim-Ngoc T. Le;Hyunseung Choo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.632-633
    • /
    • 2023
  • Diabetic retinopathy (DR) is a prevalent complication of diabetes that can lead to vision impairment if not diagnosed and treated promptly. This study presents a novel approach for the automated grading of diabetic retinopathy in ultra-widefield fundus images (UFI) using deep learning techniques. We propose a method that involves preprocessing UFIs by cropping the central region to focus on the most relevant information. Subsequently, we employ state-of-the-art deep learning models, including ResNet50, EfficientNetB3, and Xception, to perform DR grade classification. Our extensive experiments reveal that Xception outperforms the other models in terms of classification accuracy, sensitivity, and specificity. This research contributes to the development of automated tools that can assist healthcare professionals in early DR detection and management, thereby reducing the risk of vision loss among diabetic patients.
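The central-region preprocessing step described above can be sketched as follows. This is a minimal illustration only: the crop dimensions, the function name, and the mock 8×8 image are assumptions for demonstration, not values from the paper.

```python
import numpy as np

def center_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Crop the central crop_h x crop_w region of an H x W (x C) image,
    keeping the most relevant part of an ultra-widefield fundus image."""
    h, w = image.shape[:2]
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return image[top:top + crop_h, left:left + crop_w]

# Example: crop the central half of a mock 8x8 "ultra-widefield" image.
ufi = np.arange(64).reshape(8, 8)
patch = center_crop(ufi, 4, 4)
```

The cropped patch would then be resized and fed to the classifier (ResNet50, EfficientNetB3, or Xception in the study).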

Development of the Sorting Inspection System for Screw/Bolt Using a Slant Method (슬랜트방식을 이용한 스크류/볼트 선별검사시스템 개발)

  • Kim, Yong-Seok;Yang, Soon-Yong
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.19 no.5
    • /
    • pp.698-704
    • /
    • 2010
  • Machine vision systems have been widely applied in automatic inspection across many industries, and they perform especially well in areas that are difficult to inspect by contact methods. In this paper, an automatic slant-method system that inspects screw/bolt shape using machine vision is developed. The inspection system uses a pattern-matching method that evaluates the brightness similarity, the average brightness, and the length and angle of the inspection area using circular-scan and line-scan methods. The products to be inspected are fed by the slant method, and the feed rate is controlled by adjusting the ramp angle. The inspection system consists of a feeding device, a transfer device, vision systems, a lighting device, and a computer, together with a sorting discharge system for defective products. Performance tests evaluated the feeding speed, shape-matching accuracy, and sorting discharge speed for each type of screw/bolt, and the system showed satisfactory results on all inspection items. This sorting inspection system is currently in productive use in screw/bolt manufacturing.
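The line-scan pattern matching idea above can be sketched as a 1-D profile comparison. The function names, the normalized cross-correlation measure, and the toy profiles are illustrative assumptions; the paper's actual matching criteria are not specified beyond brightness, length, and angle.

```python
import numpy as np

def line_scan_profile(image, row):
    """Intensity profile along one horizontal scan line of the part image."""
    return image[row, :].astype(float)

def similarity(profile, reference):
    """Normalized cross-correlation between a scan profile and a
    reference profile; 1.0 means a perfect shape match."""
    p = profile - profile.mean()
    r = reference - reference.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(r)
    return float(p @ r / denom) if denom else 0.0

# A part passes inspection when its scan profile closely matches the reference.
reference = np.array([0, 0, 255, 255, 0, 0], dtype=float)
good_part = np.array([[0, 0, 250, 252, 0, 0]])
score = similarity(line_scan_profile(good_part, 0), reference)
```

A circular scan would work the same way, sampling intensities along a circle instead of a row before comparing against the reference.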

Vision Based Vehicle Detection and Traffic Parameter Extraction (비젼 기반 차량 검출 및 교통 파라미터 추출)

  • 하동문;이종민;김용득
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.11
    • /
    • pp.610-620
    • /
    • 2003
  • Shadows are one of the main sources of error in vision-based vehicle detection. In this paper, two simple methods, a landmark-based method and a BS & Edge method, are proposed for vehicle detection and shadow rejection. In the experiments, vehicle detection accuracy exceeded 96%, even while shadows cast by roadside buildings grew considerably. Based on these two methods, vehicle counting, tracking, classification, and speed estimation are achieved, so that real-time traffic parameters describing the load of each lane can be extracted.
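The abstract gives only the name "BS & Edge" (background subtraction plus edges); a plausible reading is sketched below. The thresholds, the toy frame, and the exact way the two cues are combined are assumptions for illustration.

```python
import numpy as np

def detect_vehicles_bs(frame, background, bs_thresh=30, edge_thresh=20):
    """BS & Edge sketch (thresholds are illustrative): a pixel is a vehicle
    candidate when it differs from the background (background subtraction)
    AND carries a strong horizontal edge. Shadows also differ from the
    background, but their interiors are smooth and edge-free, so they are
    rejected by the edge condition."""
    fg = np.abs(frame.astype(int) - background.astype(int)) > bs_thresh
    edges = np.zeros_like(fg)
    edges[:, 1:] = np.abs(np.diff(frame.astype(int), axis=1)) > edge_thresh
    return fg & edges

# Toy frame: a smooth "shadow" in columns 1-3, a textured "vehicle" in 6-8.
background = np.zeros((5, 10), dtype=np.uint8)
frame = background.copy()
frame[:, 1:4] = 100                                    # uniform shadow region
frame[:, 6], frame[:, 7], frame[:, 8] = 150, 250, 150  # textured vehicle
mask = detect_vehicles_bs(frame, background)
```

In this toy example the textured vehicle columns survive both tests while the uniform shadow interior is suppressed.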

Robust Vision-Based Autonomous Navigation Against Environment Changes (환경 변화에 강인한 비전 기반 로봇 자율 주행)

  • Kim, Jungho;Kweon, In So
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.2
    • /
    • pp.57-65
    • /
    • 2008
  • Many studies on intelligent robots have been conducted recently. An intelligent robot is capable of recognizing environments or objects and using sensor readings to perform specific tasks autonomously. One of the fundamental problems in vision-based robot applications is to recognize where the robot is and to decide on a safe path for autonomous navigation. However, previous approaches only consider well-organized environments in which there are no moving objects or environmental changes. In this paper, we introduce a novel navigation strategy that handles occlusions caused by moving objects using various computer vision techniques. Experimental results demonstrate the capability to overcome such difficulties during autonomous navigation.

An Automated Projection Welding System using Vision Processing Technique (영상인식 기술을 이용한 프로젝션용접 자동화시스템)

  • Park, Ki-Jung;Song, Ha-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.4
    • /
    • pp.517-522
    • /
    • 2011
  • Conventional projection welding systems suffer from many defective products caused by manual handling. In this paper, we introduce a projection welding system that automatically identifies, welds, and counts components and products. The proposed system uses a vision camera to check the existence and placement of the components to be welded. After welding, it automatically updates the product counts and dressing items. We show through experimental comparison with an existing system that the proposed welding system reduces the defect rate and improves productivity.

Automation of Tire Tread Extruder Line Using Cameras (카메라를 이용한 타이어 트레드 압출라인 자동화)

  • Pyo, Choon-Seon;Lyou, Joon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.3
    • /
    • pp.262-267
    • /
    • 2013
  • This paper describes a vision-based automation case study for a tire tread extruder line. To measure the tread widths accurately, two cameras with laser line illumination were installed near the takeaway conveyor. The overall tread extruder line is then automated by controlling the speeds of the takeaway conveyor and the screw motor so that the difference between the measured widths and the target data is minimized. As a result, the conventional tread extruder line has been replaced by the developed automated computer system operated by a single operator, increasing production efficiency and reducing safety accidents.
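The width-error minimization above is, in essence, feedback control. A minimal proportional-control sketch follows; the gain, the inverse-proportional toy plant model, and the function name are assumptions, not details from the paper.

```python
def conveyor_speed_update(speed, measured_width, target_width, gain=0.001):
    """Proportional correction (gain is illustrative): a tread wider than
    the target means the takeaway conveyor is running too slowly relative
    to the extruder output, so the speed is increased, and vice versa."""
    return speed + gain * (measured_width - target_width)

# Toy plant: tread width is inversely proportional to conveyor speed.
speed = 1.0
for _ in range(200):
    width = 120.0 / speed
    speed = conveyor_speed_update(speed, width, target_width=100.0)
final_width = 120.0 / speed
```

Under this toy model the loop drives the measured width toward the 100-unit target; a production system would control the conveyor and screw motor jointly.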

Vision-based Kinematic Modeling of a Worm's Posture (시각기반 웜 자세의 기구학적 모형화)

  • Do, Yongtae;Tan, Kok Kiong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.3
    • /
    • pp.250-256
    • /
    • 2015
  • We present a novel method to model the body posture of a worm for vision-based automatic monitoring and analysis. The worm considered in this study is a Caenorhabditis elegans (C. elegans), which is popularly used for research in biological science and engineering. We model the posture by an open chain of a few curved or rigid line segments, in contrast to previously published approaches wherein a large number of small rigid elements are connected for the modeling. Each link segment is represented by only two parameters: an arc angle and an arc length for a curved segment, or an orientation angle and a link length for a straight line segment. Links in the proposed method can be readily related using the Denavit-Hartenberg convention due to similarities to the kinematics of an articulated manipulator. Our method was tested with real worm images, and accurate results were obtained.
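The two-parameter-per-link chain above can be sketched for the straight-segment case (an orientation angle and a link length); the curved-arc case, and the full Denavit-Hartenberg bookkeeping, are omitted. The function name and the example segments are illustrative assumptions.

```python
import math

def worm_chain_points(segments):
    """Forward kinematics for an open chain of straight segments, each
    described by only two parameters: a relative orientation angle (rad)
    and a length, composed like the joint angles of a planar articulated
    manipulator. Returns the joint positions along the body."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for angle, length in segments:
        heading += angle                 # angles accumulate along the chain
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# Two unit-length links with a 90-degree bend between them.
pts = worm_chain_points([(0.0, 1.0), (math.pi / 2, 1.0)])
```

Fitting such a chain to a segmented worm silhouette then reduces to estimating a handful of (angle, length) pairs rather than many small rigid elements.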

A Lane Change Recognition System for Smart Cars (스마트카를 위한 차선변경 인식시스템)

  • Lee, Yong-Jin;Yang, Jeong-Ha;Kwak, Nojun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.1
    • /
    • pp.46-51
    • /
    • 2015
  • In this paper, we propose a vision-based method to recognize lane changes of an autonomous vehicle. The proposed method is based on six states of driving situations defined by the positional relationship between a vehicle and its nearest lane detected. With the combinations of these states, the lane change is detected. The proposed method yields 98% recognition accuracy of lane change even in poor situations with partially invisible lanes.
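The state-based recognition above can be sketched as follows. Note the simplification: the paper defines six driving states, which this hypothetical sketch collapses to three zones around the nearest detected lane marking; the 0.3 m threshold and all names are assumptions.

```python
def lane_state(offset, near=0.3):
    """Map the signed lateral offset (m) to the nearest detected lane
    marking onto coarse states (negative = left of the marking)."""
    if offset < -near:
        return "LEFT_OF_LANE"
    if offset > near:
        return "RIGHT_OF_LANE"
    return "ON_LANE"

def detect_lane_change(offsets):
    """A lane change is recognized when the vehicle appears on one side
    of the marking and later on the other side."""
    sides = [s for s in (lane_state(o) for o in offsets) if s != "ON_LANE"]
    return any(a != b for a, b in zip(sides, sides[1:]))
```

For example, an offset sequence sweeping from -1.0 m to +1.0 m is recognized as a lane change, while one that stays on the left side is not.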

Vision Processing for Precision Autonomous Landing Approach of an Unmanned Helicopter (무인헬기의 정밀 자동착륙 접근을 위한 영상정보 처리)

  • Kim, Deok-Ryeol;Kim, Do-Myoung;Suk, Jin-Young
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.1
    • /
    • pp.54-60
    • /
    • 2009
  • In this paper, a precision landing approach is implemented based on real-time image processing. A full-scale landmark for automatic landing is used: Canny edge detection is applied to identify the outer quadrilateral, while a circular Hough transform is used to recognize the inner circle. Position information on the ground landmark is uplinked to the unmanned helicopter via the ground control computer in real time so that the unmanned helicopter can control the air vehicle for an accurate landing approach. A ground test and a couple of flight tests for the autonomous landing approach show that the image processing and automatic landing operation system perform well during the landing approach phase at altitudes from 20 m down to 1 m above ground level.
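The circular Hough transform used for the inner circle can be sketched in its simplest known-radius form: each edge pixel votes for every candidate center at the given radius, and the accumulator peak is taken as the circle center. The synthetic edge set and all parameters below are illustrative assumptions.

```python
import math
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Minimal circular Hough transform for a known radius: each edge
    pixel (y, x) votes for all centers at distance `radius`, and the
    accumulator peak gives the estimated circle center."""
    acc = np.zeros(shape, dtype=int)
    for y, x in edge_points:
        for t in range(360):
            cy = int(round(y - radius * math.sin(math.radians(t))))
            cx = int(round(x - radius * math.cos(math.radians(t))))
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic edge map: a circle of radius 10 centered at (25, 30), as might
# come out of a Canny edge detector applied to the landmark image.
edges = [(25 + round(10 * math.sin(math.radians(t))),
          30 + round(10 * math.cos(math.radians(t)))) for t in range(0, 360, 5)]
center = hough_circle_center(edges, 10, (50, 60))
```

A practical system would search over a range of radii and use a library implementation; the voting principle stays the same.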

Towards a Ubiquitous Robotic Companion: Design and Implementation of Ubiquitous Robotic Service Framework

  • Ha, Young-Guk;Sohn, Joo-Chan;Cho, Young-Jo;Yoon, Hyun-Soo
    • ETRI Journal
    • /
    • v.27 no.6
    • /
    • pp.666-676
    • /
    • 2005
  • In recent years, motivated by the emergence of ubiquitous computing technologies, a new class of networked robots, ubiquitous robots, has been introduced. The Ubiquitous Robotic Companion (URC) is our conceptual vision of ubiquitous service robots that provide users with the services they need, anytime and anywhere in ubiquitous computing environments. To realize the vision of URC, one of the essential requirements for robotic systems is to support ubiquity of services: that is, a robot service must always be available even when the service environment changes. Specifically, robotic systems need to be automatically interoperable with the sensors and devices in the current service environment, rather than statically preprogrammed for them. In this paper, the design and implementation of a semantic-based ubiquitous robotic space (SemanticURS) is presented. SemanticURS enables automated integration of networked robots into ubiquitous computing environments by exploiting Semantic Web Services and AI-based planning technologies.
