• Title/Summary/Keyword: computer vision systems

Car detection area segmentation using deep learning system

  • Dong-Jin Kwon;Sang-hoon Lee
    • International journal of advanced smart convergence, v.12 no.4, pp.182-189, 2023
  • In recent research, object detection and segmentation have emerged as crucial technologies widely utilized in various fields such as autonomous driving systems, surveillance, and image editing. This paper proposes a program that utilizes the Qt framework to perform real-time object detection and precise instance segmentation by integrating YOLO (You Only Look Once) and Mask R-CNN. The system provides users with a versatile image editing environment, offering features such as selecting specific modes, drawing masks, inspecting detailed image information, and employing various image processing techniques, including those based on deep learning. The program takes advantage of the efficiency of YOLO to enable fast and accurate object detection, providing bounding-box information. Additionally, it performs precise segmentation using Mask R-CNN, allowing users to accurately distinguish and edit objects within images. The Qt interface ensures an intuitive and user-friendly environment for program control, enhancing accessibility. Through experiments and evaluations, the proposed system has been demonstrated to be effective in various scenarios. The program provides convenient yet powerful image processing and editing capabilities to both beginners and experts, smoothly integrating computer vision technology. This paper contributes to the growth of the computer vision application field and shows the potential of integrating various image processing algorithms on a user-friendly platform.
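
A minimal sketch of the detection-plus-segmentation pipeline described in the abstract above, combining an off-the-shelf YOLO detector with torchvision's Mask R-CNN. This is illustrative only, not the authors' program: the weight files, the 0.5 score threshold, and the input image path are assumptions, and the Qt editing interface is omitted.

```python
# Illustrative sketch only (not the authors' program): combine a fast YOLO
# detector with torchvision's Mask R-CNN for instance masks. Weight files,
# the 0.5 score threshold, and the input image path are assumptions.
import torch
from PIL import Image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from ultralytics import YOLO  # assumes the ultralytics package is installed

detector = YOLO("yolov8n.pt")                                 # fast bounding boxes
segmenter = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()   # pixel-level masks

image = Image.open("scene.jpg").convert("RGB")

# Step 1: YOLO returns class ids and boxes quickly, suitable for real-time use.
boxes = detector(image)[0].boxes.xyxy.cpu()

# Step 2: Mask R-CNN adds per-instance masks; keep only confident detections.
with torch.no_grad():
    out = segmenter([to_tensor(image)])[0]
keep = out["scores"] > 0.5
masks = out["masks"][keep] > 0.5      # boolean masks, one per detected instance

print(f"YOLO boxes: {len(boxes)}, Mask R-CNN instances: {int(keep.sum())}")
```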

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.10, pp.4903-4929, 2018
  • In this study, a fully automatic pose- and expression-invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass of the algorithm coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For facial recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed for the multi-view face verification (FV) task. In addition, a multi-stage unified-classifier-based face identification (FI) algorithm is employed, which combines results from seven base classifiers, two parallel face recognition algorithms, and an exponential rank combiner, all in a hierarchical manner. The performance figures of the proposed methodology are corroborated by extensive experiments performed on four benchmark datasets: GavabDB, Bosphorus, UMB-DB and FRGC v2.0. Results show marked improvements in alignment accuracy and recognition rates. Moreover, a computational complexity analysis has been carried out for the proposed algorithm, which reveals its superiority in terms of computational efficiency as well.
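
As a rough illustration of the SVM-based verification step mentioned in the abstract above (not the authors' pipeline), the sketch below trains an SVM on difference features of paired face descriptors; the descriptors, pairing scheme, and kernel settings are all assumptions made for demonstration.

```python
# Toy demonstration of SVM-based face verification on paired descriptors.
# The descriptors below are random stand-ins; a real system would use the
# aligned 3D face features described in the paper.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def pair_features(a, b):
    # Simple absolute-difference representation of a (gallery, probe) pair.
    return np.abs(a - b)

gallery = rng.normal(size=(200, 64))                       # stand-in descriptors
probes = gallery + rng.normal(scale=0.1, size=gallery.shape)

genuine = np.array([pair_features(g, p) for g, p in zip(gallery, probes)])
impostor = np.array([pair_features(g, p)
                     for g, p in zip(gallery, np.roll(probes, 1, axis=0))])

X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])

verifier = SVC(kernel="rbf", C=1.0)   # optimal separating hyperplane in kernel space
verifier.fit(X, y)
print("training accuracy:", verifier.score(X, y))
```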

A Hierarchical deep model for food classification from photographs

  • Yang, Heekyung;Kang, Sungyong;Park, Chanung;Lee, JeongWook;Yu, Kyungmin;Min, Kyungha
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.4, pp.1704-1720, 2020
  • Recognizing food from photographs has many applications in machine learning, computer vision, dietetics, and related fields. Recent progress in deep learning techniques has greatly accelerated food recognition at scale. We build a hierarchical structure composed of deep CNNs to recognize and classify food from photographs. We build a dataset of Korean food with 18 classes, which are further grouped into 4 major classes. Our hierarchical recognizer classifies foods into the four major classes in the first step. Each food in a major class is then classified into its exact class in the second step. We employ the DenseNet structure as the baseline of our recognizer. The hierarchical structure provides higher accuracy and F1 score than those of a single-structured recognizer.
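
A minimal sketch of the two-step (major class, then fine class) idea described in the abstract above, assuming DenseNet-121 backbones; the per-branch class counts and the untrained weights are illustrative, not the authors' configuration.

```python
# Minimal two-stage classifier: a 4-way "major class" DenseNet followed by a
# per-major fine-grained DenseNet. Class counts per branch and the untrained
# weights are illustrative only.
import torch
import torch.nn as nn
from torchvision.models import densenet121

def make_head(num_classes):
    model = densenet121(weights=None)   # pretrained weights could be loaded here
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model.eval()

major_net = make_head(4)                          # step 1: four major food classes
fine_nets = {m: make_head(5) for m in range(4)}   # step 2: fine classes per branch

x = torch.randn(1, 3, 224, 224)                   # stand-in food photograph
with torch.no_grad():
    major = major_net(x).argmax(dim=1).item()         # coarse decision
    fine = fine_nets[major](x).argmax(dim=1).item()   # refined decision in that branch
print(f"major class {major}, fine class {fine}")
```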

Dynamic swarm particle for fast motion vehicle tracking

  • Jati, Grafika;Gunawan, Alexander Agung Santoso;Jatmiko, Wisnu
    • ETRI Journal, v.42 no.1, pp.54-66, 2020
  • Nowadays, the broad availability of cameras and embedded systems makes the application of computer vision very promising as a supporting technology for intelligent transportation systems, particularly in the field of vehicle tracking. Although several trackers already exist, the limitations of low-cost cameras, together with the relatively low processing power of embedded systems, make most of these trackers impractical. For a tracker to work under those conditions, the video frame rate must be reduced to decrease the computational burden. However, doing so makes the vehicle appear to move faster from the observer's viewpoint. This phenomenon is called the fast motion challenge. This paper proposes a tracker called dynamic swarm particle (DSP), which addresses this challenge. The term particle refers to the particle filter, while the term swarm refers to particle swarm optimization (PSO). The fundamental concept of our method is to exploit the continuity of vehicle motion dynamics by creating dynamic models based on PSO. In the experiments, DSP achieves a precision of 0.896 and a success rate of 0.755. These results are better than those obtained by several other benchmark trackers.
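
The sketch below is not the authors' DSP tracker; it only illustrates the standard PSO position/velocity update that the abstract above builds on, with a toy fitness function standing in for an appearance-similarity score. The swarm size, coefficients, and target location are made up.

```python
# Generic PSO update over candidate object centres; the fitness function is a
# toy stand-in for an appearance-similarity score.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([120.0, 80.0])            # hypothetical true object centre

n, w, c1, c2 = 30, 0.72, 1.49, 1.49         # swarm size and common PSO coefficients
pos = rng.uniform(0, 200, size=(n, 2))      # particle positions (candidate centres)
vel = np.zeros_like(pos)

def fitness(p):
    return -np.linalg.norm(p - target, axis=1)   # higher is better (closer to target)

pbest, pbest_f = pos.copy(), fitness(pos)
for _ in range(50):
    gbest = pbest[np.argmax(pbest_f)]
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = fitness(pos)
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]

print("estimated centre:", pbest[np.argmax(pbest_f)])
```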

Teleoperation System of a Mobile Robot over the Internet (인터넷을 이용한 이동로봇의 원격 운용 시스템)

  • Park, Taehyun;Gang, Geun-Taek;Lee, Wonchang
    • Journal of Institute of Control, Robotics and Systems, v.8 no.3, pp.270-274, 2002
  • This paper presents a teleoperation system that combines a computer network and an autonomous mobile robot. We remotely control an autonomous mobile robot with vision over the Internet, guiding it through unknown environments in real time. The main feature of this system is that local operators need only a web browser and a computer connected to the communication network, so they can command the robot at a remote location through the home page. The hardware architecture of this system consists of an autonomous mobile robot, a workstation, and local computers. The software architecture includes the client part for the user interface and robot control as well as the server part for communication between users and the robot. The server and client systems are developed in the Java language, which is well suited to Internet applications and supports multiple platforms. Furthermore, this system offers an image compression method based on the JPEG concept, which reduces the large time delay that occurs in the network during image transmission.
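
As a small illustration of the compress-before-transmit idea described in the abstract above: the original system was written in Java, while this sketch uses Python and OpenCV, and the quality setting and stand-in frame are assumptions.

```python
# Encode a frame as JPEG before sending it over the network, then decode it on
# the receiver side. The quality setting and the random stand-in frame are assumptions.
import cv2
import numpy as np

frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)  # stand-in camera frame

ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
assert ok
payload = jpeg.tobytes()   # compact byte stream suitable for a TCP socket
print(f"raw: {frame.nbytes} bytes, compressed: {len(payload)} bytes")

# Receiver side: rebuild the image from the received bytes.
restored = cv2.imdecode(np.frombuffer(payload, dtype=np.uint8), cv2.IMREAD_COLOR)
```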

Camera Calibration Method for an Automotive Safety Driving System (자동차 안전운전 보조 시스템에 응용할 수 있는 카메라 캘리브레이션 방법)

  • Park, Jong-Seop;Kim, Gi-Seok;Roh, Soo-Jang;Cho, Jae-Soo
    • Journal of Institute of Control, Robotics and Systems, v.21 no.7, pp.621-626, 2015
  • This paper presents a camera calibration method for a lane detection and inter-vehicle distance estimation system used in an automotive safety driving system. To implement lane detection and vision-based inter-vehicle distance estimation on embedded navigation or black box systems, it is necessary to consider computation time and algorithm complexity. The camera calibration process estimates the horizon, the position of the car's hood, and the lane width in order to extract the region of interest (ROI) from the input image sequences. The precision of the calibration method is very important for lane detection and inter-vehicle distance estimation. The proposed calibration method consists of three main steps: 1) horizon area determination; 2) estimation of the car's hood area; and 3) estimation of the initial lane width. Various experimental results show the effectiveness of the proposed method.
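
Illustrative only: once the horizon row and hood row are calibrated as in the abstract above, the lane-detection ROI is simply the image band between them. The row values and image size below are made up.

```python
# Once the horizon row and hood row are known, the lane-detection ROI is just
# the image band between them, which keeps per-frame computation small.
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in road image

horizon_row = 210   # hypothetical result of step 1 (horizon area determination)
hood_row = 430      # hypothetical result of step 2 (car's hood area)

roi = frame[horizon_row:hood_row, :]             # band used for lane detection
print("ROI shape:", roi.shape)                   # (220, 640, 3)
```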

A Development of Monitor Screen Checking System for Monitor Manufacturing Firm (모니터 생산업체에서의 최종 모니터 화면검사 시스템의 개발)

  • 조영창;윤정오;최병진;정종혁;강상욱;오주환
    • Proceedings of the Korea Society for Industrial Systems Conference, 2000.05a, pp.107-111, 2000
  • Many monitor manufacturing firms are still not equipped with an automatic checking system in their final process. Because the check is based on human perception, an automatic checking system is needed to give the checking process consistency and accuracy and to raise productivity and quality. As the performance of computer systems and vision systems has increased, the cost of such a system has fallen and applicable algorithms have been developed. In this study we develop a monitor checking system that is low-cost, fast, and easy for small-scale manufacturing firms to adopt. The system is based on computer vision techniques and is equipped with a GUI and checking functions such as centering, yoke rotation, pincushion, and sizing. The monitor checking system developed in this study can be used in the final checking process, and we expect benefits both in production efficiency and in reduced facility investment costs.
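
A rough sketch of a centering check of the kind listed in the abstract above: threshold the captured screen image, locate the lit area, and compare its centre with the frame centre. The file name, threshold, and tolerance are assumptions; this is not the authors' implementation.

```python
import cv2
import numpy as np

# Hypothetical captured image of the monitor under test (grayscale).
capture = cv2.imread("monitor_capture.png", cv2.IMREAD_GRAYSCALE)

# Separate the lit screen area from the dark surround and find its bounding box.
_, lit = cv2.threshold(capture, 40, 255, cv2.THRESH_BINARY)
x, y, w, h = cv2.boundingRect(cv2.findNonZero(lit))

# Compare the lit-area centre with the frame centre; the pixel tolerance is assumed.
frame_cx, frame_cy = capture.shape[1] / 2.0, capture.shape[0] / 2.0
dx, dy = (x + w / 2.0) - frame_cx, (y + h / 2.0) - frame_cy
print(f"centering offset: ({dx:.1f}, {dy:.1f}) px ->",
      "PASS" if max(abs(dx), abs(dy)) < 5 else "FAIL")
```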

A Study on the Development of Monitor Screen Checking System (모니터 화면검사 시스템의 개발에 관한 연구)

  • 조영창;윤정오;최병진;정종혁;강상욱;오주환
    • Journal of Korea Society of Industrial Information Systems, v.5 no.3, pp.111-116, 2000
  • Many monitor manufacturing firms are still not equipped with an automatic checking system in their final process. Because the check is based on human perception, an automatic checking system is needed to give the checking process consistency and accuracy and to raise productivity and quality. As the performance of computer systems and vision systems has increased, the cost of such a system has fallen and applicable algorithms have been developed. In this study we develop a monitor checking system that is low-cost, fast, and easy for small-scale manufacturing firms to adopt. The system is based on computer vision techniques and is equipped with a GUI and checking functions such as centering, yoke rotation, pincushion, sizing, brightness, and grayscale tracking. The monitor checking system developed in this study can be used in the final checking process, and we expect benefits both in production efficiency and in reduced facility investment costs.

The CAI systems for the image processing theory and practice (영상처리 이론 및 실습교육을 위한 통합교육시스템에 관한 연구)

  • 손병락;채옥삼
    • Proceedings of the IEEK Conference, 1998.06a, pp.741-744, 1998
  • In the multimedia computing field, the demand for image processing engineers is large. However, there are not many engineers capable of developing practical systems. To teach practical image processing techniques, we need an integrated education system that can efficiently present image processing theory and, at the same time, provide interactive experiments on the theory presented. In this paper, we propose an integrated education system for image processing. It consists of a theory presentation system and an experiment system. The theory presentation system supports multimedia display functions and HTML. It is tightly integrated with the experiment system, which was developed on the basis of the integrated image processing algorithm development system called Hello-vision.

A hierarchical semantic segmentation framework for computer vision-based bridge damage detection

  • Jingxiao Liu;Yujie Wei ;Bingqing Chen;Hae Young Noh
    • Smart Structures and Systems, v.31 no.4, pp.325-334, 2023
  • Computer vision-based damage detection enables non-contact, efficient, and low-cost bridge health monitoring, reducing the need for labor-intensive manual inspection or for a large number of on-site sensing instruments. By leveraging recent semantic segmentation approaches, we can detect regions of critical structural components and identify damage at the pixel level in images. However, existing methods perform poorly when detecting small and thin damages (e.g., cracks); the problem is exacerbated by imbalanced samples. To this end, we incorporate domain knowledge to introduce a hierarchical semantic segmentation framework that imposes a hierarchical semantic relationship between component categories and damage types. For instance, certain types of concrete cracks are only present on bridge columns, and therefore the non-column region may be masked out when detecting such damages. In this way, the damage detection model focuses on extracting features from relevant structural components and avoids those from irrelevant regions. We also utilize multi-scale augmentation to preserve the contextual information of each image without losing the ability to handle small and/or thin damages. In addition, our framework employs importance sampling, in which images with rare components are sampled more often, to address sample imbalance. We evaluated our framework on a public synthetic dataset consisting of 2,000 railway bridges. Our framework achieves a 0.836 mean intersection over union (IoU) for structural component segmentation and a 0.483 mean IoU for damage segmentation. These results represent improvements of 5% and 18% over the best-performing baseline model for the structural component segmentation and damage segmentation tasks, respectively.
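
A toy sketch of two ideas from the abstract above: masking damage scores outside the structural component where that damage can occur, and weighting rare-component images more heavily during sampling. Shapes, labels, and frequencies are made up; this is not the authors' framework.

```python
# Mask damage scores outside the component where that damage can occur, and
# oversample images that contain rare components.
import torch

H, W = 64, 64
component_pred = torch.randint(0, 3, (H, W))       # 0: background, 1: column, 2: deck (assumed)
crack_score = torch.rand(H, W)                     # raw damage probability map

column_mask = (component_pred == 1).float()
crack_score_masked = crack_score * column_mask     # ignore crack evidence off the column

# Importance sampling: images whose dominant component is rare get larger weights.
component_freq = torch.tensor([0.70, 0.05, 0.25])  # assumed class frequencies
image_main_component = torch.tensor([1, 2, 0, 1])  # dominant component per image (toy)
weights = 1.0 / component_freq[image_main_component]
sampler = torch.utils.data.WeightedRandomSampler(weights, num_samples=4, replacement=True)
print(list(sampler))
```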