• Title/Summary/Keyword: Computer vision technology

Implementation of Improved Object Detection and Tracking based on Camshift and SURF for Augmented Reality Service (증강현실 서비스를 위한 Camshift와 SURF를 개선한 객체 검출 및 추적 구현)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology
    • /
    • v.16 no.4
    • /
    • pp.97-102
    • /
    • 2017
  • Object detection and tracking have become among the most active research areas of the past few years and play an important role in everyday computer vision applications. Many tracking techniques have been proposed; Camshift is an effective algorithm for real-time dynamic object tracking, but it uses only color features, which makes it sensitive to illumination and other environmental factors. This paper presents and implements an effective moving object detection and tracking method that reduces the influence of illumination interference and improves tracking performance against similarly colored backgrounds. The implemented prototype system recognizes objects using invariant features and reduces the dimension of the feature descriptor to rectify these problems. The experimental results show that the system is superior to existing methods in processing time and maintains better performance in various environments.

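As context, a minimal sketch of the baseline color-based CamShift step in OpenCV appears below; it is not the paper's improved SURF-assisted method, and the video path and initial window are placeholder assumptions.

```python
# Baseline CamShift tracking (OpenCV): track a region by the hue histogram
# of its initial appearance. This color-only step is what the paper improves.
import cv2

cap = cv2.VideoCapture("input.mp4")            # hypothetical video source
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80                  # assumed initial object window

# Build a hue histogram of the initial region (CamShift's only feature).
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation every frame.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term)
```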

Multiple Properties-Based Moving Object Detection Algorithm

  • Zhou, Changjian;Xing, Jinge;Liu, Haibo
    • Journal of Information Processing Systems
    • /
    • v.17 no.1
    • /
    • pp.124-135
    • /
    • 2021
  • Object detection is a fundamental yet challenging task in computer vision that plays an important role in object recognition, tracking, and scene analysis and understanding. This paper proposes a multi-property fusion algorithm for moving object detection. First, we build a scale-invariant feature transform (SIFT) vector field and analyze its vectors to divide them into different classes. Second, the distance of each class is calculated by dispersion analysis. Next, the target and its contour are extracted; the images are then segmented, reversed, and morphologically processed, and the moving objects can be detected. The experimental results show good stability, accuracy and efficiency.
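
One plausible reading of the "SIFT vector field" is a displacement field built from SIFT keypoints matched across two frames, sketched below; the frame files and motion threshold are assumptions.

```python
# Build a displacement field from matched SIFT keypoints in two frames.
import cv2
import numpy as np

prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(prev, None)
kp2, des2 = sift.detectAndCompute(curr, None)

# Match descriptors, then keep each match's displacement vector.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)
vectors = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                    for m in matches])

# Background vectors cluster near zero; large magnitudes hint at motion.
# Class separation by dispersion analysis would follow, per the abstract.
moving = vectors[np.linalg.norm(vectors, axis=1) > 2.0]  # assumed threshold
```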

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.4903-4929
    • /
    • 2018
  • In this study, a fully automatic pose- and expression-invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For face recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed for the multi-view face verification (FV) task. In addition, a multi-stage unified-classifier-based face identification (FI) algorithm is employed, which hierarchically combines results from seven base classifiers, two parallel face recognition algorithms, and an exponential rank combiner. The performance figures of the proposed methodology are corroborated by extensive experiments on four benchmark datasets: GavabDB, Bosphorus, UMB-DB and FRGC v2.0. Results show marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis reveals the proposed algorithm's superiority in terms of computational efficiency as well.
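
The SVM verification step can be sketched as a binary classifier over feature-difference vectors (genuine vs. impostor pairs); the random features below are placeholders, not the paper's aligned 3D face features.

```python
# Face verification as binary classification with an SVM: small feature
# differences suggest the same person, large ones a different person.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 0.5, (100, 128))   # same-person difference vectors
impostor = rng.normal(2.0, 0.5, (100, 128))  # different-person differences
X = np.vstack([genuine, impostor])
y = np.array([1] * 100 + [0] * 100)

# The RBF-kernel SVM finds the optimal separating hyperplane (OSH) the
# abstract refers to, here in the kernel-induced feature space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict(genuine[:3]))              # expected: [1 1 1]
```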

Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space (시공간상의 궤적 분석에 의한 제스쳐 인식)

  • 민병우;윤호섭;소정;에지마 도시야끼
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.1
    • /
    • pp.157-157
    • /
    • 1999
  • Research on gesture recognition has become a very interesting topic in the computer vision area. Gesture recognition from visual images has a number of potential applications such as HCI (Human Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as datagloves or color-marked gloves. In this research, we capture gesture images without using external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm on a polar-coordinate space to which the point-tokens on the Cartesian space are converted. The gesture vocabulary is composed of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred data per gesture were collected from twenty persons; fifty were used for training and another fifty for the recognition experiment. The recognition result shows about a 95% recognition rate and suggests that these results can be applied to several potential gesture-operated systems. The developed system runs in real time for editing basic graphic primitives on a Pentium Pro (200 MHz), a Matrox Meteor graphics board and a CCD camera, under a Windows 95 and Visual C++ software environment.
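
The two-stage pipeline (vector quantization, then discrete-HMM scoring) might look like the sketch below; k-means stands in for the LBG algorithm, and all model parameters are illustrative, untrained values.

```python
# Quantize trajectory points into discrete symbols, then score the symbol
# sequence with a left-right HMM via the scaled forward algorithm.
import numpy as np
from sklearn.cluster import KMeans

# Polar-coordinate trajectory points (placeholder data) -> 16-symbol codebook.
points = np.random.rand(500, 2)
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(points)
symbols = codebook.predict(np.random.rand(30, 2))    # one gesture sequence

# Left-right HMM: each state may only stay or advance to the next state.
n_states, n_symbols = 5, 16
A = np.zeros((n_states, n_states))
for i in range(n_states - 1):
    A[i, i], A[i, i + 1] = 0.6, 0.4
A[-1, -1] = 1.0
B = np.full((n_states, n_symbols), 1.0 / n_symbols)  # untrained emissions
pi = np.eye(1, n_states).ravel()                     # always start in state 0

def forward_loglik(obs):
    """Log-likelihood of a symbol sequence under (pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha /= s
    return log_p

# In practice one HMM is trained per gesture; the highest likelihood wins.
print(forward_loglik(symbols))
```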

Recognition of Individual Holstein Cattle by Imaging Body Patterns

  • Kim, Hyeon T.;Choi, Hong L.;Lee, Dae W.;Yoon, Yong C.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.18 no.8
    • /
    • pp.1194-1198
    • /
    • 2005
  • A computer vision system was designed and validated to recognize individual Holstein cattle by processing images of their body patterns. The system involves image capture, image pre-processing, algorithm processing, and an artificial neural network recognition algorithm. Optimal management of individuals is one of the most important factors in keeping cattle healthy and productive. In this study, an image-processing system was used to recognize individual Holstein cattle by identifying body-pattern images captured by a charge-coupled device (CCD). A recognition system was developed and applied to acquire images of 49 cattle. The pixel values of the body images were transformed into input data comprising binary signals for the neural network. Images of the 49 cattle were analyzed to train the input layer elements, and ten cattle were used to verify the output layer elements of the neural network using an individual recognition program. The system proved reliable for the individual recognition of cattle in natural light.
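
The recognition stage can be sketched, under stated assumptions, as a small neural network classifier over binarized body patterns; the data below is synthetic, whereas the paper used CCD images of real animals.

```python
# Classify binarized body-pattern images with a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_animals, img_px = 49, 32 * 32
# One prototype binary pattern per animal, plus noisy copies for training.
prototypes = rng.integers(0, 2, (n_animals, img_px)).astype(float)
X = np.repeat(prototypes, 5, axis=0)
X = X + rng.normal(0, 0.1, X.shape)               # simulate capture noise
y = np.repeat(np.arange(n_animals), 5)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1)
net.fit(X, y)
print(net.predict(prototypes[:3]))                # expected: [0 1 2]
```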

Multi-Channel Vision System for On-Line Quantification of Appearance Quality Factors of Apple

  • Lee, Soo Hee;Noh, Sang Ha
    • Agricultural and Biosystems Engineering
    • /
    • v.1 no.2
    • /
    • pp.106-110
    • /
    • 2000
  • An integrated on-line inspection system was constructed with seven cameras, half mirrors to split images, 720 nm and 970 nm band-pass filters, an illumination chamber with several tungsten-halogen lamps, one main computer, one color frame grabber, two 4-channel multiplexers, a flat plate conveyor, etc. A total of seven images, that is, one color image from the top of an apple and two B/W images from each side (top, right and left), could be captured and displayed on a computer monitor through the multiplexer. One of the two B/W images captured from each side is the 720 nm filtered image and the other the 970 nm one. With this system, on-line grading software was developed to evaluate appearance quality. On-line test results with Fuji apples manually fed on the conveyor showed grading accuracies of 95.3%, 86% and 88.6% for color, defect and shape, respectively. Grading time was 0.35 seconds per apple on average. Therefore, this on-line grading system could be used for inspection of the final products from an apple sorting system.

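One plausible use of the 720 nm / 970 nm image pair is a per-pixel band ratio to expose defective tissue, sketched below; the file names and threshold are assumptions, not the paper's calibrated values.

```python
# Ratio of two band-pass images: bruised tissue reflects the two bands
# differently, so the ratio highlights it against healthy skin.
import cv2
import numpy as np

img_720 = cv2.imread("apple_720nm.png", cv2.IMREAD_GRAYSCALE).astype(float)
img_970 = cv2.imread("apple_970nm.png", cv2.IMREAD_GRAYSCALE).astype(float)

ratio = img_720 / (img_970 + 1e-6)            # avoid division by zero
defect_mask = (ratio < 0.8).astype(np.uint8)  # assumed defect threshold
defect_fraction = defect_mask.sum() / defect_mask.size
print(f"defect area fraction: {defect_fraction:.3f}")
```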

A Robotic System for Transferring Tobacco Seedlings

  • Lee, D.W.;W.F.McClure
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1993.10a
    • /
    • pp.850-858
    • /
    • 1993
  • Germination and early growth of tobacco seedlings in trays containing many cells is increasing in popularity. Since 100% germination is unlikely, a major problem is to locate and replace the contents of those cells which contain either no seedling or a stunted seedling with a plug containing a viable seedling. Empty cells and seedlings of poor quality take up valuable space in a greenhouse, and may also cause difficulty when transplanting seedlings into the field. Robotic technology, including the implementation of computer vision, appears to be an attractive alternative to manual labor for accomplishing this task. Operating AGBOT, short for Agricultural ROBOT, involved four steps: (1) capturing the image, (2) processing the image, (3) moving the manipulator, and (4) working the gripper. This research applied the robot to transferring seedlings within a cell-grown environment; the configuration of the cell-grown seedling environment dictated the design of a Cartesian robot suitable for working over a flat plane. Experiments on AGBOT's performance in transferring large seedlings produced trays in which more than 98% of seedlings survived one week after transfer. In general, the system performed much better than expected.

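An illustrative control loop for the four AGBOT steps is sketched below; the robot and gripper API is entirely hypothetical, and only the image heuristic uses a real library (OpenCV).

```python
# Capture -> process -> move manipulator -> work gripper, per the abstract.
import cv2

def cell_is_empty(cell_img, green_thresh=0.05):
    """Assumed heuristic: a viable seedling shows enough green pixels."""
    hsv = cv2.cvtColor(cell_img, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    return green.mean() / 255.0 < green_thresh

def transfer_pass(cells, robot):
    """cells: (row, col, image) crops of the tray; robot: hypothetical API."""
    for row, col, cell_img in cells:
        if cell_is_empty(cell_img):
            robot.move_to(row, col)           # hypothetical Cartesian move
            robot.gripper.pick_from_supply()  # hypothetical gripper calls
            robot.gripper.place()
```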

Multi-Channel Vision System for On-Line Quantification of Appearance Quality Factors of Apple

  • Lee, S. H.;S. H. Noh
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2000.11c
    • /
    • pp.551-559
    • /
    • 2000
  • An integrated on-line inspection system was constructed with seven cameras, half mirrors to split images, 720 nm and 970 nm band-pass filters, an illumination chamber with several tungsten-halogen lamps, one main computer, one color frame grabber, two 4-channel multiplexers, a flat plate conveyor, etc., so that a total of seven images, that is, one color image from the top of an apple and two B/W images from each side (top, right and left), could be captured and displayed on a computer monitor through the multiplexer. One of the two B/W images captured from each side is the 720 nm filtered image and the other the 970 nm one. With this system, on-line grading software was developed to evaluate appearance quality. On-line test results with Fuji apples manually fed on the conveyor showed grading accuracies of 95.3%, 86% and 91% for color, defect and shape, respectively. Grading time was 0.35 seconds per apple on average. Therefore, this on-line grading system could be used for inspection of the final products from an apple sorting system.

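For the shape criterion, a contour-based roundness score is one possibility, sketched below; the metric and pass threshold are assumptions, not the paper's grading rule.

```python
# Roundness = 4*pi*area / perimeter^2 (1.0 for a perfect circle).
import cv2
import numpy as np

gray = cv2.imread("apple_silhouette.png", cv2.IMREAD_GRAYSCALE)  # placeholder
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
apple = max(contours, key=cv2.contourArea)     # largest blob = the apple
area = cv2.contourArea(apple)
perimeter = cv2.arcLength(apple, True)
roundness = 4 * np.pi * area / perimeter ** 2
print("shape grade:", "pass" if roundness > 0.85 else "fail")  # assumed cutoff
```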

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.1
    • /
    • pp.207-212
    • /
    • 2020
  • Human centric lighting (HCL) control is a major focus of smart lighting system design, aiming to provide energy-efficient lighting that supports occupants' mood and rhythm in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve human motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, indoor surveillance camera video streams are used to predict daylight, occupancy, and occupant-specific emotional features using advanced computer vision techniques, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) enabled lighting devices and controls the illumination of the lighting devices relevant to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-enabled open-source controller on the network, along with a networked video surveillance solution. The experimental results were verified with a custom automatic lighting control daemon application integrated with OpenCV-based computer vision methods that predict the human-centric features; based on the estimated features, the lighting illumination level and colors are controlled automatically. The results obtained from the daemon system are analyzed and used for real-time development of a lighting system control strategy.
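
A minimal sketch of the camera-to-lighting loop, assuming a Haar-cascade occupancy detector and a hypothetical REST endpoint on the light controller; the paper's own pipeline also predicts emotional features, omitted here.

```python
# Detect occupancy and daylight in one frame, then push a light level.
import cv2
import requests

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lighting_decision(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    ambient = gray.mean() / 255.0              # crude daylight estimate
    if len(faces) == 0:
        return 0                               # room empty: lights off
    return int(max(0.0, 1.0 - ambient) * 100)  # dim when daylight is high

cap = cv2.VideoCapture(0)                      # surveillance camera stream
ok, frame = cap.read()
if ok:
    level = lighting_decision(frame)
    # Hypothetical REST endpoint on the building's light controller.
    requests.post("http://controller.local/light", json={"level": level})
```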

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • v.33 no.5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation relied on target objects such as markers or patterns. However, such marker- or pattern-based methods are often time-consuming because a target object must be installed for each estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D-2D coordinate pairs. This framework presents a substantial advancement, as it streamlines the extrinsic calibration process and thereby potentially enhances the efficiency of CV technology application and data collection at construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
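
The PnP step the framework relies on can be illustrated with OpenCV's solvePnP: given 3D points on a material of known dimensions and their detected 2D image locations, it recovers the extrinsic parameters. The panel size, pixel coordinates, and intrinsics below are placeholder assumptions.

```python
# Recover camera pose from 3D-2D correspondences on a known-size panel.
import cv2
import numpy as np

# Four corners of an assumed 0.6 m x 1.2 m concrete form panel (world frame).
object_pts = np.array([[0.0, 0.0, 0.0], [0.6, 0.0, 0.0],
                       [0.6, 1.2, 0.0], [0.0, 1.2, 0.0]])
# Their pixel locations, as keypoint detection would supply them.
image_pts = np.array([[320.0, 400.0], [580.0, 390.0],
                      [600.0, 120.0], [340.0, 110.0]])
K = np.array([[800.0, 0.0, 480.0],             # assumed intrinsic matrix
              [0.0, 800.0, 270.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                     # rotation vector -> matrix
camera_position = (-R.T @ tvec).ravel()        # camera center, world frame
print("camera position [m]:", camera_position)
```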