• Title/Abstract/Keywords: Computer vision technology


Multiple Properties-Based Moving Object Detection Algorithm

  • Zhou, Changjian;Xing, Jinge;Liu, Haibo
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 1
    • /
    • pp.124-135
    • /
    • 2021
  • Object detection is a fundamental yet challenging task in computer vision that plays an important role in object recognition, tracking, scene analysis and understanding. This paper proposes a multi-property fusion algorithm for moving object detection. First, we build a scale-invariant feature transform (SIFT) vector field and analyze its vectors to divide them into different classes. Second, the distance of each class is calculated by dispersion analysis. Next, the target and its contour are extracted; after image segmentation, inversion, and morphological processing, the moving objects are detected. The experimental results show good stability, accuracy and efficiency.
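The vector-classification and dispersion-analysis steps described in this abstract can be sketched as follows. This is a minimal illustration assuming a simple angle-binning rule for the vector classes; the function names and the bin count are hypothetical, not the paper's implementation:

```python
import numpy as np

def classify_vectors(vectors, n_bins=8):
    """Group 2D motion vectors into direction classes (a crude stand-in
    for classifying vectors of the SIFT vector field)."""
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])      # [-pi, pi]
    return ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

def class_dispersion(vectors, labels):
    """Per-class dispersion: mean distance of each class's vectors
    from that class's centroid."""
    out = {}
    for c in np.unique(labels):
        v = vectors[labels == c]
        out[int(c)] = float(np.mean(np.linalg.norm(v - v.mean(axis=0), axis=1)))
    return out

# toy vector field: two near-parallel vectors and one orthogonal outlier
field = np.array([[1.0, 0.0], [1.1, 0.05], [0.0, 1.0]])
labels = classify_vectors(field)   # vectors 0 and 1 share a direction class
disp = class_dispersion(field, labels)
```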

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12, No. 10
    • /
    • pp.4903-4929
    • /
    • 2018
  • In this study, a fully automatic pose- and expression-invariant 3D face alignment algorithm based on a two-pass coarse-to-fine alignment strategy is proposed to handle frontal and profile face images. The first pass coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For face recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a support vector machine (SVM) is employed for the multi-view face verification (FV) task. In addition, a multi-stage unified-classifier-based face identification (FI) algorithm is employed, which hierarchically combines results from seven base classifiers, two parallel face recognition algorithms and an exponential rank combiner. The performance figures of the proposed methodology are corroborated by extensive experiments on four benchmark datasets: GavabDB, Bosphorus, UMB-DB and FRGC v2.0. Results show a marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis reveals the superiority of the proposed algorithm in terms of computational efficiency as well.
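The exponential rank combiner mentioned above can be illustrated with a small sketch. The decay form `exp(-rank)` is an assumption made for illustration; the paper's exact weighting may differ:

```python
import math

def exponential_rank_combine(rankings):
    """Fuse ranked candidate lists from several classifiers.
    Each ranking lists candidate IDs best-first; a candidate's score
    contribution decays exponentially with its rank position."""
    scores = {}
    for ranking in rankings:
        for rank, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0.0) + math.exp(-rank)
    # highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# three hypothetical base classifiers ranking three gallery identities
fused = exponential_rank_combine([
    ["id3", "id1", "id7"],
    ["id1", "id3", "id7"],
    ["id3", "id7", "id1"],
])
# "id3" tops two of the three lists and wins the fused ranking
```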

시공간상의 궤적 분석에 의한 제스쳐 인식 (Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space)

  • 민병우;윤호섭;소정;에지마 도시야끼
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 26, No. 1
    • /
    • pp.157-157
    • /
    • 1999
  • Research on gesture recognition has become a very interesting topic in the computer vision area. Gesture recognition from visual images has a number of potential applications such as HCI (Human Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as datagloves or color-marked gloves. In this research, we capture gesture images without using external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm on a polar-coordinate space into which the point-tokens on the Cartesian space are converted. The gesture vocabulary is composed of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred data per gesture were collected from twenty persons; fifty were used for training and the other fifty for the recognition experiment. The results show about a 95% recognition rate and the possibility that they can be applied to several potential systems operated by gestures. The developed system runs in real time for editing basic graphic primitives in a hardware environment of a Pentium Pro (200 MHz), a Matrox Meteor graphics board and a CCD camera, with Windows 95 and Visual C++ as the software environment.
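The Cartesian-to-polar conversion and the LBG quantization used to build the HMM input vectors can be sketched as follows. This is a minimal version under assumed defaults (split offset, iteration count); it is not the authors' code:

```python
import numpy as np

def to_polar(points):
    """Convert Cartesian point-tokens to (radius, angle) rows."""
    pts = np.asarray(points, dtype=float)
    return np.stack([np.hypot(pts[:, 0], pts[:, 1]),
                     np.arctan2(pts[:, 1], pts[:, 0])], axis=1)

def lbg_codebook(data, n_codes=2, eps=0.01, iters=20):
    """Minimal Linde-Buzo-Gray quantizer: start from the global mean,
    split each codeword by +/-eps, then refine codewords by
    nearest-neighbour re-assignment (k-means style updates)."""
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < n_codes:
        codebook = np.vstack([codebook + eps, codebook - eps])
        for _ in range(iters):
            d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                if np.any(nearest == k):
                    codebook[k] = data[nearest == k].mean(axis=0)
    return codebook

tokens = to_polar([[1.0, 0.0], [0.0, 2.0]])
codes = lbg_codebook(np.array([[0.0, 0.0], [0.0, 0.2],
                               [10.0, 10.0], [10.0, 10.2]]), n_codes=2)
```

Each gesture frame's point-token would then be replaced by the index of its nearest codeword, yielding the discrete symbol sequence the HMM consumes.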

Recognition of Individual Holstein Cattle by Imaging Body Patterns

  • Kim, Hyeon T.;Choi, Hong L.;Lee, Dae W.;Yoon, Yong C.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • Vol. 18, No. 8
    • /
    • pp.1194-1198
    • /
    • 2005
  • A computer vision system was designed and validated to recognize individual Holstein cattle by processing images of their body patterns. The system involves image capture, image pre-processing, algorithm processing, and an artificial neural network recognition algorithm. Optimal management of individuals is one of the most important factors in keeping cattle healthy and productive. In this study, an image-processing system was used to recognize individual Holstein cattle by identifying body-pattern images captured with a charge-coupled device (CCD). A recognition system was developed and applied to acquire images of 49 cattle. The pixel values of the body images were transformed into input data comprising binary signals for the neural network. Images of the 49 cattle were analyzed to train the input-layer elements, and ten cattle were used to verify the output-layer elements of the neural network with an individual recognition program. The system proved reliable for the individual recognition of cattle in natural light.

Multi-Channel Vision System for On-Line Quantification of Appearance Quality Factors of Apple

  • Lee, Soo Hee;Noh, Sang Ha
    • Agricultural and Biosystems Engineering
    • /
    • Vol. 1, No. 2
    • /
    • pp.106-110
    • /
    • 2000
  • An integrated on-line inspection system was constructed with seven cameras, half mirrors to split images, 720 nm and 970 nm band-pass filters, an illumination chamber with several tungsten-halogen lamps, one main computer, one color frame grabber, two 4-channel multiplexors, a flat plate conveyor, etc. A total of seven images, that is, one color image from the top of an apple and two B/W images from each side (top, right and left), could be captured and displayed on a computer monitor through the multiplexor. Of the two B/W images captured from each side, one is the 720 nm filtered image and the other the 970 nm. With this system, on-line grading software was developed to evaluate appearance quality. On-line tests with Fuji apples manually fed on the conveyor showed that the grading accuracies for color, defect and shape were 95.3%, 86% and 88.6%, respectively. Grading time was 0.35 seconds per apple on average. Therefore, this on-line grading system could be used for inspection of the final products produced by an apple sorting system.


A Robotic System for Transferring Tobacco Seedlings

  • Lee, D.W.;W.F.McClure
    • 한국농업기계학회:학술대회논문집
    • /
    • 한국농업기계학회 1993년도 Proceedings of International Conference for Agricultural Machinery and Process Engineering
    • /
    • pp.850-858
    • /
    • 1993
  • Germination and early growth of tobacco seedlings in trays containing many cells is increasing in popularity. Since 100% germination is not likely, a major problem is to locate and replace the contents of those cells which contain either no seedling or a stunted seedling with a plug containing a viable seedling. Empty cells and seedlings of poor quality take up valuable space in a greenhouse and may also cause difficulty when transplanting seedlings into the field. Robotic technology, including the implementation of computer vision, appears to be an attractive alternative to manual labor for accomplishing this task. Operating AGBOT, short for Agricultural ROBOT, involved four steps: (1) capturing the image, (2) processing the image, (3) moving the manipulator, and (4) working the gripper. This research addressed transferring seedlings within a cell-grown environment, whose configuration dictated the design of a Cartesian robot suitable for working over a flat plane. Experiments on AGBOT's performance in transferring large seedlings produced trays in which more than 98% of the seedlings survived one week after transfer. In general, the system performed much better than expected.


MULTI-CHANNEL VISION SYSTEM FOR ON-LINE QUANTIFICATION OF APPEARANCE QUALITY FACTORS OF APPLE

  • Lee, S. H.;S. H. Noh
    • 한국농업기계학회:학술대회논문집
    • /
    • 한국농업기계학회 2000년도 THE THIRD INTERNATIONAL CONFERENCE ON AGRICULTURAL MACHINERY ENGINEERING. V.III
    • /
    • pp.551-559
    • /
    • 2000
  • An integrated on-line inspection system was constructed with seven cameras, half mirrors to split images, 720 nm and 970 nm band-pass filters, an illumination chamber with several tungsten-halogen lamps, one main computer, one color frame grabber, two 4-channel multiplexors, a flat plate conveyor, etc., so that a total of seven images, that is, one color image from the top of an apple and two B/W images from each side (top, right and left), could be captured and displayed on a computer monitor through the multiplexor. Of the two B/W images captured from each side, one is the 720 nm filtered image and the other the 970 nm. With this system, on-line grading software was developed to evaluate appearance quality. On-line tests with Fuji apples manually fed on the conveyor showed that the grading accuracies for color, defect and shape were 95.3%, 86% and 91%, respectively. Grading time was 0.35 seconds per apple on average. Therefore, this on-line grading system could be used for inspection of the final products produced by an apple sorting system.


Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 8, No. 1
    • /
    • pp.207-212
    • /
    • 2020
  • Human-centric lighting (HCL) control is a major focus of smart lighting system design, aiming to provide energy-efficient lighting that supports occupants' mood and circadian rhythm in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve human motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, indoor surveillance camera video streams are used to predict daylight, occupancy, and occupant-specific emotional features with advanced computer vision techniques, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) enabled lighting devices and controls the illumination of the lighting devices assigned to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-enabled open-source controller on the network, together with a networked video surveillance solution. The experimental results were verified with a custom-made automatic lighting control demo application integrated with OpenCV-based computer vision methods that predict the human-centric features; based on the estimated features, the lighting illumination level and colors are controlled automatically. The results obtained from the demo system were analyzed and used for the real-time development of a lighting system control strategy.
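A toy occupancy- and daylight-driven dimming rule can illustrate the control idea behind such a system. The 500 lux target illuminance and the linear top-up rule are assumptions for illustration, not the paper's controller:

```python
def hcl_illumination(daylight_lux, occupied, target_lux=500.0):
    """Top up measured daylight to a target illuminance when the zone
    is occupied, otherwise switch the luminaire off.
    Returns an LED dim level in [0, 1]."""
    if not occupied:
        return 0.0
    deficit = max(0.0, target_lux - daylight_lux)   # light the camera says is missing
    return min(1.0, deficit / target_lux)

# unoccupied zone -> off; dark occupied zone -> full output
levels = [hcl_illumination(100.0, False), hcl_illumination(0.0, True)]
```

In the paper's setup, `daylight_lux` and `occupied` would come from the computer-vision analysis of the surveillance stream rather than from dedicated sensors.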

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • Vol. 33, No. 5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how coordinates in the 3D real world are projected onto the 2D image plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed before estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which recovers the extrinsic parameters by matching 3D-2D coordinate pairs. This framework presents a substantial advancement, as it streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV technology application and data collection at construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
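The pinhole projection relation that the PnP step inverts can be sketched as follows. The intrinsic matrix, camera pose, and the 600 mm form-edge length are assumed example values, not figures from the paper:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D world points to pixel coordinates with the pinhole
    model p ~ K [R | t] P, i.e. the relation PnP solves in reverse."""
    P = np.asarray(points_3d, dtype=float)
    cam = (R @ P.T + t.reshape(3, 1)).T        # world frame -> camera frame
    uv = (K @ cam.T).T                         # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]              # perspective divide

# assumed example: camera looking down the world z-axis from 1 m away
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
corners = np.array([[0.0, 0.0, 0.0],
                    [0.6, 0.0, 0.0]])          # e.g. a 0.6 m concrete-form edge
pixels = project(corners, K, R, t)
```

Given such 3D points from a material's known specification and their detected 2D keypoints, a PnP solver (for example OpenCV's `cv2.solvePnP`) would recover `R` and `t`.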

가상 비전 시스템 이미지 생성 및 전송 효율에 관한 연구 (A Study on the Virtual Vision System Image Creation and Transmission Efficiency)

  • 김원
    • 한국융합학회논문지
    • /
    • Vol. 11, No. 9
    • /
    • pp.15-20
    • /
    • 2020
  • Software has become a key element of national innovation, growth, and value creation, making software-related education essential. As one method of engineering education, training through virtual simulation, which allows situations that are difficult to carry out in practice to be taught in a similar environment, is being conducted in various forms. Recently, the deployment of smart factories at production sites has been spreading, and product inspection using vision systems is being performed. However, many difficulties arise from a lack of operational expertise for vision systems, while building a system for vision system education is very costly. This paper proposes an educational virtual simulation model that can extract and transmit images by combining a computer with the camera function of a physics engine. Experimental results for the proposed model show that it can generate images at an average of 35.4 FPS, above 30 Hz, and can transmit and receive an image in 22.7 ms, so it can be utilized in an educational virtual simulation training environment.