• Title/Abstract/Keyword: computer vision systems

599 search results

스테레오비젼을 이용한 3차원 물체 측정 시스템 (3D measuring system by using the stereo vision)

  • 조진연;김기범
    • 한국정밀공학회:학술대회논문집 / Proceedings of the 1997 Autumn Conference / pp.224-228 / 1997
  • Computer vision systems are becoming more important as research on inspection systems, intelligent robots, and diagnostic medical systems is actively pursued. In this paper, a 3D measuring system is developed using stereo vision. The relation between the left and right images is obtained with the eight-point algorithm, and the fundamental matrix, the epipoles, and a 3D reconstruction algorithm are used to measure 3D dimensions. The measuring system was implemented in Visual Basic, so that 3D coordinates can be obtained with simple mouse clicks. This software can be applied to construction, home interior design, and rapid measurement systems. (A minimal sketch of this pipeline follows this entry.)

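Below is a minimal sketch of the pipeline this entry describes: estimating the fundamental matrix with the eight-point algorithm, recovering the epipole, and triangulating 3D points. It assumes OpenCV and uses synthetic correspondences; the toy stereo rig, point data, and all names are illustrative, not the authors' code.

```python
# Hedged sketch: eight-point fundamental matrix, epipole, triangulation.
# Synthetic data stands in for real stereo image correspondences.
import numpy as np
import cv2

rng = np.random.default_rng(0)
# Random 3D points in front of a toy stereo rig (baseline 0.1 along x).
X = np.column_stack([rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20),
                     rng.uniform(4, 8, 20)])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

def project(P, X):
    """Project 3D points with a 3x4 projection matrix; return 2D coords."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return (x[:, :2] / x[:, 2:]).astype(np.float32)

pts1, pts2 = project(P1, X), project(P2, X)

# Eight-point algorithm: F satisfies x2^T F x1 = 0 for all correspondences.
F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# The epipole in image 1 is the right null vector of F (F e = 0),
# given here in homogeneous coordinates.
e = np.linalg.svd(F)[2][-1]

# Triangulate back to 3D, as the tool does for mouse-clicked points.
X4 = cv2.triangulatePoints(P1.astype(np.float32), P2.astype(np.float32),
                           pts1.T, pts2.T)
X3 = (X4[:3] / X4[3]).T
print("epipole (homogeneous):", e)
print("max reconstruction error:", float(np.abs(X3 - X).max()))
```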

지능형 철도 시스템 모델 개발을 위한 컬러비전 기반의 소형 기차 위치 측정 (Estimation of Miniature Train Location by Color Vision for Development of an Intelligent Railway System)

  • 노광현;한민홍
    • 제어로봇시스템학회논문지 / Vol. 9 No. 1 / pp.44-49 / 2003
  • This paper describes a method of estimating miniature train locations by color vision for the development of an intelligent railway system model. In the real world, GPS (Global Positioning System) is indispensable for determining train locations so that trains can be controlled automatically; in this indoor experiment, a color vision system was used instead. Two different rectangular color bars were attached to the top of each train as a means of identification. Several trains were detected on the track and tracked simultaneously using color features, geometric features, and moment invariants. In the experiment, the identity, location, and direction of each train were estimated and transferred to the control computer over serial communication. A processing speed of up to 8 frames/sec was achieved, which was fast enough for real-time train control. (A sketch of the marker-detection step follows this entry.)
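As a rough illustration of the detection step (not the paper's code; the marker color, frame, and HSV thresholds below are assumed values), the following OpenCV sketch segments a colored bar and uses image moments to recover its position and orientation:

```python
# Hedged sketch: locate a colored marker bar and estimate position/heading
# from image moments. The frame is synthesized; thresholds are illustrative.
import cv2
import numpy as np

frame = np.zeros((240, 320, 3), np.uint8)
cv2.rectangle(frame, (100, 100), (160, 120), (200, 50, 50), -1)  # blue bar (BGR)

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))  # assumed blue range
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) < 200:            # reject small noise blobs
        continue
    m = cv2.moments(c)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]      # marker centroid
    # Orientation from central second moments gives the bar's direction.
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    print(f"train at ({cx:.1f}, {cy:.1f}), heading {np.degrees(theta):.1f} deg")
```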

LCD 구동 모듈 PCB의 자동 기능 검사를 위한 Emulated Vision Tester (Emulated Vision Tester for Automatic Functional Inspection of LCD Drive Module PCB)

  • 주영복;한찬호;박길흠;허경무
    • 전자공학회논문지SC / Vol. 46 No. 2 / pp.22-27 / 2009
  • This paper proposes and implements the EVT (Emulated Vision Tester), an automatic inspection system for functional testing of LCD drive module PCBs. Conventional automatic inspection relies on either electrical tests or vision-based tests: electrical tests alone cannot detect drive faults in LCD equipment where timing is a critical variable, while vision-based tests suffer from inconsistent image acquisition and ambiguous gray-scale discrimination, which degrade the reproducibility of the results. The EVT is a hardware-based automatic inspection method that checks patterns by directly comparing the input pattern signal applied by a pattern generator against the digital signal output through the drive module, and it can also quickly and accurately test analog signals (voltage, resistance, waveform) for anomalies. The proposed EVT offers many advantages, including high detection reliability, fast processing, and a compact system configuration, enabling cost reduction and full automation of the inspection process. (A toy software illustration of the pattern comparison follows this entry.)
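The comparison itself runs in hardware logic, but as a toy software illustration of the idea (the sample streams and the skew tolerance below are hypothetical), the pass/fail decision amounts to matching the captured digital output against the generator's pattern within a timing tolerance:

```python
# Toy illustration of the EVT pass/fail pattern comparison in Python;
# the real comparison is performed in hardware, not software.
import numpy as np

def patterns_match(expected: np.ndarray, captured: np.ndarray,
                   max_skew: int = 2) -> bool:
    """True if captured matches expected within +/- max_skew samples."""
    for shift in range(-max_skew, max_skew + 1):
        if np.array_equal(np.roll(captured, shift), expected):
            return True
    return False

expected = np.array([0, 1, 1, 0, 1, 0, 1, 1], dtype=np.uint8)  # from generator
captured = np.roll(expected, 1)            # module output with 1-sample skew
print("PASS" if patterns_match(expected, captured) else "FAIL")
```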

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security / Vol. 23 No. 11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on social and road safety and on the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially for autonomous vehicles, efficient fusion of data from these two sensor types is important for estimating the depth of objects as well as classifying them at short and long distances. This paper presents object classification using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LiDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with the red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification using the integrated vision and LiDAR data and is designed to guarantee both classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach. (A sketch of the fusion step follows this entry.)
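A minimal PyTorch sketch of the fusion described here, with an illustrative network that stands in for the authors' deep CNN (shapes, layer sizes, and class count are assumptions): the sparse LiDAR depth map is upsampled to pixel resolution, concatenated with the RGB channels, and passed to a small CNN classifier.

```python
# Hedged sketch: LiDAR depth upsampling + RGB-D fusion into a toy CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

rgb = torch.rand(1, 3, 224, 224)           # camera frame (N, C, H, W)
sparse_depth = torch.rand(1, 1, 56, 56)    # projected LiDAR depth, low res

# Upsample depth to pixel-level resolution (bilinear as one simple choice).
depth = F.interpolate(sparse_depth, size=(224, 224), mode="bilinear",
                      align_corners=False)
rgbd = torch.cat([rgb, depth], dim=1)      # 4-channel RGB-D tensor

classifier = nn.Sequential(                # stand-in for the deep CNN
    nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),                      # e.g., 5 assumed object classes
)
print(classifier(rgbd).shape)              # torch.Size([1, 5])
```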

A Machine Vision System for Inspecting Tape-Feeder Operation

  • Cho Tai-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6 No. 2 / pp.95-99 / 2006
  • A tape feeder of an SMD (Surface Mount Device) mounter is a device that sequentially feeds electronic components on a tape reel to the pick-up system of the mounter. As components get smaller, the feeding accuracy of a feeder becomes one of the most important factors for successful component pick-up; it is therefore critical to keep feeding accuracy at a specified level in the assembly and production of tape feeders. This paper describes a tape feeder inspection system developed to automatically measure and inspect feeding accuracy using machine vision. It consists of a feeder base, an image acquisition system, and a personal computer; the image acquisition system is composed of CCD cameras with lenses, LED illumination systems, and a frame grabber inside the PC. The system loads up to six feeders at a time and inspects them automatically and sequentially. The inspection software was implemented in Visual C++ on Windows with an easy-to-use GUI. Using this system, the quality of all feeders in the production process can be measured and inspected automatically by analyzing the measurement results statistically. (A sketch of one such displacement measurement follows this entry.)
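One plausible way to realize the accuracy measurement in software (a sketch under assumed values; the paper does not give its tolerances, pitch, or algorithm, and the real system uses calibrated CCD cameras) is to locate a fiducial before and after a feed by template matching and compare the displacement with the nominal pitch:

```python
# Hedged sketch: measure feed displacement by template matching. The pitch
# (40 px), scale (0.1 mm/px), and 0.05 mm tolerance are assumed values.
import cv2
import numpy as np

def locate(img, template):
    """Best-match top-left corner via normalized cross-correlation."""
    res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    return cv2.minMaxLoc(res)[3]

# Synthetic before/after frames: a fiducial dot shifted by one 40 px pitch.
before = np.zeros((100, 300), np.uint8)
after = np.zeros((100, 300), np.uint8)
cv2.circle(before, (50, 50), 5, 255, -1)
cv2.circle(after, (90, 50), 5, 255, -1)
template = before[40:61, 40:61]            # patch around the fiducial

(x0, _), (x1, _) = locate(before, template), locate(after, template)
error_mm = ((x1 - x0) - 40) * 0.1          # deviation from nominal pitch
print(f"feed error {error_mm:+.2f} mm:",
      "PASS" if abs(error_mm) <= 0.05 else "FAIL")
```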

A Simple Proposal for Ain Makkah Almukkarmah: An Application Using Augmented Reality Technology

  • Taghreed Alotaibi;Laila Alkabkabi;Rana Alzahrani;Eman Almalki;Ghosson Banjar;Kholod Alshareef;Olfat M. Mirza
    • International Journal of Computer Science & Network Security / Vol. 23 No. 12 / pp.115-122 / 2023
  • Makkah Al-Mukarramah is the capital of the Islamic world. It receives special attention from the Saudi government, whose rulers aim to transform it into a smart city for the benefit of millions of pilgrims. One of the Vision 2030 objectives is to transform specific cities, Makkah among them, into smart cities with advanced technological facilities. The history of Makkah is not well known to some Muslims. As a result, we built our application "Ain Makkah" to enable visitors to learn the history of Makkah through technology. In particular, "Ain Makkah" uses Augmented Reality to present the history of Al-Kaaba: a 3D model is overlaid on Al-Kaaba to show how it appeared in past years. Future work will expand the number of historical landmarks of Makkah covered by the application.

Thermal imaging and computer vision technologies for the enhancement of pig husbandry: a review

  • Md Nasim Reza;Md Razob Ali;Samsuzzaman;Md Shaha Nur Kabir;Md Rejaul Karim;Shahriar Ahmed;Hyunjin Kyoung;Gookhwan Kim;Sun-Ok Chung
    • Journal of Animal Science and Technology / Vol. 66 No. 1 / pp.31-56 / 2024
  • Pig farming, a vital industry, necessitates proactive measures for early disease detection and crush-symptom monitoring to ensure optimum pig health and safety. This review explores advanced thermal sensing technologies and computer vision-based thermal imaging techniques employed for monitoring pig disease and piglet crush symptoms on pig farms. Infrared thermography (IRT) is a non-invasive and efficient technology for measuring pig body temperature, offering non-destructive, long-distance, and high-sensitivity measurements. Unlike traditional methods, IRT provides a quick and labor-saving approach to acquiring physiological data that are affected by environmental temperature and are crucial for understanding pig body physiology and metabolism. IRT aids in early disease detection, respiratory health monitoring, and evaluating vaccination effectiveness. Challenges include variations in body surface emissivity, which affect measurement accuracy. Thermal imaging and deep learning algorithms are used for pig behavior recognition, with the dorsal plane being effective for stress detection. Remote health monitoring through thermal imaging, deep learning, and wearable devices facilitates non-invasive assessment of pig health, minimizing medication use. The integration of advanced sensors, thermal imaging, and deep learning shows potential for disease detection and improvement in pig farming, but challenges and ethical considerations must be addressed for successful implementation. This review summarizes the state-of-the-art technologies used in the pig farming industry, including computer vision algorithms such as object detection, image segmentation, and deep learning techniques, and discusses the benefits and limitations of IRT technology, providing an overview of the current research field. The study offers valuable insights for researchers and farmers regarding IRT application in pig production, highlighting notable approaches and the latest research findings in this field.

자동 표면 결함검사 시스템에서 Retro 광학계를 이용한 3D 깊이정보 측정방법 (Linear System Depth Detection using Retro Reflector for Automatic Vision Inspection System)

  • 주영복
    • 반도체디스플레이기술학회지 / Vol. 21 No. 4 / pp.77-80 / 2022
  • Automatic Vision Inspection (AVI) systems automatically detect defect features and measure their sizes via camera vision. They have become popular because of their accuracy and consistency in the quality control (QC) of inspection processes. It is also important to predict the performance of an AVI system in advance to meet customer specifications. AVI systems usually suffer from false negatives and false positives, which can be mitigated by providing extra information such as 3D depth. Stereo vision processing is a popular way to extract 3D depth from 2D images, but stereo methods usually take a long time to process. In this paper, a retro optical system using retro-reflectors is proposed and tested to overcome this problem. The optical system extracts depth without special software processing. The vision sensor and optical components, such as the illumination and depth-detecting modules, are integrated as a single unit. The depth information can be extracted in real time and utilized to improve the performance of an AVI system.

사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환 (Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision)

  • 홍송표;최한승;김의명
    • 한국측량학회지 / Vol. 37 No. 4 / pp.267-277 / 2019
  • Photogrammetry and computer vision both determine 3D coordinates from images taken by a camera, but direct interoperability between the two fields is difficult because of differences in their camera lens distortion models and camera coordinate systems. In general, drone imagery is processed by performing bundle block adjustment with computer-vision-based software and then plotting maps with photogrammetry-based software; at that point, the camera lens distortion model must be converted into the formulation used in photogrammetry. This study therefore describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion equations, lens distortion was first applied to distortion-free virtual coordinates using a computer-vision-based lens distortion model. Distortion coefficients were then estimated from the distorted photo coordinates using a photogrammetry-based lens distortion model, the lens distortion was removed from the photo coordinates, and the result was compared with the original distortion-free virtual coordinates. The root-mean-square distance was within 0.5 pixels, a good result. In addition, epipolar images were generated to determine whether precise plotting is possible with the photogrammetric lens distortion coefficients; the root-mean-square error of the y-parallax in the generated epipolar images was within 0.3 pixels, also a good result. (A sketch of this verification procedure follows this entry.)
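A sketch of the verification loop described above, simplified to two radial terms (the coefficient values, point set, and radial-only model are assumptions, not the paper's numbers): distort ideal coordinates with a computer-vision-style model, fit photogrammetry-style coefficients by least squares, then undistort and check the residual.

```python
# Hedged sketch: CV-style distortion applied to ideal points, then
# photogrammetry-style coefficients fitted and the residual checked.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (200, 2))          # distortion-free normalized coords

def distort_cv(xy, k1, k2):
    """CV convention: x_d = x_u * (1 + k1 r_u^2 + k2 r_u^4)."""
    r2 = (xy ** 2).sum(axis=1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2 ** 2)

xy_d = distort_cv(xy, k1=-0.12, k2=0.03)   # illustrative coefficients

# Photogrammetric convention corrects the *distorted* radius:
# x_u = x_d * (1 + K1 r_d^2 + K2 r_d^4). Fit K1, K2 by linear least squares.
r2 = (xy_d ** 2).sum(axis=1)
A = np.vstack([np.column_stack([xy_d[:, 0] * r2, xy_d[:, 0] * r2 ** 2]),
               np.column_stack([xy_d[:, 1] * r2, xy_d[:, 1] * r2 ** 2])])
b = np.concatenate([xy[:, 0] - xy_d[:, 0], xy[:, 1] - xy_d[:, 1]])
K1, K2 = np.linalg.lstsq(A, b, rcond=None)[0]

xy_u = xy_d * (1 + K1 * r2 + K2 * r2 ** 2)[:, None]   # remove distortion
rmse = np.sqrt(((xy_u - xy) ** 2).sum(axis=1).mean())
print(f"K1={K1:+.4f} K2={K2:+.4f}  RMS residual={rmse:.2e}")
```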

임베디드 엣지 플랫폼에서의 경량 비전 트랜스포머 성능 평가 (Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms)

  • 이민하;이성재;김태현
    • 대한임베디드공학회논문지 / Vol. 18 No. 3 / pp.89-100 / 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operation, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computation and more parameters than CNN models and thus are not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed model compression methods and lightweight architectures for vision transformers; however, only a few studies have evaluated the effects of such compression techniques on performance. To address this gap, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behavior of three vision transformers: DeiT, LeViT, and MobileViT. Each model's performance was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effects of quantization on latency improvement and accuracy degradation by profiling the proportion of response time occupied by major operations, and we also evaluated each model on GPU- and EdgeTPU-based edge devices. In our experiments, LeViT showed the best performance on CPU-based edge devices, DeiT-small showed the highest performance improvement on GPU-based edge devices, and only the MobileViT models showed improvement on EdgeTPU. Summarizing the profiling results, the degree of performance improvement of each vision transformer model depended strongly on the proportion of operations that could be optimized on the target edge device. In short, applying vision transformers to on-device AI solutions requires both proper operation composition and optimizations specific to the target edge devices. (A sketch of the quantization measurement follows this entry.)
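As one way to reproduce the flavor of this measurement (a sketch assuming the timm library and its "deit_small_patch16_224" model name; the paper's ImageNet accuracy protocol and EdgeTPU/GPU setups are not reproduced), post-training dynamic quantization can be applied to the transformer's linear layers and CPU latency compared before and after:

```python
# Hedged sketch: dynamic INT8 quantization of a vision transformer's linear
# layers and a crude CPU latency comparison.
import time
import torch
import timm   # assumed available; provides DeiT model definitions

model = timm.create_model("deit_small_patch16_224", pretrained=False).eval()
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

x = torch.rand(1, 3, 224, 224)

def latency_ms(m, runs=20):
    """Average single-image CPU inference time in milliseconds."""
    with torch.no_grad():
        m(x)                               # warm-up pass
        t0 = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - t0) / runs * 1e3

print(f"fp32: {latency_ms(model):.1f} ms, int8: {latency_ms(quantized):.1f} ms")
```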