Title/Summary/Keyword: computer vision systems


3D measuring system by using the stereo vision (스테레오비젼을 이용한 3차원 물체 측정 시스템)

  • 조진연;김기범
    • Proceedings of the Korean Society of Precision Engineering Conference / 1997.10a / pp.224-228 / 1997
  • Computer vision systems are becoming more important as research on inspection systems, intelligent robots, and diagnostic medical systems is actively pursued. In this paper, a 3D measuring system is developed using stereo vision. The relation between the left and right images is obtained with the eight-point algorithm, and the fundamental matrix, epipoles, and a 3D reconstruction algorithm are used to measure 3D dimensions. The measuring software was developed in Visual Basic, so that 3D coordinates can be obtained with simple mouse clicks. It could be applied to construction, home interior design, and rapid measurement.
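
The pipeline named in this abstract (eight-point algorithm, fundamental matrix, epipolar geometry, 3D reconstruction) can be illustrated compactly with modern tooling. Below is a minimal sketch using OpenCV and NumPy rather than the paper's Visual Basic software; the intrinsic matrix `K` and the matched point arrays are assumed inputs, not values from the paper.

```python
# Minimal sketch of the eight-point reconstruction pipeline (OpenCV/NumPy);
# not the paper's Visual Basic implementation. K and the point arrays are
# hypothetical inputs.
import cv2
import numpy as np

def reconstruct_3d(pts_left, pts_right, K):
    """Triangulate 3D points (up to scale) from >= 8 stereo correspondences."""
    # Eight-point algorithm: fundamental matrix relating left and right images
    F, _ = cv2.findFundamentalMat(pts_left, pts_right, cv2.FM_8POINT)
    E = K.T @ F @ K                                      # essential matrix
    _, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])    # left camera at origin
    P2 = K @ np.hstack([R, t])                           # right camera pose
    X = cv2.triangulatePoints(P1, P2, pts_left.T, pts_right.T)
    return (X[:3] / X[3]).T                              # homogeneous -> 3D
```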


Estimation of Miniature Train Location by Color Vision for Development of an Intelligent Railway System (지능형 철도 시스템 모델 개발을 위한 컬러비전 기반의 소형 기차 위치 측정)

  • 노광현;한민홍
    • Journal of Institute of Control, Robotics and Systems / v.9 no.1 / pp.44-49 / 2003
  • This paper describes a method of estimating miniature train locations by color vision for the development of an intelligent railway system model. In the real world, GPS (Global Positioning System) is indispensable for determining the locations of trains so that they can be controlled automatically. Here, a color vision system was used instead to estimate train locations in an indoor experiment. Two different rectangular color bars were attached to the top of each train as a means of identification. Several trains were detected and located on the track by color features, geometric features, and moment invariants, and were tracked simultaneously. In the experiment, the identity, location, and direction of each train were estimated and transferred to the control computer over a serial link. A processing speed of up to 8 frames/s was achieved, which was fast enough for real-time train control.
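
As a rough illustration of the color-bar detection step described above, the sketch below segments a color range in HSV and characterizes each blob by its centroid and Hu moment invariants. The thresholds and area cutoff are hypothetical; the paper does not publish its exact features or parameters.

```python
# Hypothetical sketch of color-bar detection via HSV thresholding and
# Hu moment invariants; thresholds and the area cutoff are illustrative.
import cv2
import numpy as np

def find_color_bars(frame_bgr, hsv_lo, hsv_hi, min_area=100):
    """Return (centroid, Hu moments) for each blob inside the HSV range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    bars = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] < min_area:                       # skip small noise blobs
            continue
        centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
        bars.append((centroid, cv2.HuMoments(m).flatten()))
    return bars
```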

Emulated Vision Tester for Automatic Functional Inspection of LCD Drive Module PCB (LCD 구동 모듈 PCB의 자동 기능 검사를 위한 Emulated Vision Tester)

  • Joo, Young-Bok;Han, Chan-Ho;Park, Kil-Houm;Huh, Kyung-Moo
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.2 / pp.22-27 / 2009
  • In this paper, an automatic functional inspection system, the EVT (Emulated Vision Tester), for LCD drive module PCBs is proposed and implemented. Typical automatic inspection approaches such as probing and vision-based systems are widely known and used; however, some defects remain undetectable because of critical timing factors that these methods can miss in LCD equipment. In particular, typical vision-based systems acquire images inconsistently, so distinguishing between gray scales can be difficult, which lowers the performance and reliability of the inspection results. The proposed EVT is a pure hardware solution: it directly compares pattern signals from a pattern generator with the output signals of the LCD drive module. It also inspects a variety of analog signals such as voltages, resistances, and waveforms. The EVT not only shows high performance in terms of reliability and processing speed but also reduces inspection and maintenance costs. Furthermore, full automation of an entire production line can be realized when the EVT is applied to in-line inspection processes.
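
The EVT itself is hardware, but the pass/fail logic it embodies, comparing a reference pattern signal against the module's measured output, can be sketched in software. This is illustrative only; the tolerance and signal representation are assumptions, not the paper's design.

```python
# Illustrative only: a software analogue of the EVT's hardware comparison of
# reference pattern signals against measured module outputs.
import numpy as np

def signals_match(reference, measured, tolerance):
    """Pass if every measured sample lies within tolerance of the reference."""
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return bool(np.all(np.abs(reference - measured) <= tolerance))
```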

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on social and road safety and on the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in the case of autonomous vehicles, efficient fusion of data from these two sensor types is important for recovering the depth of objects as well as classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth channel is combined with the red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is adopted to ensure both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
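
The early-fusion scheme described above, upsampled LiDAR depth stacked with RGB as a fourth input channel, can be sketched as follows in PyTorch. The paper does not publish its architecture, so the layer sizes and class count here are illustrative assumptions.

```python
# Hedged sketch of RGB + depth early fusion in PyTorch; layer sizes and
# num_classes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class RGBDFusionCNN(nn.Module):
    """Classify objects from RGB (B,3,H,W) plus upsampled LiDAR depth (B,1,H,W)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)   # early fusion: depth as a 4th channel
        return self.classifier(self.features(x).flatten(1))
```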

A Machine Vision System for Inspecting Tape-Feeder Operation

  • Cho Tai-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.2 / pp.95-99 / 2006
  • A tape feeder of an SMD (Surface Mount Device) mounter is a device that sequentially feeds electronic components on a tape reel to the pick-up system of the mounter. As components become much smaller, the feeding accuracy of a feeder becomes one of the most important factors for successful component pick-up. Therefore, it is critical to keep the feeding accuracy at a specified level in the assembly and production of tape feeders. This paper describes a tape feeder inspection system that was developed to automatically measure and inspect feeding accuracy using machine vision. It consists of a feeder base, an image acquisition system, and a personal computer. The image acquisition system is composed of CCD cameras with lenses, LED illumination systems, and a frame grabber inside the PC. The system loads up to six feeders at a time and inspects them automatically and sequentially. The inspection software was implemented in Visual C++ on Windows with an easy-to-use GUI. Using this system, the quality of all feeders in the production process can be measured and inspected automatically by analyzing the measurement results statistically.
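
One feeding-accuracy measurement of the kind described above can be sketched as: locate a tape fiducial before and after a feed command, then compare the measured displacement with the nominal feed pitch. Template matching below is a stand-in for whatever locator the actual system uses; all parameters are hypothetical.

```python
# Hypothetical sketch of one feeding-accuracy measurement: locate a tape
# fiducial (e.g., a sprocket hole) in the before/after images and compare
# the displacement against the nominal feed pitch.
import cv2
import numpy as np

def feed_error_mm(img_before, img_after, template, mm_per_pixel, nominal_pitch_mm):
    """Return measured displacement minus nominal feed pitch, in millimetres."""
    def locate(img):
        scores = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)        # (x, y) of best match
        return np.array(best, dtype=float)
    displacement_px = np.linalg.norm(locate(img_after) - locate(img_before))
    return displacement_px * mm_per_pixel - nominal_pitch_mm
```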

A Simple Proposal For Ain Makkah Almukkarmah: An Application Using Augmented Reality Technology

  • Taghreed Alotaibi;Laila Alkabkabi;Rana Alzahrani;Eman Almalki;Ghosson Banjar;Kholod Alshareef;Olfat M. Mirza
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.115-122 / 2023
  • Makkah Al-Mukarramah is the capital of the Islamic world. It receives special attention from the Saudi government's rulers, who aim to transform it into a smart city for the benefit of millions of pilgrims. One of the objectives of the 2030 vision is to transform specific cities, Makkah among them, into smart cities with advanced technological facilities. The history of Makkah is not well known to some Muslims. As a result, we built our application "Ain Makkah" to enable visitors to Makkah to learn its history through technology. In particular, "Ain Makkah" uses Augmented Reality to present the history of Al-Kaaba: a 3D model is overlaid on Al-Kaaba to show how it appeared in past years. Future work will expand the number of historical landmarks of Makkah covered by the application.

Thermal imaging and computer vision technologies for the enhancement of pig husbandry: a review

  • Md Nasim Reza;Md Razob Ali;Samsuzzaman;Md Shaha Nur Kabir;Md Rejaul Karim;Shahriar Ahmed;Hyunjin Kyoung;Gookhwan Kim;Sun-Ok Chung
    • Journal of Animal Science and Technology / v.66 no.1 / pp.31-56 / 2024
  • Pig farming, a vital industry, necessitates proactive measures for early disease detection and crush symptom monitoring to ensure optimum pig health and safety. This review explores advanced thermal sensing technologies and computer vision-based thermal imaging techniques employed for pig disease and piglet crush symptom monitoring on pig farms. Infrared thermography (IRT) is a non-invasive and efficient technology for measuring pig body temperature, providing advantages such as non-destructive, long-distance, and high-sensitivity measurements. Unlike traditional methods, IRT offers a quick and labor-saving approach to acquiring physiological data impacted by environmental temperature, crucial for understanding pig body physiology and metabolism. IRT aids in early disease detection, respiratory health monitoring, and evaluating vaccination effectiveness. Challenges include body surface emissivity variations affecting measurement accuracy. Thermal imaging and deep learning algorithms are used for pig behavior recognition, with the dorsal plane effective for stress detection. Remote health monitoring through thermal imaging, deep learning, and wearable devices facilitates non-invasive assessment of pig health, minimizing medication use. Integration of advanced sensors, thermal imaging, and deep learning shows potential for disease detection and improvement in pig farming, but challenges and ethical considerations must be addressed for successful implementation. This review summarizes the state-of-the-art technologies used in the pig farming industry, including computer vision algorithms such as object detection, image segmentation, and deep learning techniques. It also discusses the benefits and limitations of IRT technology, providing an overview of the current research field. This study provides valuable insights for researchers and farmers regarding IRT application in pig production, highlighting notable approaches and the latest research findings in this field.

Linear System Depth Detection using Retro Reflector for Automatic Vision Inspection System (자동 표면 결함검사 시스템에서 Retro 광학계를 이용한 3D 깊이정보 측정방법)

  • Joo, Young Bok
    • Journal of the Semiconductor & Display Technology / v.21 no.4 / pp.77-80 / 2022
  • Automatic Vision Inspection (AVI) systems automatically detect defect features and measure their sizes via camera vision. They have become popular because of the accuracy and consistency they bring to the quality control (QC) of inspection processes; it is also important to be able to predict the performance of an AVI system in advance so that it meets a customer's specification. AVI systems usually suffer from false negatives and false positives, which can be mitigated by providing extra information such as 3D depth. Stereo vision processing is a popular way to extract 3D depth from 2D images, but stereo methods usually take a long time to process. In this paper, a retro optical system using reflectors is proposed and tested to overcome this problem. The optical system extracts depth without special software processing: the vision sensor and optical components, such as the illumination and depth-detecting modules, are integrated as a single unit. The depth information can therefore be extracted in real time and used to improve the performance of an AVI system.

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
  • Photogrammetry and computer vision are identical in determining the three-dimensional coordinates of images taken with a camera, but the two fields are not directly compatible with each other due to differences in camera lens distortion modeling methods and camera coordinate systems. In general, data processing of drone images is performed by bundle block adjustment using computer vision-based software, and the plotting of the images is then performed by photogrammetry-based software for mapping. In this case, we are faced with the problem of converting the model of camera lens distortions into the formulation used in photogrammetry. Therefore, this study describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion formula for the camera lens distortion models, lens distortions were first added to distortion-free virtual coordinates using the computer vision-based lens distortion models. The distortion coefficients were then determined using photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the results were compared with the original distortion-free virtual coordinates. The root-mean-square distance was within 0.5 pixels. In addition, epipolar images were generated to assess the accuracy of the photogrammetric lens distortion coefficients; the calculated root-mean-square error of the y-parallax was within 0.3 pixels.
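
The verification loop in this abstract, distort ideal points with the computer-vision model, fit coefficients for the other model, and check the residual, can be sketched numerically. The coefficient values below are invented for illustration, and only the radial terms shared by both models are fitted; the paper's full conversion also handles coordinate-system differences and tangential terms.

```python
# Numerical sketch of the verification idea: distort ideal points with a
# computer-vision (Brown) radial model, then recover radial coefficients by
# least squares. Coefficients are invented; tangential terms are omitted.
import numpy as np

def radial_distort(xy, k1, k2):
    """Apply radial distortion x' = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

def fit_radial(xy_ideal, xy_distorted):
    """Least-squares fit of (k1, k2) from dx = x*(k1*r^2 + k2*r^4)."""
    r2 = np.tile(np.sum(xy_ideal**2, axis=1), 2)     # r^2 for x rows, then y rows
    coord = xy_ideal.T.ravel()                       # x1..xN followed by y1..yN
    A = np.column_stack([coord * r2, coord * r2**2])
    b = (xy_distorted - xy_ideal).T.ravel()
    (k1, k2), *_ = np.linalg.lstsq(A, b, rcond=None)
    return k1, k2

xy = np.random.uniform(-0.5, 0.5, size=(200, 2))     # ideal normalized points
distorted = radial_distort(xy, k1=-0.2, k2=0.05)
k1, k2 = fit_radial(xy, distorted)
rms = np.sqrt(np.mean((radial_distort(xy, k1, k2) - distorted) ** 2))
print(f"recovered k1={k1:.4f}, k2={k2:.4f}, RMS residual={rms:.2e}")
```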

Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms (임베디드 엣지 플랫폼에서의 경량 비전 트랜스포머 성능 평가)

  • Minha Lee;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.3 / pp.89-100 / 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operation, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computation and more parameters than CNN models, so they are not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed model compression methods and lightweight architectures for vision transformers, but only a few studies have evaluated how these compression techniques affect performance. To address this, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behavior of three vision transformers: DeiT, LeViT, and MobileViT. Each model's performance was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effects of quantization on latency improvement and accuracy degradation by profiling the proportion of response time occupied by major operations, and we also evaluated each model on GPU- and EdgeTPU-based edge devices. In our experiments, LeViT showed the best performance on CPU-based edge devices, DeiT-small showed the highest performance improvement on GPU-based edge devices, and only the MobileViT models showed improvement on EdgeTPU. Summarizing the profiling results, the degree of performance improvement of each vision transformer model depended strongly on the proportion of operations that could be optimized on the target edge device. In summary, to apply vision transformers to on-device AI solutions, both a suitable operation composition and optimizations specific to the target edge device must be considered.
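
As a small, hedged illustration of the kind of post-training quantization whose effects the paper measures, the sketch below applies PyTorch dynamic int8 quantization to the Linear layers of a torchvision ViT and times one CPU inference. The model is a stand-in; the paper's exact models (DeiT, LeViT, MobileViT) and quantization toolchain are not reproduced here.

```python
# Hedged sketch: dynamic int8 quantization of a ViT's Linear layers and a
# single-image CPU timing comparison. vit_b_16 is a stand-in, not the
# paper's exact DeiT/LeViT/MobileViT setup.
import time
import torch
from torchvision.models import vit_b_16

model = vit_b_16(weights=None).eval()                 # untrained stand-in ViT
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8       # int8-quantize Linear layers
)

x = torch.randn(1, 3, 224, 224)
for name, m in [("fp32", model), ("int8-dynamic", quantized)]:
    start = time.perf_counter()
    with torch.inference_mode():
        m(x)
    print(f"{name}: {time.perf_counter() - start:.3f} s per image")
```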