• Title/Summary/Keyword: Computer vision technology


Quality Assessment of Beef Using Computer Vision Technology

  • Rahman, Md. Faizur;Iqbal, Abdullah;Hashem, Md. Abul;Adedeji, Akinbode A.
    • Food Science of Animal Resources
    • /
    • v.40 no.6
    • /
    • pp.896-907
    • /
    • 2020
  • Imaging, or computer vision (CV), technology has received considerable attention worldwide as a rapid and non-destructive technique for measuring quality attributes of agricultural products, including meat and meat products. This study was conducted to test the ability of CV technology to predict the quality attributes of beef. Images were captured from the longissimus dorsi muscle of beef at 24 h post-mortem. Traits evaluated were color values (L*, a*, b*), pH, drip loss, cooking loss, dry matter, moisture, crude protein, fat, ash, thiobarbituric acid reactive substances (TBARS), peroxide value (POV), free fatty acids (FFA), total coliform count (TCC), total viable count (TVC) and total yeast-mould count (TYMC). Images were analyzed using MATLAB (R2015a). Reference values were determined by physicochemical, proximate, biochemical and microbiological tests. All determinations were done in triplicate and the mean values were reported. Data analysis was carried out using Statgraphics Centurion XVI. Calibration and validation models were fitted using Unscrambler X version 9.7. Higher correlations were found for a* (r=0.65) and moisture (r=0.56) with the a* value obtained from image analysis, and the highest calibration and prediction accuracy was found for lightness (r²c=0.73, r²p=0.69). The results of this work show that CV technology may be a useful tool for predicting meat quality traits in the laboratory and in the meat processing industry.
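
As a rough illustration of the image-analysis step, the sketch below extracts mean CIELAB (L*, a*, b*) colour features from a meat-surface image with OpenCV in Python; it is not the authors' MATLAB pipeline, and the file name and region of interest are placeholder assumptions.

```python
# Minimal sketch: mean L*, a*, b* features from a beef image (assumed inputs).
import cv2
import numpy as np

img = cv2.imread("beef_loin_24h.png")        # hypothetical image file
roi = img[100:400, 150:450]                  # assumed region over the muscle surface

# OpenCV stores 8-bit CIELAB with L* scaled to 0-255 and a*, b* offset by 128
lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB).astype(np.float32)
L = lab[:, :, 0] * 100.0 / 255.0             # rescale L* to the usual 0-100 range
a = lab[:, :, 1] - 128.0
b = lab[:, :, 2] - 128.0

# Image-based predictors that could be regressed against laboratory reference values
features = {"L*": float(L.mean()), "a*": float(a.mean()), "b*": float(b.mean())}
print(features)
```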

Real-Time Fire Detection Method Using YOLOv8 (YOLOv8을 이용한 실시간 화재 검출 방법)

  • Tae Hee Lee;Chun-Su Park
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.2
    • /
    • pp.77-80
    • /
    • 2023
  • Since fires in uncontrolled environments pose serious risks to society and individuals, many researchers have been investigating technologies for the early detection of fires that occur in everyday life. Recently, with the development of deep learning vision technology, research on fire detection models using neural network backbones such as Transformers and convolutional neural networks has been actively conducted. Vision-based fire detection systems can solve many of the problems of physical sensor-based fire detection systems. This paper proposes a fire detection method based on the recent YOLOv8 model that improves on existing fire detection methods. The proposed method builds a system that detects sparks and smoke in input images by training the YOLOv8 model on a general-purpose fire detection dataset. We also demonstrate the superiority of the proposed method through experiments comparing it with existing methods.
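
For readers unfamiliar with YOLOv8, the sketch below shows the usual fine-tune-and-detect workflow with the ultralytics Python package; the dataset YAML, weights and image path are assumptions, not the files used in the paper.

```python
# Minimal sketch: training and running a YOLOv8 detector for fire/smoke classes.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                   # pretrained nano model
model.train(data="fire_smoke.yaml", epochs=100, imgsz=640)   # hypothetical fire dataset

results = model("cctv_frame.jpg")                            # inference on one frame
for r in results:
    for box in r.boxes:                                      # detected flames/smoke
        print(r.names[int(box.cls)], float(box.conf), box.xyxy.tolist())
```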


Computer vision-based remote displacement monitoring system for in-situ bridge bearings robust to large displacement induced by temperature change

  • Kim, Byunghyun;Lee, Junhwa;Sim, Sung-Han;Cho, Soojin;Park, Byung Ho
    • Smart Structures and Systems
    • /
    • v.30 no.5
    • /
    • pp.521-535
    • /
    • 2022
  • Efficient management of deteriorating civil infrastructure is one of the most important research topics in many developed countries. In particular, remote displacement measurement of bridges using linear variable differential transformers, global positioning systems, laser Doppler vibrometers, and computer vision technologies has been attempted extensively. This paper proposes a remote displacement measurement system using closed-circuit televisions (CCTVs) and a computer-vision-based method for in-situ bridge bearings that exhibit relatively large displacements due to long-term temperature changes. The hardware of the system is composed of a reference target for displacement measurement, a CCTV to capture target images, a gateway to transmit images via a mobile network, and a central server to store and process the transmitted images. The use of CCTVs capable of night-vision capture and wireless data communication enables long-term, 24-hour monitoring over a wide area of the bridge. The computer vision algorithm that estimates displacement from the images involves image preprocessing to enhance the circular features of the target, a circular Hough transform to detect circles on the target over the whole field of view (FOV), and a homography transformation to convert the movement of the target in the images into an actual expansion displacement. The simple target design and robust circle detection algorithm make it possible to measure displacement from images in which the targets are far apart from each other. The proposed system is installed at the Tancheon Overpass in Seoul, and field experiments are performed to evaluate the accuracy of circle detection and displacement measurement. The circle detection accuracy is evaluated using 28,542 images captured from 71 CCTVs installed at the testbed; circle detection fails in only 48 images (0.168%) because of subpar imaging conditions. The accuracy of displacement measurement is evaluated using images captured over 17 days from three CCTVs; the average and root-mean-square errors are 0.10 and 0.131 mm, respectively, compared with a similar displacement measurement. Long-term operation of the system, evaluated using 8 months of data, shows the high accuracy and stability of the proposed system.
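
The circle-detection and homography steps described above can be sketched with OpenCV as below; the target layout, pixel coordinates and physical coordinates are made-up placeholders rather than values from the paper.

```python
# Minimal sketch: detect circular markers and map them to target-plane coordinates.
import cv2
import numpy as np

frame = cv2.imread("cctv_target.png", cv2.IMREAD_GRAYSCALE)   # hypothetical CCTV frame
frame = cv2.medianBlur(frame, 5)                              # simple preprocessing

# Circular Hough transform over the whole field of view
circles = cv2.HoughCircles(frame, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                           param1=100, param2=30, minRadius=5, maxRadius=40)
centers = circles[0, :, :2].astype(np.float32)                # (x, y) per detected circle

# Homography from image coordinates to target-plane coordinates (mm), estimated
# from four reference circles whose physical positions on the target are known
img_pts = centers[:4]
world_pts = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=np.float32)
H, _ = cv2.findHomography(img_pts, world_pts)

# Mapping a tracked circle centre to mm; displacement is its change between frames
pt = cv2.perspectiveTransform(centers[:1].reshape(-1, 1, 2), H)
print("target position [mm]:", pt.ravel())
```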

Current Situation of Renewable Energy Resources Marketing and its Challenges in Light of Saudi Vision 2030 Case Study: Northern Border Region

  • AL-Ghaswyneh, Odai Falah Mohammad
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.3
    • /
    • pp.89-94
    • /
    • 2022
  • Saudi Vision 2030 set the direction of the national economy and market toward diversifying sources of income and developing energy so as to become less dependent on oil. Through a theoretical review, the study sought to characterize the current state of the energy sector and the investment opportunities available in renewable energy. The findings show that investment in the renewable energy sector is promising, covering solar, wind, hydrogen, and geothermal energy, as well as burning waste rather than landfilling it to extract biogas with lower emissions. The renewable energy sector faces challenges related to technology, production cost, price, quantities of production and consumption, and markets. The study offers several recommendations and suggests an electronic marketing system to connect investors and consumers with the energy available from renewable sources.

The Power Line Deflection Monitoring System using Panoramic Video Stitching and Deep Learning (딥 러닝과 파노라마 영상 스티칭 기법을 이용한 송전선 늘어짐 모니터링 시스템)

  • Park, Eun-Soo;Kim, Seunghwan;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering
    • /
    • v.25 no.1
    • /
    • pp.13-24
    • /
    • 2020
  • There are about nine million utility poles and 1.3 million kilometers of power lines for electric power distribution in Korea. Maintenance of such a large number of electric power facilities requires a great deal of manpower and time. Recently, various fault diagnosis techniques using artificial intelligence have been studied. This paper therefore proposes a power line deflection detection system that applies artificial intelligence and computer vision technology to images taken by a vision system. The proposed system proceeds as follows: (i) transmission towers are detected with an object detection model; (ii) histogram equalization is applied to compensate for the degraded image quality of the video data; (iii) because the distance between two transmission towers is generally long, panoramic video stitching is performed to capture the entire power line; and (iv) deflection is detected with computer vision techniques after a power line detection algorithm is applied. This paper explains and experimentally evaluates each step.
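
Steps (ii) and (iii) of this pipeline can be sketched with OpenCV as follows; the frame files are placeholders, and the deep-learning tower and power line detection steps are omitted.

```python
# Minimal sketch: histogram equalization of each frame, then panoramic stitching.
import cv2

frames = [cv2.imread(f) for f in ("span_left.jpg", "span_mid.jpg", "span_right.jpg")]

# (ii) Equalize the luminance channel to compensate for degraded image quality
enhanced = []
for f in frames:
    ycrcb = cv2.cvtColor(f, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    enhanced.append(cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR))

# (iii) Stitch the enhanced frames into one panorama covering the whole span
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(enhanced)
if status == cv2.Stitcher_OK:
    cv2.imwrite("span_panorama.jpg", panorama)   # input to the deflection check
```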

An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • Park, Gyeong-Su;Im, Chang-Ju;Lee, Gyeong-Tae
    • Journal of the Ergonomics Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.77-90
    • /
    • 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to allow for the user's head movements in such an interface. We proposed an efficient camera calibration method to track the 3D position and orientation of the user's head accurately, and evaluated its performance. The experimental error analysis shows that the proposed method provides a more accurate and stable camera pose (i.e., position and orientation) than the conventional direct linear transformation (DLT) method commonly used for camera calibration. The results of this study can be applied to tracking head movements for eye-controlled human/computer interfaces and virtual reality technology.
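
As a modern stand-in for the calibration step discussed above (the paper proposes its own method rather than the OpenCV routine), the sketch below recovers camera intrinsics and per-view pose from chessboard images; the board size and file names are assumptions.

```python
# Minimal sketch: intrinsic calibration and pose recovery from chessboard images.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                     # inner corners of the board (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):                # hypothetical calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Recover the camera matrix, distortion, and per-view rotation/translation (pose)
err, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts,
                                                 gray.shape[::-1], None, None)
print("RMS reprojection error:", err)
```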


Lightweight image classifier for CIFAR-10

  • Sharma, Akshay Kumar;Rana, Amrita;Kim, Kyung Ki
    • Journal of Sensor Science and Technology
    • /
    • v.30 no.5
    • /
    • pp.286-289
    • /
    • 2021
  • Image classification is one of the fundamental applications of computer vision. It enables a system to identify an object in an image. Recently, image classification applications have broadened their scope from computer applications to edge devices. The convolutional neural network (CNN) is the main class of deep learning neural networks widely used in computer vision tasks, and it delivers high accuracy. However, CNN algorithms use a large number of parameters and incur high computational costs, which hinders their implementation on edge hardware devices. To address this issue, this paper proposes a lightweight image classifier that provides good accuracy while using fewer parameters. The proposed classifier diverts the input into three paths and utilizes different scales of receptive fields to extract more feature maps while using fewer parameters during training, resulting in a model of small size. The model is tested on the CIFAR-10 dataset and achieves an accuracy of 90% using 0.26M parameters. This is better than the state-of-the-art models, and it can be implemented on edge devices.
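
The three-path idea can be sketched in PyTorch as below; this is not the authors' architecture, and the layer widths and kernel sizes are arbitrary assumptions chosen only to show how parallel receptive fields are merged.

```python
# Minimal sketch: a block that splits the input into three convolution paths
# with different receptive fields and concatenates the resulting feature maps.
import torch
import torch.nn as nn

class ThreePathBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.p1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.p3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.p5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Concatenate the three paths along the channel dimension
        return self.relu(torch.cat([self.p1(x), self.p3(x), self.p5(x)], dim=1))

model = nn.Sequential(
    ThreePathBlock(3, 16),          # CIFAR-10 RGB input -> 48 channels
    nn.MaxPool2d(2),
    ThreePathBlock(48, 32),         # -> 96 channels
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(96, 10),              # 10 CIFAR-10 classes
)
print(sum(p.numel() for p in model.parameters()), "parameters")
```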

Computer Vision-Based Process Inspection for the Development of Automated Assembly Technology Ethernet Connectors (이더넷 커넥터 자동 조립 기술 개발을 위한 컴퓨터 비전 기반 공정 검사)

  • Yunjung Hong;Geon Lee;Jiyoung Woo;Yunyoung Nam
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.89-90
    • /
    • 2024
  • This study aims to use computer vision to detect wire harness defects quickly and accurately by computing six measurements of a crimped terminal: the length of the crimped terminal, the terminal tip dimension (width), and the widths of the crimped sections (wire barrel and core-wire barrel). The values are extracted by generating reference points from region-specific features of the terminal and the shading difference between the background and the object. The method achieves accuracies of 99.1%, 98.7%, 92.6%, 92.5%, 99.9% and 99.7% for the respective measurement types, an excellent result with an average accuracy of 97% across all measurements.
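
A width measurement of the kind described above can be sketched with simple thresholding in OpenCV; the threshold method, reference row and pixel-to-millimetre scale below are illustrative assumptions only, not the study's procedure.

```python
# Minimal sketch: measure a crimp width from the shading difference between
# the terminal and the background at an assumed reference row.
import cv2
import numpy as np

gray = cv2.imread("crimped_terminal.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image

# Separate the terminal from the background using Otsu thresholding
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

row = mask[120, :]                     # assumed reference row over the wire barrel
cols = np.flatnonzero(row)             # object pixels along that row
width_px = cols[-1] - cols[0] + 1
mm_per_px = 0.02                       # assumed scale from a calibration target
print("crimp width:", width_px * mm_per_px, "mm")
```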


Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing

  • Mohapatra, Arpita;Sarangi, Sunita;Patnaik, Srikanta;Sabut, Sukant
    • Journal of information and communication convergence engineering
    • /
    • v.12 no.4
    • /
    • pp.263-270
    • /
    • 2014
  • Corner detection and feature extraction are essential aspects of computer vision problems such as object recognition and tracking. Feature detectors such as the Scale-Invariant Feature Transform (SIFT) yield high-quality features but are computationally intensive for use in real-time applications. The Features from Accelerated Segment Test (FAST) detector provides faster feature computation by extracting only corner information when recognising an object. In this paper we analyze object detection algorithms with respect to efficiency, quality and robustness by comparing the characteristics of corner detectors and feature extractors. The simulation results show that, compared with the conventional SIFT algorithm, an object recognition system based on the FAST corner detector yields increased speed with little performance degradation. The average time to find keypoints with the SIFT method is about 0.116 seconds for extracting 2169 keypoints, while the average time to find corner points with the FAST method at threshold 30 is 0.651 seconds for detecting 1714 keypoints. Thus the FAST method detects corner points faster and with better-quality images for object recognition.
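
A comparison of this kind can be reproduced roughly with OpenCV as sketched below; the FAST threshold of 30 follows the abstract, while the test image and timing setup are assumptions.

```python
# Minimal sketch: timing SIFT keypoint extraction against the FAST corner detector.
import time
import cv2

gray = cv2.imread("test_scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical test image

sift = cv2.SIFT_create()
t0 = time.perf_counter()
kp_sift, desc = sift.detectAndCompute(gray, None)           # keypoints + descriptors
t_sift = time.perf_counter() - t0

fast = cv2.FastFeatureDetector_create(threshold=30)
t0 = time.perf_counter()
kp_fast = fast.detect(gray, None)                           # corner keypoints only
t_fast = time.perf_counter() - t0

print(f"SIFT: {len(kp_sift)} keypoints in {t_sift:.3f} s")
print(f"FAST: {len(kp_fast)} corners  in {t_fast:.3f} s")
```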

Motion Segmentation from Color Video Sequences based on AMF

  • Kim, Alla;Kim, Yoon-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.31-38
    • /
    • 2009
  • Identifying moving objects in video data is a typical task in many computer vision applications. In this paper, we propose a motion segmentation method that consists of background subtraction followed by foreground pixel segmentation. The approximated median filter (AMF) was chosen to perform background modelling. To demonstrate the effectiveness of the proposed approach, we tested it on gray-scale video data as well as in the RGB color space.
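
The approximated median filter update is simple enough to sketch directly: the background estimate is nudged one grey level toward each new frame, and foreground pixels are those that differ strongly from that estimate. The video source and foreground threshold below are assumptions.

```python
# Minimal sketch: approximated median filter (AMF) background subtraction.
import cv2
import numpy as np

cap = cv2.VideoCapture("street.avi")                 # hypothetical video file
ok, frame = cap.read()
bg = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.int16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.int16)

    # AMF update: move the background estimate one grey level toward the frame
    bg += (gray > bg).astype(np.int16)
    bg -= (gray < bg).astype(np.int16)

    # Foreground mask: pixels differing strongly from the running background
    fg = (np.abs(gray - bg) > 25).astype(np.uint8) * 255
    cv2.imshow("foreground", fg)
    if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
        break
```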
