• Title/Summary/Keyword: Computer vision technology

Search results: 666; processing time: 0.025 s

Quality Assessment of Beef Using Computer Vision Technology

  • Rahman, Md. Faizur;Iqbal, Abdullah;Hashem, Md. Abul;Adedeji, Akinbode A.
    • 한국축산식품학회지
    • /
    • Vol. 40, No. 6
    • /
    • pp.896-907
    • /
    • 2020
  • Imaging technique or computer vision (CV) technology has received huge attention worldwide as a rapid and non-destructive technique for measuring the quality attributes of agricultural products, including meat and meat products. This study was conducted to test the ability of CV technology to predict the quality attributes of beef. Images were captured from the longissimus dorsi muscle in beef at 24 h post-mortem. Traits evaluated were color values (L*, a*, b*), pH, drip loss, cooking loss, dry matter, moisture, crude protein, fat, ash, thiobarbituric acid reactive substances (TBARS), peroxide value (POV), free fatty acids (FFA), total coliform count (TCC), total viable count (TVC) and total yeast-mould count (TYMC). Images were analyzed using Matlab software (R2015a). Reference values were determined by physicochemical, proximate, biochemical and microbiological tests. All determinations were done in triplicate and the mean values were reported. Data analysis was carried out using Statgraphics Centurion XVI, and calibration and validation models were fitted using Unscrambler X version 9.7. The 'a*' value obtained from image analysis showed higher correlations with reference a* (r=0.65) and moisture (r=0.56), and the highest calibration and prediction accuracy was found for lightness (r2c=0.73, r2p=0.69). These results show that CV technology may be a useful tool for predicting meat quality traits in the laboratory and in the meat processing industry.
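
The calibration workflow the abstract describes — correlating image-derived color values with laboratory reference measurements and fitting a linear calibration model — can be sketched as follows (a minimal illustration with hypothetical numbers; the study itself used Matlab, Statgraphics and Unscrambler):

```python
import numpy as np

# Hypothetical data: image-derived a* values for 6 beef samples
# and the corresponding laboratory reference a* measurements.
image_a_star = np.array([18.2, 19.5, 17.8, 21.0, 20.3, 19.1])
lab_a_star   = np.array([17.9, 19.8, 18.1, 20.6, 20.9, 18.8])

# Pearson correlation coefficient between the two measurement methods
r = np.corrcoef(image_a_star, lab_a_star)[0, 1]

# Simple linear calibration model: lab value ~ slope * image value + intercept
slope, intercept = np.polyfit(image_a_star, lab_a_star, 1)
predicted = slope * image_a_star + intercept

# Coefficient of determination (r^2) of the calibration fit
ss_res = np.sum((lab_a_star - predicted) ** 2)
ss_tot = np.sum((lab_a_star - lab_a_star.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

For a simple linear fit, the coefficient of determination equals the squared correlation coefficient, which is why the abstract can report both r and r² values for the same trait.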

Real-Time Fire Detection Method Using YOLOv8 (YOLOv8을 이용한 실시간 화재 검출 방법)

  • 이태희;박천수
    • 반도체디스플레이기술학회지
    • /
    • Vol. 22, No. 2
    • /
    • pp.77-80
    • /
    • 2023
  • Since fires in uncontrolled environments pose serious risks to society and individuals, many researchers have been investigating technologies for the early detection of fires that occur in everyday life. Recently, with the development of deep learning vision technology, research on fire detection models using neural network backbones such as the Transformer and the Convolutional Neural Network has been actively conducted. Vision-based fire detection systems can solve many of the problems of physical-sensor-based fire detection systems. This paper proposes a fire detection method using the latest YOLOv8, which improves on existing fire detection methods. The proposed method develops a system that detects sparks and smoke in input images by training the YOLOv8 model on a universal fire detection dataset. We also demonstrate the superiority of the proposed method through experiments comparing it with existing methods.

Computer vision-based remote displacement monitoring system for in-situ bridge bearings robust to large displacement induced by temperature change

  • Kim, Byunghyun;Lee, Junhwa;Sim, Sung-Han;Cho, Soojin;Park, Byung Ho
    • Smart Structures and Systems
    • /
    • Vol. 30, No. 5
    • /
    • pp.521-535
    • /
    • 2022
  • Efficient management of deteriorating civil infrastructure is one of the most important research topics in many developed countries. In particular, remote displacement measurement of bridges using linear variable differential transformers, global positioning systems, laser Doppler vibrometers, and computer vision technologies has been attempted extensively. This paper proposes a remote displacement measurement system using closed-circuit televisions (CCTVs) and a computer-vision-based method for in-situ bridge bearings that undergo relatively large displacements due to long-term temperature changes. The hardware of the system is composed of a reference target for displacement measurement, a CCTV to capture target images, a gateway to transmit images via a mobile network, and a central server to store and process the transmitted images. The use of CCTVs capable of night-vision capture and wireless data communication enables long-term, 24-hour monitoring over a wide area of the bridge. The computer vision algorithm that estimates displacement from the images involves image preprocessing to enhance the circular features of the target, circular Hough transformation to detect circles on the target in the whole field-of-view (FOV), and homography transformation to convert the movement of the target in the images into an actual expansion displacement. The simple target design and robust circle detection algorithm make it possible to measure displacement from images in which the targets are far apart from each other. The proposed system is installed at the Tancheon Overpass in Seoul, and field experiments are performed to evaluate the accuracy of circle detection and displacement measurement. The circle detection accuracy is evaluated using 28,542 images captured from 71 CCTVs installed at the testbed; only 48 images (0.168%) fail to detect the circles on the target, owing to subpar imaging conditions. The accuracy of displacement measurement is evaluated using images captured for 17 days from three CCTVs; the average and root-mean-square errors are 0.10 and 0.131 mm, respectively, compared with a reference displacement measurement. Long-term operation, evaluated using 8 months of data, shows the high accuracy and stability of the proposed system.
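
The circle-detection step can be illustrated with a simplified fixed-radius circular Hough transform (a NumPy sketch, not the authors' implementation; it assumes the target's circle radius in pixels is known):

```python
import numpy as np

def hough_circle_center(edge_mask, radius):
    """Vote for circle centers at a known radius (simplified Hough transform).

    edge_mask : 2D boolean array marking edge pixels
    radius    : circle radius in pixels (assumed known from the target design)
    """
    h, w = edge_mask.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    ys, xs = np.nonzero(edge_mask)
    for y, x in zip(ys, xs):
        # Each edge pixel votes for every center lying `radius` away from it.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)  # (row, col) of best center

# Synthetic target: a ring of edge pixels around (40, 50) with radius 12
mask = np.zeros((80, 100), dtype=bool)
ang = np.linspace(0, 2 * np.pi, 360)
mask[np.round(40 + 12 * np.sin(ang)).astype(int),
     np.round(50 + 12 * np.cos(ang)).astype(int)] = True
center = hough_circle_center(mask, radius=12)
```

In the actual system, a homography would then convert the detected centers from pixel coordinates into millimetres of bearing displacement.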

Current Situation of Renewable Energy Resources Marketing and its Challenges in Light of Saudi Vision 2030 Case Study: Northern Border Region

  • AL-Ghaswyneh, Odai Falah Mohammad
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 3
    • /
    • pp.89-94
    • /
    • 2022
  • The Saudi Vision 2030 defined the directions of the national economy and market towards diversifying sources of income and developing energy so as to become less dependent on oil. Through a theoretical review, the study sought to identify the state of the energy sector and the investment opportunities available in the field of renewable energy. Findings showed that investment in the renewable energy sector is promising across solar, wind, hydrogen and geothermal energy, as well as burning waste rather than landfilling it to extract biogas with lower emissions. The renewable energy sector faces challenges related to technology, production cost, price, production and consumption quantities, and markets. The study offers several recommendations and suggests an electronic marketing system to provide investors and consumers with energy available from renewable sources.

The Power Line Deflection Monitoring System using Panoramic Video Stitching and Deep Learning (딥 러닝과 파노라마 영상 스티칭 기법을 이용한 송전선 늘어짐 모니터링 시스템)

  • 박은수;김승환;이상순;류은석
    • 방송공학회논문지
    • /
    • Vol. 25, No. 1
    • /
    • pp.13-24
    • /
    • 2020
  • In Korea, there are about 9 million utility poles and 1.3 million kilometers of transmission lines for electric power distribution. Maintaining this much power equipment requires substantial manpower and time. As various fault-diagnosis techniques using artificial intelligence have recently been studied, this paper proposes a power line deflection detection system that applies artificial intelligence to video captured by a camera system, rather than relying on conventional on-site inspection, to detect line deflection caused by various factors. The proposed system proceeds in four stages: (i) transmission tower detection using an object detection system, (ii) histogram equalization to mitigate quality degradation in the captured video, (iii) panoramic video stitching to cover the entire power line, and (iv) deflection assessment using panoramic stitching after applying a power line detection algorithm. This paper describes each stage and presents experimental results.
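
Stage (ii), histogram equalization, can be sketched in NumPy (a minimal illustration of the standard technique, not the authors' exact code):

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale image.

    Spreads the intensity distribution so that low-contrast frames
    (e.g. hazy or poorly exposed footage) use the full 0-255 range.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map each gray level through the normalized cumulative distribution.
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

# A flat, low-contrast frame: values squeezed into [100, 120)
frame = np.tile(np.arange(100, 120, dtype=np.uint8), (20, 1))
enhanced = equalize_histogram(frame)
```

After equalization the squeezed intensity range is stretched across the full 8-bit scale, which helps the subsequent line-detection stage find edges in degraded footage.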

An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • 박경수;임창주;이경태
    • 대한인간공학회지
    • /
    • Vol. 19, No. 1
    • /
    • pp.77-90
    • /
    • 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to allow for the user's head movements in such an interface. We propose an efficient camera calibration method to accurately track the 3D position and orientation of the user's head, and we evaluate its performance. The experimental error analysis showed that the proposed method provides a more accurate and stable camera pose (i.e., position and orientation) than the conventional direct linear transformation (DLT) method commonly used for camera calibration. The results of this study can be applied to head tracking for eye-controlled human/computer interfaces and virtual reality technology.
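
The conventional direct linear transformation baseline mentioned above can be illustrated with a minimal DLT homography estimation (a sketch of the baseline technique only, not the authors' proposed calibration method):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 point pairs via the
    direct linear transformation (DLT). src, dst: (N, 2) arrays."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # Solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Check: recover a known homography from four projected corner points.
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.05, 0.9, 2.0],
                   [0.001, 0.002, 1.0]])
src = np.array([[0., 0.], [100., 0.], [100., 100.], [0., 100.]])
pts = np.column_stack([src, np.ones(4)]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:3]
H_est = dlt_homography(src, dst)
```

The DLT's sensitivity to noise in the point correspondences is the kind of instability the study's proposed method aims to reduce.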

Lightweight image classifier for CIFAR-10

  • Sharma, Akshay Kumar;Rana, Amrita;Kim, Kyung Ki
    • 센서학회지
    • /
    • Vol. 30, No. 5
    • /
    • pp.286-289
    • /
    • 2021
  • Image classification is one of the fundamental applications of computer vision. It enables a system to identify an object in an image. Recently, image classification applications have broadened their scope from computer applications to edge devices. The convolutional neural network (CNN) is the main class of deep learning neural networks widely used in computer vision tasks, and it delivers high accuracy. However, CNN algorithms use a large number of parameters and incur high computational costs, which hinder their implementation on edge hardware devices. To address this issue, this paper proposes a lightweight image classifier that provides good accuracy while using fewer parameters. The proposed classifier diverts the input into three paths and utilizes different scales of receptive fields to extract more feature maps with fewer parameters during training, resulting in a small model. The model is tested on the CIFAR-10 dataset and achieves an accuracy of 90% using 0.26M parameters. This is better than the state-of-the-art models, and it can be implemented on edge devices.
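
The parameter savings from splitting the input into parallel paths with different receptive-field sizes can be illustrated with simple counting arithmetic (hypothetical channel counts; the abstract does not list the exact configuration):

```python
def conv_params(in_ch, out_ch, k):
    """Number of parameters in a k x k convolution (weights + biases)."""
    return in_ch * out_ch * k * k + out_ch

# One wide single-path layer: 3 -> 96 channels with 5x5 kernels
single_path = conv_params(3, 96, 5)

# Three parallel paths of 32 channels each, with different receptive fields
multi_path = (conv_params(3, 32, 1)    # 1x1 path
              + conv_params(3, 32, 3)  # 3x3 path
              + conv_params(3, 32, 5)) # 5x5 path
```

The three narrow paths produce the same 96 output channels as the single wide layer with fewer than half the parameters, while the 1×1/3×3/5×5 mix covers multiple receptive-field scales.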

Computer Vision-Based Process Inspection for the Development of Automated Assembly Technology for Ethernet Connectors (이더넷 커넥터 자동 조립 기술 개발을 위한 컴퓨터 비전 기반 공정 검사)

  • 홍윤정;이건;우지영;남윤영
    • 한국컴퓨터정보학회:학술대회논문집
    • /
    • Proceedings of the 69th Winter Conference of the Korean Society of Computer Information (2024), Vol. 32, No. 1
    • /
    • pp.89-90
    • /
    • 2024
  • This study aims to use computer vision to detect defects in wire harnesses accurately and quickly by computing six measurements: the length of the crimped terminal, the dimensions (width) of the terminal tip, and the widths of the crimped sections (wire barrel and core-wire barrel). The values are extracted by generating reference points from the features of each terminal region and the contrast between the background and the object. The system achieved accuracies of 99.1%, 98.7%, 92.6%, 92.5%, 99.9% and 99.7% for the six measurement types, an excellent result averaging 97% accuracy across all measurements.
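
The measurement idea — using background/object contrast to locate the terminal and converting pixel extents to physical dimensions — can be sketched as follows (a simplified illustration; the calibration factor and threshold are hypothetical):

```python
import numpy as np

MM_PER_PIXEL = 0.05  # hypothetical camera calibration factor (mm per pixel)

def measure_object(gray, threshold=128):
    """Measure an object's length and width in millimetres by
    thresholding on background/foreground contrast and taking the
    bounding box of the foreground pixels (simplified sketch)."""
    mask = gray > threshold            # bright object on dark background
    ys, xs = np.nonzero(mask)
    length_px = xs.max() - xs.min() + 1
    width_px = ys.max() - ys.min() + 1
    return length_px * MM_PER_PIXEL, width_px * MM_PER_PIXEL

# Synthetic "terminal": a 40 x 12 pixel bright rectangle
img = np.zeros((50, 60), dtype=np.uint8)
img[20:32, 10:50] = 200
length_mm, width_mm = measure_object(img)
```

The real system refines this idea with per-region reference points so that each of the six measurements is taken between well-defined landmarks rather than a raw bounding box.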

Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing

  • Mohapatra, Arpita;Sarangi, Sunita;Patnaik, Srikanta;Sabut, Sukant
    • Journal of information and communication convergence engineering
    • /
    • Vol. 12, No. 4
    • /
    • pp.263-270
    • /
    • 2014
  • Corner detection and feature extraction are essential aspects of computer vision problems such as object recognition and tracking. Feature detectors such as the Scale Invariant Feature Transform (SIFT) yield high-quality features but are computationally intensive for use in real-time applications. The Features from Accelerated Segment Test (FAST) detector provides faster feature computation by extracting only corner information when recognizing an object. In this paper we analyze object detection algorithms with respect to efficiency, quality and robustness by comparing the characteristics of corner detectors and feature extractors. The simulation results show that, compared to the conventional SIFT algorithm, an object recognition system based on the FAST corner detector yields increased speed with low performance degradation. The average time to find keypoints with the SIFT method is about 0.116 seconds for extracting 2169 keypoints; similarly, the average time to find corner points with the FAST method at threshold 30 was 0.651 seconds for detecting 1714 keypoints. Thus, the FAST method detects corner points faster while yielding good-quality images for object recognition.
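
The FAST segment test that makes the detector so cheap can be sketched directly (a simplified NumPy implementation of the standard FAST-9 test, without the high-speed pretest or non-maximum suppression used in practice):

```python
import numpy as np

# The 16-point Bresenham circle of radius 3 around a candidate pixel (dy, dx)
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def has_contiguous_run(flags, n):
    """True if `flags` contains n contiguous True values (with wraparound)."""
    doubled = np.concatenate([flags, flags])
    run = 0
    for f in doubled:
        run = run + 1 if f else 0
        if run >= n:
            return True
    return False

def fast_corners(gray, threshold=30, n=9):
    """Simplified FAST segment test: a pixel is a corner if n of the 16
    circle pixels are all brighter (or all darker) than it by `threshold`."""
    h, w = gray.shape
    img = gray.astype(np.int32)
    corners = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            ring = np.array([img[y + dy, x + dx] for dy, dx in CIRCLE])
            p = img[y, x]
            if (has_contiguous_run(ring > p + threshold, n)
                    or has_contiguous_run(ring < p - threshold, n)):
                corners.append((y, x))
    return corners

# Synthetic image: a bright square on a dark background has corner responses
img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 200
corners = fast_corners(img, threshold=30)
```

The segment test needs only integer comparisons on 16 pixels per candidate, which is why FAST is markedly cheaper than SIFT's scale-space and gradient-histogram pipeline.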

Motion Segmentation from Color Video Sequences based on AMF

  • 알라김;김윤호
    • 한국정보전자통신기술학회논문지
    • /
    • Vol. 2, No. 3
    • /
    • pp.31-38
    • /
    • 2009
  • Identifying moving objects in video data is a typical task in many computer vision applications. In this paper, we propose a motion segmentation method that consists of background subtraction followed by foreground pixel segmentation. The Approximated Median Filter (AMF) was chosen to perform background modelling. To demonstrate the effectiveness of the proposed approach, we tested it on gray-scale video data as well as in the RGB color space.
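
The approximated median filter is simple enough to sketch in a few lines (a NumPy illustration of the standard AMF update rule, not the authors' exact parameters):

```python
import numpy as np

def update_background(background, frame, step=1):
    """AMF background update: nudge each background pixel one step toward
    the current frame. Over time the background converges to the
    per-pixel running median of the video."""
    bg = background.astype(np.int32)
    bg += step * np.sign(frame.astype(np.int32) - bg)
    return np.clip(bg, 0, 255).astype(np.uint8)

def foreground_mask(background, frame, threshold=25):
    """Pixels differing from the background model by more than the
    threshold are labelled foreground (moving object)."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff > threshold

# Learn a static scene, then segment a bright block that appears in it
background = np.zeros((10, 10), dtype=np.uint8)
scene = np.full((10, 10), 60, dtype=np.uint8)
for _ in range(100):                      # background converges to the scene
    background = update_background(background, scene)
frame = scene.copy()
frame[2:5, 2:5] = 200                     # a moving object enters
mask = foreground_mask(background, frame)
```

Because the update changes each pixel by at most one gray level per frame, the AMF is far cheaper than keeping a true per-pixel median buffer, at the cost of slower adaptation to sudden scene changes.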
