• Title/Summary/Keyword: real-time vision

847 search results

Catadioptric Omnidirectional Optical System Using a Spherical Mirror with a Central Hole and a Plane Mirror for Visible Light (중심 구멍이 있는 구면거울과 평면거울을 이용한 가시광용 반사굴절식 전방위 광학계)

  • Seo, Hyeon Jin;Jo, Jae Heung
    • Korean Journal of Optics and Photonics / v.26 no.2 / pp.88-97 / 2015
  • An omnidirectional optical system is a special optical system that images in real time a panoramic scene with an azimuthal angle of $360^{\circ}$ and an altitude angle corresponding to the upper and lower fields of view from the horizon line. In this paper, for easy fabrication and compact size, we designed and fabricated a catadioptric omnidirectional optical system for the visible spectrum, consisting of a mirror part (a spherical mirror with a central hole, i.e. obscuration, and a plane mirror) and an imaging lens part (three single spherical lenses and a spherical doublet). We evaluated its imaging performance by measuring the cut-off spatial frequency using automobile license plates, and the vertical field of view using an ISO 12233 chart. The resulting system achieves a vertical field of view from $+53^{\circ}$ to $-17^{\circ}$ and an azimuthal angle of $360^{\circ}$. It cleanly imaged the letters on a car's front license plate at an object distance of 3 meters, which corresponds to a cut-off spatial frequency of 135 lp/mm.

Automatic Bee-Counting System with Dual Infrared Sensor based on ICT (ICT 기반 이중 적외선 센서를 이용한 꿀벌 출입 자동 모니터링 시스템)

  • Son, Jae Deok;Lim, Sooho;Kim, Dong-In;Han, Giyoun;Ilyasov, Rustem;Yunusbaev, Ural;Kwon, Hyung Wook
    • Journal of Apiculture / v.34 no.1 / pp.47-55 / 2019
  • Honey bees are a vital part of the food chain as the most important pollinators for a broad palette of crops and wild plants. Climate change and the colony collapse disorder (CCD) phenomenon make it important to develop ICT solutions that predict changes in the beehive and alert beekeepers to potential threats. In this paper, we report test results for a bee-counting system that stands out against previous analogues due to its comprehensive components: an improved dual infrared sensor that detects honey bees entering and leaving the hive, environmental sensors that measure ambient and interior conditions, a Bluetooth Low Energy (BLE) wireless network that transmits the sensing data in real time to a gateway, and a cloud that accumulates and analyzes the data. To assess the system's accuracy, three people manually counted the outgoing and incoming honey bees in a 360-minute video recording. The differences between the automatic and manual counts were 3.98% for outgoing and 4.43% for incoming bees. These differences are lower than those of previous analogues, which suggests that the tested system is a good candidate for use in precision apiculture, scientific research, and education.
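The dual-sensor gate described above infers direction from the order in which the two infrared beams are broken. A minimal sketch of that logic, assuming a hypothetical event format in which each pass yields an ordered list of triggered sensor IDs (`'outer'` facing away from the hive, `'inner'` facing the hive interior):

```python
def classify_pass(events):
    """Classify one bee pass through a dual-infrared gate.

    events: ordered list of sensor IDs triggered during the pass.
    Assumed convention: breaking the outer beam first means the bee
    is entering the hive; breaking the inner beam first means leaving.
    """
    if events[:2] == ["outer", "inner"]:
        return "incoming"
    if events[:2] == ["inner", "outer"]:
        return "outgoing"
    return "unknown"  # single trigger or noisy sequence


def count_passes(pass_events):
    """Tally incoming/outgoing counts over many passes."""
    counts = {"incoming": 0, "outgoing": 0, "unknown": 0}
    for events in pass_events:
        counts[classify_pass(events)] += 1
    return counts
```

Sensor and event names here are illustrative assumptions, not the paper's API; the actual system also debounces triggers and handles bees that turn back mid-gate.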

Filtering-Based Method and Hardware Architecture for Drivable Area Detection in Road Environment Including Vegetation (초목을 포함한 도로 환경에서 주행 가능 영역 검출을 위한 필터링 기반 방법 및 하드웨어 구조)

  • Kim, Younghyeon;Ha, Jiseok;Choi, Cheol-Ho;Moon, Byungin
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.51-58 / 2022
  • Drivable area detection, one of the main functions of advanced driver assistance systems, means detecting an area where a vehicle can drive safely. Because it is closely related to driver safety, it requires high accuracy with real-time operation. To satisfy these conditions, the V-disparity-based method is widely used: it detects the drivable area by calculating the road disparity value in each row of an image. However, the V-disparity-based method can falsely detect a non-road area as road when the disparity values are inaccurate or when an object's disparity equals that of the road. In road environments that include vegetation, such as highways and country roads, vegetation may be falsely detected as drivable area because its disparity characteristics are similar to those of the road. This paper therefore proposes a drivable area detection method and hardware architecture that achieve high accuracy in such environments by reducing the false detections caused by these V-disparity characteristics. When evaluated on the 289 images of the KITTI road dataset, the proposed method shows an accuracy of 90.12% and a recall of 97.96%. In addition, when implemented on an FPGA platform, the proposed hardware architecture uses 8925 slice registers and 7066 slice LUTs.
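The V-disparity representation the abstract relies on is a per-row histogram of disparity values: road pixels accumulate into a slanted line, so the road disparity for each image row can be read off as the histogram peak. A minimal sketch, assuming a dense disparity map as a NumPy array (this is the generic technique, not the paper's filtered variant):

```python
import numpy as np


def v_disparity(disp, max_disp=64):
    """Build a V-disparity histogram from a disparity map.

    Each row of the result counts, for one image row, how many pixels
    take each integer disparity value in [0, max_disp).
    """
    h, _ = disp.shape
    vd = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        vals = disp[v]
        vals = vals[(vals >= 0) & (vals < max_disp)]
        np.add.at(vd[v], vals.astype(int), 1)  # histogram row in place
    return vd


def road_profile(vd):
    """Per-row road disparity estimate: the dominant disparity bin.
    (The paper's method additionally filters out vegetation rows.)"""
    return vd.argmax(axis=1)
```

On a real disparity map the profile would then be fitted with a line (e.g. by RANSAC or Hough transform) to reject rows dominated by obstacles or vegetation.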

Appropriate Smart Factory : Demonstration of Applicability to Industrial Safety (적정 스마트공장: 산업안전 기술로의 적용 가능성 실증)

  • Kwon, Kui-Kam;Jeong, Woo-Kyun;Kim, Hyungjung;Quan, Ying-Jun;Kim, Younggyun;Lee, Hyunsu;Park, Suyoung;Park, Sae-Jin;Hong, SungJin;Yun, Won-Jae;Jung, Guyeop;Lee, Gyu Wha;Ahn, Sung-Hoon
    • Journal of Appropriate Technology / v.7 no.2 / pp.196-205 / 2021
  • As the importance of industrial safety grows, various accident prevention technologies based on smart factory technology are being studied. However, small and medium-sized enterprises (SMEs), which account for the majority of industrial accidents, have difficulty applying these technologies to accident prevention because of practical constraints. In this study, customized monitoring and warning systems for each type of industrial accident were developed and applied in the field, demonstrating accident prevention through appropriate smart factory technology suited to SMEs. A customized monitoring system using vision, current, temperature, and gas sensors was established for the four major accident types: worker body access, short circuit and overcurrent, fire and burns due to high temperature, and emission of hazardous gas. In addition, a notification method suitable for each work environment was applied so that monitored risk factors could be recognized quickly, and real-time data transmission and display enabled workers and managers to assess the risk effectively. Through the application and demonstration of these appropriate smart factory technologies, we discuss how such industrial safety technologies can be adopted more widely.
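The per-hazard monitoring described above reduces, at its core, to comparing each sensor stream against a hazard-specific limit and raising the matching notification. A minimal sketch under assumed sensor names and threshold values (none of which are from the paper):

```python
# Hypothetical limits for three of the monitored hazard types;
# the vision-based body-access check would need a separate detector.
THRESHOLDS = {
    "current_A": 15.0,      # overcurrent limit (assumed)
    "temperature_C": 70.0,  # burn-risk surface temperature (assumed)
    "gas_ppm": 50.0,        # hazardous-gas concentration (assumed)
}


def check_alerts(reading):
    """Return the hazard keys whose latest reading exceeds its limit.

    reading: dict mapping sensor name to the most recent value;
    missing sensors are treated as safe.
    """
    return [key for key, limit in THRESHOLDS.items()
            if reading.get(key, 0.0) > limit]
```

In the deployed system each triggered key would be routed to the workplace-appropriate notification channel (light, sound, or display) and forwarded for real-time display to managers.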

A Study on Tire Surface Defect Detection Method Using Depth Image (깊이 이미지를 이용한 타이어 표면 결함 검출 방법에 관한 연구)

  • Kim, Hyun Suk;Ko, Dong Beom;Lee, Won Gok;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.5 / pp.211-220 / 2022
  • Recently, research on smart factories, triggered by the 4th industrial revolution, has been actively conducted, and the manufacturing industry is pursuing various studies to improve productivity and quality based on deep learning. This paper studies the detection of tire surface defects in the visual inspection stage of the tire manufacturing process, and introduces a detection method that uses depth images acquired with a 3D camera. The tire surface depth images addressed here suffer from low contrast, caused by the shallow relief of the tire surface, and from differences in the reference depth value across acquisition environments. Moreover, the nature of manufacturing requires an algorithm that delivers both strong detection performance and real-time processing. We therefore studied a relatively simple normalization of the depth image, so that the defect detector need not rely on a complex algorithm pipeline, and compared the general normalization method against the proposed one using YOLO V3, which can satisfy both detection performance and speed. The experiments confirm that the proposed normalization improves performance by about 7% in terms of mAP@0.5, demonstrating its effectiveness.
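The two problems the abstract names, a shifting reference depth and shallow relief, suggest a normalization that first removes a per-image baseline and then stretches the remaining range. A minimal sketch of one such "relatively simple" normalization (an assumed illustration, not the paper's exact formula):

```python
import numpy as np


def normalize_depth(depth):
    """Normalize a tire-surface depth image for a downstream detector.

    1) Subtract the per-image median to cancel the acquisition-dependent
       reference depth.
    2) Stretch the remaining shallow relief to the full 8-bit range so
       low-contrast surface defects become visible.
    """
    d = depth.astype(np.float64) - np.median(depth)
    lo, hi = d.min(), d.max()
    if hi == lo:  # perfectly flat image: nothing to stretch
        return np.zeros_like(depth, dtype=np.uint8)
    return ((d - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

Because both steps are cheap elementwise operations, this kind of preprocessing preserves the real-time budget of the detection stage.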

Design and Implementation of Real-time Digital Twin in Heterogeneous Robots using OPC UA (OPC UA를 활용한 이기종 로봇의 실시간 디지털 트윈 설계 및 구현)

  • Jeehyeong Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.189-196 / 2023
  • As the manufacturing paradigm shifts, various collaborative robots are creating new markets. Demand for collaborative robots is increasing across industries because, compared to conventional industrial robots, they offer easier operation, improved productivity, and a replacement for workers performing simple tasks. However, accidents caused by collaborative robots occur frequently at industrial sites, threatening worker safety. To build human-centered industrial sites that use robots, worker safety must be guaranteed, and a collaborative robot guard system is needed that provides reliable communication without the risk of disconnection. Accidents within the working radius of cobots must be prevented redundantly, and the risk of safety accidents reduced, through sensors and computer vision. We build a system based on OPC UA, an international protocol for communication with diverse industrial equipment, and propose a collaborative robot guard system that combines ultrasonic sensors with image analysis using a convolutional neural network (CNN). The proposed system evaluates whether the robot can be controlled when a worker is in an unsafe situation.
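The redundant guard described above fuses two independent signals: an ultrasonic range reading and a CNN-based person detection. A minimal sketch of the fusion decision, with an assumed stop distance and without the OPC UA transport (the real system would write the resulting command to the robot controller over OPC UA):

```python
def guard_decision(distance_cm, person_detected, stop_distance_cm=50.0):
    """Redundant safety check for a collaborative robot.

    Halt the cobot when EITHER channel reports a worker in the danger
    zone, so a failure of one sensor cannot silently disable the guard.

    distance_cm: latest ultrasonic range reading.
    person_detected: boolean output of the CNN person detector.
    stop_distance_cm: assumed safety radius, not a value from the paper.
    """
    if person_detected or distance_cm < stop_distance_cm:
        return "stop"
    return "run"
```

OR-fusion trades some availability for safety: a false positive on either channel pauses the robot, but a missed detection on one channel is still caught by the other.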

Vision-based Low-cost Walking Spatial Recognition Algorithm for the Safety of Blind People (시각장애인 안전을 위한 영상 기반 저비용 보행 공간 인지 알고리즘)

  • Sunghyun Kang;Sehun Lee;Junho Ahn
    • Journal of Internet Computing and Services / v.24 no.6 / pp.81-89 / 2023
  • In modern society, blind people face difficulties navigating common environments such as sidewalks, elevators, and crosswalks. Research has sought to alleviate these inconveniences through visual and audio aids, but such work often runs into practical limitations due to the high cost of wearable devices, high-performance CCTV systems, and voice sensors. In this paper, we propose an artificial intelligence fusion algorithm that uses the low-cost video sensors built into smartphones to help blind people navigate safely while walking. The proposed algorithm combines motion capture and object detection to detect moving people and the various obstacles encountered on sidewalks: we employed the MediaPipe library to model and detect surrounding pedestrians in motion, and object detection algorithms to model and detect obstacles. Through experiments, we validated the fusion algorithm's performance, achieving an accuracy of 0.92, a precision of 0.91, a recall of 0.99, and an F1 score of 0.95. This research can help blind people negotiate obstacles such as bollards, shared scooters, and vehicles, enhancing their mobility and safety.
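The reported metrics are internally consistent: the F1 score is the harmonic mean of precision and recall, which for the stated values works out to the reported 0.95. A quick check:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)


# Values reported in the abstract:
score = f1_score(0.91, 0.99)  # 1.8018 / 1.90 = 0.9483...
assert round(score, 2) == 0.95
```

The harmonic mean penalizes imbalance between the two components, so the high 0.95 reflects that both precision (0.91) and recall (0.99) are strong rather than one compensating for the other.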