• Title/Summary/Keyword: real time object detection


Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.18 no.6
    • /
    • pp.1307-1312
    • /
    • 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and LiDAR (Light Detection and Ranging) sensors to address the core components of autonomous driving perception technology: object recognition and distance measurement. We extract objects within the scene and generate precise location and distance information for them using the proposed hybrid camera system. First, we employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, for object recognition within the scene. We then use the multi-focal cameras to create depth maps and generate object position and distance information. To improve distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensors with the generated depth maps. Based on the proposed hybrid camera system, we introduce an autonomous vehicle platform that perceives its surroundings more accurately during operation and provides precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
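
The per-object fusion of a camera depth map with LiDAR range data that this abstract describes could look roughly like the sketch below. It is not the authors' implementation; the detection boxes (e.g. from a YOLOv7 model), the depth map, the projected LiDAR points, and the blending weight are all assumed inputs.

```python
import numpy as np

def object_distances(boxes, depth_map, lidar_uvz, lidar_weight=0.7):
    """Estimate per-object distance by blending a multi-focal-camera depth map
    with LiDAR points projected into the image (illustrative sketch only).

    boxes     : list of (x1, y1, x2, y2) integer detections, e.g. from YOLOv7
    depth_map : HxW array of camera-derived depth in meters
    lidar_uvz : Nx3 array of LiDAR returns projected to pixels (u, v, range_m)
    """
    distances = []
    for x1, y1, x2, y2 in boxes:
        cam_d = np.median(depth_map[y1:y2, x1:x2])   # camera estimate inside the box
        u, v, z = lidar_uvz.T
        inside = (u >= x1) & (u < x2) & (v >= y1) & (v < y2)
        if inside.any():
            lidar_d = np.median(z[inside])           # LiDAR estimate inside the box
            d = lidar_weight * lidar_d + (1 - lidar_weight) * cam_d
        else:
            d = cam_d                                # no LiDAR hits: camera depth only
        distances.append(float(d))
    return distances
```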

A Study on Establishment Method of Smart Factory Dataset for Artificial Intelligence (인공지능형 스마트공장 데이터셋 구축 방법에 관한 연구)

  • Park, Youn-Soo;Lee, Sang-Deok;Choi, Jeong-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.5
    • /
    • pp.203-208
    • /
    • 2021
  • At manufacturing sites, workers have input materials into the manufacturing process and left input records according to the work instructions, but product LOT tracking has not been possible because of frequent omissions. Recently, this has been replaced by a system that automatically records material input using RFID tags. The initial automatic recognition rate was a good 97 percent, with input information generated automatically from RACK (TAG) IDs and RACK input-time analysis, but the rate continues to decrease because of multi-material RACKs, TAG loss, and new product introductions. Establishing an artificial-intelligence smart factory dataset is expected to improve the automatic recognition rate and real-time monitoring, and thereby to increase speed and yield (the ratio of normal products) across the overall production process.
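
As a rough illustration of the record generation the abstract mentions (RACK (TAG) ID plus input-time analysis), the sketch below collapses repeated RFID reads of the same RACK ID within a time window into a single input record. The field names and window length are assumptions, not details from the paper.

```python
from datetime import timedelta

def build_input_records(tag_reads, window=timedelta(minutes=5)):
    """Collapse repeated RFID reads of the same RACK (TAG) ID that occur within
    `window` into a single material-input record (illustrative sketch only).

    tag_reads: list of (tag_id, read_time) tuples sorted by read_time.
    """
    records = []
    last_seen = {}  # tag_id -> last accepted read_time
    for tag_id, read_time in tag_reads:
        prev = last_seen.get(tag_id)
        if prev is None or read_time - prev > window:
            records.append({"rack_id": tag_id, "input_time": read_time})
            last_seen[tag_id] = read_time
    return records
```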

A Study on Improvement of Pedestrian Care System for Cooperative Automated Driving (자율협력주행을 위한 보행자Care 시스템 개선에 관한 연구)

  • Lee, Sangsoo;Kim, Jonghwan;Lee, Sunghwa;Kim, Jintae
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.2
    • /
    • pp.111-116
    • /
    • 2021
  • This study improves the pedestrian Care system, which delivers jaywalking events in real time to the autonomous driving control center and to autonomous vehicles in operation, and which issues warnings and announcements to pedestrians based on pedestrian signals. To secure the reliability of the system's object detection, we present an inspection method that combines a camera sensor with a LiDAR sensor, together with an improved system algorithm. In addition, for events reported by the LiDAR sensors and intelligent CCTV while autonomous vehicles are in operation, we present a system algorithm that eliminates overlapping events and improves the accuracy of matching the same time, place, and object.
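
The elimination of overlapping events reported by different sensors for the same time, place, and object could be sketched as below. The event fields and the time/distance tolerances are assumptions for illustration, not the paper's algorithm.

```python
from math import hypot

def deduplicate_events(events, time_tol=1.0, dist_tol=2.0):
    """Merge pedestrian events reported by different sensors (e.g. LiDAR and
    intelligent CCTV) that refer to the same time, place, and object.
    Tolerances in seconds and meters are assumed values (sketch only).

    events: list of dicts with keys 'timestamp', 'x', 'y', 'object_id'.
    """
    kept = []
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        duplicate = any(
            abs(ev["timestamp"] - k["timestamp"]) <= time_tol
            and hypot(ev["x"] - k["x"], ev["y"] - k["y"]) <= dist_tol
            and ev["object_id"] == k["object_id"]
            for k in kept
        )
        if not duplicate:
            kept.append(ev)
    return kept
```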

A Study on the Application Model of AI Convergence Services Using CCTV Video for the Advancement of Retail Marketing (리테일 마케팅 고도화를 위한 CCTV 영상 데이터 기반의 AI 융합 응용 서비스 활용 모델 연구)

  • Kim, Jong-Yul;Kim, Hyuk-Jung
    • Journal of Digital Convergence
    • /
    • v.19 no.5
    • /
    • pp.197-205
    • /
    • 2021
  • Recently, the retail industry has increasingly demanded information technology convergence and utilization to respond to external environmental threats such as COVID-19 and to remain competitive using AI technologies, but research and application services are still lacking. This study is a case study of a CCTV video data-driven AI application that combines CCTV image collection in retail spaces, an object detection and tracking AI model, a time-series database that stores objects tracked in real time and their tracking data, heatmaps for analyzing congestion and interest in the retail space, and social access zones. We present the overall design and verify its usability through a practical implementation.
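
One of the components the abstract lists, a congestion heatmap built from tracked object positions, could be sketched as follows. The grid cell size and the idea of feeding in box centers from a tracker are assumptions for illustration.

```python
import numpy as np

def congestion_heatmap(track_points, frame_shape, cell=20):
    """Accumulate tracked object positions (e.g. bounding-box centers from a
    detection-and-tracking model) into a coarse heatmap of a retail space.
    The grid cell size is an assumed value (illustrative sketch only).

    track_points: iterable of (x, y) pixel positions over time
    frame_shape : (height, width) of the CCTV frame
    """
    h, w = frame_shape
    heat = np.zeros((h // cell + 1, w // cell + 1), dtype=np.float64)
    for x, y in track_points:
        heat[int(y) // cell, int(x) // cell] += 1.0
    return heat / heat.max() if heat.max() > 0 else heat
```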

Illumination Environment Adaptive Real-time Video Surveillance System for Security of Important Area (중요지역 보안을 위한 조명환경 적응형 실시간 영상 감시 시스템)

  • An, Sung-Jin;Lee, Kwan-Hee;Kwon, Goo-Rak;Kim, Nam-Hyung;Ko, Sung-Jea
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.2 s.314
    • /
    • pp.116-125
    • /
    • 2007
  • In this paper, we propose an illumination-environment-adaptive real-time surveillance system for the security of important areas such as military bases, prisons, and strategic infrastructure. The proposed system recognizes the movement of objects in bright environments as well as under dark illumination. The procedure of the proposed system can be summarized as follows. First, the system discriminates between bright and dark scenes from the input image distribution. If the input image is dark, the system applies pre-processing: Multi-Scale Retinex with Color Restoration (MSRCR) is performed to enhance the contrast of images captured in dark environments. Second, the enhanced input image is subtracted from the revised background image, and morphological image processing is then applied to extract the objects correctly. Finally, the bounding box enclosing each object is tracked. The center point of each bounding box obtained by the proposed algorithm provides more accurate tracking information. Experimental results show that the proposed system performs well even when an object moves very fast and the background is quite dark.
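
A minimal OpenCV sketch of this kind of pipeline is given below: classify the frame as dark or bright from its intensity, enhance dark frames, subtract a background image, clean the mask with morphology, and extract bounding boxes. CLAHE stands in here for the MSRCR step (MSRCR is not a single OpenCV call), and all thresholds are assumed values, not the paper's.

```python
import cv2
import numpy as np

def detect_moving_objects(frame_bgr, background_gray, dark_thresh=60):
    """Illumination-adaptive detection sketch (not the paper's implementation).

    frame_bgr       : current BGR frame
    background_gray : grayscale background image of the same size
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if gray.mean() < dark_thresh:                       # dark scene: enhance contrast
        gray = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8)).apply(gray)
    diff = cv2.absdiff(gray, background_gray)           # subtract background image
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
```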

A study on accident prevention AI system based on estimation of bus passengers' intentions (시내버스 승하차 의도분석 기반 사고방지 AI 시스템 연구)

  • Seonghwan Park;Sunoh Byun;Junghoon Park
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.57-66
    • /
    • 2023
  • In this paper, we present a study on an AI-based system that uses the CCTV system inside city buses to predict the intentions of boarding and alighting passengers, with the aim of preventing accidents. The proposed system employs the YOLOv7 Pose model to detect passengers and an LSTM model to predict the intentions of the tracked passengers. The system can be installed on the bus's CCTV terminals, allowing real-time visual confirmation of passengers' intentions throughout driving, and it alerts the driver, mitigating potential accidents while passengers board and alight. Test results show accuracy rates of 0.81 for analyzing boarding intentions and 0.79 for predicting alighting intentions onboard. To ensure real-time performance, we verified that analysis at a minimum of 5 frames per second is achievable in a GPU environment. This algorithm enhances the safety of passenger transitions during bus operation. In the future, with improved hardware specifications and abundant data collection, the system can be expanded to cover various safety-related metrics. The algorithm is expected to play a pivotal role in ensuring safety when autonomous driving becomes commercialized, and it could also be applied to other modes of public transportation, such as subways and other forms of mass transit, contributing to the overall safety of public transportation systems.
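
An LSTM over sequences of pose keypoints, as the abstract describes, could look roughly like the PyTorch sketch below. The layer sizes, the two-class output, and the 17-keypoint input format (typical of pose models such as YOLOv7 Pose) are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class IntentionLSTM(nn.Module):
    """Classify a tracked passenger's boarding/alighting intention from a
    sequence of 2D pose keypoints (illustrative sketch only)."""

    def __init__(self, n_keypoints=17, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_keypoints * 2, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, keypoint_seq):
        # keypoint_seq: (batch, time, n_keypoints * 2) of flattened (x, y) pairs
        _, (h_n, _) = self.lstm(keypoint_seq)
        return self.head(h_n[-1])        # logits over intention classes

# Example: a batch of 4 tracked passengers, 30 frames each
logits = IntentionLSTM()(torch.randn(4, 30, 34))
```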

Visual Feedback System for Manipulating Objects Using Hand Motions in Virtual Reality Environment (가상 환경에서의 손동작을 사용한 물체 조작에 대한 시각적 피드백 시스템)

  • Seo, Woong;Kwon, Sangmo;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.9-19
    • /
    • 2020
  • With the recent development of various kinds of virtual reality devices, there has been active research into increasing the sense of reality by recognizing the user's physical behavior rather than relying on classical user input methods. Among such devices, the Leap Motion controller recognizes the user's hand gestures and can realistically trace the user's hand in a virtual reality environment. However, manipulating an object in virtual reality with the recognized hand often causes the hand to pass through the object, which could not occur in the real world. This study presents a way to build a visual feedback system that enhances the user's sense of interaction between hands and objects in virtual reality. The user's hand is examined precisely with a ray-tracing method to determine whether a virtual object collides with it, and when a collision occurs, visual feedback is given by reconstructing the hand: the fingertips that have entered the object are repositioned using a signed distance field and inverse kinematics. This enables realistic interaction in virtual reality in real time.
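
The signed-distance-field step of such a correction can be illustrated with the small sketch below, where a sphere SDF stands in for the virtual object and a penetrating fingertip is pushed back to the surface. Re-solving the full hand pose with inverse kinematics, as the abstract describes, is not shown; the sphere and the epsilon offset are assumptions.

```python
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere (negative inside)."""
    return np.linalg.norm(p - center) - radius

def project_fingertip(p, center, radius, eps=1e-3):
    """If the tracked fingertip p has penetrated the object (SDF < 0), move it
    back along the SDF gradient to just above the surface; the hand pose would
    then be re-solved with inverse kinematics (not shown). Sketch only."""
    d = sphere_sdf(p, center, radius)
    if d >= 0:
        return p                           # no collision: keep the tracked position
    normal = (p - center) / (np.linalg.norm(p - center) + 1e-9)
    return p - normal * d + normal * eps   # push out by the penetration depth
```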

Distance measurement System from detected objects within Kinect depth sensor's field of view and its applications (키넥트 깊이 측정 센서의 가시 범위 내 감지된 사물의 거리 측정 시스템과 그 응용분야)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.279-282
    • /
    • 2017
  • The Kinect depth sensor, a depth camera developed by Microsoft as a natural user interface for games, has emerged as a very useful tool in the computer vision field. In this paper, taking advantage of the Kinect depth sensor and its high frame rate, we developed a distance measurement system using the Kinect camera and tested it for unmanned vehicles, which need vision systems to perceive the surrounding environment, as humans do, in order to detect objects in their path. The Kinect depth sensor is thus used to detect objects in its field of view and to measure the distance from those objects to the vision sensor. Each detected object is checked to determine whether it is a real object or pixel noise, reducing processing time by ignoring pixels that are not part of a real object. Using depth-segmentation techniques together with the OpenCV library for image processing, we can identify the objects present within the Kinect camera's field of view and measure their distance to the sensor. Tests show promising results, indicating that this system can also be used on autonomous vehicles equipped with the Kinect camera as a low-cost range sensor, for further processing depending on the application type once they come within a certain distance of detected objects.
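
A minimal sketch of this kind of depth-based object detection and distance measurement is given below: threshold the depth frame, discard small blobs as pixel noise, and report each remaining object's distance to the sensor. The range limit, minimum area, and use of connected components are assumptions for illustration, not the paper's exact method.

```python
import cv2
import numpy as np

def measure_object_distances(depth_mm, max_range_mm=4000, min_area=500):
    """Segment a Kinect depth frame and measure per-object distance
    (illustrative sketch only).

    depth_mm: HxW uint16 depth frame in millimetres (0 = no reading).
    """
    mask = ((depth_mm > 0) & (depth_mm < max_range_mm)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    results = []
    for i in range(1, n):                               # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue                                    # too small: treat as noise
        obj_depths = depth_mm[labels == i]
        results.append({"label": i,
                        "distance_m": float(obj_depths.min()) / 1000.0})
    return results
```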


Design of Image Extraction Hardware for Hand Gesture Vision Recognition

  • Lee, Chang-Yong;Kwon, So-Young;Kim, Young-Hyung;Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.1
    • /
    • pp.71-83
    • /
    • 2020
  • In this paper, we propose a system that can detect the shape of a hand at high speed using an FPGA. Because real-time processing is important, the hand-shape detection system is designed in Verilog HDL, a hardware description language that allows parallel processing, rather than in sequentially executed C++. Among the several approaches to hand-gesture recognition, an image-processing method is used. Since the human eye is sensitive to brightness, the YCbCr color model was selected from among the various color representations to obtain results that are less affected by lighting. For the Cb and Cr components, only the values corresponding to skin color are filtered out of the input image using constraint conditions. To increase the speed of object recognition, a median filter that removes noise from the input image is used; the filter is designed to compare values and extract the median at the same time in order to reduce the amount of computation. For parallel processing, the design locates the center line of the hand while scanning and sorting the stored data. The line with the highest count is selected as the center line of the hand, the size of the hand is determined from that count, and the hand and arm regions are separated. The designed hardware circuit satisfied the target operating frequency and gate count.
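
The paper's design is in Verilog HDL, but the YCbCr skin-color filtering and median filtering stages it describes can be illustrated with a short software analogue in Python. The Cb/Cr bounds below are commonly used skin-tone limits and are assumptions here, not the paper's constants.

```python
import cv2
import numpy as np

def hand_mask(frame_bgr, cb_range=(77, 127), cr_range=(133, 173)):
    """Software analogue of the hardware stages described above: convert to
    YCbCr, keep only pixels whose Cb/Cr fall within a skin-color range, then
    apply a median filter to remove noise (illustrative sketch only)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV order: Y, Cr, Cb
    _, cr, cb = cv2.split(ycrcb)
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1])).astype(np.uint8) * 255
    return cv2.medianBlur(mask, 5)                          # noise removal
```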

Robust Outlier-Object Detection in Image Pairs Based on Variable Threshold Using Empirical Correction Constant (실험적 교정상수를 사용한 가변문턱값에 기초한 영상 쌍에서의 강인한 이상 물체 검출)

  • Kim, Dong-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.1
    • /
    • pp.14-22
    • /
    • 2009
  • By calculating the differences between two images that capture the same scene at different times, we can detect a set of outliers, such as occluding objects due to moving vehicles. To reduce the influence of the different intensity properties of the two images, a simple technique that reruns the regression, based on a polynomial regression model, is employed. For robust detection of outliers, the image difference is normalized by the noise variance; hence, an accurate estimate of the noise variance is very important. In this paper, the use of an empirically obtained correction constant is proposed. Numerical analysis using both synthetic and real images is also presented to show the robust performance of the detection algorithm.
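
The overall idea can be sketched as below: fit a polynomial intensity mapping between the two registered images, normalize the residual difference by an estimated noise standard deviation scaled by a correction constant, and threshold it. The noise estimator, the threshold k, and the value of the correction constant are assumptions for illustration, not the paper's values, and the regression rerun after removing detected outliers is omitted.

```python
import numpy as np

def detect_outliers(img_a, img_b, degree=2, k=3.0, correction=1.0):
    """Detect outlier pixels between two registered images of the same scene
    taken at different times (illustrative sketch only)."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    coeffs = np.polyfit(a, b, degree)                 # polynomial intensity mapping a -> b
    residual = b - np.polyval(coeffs, a)
    # Robust noise estimate scaled by an empirical correction constant (assumed here)
    sigma = correction * 1.4826 * np.median(np.abs(residual - np.median(residual)))
    outlier_mask = np.abs(residual) > k * sigma       # variable threshold k * sigma
    return outlier_mask.reshape(img_a.shape)
```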