• Title/Summary/Keyword: Object-detection


Deep Learning based Vehicle AR Manual for Improving User Experience (사용자 경험 향상을 위한 딥러닝 기반 차량용 AR 매뉴얼)

  • Lee, Jeong-Min;Kim, Jun-Hak;Seok, Jung-Won;Park, Jinho
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.125-134 / 2022
  • This paper implements an AR manual for vehicles that works even in the vehicle interior, where the commonly used methods for augmenting AR content are difficult to apply, and applies a deep learning model to improve the registration between real space and virtual objects. Through deep learning, the steering-wheel logo is recognized regardless of position, angle, and inclination; 3D interior space coordinates are generated from it; and virtual buttons are precisely augmented onto the actual vehicle parts. Based on the same trained model, a function that recognizes the vehicle's main warning-light symbols is also implemented to increase the functionality and usability of the AR manual.

LiDAR Static Obstacle Map based Position Correction Algorithm for Urban Autonomous Driving (도심 자율주행을 위한 라이다 정지 장애물 지도 기반 위치 보정 알고리즘)

  • Noh, Hanseok;Lee, Hyunsung;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.39-44 / 2022
  • This paper presents a LiDAR static obstacle map based vehicle position correction algorithm for urban autonomous driving. Real-Time Kinematic (RTK) GPS is commonly used in highway automated vehicle systems, but in urban systems RTK GPS has trouble in shaded areas. This paper therefore presents a method to estimate the position of the host vehicle using an AVM camera, a front camera, LiDAR, and low-cost GPS, based on an Extended Kalman Filter (EKF). A static obstacle map (STOM) is constructed only from static objects based on Bayesian rule. To run the algorithm, an HD map and a static obstacle reference map (STORM) must be prepared in advance; the STORM is constructed by accumulating and voxelizing the STOM. The algorithm consists of four main steps. First, sensor data are acquired from the low-cost GPS, AVM camera, front camera, and LiDAR. Second, the low-cost GPS data are used to define the initial point. Third, the AVM camera, front camera, and LiDAR point clouds are matched to the HD map and STORM using the Normal Distributions Transform (NDT) method. Finally, the position of the host vehicle is corrected with the EKF. The proposed algorithm was implemented in the Linux Robot Operating System (ROS) environment and showed better performance than a lane-detection-only algorithm. It is expected to be more robust and accurate than raw LiDAR point cloud matching in autonomous driving.
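The final correction step is a standard EKF measurement update, in which a pose obtained from map matching corrects the GPS-predicted position. A minimal sketch in Python (the 2-D position-only state, the direct-position measurement model, and all noise values are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def ekf_position_update(x, P, z, R):
    """One EKF measurement update correcting the host-vehicle position.

    x : (2,) predicted position [east, north] (e.g. from low-cost GPS)
    P : (2, 2) state covariance
    z : (2,) position measurement (e.g. an NDT match against the STORM)
    R : (2, 2) measurement noise covariance
    """
    H = np.eye(2)                    # measurement observes position directly
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

# Noisy GPS prior, more confident map-matching measurement
x0 = np.array([10.0, 5.0])
P0 = np.eye(2) * 4.0
z = np.array([12.0, 4.0])
R = np.eye(2) * 1.0
x1, P1 = ekf_position_update(x0, P0, z, R)
```

Because the map-matching measurement is four times more confident than the prior here, the corrected position moves 80% of the way toward it and the covariance shrinks accordingly.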

Optimization of Pose Estimation Model based on Genetic Algorithms for Anomaly Detection in Unmanned Stores (무인점포 이상행동 인식을 위한 유전 알고리즘 기반 자세 추정 모델 최적화)

  • Sang-Hyeop Lee;Jang-Sik Park
    • Journal of the Korean Society of Industry Convergence / v.26 no.1 / pp.113-119 / 2023
  • In this paper, we propose a genetic-algorithm optimization of a pose estimation deep learning model for recognizing abnormal behavior in unmanned stores using radio frequencies. The radar uses millimeter waves in the 30 GHz to 300 GHz band; owing to their short wavelength and high directivity, these frequencies suffer little diffraction and little interference from radio absorption by objects. A millimeter-wave radar is used to avoid the privacy infringement that can occur with conventional CCTV image-based pose estimation. Deep learning-based pose estimation models generally use convolutional neural networks, which combine convolution and pooling layers of different types; there are many choices of convolution filter size, filter count, and convolution operation, and even more ways of combining these components, so it is difficult to find the structure and components of the optimal pose estimation model for given input data. Unlike previous millimeter-wave pose estimation studies, the proposed method explores the structure and components of the optimal model using a genetic algorithm, and the optimized model performs well. Data were collected in actual unmanned stores: using millimeter-wave radar, point cloud data and three-dimensional keypoint information from a Kinect Azure were gathered for falls and property damage occurring in unmanned stores. Experiments confirmed that the error was reduced compared with the conventional pose estimation model.
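A genetic algorithm over CNN components follows the usual select/crossover/mutate loop. A compact sketch (the search space, genome encoding, and stand-in fitness function below are illustrative assumptions; in the paper, fitness would be the validation performance of the trained pose estimation model on millimeter-wave data):

```python
import random

# Hypothetical search space for CNN components: kernel size,
# filter count, and layer depth (not the paper's actual encoding).
SPACE = {"kernel": [3, 5, 7], "filters": [16, 32, 64], "layers": [2, 3, 4]}

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(g):
    # Stand-in fitness peaking at kernel=5, filters=32, layers=3;
    # a real run would train and validate the candidate model instead.
    return (-abs(g["kernel"] - 5)
            - abs(g["filters"] - 32) / 16
            - abs(g["layers"] - 3))

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(g, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in g.items()}

def evolve(pop_size=20, generations=30):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
```

Each genome is a candidate model configuration; selection keeps the better half of the population, and crossover plus mutation generate new configurations to explore the combinatorial space.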

Present and Prospect of Ocean Observation Using Pressure-recording Inverted Echo Sounder (PIES) (압력측정 전도음향측심기(PIES)를 활용한 해양관측의 현재와 전망)

  • CHANHYUNG JEON;KANG-NYEONG LEE;HAJIN SONG;JEONG-YEOB CHAE;JAE-HUN PARK
    • The Sea: Journal of the Korean Society of Oceanography / v.28 no.1 / pp.51-61 / 2023
  • Sound can travel long distances in the ocean; hence, acoustic instruments have been widely used for ocean observation in fields such as bathymetric survey, object detection, underwater communication, and current measurement. Here we introduce the pressure-recording inverted echo sounder (PIES), one of the most powerful seafloor-moored instruments for ocean observation in physical oceanography. The PIES can measure various oceanic phenomena (currents, mesoscale eddies, internal waves, and sea surface height variability) and supports acoustic telemetry and a pop-up data shuttle (PDS) system for remote data acquisition. In this paper, we review uses of the PIES and describe present and prospective PIES systems, including remote data acquisition toward (quasi) real-time data recovery.

A Study on Automatically Information Collection of Underground Facility Using R-CNN Techniques (R-CNN 기법을 이용한 지중매설물 제원 정보 자동 추출 연구)

  • Hyunsuk Park;Kiman Hong;Yongsung Cho
    • Journal of the Society of Disaster Information / v.19 no.3 / pp.689-697 / 2023
  • Purpose: The purpose of this study is to automatically extract information on underground facilities using a general-purpose smartphone in the process of applying the mini-trenching method. Method: Data sets for image learning were collected under various conditions such as day and night, height, and angle, and the R-CNN algorithm was used for object detection. Result: F1-Score, which considers precision and recall simultaneously, was applied as the performance evaluation index, and the resulting F1-Score was 0.76. Conclusion: The results showed that it is possible to extract information on underground facilities with a smartphone, but the precision and accuracy of the algorithm need to be improved by securing additional training data and through on-site demonstration.
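F1 is the harmonic mean of precision and recall. A quick illustration (the raw detection counts below are hypothetical, chosen only to reproduce an F1 of 0.76, not the study's actual confusion matrix):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # fraction of detections that are correct
    recall = tp / (tp + fn)     # fraction of ground-truth objects found
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts (not from the paper) yielding F1 = 0.76
score = f1_score(tp=76, fp=24, fn=24)
```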

Data Fusion and Pursuit-Evasion Simulations for Position Evaluation of Tactical Objects (전술객체 위치 모의를 위한 데이터 융합 및 추적 회피 시뮬레이션)

  • Jin, Seung-Ri;Kim, Seok-Kwon;Son, Jae-Won;Park, Dong-Jo
    • Journal of the Korea Society for Simulation / v.19 no.4 / pp.209-218 / 2010
  • The aim of studying tactical object representation techniques in a synthetic environment is to acquire fundamental techniques for detecting and tracking tactical objects and evaluating the strategic situation in virtual terrain. Acquiring these techniques requires position tracking and evaluation of tactical objects, as well as a technique for sharing information between tactical models. In this paper, we study algorithms for sensor data fusion and coordinate conversion, proportional navigation guidance (PNG), and pursuit-evasion for engineering-level and higher-level models. Additionally, we simulate the position evaluation of tactical objects using pursuit and evasion maneuvers between a submarine and a torpedo.
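The PNG law commands an acceleration proportional to the closing speed and the rotation rate of the line of sight (LOS). A minimal 2-D sketch (the navigation constant and the geometry are illustrative; the paper's engagement models are far more detailed):

```python
import numpy as np

def png_accel(p_m, v_m, p_t, v_t, N=3.0):
    """2-D proportional navigation: a = N * Vc * d(lambda)/dt,
    applied perpendicular to the line of sight.

    p_m, v_m : pursuer (e.g. torpedo) position and velocity
    p_t, v_t : target (e.g. submarine) position and velocity
    """
    r = p_t - p_m                                   # LOS vector
    vr = v_t - v_m                                  # relative velocity
    r2 = float(r @ r)
    los_rate = (r[0] * vr[1] - r[1] * vr[0]) / r2   # LOS angular rate
    vc = -float(r @ vr) / np.sqrt(r2)               # closing speed
    n_hat = np.array([-r[1], r[0]]) / np.sqrt(r2)   # unit normal to LOS
    return N * vc * los_rate * n_hat

# Pursuer closing head-on: the LOS does not rotate, so PNG commands
# zero lateral acceleration.
a = png_accel(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
              np.array([100.0, 0.0]), np.array([0.0, 0.0]))
```

Intuitively, a nonzero LOS rate means the pursuer is drifting off a collision course, and the commanded lateral acceleration nulls that drift.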

Highly Flexible Piezoelectric Tactile Sensor based on PZT/Epoxy Nanocomposite for Texture Recognition (텍스처 인지를 위한 PZT/Epoxy 나노 복합소재 기반 유연 압전 촉각센서)

  • Yulim Min;Yunjeong Kim;Jeongnam Kim;Saerom Seo;Hye Jin Kim
    • Journal of Sensor Science and Technology / v.32 no.2 / pp.88-94 / 2023
  • Recently, piezoelectric tactile sensors have garnered considerable attention in the field of texture recognition owing to their high sensitivity and high-frequency detection capability. Despite their remarkable potential, improving their mechanical flexibility so they can attach to complex surfaces remains challenging. In this study, we present a flexible piezoelectric sensor that can be bent to an extremely small radius of 2.5 mm while maintaining good electrical performance. The proposed sensor was fabricated by controlling the thickness, which governs the internal stress under external deformation. The fabricated sensor exhibited a high sensitivity of 9.3 nA/kPa over the 0-10 kPa range and a wide frequency range of up to 1 kHz. To demonstrate real-time texture recognition by rubbing the sensor over an object's surface, nine sets of fabric plates reflecting different material properties and surface roughness were prepared. To extract features from the sensing data, the analog dataset was converted to short-time Fourier transform (STFT) images; texture recognition was then performed with a convolutional neural network, achieving a classification accuracy of 97%.
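Converting the 1-D sensor signal into STFT images for the CNN can be sketched as follows (the window length, hop size, and the synthetic 100 Hz "rubbing" signal are illustrative assumptions, not the paper's processing parameters):

```python
import numpy as np

def stft_image(signal, win=256, hop=128):
    """Magnitude spectrogram of a 1-D signal: windowed frames -> FFT.

    Returns an array of shape (freq_bins, time_frames), i.e. the kind
    of 2-D image a CNN classifier can consume.
    """
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

# Hypothetical 100 Hz tone sampled at 2 kHz, standing in for a
# texture-rubbing signal from the piezoelectric sensor
fs = 2000
t = np.arange(fs) / fs
spec = stft_image(np.sin(2 * np.pi * 100 * t))
```

With a 256-sample window at 2 kHz, each frequency bin spans fs/win ≈ 7.8 Hz, so the 100 Hz component concentrates around bin 13; different fabric textures would produce different time-frequency patterns in the image.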

Road Image Recognition Technology based on Deep Learning Using TIDL NPU in SoC Environment (SoC 환경에서 TIDL NPU를 활용한 딥러닝 기반 도로 영상 인식 기술)

  • Yunseon Shin;Juhyun Seo;Minyoung Lee;Injung Kim
    • Smart Media Journal / v.11 no.11 / pp.25-31 / 2022
  • Deep learning-based image processing is essential for autonomous vehicles. To process road images in real time in a System-on-Chip (SoC) environment, deep learning models must run on an NPU (Neural Processing Unit) specialized for deep learning operations. In this study, we ported seven open-source image processing deep learning models, originally developed on GPU servers, to the Texas Instruments Deep Learning (TIDL) NPU environment. Performance evaluation and visualization confirmed that the ported models operate normally in the SoC virtual environment. This paper describes the problems that occurred during migration due to the limitations of the NPU environment and how we solved them, presenting a reference case for developers and researchers who want to port deep learning models to SoC environments.

Smart Radar System for Life Pattern Recognition (생활패턴 인지가 가능한 스마트 레이더 시스템)

  • Sang-Joong Jung
    • Journal of the Institute of Convergence Signal Processing / v.23 no.2 / pp.91-96 / 2022
  • At the current level of camera-based technology, sensor-based life pattern recognition is inconvenient for obtaining accurate data, and commercial products struggle to collect accurate data and cannot account for the motive, cause, and psychological effects of behavior. In this paper, radar technology for life pattern recognition, which measures the distance, speed, and angle of an object by transmitting a waveform designed to detect nearby people or objects in daily life and processing the reflected received signal, is applied to address issues such as privacy protection in existing image-based services. The proposed system was implemented on a TI IWR1642 chip: RF chipset control for 60 GHz band millimeter-wave FMCW transmission and reception, a module for distance/speed/angle detection, and the accompanying signal processing software. By extracting personalized life patterns through quantitative analysis of living information, self-management and behavior sequences can be computed, enabling analysis of individual life patterns in security and safeguard applications.
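In FMCW radar, range follows from the beat frequency of the received chirp and radial velocity from the Doppler shift. A sketch with illustrative 60 GHz chirp parameters (these numbers are assumptions for the example, not the IWR1642's actual configuration):

```python
C = 3e8  # speed of light, m/s

def fmcw_range(f_beat, bandwidth, chirp_time):
    """Target range from the FMCW beat frequency:
    R = c * f_beat * T_chirp / (2 * B)."""
    return C * f_beat * chirp_time / (2 * bandwidth)

def fmcw_velocity(f_doppler, f_carrier):
    """Radial velocity from the Doppler shift: v = c * f_d / (2 * f_c)."""
    return C * f_doppler / (2 * f_carrier)

# Illustrative chirp: 4 GHz bandwidth, 40 us duration, 60 GHz carrier
r = fmcw_range(f_beat=2e6, bandwidth=4e9, chirp_time=40e-6)   # 3.0 m
v = fmcw_velocity(f_doppler=400.0, f_carrier=60e9)            # 1.0 m/s
```

Angle estimation additionally requires phase comparison across multiple receive antennas, which this sketch omits.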

Estimation of two-dimensional position of soybean crop for developing weeding robot (제초로봇 개발을 위한 2차원 콩 작물 위치 자동검출)

  • SooHyun Cho;ChungYeol Lee;HeeJong Jeong;SeungWoo Kang;DaeHyun Lee
    • Journal of Drive and Control / v.20 no.2 / pp.15-23 / 2023
  • In this study, the two-dimensional locations of crops for automatic weeding were detected using deep learning. To construct a dataset for soybean detection, an image-capturing system was developed using a mono camera and a single-board computer, and the system was mounted on a weeding robot to collect soybean images. The dataset was constructed by extracting RoIs (regions of interest) from the raw images, and each sample was labeled as soybean or background for classification learning. The deep learning model consisted of four convolutional layers and was trained with a weakly supervised learning method that provides object localization using only image-level labels. The soybean area can be visualized via CAM, and the two-dimensional position of the soybean was estimated by clustering the pixels associated with the soybean area and transforming the pixel coordinates to world coordinates. Against ground-truth positions determined manually as pixel coordinates in the image, the errors in world coordinates were 6.6 (X-axis) and 5.1 (Y-axis) for MSE, and 1.2 (X-axis) and 2.2 (Y-axis) for RMSE, respectively. From these results, we confirmed that the center position of the soybean area derived through deep learning is sufficiently accurate for use in automatic weeding systems.
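The last two steps, mapping clustered pixel coordinates to world coordinates and scoring them per axis against manual ground truth, can be sketched as follows (the homography-based calibration is an illustrative assumption, not the paper's actual camera model):

```python
import numpy as np

def pixels_to_world(pixels, H):
    """Map Nx2 pixel coordinates to world coordinates through a 3x3
    homography H (hypothetical calibration matrix)."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
    w = pts @ H.T
    return w[:, :2] / w[:, 2:3]                           # dehomogenize

def rmse_per_axis(pred, truth):
    """Per-axis RMSE between estimated and manually labeled positions."""
    err = np.asarray(pred, float) - np.asarray(truth, float)
    return np.sqrt((err ** 2).mean(axis=0))

# With an identity homography, pixel and world coordinates coincide;
# the ground truth is shifted to produce a known per-axis error.
px = np.array([[10.0, 20.0], [30.0, 40.0]])
world = pixels_to_world(px, np.eye(3))
errors = rmse_per_axis(world, px + np.array([3.0, 4.0]))
```

In practice H would be estimated from a calibration target on the ground plane, and the clustered CAM pixel centroid would be passed through the same transform before comparison.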