• Title/Abstract/Keywords: driving image generation


PC 기반형 자동차 운전 연습기 개발 (Development of car driving trainer under PC environment)

  • 이승호;김성덕
    • 제어로봇시스템학회논문지 / Vol. 3 No. 4 / pp.415-421 / 1997
  • A car driving trainer for beginners, developed in a PC-based environment, is described in this paper. The hardware is implemented as a practice car, and the trainer program uses computer image generation to display three-dimensional images on a CRT monitor. The trainer program consists of three main parts: a speed estimation part, a wheel trace calculation part and a driving image generation part. Furthermore, a map editor is installed so that any test drive can be taken. Comparative evaluation verified that the developed car driving trainer has good performance, such as lower cost, higher resolution and faster image display.
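The abstract names a wheel trace calculation part but does not describe it; one common way to compute a wheel trace is a kinematic bicycle model, sketched minimally below. The wheelbase and time-step values are illustrative assumptions, not parameters from the paper.

```python
import math

def bicycle_step(x, y, heading, speed, steer, wheelbase=2.5, dt=0.05):
    """Advance a kinematic bicycle model by one time step.

    x, y: rear-axle position (m); heading: yaw angle (rad);
    speed: longitudinal speed (m/s); steer: front-wheel angle (rad).
    """
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += (speed / wheelbase) * math.tan(steer) * dt
    return x, y, heading

# Straight driving at 10 m/s for one 50 ms step moves the car 0.5 m forward.
x, y, h = bicycle_step(0.0, 0.0, 0.0, 10.0, 0.0)
```

Iterating this step along recorded speed and steering inputs yields the trace of the rear axle; front-wheel traces follow by offsetting along the heading by the wheelbase.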


가상현실을 이용한 실시간 차량 그래픽 주행 시뮬레이터 (A Real-Time Graphic Driving Simulator Using Virtual Reality Technique)

  • 장재원;손권;최경현;송남용
    • 한국정밀공학회지 / Vol. 17 No. 7 / pp.80-89 / 2000
  • Driving simulators provide engineers with a powerful tool in the development and modification stages of vehicle models. One of the most important factors in realistic simulation is the fidelity obtained from a motion bed and a real-time visual image generation algorithm. Virtual reality technology has been widely used to enhance the fidelity of vehicle simulators. This paper develops the virtual environment for a visual system, such as a head-mounted display, for a vehicle driving simulator. Virtual vehicle and environment models are constructed using the object-oriented analysis and design approach. Based on the object model, a three-dimensional graphic model is completed with CAD tools such as Rhino and Pro/ENGINEER. For real-time image generation, the optimized IRIS Performer 3D graphics library is embedded with a multi-thread methodology. The developed software for a virtual driving simulator offers an effective interface to virtual reality devices.


자동차 시뮬레이터의 가상환경 구성에 대한 연구 (Construction of Virtual Environment for a Vehicle Simulator)

  • 장재원;손권;최경현
    • 한국자동차공학회논문집 / Vol. 8 No. 4 / pp.158-168 / 2000
  • Vehicle driving simulators provide engineers with benefits in the development and modification of vehicle models. One of the most important factors in realistic simulation is the fidelity given by a motion system and a real-time visual image generation system. Virtual reality technology has been widely used to achieve high fidelity. In this paper, the virtual environment, including a visual system such as a head-mounted display, is developed for a vehicle driving simulator system by employing the virtual reality technique. Virtual vehicle and environment models are constructed using the object-oriented analysis and design approach. According to the object model, a three-dimensional graphic model is developed with CAD tools such as Rhino and Pro/E. For real-time image generation, the optimized IRIS Performer 3D graphics library is embedded with a multi-thread methodology. Compared with the single-loop approach, the proposed methodology yields an acceptable image generation speed of 20 frames/s for the simulator.


DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun;Kang, Jungyu;Min, Kyoung-Wook;Choi, Jungdan
    • ETRI Journal / Vol. 43 No. 4 / pp.603-616 / 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed as a vision-based method, so fast execution can be expected; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method with mathematical analysis, two key-point sampling methods using the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation yielded a translation error of 0.69%, a rotation error of 0.0031°/m on the KITTI training dataset, and an execution time of 17 ms. The results demonstrate precision comparable with the state of the art and remarkably higher speed than conventional techniques.
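The spherical projection at the heart of an SRI can be sketched as below: each point's azimuth selects a column and its elevation a row, and the range is stored at that pixel. The 64×1024 resolution and the 3° to -25° vertical field of view are typical values for the KITTI LiDAR, assumed here for illustration; this is not the paper's exact formulation.

```python
import numpy as np

def to_spherical_range_image(points, height=64, width=1024,
                             fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto a 2-D spherical range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range of each point
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                        # elevation angle
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    col = np.floor(0.5 * (1.0 - yaw / np.pi) * width)        # azimuth -> column
    row = np.floor((fov_up - pitch) / (fov_up - fov_down) * height)  # elevation -> row
    col = np.clip(col, 0, width - 1).astype(int)
    row = np.clip(row, 0, height - 1).astype(int)
    sri = np.zeros((height, width), dtype=np.float32)
    sri[row, col] = r                               # later points overwrite earlier ones
    return sri

# A single point 1 m straight ahead maps to the middle column.
sri = to_spherical_range_image(np.array([[1.0, 0.0, 0.0]]))
```

The sparsity problem the abstract mentions shows up directly here: most pixels of `sri` stay zero, which is why the paper pairs the projection with a dedicated generation method and key-point sampling.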

자율주행을 위한 이중초점 스테레오 카메라 시스템을 이용한 깊이 영상 생성 방법 (Depth Generation using Bifocal Stereo Camera System for Autonomous Driving)

  • 이은경
    • 한국전자통신학회논문지 / Vol. 16 No. 6 / pp.1311-1316 / 2021
  • In this paper, we propose a bifocal stereo camera system that combines two cameras with different focal lengths to generate two-view stereo images and the corresponding depth maps. To generate a depth map with the proposed bifocal stereo camera system, camera calibration is first performed to extract the camera parameters of the two cameras with different focal lengths. Using the camera parameters, a common image plane is created and stereo image rectification is performed. Finally, a depth map is generated from the rectified stereo images; in this paper, the semi-global matching (SGM) algorithm is used for depth map generation. The proposed bifocal stereo camera system performs the functions required of the cameras with different focal lengths while, through stereo matching between the two cameras, also producing distance information to vehicles, pedestrians and obstacles in the current driving environment, enabling the design of safer autonomous vehicles.
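Once the pair is rectified and matched, depth follows from disparity by the standard pinhole stereo relation Z = f·B/d. A minimal sketch; the focal length and baseline below are illustrative numbers, not the paper's calibration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a rectified stereo match: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 35-pixel disparity with a 700-pixel focal length and a 0.5 m baseline
# corresponds to a point 10 m away.
z = depth_from_disparity(35.0, 700.0, 0.5)
```

The relation also shows why rectification onto a common image plane matters for a bifocal pair: the formula assumes a single effective focal length and purely horizontal disparities.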

Characteristics of Motion-blur Free TFT-LCD using Short Persistent CCFL in Blinking Backlight Driving

  • Han, Jeong-Min;Ok, Chul-Ho;Hwang, Jeoung-Yeon;Seo, Dae-Shik
    • Transactions on Electrical and Electronic Materials / Vol. 8 No. 4 / pp.166-169 / 2007
  • In applying LCDs to TV applications, one of the most important factors to be improved is image sticking in moving pictures. An LCD differs from a CRT in that it is a continuous hold-type device, which holds an image for the entire frame period, whereas an impulse-type device generates the image in a very short time. To reduce the image-sticking problem of the hold-type display mode, we experimented with driving a TN-LCD like a CRT. We produced impulse-like images by turning the backlight on and off, and controlled the ratio of backlight on/off time by counting the on and off times of the video signal input during one frame (16.7 ms). A conventional CCFL (cold cathode fluorescent lamp) cannot follow a fast on-off cycle, so we evaluated new fluorescent substances for the light source to improve the residual-light characteristic of the CCFL. We achieved impulse-like image generation similar to a CRT by CCFL blinking drive and TN-LCD overdriving. As a result, the reduced image-sticking phenomenon was validated by the naked eye and by response-time measurement.
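The on/off split of the 16.7 ms frame described above can be made concrete with a small sketch. The 60 Hz frame period comes from the abstract; the duty-ratio value in the example is arbitrary.

```python
FRAME_MS = 1000.0 / 60.0  # one frame period at 60 Hz, about 16.7 ms

def backlight_on_off(duty_ratio):
    """Split one frame into backlight-on and backlight-off durations (ms)."""
    if not 0.0 <= duty_ratio <= 1.0:
        raise ValueError("duty ratio must be in [0, 1]")
    on_ms = FRAME_MS * duty_ratio
    return on_ms, FRAME_MS - on_ms

# A 25 % duty cycle lights the backlight for about 4.2 ms per frame.
on_ms, off_ms = backlight_on_off(0.25)
```

Shorter on-times approximate an impulse-type display more closely, which is exactly why the short-persistence phosphor matters: the lamp must decay within the off interval.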

딥러닝 기반 3차원 라이다의 반사율 세기 신호를 이용한 흑백 영상 생성 기법 (Deep Learning Based Gray Image Generation from 3D LiDAR Reflection Intensity)

  • 김현구;유국열;박주현;정호열
    • 대한임베디드공학회논문지 / Vol. 14 No. 1 / pp.1-9 / 2019
  • In this paper, we propose a method of generating a 2D gray image from 3D LiDAR reflection intensity. The proposed method uses a fully convolutional network (FCN) to generate the gray image from the 2D reflection intensity projected from the LiDAR 3D intensity. Both the encoder and the decoder of the FCN are configured with several convolution blocks in a symmetric fashion. Each convolution block consists of a convolution layer with a 3×3 filter, a batch-normalization layer and an activation function. The performance of the proposed architecture is empirically evaluated by varying the depth of the convolution blocks. The well-known KITTI dataset, covering various scenarios, is used for training and performance evaluation. The simulation results show that the proposed method yields improvements of 8.56 dB in peak signal-to-noise ratio and 0.33 in structural similarity index measure over conventional interpolation methods such as inverse distance weighting and nearest neighbor. The proposed method can be used as an assistance tool in night-time driving systems for autonomous vehicles.
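The PSNR metric cited above has a standard definition worth spelling out; this is a generic implementation, not code from the paper, and the 8-bit peak value of 255 is an assumption.

```python
import numpy as np

def psnr_db(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak * peak / mse)

# An 8-bit image off by exactly 1 everywhere has MSE 1 and PSNR of about 48.13 dB.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.ones((4, 4), dtype=np.uint8)
value = psnr_db(a, b)
```

Because PSNR is logarithmic, the reported 8.56 dB gain corresponds to roughly a sevenfold reduction in mean squared error relative to the interpolation baselines.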

PathGAN: Local path planning with attentive generative adversarial networks

  • Dooseop Choi;Seung-Jun Han;Kyoung-Wook Min;Jeongdan Choi
    • ETRI Journal / Vol. 44 No. 6 / pp.1004-1019 / 2022
  • For autonomous driving without high-definition maps, we present a model capable of generating multiple plausible paths from egocentric images for autonomous vehicles. Our generative model comprises two neural networks: feature extraction network (FEN) and path generation network (PGN). The FEN extracts meaningful features from an egocentric image, whereas the PGN generates multiple paths from the features, given a driving intention and speed. To ensure that the paths generated are plausible and consistent with the intention, we introduce an attentive discriminator and train it with the PGN under a generative adversarial network framework. Furthermore, we devise an interaction model between the positions in the paths and the intentions hidden in the positions and design a novel PGN architecture that reflects the interaction model for improving the accuracy and diversity of the generated paths. Finally, we introduce ETRIDriving, a dataset for autonomous driving, in which the recorded sensor data are labeled with discrete high-level driving actions, and demonstrate the state-of-the-art performance of the proposed model on ETRIDriving in terms of accuracy and diversity.

교통인프라 센서융합 기술을 활용한 실시간 교통정보 생성 기술 개발 (Development of Real-time Traffic Information Generation Technology Using Traffic Infrastructure Sensor Fusion Technology)

  • 김성진;한수호;김기환;김정래
    • 한국IT서비스학회지 / Vol. 22 No. 2 / pp.57-70 / 2023
  • In order to establish an autonomous driving environment, it is necessary to study traffic safety and demand prediction by analyzing information generated from the transportation infrastructure, beyond relying on the vehicle's own sensors. In this paper, we propose a real-time traffic information generation method using the sensor fusion technology of transportation infrastructure. The proposed method uses sensors such as cameras and radars installed in the transportation infrastructure to generate information according to the characteristics of each sensor, such as crosswalk pedestrian presence, judgment of a pause at a crosswalk, distance to the stop line, queue length, headway and vehicle spacing. An experiment was conducted in a demonstration environment by comparing the proposed method with drone measurements. The results confirmed that pedestrians at crosswalks and pauses in front of crosswalks could be recognized, and most data, such as distance to the stop line and queue length, showed more than 95% accuracy, so the method was judged to be usable.
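Of the quantities listed, headway has a simple definition worth making explicit: the spacing to the lead vehicle divided by the following vehicle's speed. A minimal sketch; the example numbers are arbitrary, not measurements from the paper.

```python
def time_headway_s(spacing_m, follower_speed_mps):
    """Time headway: spacing to the lead vehicle divided by follower speed."""
    if follower_speed_mps <= 0:
        raise ValueError("follower speed must be positive")
    return spacing_m / follower_speed_mps

# Following 30 m behind the lead vehicle at 15 m/s gives a 2 s headway.
thw = time_headway_s(30.0, 15.0)
```

An infrastructure sensor that tracks successive vehicles' positions and speeds can emit this value per vehicle pair, which is presumably the kind of derived quantity the fusion pipeline produces.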

딥러닝을 이용한 차로이탈 경고 시스템 (Lane Departure Warning System using Deep Learning)

  • 최승완;이건태;김광수;곽수영
    • 한국산업정보학회논문지 / Vol. 24 No. 2 / pp.25-31 / 2019
  • With the recent rapid development of artificial intelligence, many studies have applied deep learning to advanced driver assistance systems (ADAS) to outperform existing techniques. Following this trend, this paper proposes a method that applies deep learning to the lane departure warning system, one of the key components of ADAS. Its performance was evaluated by comparing the proposed method with a conventional lane-detection-based warning system. In both of two different test environments, highway driving video and urban driving video, the proposed method showed higher accuracy and precision.
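Whichever detector supplies the lane geometry, the warning decision itself reduces to a lateral-offset test against the lane boundaries. The sketch below is a generic threshold baseline, not the paper's deep-learning method, and the lane width and margin are illustrative assumptions.

```python
def should_warn(lateral_offset_m, lane_width_m=3.5, margin_m=0.3):
    """Warn when the vehicle centre drifts within margin_m of a lane boundary.

    lateral_offset_m: signed distance of the vehicle centre from the lane centre.
    """
    threshold = lane_width_m / 2.0 - margin_m
    return abs(lateral_offset_m) > threshold

# Centred driving is fine; a 1.6 m drift in a 3.5 m lane triggers a warning.
ok = should_warn(0.0)
warn = should_warn(1.6)
```

The comparison in the paper then amounts to how reliably each method estimates that lateral offset across highway and urban footage.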