• Title/Abstract/Keywords: 2D Dataset

Search results: 200 (processing time 0.028 s)

열악한 환경에서의 자율주행을 위한 다중센서 데이터셋 구축 (Build a Multi-Sensor Dataset for Autonomous Driving in Adverse Weather Conditions)

  • 심성대;민지홍;안성용;이종우;이정석;배광탁;김병준;서준원;최덕선
    • 로봇학회논문지, Vol. 17, No. 3, pp. 245-254, 2022
  • Sensor datasets for autonomous driving have become essential as deep learning approaches are widely adopted. However, most driving datasets focus on typical environments such as sunny or cloudy weather, and most deal only with color images and lidar. In this paper, we propose a driving dataset with multi-spectral images and lidar in adverse weather conditions such as snow, rain, smoke, and dust. The proposed data acquisition system has four types of cameras (color, near-infrared, shortwave, thermal), one lidar, two radars, and a navigation sensor. Ours is the first dataset to handle multi-spectral cameras in adverse weather conditions. The proposed dataset is annotated with 2D semantic labels, 3D semantic labels, and 2D/3D bounding boxes. Many tasks are available on our dataset, for example, object detection and drivable-region detection. We also present some experimental results on the adverse weather dataset.

국내 도로 환경에 특화된 자율주행을 위한 멀티카메라 데이터 셋 구축 및 유효성 검증 (Construction and Effectiveness Evaluation of Multi Camera Dataset Specialized for Autonomous Driving in Domestic Road Environment)

  • 이진희;이재근;박재형;김제석;권순
    • 대한임베디드공학회논문지, Vol. 17, No. 5, pp. 273-280, 2022
  • Along with the advancement of deep learning technology, securing high-quality datasets for verifying developed technology has emerged as an important issue, and developing deep learning models robust to the domestic road environment is a focus of many research groups. In particular, unlike expressways and automobile-only roads, the complex urban driving environment mixes various dynamic objects such as motorbikes, electric kickboards, large buses, trucks, freight cars, pedestrians, and traffic lights. In this paper, we built our dataset through multi-camera-based processing (collection, refinement, and annotation) covering the various objects on city roads, and estimated the quality and validity of our dataset using a YOLO-based object detection model. Quantitative evaluation was then performed by comparison with a public dataset, and qualitative evaluation by comparison with experimental results obtained on an open platform. We generated our 2D dataset based on the annotation rules of the KITTI/COCO datasets and compared performance against the public dataset using the KITTI/COCO evaluation rules. In this comparison, our dataset shows about 3% to 53% higher performance, validating its effectiveness.
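KITTI/COCO-style evaluation matches detections to ground truth by intersection-over-union (IoU); a minimal sketch with axis-aligned boxes in `[x_min, y_min, x_max, y_max]` form (an assumed layout):

```python
# Intersection-over-Union (IoU) between two axis-aligned boxes,
# the core matching criterion in KITTI/COCO-style evaluation.
# Boxes are [x_min, y_min, x_max, y_max] in pixels (an assumed format).

def iou(box_a, box_b):
    # Corners of the intersection rectangle (empty if boxes don't overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection typically counts as a true positive when IoU exceeds a
# threshold (e.g. 0.5; KITTI uses 0.7 for the car class).
print(iou([0, 0, 10, 10], [5, 0, 15, 10]))  # 50/150 ≈ 0.333
```

A mean average precision (AP) score then aggregates these matches over confidence thresholds and classes.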

승용자율주행을 위한 의미론적 분할 데이터셋 유효성 검증 (Validation of Semantic Segmentation Dataset for Autonomous Driving)

  • 곽석우;나호용;김경수;송은지;정세영;이계원;정지현;황성호
    • 드라이브 ㆍ 컨트롤, Vol. 19, No. 4, pp. 104-109, 2022
  • For autonomous driving research using AI, datasets collected from road environments play an important role. Various datasets such as CityScapes, A2D2, and BDD have already been released in other countries, but datasets suited to the domestic road environment have yet to be provided. This paper analyzed and verified a dataset reflecting the Korean driving environment. To verify the training dataset, class imbalance was confirmed by comparing the numbers of pixels and instances per class. The similar A2D2 dataset was trained with the same deep learning model, ConvNeXt, to compare against and verify the constructed dataset. With ConvNeXt, IoU was compared for the same classes between the two datasets, and mIoU was compared as well. This paper confirms that the collected dataset reflecting the driving environment of Korea is suitable for training.
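The per-class IoU and mIoU comparison described above can be computed directly from a class confusion matrix; a minimal sketch (the two-class matrix is illustrative):

```python
import numpy as np

# Per-class IoU and mean IoU (mIoU) from a segmentation confusion matrix.
# conf[i, j] = number of pixels with true class i predicted as class j.

def miou(conf):
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                                # true positives per class
    denom = conf.sum(axis=1) + conf.sum(axis=0) - tp  # TP + FN + FP
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return iou, np.nanmean(iou)                       # ignore absent classes

# Illustrative 2-class confusion matrix (e.g. road vs. not-road).
conf = np.array([[50, 5],
                 [10, 35]])
per_class, mean_iou = miou(conf)
```

Comparing the same classes across two datasets then amounts to comparing the corresponding entries of `per_class`.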

Compressive sensing-based two-dimensional scattering-center extraction for incomplete RCS data

  • Bae, Ji-Hoon;Kim, Kyung-Tae
    • ETRI Journal, Vol. 42, No. 6, pp. 815-826, 2020
  • We propose a two-dimensional (2D) scattering-center-extraction (SCE) method using sparse recovery based on compressive-sensing theory, even with data missing from the received radar cross-section (RCS) dataset. First, using the proposed method, we generate a 2D grid via adaptive discretization that is considerably smaller than a fully sampled fine grid. Subsequently, a coarse estimation of the 2D scattering centers is performed using both the iteratively reweighted least squares (IRLS) method and a general peak-finding algorithm. Finally, a fine estimation of the 2D scattering centers is performed using the orthogonal matching pursuit (OMP) procedure with an adaptively sampled Fourier dictionary. Measured RCS data, as well as simulation data from a point-scatterer model, are used to evaluate the 2D SCE accuracy of the proposed method. The results indicate that the proposed method achieves higher SCE accuracy on an incomplete RCS dataset with missing data than the conventional OMP, basis pursuit, smoothed L0, and existing discrete spectral-estimation techniques.
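The fine-estimation step relies on orthogonal matching pursuit. A generic sketch of OMP follows; for a self-checking demo, the paper's adaptively sampled Fourier dictionary is replaced by a small orthonormal one, where exact recovery is guaranteed:

```python
import numpy as np

# Orthogonal matching pursuit (OMP): greedily recover a sparse x from y = A x
# by repeatedly picking the dictionary atom most correlated with the current
# residual, then re-fitting all selected atoms by least squares.

def omp(A, y, k):
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        # Least-squares fit over the whole support (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Sanity check with an orthonormal dictionary (exact recovery guaranteed).
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((8, 8)))
x_true = np.zeros(8)
x_true[[2, 5]] = [1.5, -0.8]
x_hat = omp(A, A @ x_true, k=2)
```

In the compressive-sensing setting of the paper, `A` would instead be a tall, overcomplete Fourier dictionary evaluated only at the available (non-missing) RCS samples.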

강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool (Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection)

  • 전명환;이영준;신영식;장혜수;여태경;김아영
    • 로봇학회논문지, Vol. 14, No. 2, pp. 139-149, 2019
  • In this paper, we present an auto-annotation tool and a synthetic dataset generated from 3D CAD models for deep-learning-based object detection. To serve as training data for deep learning methods, class, segmentation, bounding-box, contour, and pose annotations of each object are needed. We propose automated annotation together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify our synthetic dataset, we use Mask R-CNN, a state-of-the-art deep-learning object detection model. For the experiments, we built an environment reflecting actual underwater conditions. We show that an object detection model trained on our dataset achieves significantly accurate results and robustness in the underwater environment. Lastly, we verify that our synthetic dataset is suitable for deep learning models in underwater environments.
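One concrete piece of such an automated annotation pipeline, deriving a tight 2D bounding box and pixel area from a rendered binary object mask, can be sketched as follows (array shapes and the box layout are illustrative assumptions):

```python
import numpy as np

# Auto-annotation step: derive a tight bounding box and pixel area from a
# binary object mask, as produced e.g. by rendering a 3D CAD model.

def mask_to_bbox(mask):
    ys, xs = np.nonzero(mask)          # coordinates of object pixels
    if ys.size == 0:
        return None                    # empty mask: object fully occluded
    # [x_min, y_min, x_max, y_max], inclusive pixel coordinates.
    return [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())]

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:7] = True                  # synthetic object: rows 2-4, cols 3-6
print(mask_to_bbox(mask))              # [3, 2, 6, 4]
print(int(mask.sum()))                 # 12 pixels of segmentation area
```

Contours and pose annotations would come from the same rendered mask plus the known CAD-model transform.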

가상환자 데이터세트를 기반으로 악관절과 심미를 고려한 진단 및 치료계획 수립 (From TMJ to 3D Digital Smile Design with Virtual Patient Dataset for diagnosis and treatment planning)

  • 이수영;강동휘;이도연;김희철
    • 대한심미치과학회지, Vol. 30, No. 2, pp. 71-90, 2021
  • A virtual patient dataset is a collection of diagnostic data from multiple sources, such as intraoral scans, facial scans, full-body scans, and mandibular motion-path data, all acquired from a single patient and aligned into one three-dimensional coordinate system. Using a virtual patient dataset, the dentist can establish a treatment plan effectively, simulate various treatment plans in virtual space, design the patient's smile on the dataset, simulate the results, and select the optimal treatment outcome. The treatment plan selected in virtual space can then be delivered to the patient unchanged using manufacturing technologies such as 3D printing, milling, and injection molding. Delivery of the plan can proceed from fabrication of provisional restorations and intraoral mock-up verification to fabrication of the final prosthesis. Accordingly, if the accuracy of the diagnostic data, superimposition, and fabrication is guaranteed, a 3D digital smile design simulated in three-dimensional virtual space can be transferred accurately to the actual patient. As clinical applications of the virtual patient dataset, we present a decision-making method for excluding occlusal-adjustment treatment from the treatment plan based on dynamic functional occlusion measurement, a method for comparing full-body scans before and after temporomandibular joint treatment in an adolescent idiopathic scoliosis patient with temporomandibular disorder, and occlusal-plane analysis and digital esthetic analysis based on the virtual patient dataset for a full-mouth rehabilitation case involving maxillary and mandibular complete dentures.
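Aligning scans from different devices into one coordinate system is, at its core, a rigid registration problem. A minimal sketch of the classical Kabsch/Procrustes solution, assuming point correspondences (e.g. landmarks) are already known; the names and demo transform are illustrative, not from the paper:

```python
import numpy as np

# Rigid alignment (Kabsch/Procrustes): given corresponding 3D points from two
# scans, find the rotation R and translation t that best map the source onto
# the target, i.e. the step that brings data from different devices into one
# shared coordinate system.

def kabsch(src, dst):
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))           # avoid an improper reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt          # row-vector convention
    t = dst_mean - src_mean @ R
    return R, t                                  # aligned = src @ R + t

# Demo: recover a known rotation about z plus a translation exactly.
theta = 0.3
Rz = np.array([[np.cos(theta), np.sin(theta), 0.0],
               [-np.sin(theta), np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
rng = np.random.default_rng(1)
src = rng.standard_normal((10, 3))
dst = src @ Rz + np.array([1.0, 2.0, 3.0])
R, t = kabsch(src, dst)
```

In practice, correspondences between intraoral, facial, and full-body scans come from shared landmarks or an ICP-style refinement, with this closed-form step at the center.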

A Comprehensive Analysis of Deformable Image Registration Methods for CT Imaging

  • Kang Houn Lee;Young Nam Kang
    • 대한의용생체공학회:의공학회지, Vol. 44, No. 5, pp. 303-314, 2023
  • This study aimed to assess the practical feasibility of advanced deformable image registration (DIR) algorithms in radiotherapy by employing two distinct datasets. The first dataset included 14 4D lung CT scans and 31 head and neck CT scans. In the 4D lung CT dataset, we employed the DIR algorithms to register organs at risk and tumors across respiratory phases. The second dataset comprised pre-, mid-, and post-treatment CT images of the head and neck region, along with organ-at-risk and tumor delineations. These images underwent registration using the DIR algorithms, and Dice similarity coefficients (DSCs) were compared. In the 4D lung CT dataset, registration accuracy was evaluated for the spinal cord, lung, lung nodules, esophagus, and tumors. The average DSCs for the non-learning-based SyN and NiftyReg algorithms were 0.92±0.07 and 0.88±0.09, respectively. Deep learning methods, namely VoxelMorph, CycleMorph, and TransMorph, achieved average DSCs of 0.90±0.07, 0.91±0.04, and 0.89±0.05, respectively. For the head and neck CT dataset, the average DSCs for SyN and NiftyReg were 0.82±0.04 and 0.79±0.05, respectively, while VoxelMorph, CycleMorph, and TransMorph showed average DSCs of 0.80±0.08, 0.78±0.11, and 0.78±0.09, respectively. Additionally, the deep learning DIR algorithms demonstrated faster transformation times than the other models, including commercial and conventional mathematical algorithms (VoxelMorph: 0.36 s/image, CycleMorph: 0.3 s/image, TransMorph: 5.1 s/image, SyN: 140 s/image, NiftyReg: 40.2 s/image). In conclusion, this study highlights the varying clinical applicability of deep-learning-based DIR methods in different anatomical regions. While challenges were encountered in head and neck CT registrations, 4D lung CT registrations exhibited favorable results, indicating potential for clinical implementation. Further research and development of DIR algorithms tailored to specific anatomical regions are warranted to improve the overall clinical utility of these methods.
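The Dice similarity coefficient used throughout the comparison above measures volume overlap between two delineations, DSC = 2|A ∩ B| / (|A| + |B|); a minimal sketch on binary masks:

```python
import numpy as np

# Dice similarity coefficient (DSC) between two binary segmentation masks:
# DSC = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical).

def dice(a, b):
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks are considered a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 voxels
b = np.zeros((10, 10), dtype=bool); b[4:8, 2:6] = True   # 16 voxels, half overlap
print(dice(a, b))  # 2*8 / (16+16) = 0.5
```

In the study, `a` would be a propagated (registered) organ or tumor contour and `b` the reference delineation on the target CT.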

Selection of features and hidden Markov model parameters for English word recognition from Leap Motion air-writing trajectories

  • Deval Verma;Himanshu Agarwal;Amrish Kumar Aggarwal
    • ETRI Journal, Vol. 46, No. 2, pp. 250-262, 2024
  • Air-writing recognition is relevant in areas such as natural human-computer interaction, augmented reality, and virtual reality. A trajectory is the most natural way to represent air writing. We analyze the recognition accuracy of words written in air considering five features, namely, writing direction, curvature, trajectory, orthocenter, and ellipsoid, as well as different parameters of a hidden Markov model classifier. Experiments were performed on two representative datasets whose sample trajectories were collected using a Leap Motion Controller from a fingertip performing air writing. Dataset D1 contains 840 English words from 21 classes, and dataset D2 contains 1600 English words from 40 classes. A genetic algorithm was combined with a hidden Markov model classifier to obtain the best subset of features. The combination {trajectory, orthocenter, writing direction, curvature} provided the best feature set, achieving recognition accuracies of 98.81% and 83.58% on datasets D1 and D2, respectively.
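With only five candidate features, the space the genetic algorithm searches has just 2⁵ = 32 subsets, so the selection step can even be sketched exhaustively. The scoring function below is a hypothetical stand-in for training and evaluating the HMM classifier on each subset; its numbers are invented for illustration and are not taken from the paper:

```python
from itertools import combinations

# Exhaustive feature-subset search over the five trajectory features.
# score() is a hypothetical stand-in for "train HMM on subset, return accuracy".

FEATURES = ["writing_direction", "curvature", "trajectory", "orthocenter", "ellipsoid"]

def score(subset):
    # Invented per-feature contributions; "ellipsoid" is given a penalty
    # purely to illustrate that adding a feature can hurt accuracy.
    base = {"writing_direction": 0.25, "curvature": 0.20, "trajectory": 0.30,
            "orthocenter": 0.22, "ellipsoid": -0.05}
    return sum(base[f] for f in subset)

best = max(
    (frozenset(c) for r in range(1, len(FEATURES) + 1)
     for c in combinations(FEATURES, r)),
    key=score,
)
print(sorted(best))  # ['curvature', 'orthocenter', 'trajectory', 'writing_direction']
```

A genetic algorithm becomes worthwhile only when the feature count makes exhaustive evaluation of every subset infeasible.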

한국인의 인체정보 활용을 위한 전략적 요인에 관한 연구 (A Study on Strategic Factors for the Application of Digitalized Korean Human Dataset)

  • 박동진;이상태;이상호;이승복;신동선
    • 디지털융복합연구, Vol. 8, No. 2, pp. 203-216, 2010
  • This study concerns strategic planning for the use of the digitalized Korean human-body dataset. Specifically, it identifies and organizes the key decision factors needed to build an R&D strategy portfolio for enhancing national competitiveness. In other countries as well, the development of digital human-body datasets and visualization applications has been pursued as strategic R&D projects at the national level. To achieve the research objective, we organized a panel of experts in the field and, through them, identified an R&D vision, a SWOT analysis and strategies, research areas, and detailed tasks. For strategic planning, the detailed tasks were prioritized in terms of importance and urgency. Brainstorming, the Delphi method, and the Analytic Hierarchy Process (AHP) were adopted as research methods. The results of this study will serve not only as a guideline for future R&D portfolio development but also as an important framework for evaluating research investment in the field.
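The AHP step mentioned above derives priority weights for the detailed tasks from pairwise importance judgments. A minimal sketch using the geometric-mean approximation and Saaty's consistency ratio; the 3×3 comparison matrix is illustrative, not from the study:

```python
import numpy as np

# AHP priority weights from a pairwise-comparison matrix M, where M[i, j]
# says how much more important criterion i is than criterion j (Saaty scale),
# plus the consistency ratio (CR; judgments are usually accepted if CR < 0.1).

def ahp_weights(M):
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    w = np.prod(M, axis=1) ** (1.0 / n)   # geometric mean of each row
    w /= w.sum()                          # normalize to priority weights
    lam = (M @ w / w).mean()              # estimate of the principal eigenvalue
    ci = (lam - n) / (n - 1)              # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return w, (ci / ri if ri else 0.0)

M = [[1, 3, 5],
     [1/3, 1, 3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(M)
```

Here `w` ranks the three hypothetical tasks by priority, and `cr` confirms the pairwise judgments are acceptably consistent.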


CAD 모델 기반의 4D CT 데이터 제작 의용공학 융합 프로그램 개발 (Development of 4D CT Data Generation Program based on CAD Models through the Convergence of Biomedical Engineering)

  • 서정민;한민철;이현수;이세형;김찬형
    • 한국융합학회논문지, Vol. 8, No. 4, pp. 131-137, 2017
  • In this study, we developed a program that converts Computer-Aided Design (CAD) models into 4D CT data. To verify the performance of the developed program, a respiration-simulating phantom capable of mimicking human breathing was modeled with a CAD-based program as a convergence of engineering and medicine, and CAD2DICOM was developed to convert this model into DICOM-format 4D CT data containing ten phase images. The accuracy and validity of the generated 4D CT data were then evaluated with a radiotherapy treatment planning system in terms of image resolution, tumor volume, and tumor position. As a result, the generated 4D CT data were recognized normally by the treatment planning system; the tumor volume was 8.8 cc in all phases with no deviation, and the tumor motion matched the configured 10 mm, confirming that the settings were accurately reflected. Because the developed program can produce standard images free of the artifacts that can occur in actual 4D CT acquisition, it is expected to be used in future studies on motion-sensitive 4D radiotherapy planning and 4D radiological image evaluation.
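A sketch of the kind of data such a program produces: ten respiratory-phase volumes in which a spherical "tumor" follows a 10 mm peak-to-peak sinusoidal motion while its volume stays essentially constant. The grid size, voxel spacing, and radius below are assumptions for illustration, and wrapping each phase in DICOM headers is omitted:

```python
import numpy as np

# Generate ten respiratory-phase volumes of a simple digital phantom:
# a sphere whose center moves sinusoidally along z with ±5 mm amplitude
# (10 mm peak-to-peak), on an assumed 64³ grid with 1 mm isotropic voxels.

VOXEL = 1.0        # mm per voxel (assumed)
SHAPE = (64, 64, 64)
RADIUS = 12.78     # mm; a sphere of this radius is roughly 8.7 cc

def phase_volume(phase, n_phases=10):
    # z-position of the tumor center for this respiratory phase.
    z = 32 + 5.0 * np.sin(2 * np.pi * phase / n_phases)
    zz, yy, xx = np.mgrid[:SHAPE[0], :SHAPE[1], :SHAPE[2]]
    # Boolean tumor mask: voxels inside the displaced sphere.
    return (xx - 32) ** 2 + (yy - 32) ** 2 + (zz - z) ** 2 <= RADIUS ** 2

# Tumor volume in cc per phase; it should be (nearly) identical across phases,
# mirroring the constant-volume check the study ran in the planning system.
volumes_cc = [phase_volume(p).sum() * VOXEL**3 / 1000.0 for p in range(10)]
```

Small phase-to-phase differences come only from voxelization of the shifted sphere; a real CT series would additionally carry Hounsfield values and per-phase DICOM metadata.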