• Title/Summary/Keyword: 2D Dataset


Build a Multi-Sensor Dataset for Autonomous Driving in Adverse Weather Conditions (열악한 환경에서의 자율주행을 위한 다중센서 데이터셋 구축)

  • Sim, Sungdae;Min, Jihong;Ahn, Seongyong;Lee, Jongwoo;Lee, Jung Suk;Bae, Gwangtak;Kim, Byungjun;Seo, Junwon;Choe, Tok Son
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.245-254 / 2022
  • Sensor datasets for autonomous driving are an essential component as deep learning approaches become widely used. However, most driving datasets focus on typical environments such as sunny or cloudy conditions, and most deal only with color images and lidar. In this paper, we propose a driving dataset with multi-spectral images and lidar collected in adverse weather conditions such as snow, rain, smoke, and dust. The proposed data acquisition system has four types of cameras (color, near-infrared, shortwave, thermal), one lidar, two radars, and a navigation sensor. Ours is the first dataset to cover multi-spectral cameras in adverse weather conditions. The proposed dataset is annotated with 2D semantic labels, 3D semantic labels, and 2D/3D bounding boxes. Many tasks can be performed on our dataset, for example, object detection and drivable-region detection. We also present experimental results on the adverse weather dataset.

Construction and Effectiveness Evaluation of Multi Camera Dataset Specialized for Autonomous Driving in Domestic Road Environment (국내 도로 환경에 특화된 자율주행을 위한 멀티카메라 데이터 셋 구축 및 유효성 검증)

  • Lee, Jin-Hee;Lee, Jae-Keun;Park, Jaehyeong;Kim, Je-Seok;Kwon, Soon
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.5 / pp.273-280 / 2022
  • With the advancement of deep learning technology, securing high-quality datasets for verifying developed technology has emerged as an important issue, and developing deep learning models that are robust to the domestic road environment is a focus of many research groups. In particular, unlike expressways and automobile-only roads, the complex urban driving environment mixes various dynamic objects such as motorbikes, electric kickboards, large buses/trucks, freight cars, pedestrians, and traffic lights. In this paper, we built our dataset through multi-camera-based processing (collection, refinement, and annotation) covering the various objects on city roads, and estimated the quality and validity of our dataset using a YOLO-based object detection model. Quantitative evaluation was performed by comparison with public datasets, and qualitative evaluation by comparison with experimental results from an open platform. We generated our 2D dataset based on the annotation rules of the KITTI/COCO datasets and compared performance with the public datasets using the KITTI/COCO evaluation rules. In this comparison, our dataset showed about 3 to 53% higher performance, validating its effectiveness.
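As context for the KITTI/COCO-style evaluation rules mentioned above, detections are typically matched to ground truth by the intersection-over-union (IoU) of their bounding boxes. A minimal sketch, not the authors' code; the `(x1, y1, x2, y2)` box format and the 0.5 threshold are illustrative assumptions:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection usually counts as a true positive when IoU >= 0.5
# (COCO additionally sweeps thresholds from 0.5 to 0.95).
print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # intersection 50, union 150
```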

Validation of Semantic Segmentation Dataset for Autonomous Driving (승용자율주행을 위한 의미론적 분할 데이터셋 유효성 검증)

  • Gwak, Seoku;Na, Hoyong;Kim, Kyeong Su;Song, EunJi;Jeong, Seyoung;Lee, Kyewon;Jeong, Jihyun;Hwang, Sung-Ho
    • Journal of Drive and Control / v.19 no.4 / pp.104-109 / 2022
  • For autonomous driving research using AI, datasets collected from road environments play an important role. Various datasets such as CityScapes, A2D2, and BDD have already been released in other countries, but datasets suited to the domestic road environment have yet to be provided. This paper analyzed and verified a dataset reflecting the Korean driving environment. To verify the training dataset, class imbalance was checked by comparing the number of pixels and instances per class. The similar A2D2 dataset was trained with the same deep learning model, ConvNeXt, to compare against and verify the constructed dataset: per-class IoU and overall mIoU were compared between the two datasets. This paper confirms that the collected dataset reflecting the Korean driving environment is suitable for training.
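The per-class IoU and mIoU comparison described above can be computed directly from predicted and ground-truth label maps. A minimal sketch, assuming one integer class label per pixel; the function name and toy arrays are illustrative, not the paper's code:

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean IoU over classes that appear in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 0],
                 [0, 0, 1, 1]])
print(miou(pred, gt, num_classes=2))       # (4/5 + 3/4) / 2 = 0.775
```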

Compressive sensing-based two-dimensional scattering-center extraction for incomplete RCS data

  • Bae, Ji-Hoon;Kim, Kyung-Tae
    • ETRI Journal / v.42 no.6 / pp.815-826 / 2020
  • We propose a two-dimensional (2D) scattering-center-extraction (SCE) method using sparse recovery based on compressive-sensing theory, even with data missing from the received radar cross-section (RCS) dataset. First, using the proposed method, we generate a 2D grid via adaptive discretization that is considerably smaller than a fully sampled fine grid. Subsequently, a coarse estimate of the 2D scattering centers is obtained using both the iteratively reweighted least squares (IRLS) method and a general peak-finding algorithm. Finally, a fine estimate of the 2D scattering centers is obtained using the orthogonal matching pursuit (OMP) procedure with an adaptively sampled Fourier dictionary. Measured RCS data, as well as simulation data from a point-scatterer model, are used to evaluate the 2D SCE accuracy of the proposed method. The results indicate that the proposed method achieves higher SCE accuracy on an incomplete RCS dataset with missing data than the conventional OMP, basis pursuit, smoothed L0, and existing discrete spectral estimation techniques.
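The final OMP stage can be illustrated with a generic sketch. This is textbook OMP over a random Gaussian dictionary, not the paper's adaptively sampled Fourier dictionary or its IRLS coarse stage; all names and problem sizes are illustrative assumptions:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ~ A @ x."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Projection step: refit all selected atoms jointly by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy demo: 64 measurements, 128 candidate positions, 2 "scatterers".
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary atoms
x_true = np.zeros(128)
x_true[[10, 70]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

With this many measurements relative to the sparsity level, OMP recovers the true support and amplitudes essentially exactly; robustness to missing data is what the paper's adaptive dictionary is designed to improve.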

Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection (강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool)

  • Jeon, MyungHwan;Lee, Yeongjun;Shin, Young-Sik;Jang, Hyesu;Yeu, Taekyeong;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.139-149 / 2019
  • In this paper, we present an auto-annotation tool and a synthetic dataset generated from 3D CAD models for deep learning based object detection. To serve as training data for deep learning methods, class, segmentation, bounding-box, contour, and pose annotations of each object are needed. We propose an automated annotation tool together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify the synthetic dataset, we use Mask R-CNN, a state-of-the-art deep learning object detection model. For the experiments, we built an experimental environment reflecting actual underwater conditions. We show that an object detection model trained on our dataset yields accurate results and is robust in the underwater environment. Lastly, we verify that our synthetic dataset is suitable for training deep learning models for underwater environments.

From TMJ to 3D Digital Smile Design with Virtual Patient Dataset for diagnosis and treatment planning (가상환자 데이터세트를 기반으로 악관절과 심미를 고려한 진단 및 치료계획 수립)

  • Lee, Soo Young;Kang, Dong Huy;Lee, Doyun;Kim, Heechul
    • Journal of the Korean Academy of Esthetic Dentistry / v.30 no.2 / pp.71-90 / 2021
  • A virtual patient dataset is a collection of diagnostic data from various sources, acquired from a single patient and registered into a single coordinate system for three-dimensional visualization. The virtual patient dataset makes it possible to establish a treatment plan, simulate various treatment procedures, and create a treatment-plan delivery device. Clinicians can design and simulate a patient's smile on the virtual patient dataset and select the optimal result from the diagnostic process. The selected treatment plan can be delivered identically to the patient using manufacturing techniques such as 3D printing, milling, and injection molding. Delivery of the treatment plan can be linked to the final prosthesis via mock-up confirmation, using provisional restorations fabricated and delivered in the patient's mouth. In this way, if the accuracy of diagnostic data superimposition and processing during manufacturing is guaranteed, a 3D digital smile design simulated in 3D visualization can be accurately delivered to the real patient. As a clinical application of the virtual patient dataset, we suggest a decision-making method that can exclude occlusal adjustment treatment from the treatment plan through digital occlusal pressure analysis. A comparative analysis of whole-body scans before and after temporomandibular joint treatment is presented for adolescent idiopathic scoliosis patients with temporomandibular joint disease, and an occlusal plane and smile aesthetic analysis based on the virtual patient dataset is presented for patients treated with complete dentures.

A Comprehensive Analysis of Deformable Image Registration Methods for CT Imaging

  • Kang Houn Lee;Young Nam Kang
    • Journal of Biomedical Engineering Research / v.44 no.5 / pp.303-314 / 2023
  • This study aimed to assess the practical feasibility of advanced deformable image registration (DIR) algorithms in radiotherapy using two distinct datasets. The first dataset included 14 4D lung CT scans and 31 head and neck CT scans. In the 4D lung CT dataset, the DIR algorithms were used to register organs at risk and tumors across respiratory phases. The second dataset comprised pre-, mid-, and post-treatment CT images of the head and neck region, along with organ-at-risk and tumor delineations. These images were registered using the DIR algorithms, and Dice similarity coefficients (DSCs) were compared. In the 4D lung CT dataset, registration accuracy was evaluated for the spinal cord, lung, lung nodules, esophagus, and tumors. The average DSCs for the non-learning-based SyN and NiftyReg algorithms were 0.92±0.07 and 0.88±0.09, respectively. Deep learning methods, namely VoxelMorph, CycleMorph, and TransMorph, achieved average DSCs of 0.90±0.07, 0.91±0.04, and 0.89±0.05, respectively. For the head and neck CT dataset, the average DSCs for SyN and NiftyReg were 0.82±0.04 and 0.79±0.05, respectively, while VoxelMorph, CycleMorph, and TransMorph showed average DSCs of 0.80±0.08, 0.78±0.11, and 0.78±0.09, respectively. Additionally, the deep learning DIR algorithms demonstrated faster transformation times than the other models, including commercial and conventional mathematical algorithms (VoxelMorph: 0.36 s/image, CycleMorph: 0.3 s/image, TransMorph: 5.1 s/image, SyN: 140 s/image, NiftyReg: 40.2 s/image). In conclusion, this study highlights the varying clinical applicability of deep learning-based DIR methods across anatomical regions. While challenges were encountered in head and neck CT registration, 4D lung CT registration exhibited favorable results, indicating potential for clinical implementation. Further research and development of DIR algorithms tailored to specific anatomical regions are warranted to improve the overall clinical utility of these methods.
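The Dice similarity coefficient used throughout the comparison above is straightforward to compute from binary segmentation masks; a minimal 2D sketch (the same formula applies per-slice or per-volume, and is not tied to any of the registration tools named in the abstract):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, overlap 4
print(dice(a, b))  # 2 * 4 / (4 + 6) = 0.8
```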

Selection of features and hidden Markov model parameters for English word recognition from Leap Motion air-writing trajectories

  • Deval Verma;Himanshu Agarwal;Amrish Kumar Aggarwal
    • ETRI Journal / v.46 no.2 / pp.250-262 / 2024
  • Air-writing recognition is relevant in areas such as natural human-computer interaction, augmented reality, and virtual reality. A trajectory is the most natural way to represent air writing. We analyze the recognition accuracy for words written in air considering five features, namely, writing direction, curvature, trajectory, orthocenter, and ellipsoid, as well as different parameters of a hidden Markov model classifier. Experiments were performed on two representative datasets whose sample trajectories were collected using a Leap Motion Controller tracking a fingertip performing air writing. Dataset D1 contains 840 English words from 21 classes, and dataset D2 contains 1600 English words from 40 classes. A genetic algorithm was combined with the hidden Markov model classifier to obtain the best subset of features. The combination {trajectory, orthocenter, writing direction, curvature} provided the best feature set, achieving recognition accuracies of 98.81% and 83.58% on datasets D1 and D2, respectively.
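Two of the features named above, writing direction and curvature, are commonly derived from the raw trajectory as unit tangent vectors and turning angles. A plausible sketch of that derivation, an assumption for illustration rather than the authors' exact feature definitions:

```python
import numpy as np

def direction_and_curvature(traj):
    """Unit tangent per segment (writing direction) and turning angle per
    interior point (a curvature proxy) for a trajectory of shape (n, dims)."""
    d = np.diff(traj, axis=0)                              # segment vectors
    t = d / np.clip(np.linalg.norm(d, axis=1, keepdims=True), 1e-12, None)
    # Angle between consecutive unit tangents, clipped for numerical safety.
    cos_a = np.clip((t[:-1] * t[1:]).sum(axis=1), -1.0, 1.0)
    return t, np.arccos(cos_a)

# A straight run followed by a right-angle turn: the turning angle at the
# corner is pi/2, and zero along the straight part.
traj = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [2.0, 1.0]])
tangents, angles = direction_and_curvature(traj)
```

The same computation applies unchanged to the 3D fingertip trajectories a Leap Motion Controller produces.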

A Study on Strategic Factors for the Application of Digitalized Korean Human Dataset (한국인의 인체정보 활용을 위한 전략적 요인에 관한 연구)

  • Park, Dong-Jin;Lee, Sang-Tae;Lee, Sang-Ho;Lee, Seung-Bok;Shin, Dong-Sun
    • Journal of Digital Convergence / v.8 no.2 / pp.203-216 / 2010
  • This study is an exploratory survey that identifies and organizes the important decision factors for establishing a strategic R&D portfolio for applying the digitized Korean human dataset. In countries that have already undertaken such work, digitized human-dataset and visualization-application development research is regarded as a strategic R&D project selected and supervised at the national level. To achieve the goal of this study, we organized a professional group that reviewed articles, suggested research topics, considered alternatives, and answered questionnaires. From this study, we derive and refine the detailed factors reflected during the strategic planning phase, which includes R&D vision setting, SWOT analysis and strategy development, and research area and project selection. Beyond supporting strategic planning, the study also defines and scopes the detailed research areas and prioritizes them by importance and urgency, serving as a guideline for further research and as a framework for assessing the current status of research investment.


Development of 4D CT Data Generation Program based on CAD Models through the Convergence of Biomedical Engineering (CAD 모델 기반의 4D CT 데이터 제작 의용공학 융합 프로그램 개발)

  • Seo, Jeong Min;Han, Min Cheol;Lee, Hyun Su;Lee, Se Hyung;Kim, Chan Hyeong
    • Journal of the Korea Convergence Society / v.8 no.4 / pp.131-137 / 2017
  • In the present study, we developed a program that generates 4D CT data from CAD-based models. To evaluate the developed program, a CAD-based respiratory motion phantom was designed using CAD software and converted into a 4D CT dataset comprising 10 phases of 3D CT. The generated 4D CT dataset was evaluated for effectiveness and accuracy through implementation in a radiation therapy planning system (RTPS). The results show that the generated 4D CT dataset can be successfully implemented in the RTPS, and that the targets in all phases of the 4D CT dataset moved correctly according to the user parameter (10 mm) while maintaining a constant volume (8.8 cc). Unlike a real 4D CT scanner, the developed program can produce a gold-standard dataset free of the artifacts caused by the modality's motion, so we believe it will be useful where motion effects are important, such as in 4D radiation treatment planning and 4D radiation imaging.