• Title/Abstract/Keyword: IoU

195 search results

A Study of Temperature Changes in Dental Tissues Irradiated by a 10.6 μm CO2 Laser Beam

  • 고동섭;박용환;신상훈;엄효순;김웅;이찬영
    • Korean Journal of Optics and Photonics
    • /
    • Vol. 1, No. 2
    • /
    • pp.210-216
    • /
    • 1990
  • In this study, as part of research on the interaction between lasers and tooth tissue, a CO2 laser oscillator was built, and the temperature change in the pulp chamber of extracted teeth was measured and analyzed for various irradiation energies and irradiation times at the CO2 laser wavelength of 10.6 μm. From the measured data, the following practical empirical formula was obtained for estimating the maximum temperature rise $\Delta T_m$: $\Delta T_m = \alpha P \Delta\tau \exp(-\beta d)$, where P is the laser output power (W), $\Delta\tau$ is the irradiation time (sec), and d is the tooth thickness (mm).

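The empirical formula above can be evaluated directly once the fitted coefficients are known; a minimal sketch, where the values of α and β are hypothetical placeholders (the paper's fitted coefficients are not given here):

```python
import math

def max_temp_rise(power_w, duration_s, thickness_mm, alpha, beta):
    """Estimate the maximum pulp-chamber temperature rise (deg C) via the
    empirical model dT_m = alpha * P * dtau * exp(-beta * d)."""
    return alpha * power_w * duration_s * math.exp(-beta * thickness_mm)

# Hypothetical coefficients for illustration only.
alpha, beta = 5.0, 0.8
rise = max_temp_rise(power_w=2.0, duration_s=0.5, thickness_mm=2.0,
                     alpha=alpha, beta=beta)
```

As the formula suggests, the predicted rise grows linearly with laser power and exposure time, and decays exponentially with tooth thickness.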

Development of a Meal Support System for the Visually Impaired Using the YOLO Algorithm

  • 이군호;문미경
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • Vol. 16, No. 5
    • /
    • pp.1001-1010
    • /
    • 2021
  • Sighted people are rarely aware of how much they rely on vision while eating. A visually impaired person, however, cannot tell which foods are on the table, so an assistant beside them typically describes the positions of the dishes relative to the person's spoon in a fixed scheme, such as clockwise or front/back/left/right. This paper describes the development of a meal support system in which a visually impaired user points a smartphone camera at their meal and the system recognizes each food item and announces its name by voice. The system uses a YOLO model trained on images of foods and utensils (spoons) to extract the food the spoon is placed on, recognizes what that food is, and reports it audibly. By enabling visually impaired people to eat without an assistant, the system is expected to increase their independence and satisfaction.
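Associating the detected spoon with the food it rests on is the kind of step typically implemented by comparing bounding-box overlap; a minimal sketch using IoU as the overlap score (the pairing rule here is an assumption for illustration, not the paper's stated method):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def food_under_spoon(spoon_box, food_boxes):
    """Return the food box overlapping the spoon most, or None if none overlap."""
    best = max(food_boxes, key=lambda f: iou(spoon_box, f))
    return best if iou(spoon_box, best) > 0 else None
```

The food box with the highest IoU against the spoon box would then be the item announced by voice.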

Object Detection and Post-processing of an LNGC CCS Scaffolding System Using Deep-Learning-Based 3D Point Clouds

  • 이동건;지승환;박본영
    • Journal of the Society of Naval Architects of Korea
    • /
    • Vol. 58, No. 5
    • /
    • pp.303-313
    • /
    • 2021
  • Recently, quality control of Liquefied Natural Gas Carrier (LNGC) cargo holds and block-erection interference areas using 3D scanners has been performed, led by large shipyards and the International Association of Classification Societies. In this study, as part of research on advancing LNGC cargo hold quality management, deep-learning-based 3D point cloud object detection and post-processing of the scaffolding system were investigated using an LNGC cargo hold point cloud. Scaffolding object detection is based on the PointNet architecture, which detects objects directly from point clouds, and achieved 70% prediction accuracy. The possibility of improving detection accuracy through parameter adjustment was also confirmed, and the Intersection over Union (IoU) criterion, an index for judging whether detections refer to the same object, was satisfied. The detection architecture replaces manual post-processing with automatic task performance and can achieve stable prediction accuracy as the training data are supplemented and improved. Future work will extend the study from the flat surfaces of the LNGC cargo hold to complex regions such as curved surfaces, with expected applications in monitoring process automation rates and ship quality control.

Evaluating the Usefulness of Deep-Learning-Based Left Ventricle Segmentation in Cardiac Gated Blood Pool Scans

  • 오주영;정의환;이주영;박훈희
    • Journal of Radiological Science and Technology
    • /
    • Vol. 45, No. 2
    • /
    • pp.151-158
    • /
    • 2022
  • The Cardiac Gated Blood Pool (GBP) scintigram, a nuclear medicine imaging technique, calculates the left ventricular Ejection Fraction (EF) by segmenting the left ventricle of the heart. However, accurately segmenting the substructures of the heart requires specialized knowledge of cardiac anatomy, and the left ventricular EF may be calculated differently depending on the expert's processing. In this study, the DeepLabV3 architecture with a ResNet-50 backbone was trained on 93 GBP training images. The trained model was then applied to a separate test set of 23 GBP studies to evaluate the reproducibility of the region of interest (ROI) and the left ventricular EF. Pixel accuracy, Dice coefficient, and IoU for the ROI were 99.32±0.20%, 94.65±1.45%, and 89.89±2.62% at the diastolic phase, and 99.26±0.34%, 90.16±4.19%, and 82.33±6.69% at the systolic phase, respectively. The left ventricular EF averaged 60.37±7.32% for human-set ROIs and 58.68±7.22% for ROIs set by the deep learning segmentation model (p<0.05). The automated segmentation method presented in this study predicts ROIs and left ventricular EFs similar to those set by humans for arbitrary GBP inputs. If automatic segmentation is developed further and applied to functional nuclear medicine cardiac examinations that require ROI placement, it is expected to contribute greatly to the efficiency and accuracy of processing and analysis by nuclear medicine specialists.
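The reported metrics follow standard definitions; a minimal sketch of the Dice coefficient, IoU, and a count-based EF computed from flattened binary masks (background correction and other clinical details of GBP processing are omitted for brevity):

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and IoU for two same-length binary masks (0/1 lists)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)
    iou = inter / (p_sum + t_sum - inter)
    return dice, iou

def ejection_fraction(ed_counts, es_counts):
    """Left ventricular EF (%) from end-diastolic and end-systolic counts."""
    return (ed_counts - es_counts) / ed_counts * 100
```

Note that Dice is always at least as large as IoU for the same pair of masks, consistent with the Dice values above exceeding the IoU values.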

Does the palatal vault form have an influence on the scan time and accuracy of intraoral scans of completely edentulous arches? An in-vitro study

  • Osman, Reham;Alharbi, Nawal
    • The Journal of Advanced Prosthodontics
    • /
    • Vol. 14, No. 5
    • /
    • pp.294-304
    • /
    • 2022
  • PURPOSE. The purpose of this study was to evaluate the influence of different palatal vault configurations on the accuracy and scan speed of intraoral (IO) scans of completely edentulous arches. MATERIALS AND METHODS. Three virtual models of a completely edentulous maxillary arch with different palatal vault heights, Class I moderate (U-shaped), Class II deep (steep), and Class III shallow (flat), were digitally designed using CAD software (Meshmixer; Autodesk, USA) and 3D-printed using an SLA-based 3D printer (XFAB; DWS, Italy) (n = 30; 10 specimens per group). Each model was scanned using an intraoral scanner (Trios 3; 3Shape, Denmark), and scanning time was recorded for all samples. Scanning accuracy (trueness and precision) was evaluated using a digital subtraction technique in Geomagic Control X v2020 (Geomagic; 3D Systems, USA). One-way analysis of variance (ANOVA) was used to detect differences in scanning time, trueness, and precision among the test groups, with statistical significance set at α = .05. RESULTS. The scan process could not be completed for the Class II group, and the manufacturer's recommended technique had to be modified. ANOVA revealed no statistically significant difference in trueness and precision values among the test groups (P = .959 and P = .658, respectively). The deep palatal vault (Class II) showed a significantly longer scan time than Classes I and III. CONCLUSION. The selection of the scan protocol in complex cases such as a deep palatal vault is of utmost importance. The modified, longer scan path adopted for deep vault cases resulted in increased scan time compared to the other two groups.
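The one-way ANOVA used above compares between-group to within-group variance; a minimal sketch of the F statistic in plain Python (in practice a statistics package would also supply the p-value compared against α = .05):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates group means that differ more than within-group noise would explain.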

Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks

  • Zhai, Guanghao;Narazaki, Yasutaka;Wang, Shuo;Shajihan, Shaik Althaf V.;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • Vol. 29, No. 1
    • /
    • pp.237-250
    • /
    • 2022
  • Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have developed computer vision and machine learning techniques for SHM, offering the potential to reduce the laborious nature and improve the effectiveness of field inspections. However, high-quality vision data from damaged structures are relatively difficult to obtain because damaged structures occur rarely, and the lack of data is particularly acute for fatigue cracks in steel bridge girders. As a result, the shortage of training data is one of the main issues hindering wider application of these powerful techniques for SHM. To address this problem, this article proposes using synthetic data to augment the real-world datasets used to train neural networks that identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. This model is then used to generate synthetic images under various lighting conditions and camera angles. A fully convolutional network is trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. By employing synthetic data augmentation during training, the crack identification performance of the neural network on the test dataset improves from 35% to 40% for intersection over union (IoU) and from 49% to 62% for precision, demonstrating the efficacy of the proposed approach.
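The IoU and precision figures quoted above are standard pixel-wise metrics for binary segmentation; a minimal sketch computed from confusion counts over flattened crack masks:

```python
def pixel_metrics(pred, truth):
    """Pixel-wise IoU and precision for binary masks (1 = crack pixel)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    iou = tp / (tp + fp + fn)          # true overlap over union of masks
    precision = tp / (tp + fp)         # fraction of predicted crack pixels correct
    return iou, precision
```

IoU penalizes both false positives and missed crack pixels, while precision penalizes only false positives, which is why the two figures in the abstract move independently.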

A Study of AI-based Monitoring Techniques for Land-based Debris in Streams

  • 이경수;윤해인;원종화;정상화
    • Korea Water Resources Association: Conference Proceedings
    • /
    • Korea Water Resources Association 2023 Annual Conference
    • /
    • pp.137-137
    • /
    • 2023
  • Marine debris not only degrades the aesthetic value of coastlines but also causes social and environmental problems such as ecosystem destruction and fishery damage from ghost fishing. More than 70% of it originates on land, and unlike in other countries, where plastic and other waste dominates, domestic marine debris includes a large amount of vegetation. Given the limits of existing marine-debris estimates for diverse floating debris and the need to make stream and estuary debris collection more efficient, effective measures are required to prevent floating debris from entering the sea. In this study, to improve the collection efficiency of floating debris captured at stream barrier facilities before it reaches the sea and to build sustainable marine-debris data, AI-based techniques were applied: object detection for debris composition analysis and semantic segmentation for captured-volume analysis. To collect data resembling real conditions, training data were gathered in several stream environments (a still-water tank, a small stream, and a steep-slope channel) under experimental conditions including turbidity (algae, sediment), illumination, debris shape, vegetation content, weather (small stream), and flow velocity (steep-slope channel), with debris types selected based on marine-debris classification standards and statistics. Labeling (bounding boxes and polygons) was performed according to the learning objective, and for each analysis technique the model was refined by transfer learning in the order Phase 1 (still-water tank), Phase 2 (small stream), and Phase 3 (steep-slope channel). For composition analysis, YOLO v4 was used with a 9:1 train/test split, and training was evaluated by mAP and loss per iteration; the test-set mAP per debris type increased as the model was refined across phases. For captured-volume analysis, U-Net was used with an 8.5:1:0.5 train/test/validation split, and IoU (Intersection over Union), F1-score, and loss per epoch were compared; both the qualitative and quantitative evaluations were best in Phase 3. Future analysis of the various influencing factors in stream environments is expected to identify the dominant factors and, through hyperparameter optimization, further improve the model and its applicability.

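The 8.5:1:0.5 train/test/validation split described above can be sketched as a ratio-based shuffle-and-slice; the exact splitting procedure used in the study is not stated, so this is an illustrative assumption:

```python
import random

def split_dataset(items, ratios=(8.5, 1.0, 0.5), seed=0):
    """Shuffle items and split them into train/test/validation by the given ratios."""
    rng = random.Random(seed)         # fixed seed for a reproducible split
    items = items[:]                  # avoid mutating the caller's list
    rng.shuffle(items)
    total = sum(ratios)
    n = len(items)
    n_train = round(n * ratios[0] / total)
    n_test = round(n * ratios[1] / total)
    return (items[:n_train],
            items[n_train:n_train + n_test],
            items[n_train + n_test:])
```

For 100 labeled samples this yields 85 training, 10 test, and 5 validation items, matching the 8.5:1:0.5 proportions.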

A Study on the Drift Phenomenon of Trained Machine Learning Models

  • 신병춘;차윤석;김채윤;차병래
    • Smart Media Journal
    • /
    • Vol. 11, No. 7
    • /
    • pp.61-69
    • /
    • 2022
  • As time passes, a trained machine learning model drifts with respect to both the model and the training data, and its performance degrades. To address this, we propose the concept of ML drift and an evaluation method for deciding when a model should be retrained. XAI tests were performed on strawberry images at varying sharpness levels and on apple images. For strawberries, the change in the XAI analysis of the ML model with sharpness was negligible; for apple images, apples were classified normally and their heat-map regions displayed, but for apple blossoms and buds the results were weaker than for strawberries or apples. This is presumably because the number of training images of apple blossoms and buds was insufficient, and we plan to train and test with more such images in the future.
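A retraining trigger of the kind proposed above can be sketched as a comparison of recent accuracy against a baseline window; the window size and drop threshold here are hypothetical, not values from the paper:

```python
def needs_retraining(accuracy_history, window=5, drop_threshold=0.05):
    """Flag drift when mean accuracy over the most recent `window` observations
    falls more than `drop_threshold` below the mean of all earlier observations."""
    if len(accuracy_history) < 2 * window:
        return False                  # not enough history to compare
    baseline = sum(accuracy_history[:-window]) / (len(accuracy_history) - window)
    recent = sum(accuracy_history[-window:]) / window
    return baseline - recent > drop_threshold
```

When the flag fires, the model would be retrained on refreshed data, resetting the baseline.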

Changes in Air Quality through the Application of Three Types of Green-Wall Models within Classrooms

  • 양호형;김형주;방성원;조흔우;이형석;한승원;김광진;김호현
    • Journal of Environmental Health Sciences
    • /
    • Vol. 49, No. 6
    • /
    • pp.295-304
    • /
    • 2023
  • Background: Adolescents are relatively more sensitive than adults to exposure to indoor pollutants. The indoor air quality of classrooms, where students spend time together, must therefore be managed at a safe level because it can affect students' health. Objectives: In this study, three types of green-wall models were applied to classrooms, where students spend long periods in a limited space, and the resulting effects on reducing particulate matter (PM) were evaluated. Methods: IoT-based indoor air quality monitoring equipment was installed for real-time monitoring in the middle school classrooms selected as experimental subjects. Three types of plant models (passive, active, and active+light) were installed, one per classroom, to evaluate their effects on improving indoor air quality. Results: The PM concentration in the classrooms was influenced by outdoor air quality, but repeated increases and decreases in concentration were also observed due to student activity. Applying the green-wall models produced a PM reduction effect, and the reduction efficiency differed by model type: the active model was more efficient than the passive model. Conclusions: The active green-wall model can be used as an efficient method of improving indoor air quality. Further research is needed to increase this efficiency by setting conditions that stimulate the growth of each type of plant.

Deep Learning-based Interior Design Recognition

  • 이원규;박지훈;이종혁;정희철
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • Vol. 19, No. 1
    • /
    • pp.47-55
    • /
    • 2024
  • We spend a lot of time in indoor spaces, and those spaces have a huge impact on our lives. Interior design plays a significant role in making an indoor space attractive and functional, but it must consider many complex elements such as color, pattern, and material. With the increasing demand for interior design, there is a growing need for technologies that analyze these design elements accurately and efficiently. To address this need, this study proposes a deep-learning-based design analysis system consisting of a semantic segmentation model that classifies spatial components and an image classification model that classifies attributes such as color, pattern, and material from the segmented components. The semantic segmentation model was trained on a dataset of 30,000 indoor interior images collected for this research; during inference, it assigns each pixel of the input image to one of 34 categories. Experiments with various backbones were conducted to obtain the optimal performance on the collected interior dataset, and the final model achieved 89.05% accuracy and 0.5768 mean intersection over union (mIoU). For the classification part, a convolutional neural network (CNN), an architecture that has recorded high performance in other image recognition tasks, was used. To improve the classification model, we propose an approach for handling data that are imbalanced and sensitive to light intensity, and with these methods we achieve satisfactory results in classifying interior design component attributes. In summary, this paper proposes an indoor space design analysis system that automatically analyzes and classifies the attributes of indoor images using deep-learning-based models. Used as a core module in an AI interior recommendation service, this analysis system can help users pursuing self-directed interior design complete their designs more easily and efficiently.
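The mIoU figure reported above is the per-class IoU averaged over the 34 segmentation categories; a minimal sketch over flattened integer label maps (classes absent from both prediction and ground truth are skipped, one common convention):

```python
def mean_iou(pred, truth, num_classes):
    """Mean IoU over classes present in either the prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:                     # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

Because rare classes weigh as much as common ones in this average, mIoU (0.5768 here) is typically well below pixel accuracy (89.05%) on imbalanced scenes.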