• Title/Summary/Keyword: Automatic validation

Search Results: 186

Computerized bone age estimation system based on China-05 standard

  • Yin, Chuangao;Zhang, Miao;Wang, Chang;Lin, Huihui;Li, Gengwu;Zhu, Lichun;Fei, Weimin;Wang, Xiaoyu
    • Advances in nano research
    • /
    • v.12 no.2
    • /
    • pp.197-212
    • /
    • 2022
  • The purpose of this study was to develop an automatic software system for bone age evaluation and to evaluate its accuracy in testing and its feasibility in clinical practice. 20394 left-hand radiographs of healthy children (2-18 years old) were collected from the China Skeletal Development Survey data of 1998 and 2005. Three experienced radiologists and the China-05 standard maker jointly evaluated the stages of bone development, and the reference bone age was determined by consensus. 1020 of the 20394 radiographs were randomly selected as the test set, with the remaining 19374 radiographs used as the training and validation sets. The accuracy of the automatic software system for bone age assessment was evaluated on the test set and two clinical test sets. Compared with the reference standard, the system based on RUS-CHN showed a mean difference of 0.04 years, a 95% confidence interval of ±0.40 years for a single reading, an 85.6% percentage agreement of ratings, a 93.7% bone age accuracy rate, an MAD of 0.17 years, and an RMS of 0.29 years. Compared with the reference standard, the system based on TW3-C RUS showed a mean difference of 0.04 years, a 95% confidence interval of ±0.38 years for a single reading, a 90.9% percentage agreement of ratings, a 93.2% bone age accuracy rate, an MAD of 0.16 years, and an RMS of 0.28 years. The automatic software system, AI-China-05, showed reliable accuracy in bone age estimation and stable performance across the different clinical test sets.
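The agreement statistics reported in this abstract (mean difference, MAD, RMS) are simple to compute; a minimal sketch follows, where the readings are hypothetical values for illustration, not data from the study.

```python
import math

def agreement_stats(predicted, reference):
    """Mean difference, mean absolute difference (MAD), and
    root-mean-square (RMS) error between two sets of bone-age readings."""
    diffs = [p - r for p, r in zip(predicted, reference)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    mad = sum(abs(d) for d in diffs) / n
    rms = math.sqrt(sum(d * d for d in diffs) / n)
    return mean_diff, mad, rms

# Hypothetical automated vs. reference readings, in years (not study data)
pred = [10.2, 7.9, 12.5, 15.1]
ref = [10.0, 8.1, 12.3, 15.0]
mean_diff, mad, rms = agreement_stats(pred, ref)
```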

Fully Automatic Segmentation of Acute Ischemic Lesions on Diffusion-Weighted Imaging Using Convolutional Neural Networks: Comparison with Conventional Algorithms

  • Ilsang Woo;Areum Lee;Seung Chai Jung;Hyunna Lee;Namkug Kim;Se Jin Cho;Donghyun Kim;Jungbin Lee;Leonard Sunwoo;Dong-Wha Kang
    • Korean Journal of Radiology
    • /
    • v.20 no.8
    • /
    • pp.1275-1284
    • /
    • 2019
  • Objective: To develop algorithms using convolutional neural networks (CNNs) for automatic segmentation of acute ischemic lesions on diffusion-weighted imaging (DWI) and compare them with conventional algorithms, including a thresholding-based segmentation. Materials and Methods: Between September 2005 and August 2015, 429 patients presenting with acute cerebral ischemia (training:validation:test set = 246:89:94) were retrospectively enrolled in this study, which was performed under Institutional Review Board approval. Ground-truth segmentations for acute ischemic lesions on DWI were manually drawn under the consensus of two expert radiologists. CNN algorithms were developed for automatic segmentation of acute ischemic lesions on DWI using a two-dimensional U-Net with squeeze-and-excitation blocks (U-Net) and a DenseNet with squeeze-and-excitation blocks (DenseNet). The CNN algorithms were compared with conventional algorithms based on DWI and apparent diffusion coefficient (ADC) signal intensity. The performances of the algorithms were assessed using the Dice index with 5-fold cross-validation. The Dice indices were analyzed according to infarct volume (< 10 mL, ≥ 10 mL), number of infarcts (≤ 5, 6-10, ≥ 11), b-value of 1000 (b1000) signal intensity (< 50, 50-100, > 100), time interval to DWI, and DWI protocol. Results: The CNN algorithms were significantly superior to the conventional algorithms (p < 0.001). Dice indices for the CNN algorithms were 0.85 for U-Net and DenseNet and 0.86 for an ensemble of U-Net and DenseNet, while the indices were 0.58 for ADC-b1000 and b1000-ADC and 0.52 for the commercial ADC algorithm. The Dice indices for small and large lesions, respectively, were 0.81 and 0.88 with U-Net, 0.80 and 0.88 with DenseNet, and 0.82 and 0.89 with the ensemble of U-Net and DenseNet. The CNN algorithms showed significant differences in Dice indices according to infarct volume (p < 0.001). Conclusion: The CNN algorithms for automatic segmentation of acute ischemic lesions on DWI achieved Dice indices of at least 0.85 and showed superior performance to the conventional algorithms.
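The Dice index used to score these segmentations is a standard overlap measure, 2|A∩B| / (|A| + |B|); a minimal sketch on toy binary masks (not study data):

```python
import numpy as np

def dice_index(pred_mask, gt_mask):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 2D masks for illustration
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
dice = dice_index(a, b)
```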

New Automatic Taxonomy Generation Algorithm for the Audio Genre Classification (음악 장르 분류를 위한 새로운 자동 Taxonomy 구축 알고리즘)

  • Choi, Tack-Sung;Moon, Sun-Kook;Park, Young-Cheol;Youn, Dae-Hee;Lee, Seok-Pil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.3
    • /
    • pp.111-118
    • /
    • 2008
  • In this paper, we propose a new automatic taxonomy generation algorithm for audio genre classification. The proposed algorithm automatically generates a hierarchical taxonomy based on the estimated classification accuracy at all possible nodes. Classification accuracy is estimated by applying the training data to the classifier using k-fold cross validation. Classification is then tested at every node, each of which consists of two clusters, by applying a one-versus-one support vector machine. To assess the performance of the proposed algorithm, we extracted various features representing characteristics such as timbre, rhythm, and pitch. We then compared the classification performance of the proposed algorithm with that of previous flat classifiers. The classification accuracy reaches 89 percent with the proposed scheme, which is 5 to 25 percent higher than the previous flat classification methods. With low-dimensional feature vectors in particular, it is 10 to 25 percent higher than previous algorithms.
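The k-fold cross validation used here to estimate per-node accuracy partitions the training data into k folds, holding each fold out in turn. A minimal index-splitting sketch (illustrative, not the paper's implementation):

```python
def k_fold_splits(n_samples, k):
    """Partition indices 0..n_samples-1 into k folds and yield
    (train_indices, validation_indices) pairs, one per fold."""
    indices = list(range(n_samples))
    # Distribute any remainder across the first n_samples % k folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

splits = list(k_fold_splits(10, 5))
```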

Automated Segmentation of Left Ventricular Myocardium on Cardiac Computed Tomography Using Deep Learning

  • Hyun Jung Koo;June-Goo Lee;Ji Yeon Ko;Gaeun Lee;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology
    • /
    • v.21 no.6
    • /
    • pp.660-669
    • /
    • 2020
  • Objective: To evaluate the accuracy of deep learning-based automated segmentation of the left ventricular (LV) myocardium using cardiac CT. Materials and Methods: To develop a fully automated algorithm, 100 subjects with coronary artery disease were randomly selected as a development set (50 training / 20 validation / 30 internal test). An experienced cardiac radiologist generated the manual segmentations for the development set. The trained model was evaluated using a validation set of 1000 cases generated by an experienced technician. Visual assessment was performed to compare the manual and automatic segmentations. In a quantitative analysis, sensitivity and specificity were calculated from the number of pixels where the two three-dimensional masks of the manual and deep learning segmentations overlapped. Similarity indices, such as the Dice similarity coefficient (DSC), were used to evaluate the margin of each segmented mask. Results: The sensitivity and specificity of automated segmentation for each segment (segments 1-16) were high (85.5-100.0%). The DSC was 88.3 ± 6.2%. Among 100 randomly selected cases, all manual and deep learning masks in the visual analysis were classified as very accurate to mostly accurate, and there were no inaccurate cases (manual vs. deep learning: very accurate, 31 vs. 53; accurate, 64 vs. 39; mostly accurate, 15 vs. 8). The number of very accurate cases was greater for the deep learning masks than for the manually segmented masks. Conclusion: We present deep learning-based automatic segmentation of the LV myocardium; the results are comparable to manual segmentation data, with high sensitivity, specificity, and similarity scores.
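The voxel-wise sensitivity and specificity described here come from overlap counts between the predicted and manual masks; a minimal sketch on toy 1D masks (not study data):

```python
import numpy as np

def sensitivity_specificity(pred, gt):
    """Voxel-wise sensitivity and specificity of a predicted
    binary mask against a ground-truth mask."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()      # both masks positive
    tn = np.logical_and(~pred, ~gt).sum()    # both masks negative
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    sensitivity = tp / (tp + fn) if tp + fn else 1.0
    specificity = tn / (tn + fp) if tn + fp else 1.0
    return sensitivity, specificity

p = np.array([1, 1, 0, 0, 1])
g = np.array([1, 0, 0, 0, 1])
sens, spec = sensitivity_specificity(p, g)
```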

Optimization of Multi-Atlas Segmentation with Joint Label Fusion Algorithm for Automatic Segmentation in Prostate MR Imaging

  • Choi, Yoon Ho;Kim, Jae-Hun;Kim, Chan Kyo
    • Investigative Magnetic Resonance Imaging
    • /
    • v.24 no.3
    • /
    • pp.123-131
    • /
    • 2020
  • Purpose: Joint label fusion (JLF) is a popular multi-atlas-based segmentation algorithm that compensates for dependent errors that may exist between atlases. However, to obtain good segmentation results, it is very important to set the algorithm's several free parameters to optimal values. In this study, we first investigate the feasibility of the JLF algorithm for prostate segmentation in MR images and then suggest an optimal set of parameters for automatic prostate segmentation by validating the results of each parameter combination. Materials and Methods: We acquired T2-weighted prostate MR images from 20 healthy volunteers and performed a series of cross validations for every parameter set of JLF. In each case, the atlases were rigidly registered to the target image. We then calculated their voting weights for label fusion from each combination of JLF's parameters (rpxy, rpz, rsxy, rsz, β). We evaluated segmentation performance with the five validation metrics of the Prostate MR Image Segmentation challenge. Results: As the number of voxels participating in the voting-weight calculation and the number of referenced atlases increased, overall segmentation performance gradually improved. The JLF algorithm showed the best results (dice similarity coefficient, 0.8495 ± 0.0392; relative volume difference, 15.2353 ± 17.2350; absolute relative volume difference, 18.8710 ± 13.1546; 95% Hausdorff distance, 7.2366 ± 1.8502; average boundary distance, 2.2107 ± 0.4972) with the parameters rpxy = 10, rpz = 1, rsxy = 3, rsz = 1, and β = 3. Conclusion: The evaluated results showed the feasibility of the JLF algorithm for automatic segmentation of prostate MRI. This empirical analysis of segmentation results by label fusion allows for the appropriate setting of parameters.
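The final step of any label-fusion scheme is a per-voxel weighted vote over the registered atlases. The sketch below shows only that voting step in simplified form; actual JLF derives the weights jointly from intensity similarity and correlated atlas errors, which is omitted here, and the labels and weights shown are hypothetical.

```python
def weighted_label_fusion(atlas_labels, weights):
    """Fuse the per-atlas labels proposed for one voxel into a
    consensus label by weighted voting (simplified view of the
    final step of multi-atlas label fusion)."""
    scores = {}
    for label, weight in zip(atlas_labels, weights):
        scores[label] = scores.get(label, 0.0) + weight
    # The label with the largest accumulated weight wins
    return max(scores, key=scores.get)

# Three atlases vote on one voxel; weights are hypothetical
label = weighted_label_fusion([1, 0, 1], [0.2, 0.5, 0.4])
```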

Application of the artificial intelligence for automatic detection of shipping noise in shallow-water (천해역 선박 소음 자동 탐지를 위한 인공지능 기법 적용)

  • Kim, Sunhyo;Jung, Seom-Kyu;Kang, Donhyug;Kim, Mira;Cho, Sungho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.4
    • /
    • pp.279-285
    • /
    • 2020
  • The temporal and spatial monitoring of passing vessels is important for the protection and management of the marine ecosystem in coastal areas. In this paper, we propose an automatic detection technique for passing vessels that utilizes artificial intelligence and the broadband striation patterns characteristic of the broadband noise radiated by a passing vessel. Acoustic measurements to collect underwater noise spectrum images and ship navigation information were conducted in the southern region of Jeju Island, South Korea, for 12 days (2016.07.15-07.26). A convolutional neural network model was optimized through learning and validation processes based on the collected images. The automatic detection performance for passing vessels is evaluated by precision (0.936), recall (0.830), average precision (0.824), and accuracy (0.949). In conclusion, the feasibility of automatically detecting passing vessels using artificial intelligence is confirmed, and future work is proposed based on these results.
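The precision, recall, and accuracy figures reported here derive from detection counts in the usual way; a minimal sketch, where the confusion-matrix counts are hypothetical and not the paper's data:

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, and accuracy from a detection
    confusion matrix (true/false positives and negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical counts for illustration
precision, recall, accuracy = detection_metrics(tp=83, fp=6, fn=17, tn=94)
```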

Evaluation of Regression Models with various Criteria and Optimization Methods for Pollutant Load Estimations (다양한 평가 지표와 최적화 기법을 통한 오염부하 산정 회귀 모형 평가)

  • Kim, Jonggun;Lim, Kyoung Jae;Park, Youn Shik
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2018.05a
    • /
    • pp.448-448
    • /
    • 2018
  • In this study, regression models (Load ESTimator and an eight-parameter model) were evaluated for estimating instantaneous pollutant loads under various criteria and optimization methods. As shown in the results, LOADEST, commonly used for interpolating pollutant loads, did not necessarily provide the best results with its automatically selected regression model. It is inferred that the various regression models in LOADEST need to be considered to find the best solution based on the characteristics of the watersheds to which it is applied. The recently developed eight-parameter model, integrated with a Genetic Algorithm (GA) and the Gradient Descent Method (GDM), was also compared with LOADEST, indicating that the eight-parameter model performed better than LOADEST but behaved differently in calibration and validation. The eight-parameter model with GDM could reproduce the nitrogen loads properly outside of the calibration period (validation). Furthermore, the accuracy and precision of the model estimations were evaluated using various criteria (e.g., $R^2$ and the gradient and constant of the linear regression line). The results showed higher precision, with $R^2$ values close to 1.0, for LOADEST, and better accuracy, with constants (of the linear regression line) close to 0.0, for the eight-parameter model with GDM. Hence, based on these findings, we recommend that users evaluate regression models under various criteria and calibration methods to obtain more accurate and precise pollutant load estimations.
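The evaluation criteria named here ($R^2$ plus the gradient and constant of a fitted line between estimated and observed loads) can be computed with ordinary least squares; a minimal sketch on toy data (not the study's loads):

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares line y = a*x + b and the coefficient
    of determination R^2 for the fit."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                 # gradient of the regression line
    b = my - a * mx               # constant (intercept)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

# Perfectly linear toy data: expect gradient 2, constant 1, R^2 = 1
a, b, r2 = linear_fit_r2([0, 1, 2, 3], [1, 3, 5, 7])
```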

Validation of Mid Air Collision Detection Model using Aviation Safety Data (항공안전 데이터를 이용한 항공기 공중충돌위험식별 모형 검증 및 고도화)

  • Paek, Hyunjin;Park, Bae-seon;Kim, Hyewook
    • Journal of the Korean Society for Aviation and Aeronautics
    • /
    • v.29 no.4
    • /
    • pp.37-44
    • /
    • 2021
  • In the case of South Korea, the airspace in which airlines can operate is extremely limited due to the military operational areas located within the Incheon flight information region. As a result, safety problems such as mid-air collision between aircraft or a Traffic alert and Collision Avoidance System Resolution Advisory (TCAS RA) may occur with higher probability than in wider airspace. To prevent such safety problems, a mid-air collision risk detection model based on Detect-And-Avoid (DAA) well-clear metrics is investigated. The model calculates the risk of mid-air collision between aircraft using aircraft trajectory data. In this paper, the practical use of the DAA well-clear-based model is validated. Aviation safety data, such as aviation safety mandatory reports and Automatic Dependent Surveillance-Broadcast data, are used to measure the performance of the model. The attributes of individual aircraft track data are analyzed to correct the threshold of each model parameter.
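Trajectory-based collision risk models of this kind build on geometric quantities such as the time and distance of closest approach between two tracks. The sketch below computes only that basic quantity for constant-velocity 2D tracks with made-up positions and velocities; the DAA well-clear metrics used in the paper involve additional thresholds and are not reproduced here.

```python
def closest_point_of_approach(p1, v1, p2, v2):
    """Time (clamped to t >= 0) and distance of closest approach for
    two aircraft moving with constant 2D velocity. Illustrative only;
    not the DAA well-clear metric itself."""
    dx = (p2[0] - p1[0], p2[1] - p1[1])   # relative position
    dv = (v2[0] - v1[0], v2[1] - v1[1])   # relative velocity
    dv2 = dv[0] ** 2 + dv[1] ** 2
    # Minimize |dx + dv*t|; if relative velocity is zero, t = 0
    t = 0.0 if dv2 == 0 else max(0.0, -(dx[0] * dv[0] + dx[1] * dv[1]) / dv2)
    cx = dx[0] + dv[0] * t
    cy = dx[1] + dv[1] * t
    return t, (cx ** 2 + cy ** 2) ** 0.5

# Hypothetical head-on encounter: 10 units apart, closing at 2 units/s
t_cpa, d_cpa = closest_point_of_approach((0, 0), (1, 0), (10, 0), (-1, 0))
```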

Blood pressure measurements and hypertension in infants, children, and adolescents: from the postmercury to mobile devices

  • Lim, Seon Hee;Kim, Seong Heon
    • Clinical and Experimental Pediatrics
    • /
    • v.65 no.2
    • /
    • pp.73-80
    • /
    • 2022
  • The mercury sphygmomanometer (MS) has been the gold standard for pediatric blood pressure (BP) measurement, and diagnosing hypertension is critical. However, because of environmental issues, alternatives are needed. Noninvasive BP measurement devices are largely divided into auscultatory and oscillometric types. The aneroid sphygmomanometer, the currently used auscultatory method, is inferior to the MS owing to limitations such as the need for validation and regular calibration, and it is difficult to apply to infants, in whom Korotkoff sounds are not audible. The oscillometric method uses an automatic device that eliminates errors caused by human observers and has the advantage of being easy to use; however, owing to its measurement accuracy issues, the development of an international validation protocol for children is important. The hybrid method, which combines the auscultatory and electronic methods, solves some of these problems by eliminating the observer bias of terminal digit preference while maintaining measurement accuracy; however, the auscultatory component remains limited. As the age-related characteristics of the pediatric group are heterogeneous, it is necessary to reconsider which BP measurement method is appropriate for this population. In addition, the mobile application-based BP measurement market is growing rapidly with the development of smartphone applications. Although more research is still needed on their accuracy, many experts expect that mobile application-based BP measurement will effectively reduce medical costs through increased ease of access and early BP management.

Automatic Extraction of Component Collaboration in Java Web Applications by Using Servlet Filters and Wrappers (자바 웹 앱에서 서블릿 필터와 래퍼를 이용한 컴포넌트 협력 과정 자동 추출 기법)

  • Oh, Jaewon;Ahn, Woo Hyun;Kim, Taegong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.7
    • /
    • pp.329-336
    • /
    • 2017
  • As web apps have evolved faster and become more complex, their validation and verification have become essential to their development and maintenance. Efficient validation and verification require understanding how web components collaborate with each other to meet user requests. Thus, this paper proposes a new approach to automatically extracting such collaboration when a user requests a new page. The approach is dynamic and, compared to static extraction approaches, less sensitive to web development languages and technologies. It treats the original web app as a black box and does not change the app's behavior. The empirical evaluation shows that our approach is applicable to extracting component collaboration and understanding the behavior of open-source web apps.