• Title/Abstract/Keyword: datasets

Search results: 2,005 items (processing time: 0.026 seconds)

Exploring indicators of genetic selection using the sniffer method to reduce methane emissions from Holstein cows

  • Yoshinobu Uemoto;Tomohisa Tomaru;Masahiro Masuda;Kota Uchisawa;Kenji Hashiba;Yuki Nishikawa;Kohei Suzuki;Takatoshi Kojima;Tomoyuki Suzuki;Fuminori Terada
    • Animal Bioscience
    • /
    • Vol. 37 No. 2
    • /
    • pp.173-183
    • /
    • 2024
  • Objective: This study aimed to evaluate whether the methane (CH4) to carbon dioxide (CO2) ratio (CH4/CO2) and methane-related traits obtained by the sniffer method can be used as indicators for genetic selection of Holstein cows with lower CH4 emissions. Methods: The sniffer method was used to simultaneously measure the concentrations of CH4 and CO2 during milking in each milking box of the automatic milking system to obtain CH4/CO2. Methane-related traits, which included CH4 emissions, CH4 per energy-corrected milk, methane conversion factor (MCF), and residual CH4, were calculated. First, we investigated the impact of the model with and without body weight (BW) on the lactation stage and parity for predicting methane-related traits using a first on-farm dataset (Farm 1; 400 records for 74 Holstein cows). Second, we estimated the genetic parameters for CH4/CO2 and methane-related traits using a second on-farm dataset (Farm 2; 520 records for 182 Holstein cows). Third, we compared the repeatability and environmental effects on these traits in both farm datasets. Results: The data from Farm 1 revealed that MCF can be reliably evaluated during the lactation stage and parity, even when BW is excluded from the model. Farm 2 data revealed low heritability and moderate repeatability for CH4/CO2 (0.12 and 0.46, respectively) and MCF (0.13 and 0.38, respectively). In addition, the estimated genetic correlation of milk yield with CH4/CO2 was low (0.07) and that with MCF was moderate (-0.53). The on-farm data indicated that CH4/CO2 and MCF could be evaluated consistently during the lactation stage and parity with moderate repeatability on both farms. Conclusion: This study demonstrated the on-farm applicability of the sniffer method for selecting cows with low CH4 emissions.
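The heritability and repeatability estimates above come from variance components of a repeatability model. A minimal sketch of how such estimates combine, in Python, with illustrative variance components chosen to reproduce the reported CH4/CO2 values of 0.12 and 0.46 — the components themselves are assumptions, not values from the study:

```python
def heritability(var_additive, var_perm_env, var_residual):
    """Narrow-sense heritability: additive variance over total phenotypic variance."""
    total = var_additive + var_perm_env + var_residual
    return var_additive / total

def repeatability(var_additive, var_perm_env, var_residual):
    """Repeatability: (additive + permanent environment) variance over total variance."""
    total = var_additive + var_perm_env + var_residual
    return (var_additive + var_perm_env) / total

# Illustrative components summing to 1.00, mimicking CH4/CO2
h2 = heritability(0.12, 0.34, 0.54)    # 0.12 (low heritability)
rep = repeatability(0.12, 0.34, 0.54)  # 0.46 (moderate repeatability)
```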

Deep Learning Algorithm for Automated Segmentation and Volume Measurement of the Liver and Spleen Using Portal Venous Phase Computed Tomography Images

  • Yura Ahn;Jee Seok Yoon;Seung Soo Lee;Heung-Il Suk;Jung Hee Son;Yu Sub Sung;Yedaun Lee;Bo-Kyeong Kang;Ho Sung Kim
    • Korean Journal of Radiology
    • /
    • Vol. 21 No. 8
    • /
    • pp.987-997
    • /
    • 2020
  • Objective: Measurement of the liver and spleen volumes has clinical implications. Although computed tomography (CT) volumetry is considered to be the most reliable noninvasive method for liver and spleen volume measurement, it has limited application in clinical practice due to its time-consuming segmentation process. We aimed to develop and validate a deep learning algorithm (DLA) for fully automated liver and spleen segmentation using portal venous phase CT images in various liver conditions. Materials and Methods: A DLA for liver and spleen segmentation was trained using a development dataset of portal venous CT images from 813 patients. Performance of the DLA was evaluated in two separate test datasets: dataset-1, which included 150 CT examinations in patients with various liver conditions (i.e., healthy liver, fatty liver, chronic liver disease, cirrhosis, and post-hepatectomy), and dataset-2, which included 50 pairs of CT examinations performed at our institution and at other institutions. The performance of the DLA was evaluated using the Dice similarity score (DSS) for segmentation and Bland-Altman 95% limits of agreement (LOA) for measurement of the volumetric indices, which were compared with those of ground truth manual segmentation. Results: In test dataset-1, the DLA achieved a mean DSS of 0.973 and 0.974 for liver and spleen segmentation, respectively, with no significant difference in DSS across different liver conditions (p = 0.60 and 0.26 for the liver and spleen, respectively). For the measurement of volumetric indices, the Bland-Altman 95% LOA was -0.17 ± 3.07% for liver volume and -0.56 ± 3.78% for spleen volume. In test dataset-2, DLA performance using CT images obtained at outside institutions and our institution was comparable for liver (DSS, 0.982 vs. 0.983; p = 0.28) and spleen (DSS, 0.969 vs. 0.968; p = 0.41) segmentation. 
Conclusion: The DLA enabled highly accurate segmentation and volume measurement of the liver and spleen using portal venous phase CT images of patients with various liver conditions.
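The two evaluation measures used above are simple to state. A minimal sketch in Python, using a toy flattened mask and illustrative volume measurements (not the study's data):

```python
from statistics import mean, stdev

def dice_score(pred, truth):
    """Dice similarity score between two flat binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2.0 * inter / (sum(pred) + sum(truth))

def bland_altman_loa(measured, reference):
    """Mean difference (bias) and 95% limits of agreement (bias ± 1.96 SD)."""
    diffs = [m - r for m, r in zip(measured, reference)]
    bias = mean(diffs)
    half = 1.96 * stdev(diffs)
    return bias, (bias - half, bias + half)

# Toy flattened masks: 2 overlapping voxels, 3 + 2 positives -> 2*2 / (3+2) = 0.8
print(dice_score([1, 1, 0, 1], [1, 1, 0, 0]))  # 0.8
```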

Correlation between MR Image-Based Radiomics Features and Risk Scores Associated with Gene Expression Profiles in Breast Cancer

  • 김가람;구유진;김준호;김은경
    • Journal of the Korean Society of Radiology
    • /
    • Vol. 81 No. 3
    • /
    • pp.632-643
    • /
    • 2020
  • Purpose: To analyze the relationships between MR image-based radiomics features and genomic characteristics of breast cancer, including molecular subtypes and risk scores based on gene expression profiles. Materials and Methods: Data made public through The Cancer Genome Atlas and The Cancer Imaging Archive were used. Radiomics features were extracted from MR images of 122 breast cancers. PAM50 subtypes were classified and risk scores were assigned according to gene expression profiles. The relationships between radiomics features and molecular characteristics were analyzed. Penalized generalized regression analysis was used to build prediction models. Results: PAM50 subtype was significantly associated with maximum 2D diameter (p = 0.0189), degree of correlation (p = 0.0386), and inverse difference moment normalized (p = 0.0337). Among the risk scoring systems, GGI and GENE70 shared eight statistically significant radiomics features (p = 0.0008-0.0492). Maximum 2D diameter was the most significantly associated feature for both risk scoring systems (p = 0.0139 and p = 0.0008, respectively), but the overall strength of association in the prediction models was weak, with the highest correlation coefficient being 0.2171 for GENE70. Conclusion: Among the radiomics features, maximum 2D diameter, degree of correlation, and inverse difference moment normalized showed significant, although weak, associations with PAM50 subtypes and with gene expression profile-based risk scores such as GENE70.

A Gap Analysis Using Spatial Data and Social Media Big Data Analysis Results of Island Tourism Resources for Sustainable Resource Management

  • 이성희;이주경;손용훈;김용진
    • Journal of Korean Society of Rural Planning
    • /
    • Vol. 30 No. 2
    • /
    • pp.13-24
    • /
    • 2024
  • This study conducts an analysis of social media big data pertaining to island tourism resources, aiming to discern the diverse forms and categories of island tourism favored by consumers, ascertain predominant resources, and facilitate objective decision-making grounded in scientific methodologies. To achieve this objective, an examination of blog posts published on Naver from 2022 to 2023 was undertaken, utilizing keywords such as 'Island tourism', 'Island travel', and 'Island backpacking' as focal points for analysis. Text mining techniques were applied to sift through the data. Among the resources identified, the port emerged as a significant asset, serving as a pivotal conduit linking the island and mainland and holding substantial importance as a focal point and resource for tourist access to the island. Furthermore, an analysis of the disparity between existing island tourism resources and those acknowledged by tourists who actively engage with and appreciate island destinations led to the identification of 186 newly emerging resources. These nascent resources predominantly clustered within five regions: Incheon Metropolitan City, Tongyeong/Geoje City, Jeju Island, Ulleung-gun, and Shinan-gun. A scrutiny of these resources, categorized according to the tourism resource classification system, revealed a notable presence of new resources, chiefly in the domains of 'rural landscape', 'tourist resort/training facility', 'transportation facility', and 'natural resource'. Notably, many of these emerging resources were previously overlooked in official management targets or resource inventories pertaining to existing island tourism resources. Noteworthy examples include ports, beaches, and mountains, which, despite constituting a substantial proportion of the newly identified tourist resources, were not accorded prominence in spatial information datasets. 
This study holds significance in its ability to unearth novel tourism resources recognized by island tourism consumers through a gap analysis approach that juxtaposes the existing status of island tourism resource data with techniques utilizing social media big data. Furthermore, the methodology delineated in this research offers a valuable framework for domestic local governments to gauge local tourism demand and embark on initiatives for tourism development or regional revitalization.
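The gap analysis described above — contrasting resources mentioned by tourists in social media text with those recorded in official spatial data — reduces to a set difference over extracted keywords. A minimal sketch in Python; the place categories, counts, and inventory are hypothetical, not the study's data:

```python
from collections import Counter

# Hypothetical keyword frequencies extracted from blog posts via text mining
blog_mentions = Counter({
    "port": 120, "beach": 95, "lighthouse": 40, "mountain": 33, "museum": 12,
})

# Hypothetical inventory of officially managed tourism resources
official_inventory = {"lighthouse", "museum"}

# Emerging resources: recognized by tourists but absent from official data,
# ranked by how often they appear in social media posts
emerging = {k: n for k, n in blog_mentions.items() if k not in official_inventory}
for name, count in sorted(emerging.items(), key=lambda kv: -kv[1]):
    print(name, count)  # port 120 / beach 95 / mountain 33
```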

Low-Dose Three-Dimensional Rotational Angiography for Evaluating Intracranial Aneurysms: Analysis of Image Quality and Radiation Dose

  • Hee Jong Ki;Bum-soo Kim;Jun-Ki Kim;Jai Ho Choi;Yong Sam Shin;Yangsean Choi;Na-Young Shin;Jinhee Jang;Kook-jin Ahn
    • Korean Journal of Radiology
    • /
    • Vol. 23 No. 2
    • /
    • pp.256-263
    • /
    • 2022
  • Objective: This study aimed to evaluate the image quality and dose reduction of low-dose three-dimensional (3D) rotational angiography (RA) for evaluating intracranial aneurysms. Materials and Methods: We retrospectively evaluated the clinical data and 3D RA datasets obtained from 146 prospectively registered patients (male:female, 46:100; median age, 58 years; range, 19-81 years). The subjective image quality of 79 examinations obtained with a conventional method and 67 examinations obtained with a low-dose (5-second, 0.10-μGy/frame) method was assessed by two neurointerventionists using a 3-point scale for four evaluation criteria. The total image quality score was then obtained as the average of the four scores. The image quality scores were compared between the two methods using noninferiority statistical testing, with a margin of -0.2 (i.e., score of low-dose group - score of conventional group). For the evaluation of dose reduction, dose-area product (DAP) and air kerma (AK) were analyzed and compared between the two groups. Results: The mean total image quality score ± standard deviation of the 3D RA was 2.97 ± 0.17 by reader 1 and 2.95 ± 0.20 by reader 2 for the conventional group, and 2.92 ± 0.30 and 2.95 ± 0.22, respectively, for the low-dose group. The image quality of the 3D RA in the low-dose group was not inferior to that of the conventional group according to the total image quality score as well as the individual scores for the four criteria for both readers. The mean DAP and AK per rotation were 5.87 Gy·cm2 and 0.56 Gy, respectively, in the conventional group, and 1.32 Gy·cm2 (p < 0.001) and 0.17 Gy (p < 0.001), respectively, in the low-dose group. Conclusion: Low-dose 3D RA was not inferior in image quality and reduced the radiation dose by 70%-77% compared with conventional 3D RA in evaluating intracranial aneurysms.
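The noninferiority test above declares the low-dose protocol acceptable when the lower confidence bound of the score difference (low-dose minus conventional) stays above the margin of -0.2. A minimal sketch in Python, using a normal-approximation confidence interval and illustrative paired differences — both are assumptions, not the study's actual statistical method or data:

```python
from statistics import mean, stdev
from math import sqrt

def noninferior(diffs, margin=-0.2, z=1.96):
    """True if the lower 95% CI bound of the mean difference exceeds the margin."""
    lower = mean(diffs) - z * stdev(diffs) / sqrt(len(diffs))
    return lower > margin

# Illustrative paired differences (low-dose score - conventional score)
diffs = [-0.1, 0.0, 0.1, 0.0, -0.05, 0.05, 0.0, 0.0]
print(noninferior(diffs))  # True: not inferior at margin -0.2
```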

Bone Age Assessment Using Artificial Intelligence in Korean Pediatric Population: A Comparison of Deep-Learning Models Trained With Healthy Chronological and Greulich-Pyle Ages as Labels

  • Pyeong Hwa Kim;Hee Mang Yoon;Jeong Rye Kim;Jae-Yeon Hwang;Jin-Ho Choi;Jisun Hwang;Jaewon Lee;Jinkyeong Sung;Kyu-Hwan Jung;Byeonguk Bae;Ah Young Jung;Young Ah Cho;Woo Hyun Shim;Boram Bak;Jin Seong Lee
    • Korean Journal of Radiology
    • /
    • Vol. 24 No. 11
    • /
    • pp.1151-1163
    • /
    • 2023
  • Objective: To develop a deep-learning-based bone age prediction model optimized for Korean children and adolescents and evaluate its feasibility by comparing it with a Greulich-Pyle-based deep-learning model. Materials and Methods: A convolutional neural network was trained to predict age according to the bone development shown on a hand radiograph (bone age) using 21036 hand radiographs of Korean children and adolescents without known bone development-affecting diseases/conditions obtained between 1998 and 2019 (median age [interquartile range {IQR}], 9 [7-12] years; male:female, 11794:9242) and their chronological ages as labels (Korean model). We constructed 2 separate external datasets consisting of Korean children and adolescents with healthy bone development (Institution 1: n = 343; median age [IQR], 10 [4-15] years; male: female, 183:160; Institution 2: n = 321; median age [IQR], 9 [5-14] years; male: female, 164:157) to test the model performance. The mean absolute error (MAE), root mean square error (RMSE), and proportions of bone age predictions within 6, 12, 18, and 24 months of the reference age (chronological age) were compared between the Korean model and a commercial model (VUNO Med-BoneAge version 1.1; VUNO) trained with Greulich-Pyle-based age as the label (GP-based model). Results: Compared with the GP-based model, the Korean model showed a lower RMSE (11.2 vs. 13.8 months; P = 0.004) and MAE (8.2 vs. 10.5 months; P = 0.002), a higher proportion of bone age predictions within 18 months of chronological age (88.3% vs. 82.2%; P = 0.031) for Institution 1, and a lower MAE (9.5 vs. 11.0 months; P = 0.022) and higher proportion of bone age predictions within 6 months (44.5% vs. 36.4%; P = 0.044) for Institution 2. 
Conclusion: The Korean model trained using the chronological ages of Korean children and adolescents without known bone development-affecting diseases/conditions as labels performed better in bone age assessment than the GP-based model in the Korean pediatric population. Further validation is required to confirm its accuracy.
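The accuracy measures compared above (MAE, RMSE, and the proportion of predictions within a tolerance of the reference age) are straightforward to compute. A minimal sketch in Python with illustrative ages in months, not the study's data:

```python
from math import sqrt

def mae(pred, ref):
    """Mean absolute error between predicted and reference ages."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

def rmse(pred, ref):
    """Root mean square error between predicted and reference ages."""
    return sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred))

def within(pred, ref, months):
    """Proportion of predictions within ± months of the reference age."""
    hits = sum(abs(p - r) <= months for p, r in zip(pred, ref))
    return hits / len(pred)

# Illustrative predicted vs. chronological ages in months
pred = [100, 118, 150, 95]
ref  = [108, 120, 140, 96]
print(mae(pred, ref), rmse(pred, ref), within(pred, ref, 6))  # 5.25 6.5 0.5
```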

Deep Learning-Assisted Diagnosis of Pediatric Skull Fractures on Plain Radiographs

  • Jae Won Choi;Yeon Jin Cho;Ji Young Ha;Yun Young Lee;Seok Young Koh;June Young Seo;Young Hun Choi;Jung-Eun Cheon;Ji Hoon Phi;Injoon Kim;Jaekwang Yang;Woo Sun Kim
    • Korean Journal of Radiology
    • /
    • Vol. 23 No. 3
    • /
    • pp.343-354
    • /
    • 2022
  • Objective: To develop and evaluate a deep learning-based artificial intelligence (AI) model for detecting skull fractures on plain radiographs in children. Materials and Methods: This retrospective multi-center study consisted of a development dataset acquired from two hospitals (n = 149 and 264) and an external test set (n = 95) from a third hospital. Datasets included children with head trauma who underwent both skull radiography and cranial computed tomography (CT). The development dataset was split into training, tuning, and internal test sets in a ratio of 7:1:2. The reference standard for skull fracture was cranial CT. Two radiology residents, a pediatric radiologist, and two emergency physicians participated in a two-session observer study on an external test set with and without AI assistance. We obtained the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity along with their 95% confidence intervals (CIs). Results: The AI model showed an AUROC of 0.922 (95% CI, 0.842-0.969) in the internal test set and 0.870 (95% CI, 0.785-0.930) in the external test set. The model had a sensitivity of 81.1% (95% CI, 64.8%-92.0%) and specificity of 91.3% (95% CI, 79.2%-97.6%) for the internal test set and 78.9% (95% CI, 54.4%-93.9%) and 88.2% (95% CI, 78.7%-94.4%), respectively, for the external test set. With the model's assistance, significant AUROC improvement was observed in radiology residents (pooled results) and emergency physicians (pooled results) with the difference from reading without AI assistance of 0.094 (95% CI, 0.020-0.168; p = 0.012) and 0.069 (95% CI, 0.002-0.136; p = 0.043), respectively, but not in the pediatric radiologist with the difference of 0.008 (95% CI, -0.074-0.090; p = 0.850). Conclusion: A deep learning-based AI model improved the performance of inexperienced radiologists and emergency physicians in diagnosing pediatric skull fractures on plain radiographs.
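Sensitivity and specificity as reported above follow directly from confusion-matrix counts. A minimal sketch in Python; the counts are illustrative, chosen only to reproduce the internal-test point estimates (81.1% and 91.3%), and the Wald interval is a normal approximation assumed for illustration rather than the study's CI method:

```python
from math import sqrt

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

def wald_ci(k, n, z=1.96):
    """Normal-approximation 95% CI for a proportion k/n, clipped to [0, 1]."""
    p = k / n
    half = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Illustrative counts: 30 of 37 fractures detected, 42 of 46 negatives ruled out
sens, spec = sens_spec(tp=30, fn=7, tn=42, fp=4)
print(round(sens, 3), round(spec, 3))  # 0.811 0.913
print(wald_ci(30, 37))                 # Wald interval for sensitivity
```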

Three-Dimensional Printed Model of Partial Anomalous Pulmonary Venous Return with Biatrial Connection

  • 김명경;김성목;김은경;장성아;전태국;최연현
    • Journal of the Korean Society of Radiology
    • /
    • Vol. 81 No. 6
    • /
    • pp.1523-1528
    • /
    • 2020
  • Partial anomalous pulmonary venous return is a rare type of congenital pulmonary venous anomaly that can often be overlooked at diagnosis. In most cases, the diagnosis is made with noninvasive imaging such as echocardiography, CT, or MRI; however, imaging interpretation on a 2D monitor is limited for understanding the three-dimensionally complex structure of the heart. Recently, 3D printing techniques that create cardiac models from medical imaging data obtained with CT and MRI have been introduced, and their use is gradually increasing. In this case report, we present the CT images and a 3D printed model of a patient in whom the right upper and right middle pulmonary veins drained separately into the superior vena cava, with a connection between the two atria formed through the right middle pulmonary vein.

Analysis of Research Trends in Deep Learning-Based Video Captioning

  • 려치;이은주;김영수
    • KIPS Transactions on Software and Data Engineering
    • /
    • Vol. 13 No. 1
    • /
    • pp.35-49
    • /
    • 2024
  • As an important outcome of the convergence of computer vision and natural language processing, video captioning is a key research direction in artificial intelligence. This technology enables automatic understanding of video content and its linguistic expression, allowing computers to convert the visual information of a video into text. This paper provides an initial analysis of research trends in deep learning-based video captioning, dividing the field into four main categories — CNN-RNN-based models, RNN-RNN-based models, multimodal models, and Transformer-based models — and discusses the concepts, characteristics, and strengths and weaknesses of each type of model. The paper also lists the datasets and performance evaluation methods commonly used in video captioning. The datasets cover a wide range of domains and scenarios, providing extensive resources for training and validating video captioning models. For performance evaluation, the main evaluation metrics are described, giving researchers a practical reference for assessing model performance from multiple angles. Finally, as future research directions for video captioning, the paper presents key challenges that require continued improvement, such as maintaining temporal consistency and accurately describing dynamic scenes, which add complexity in real applications, as well as newly emerging research topics such as temporal relationship modeling and multimodal data integration.

Proposal of Standardization Plan for Defense Unstructured Datasets based on Unstructured Dataset Standard Format

  • 황윤영;손지성
    • Journal of Internet Computing and Services
    • /
    • Vol. 25 No. 1
    • /
    • pp.189-198
    • /
    • 2024
  • In the defense sector as well as the private sector, artificial intelligence is regarded as an advanced technology that must be adopted for the advancement of national defense; in particular, AI has been selected as a core task of defense science and technology innovation, and the importance of data continues to grow. Defense policy is shifting from a closed data policy toward data sharing and utilization, and efforts are being made to secure the high-quality data needed for the advancement of defense. In particular, budget and institutional reviews for data acquisition are under way so that related procedures can reflect the unique characteristics of AI and big data, and so that R&D can begin with a sufficient amount of high-quality data already secured. However, while standardization and quality criteria for both structured and unstructured data are needed at the defense level, the defense sector has so far proposed such criteria only for structured data, and this gap needs to be addressed. This paper proposes an unstructured dataset standard format for defense unstructured datasets, which are most needed for defense AI, and, based on this format, proposes a standardization plan for defense unstructured datasets.