• Title/Summary/Keyword: network performance


Enhanced Lung Cancer Segmentation with Deep Supervision and Hybrid Lesion Focal Loss in Chest CT Images (흉부 CT 영상에서 심층 감독 및 하이브리드 병변 초점 손실 함수를 활용한 폐암 분할 개선)

  • Min Jin Lee;Yoon-Seon Oh;Helen Hong
    • Journal of the Korea Computer Graphics Society / v.30 no.1 / pp.11-17 / 2024
  • Lung cancer segmentation in chest CT images is challenging due to the varying sizes of tumors and the presence of surrounding structures with similar intensity values. To address these issues, we propose a lung cancer segmentation network that incorporates deep supervision and uses UNet3+ as the backbone. Additionally, we propose a hybrid lesion focal loss function comprising three components, pixel-based, region-based, and shape-based, which allows the network to focus on tumor regions that are small relative to the background and to use shape information to handle ambiguous boundaries. We validate the proposed method through comparative experiments with UNet and UNet3+ and demonstrate that it achieves a superior Dice Similarity Coefficient (DSC) for tumors of all sizes.
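
The hybrid lesion focal loss above combines pixel-, region-, and shape-based terms. Below is a minimal PyTorch sketch of one plausible combination, using a binary focal loss as the pixel term, a soft Dice loss as the region term, and a distance-map-weighted boundary term as the shape term; the exact formulation and weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def hybrid_lesion_loss(logits, target, dist_map,
                       alpha=0.25, gamma=2.0,
                       w_pixel=1.0, w_region=1.0, w_shape=1.0, eps=1e-6):
    """Illustrative pixel + region + shape composite loss for binary segmentation.

    logits:   (N, 1, H, W) raw network outputs
    target:   (N, 1, H, W) binary ground-truth masks (float)
    dist_map: (N, 1, H, W) distance transform of the ground-truth boundary
    """
    prob = torch.sigmoid(logits)

    # Pixel term: binary focal loss, down-weighting easy background pixels.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1 - prob) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    focal = (alpha_t * (1 - p_t) ** gamma * bce).mean()

    # Region term: soft Dice loss over each mask.
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1 - ((2 * inter + eps) / (union + eps)).mean()

    # Shape term: boundary-style penalty weighting predictions by the
    # distance map of the ground-truth contour.
    boundary = (prob * dist_map).mean()

    return w_pixel * focal + w_region * dice + w_shape * boundary
```

Under deep supervision, a composite loss of this kind would typically be applied to each decoder side output and the per-level losses summed or weighted.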

Relationship networks among nurses in acute nursing care units (종합병원 간호단위의 간호사 관계 네트워크 연구)

  • Park, Seungmi;Park, Eun-Jun
    • The Journal of Korean Academic Society of Nursing Education / v.30 no.2 / pp.182-191 / 2024
  • Purpose: The purpose of this study was to explore the characteristics of social networks among registered nurses in acute nursing care units. Methods: This study used a survey design. Four nursing units from two acute hospitals were selected by convenience sampling, and 83 nurses from those units participated in the study in July 2022. The positive-influence networks among nurses were the friendship, collaboration, advice, and referent networks, and the negative-influence networks were the avoidance and bullying networks. Using the NetMiner program, the k-means clustering technique was applied to group nodes with similar characteristics. The general characteristics of the participants were analyzed by mean, standard deviation, frequency, and ANOVA or chi-squared test. Results: Dividing the 83 participating nurses into four clusters identified positive influencers, silent peers, unwelcome peers, and active bullies. Nurses in the positive-influencer group were frequently named in the friendship, collaboration, advice, and referent networks, whereas nurses in the unwelcome-peer and active-bully groups were frequently named in the avoidance and bullying networks. Conclusion: Social networks that have a positive or negative impact on nursing performance arise from different relationships between nurses. Nurse managers can use these findings to create a more supportive and collaborative environment. Further research is needed to develop intervention programs that improve interactions and relationships between fellow nurses.
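
For readers who want to reproduce this kind of analysis with open-source tools rather than NetMiner, the sketch below clusters the nodes of several nomination networks with k-means. It assumes, purely for illustration and not as the study's actual feature set, that each nurse is described by her in-degree centrality in each network.

```python
import networkx as nx
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical edge lists: (nominating nurse, nominated nurse) per network type.
networks = {
    "friendship": [("n01", "n02"), ("n03", "n02"), ("n04", "n05")],
    "advice":     [("n02", "n01"), ("n03", "n01"), ("n05", "n04")],
    "avoidance":  [("n01", "n05"), ("n02", "n05")],
}

nodes = sorted({n for edges in networks.values() for e in edges for n in e})

# Use in-degree centrality in each network as the node's feature vector.
features = pd.DataFrame(index=nodes)
for name, edges in networks.items():
    g = nx.DiGraph(edges)
    g.add_nodes_from(nodes)
    features[name] = pd.Series(nx.in_degree_centrality(g))

# Partition the nurses into four clusters, mirroring the study's four groups.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(pd.Series(clusters, index=nodes, name="cluster"))
```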

Comprehensive Lipid Profiling Recapitulates Enhanced Lipolysis and Fatty Acid Metabolism in Intimal Foamy Macrophages From Murine Atherosclerotic Aorta

  • Jae Won Seo;Kyu Seong Park;Gwang Bin Lee;Sang-eun Park;Jae-Hoon Choi;Myeong Hee Moon
    • IMMUNE NETWORK / v.23 no.4 / pp.28.1-28.20 / 2023
  • Lipid accumulation in macrophages is a prominent phenomenon observed in atherosclerosis. Previously, intimal foamy macrophages (FM) showed decreased inflammatory gene expression compared to intimal non-foamy macrophages (NFM). Since reprogramming of lipid metabolism in macrophages affects immunological functions, lipid profiling of intimal macrophages appears to be important for understanding the phenotypic changes of macrophages in atherosclerotic lesions. While lipidomic analysis has been performed in atherosclerotic aortic tissues and cultured macrophages, direct lipid profiling has not been performed in primary aortic macrophages from atherosclerotic aortas. We utilized nanoflow ultrahigh-performance liquid chromatography-tandem mass spectrometry to provide comprehensive lipid profiles of intimal non-foamy and foamy macrophages and adventitial macrophages from Ldlr-/- mouse aortas. We also analyzed the gene expression of each macrophage type related to lipid metabolism. FM showed increased levels of fatty acids, cholesterol esters, phosphatidylcholine, lysophosphatidylcholine, phosphatidylinositol, and sphingomyelin. However, phosphatidylethanolamine, phosphatidic acid, and ceramide levels were decreased in FM compared to those in NFM. Interestingly, FM showed decreased triacylglycerol (TG) levels. Expressions of lipolysis-related genes including Pnpla2 and Lpl were markedly increased but expressions of Lpin2 and Dgat1 related to TG synthesis were decreased in FM. Analysis of transcriptome and lipidome data revealed differences in the regulation of each lipid metabolic pathway in aortic macrophages. These comprehensive lipidomic data could clarify the phenotypes of macrophages in the atherosclerotic aorta.

Implementation of a Scheme Mobile Programming Application and Performance Evaluation of the Interpreter (Scheme 프로그래밍 모바일 앱 구현과 인터프리터 성능 평가)

  • Dongseob Kim;Sangkon Han;Gyun Woo
    • The Transactions of the Korea Information Processing Society / v.13 no.3 / pp.122-129 / 2024
  • Although programming education has recently been emphasized, elementary, middle, and high school students still struggle with it. Most programming environments for these students are based on block coding, which hinders their transition to text coding. The traditional PC environment also has drawbacks, such as maintenance problems. In this situation, mobile applications can be considered as alternative programming environments. This paper addresses the design and implementation of a coding application for mobile devices. As a prototype, a Scheme interpreter mobile app is proposed; Scheme, which supports multi-paradigm programming, has been used for programming courses at MIT. The implementation has the advantage of not consuming network bandwidth, since it is designed as a standalone application. According to the benchmark results, the execution time on Android devices, relative to that on a desktop, was 131% for the Derivative benchmark and 157% for the Tak benchmark. Furthermore, the maximum execution times of the benchmark programs on the Android device were 19.8 ms for the Derivative and 131.15 ms for the Tak benchmark. This confirms that an Android device chosen for programming education imposes no significant constraints on training.
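
The Tak benchmark cited above is the classic Takeuchi recursion benchmark. For reference, here is a small Python sketch of the function with a simple timing harness; the arguments (18, 12, 6) are the conventional Gabriel benchmark inputs and are an assumption, since the abstract does not state the exact inputs used.

```python
import time

def tak(x, y, z):
    """Takeuchi function: a classic micro-benchmark dominated by recursive calls."""
    if y < x:
        return tak(tak(x - 1, y, z),
                   tak(y - 1, z, x),
                   tak(z - 1, x, y))
    return z

start = time.perf_counter()
result = tak(18, 12, 6)   # conventional benchmark arguments (assumed)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"tak(18, 12, 6) = {result}, {elapsed_ms:.2f} ms")
```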

Development of Deep Learning-based Automatic Classification of Architectural Objects in Point Clouds for BIM Application in Renovating Aging Buildings (딥러닝 기반 노후 건축물 리모델링 시 BIM 적용을 위한 포인트 클라우드의 건축 객체 자동 분류 기술 개발)

  • Kim, Tae-Hoon;Gu, Hyeong-Mo;Hong, Soon-Min;Choo, Seoung-Yeon
    • Journal of KIBIM / v.13 no.4 / pp.96-105 / 2023
  • This study develops a building-object recognition technology for efficient use in remodeling buildings that were constructed without drawings. Against the backdrop of smart technologies emerging in the era of the 4th industrial revolution, this research contributes to the architectural field by introducing a deep learning-based method for automatic object classification and recognition using point cloud data. We use a TD3D network with voxels, optimizing its performance by adjusting the voxel size and the number of blocks. The technology classifies building objects such as walls, floors, and roofs from 3D scanning data, labeling them in polygonal form to minimize boundary ambiguities, although challenges in classifying object boundaries were still observed. The model automatically classifies non-building objects, reducing the manual effort of data matching, and distinguishes between elements to be demolished and those to be retained during remodeling. Dataset loss was minimized by labeling with the extremities of the x, y, and z coordinates. The research aims to enhance the efficiency of building-object classification and to improve the quality of architectural plans by reducing the manpower and time required during remodeling, in line with its goal of developing an efficient classification technology. Future work can extend to creating classified objects with parametric tools from the polygon-labeled datasets, offering meaningful numerical analysis for remodeling processes; continued research in this direction is expected to significantly advance the efficiency of building remodeling techniques.
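
Voxel size is one of the two hyperparameters the study tunes. The NumPy sketch below shows the basic voxelization step for a point cloud; the array shapes, the 0.05 m voxel size, and the random "room" are illustrative assumptions, not the study's setup.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Group 3D points into grid cells of edge length `voxel_size` (meters).

    points: (N, 3) array of x, y, z coordinates from a 3D scan.
    Returns each point's integer voxel index and the set of occupied voxels.
    """
    origin = points.min(axis=0)  # anchor the grid at the cloud's extremities
    voxel_idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    occupied = np.unique(voxel_idx, axis=0)
    return voxel_idx, occupied

# Hypothetical scan: 10,000 random points inside a 4 m x 3 m x 2.5 m room.
rng = np.random.default_rng(0)
points = rng.uniform([0, 0, 0], [4.0, 3.0, 2.5], size=(10_000, 3))

idx, occupied = voxelize(points, voxel_size=0.05)
print(f"{len(points)} points -> {len(occupied)} occupied voxels")
```

Smaller voxels preserve more boundary detail, which is relevant to the boundary ambiguities noted above, at the cost of more occupied cells and memory.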

Coating defect classification method for steel structures with vision-thermography imaging and zero-shot learning

  • Jun Lee;Kiyoung Kim;Hyeonjin Kim;Hoon Sohn
    • Smart Structures and Systems / v.33 no.1 / pp.55-64 / 2024
  • This paper proposes a fusion imaging-based coating-defect classification method for steel structures that uses zero-shot learning. In the proposed method, a halogen lamp generates heat energy on the coating surface of a steel structure, and the resulting heat responses are measured by an infrared (IR) camera while photos of the coating surface are captured by a charge-coupled device (CCD) camera. The measured heat responses and visual images are then analyzed using zero-shot learning to classify the coating defects, and the estimated coating defects are visualized across the inspection surface of the steel structure. In contrast to older approaches to coating-defect classification, which relied on visual inspection and were limited to surface defects, and to older artificial neural network (ANN)-based methods, which required large amounts of data for training and validation, the proposed method accurately classifies both internal and external defects and can classify coating defects of unobserved classes that were not included in training. Additionally, the proposed model readily learns additional classification conditions, making it simple to add classes for new problems of interest and field applications. In field-test validation, the defect-type classification accuracy improved by 22.7% when visual and thermal imaging were fused, compared to using the visual dataset alone. Furthermore, the classification accuracy of the proposed method on a test dataset containing only trained classes was 100%. With word-embedding vectors for the labels of untrained classes, the classification accuracy of the proposed method was 86.4%.
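
Zero-shot classification with word-embedding label vectors, as described above, typically projects the fused visual-thermal feature into the embedding space and picks the nearest class-name embedding, which is how classes unseen during training can still be assigned. The NumPy sketch below illustrates that idea; the projection matrix, feature dimensions, and defect-class names are placeholders, not the paper's model.

```python
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def zero_shot_classify(image_feature, projection, label_embeddings):
    """Project a fused visual+thermal feature into the word-embedding space
    and return the label whose embedding is most similar."""
    projected = projection @ image_feature
    scores = {label: cosine_similarity(projected, emb)
              for label, emb in label_embeddings.items()}
    return max(scores, key=scores.get), scores

# Placeholder dimensions: 512-d fused image feature, 300-d word embeddings.
rng = np.random.default_rng(0)
projection = rng.normal(size=(300, 512))       # mapping learned on the *seen* classes
label_embeddings = {                           # embeddings of defect-class names
    "blister":      rng.normal(size=300),      # seen during training
    "crack":        rng.normal(size=300),      # seen during training
    "delamination": rng.normal(size=300),      # unseen class, word vector only
}

label, scores = zero_shot_classify(rng.normal(size=512), projection, label_embeddings)
print("predicted defect:", label)
```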

Clinical Validation of a Deep Learning-Based Hybrid (Greulich-Pyle and Modified Tanner-Whitehouse) Method for Bone Age Assessment

  • Kyu-Chong Lee;Kee-Hyoung Lee;Chang Ho Kang;Kyung-Sik Ahn;Lindsey Yoojin Chung;Jae-Joon Lee;Suk Joo Hong;Baek Hyun Kim;Euddeum Shim
    • Korean Journal of Radiology / v.22 no.12 / pp.2017-2025 / 2021
  • Objective: To evaluate the accuracy and clinical efficacy of a hybrid Greulich-Pyle (GP) and modified Tanner-Whitehouse (TW) artificial intelligence (AI) model for bone age assessment. Materials and Methods: A deep learning-based model was trained on an open dataset of multiple ethnicities. A total of 102 hand radiographs (51 male and 51 female; mean age ± standard deviation = 10.95 ± 2.37 years) from a single institution were selected for external validation. Three human experts performed bone age assessments based on the GP atlas to develop a reference standard. Two study radiologists performed bone age assessments with and without AI model assistance in two separate sessions, for which the reading time was recorded. The performance of the AI software was assessed by comparing the mean absolute difference between the AI-calculated bone age and the reference standard. The reading time was compared between reading with and without AI using a paired t test. Furthermore, the reliability between the two study radiologists' bone age assessments was assessed using intraclass correlation coefficients (ICCs), and the results were compared between reading with and without AI. Results: The bone ages assessed by the experts and the AI model were not significantly different (11.39 ± 2.74 years and 11.35 ± 2.76 years, respectively, p = 0.31). The mean absolute difference was 0.39 years (95% confidence interval, 0.33-0.45 years) between the automated AI assessment and the reference standard. The mean reading time of the two study radiologists was reduced from 54.29 to 35.37 seconds with AI model assistance (p < 0.001). The ICC of the two study radiologists slightly increased with AI model assistance (from 0.945 to 0.990). Conclusion: The proposed AI model was accurate for assessing bone age. Furthermore, this model appeared to enhance the clinical efficacy by reducing the reading time and improving the inter-observer reliability.
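
The statistics reported above (mean absolute difference against the reference standard, a paired t-test on reading times, and ICCs between readers) are straightforward to compute. A SciPy sketch with illustrative numbers, not the study data, is given below.

```python
import numpy as np
from scipy import stats

# Illustrative values only, not the study data.
reference = np.array([10.5, 12.0, 9.8, 11.3, 13.1])        # expert-consensus bone age (years)
ai_output = np.array([10.9, 11.6, 10.2, 11.0, 13.5])        # AI-estimated bone age (years)
time_without_ai = np.array([58.1, 52.4, 55.0, 49.7, 56.2])  # reading time (s)
time_with_ai = np.array([36.2, 34.8, 35.9, 33.1, 37.0])

# Mean absolute difference between the AI estimate and the reference standard.
mad = np.mean(np.abs(ai_output - reference))

# Paired t-test: does AI assistance change the reading time?
t_stat, p_value = stats.ttest_rel(time_without_ai, time_with_ai)

print(f"mean absolute difference = {mad:.2f} years")
print(f"paired t-test on reading time: t = {t_stat:.2f}, p = {p_value:.4f}")
# Inter-reader reliability (ICC) can be computed with, e.g., pingouin.intraclass_corr.
```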

Scientometrics-based R&D Topography Analysis to Identify Research Trends Related to Image Segmentation (이미지 분할(image segmentation) 관련 연구 동향 파악을 위한 과학계량학 기반 연구개발지형도 분석)

  • Young-Chan Kim;Byoung-Sam Jin;Young-Chul Bae
    • Journal of the Korean Society of Industry Convergence / v.27 no.3 / pp.563-572 / 2024
  • Image processing and computer vision technologies are becoming increasingly important in a variety of application fields that require sophisticated image analysis techniques and tools. In particular, image segmentation plays an important role in image analysis. In this study, to identify recent research trends in image segmentation, we used the Web of Science (WoS) database to analyze the R&D topography based on the network structure of the author-keyword co-occurrence matrix. The analysis of research articles on image segmentation published from 2015 to 2023 shows that R&D in this field is largely concentrated in four areas: (1) research on collecting and preprocessing image data to build higher-performance image segmentation models, (2) research on image segmentation using statistics-based models or machine learning algorithms, (3) research on image segmentation for medical image analysis, and (4) deep learning-based image segmentation R&D. The scientometrics-based analysis performed in this study not only maps the trajectory of R&D related to image segmentation but can also serve as a marker for future exploration of this dynamic field.
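
A keyword co-occurrence network of the kind analyzed above can be built directly from per-article author-keyword lists. The sketch below uses networkx with a few toy records standing in for WoS exports; the keywords are illustrative only.

```python
from itertools import combinations
import networkx as nx

# Toy author-keyword lists standing in for Web of Science records.
records = [
    ["image segmentation", "deep learning", "U-Net"],
    ["image segmentation", "medical imaging", "U-Net"],
    ["image segmentation", "machine learning", "clustering"],
]

# Build a weighted keyword co-occurrence network.
g = nx.Graph()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
        else:
            g.add_edge(a, b, weight=1)

# Rank keywords by weighted degree as a crude indicator of research focus.
ranking = sorted(g.degree(weight="weight"), key=lambda kv: kv[1], reverse=True)
for keyword, degree in ranking:
    print(f"{keyword}: {degree}")
```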

Development of a Deep Learning-Based Automated Analysis System for Facial Vitiligo Treatment Evaluation (안면 백반증 치료 평가를 위한 딥러닝 기반 자동화 분석 시스템 개발)

  • Sena Lee;Yeon-Woo Heo;Solam Lee;Sung Bin Park
    • Journal of Biomedical Engineering Research / v.45 no.2 / pp.95-100 / 2024
  • Vitiligo is a condition in which melanin-producing cells in the skin are destroyed or dysfunctional, resulting in a loss of skin pigmentation. Facial vitiligo, which specifically affects the face, significantly impacts patients' appearance and thereby diminishes their quality of life. Evaluating the efficacy of facial vitiligo treatment typically relies on subjective assessments such as the Facial Vitiligo Area Scoring Index (F-VASI), which can be time-consuming because it depends on clinical observations of lesion shape and distribution. Various machine learning and deep learning methods have been proposed for segmenting vitiligo areas in facial images, showing promising results, but they often struggle to accurately segment vitiligo lesions distributed irregularly across the face. Therefore, our study introduces a framework aimed at improving the segmentation of vitiligo lesions on the face and providing an evaluation of those lesions. The framework for facial vitiligo segmentation and lesion evaluation consists of three main steps. First, we perform face detection on high-quality ultraviolet photographs to minimize background areas and identify the facial area of interest. Second, we extract facial-area masks and vitiligo-lesion masks using a semantic segmentation network trained on the generated dataset. Third, we automatically calculate the vitiligo area relative to the facial area. We evaluated the performance of facial and vitiligo-lesion segmentation on an independent test dataset that was not included in training or validation, with excellent results. The proposed framework can serve as a useful tool for evaluating the diagnosis and treatment efficacy of vitiligo.
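
The third step, computing the vitiligo area relative to the facial area, reduces to a ratio of two binary masks produced by the segmentation network. A minimal NumPy sketch follows; the mask contents are a toy example, not study data.

```python
import numpy as np

def vitiligo_area_ratio(face_mask, lesion_mask):
    """Percentage of the segmented facial area covered by vitiligo lesions.

    face_mask, lesion_mask: boolean arrays of the same H x W shape,
    produced by the segmentation network.
    """
    lesion_on_face = np.logical_and(face_mask, lesion_mask)
    face_pixels = face_mask.sum()
    if face_pixels == 0:
        return 0.0
    return 100.0 * lesion_on_face.sum() / face_pixels

# Toy 4x4 example: 12 facial pixels, 3 of them lesional -> 25%.
face = np.array([[0, 0, 1, 1],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [0, 1, 1, 0]], dtype=bool)
lesion = np.zeros_like(face)
lesion[1, 1:4] = True
print(f"{vitiligo_area_ratio(face, lesion):.1f}% of the face is lesional")
```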

Missing Value Imputation Technique for Water Quality Dataset

  • Jin-Young Jun;Youn-A Min
    • Journal of the Korea Society of Computer and Information / v.29 no.4 / pp.39-46 / 2024
  • Many researchers make efforts to evaluate water quality using various models. Such models require a dataset without missing values, but in the real world most datasets include missing values for various reasons. Simply deleting samples with missing values can distort the distribution of the underlying data and pose a significant risk of biasing the model's inference when the missing mechanism is not MCAR (missing completely at random). In this study, to explore the most appropriate technique for handling missing values in water quality data, several imputation techniques were tested based on existing KNN and MICE imputation, with and without the generative neural network models Autoencoder (AE) and Denoising Autoencoder (DAE). The results show that the combined KNN and MICE imputation without generative networks provides estimates closest to the true values. When binary classification models based on support vector machines and ensemble algorithms were evaluated after applying the combined imputation technique to the observed water quality dataset with missing values, they showed better performance in terms of Accuracy, F1 score, ROC-AUC, and MCC than models evaluated after deleting the samples with missing values.
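
KNN and MICE imputation are available in scikit-learn as KNNImputer and IterativeImputer. The sketch below shows one plausible way to combine them by averaging the two estimates at the missing cells; the averaging step and the toy matrix are assumptions about how a "combined" imputation might be realized, not the authors' exact scheme.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import KNNImputer, IterativeImputer

# Toy water-quality-style matrix (rows: samples, columns: measurements) with NaNs.
X = np.array([
    [7.1, 8.2, 120.0, np.nan],
    [6.9, np.nan, 131.0, 2.4],
    [7.4, 7.9, np.nan, 2.1],
    [np.nan, 8.5, 118.0, 2.6],
    [7.0, 8.1, 125.0, 2.3],
])

knn_est = KNNImputer(n_neighbors=2).fit_transform(X)
mice_est = IterativeImputer(max_iter=20, random_state=0).fit_transform(X)

# One plausible "combined" imputation: average the two estimates at missing cells.
combined = X.copy()
missing = np.isnan(X)
combined[missing] = (knn_est[missing] + mice_est[missing]) / 2.0

print(np.round(combined, 2))
```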