• Title/Abstract/Keywords: Convolutional Neural Networks

Search results: 666

Deep learning-based apical lesion segmentation from panoramic radiographs

  • Il-Seok Song; Hak-Kyun Shin; Ju-Hee Kang; Jo-Eun Kim; Kyung-Hoe Huh; Won-Jin Yi; Sam-Sun Lee; Min-Suk Heo
    • Imaging Science in Dentistry / Vol. 52, No. 4 / pp. 351-357 / 2022
  • Purpose: Convolutional neural networks (CNNs) have rapidly emerged as one of the most promising artificial intelligence methods in medical and dental research. CNNs can provide an effective diagnostic methodology allowing for the detection of early-stage diseases. Therefore, this study aimed to evaluate the performance of a deep CNN algorithm for apical lesion segmentation from panoramic radiographs. Materials and Methods: A total of 1000 panoramic images showing apical lesions were separated into training (n=800, 80%), validation (n=100, 10%), and test (n=100, 10%) datasets. The performance of identifying apical lesions was evaluated by calculating the precision, recall, and F1-score. Results: Of the 180 apical lesions in the test group, 147 were segmented from panoramic radiographs at an intersection over union (IoU) threshold of 0.3. The F1-scores were 0.828, 0.815, and 0.742 at IoU thresholds of 0.3, 0.4, and 0.5, respectively. Conclusion: This study showed the potential utility of a deep learning-guided approach for the segmentation of apical lesions. The deep CNN algorithm using U-Net demonstrated considerably high performance in detecting apical lesions.
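A minimal sketch of the IoU-thresholded evaluation described above, assuming binary lesion masks and a greedy one-to-one match between predicted and ground-truth lesions (the matching rule is our assumption, not stated in the paper):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between two binary masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

def prf1_at_threshold(pred_masks, gt_masks, iou_thresh=0.3):
    """Greedily match each predicted lesion to an unmatched ground-truth
    lesion; a prediction is a true positive if its best IoU >= iou_thresh."""
    matched, tp = set(), 0
    for p in pred_masks:
        scores = [(iou(p, g), j) for j, g in enumerate(gt_masks) if j not in matched]
        if scores:
            best, j = max(scores)
            if best >= iou_thresh:
                tp += 1
                matched.add(j)
    fp, fn = len(pred_masks) - tp, len(gt_masks) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```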

A novel framework for correcting satellite-based precipitation products in Mekong river basin with discontinuous observed data

  • Xuan-Hien Le; Giang V. Nguyen; Sungho Jung; Giha Lee
    • Proceedings of the Korea Water Resources Association Conference / 2023 Annual Conference / pp. 173-173 / 2023
  • The Mekong River Basin (MRB) is a crucial watershed in Asia, impacting over 60 million people across six developing nations. Accurate satellite-based precipitation products (SPPs) are essential for effective hydrological and watershed management in this region. However, SPP performance there has been variable and limited. The APHRODITE product, a unique gauge-based dataset for the MRB, is widely used but is only available until 2015. In this study, we present a novel deep learning framework (DLF) for correcting SPPs in the MRB, combining convolutional neural networks with an encoder-decoder architecture to address pixel-by-pixel bias and enhance accuracy. The DLF was applied to four widely used SPPs (TRMM, CMORPH, CHIRPS, and PERSIANN-CDR) in the MRB. Among the original SPPs, the TRMM product outperformed the others. Results revealed that the DLF effectively bridged the spatial-temporal gap between the SPPs and the gauge-based dataset (APHRODITE). Among the four corrected products, ADJ-TRMM demonstrated the best performance, followed by ADJ-CDR, ADJ-CHIRPS, and ADJ-CMORPH. The DLF offers a robust and adaptable solution for bias correction in the MRB and beyond, capable of detecting intricate patterns and learning from data to make appropriate adjustments. With the discontinuation of the APHRODITE product, the DLF represents a promising solution for generating a more current and reliable dataset for MRB research. This research showcases the potential of deep learning-based methods for improving the accuracy of SPPs, particularly in regions like the MRB, where gauge-based datasets are limited or discontinued.
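A minimal sketch of a CNN encoder-decoder for pixel-wise precipitation bias correction, in the spirit of the framework above; layer sizes, grid shape, and the MSE objective are illustrative assumptions, not the paper's stated configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_bias_correction_net(h=64, w=64):
    """Map a satellite rainfall grid to a corrected grid of the same shape.
    The encoder compresses the field; the decoder restores its resolution."""
    inp = layers.Input(shape=(h, w, 1))                        # one SPP field
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)                              # encode
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)                              # decode
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(1, 3, padding="same", activation="relu")(x)  # rain >= 0
    return Model(inp, out)

model = build_bias_correction_net()
# Trained with SPP grids as inputs and APHRODITE grids as targets.
model.compile(optimizer="adam", loss="mse")
```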


A Dual-Structured Self-Attention for Improving the Performance of Vision Transformers

  • Kwang-Yeob Lee; Hwang-Hee Moon; Tae-Ryong Park
    • Journal of IKEEE / Vol. 27, No. 3 / pp. 251-257 / 2023
  • In this paper, we propose a dual-structured self-attention method that addresses the weak local-feature extraction of the Vision Transformer's self-attention. Vision Transformers are more computationally efficient than convolutional neural networks for object classification, object segmentation, and video recognition, but they are comparatively poor at extracting local features. To address this, many studies build on windows or shifted windows, but these methods weaken the advantages of self-attention-based transformers by increasing computational complexity through multiple levels of encoders. We instead combine self-attention with a neighborhood network to improve the locality inductive bias over existing methods. The neighborhood network extracts local context information at far lower computational cost than a window structure. CIFAR-10 and CIFAR-100 were used to compare the performance of the proposed dual-structure self-attention transformer with that of the existing transformer, and the experiments showed improvements of 0.63% and 1.57% in Top-1 accuracy, respectively.
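A rough sketch of the dual-branch idea, assuming a global multi-head self-attention branch fused with a lightweight depthwise-convolution branch standing in for the neighborhood network; the paper's actual block design may differ:

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Global self-attention plus a local (neighborhood) branch over the
    token sequence, fused with a residual connection. Dimensions and the
    depthwise convolution for the local branch are assumptions."""
    def __init__(self, dim=192, heads=3, kernel=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                  # x: (batch, tokens, dim)
        g, _ = self.attn(x, x, x)          # global branch
        l = self.local(x.transpose(1, 2)).transpose(1, 2)  # local branch
        return self.norm(x + g + l)        # residual fusion

tokens = torch.randn(2, 64, 192)           # e.g. 8x8 patch tokens
out = DualAttention()(tokens)              # same shape as input
```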

Comparative Analysis of Machine Learning Techniques for IoT Anomaly Detection Using the NSL-KDD Dataset

  • Zaryn Good; Waleed Farag; Xin-Wen Wu; Soundararajan Ezekiel; Maria Balega; Franklin May; Alicia Deak
    • International Journal of Computer Science & Network Security / Vol. 23, No. 1 / pp. 46-52 / 2023
  • With billions of IoT (Internet of Things) devices populating various emerging applications across the world, detecting anomalies on these devices has become incredibly important. Advanced Intrusion Detection Systems (IDS) are trained to detect abnormal network traffic, and Machine Learning (ML) algorithms are used to create detection models. In this paper, the NSL-KDD dataset was adopted to comparatively study the performance and efficiency of IoT anomaly detection models. The dataset was developed for various research purposes and is especially useful for anomaly detection. This data was used with typical machine learning algorithms, including eXtreme Gradient Boosting (XGBoost), Support Vector Machines (SVM), and Deep Convolutional Neural Networks (DCNN), to identify and classify any anomalies present within the IoT applications. Our results show that the XGBoost algorithm outperformed both the SVM and DCNN algorithms, achieving the highest accuracy. Each algorithm was assessed based on accuracy, precision, recall, and F1 score. Furthermore, we obtained interesting results on the execution time taken by each algorithm when running the anomaly detection. Specifically, the XGBoost algorithm was 425.53% faster than the SVM algorithm and 2,075.49% faster than the DCNN algorithm. According to our experimental testing, XGBoost is the most accurate and efficient method.
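A minimal sketch of this kind of accuracy-versus-runtime comparison; the random arrays stand in for preprocessed NSL-KDD records (41 features per record), and the hyperparameters are illustrative:

```python
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Stand-in data; in the study these are preprocessed NSL-KDD records.
X = np.random.rand(5000, 41)                 # NSL-KDD has 41 features
y = np.random.randint(0, 2, 5000)            # 0 = normal, 1 = anomaly
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("XGBoost", XGBClassifier(n_estimators=100)),
                  ("SVM", SVC(kernel="rbf"))]:
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    elapsed = time.perf_counter() - start
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} time={elapsed:.2f}s")
```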

Image Clustering Using Machine Learning: A Study of InceptionV3 with K-means Methods

  • Nindam, Somsauwt; Lee, Hyo Jong
    • Annual Conference of KIPS / 2021 Fall Conference / pp. 681-684 / 2021
  • In this paper, we study image clustering without labels using machine learning techniques. We propose an unsupervised machine learning approach to design an image clustering model that automatically categorizes images into groups. Our experiments focus on an Inception convolutional neural network (InceptionV3) combined with k-means to cluster images. We collected the public datasets Food-K5, Flowers, Handwritten Digit, and Cats-dogs, together with our own Rice Germination and Palm print datasets. The experiment proceeded in three parts. First, all images were stripped of their labels and merged into a single pool. Second, the dataset was passed through InceptionV3 to extract image features, which were fed to k-means to form six clusters. Lastly, clustering quality was evaluated with a confusion matrix in terms of precision, recall, and F1-score. With this method, we obtained the following results: 1) Handwritten Digit (precision = 1.000, recall = 1.000, F1 = 1.00), 2) Food-K5 (precision = 0.975, recall = 0.945, F1 = 0.96), 3) Palm print (precision = 1.000, recall = 0.999, F1 = 1.00), 4) Cats-dogs (precision = 0.997, recall = 0.475, F1 = 0.64), 5) Flowers (precision = 0.610, recall = 0.982, F1 = 0.75), and 6) Rice Germination (precision = 0.997, recall = 0.943, F1 = 0.97). Overall, the model achieved an accuracy of 0.8908, indicating that the proposed model is strong enough to differentiate images and assign them to clusters.
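A minimal sketch of the pipeline described above, using a pretrained InceptionV3 as a fixed feature extractor and k-means to form six clusters; the image folder path is a placeholder:

```python
import glob
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from sklearn.cluster import KMeans

# Pretrained InceptionV3 with the classifier head removed; global average
# pooling yields one 2048-dimensional feature vector per image.
extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def extract_features(image_paths):
    feats = []
    for path in image_paths:
        img = tf.keras.utils.load_img(path, target_size=(299, 299))
        arr = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))
        feats.append(extractor.predict(arr, verbose=0)[0])
    return np.stack(feats)

image_paths = sorted(glob.glob("images/*.jpg"))    # placeholder folder
features = extract_features(image_paths)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
```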

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon; Lee, Hoonyong; Ahn, Changbum R.; Jung, Minhyuk; Park, Moonseo
    • International Conference on Construction Engineering and Project Management / The 9th International Conference on Construction Engineering and Project Management / pp. 877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, different trades of workers perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information alone. To this end, this research exploited the concept of human-object interaction, the interaction between a worker and their surrounding objects, based on the fact that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. This research developed an approach to understand the context of sequential image frames from four feature types: posture, object, spatial, and temporal features. The posture and object features were used to analyze the interaction between the worker and the target object, while the other two features were used to detect movement across the entire region of the image frames in the temporal and spatial domains. The developed approach used convolutional neural networks (CNNs) as feature extractors and activity classifiers, with long short-term memory (LSTM) also used as an activity classifier. The approach achieved an average accuracy of 85.96% in classifying 12 target construction tasks performed by two trades of workers, higher than two benchmark models. This experimental result indicates that integrating the concept of human-object interaction offers great benefits for activity recognition when various trade workers coexist in a scene.
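A minimal sketch of the per-frame CNN feeding an LSTM activity classifier, the general pattern named above; layer sizes, frame count, and input resolution are assumptions rather than the paper's configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_activity_classifier(frames=16, size=112, n_classes=12):
    """Apply a small CNN to each frame, then aggregate the per-frame
    features over time with an LSTM to classify the activity."""
    inp = layers.Input(shape=(frames, size, size, 3))
    frame_cnn = tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
    ])
    x = layers.TimeDistributed(frame_cnn)(inp)   # per-frame spatial features
    x = layers.LSTM(128)(x)                      # temporal aggregation
    out = layers.Dense(n_classes, activation="softmax")(x)  # 12 target tasks
    return Model(inp, out)

model = build_activity_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```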


Human Activity Classification Using Deep Transfer Learning

  • Nindam, Somsawut; Manmai, Thong-oon; Sung, Thaileang; Wu, Jiahua; Lee, Hyo Jong
    • Annual Conference of KIPS / 2022 Fall Conference / pp. 478-480 / 2022
  • This paper studies human activity image classification using deep transfer learning, focused on the InceptionV3 convolutional neural network model. We used the public UCF-101 dataset together with our own dataset of students' behavior in mathematics classrooms at a school in Thailand. The video data cover the classes Play Sitar, Tai Chi, Walking with Dog, and Student Study (our dataset). The experiment was conducted in three phases. First, image frames were extracted from the videos and labeled. Second, the dataset was fed into InceptionV3 with transfer learning to classify the four classes. Lastly, we evaluated the model using precision, recall, F1-score, and a confusion matrix. The per-class outcomes for the public classes and our dataset were 1) Play Sitar (precision = 1.0, recall = 1.0, F1 = 1.0), 2) Tai Chi (precision = 1.0, recall = 1.0, F1 = 1.0), 3) Walking with Dog (precision = 1.0, recall = 1.0, F1 = 1.0), and 4) Student Study (precision = 1.0, recall = 1.0, F1 = 1.0), respectively. The overall classification accuracy was 100%, showing that the model learned both UCF-101 and our dataset with high accuracy.
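A minimal sketch of the transfer-learning setup described above: a frozen ImageNet-pretrained InceptionV3 backbone with a new four-class head; the dropout rate and head layout are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

# Freeze the pretrained backbone; only the new head is trained.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False

inp = layers.Input(shape=(299, 299, 3))
x = base(inp, training=False)                   # 2048-d pooled features
x = layers.Dropout(0.3)(x)
out = layers.Dense(4, activation="softmax")(x)  # four activity classes
model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```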

Assessing Stream Vegetation Dynamics and Revetment Impact Using Time-Series RGB UAV Images and ResNeXt101 CNNs

  • Seung-Hwan Go; Kyeong-Soo Jeong; Jong-Hwa Park
    • Korean Journal of Remote Sensing / Vol. 40, No. 1 / pp. 9-18 / 2024
  • Small streams, despite their rich ecosystems, face challenges in vegetation assessment due to the limitations of traditional, time-consuming methods. This study presents an approach combining unmanned aerial vehicles (UAVs), convolutional neural networks (CNNs), and the visible-band difference vegetation index (VDVI) to improve both assessment and management of stream vegetation. Focusing on Idong Stream in South Korea (2.7 km long, 2.34 km2 basin area) with eight different revetment methods, we leveraged high-resolution RGB images captured by UAVs on five dates (July-December). These images trained a ResNeXt101 CNN model, achieving 89% accuracy in classifying cover (soil, water, and vegetation), which enabled detailed spatial and temporal analysis of vegetation distribution. VDVI calculations on the classified vegetation areas then allowed assessment of vegetation vitality. Key findings: (a) the CNN model generated highly accurate cover maps, facilitating precise monitoring of vegetation changes over time and space; (b) August displayed the highest average VDVI (0.24), indicating peak vegetation growth crucial for stabilizing streambanks and resisting flow; (c) different revetment methods affected vegetation vitality: fieldstone sections exhibited initially high vitality followed by decline due to leaf browning, block-type sections and the control group showed a gradual decline after peak growth, and, interestingly, the "H environment block" exhibited minimal change, suggesting potential benefits for specific ecological functions; (d) despite initial differences, all sections converged in vegetation distribution trends after 15 years due to the influence of surrounding vegetation. This study demonstrates the potential of UAV-based remote sensing and CNNs for small-stream vegetation assessment and management. By providing high-resolution, temporally detailed data, this approach offers distinct advantages over traditional methods, ultimately benefiting both the environment and surrounding communities through informed decision-making for improved stream health and ecological conservation.
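The VDVI itself is a simple per-pixel ratio over the visible bands, VDVI = (2G - R - B) / (2G + R + B); a minimal sketch, with the image and the CNN-derived vegetation mask as placeholder inputs:

```python
import numpy as np

def vdvi(rgb: np.ndarray) -> np.ndarray:
    """Visible-band difference vegetation index, computed per pixel from
    an RGB array: (2G - R - B) / (2G + R + B)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    num, den = 2 * g - r - b, 2 * g + r + b
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)

# Placeholder inputs; in the study the mask would come from the ResNeXt101
# cover classification (vegetation pixels only).
image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
veg_mask = np.ones((256, 256), dtype=bool)
print(float(vdvi(image)[veg_mask].mean()))     # mean vitality over vegetation
```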

Enhancing Acute Kidney Injury Prediction through Integration of Drug Features in Intensive Care Units

  • Gabriel D. M. Manalu; Mulomba Mukendi Christian; Songhee You; Hyebong Choi
    • International Journal of Advanced Smart Convergence / Vol. 12, No. 4 / pp. 434-442 / 2023
  • The relationship between acute kidney injury (AKI) prediction and nephrotoxic drugs, or drugs that adversely affect kidney function, has yet to be explored in the critical care setting. One contributing factor to this gap in research is the limited investigation of drug modalities in the intensive care unit (ICU) context, due to the challenges of processing prescription data into corresponding drug representations and a lack of comprehensive understanding of these representations. This study addresses this gap by proposing a novel approach that leverages patient prescription data as a modality to improve existing models for AKI prediction. We base our research on Electronic Health Record (EHR) data, extracting the relevant patient prescription information and converting it into our selected drug representation, the extended-connectivity fingerprint (ECFP). Furthermore, we adopt a unique multimodal approach, developing machine learning models and 1D Convolutional Neural Networks (CNNs) applied to clinical drug representations, a procedure not used in previous studies predicting AKI. The findings show a notable improvement in AKI prediction through the integration of drug embeddings and other patient cohort features. By using drug features represented as ECFP molecular fingerprints along with common cohort features such as demographics and lab test values, we achieved a considerable improvement in model performance over the baseline model that does not include the drug representations as features, indicating that our approach enhances existing baseline techniques and highlighting the relevance of drug data in predicting AKI in the ICU setting.
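A minimal sketch of turning a drug into an ECFP bit vector with RDKit; the radius and bit length are common defaults rather than the paper's stated settings, and the aspirin SMILES is just an example molecule, not from the study:

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def ecfp(smiles: str, radius: int = 2, n_bits: int = 2048) -> np.ndarray:
    """Extended-connectivity fingerprint (ECFP4 when radius=2) as a
    fixed-length bit vector suitable as input to a 1D CNN."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(list(fp), dtype=np.float32)

drug_fp = ecfp("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as a simple example
print(drug_fp.shape)                       # (2048,)
```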

A Deep Learning Approach for Covid-19 Detection in Chest X-Rays

  • Sk. Shalauddin Kabir; Syed Galib; Hazrat Ali; Fee Faysal Ahmed; Mohammad Farhad Bulbul
    • International Journal of Computer Science & Network Security / Vol. 24, No. 3 / pp. 125-134 / 2024
  • The novel coronavirus disease 2019 (COVID-19) has spread swiftly worldwide. Early diagnosis is important to control its rapid spread. Medical imaging techniques, chest computed tomography and chest X-ray, are playing a vital role in the identification and testing of COVID-19 in the present epidemic. Chest X-ray is a cost-effective method for COVID-19 detection; however, manual X-ray analysis is time-consuming, given that the number of infected individuals keeps growing rapidly. For this reason, it is very important to develop an automated COVID-19 detection process to help control this pandemic. In this study, we address the task of automatic detection of COVID-19 using a popular deep learning model, namely VGG19. We used 1300 healthy and 1300 confirmed COVID-19 chest X-ray images in this experiment. We performed three experiments by freezing different blocks and layers of VGG19 and, finally, used a support vector machine (SVM) classifier for detecting COVID-19. In every experiment, we used five-fold cross-validation to train and validate the model, ultimately achieving 98.1% overall classification accuracy. Experimental results show that our proposed method using the deep learning-based VGG19 model can be used as a tool to aid radiologists and play a crucial role in the timely diagnosis of COVID-19.
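A minimal sketch of the frozen-VGG19-plus-SVM pattern described above, with five-fold cross-validation; the input size, pooling choice, and placeholder arrays are assumptions:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Frozen VGG19 convolutional base as a feature extractor.
base = VGG19(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(224, 224, 3))
base.trainable = False

def features(images: np.ndarray) -> np.ndarray:
    """images: (n, 224, 224, 3) RGB array (X-rays replicated to 3 channels)."""
    return base.predict(preprocess_input(images.astype("float32")), verbose=0)

# Placeholder data; the study used 1300 COVID-19 and 1300 healthy images.
X = np.random.rand(20, 224, 224, 3) * 255
y = np.array([0, 1] * 10)                      # 0 = healthy, 1 = COVID-19
scores = cross_val_score(SVC(kernel="rbf"), features(X), y, cv=5)
print(scores.mean())
```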