• Title/Summary/Keyword: 합성신경망 (convolutional neural network)


Implementation of Pet Management System including Deep Learning-based Breed and Emotion Recognition SNS (딥러닝 기반 품종 및 감정인식 SNS를 포함하는 애완동물 관리 시스템 구현)

  • Inhwan Jung; Kitae Hwang; Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.45-50 / 2023
  • As pet ownership has steadily increased in recent years, the need for an effective pet management system has grown. In this study, we propose a pet management system with a deep learning-based emotion recognition SNS. The system detects emotions from pet facial expressions using a convolutional neural network (CNN) and shares them with a user community through the SNS. Through the SNS, pet owners can connect with other users, share their experiences, and receive support and advice for pet management. The system also provides comprehensive pet management features, including tracking pet health and sending vaccination and appointment reminders. Furthermore, we added a function to manage and share pet walking records so that owners can share their walking experiences with other users. This study demonstrates the potential of AI technology to improve pet management systems and enhance the well-being of pets and their owners.
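
As a hedged illustration of the kind of CNN emotion classifier this entry describes, the following minimal PyTorch sketch classifies pet face crops into a few emotion classes. The architecture, 128x128 input resolution, and the four-class label set are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

NUM_EMOTIONS = 4  # assumed label set (e.g. happy / relaxed / anxious / angry)

class PetEmotionCNN(nn.Module):
    """Small CNN over pet face crops; layer sizes are illustrative only."""
    def __init__(self, n_classes: int = NUM_EMOTIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (N, 3, H, W) pet face crops
        return self.classifier(self.features(x).flatten(1))

model = PetEmotionCNN()
logits = model(torch.randn(2, 3, 128, 128))     # toy batch of two 128x128 crops
```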

Identification of Multiple Cancer Cell Lines from Microscopic Images via Deep Learning (심층 학습을 통한 암세포 광학영상 식별기법)

  • Park, Jinhyung; Choe, Se-woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.374-376 / 2021
  • For the diagnosis of cancer-related diseases in clinical practice, pathological examination using a biopsy is essential after an initial diagnosis with imaging equipment. Such a biopsy requires the assistance of specialists such as oncologists and clinical pathologists, along with a minimum amount of time, before a diagnosis can be confirmed. In recent years, research on systems capable of automatically classifying cancer cells using artificial intelligence has been actively conducted. However, previous studies have shown limitations in the range of cell types covered and in accuracy because of the limited algorithms used. In this study, we propose a method to identify a total of four cancer cell lines using a convolutional neural network, a type of deep learning. Optical images obtained through cell culture were preprocessed with OpenCV, including cell localization and image segmentation, and then trained with EfficientNet. Various hyperparameters were tuned for the EfficientNet-based model, and InceptionV3 was trained for comparative performance analysis. As a result, the cells were classified with a high accuracy of 96.8%, and this analysis method is expected to be helpful in confirming cancer diagnoses.
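
A sketch of this kind of transfer-learning pipeline is shown below, under assumptions: OpenCV preprocessing feeding an EfficientNet backbone with a new four-way head. The image size, normalization, and file handling are hypothetical and not taken from the paper's code.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # four cancer cell lines

def preprocess(path: str, size: int = 224) -> torch.Tensor:
    """Load a microscopy image with OpenCV, resize, and convert to a CHW tensor."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (size, size)).astype(np.float32) / 255.0
    return torch.from_numpy(img).permute(2, 0, 1)

# EfficientNet-B0 backbone with a new 4-way classification head
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

# An InceptionV3 baseline for the comparison reported in the paper could be set up
# analogously, e.g. models.inception_v3(weights="IMAGENET1K_V1").
```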


Deep Learning-based Rheometer Quality Inspection Model Using Temporal and Spatial Characteristics

  • Jaehyun Park; Yonghun Jang; Bok-Dong Lee; Myung-Sub Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.43-52 / 2023
  • Rubber produced by rubber companies undergoes a quality-suitability inspection via a rheometer test before secondary processing into automobile parts. However, the rheometer test is currently performed by humans and is therefore highly dependent on expert judgment. To address this problem, this paper proposes a deep learning-based rheometer quality inspection system. The proposed system combines an LSTM (Long Short-Term Memory) network and a CNN (Convolutional Neural Network) to exploit both the temporal and spatial characteristics of the rheometer data. In addition, the compound composition of each rubber was used as an auxiliary input, enabling quality-conformity inspection of various rubber products within a single model. The proposed method was evaluated on 30,000 validation samples, achieving an average F1-score of 0.9940 and demonstrating its effectiveness.
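
An illustrative sketch (not the authors' implementation) of such a multi-branch model follows: an LSTM branch for the rheometer curve, a CNN branch for a 2-D representation of the curve, and an auxiliary input encoding the rubber compound. Every shape and size below is an assumption.

```python
import torch
import torch.nn as nn

class RheometerQualityNet(nn.Module):
    def __init__(self, n_channels=1, n_material_features=16):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, 64, batch_first=True)      # temporal branch
        self.cnn = nn.Sequential(                                   # spatial branch
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.aux = nn.Sequential(nn.Linear(n_material_features, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(64 + 64 + 32, 64), nn.ReLU(),
                                  nn.Linear(64, 1))                 # pass/fail logit

    def forward(self, curve, curve_image, compound):
        _, (h, _) = self.lstm(curve)            # curve: (N, T, C)
        t = h[-1]                               # last hidden state, (N, 64)
        s = self.cnn(curve_image)               # curve_image: (N, 1, H, W) -> (N, 64)
        a = self.aux(compound)                  # compound: (N, F) -> (N, 32)
        return self.head(torch.cat([t, s, a], dim=1))

net = RheometerQualityNet()
logit = net(torch.randn(2, 300, 1), torch.randn(2, 1, 64, 64), torch.randn(2, 16))
```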

Dual-mode diagnosis system for water quality and corrosion in pipe using convolutional neural networks (CNN) and ultrasound (합성곱 신경망과 초음파 기반 상수도관 수질 및 부식 분석용 이중모드 진단 시스템)

  • So Yeon Moon; Hyeon-Ju Jeon; Yeongho Sung; Min-Seo Kim; Daehun Kim; Jaeyeop Choi; Junghwan Oh; O-Joun Lee; Hae Gyun Lim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.685-686 / 2023
  • Inspecting the water quality and corrosion of water supply pipes requires a continuous method that does not damage the pipes. Ultrasound satisfies this requirement while allowing the pipe condition to be checked, and higher frequencies provide better resolution and thus more precise measurements. Exploiting these characteristics, we propose a new water-pipe monitoring approach that combines ultrasound-based Scanning Acoustic Microscopy (SAM) with a Convolutional Neural Network (CNN). The method complements the drawbacks of conventional Non-Destructive Testing (NDT) while inspecting pipes at a higher resolution: SAM simultaneously detects changes in pipe thickness caused by corrosion, the presence of suspended matter, and the water quality, and the acquired data are analyzed with a CNN. The high classification accuracy of the CNN demonstrates the suitability of this system for monitoring pipe corrosion and water quality.
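
A rough sketch, under assumptions, of the dual-mode idea: a single CNN over SAM ultrasound scans with two heads, one for the corrosion (pipe-thickness) state and one for the water-quality state. The input size and class counts here are hypothetical.

```python
import torch
import torch.nn as nn

class DualModePipeCNN(nn.Module):
    def __init__(self, n_corrosion_classes=3, n_quality_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.corrosion_head = nn.Linear(64, n_corrosion_classes)   # e.g. none/mild/severe
        self.quality_head = nn.Linear(64, n_quality_classes)       # e.g. clean/contaminated

    def forward(self, scan):                    # scan: (N, 1, H, W) SAM image
        feats = self.backbone(scan)
        return self.corrosion_head(feats), self.quality_head(feats)

net = DualModePipeCNN()
corrosion_logits, quality_logits = net(torch.randn(2, 1, 128, 128))
```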

Classification of Tabular Data using High-Dimensional Mapping and Deep Learning Network (고차원 매핑기법과 딥러닝 네트워크를 통한 정형데이터의 분류)

  • Kyeong-Taek Kim; Won-Du Chang
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.119-124 / 2023
  • Deep learning has recently demonstrated greater efficacy than traditional machine learning techniques across diverse domains and has become the most popular approach to pattern recognition. Classification problems for tabular data, however, have largely remained the province of traditional machine learning. This paper introduces a novel network module that maps tabular data into high-dimensional tensors. The module is integrated into conventional deep learning networks and then applied to the classification of structured data. The proposed method was trained and validated on four datasets, achieving an average accuracy of 90.22%, which surpasses the contemporary deep learning model TabNet by 2.55%p. The significance of the proposed approach lies in its capacity to harness diverse network architectures, renowned for their superior performance in computer vision, for the analysis of tabular data.
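
The paper's actual mapping module is not specified here; the sketch below only illustrates the general idea under assumptions: each tabular feature is projected into a learned embedding, the embeddings are stacked into an image-like tensor, and a standard CNN classifies the result.

```python
import torch
import torch.nn as nn

class TabularToTensor(nn.Module):
    """Map an (N, F) tabular batch to an (N, 1, F, D) tensor via per-feature projection."""
    def __init__(self, n_features: int, embed_dim: int = 16):
        super().__init__()
        self.proj = nn.Linear(1, embed_dim)
        self.n_features = n_features

    def forward(self, x):                      # x: (N, F)
        x = x.unsqueeze(-1)                    # (N, F, 1)
        x = self.proj(x)                       # (N, F, D)
        return x.unsqueeze(1)                  # (N, 1, F, D) image-like tensor

class TabularCNN(nn.Module):
    def __init__(self, n_features: int, n_classes: int, embed_dim: int = 16):
        super().__init__()
        self.mapper = TabularToTensor(n_features, embed_dim)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.cnn(self.mapper(x))

model = TabularCNN(n_features=10, n_classes=2)
logits = model(torch.randn(4, 10))             # toy batch of 4 rows, 10 features
```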

The Study on Effect of sEMG Sampling Frequency on Learning Performance in CNN based Finger Number Recognition (CNN 기반 한국 숫자지화 인식 응용에서 표면근전도 샘플링 주파수가 학습 성능에 미치는 영향에 관한 연구)

  • Gerelbat BatGerel; Chun-Ki Kwon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.51-56 / 2023
  • This study investigates the effect of the sEMG sampling frequency on CNN learning performance in a Korean finger-number recognition application. A higher sEMG sampling frequency produces larger input data and lengthens the CNN's training time, which makes real-time implementation more difficult and more costly; an appropriate sampling frequency for collecting sEMG signals is therefore needed. To this end, this work chose five sampling frequencies, 1,024 Hz, 512 Hz, 256 Hz, 128 Hz, and 64 Hz, and investigated CNN learning performance with sEMG data acquired at each one. The comparative study shows that the CNNs at all sampling frequencies recognized the Korean finger numbers one to five with 100% accuracy, and the CNN trained on sEMG signals sampled at 256 Hz took the shortest training time to reach the epoch at which the finger-number gestures were recognized with 100% accuracy.
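
A hedged sketch of the experimental variable studied here: the same sEMG window re-sampled at the five rates before being fed to a CNN. The window length, channel count, and use of SciPy's decimate() are assumptions rather than the paper's acquisition setup.

```python
import numpy as np
from scipy.signal import decimate

BASE_RATE = 1024                      # Hz, assumed acquisition rate
TARGET_RATES = [1024, 512, 256, 128, 64]

def resample_window(window: np.ndarray, target_rate: int) -> np.ndarray:
    """Downsample a (samples, channels) sEMG window from BASE_RATE to target_rate."""
    if target_rate == BASE_RATE:
        return window
    factor = BASE_RATE // target_rate
    return decimate(window, factor, axis=0, zero_phase=True)

window = np.random.randn(BASE_RATE, 4)            # one second of 4-channel sEMG (toy data)
for rate in TARGET_RATES:
    x = resample_window(window, rate)
    print(f"{rate:5d} Hz -> CNN input shape {x.shape}")
```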

Application of Mask R-CNN Algorithm to Detect Cracks in Concrete Structure (콘크리트 구조체 균열 탐지에 대한 Mask R-CNN 알고리즘 적용성 평가)

  • Bae, Byongkyu; Choi, Yongjin; Yun, Kangho; Ahn, Jaehun
    • Journal of the Korean Geotechnical Society / v.40 no.3 / pp.33-39 / 2024
  • Inspecting cracks to determine a structure's condition is crucial for accurate safety diagnosis. However, visual crack inspection is subjective and dependent on field conditions, resulting in low reliability. To address this issue, this study automates the detection of concrete cracks in image data using ResNet, FPN, and Mask R-CNN components as the backbone, neck, and head of a convolutional neural network. The performance of the proposed model is analyzed using intersection over union (IoU). The experimental dataset contained 1,203 images divided into training (70%), validation (20%), and testing (10%) sets. The model achieved an IoU of 95.83% on the test set, with no cases in which a crack went undetected. These findings demonstrate that the proposed model achieves highly accurate detection of concrete cracks in image data.
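
A minimal sketch of configuring a ResNet-backbone + FPN + Mask R-CNN model for two classes (background and crack) with torchvision is shown below; the paper's training procedure and data pipeline are not reproduced, and the image size is an assumption.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 2  # background + crack

model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so the model predicts crack instances only.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

model.eval()
with torch.no_grad():
    prediction = model([torch.rand(3, 512, 512)])[0]   # boxes, labels, scores, masks
```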

Research on development of electroencephalography Measurement and Processing system (뇌전도 측정 및 처리 시스템 개발에 관한 연구)

  • Doo-hyun Lee; Yu-jun Oh; Jin-hee Hong; Jun-su Chae; Young-gyu Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.1 / pp.38-46 / 2024
  • EEG signal analysis has been the subject of numerous studies because it provides an objective means of recording brain activity, and it is widely used in brain-computer interface research with applications in medical diagnosis and rehabilitation engineering. In this study, we developed EEG acquisition hardware and implemented a processing system divided into a server component and a data-processing component. The work was conducted as intermediate-stage research toward an EEG-based brain-computer interface and was implemented as a system that predicts the user's arm movements from the measured EEG data. EEG signals were acquired from four electrodes through an analog-to-digital converter. The data are sent to the server over a communication link, and we designed and implemented a system flow in which the server classifies the EEG input using a convolutional neural network model and displays the results on the user terminal.
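
The classification step could look like the hypothetical sketch below: a small 1-D CNN over a four-channel EEG window that predicts an arm-movement class. The window length and number of movement classes are assumptions.

```python
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_CLASSES = 4, 256, 2   # 4 electrodes; window and classes assumed

class EEGMovementCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, N_CLASSES),
        )

    def forward(self, x):                       # x: (N, channels, samples)
        return self.net(x)

model = EEGMovementCNN()
logits = model(torch.randn(8, N_CHANNELS, WINDOW))   # batch of 8 EEG windows
```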

Generation of virtual mandibular first molar teeth and accuracy analysis using deep convolutional generative adversarial network (심층 합성곱 생성적 적대 신경망을 활용한 하악 제1대구치 가상 치아 생성 및 정확도 분석)

  • Eun-Jeong Bae; Sun-Young Ihm
    • Journal of Technologic Dentistry / v.46 no.2 / pp.36-41 / 2024
  • Purpose: This study aimed to generate virtual mandibular left first molar teeth using deep convolutional generative adversarial networks (DCGANs) and analyze their matching accuracy with actual tooth morphology to propose a new paradigm for using medical data. Methods: Occlusal surface images of the mandibular left first molar scanned using a dental model scanner were analyzed using DCGANs. Overall, 100 training sets comprising 50 original and 50 background-removed images were created, generating 1,000 virtual teeth. These virtual teeth were classified based on the number of cusps and the occlusal surface ratio and were subsequently analyzed for consistency by expert dental technicians over three rounds of examination. Statistical analysis was conducted using IBM SPSS Statistics ver. 23.0 (IBM), including the intraclass correlation coefficient for intrarater reliability, one-way ANOVA, and Tukey's post-hoc analysis. Results: The virtual mandibular left first molars exhibited high consistency in the occlusal surface ratio but varied on the other criteria. Consistency was highest for the occlusal buccal-lingual criterion at 91.9%, whereas discrepancies were observed most often for the occlusal buccal cusp criterion at 85.5%. Significant differences were observed among all groups (p<0.05). Conclusion: Based on the classification of the virtually generated mandibular left first molars according to several criteria, DCGANs can generate virtual data highly similar to real data. Thus, subsequent research in the dental field, including the development of improved neural network structures, is warranted.
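
A compact DCGAN generator/discriminator sketch for square grayscale images follows, included only to illustrate the DCGAN mechanism; the image size, latent dimension, and filter counts are assumptions rather than the study's configuration.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100

generator = nn.Sequential(                      # z (N, 100, 1, 1) -> image (N, 1, 64, 64)
    nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
)

discriminator = nn.Sequential(                  # image (N, 1, 64, 64) -> real/fake logit
    nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 8), nn.Flatten(),
)

fake_teeth = generator(torch.randn(16, LATENT_DIM, 1, 1))   # 16 synthetic occlusal images
```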

Algorithm development for texture and color style transfer of cultural heritage images (문화유산 이미지의 질감과 색상 스타일 전이를 위한 알고리즘 개발 연구)

  • Baek Seohyun; Cho Yeeun; Ahn Sangdoo; Choi Jongwon
    • Conservation Science in Museum / v.31 / pp.55-70 / 2024
  • Style transfer algorithms are currently under active research and are used, for example, to convert ordinary photographs into classical painting styles. However, such algorithms have yet to produce satisfactory results when applied to Korean cultural heritage images, and the number of such applications remains small. Accordingly, this study develops a style transfer algorithm that can be applied to the styles found in Korean cultural heritage. The algorithm improves data comprehension by learning meaningful characteristics of the styles through representation learning, separates the cultural heritage object from the background in the target images, and extracts style-relevant areas with the desired color and texture from the style images. This study confirmed that, in this way, a new image can be created that effectively transfers the characteristics of the style image while preserving the form of the target image, enabling the transfer of a variety of cultural heritage styles.
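
A generic Gram-matrix style loss sketch is given below to illustrate the texture/color transfer mechanism in general; the study's own representation-learning and background-separation components are not reproduced, and the choice of VGG-19 features and layer index is an assumption.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Pretrained VGG-19 features are a common backbone for extracting style representations.
vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-by-channel correlations that capture texture/color statistics."""
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(generated: torch.Tensor, style: torch.Tensor, layer: int = 21) -> torch.Tensor:
    """Compare Gram matrices of an intermediate VGG layer for two images."""
    feats = {}
    for name, x in (("gen", generated), ("style", style)):
        y = x
        for i, module in enumerate(vgg):
            y = module(y)
            if i == layer:
                feats[name] = y
                break
    return F.mse_loss(gram_matrix(feats["gen"]), gram_matrix(feats["style"]))

loss = style_loss(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
```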