• Title/Abstract/Keywords: Inception V2

Search results: 60

딥러닝을 이용한 IOT 기기 인식 시스템 (A Deep Learning based IOT Device Recognition System)

  • 추연호; 최영규 / 반도체디스플레이기술학회지 / Vol. 18, No. 2 / pp. 1-5 / 2019
  • As the number of IoT devices grows rapidly, various 'see-thru connection' techniques have been reported for efficient communication with them. In this paper, we propose a deep learning based IoT device recognition system for interaction with these devices. The overall system consists of a TensorFlow based deep learning server and two Android apps for data collection and recognition. As the basic neural network model, we adopted Google's Inception-v3 and modified its output stage to classify 20 types of IoT devices. After creating a data set consisting of 1,000 images of 20 categories, we trained the network using transfer learning. In our experiments, the system achieved 94.5% top-1 accuracy and 98.1% top-2 accuracy.
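Several entries in these results follow the same pattern: take a pre-trained Inception backbone, replace its output stage, and fine-tune for a new label set (20 IoT device classes in the paper above). A minimal Keras sketch of that pattern, assuming TensorFlow/Keras is available; `weights=None` is used here only to keep the sketch download-free, whereas real transfer learning would load `weights="imagenet"`:

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 20  # 20 IoT device categories, as in the paper

# Backbone without its original 1000-class ImageNet head. weights=None keeps
# this sketch download-free; use weights="imagenet" for actual transfer learning.
base = tf.keras.applications.InceptionV3(
    weights=None, include_top=False, pooling="avg", input_shape=(299, 299, 3))
base.trainable = False  # freeze the pre-trained feature extractor

# New output stage: a single softmax layer over the target classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Sanity check on a dummy batch: each row is a 20-way probability vector.
probs = model.predict(np.zeros((2, 299, 299, 3), dtype=np.float32), verbose=0)
print(probs.shape)  # (2, 20)
```

A real pipeline would then call `model.fit` on the collected device images, optionally unfreezing the top Inception blocks for fine-tuning.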

Classification of Mouse Lung Metastatic Tumor with Deep Learning

  • Lee, Ha Neul; Seo, Hong-Deok; Kim, Eui-Myoung; Han, Beom Seok; Kang, Jin Seok / Biomolecules & Therapeutics / Vol. 30, No. 2 / pp. 179-183 / 2022
  • Traditionally, pathologists microscopically examine tissue sections to detect pathological lesions; the many slides that must be evaluated impose severe work burdens. Diagnostic accuracy also varies with pathologist training and experience, so better diagnostic tools are required. Given the rapid development of computer vision, deep learning is now used to automatically classify microscopic images, including medical images. Here, we used an Inception-v3 deep learning model to detect mouse lung metastatic tumors via whole slide imaging (WSI), cropping the images to 151 by 151 pixels. The images were divided into training (53.8%) and test (46.2%) sets (21,017 and 18,016 images, respectively). When images from lung tissue containing tumor tissue were evaluated, the model accuracy was 98.76%; when images from normal lung tissue were evaluated, the "no tumor" accuracy was 99.87%. Thus, the deep learning model distinguished metastatic lesions from normal lung tissue. Our approach will allow the rapid and accurate analysis of various tissues.
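Cropping a whole-slide image into fixed 151 × 151 patches, as described above, amounts to non-overlapping tiling. A plain numpy sketch (the function name and mock slide dimensions are illustrative, not from the paper):

```python
import numpy as np

def crop_tiles(image: np.ndarray, tile: int = 151) -> list:
    """Split an H x W x C image into non-overlapping tile x tile patches,
    discarding any partial tiles at the right/bottom edges."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patches.append(image[y:y + tile, x:x + tile])
    return patches

# Example: a mock 604 x 755 RGB slide region tiles into 4 x 5 = 20 patches.
slide = np.zeros((604, 755, 3), dtype=np.uint8)
patches = crop_tiles(slide)
print(len(patches))  # 20
```

Each patch can then be fed to the classifier independently, and per-patch predictions aggregated back over the slide.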

DC 코로나 방전이 적용된 에틸렌 정상 확산 화염의 Soot 배출 저감 (Reduction of Soot Emitted from a $C_2H_4$ Normal Diffusion Flame with Application of DC Corona Discharge)

  • 이재복; 황정호 / 대한기계학회논문집B / Vol. 25, No. 4 / pp. 496-506 / 2001
  • The effect of corona discharge on soot emission was experimentally investigated. Size and number concentrations of soot aggregates were measured and compared for various voltages. Regardless of the polarity of the applied voltage, the flame length decreased and the flame tip spread with increasing voltage. For the experimental conditions selected, the flame was blown off toward the ground electrode by the corona ionic wind. When the negative applied voltage exceeded 3 kV (for an electrode spacing of 3.5 cm), soot particles in the inception or growth region were affected by the corona discharge, reducing the number concentration. The results show that the ionic wind favored soot oxidation and increased the flame temperature. Number concentration and primary particle size increased greatly when the corona electrodes were located in the soot nucleation or growth region (close to the burner mouth).

전기수력학적 미립화에서 액적 형성에 영향을 미치는 인자에 관한 실험적 연구 (A Study on Influence Factors on Drop Formation in Electrohydrodynamic Atomization)

  • 성기안; 이창식 / 한국분무공학회지 / Vol. 8, No. 2 / pp. 24-30 / 2003
  • An experimental study was performed to investigate the factors influencing drop formation in electrohydrodynamic atomization. The mode of electrohydrodynamic atomization depended on various factors such as the liquid flow rate, the inner diameter of the nozzle, the distance between the nozzle tip and the ground electrode, the shape of the ground electrode, and the applied high voltage. The flow patterns of the droplets were visualized, and the relationship between the applied voltage and the atomization behavior was analyzed. Uniform drops of different sizes can be obtained at the inception of the spindle mode by changing the flow rate and the electric field. The drop size also decreased when the flow rate was raised in the spindle mode. Whipping motion occurred beyond 7 kV, before the corona started to take effect.


White Blood Cell Types Classification Using Deep Learning Models

  • Bagido, Rufaidah Ali; Alzahrani, Manar; Arif, Muhammad / International Journal of Computer Science & Network Security / Vol. 21, No. 9 / pp. 223-229 / 2021
  • Classification of the different blood cell types is an essential task in medical diagnosis. White blood cells comprise several distinct types, and physicians require both the total White Blood Cell (WBC) count and a differential count of the WBC types to diagnose disease correctly. This paper applied transfer learning to pre-trained deep learning models to classify the different WBC types. The best pre-trained model was Inception-ResNetV2 with the Adam optimizer, which produced a classification accuracy of 98.4% on a dataset comprising four types of WBCs.

사과 병해충 분류를 위한 CNN 기반 모델 비교 (Comparison of CNN-based models for apple pest classification)

  • 이수민; 이유현; 이은솔; 한세윤 / 한국정보처리학회 학술대회논문집 / 2022 춘계학술발표대회 / pp. 460-463 / 2022
  • The productivity and quality of the apple, one of the world's most important temperate fruit crops, are strongly affected by pests and diseases. Diagnosing them requires extensive expert knowledge and considerable time, so an efficient and accurate system for diagnosing the various pests and diseases is needed. In this paper, we compare deep learning based CNNs, which have shown great effectiveness in image analysis, to determine whether an apple is affected by pests or diseases, and present the optimal model. AlexNet, VGGNet, Inception-ResNet-v2, and DenseNet, all deep learning based CNN architectures, were adopted and evaluated for apple pest and disease classification. As a result, DenseNet showed the best performance.

CNN 모델을 활용한 콘크리트 균열 검출 및 시각화 방법 (Concrete Crack Detection and Visualization Method Using CNN Model)

  • 최주희; 김영관; 이한승 / 한국건축시공학회 학술대회논문집 / 2022 봄 학술논문 발표대회 / pp. 73-74 / 2022
  • Concrete structures occupy the largest proportion of modern infrastructure, and they often suffer from cracking. Existing concrete crack diagnosis methods rely on expert visual inspection and are therefore limited in crack evaluation. In this study, we design a deep learning model that detects, visualizes, and outputs cracks on the surface of RC structures from image data, using a CNN (Convolutional Neural Network) model that can process two- and three-dimensional data such as video and images. An experimental study was conducted on an algorithm that automatically detects concrete cracks and visualizes them using a CNN model. All three deep learning models used for training achieved a concrete crack prediction accuracy of at least 90%, with the 'InceptionV3'-based CNN model showing the highest accuracy. The crack detection visualization model showed high prediction accuracy, averaging more than 95% for data with a crack width of 0.2 mm or more.


SVM on Top of Deep Networks for Covid-19 Detection from Chest X-ray Images

  • Do, Thanh-Nghi; Le, Van-Thanh; Doan, Thi-Huong / Journal of Information and Communication Convergence Engineering / Vol. 20, No. 3 / pp. 219-225 / 2022
  • In this study, we propose training a support vector machine (SVM) on top of deep networks for detecting Covid-19 from chest X-ray images. We started by gathering a real chest X-ray image dataset including positive Covid-19 cases, normal cases, and other lung diseases not caused by Covid-19. Instead of training deep networks from scratch, we fine-tuned recent pre-trained models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to classify chest X-ray images into one of three classes (Covid-19, normal, and other lung disease). We then train an SVM on top of the deep networks to perform a nonlinear combination of their outputs, improving classification over any single deep network. Empirical results on the real chest X-ray image dataset show that the deep network models, with the exception of ResNet50 (82.44%), provide an accuracy of at least 92% on the test set. The proposed SVM on top of the deep networks achieved the highest accuracy, 96.16%.
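The stacking idea above trains an SVM on the concatenated class-probability outputs of several networks. A scikit-learn sketch of that meta-classifier, where synthetic noisy softmax vectors stand in for the real fine-tuned networks (the class counts, noise level, and variable names are assumptions for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_classes, n_nets = 600, 3, 3  # Covid-19 / normal / other lung; 3 mock nets

# Synthetic stand-ins for each network's softmax outputs: noisy versions of
# the one-hot true label, concatenated into one meta-feature vector per image.
y = rng.integers(0, n_classes, size=n)
onehot = np.eye(n_classes)[y]
X = np.concatenate(
    [onehot + rng.normal(0, 0.35, size=(n, n_classes)) for _ in range(n_nets)],
    axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)  # nonlinear combination of the outputs
acc = svm.score(X_te, y_te)
print(round(acc, 3))
```

With real networks, `X` would be built by running each fine-tuned model over the training images and concatenating their predicted probability vectors.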

Food Detection by Fine-Tuning Pre-trained Convolutional Neural Network Using Noisy Labels

  • Alshomrani, Shroog; Aljoudi, Lina; Aljabri, Banan; Al-Shareef, Sarah / International Journal of Computer Science & Network Security / Vol. 21, No. 7 / pp. 182-190 / 2021
  • Deep learning is an advanced technology for large-scale data analysis, with numerous promising applications such as image processing and object detection. It has become customary to use transfer learning and fine-tune a pre-trained CNN model for most image recognition tasks. People taking photos and tagging them provide a valuable source of in-the-wild data. However, these tags and labels may be noisy, as the people annotating the images are not necessarily experts. This paper explores the impact of noisy labels on fine-tuning pre-trained CNN models. The effect is measured on a food recognition task using Food101 as a benchmark. Four pre-trained CNN models are included in this study: InceptionV3, VGG19, MobileNetV2, and DenseNet121. Symmetric label noise was added at different ratios. In all cases, models based on DenseNet121 outperformed the other models. When noisy labels were introduced, the performance of all models degraded almost linearly with the amount of added noise.
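Symmetric label noise, as used in this benchmark, flips a fixed fraction of labels uniformly to one of the *other* classes. A small numpy sketch (the function name and the mock single-class label array are illustrative, not from the paper):

```python
import numpy as np

def add_symmetric_noise(labels: np.ndarray, ratio: float, n_classes: int,
                        seed: int = 0) -> np.ndarray:
    """Flip `ratio` of the labels uniformly to one of the other classes."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(ratio * len(labels)), replace=False)
    # Offsets in [1, n_classes - 1] guarantee the flipped label always differs.
    noisy[idx] = (labels[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return noisy

labels = np.zeros(1000, dtype=int)  # mock Food101-style labels (one class here)
noisy = add_symmetric_noise(labels, ratio=0.2, n_classes=101)
print((noisy != labels).mean())  # 0.2
```

Training on `noisy` while evaluating against the clean labels reproduces the setup whose accuracy degrades roughly linearly with the noise ratio.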

A computer vision-based approach for behavior recognition of gestating sows fed different fiber levels during high ambient temperature

  • Kasani, Payam Hosseinzadeh; Oh, Seung Min; Choi, Yo Han; Ha, Sang Hun; Jun, Hyungmin; Park, Kyu hyun; Ko, Han Seo; Kim, Jo Eun; Choi, Jung Woo; Cho, Eun Seok; Kim, Jin Soo / Journal of Animal Science and Technology / Vol. 63, No. 2 / pp. 367-379 / 2021
  • The objectives of this study were to evaluate convolutional neural network models and computer vision techniques for the classification of swine posture with high accuracy, and to use the results to investigate the effect of dietary fiber level on the behavioral characteristics of pregnant sows under low and high ambient temperatures during the last stage of gestation. A total of 27 crossbred sows (Yorkshire × Landrace; average body weight, 192.2 ± 4.8 kg) were assigned to three treatments in a randomized complete block design during the last stage of gestation (days 90 to 114). The sows in group 1 were fed a 3% fiber diet under neutral ambient temperature; the sows in group 2 were fed a 3% fiber diet under high ambient temperature (HT); the sows in group 3 were fed a 6% fiber diet under HT. Eight popular deep learning-based feature extraction frameworks (DenseNet121, DenseNet201, InceptionResNetV2, InceptionV3, MobileNet, VGG16, VGG19, and Xception) were selected for automatic swine posture classification and compared on a swine posture image dataset constructed under real swine farm conditions. The neural network models showed excellent performance on previously unseen data (ability to generalize). The DenseNet121 feature extractor achieved the best performance with 99.83% accuracy, and both DenseNet201 and MobileNet showed 99.77% accuracy on the image dataset. The behavior of sows classified by the DenseNet121 feature extractor showed that the HT in our study reduced (p < 0.05) the standing behavior of sows and also tended to increase (p = 0.082) lying behavior. The high dietary fiber treatment tended to increase (p = 0.064) lying behavior and decreased (p < 0.05) standing behavior, but there was no change in sitting under HT conditions.