• Title/Summary/Keyword: Image machine learning

595 search results

Super-Resolution Transmission Electron Microscope Image of Nanomaterials Using Deep Learning (딥러닝을 이용한 나노소재 투과전자 현미경의 초해상 이미지 획득)

  • Nam, Chunghee
    • Korean Journal of Materials Research / v.32 no.8 / pp.345-353 / 2022
  • In this study, deep learning was used to generate super-resolution images of transmission electron microscope (TEM) images for nanomaterial analysis. 1,169 image pairs, consisting of 256 × 256 pixel high-resolution (HR) images from TEM measurements and 32 × 32 pixel low-resolution (LR) images produced with the Python module OpenCV, were used to train deep learning models. The TEM images were of DyVO4 nanomaterials synthesized by hydrothermal methods. Mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were used as metrics to evaluate the models' performance. First, a super-resolution (SR) image was obtained using the traditional interpolation method of computer vision. In the SR image at low magnification, the shape of the nanomaterial improved; however, the SR images at medium and high magnification failed to show the characteristics of the nanomaterials' lattice. Second, to obtain an SR image, the deep learning model included a residual network, which reduces the loss of spatial information in the convolutional process of obtaining a feature map. In optimizing the deep learning model, it was confirmed that performance improved as the amount of data increased. In addition, by optimizing the model with a loss function that includes both MAE and SSIM, improved rendering of the nanomaterial lattice in SR images was achieved at medium and high magnifications. The final proposed deep learning model used four residual blocks to obtain the feature map of the low-resolution image, and the super-resolution image was completed by applying Upsampling2D and a residual block three times.
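
Two of the evaluation metrics named above, MAE and PSNR, are standard and easy to state concretely. A minimal NumPy sketch (SSIM is omitted for brevity; the flat 8-bit test images are invented for illustration and are not the paper's TEM data):

```python
import numpy as np

def mae(hr, sr):
    # Mean absolute error between high-resolution and super-resolved images
    return np.mean(np.abs(hr.astype(np.float64) - sr.astype(np.float64)))

def psnr(hr, sr, max_val=255.0):
    # Peak signal-to-noise ratio for 8-bit images, in dB
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

hr = np.full((32, 32), 128, dtype=np.uint8)
sr = np.full((32, 32), 130, dtype=np.uint8)
print(mae(hr, sr))   # 2.0
print(psnr(hr, sr))  # ≈ 42.11 dB
```

Higher PSNR indicates a closer match to the HR reference, which is why it is paired with the structural metric SSIM in the paper's combined loss.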

Recognition of PCB Components Using Faster-RCNN (Faster-RCNN을 이용한 PCB 부품 인식)

  • Ki, Cheol-min;Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.166-169 / 2017
  • Currently, studies using deep learning are actively carried out, showing good results in many fields. A template matching method is mainly used to recognize parts mounted on a PCB (Printed Circuit Board). However, template matching requires multiple templates depending on the shape, orientation, and brightness of the parts; it takes a long time to perform matching because it searches the entire image; and its recognition rate is considerably low. In this paper, we use the Faster-RCNN method, a machine learning approach for classifying several objects in one image, to recognize PCB components. This method outperforms template matching in both execution time and recognition rate.
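
The speed complaint about template matching above comes from its exhaustive search over the whole image. A brute-force sum-of-squared-differences sketch makes this concrete (a toy illustration, not the paper's method; the 8 × 8 image and 2 × 2 "component" are invented):

```python
import numpy as np

def match_template_ssd(image, template):
    # Brute-force template matching: slide the template over every
    # position and keep the location with the smallest sum of squared
    # differences. Cost grows with image size times template size.
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].astype(np.float64)
            ssd = np.sum((patch - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = np.zeros((8, 8))
image[3:5, 4:6] = 1.0          # a 2x2 bright "component"
template = np.ones((2, 2))
print(match_template_ssd(image, template))  # (3, 4)
```

A rotated or differently lit part would need its own template, which is the multiple-template burden the abstract points out.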

A Study on the Detection Method of Lane Based on Deep Learning for Autonomous Driving (자율주행을 위한 딥러닝 기반의 차선 검출 방법에 관한 연구)

  • Park, Seung-Jun;Han, Sang-Yong;Park, Sang-Bae;Kim, Jung-Ha
    • Journal of the Korean Society of Industry Convergence / v.23 no.6_2 / pp.979-987 / 2020
  • This study reviewed the deep learning models used in previous studies and selected a base model: ZFNet, chosen from among ZFNet, GoogLeNet, and ResNet, and objects were detected using a ZFNet-based Faster R-CNN (FRCNN). To reduce the detection error rate of the FRCNN, the locations of the four types of objects detected in the image were classified with an SVM classifier and location-based filtering was applied. Simulation results showed performance similar to that of lane-marking classification with conventional edge detection, with an average accuracy of about 88.8%. In addition, a previous study using the linear-parabolic model reported a processing speed of 165.65 ms at a minimum resolution of 600 × 800, whereas this study processed input images of 1280 × 960 in about 33 ms, so lane markings could be classified at a faster rate than in the previous study using a CNN-based end-to-end method.

Effective teaching using textbooks and AI web apps (교과서와 AI 웹앱을 활용한 효과적인 교육방식)

  • Sobirjon, Habibullaev;Yakhyo, Mamasoliev;Kim, Ki-Hawn
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.211-213 / 2022
  • Images in textbooks influence the learning process. Students often see pictures before reading the text, and these pictures can enhance students' power of imagination. Some research findings show that the images in textbooks can increase students' creativity. However, when learning major subjects, reading a textbook or looking at a picture alone may not be enough to understand the topics and fully grasp the concepts. Studies show that viewers remember 95% of a message when watching a video, compared to reading it as text. Combining textbooks and videos could therefore be a powerful teaching method; the "TEXT + IMAGE + VIDEO (Animation)" concept could be more beneficial than ordinary approaches. We tried to provide a solution using machine learning image classification. This paper covers the features, approaches, and detailed objectives of our project. For now, we have developed a prototype of this project as a web app, and it only works when accessed via smartphone. Once you have accessed the web app through your smartphone, it asks for permission to use the camera. When you bring your smartphone's camera close to a picture in the textbook, the app then displays the video related to that picture below it.

Damage detection in structures using modal curvatures gapped smoothing method and deep learning

  • Nguyen, Duong Huong;Bui-Tien, T.;Roeck, Guido De;Wahab, Magd Abdel
    • Structural Engineering and Mechanics / v.77 no.1 / pp.47-56 / 2021
  • This paper deals with damage detection using a Gapped Smoothing Method (GSM) combined with deep learning. The Convolutional Neural Network (CNN) is a deep learning model with an input layer, an output layer, and a number of hidden layers that consist of convolutional layers. The input layer is a tensor with shape (number of images) × (image width) × (image height) × (image depth). An activation function is applied to this tensor each time it passes through a hidden layer, and the last hidden layer is fully connected. After the fully connected layer, the output layer produces the CNN's prediction. In this paper, a complete machine learning system is introduced. The training data were taken from a Finite Element (FE) model, and the input images are the contour plots of the curvature gapped smoothing damage index. A free-free beam is used as a case study. In the first step, the FE model of the beam was used to generate data. The collected data were then divided into two parts, i.e. 70% for training and 30% for validation. In the second step, the proposed CNN was trained using the training data and then validated using the validation data. Furthermore, a vibration experiment on a damaged steel beam in a free-free support condition was carried out in the laboratory to test the method. A total of 15 accelerometers were set up to measure the mode shapes and calculate the curvature gapped smoothing of the damaged beam. Two scenarios with different severities of damage were introduced. The results showed that the trained CNN was successful in detecting both the location and the severity of the damage in the experimental damaged beam.
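
The 70%/30% division of the FE-generated images described above can be sketched as follows (the tensor layout (N, H, W, C) follows the input-layer description; the image count and size are illustrative, not the paper's actual data dimensions):

```python
import numpy as np

def train_val_split(images, train_frac=0.7, seed=0):
    # Shuffle an image tensor of shape (N, H, W, C) and split it
    # into training and validation subsets.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n_train = int(len(images) * train_frac)
    return images[idx[:n_train]], images[idx[n_train:]]

data = np.zeros((100, 64, 64, 1))  # stand-in for contour-plot images
train, val = train_val_split(data)
print(train.shape, val.shape)  # (70, 64, 64, 1) (30, 64, 64, 1)
```

Shuffling before splitting keeps both subsets representative when the FE scenarios were generated in a systematic order.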

A Study on Image Labeling Technique for Deep-Learning-Based Multinational Tanks Detection Model

  • Kim, Taehoon;Lim, Dongkyun
    • International Journal of Internet, Broadcasting and Communication / v.14 no.4 / pp.58-63 / 2022
  • Recently, the improvement in computational processing ability due to the rapid development of computing technology has greatly advanced the field of artificial intelligence, and research to apply it in various domains is active. In particular, in the national defense field, attention is paid to intelligent recognition among machine learning techniques, and efforts are being made to develop object identification and monitoring systems using artificial intelligence. To this end, various image processing technologies and object identification algorithms are applied to create models that can identify friendly and enemy weapon systems and personnel in real time. In this paper, we conducted image processing and object identification focused on tanks among the various weapon systems. We first processed the tank images using a convolutional neural network (CNN), a deep learning technique; the feature maps were examined and the characteristics of the tanks crucial for learning were derived. Then, using the YOLOv5 network, a CNN-based object detection network, a model trained by labeling the entire tank and a model trained by labeling only the tank's turret were created and their results compared. The model and labeling technique proposed in this paper can identify the type of tank more accurately and contribute to the intelligent recognition systems to be developed in the future.
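
The two labeling schemes compared above (whole tank vs. turret only) differ only in the bounding box written to the label file. A minimal sketch of producing YOLOv5-format label lines, where each line is "class x_center y_center width height" normalized to the image size (the box coordinates, frame size, and class index here are invented for illustration):

```python
def to_yolo_label(cls, box, img_w, img_h):
    # Convert a pixel-space box (x_min, y_min, x_max, y_max) into the
    # normalized label line YOLOv5 expects in its .txt files.
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical whole-tank box vs. a smaller turret-only box
# annotated on the same 1280x720 frame
print(to_yolo_label(0, (320, 180, 960, 540), 1280, 720))
print(to_yolo_label(0, (480, 180, 800, 360), 1280, 720))
```

Swapping the box is the only change between the two training sets, which is what lets the paper isolate the effect of the labeling technique.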

Proper Base-model and Optimizer Combination Improves Transfer Learning Performance for Ultrasound Breast Cancer Classification (다단계 전이 학습을 이용한 유방암 초음파 영상 분류 응용)

  • Ayana, Gelan;Park, Jinhyung;Choe, Se-woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.655-657 / 2021
  • It is challenging to find a breast ultrasound image training dataset for developing an accurate machine learning model, due to various regulations, personal information issues, and the expense of acquiring the images. Moreover, studies targeting transfer learning for ultrasound breast cancer image classification have not achieved high performance compared to radiologists. Here, we propose an improved transfer learning model for ultrasound breast cancer classification using a publicly available dataset. We argue that with a proper combination of an ImageNet pre-trained model and an optimizer, a better-performing model for ultrasound breast cancer image classification can be achieved. The proposed model provided a preliminary test accuracy of 99.5%. With more experiments involving various hyperparameters, the model is expected to achieve higher performance when subjected to new instances.

A Study on Designing Metadata Standard for Building AI Training Dataset of Landmark Images (랜드마크 이미지 AI 학습용 데이터 구축을 위한 메타데이터 표준 설계 방안 연구)

  • Kim, Jinmook
    • Journal of the Korean Society for Library and Information Science / v.54 no.2 / pp.419-434 / 2020
  • The purpose of this study is to design and propose a metadata standard for building an AI training dataset of landmark images. To achieve this, we first comprehensively examined and analyzed the state of the art in image retrieval system types and their indexing methods. We then investigated open training datasets and machine learning tools for image object recognition. Next, we selected metadata elements optimized for an AI training dataset of landmark images and defined the input data for each element. We conclude with implications and suggestions for the development of application services using the results of the study.
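
To make the element-selection step above concrete, here is a hypothetical record of the kind such a standard might define; the element names and values are invented for illustration and are not the standard proposed in the paper:

```python
# A hypothetical landmark-image metadata record: identification,
# descriptive, and annotation elements that an AI training dataset
# would typically pair with each image file.
record = {
    "image_id": "LM-0001",                 # dataset-internal identifier
    "landmark_name": "Gyeongbokgung Palace",
    "region": "Seoul",
    "capture_date": "2020-05-01",
    "bounding_box": [120, 80, 860, 640],   # x_min, y_min, x_max, y_max
    "license": "CC BY 4.0",
}
print(record["landmark_name"])  # Gyeongbokgung Palace
```

Defining the allowed input data per element (controlled vocabularies, date formats, coordinate conventions) is what turns such a record layout into a usable standard.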

A Cross-Platform Malware Variant Classification based on Image Representation

  • Naeem, Hamad;Guo, Bing;Ullah, Farhan;Naeem, Muhammad Rashid
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.7 / pp.3756-3777 / 2019
  • Recent internet development is helping malware researchers generate malicious code variants through automated tools. For this reason, the number of malicious variants is increasing day by day. Consequently, improving the performance of malware analysis is a critical requirement to stop the rapid expansion of malware. Existing research has shown that the similarities among malware variants can be used for detection and family classification. In this paper, a Cross-Platform Malware Variant Classification System (CP-MVCS) is proposed that converts a malware binary into a grayscale image. Malicious features are then extracted from the grayscale image through a Combined SIFT-GIST Malware (CSGM) description, and these features are used to identify the relevant family of the malware variant. CP-MVCS reduced computational time and improved classification accuracy by combining the CSGM feature description with machine learning classification. Experiments were performed on four publicly available datasets of Windows OS and Android OS. The experimental results showed that CP-MVCS outperformed traditional methods in both computation time and malware classification accuracy. The evaluation also showed that CP-MVCS not only differentiated families of malware variants but also efficiently identified both malware and benign samples in mixed sets.
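
The first step described above, converting a malware binary into a grayscale image, can be sketched under the common byte-to-pixel convention, where each byte becomes one 0-255 pixel (a minimal illustration; the row width and sample bytes are arbitrary, and the paper's actual conversion parameters may differ):

```python
import numpy as np

def binary_to_grayscale(data: bytes, width: int = 16) -> np.ndarray:
    # Interpret the raw binary as unsigned bytes, zero-pad to a full
    # rectangle, and reshape into a 2-D grayscale image.
    buf = np.frombuffer(data, dtype=np.uint8)
    height = -(-len(buf) // width)          # ceiling division
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[:len(buf)] = buf
    return padded.reshape(height, width)

img = binary_to_grayscale(bytes(range(40)), width=16)
print(img.shape)   # (3, 16)
```

Binaries from the same family tend to share byte-block patterns, which is why texture descriptors such as SIFT and GIST can then separate the resulting images by family.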

Deep Learning in Drebin: Android malware Image Texture Median Filter Analysis and Detection

  • Luo, Shi-qi;Ni, Bo;Jiang, Ping;Tian, Sheng-wei;Yu, Long;Wang, Rui-jin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.7 / pp.3654-3670 / 2019
  • This paper proposes an Image Texture Median Filter (ITMF) to analyze and detect Android malware on the Drebin dataset. We design an ITMF model combined with median-filter (MF) image processing to reflect the similarity of malware binary file blocks, while using MAEVS (Malware Activity Embedding in Vector Space) to reflect the potential dynamic activity of malware. To improve classification accuracy, the above features (the ITMF feature and the MAEVS feature) are used to train a Restricted Boltzmann Machine (RBM) with Back Propagation (BP). The experimental results show that the model achieves an average accuracy of 95.43% on Android malicious code with few false alarms, which is higher than the 95.2% obtained without ITMF, 93.8% for the shallow machine learning model SVM, 94.8% for KNN, and 94.6% for ANN.