• Title/Summary/Keyword: ImageNet

Search Results: 806

A Study on the Performance of Enhanced Deep Fully Convolutional Neural Network Algorithm for Image Object Segmentation in Autonomous Driving Environment (자율주행 환경에서 이미지 객체 분할을 위한 강화된 DFCN 알고리즘 성능연구)

  • Kim, Yeonggwang;Kim, Jinsul
    • Smart Media Journal
    • /
    • v.9 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • Recently, various studies have been conducted to integrate image segmentation into smart factory and autonomous driving applications. In particular, image segmentation systems using deep learning algorithms can now learn from large volumes of data with high accuracy. To use image segmentation in autonomous driving, sufficient training on large amounts of data is required, and a streaming environment that processes driving data in real time is essential for safe operation on highways and in child protection zones. We therefore propose a novel DFCN algorithm that enhances the existing FCN algorithm and can be applied to various road environments, and we demonstrate that the DFCN algorithm improves the loss value by 1.3% compared with the previous FCN algorithm. Moreover, applying the proposed DFCN approach to the existing U-Net algorithm preserves frequency information in the image and produces better results, yielding better performance than the classical FCN algorithm in the autonomous driving environment.
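
As a rough illustration of the fully convolutional pattern the paper builds on, the Python/PyTorch sketch below shows a minimal FCN-style segmentation head that classifies every pixel and upsamples the score map back to the input resolution. The layer sizes, class count, and the name TinyFCN are illustrative assumptions; the abstract does not specify the DFCN architecture itself.

    # Minimal FCN-style semantic segmentation sketch (PyTorch).
    # The DFCN modifications described in the paper are not given in the
    # abstract; this only illustrates the baseline fully convolutional pattern.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyFCN(nn.Module):
        def __init__(self, num_classes=19):  # assumed number of road-scene classes
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

        def forward(self, x):
            h, w = x.shape[-2:]
            feats = self.backbone(x)          # downsampled feature maps
            logits = self.classifier(feats)   # per-class score map
            # Upsample back to the input resolution, as in FCN-style heads
            return F.interpolate(logits, size=(h, w), mode="bilinear",
                                 align_corners=False)

    # Usage: logits = TinyFCN()(torch.randn(1, 3, 256, 512))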

Hair and Fur Synthesizer via ConvNet Using Strand Geometry Images

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.5
    • /
    • pp.85-92
    • /
    • 2022
  • In this paper, we propose a technique that uses a ConvNet and line-form strand geometry images to express low-resolution hair and fur simulations at high resolution without noise. Pairs of low-resolution and high-resolution data are obtained through physics-based simulation and used to build the training set, with the positions of the hair strands converted into geometry images. The hair and fur network proposed in this paper serves as an image synthesizer that upscales a low-resolution geometry image to a high-resolution one. When the high-resolution geometry image obtained from the test is converted back into high-resolution hair, it can express the elastic movement of hair, which is difficult to capture with a single mapping function. The synthesis runs faster than traditional physics-based simulation and can be executed easily without knowledge of complex numerical analysis.
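
The abstract does not give the synthesizer's architecture, so the following Python/PyTorch sketch only illustrates the general idea of upscaling a strand geometry image with a small ConvNet, using an SRCNN-style upsample-then-refine layout. The three channels standing in for (x, y, z) strand-vertex positions and the class name GeometryUpscaler are assumptions.

    # Sketch of an image up-scaling ConvNet for strand geometry images.
    # Not the paper's synthesizer; a generic refinement network for
    # geometry images whose channels encode strand-vertex positions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GeometryUpscaler(nn.Module):
        def __init__(self, scale=2):
            super().__init__()
            self.scale = scale
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(32, 3, 5, padding=2),   # back to (x, y, z) channels
            )

        def forward(self, low_res):
            # Upsample first, then refine the result with convolutions
            up = F.interpolate(low_res, scale_factor=self.scale,
                               mode="bilinear", align_corners=False)
            return self.net(up)

    # high_res_geometry = GeometryUpscaler()(low_res_geometry)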

Use of deep learning in nano image processing through the CNN model

  • Xing, Lumin;Liu, Wenjian;Liu, Xiaoliang;Li, Xin;Wang, Han
    • Advances in nano research
    • /
    • v.12 no.2
    • /
    • pp.185-195
    • /
    • 2022
  • Deep learning is a field of artificial intelligence (AI) used for computer-aided diagnosis (CAD) and image processing in scientific research. Reading image slices involves many repetitive tasks, is time-consuming, and is constrained by geographic limits, and interpreting image information is highly subjective, which raises the rate of misdiagnosis. Given the high mortality rate of lung cancer, biopsy is needed to determine its class for further treatment. Deep learning has recently provided powerful tools for diagnosing lung cancer and planning therapeutic regimens. However, identifying the pathological class of lung cancer from CT images at an early stage is difficult because of the absence of powerful AI models and public training data sets. A Convolutional Neural Network (CNN) is proposed for recognizing pathological CT images. 472 patients who underwent staging FDG-PET/CT within 2 months prior to surgery or biopsy were selected. The developed CNN achieved accuracies of 87%, 69%, and 69% on the training, validation, and test sets, respectively, for T1-T2 versus T3-T4 lung cancer classification. These results indicate that CNN-based classifiers can achieve better accuracy in distinguishing pathological CT images, performing better than several deep learning models such as ResNet-34, AlexNet, and DenseNet with or without softmax weights.
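
For orientation only, a minimal Python/PyTorch sketch of a binary CNN classifier for the T1-T2 versus T3-T4 task is shown below. The layer configuration and the name StageClassifier are hypothetical; the paper's actual network and preprocessing are not described in the abstract.

    # Hypothetical small CNN for two-class staging of single-channel CT slices.
    import torch
    import torch.nn as nn

    class StageClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 2)          # two classes: T1-T2, T3-T4

        def forward(self, ct_slice):               # ct_slice: (N, 1, H, W)
            x = self.features(ct_slice).flatten(1)
            return self.head(x)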

Multi-classification Sensitive Image Detection Method Based on Lightweight Convolutional Neural Network

  • Yueheng Mao;Bin Song;Zhiyong Zhang;Wenhou Yang;Yu Lan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.5
    • /
    • pp.1433-1449
    • /
    • 2023
  • In recent years, the rapid development of social networks has led to a rapid increase in the amount of information available on the Internet, including a large amount of sensitive content related to pornography, politics, and terrorism. For sensitive image detection, existing machine learning algorithms suffer from large model sizes, long training times, and slow detection speeds when used for auditing and supervision. To detect sensitive images more accurately and quickly, this paper proposes a multi-classification sensitive image detection method based on a lightweight Convolutional Neural Network. Building on the EfficientNet model, the method incorporates the Ghost Module idea from GhostNet and adds an SE channel attention mechanism inside the Ghost Module for feature extraction. Experimental results on the sensitive image data set constructed in this paper show that the proposed method reaches an accuracy of 94.46% in sensitive information detection, higher than that of comparable methods. The model is then pruned through an ablation experiment and the activation function is replaced with Hard-Swish, which reduces the parameters of the original model by 54.67%. While maintaining accuracy, the detection time for a single image is reduced from 8.88 ms to 6.37 ms. The experiments demonstrate that the proposed method improves the precision of multi-class sensitive image identification, significantly decreases the number of model parameters, and achieves higher accuracy than comparable algorithms with a more lightweight model design.
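
A hedged Python/PyTorch sketch of the combination described above, a Ghost module whose output is re-weighted by an SE channel attention block and activated with Hard-Swish, is given below. Channel sizes and the reduction ratio are illustrative assumptions, not the paper's exact configuration.

    # Ghost module with SE channel attention (illustrative configuration).
    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, x):
            w = self.fc(self.pool(x).flatten(1)).unsqueeze(-1).unsqueeze(-1)
            return x * w                           # re-weight channels

    class GhostModuleSE(nn.Module):
        def __init__(self, in_ch, out_ch, ratio=2):
            # out_ch is assumed even so the cheap depthwise branch matches
            # the primary branch channel for channel.
            super().__init__()
            primary = out_ch // ratio
            cheap = out_ch - primary
            self.primary_conv = nn.Sequential(
                nn.Conv2d(in_ch, primary, 1, bias=False),
                nn.BatchNorm2d(primary), nn.Hardswish(inplace=True),
            )
            # "Cheap" depthwise operation generating the ghost feature maps
            self.cheap_conv = nn.Sequential(
                nn.Conv2d(primary, cheap, 3, padding=1, groups=primary, bias=False),
                nn.BatchNorm2d(cheap), nn.Hardswish(inplace=True),
            )
            self.se = SEBlock(out_ch)

        def forward(self, x):
            p = self.primary_conv(x)
            g = self.cheap_conv(p)
            return self.se(torch.cat([p, g], dim=1))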

A Study on the automatic Lane keeping control method of a vehicle based upon a perception net (퍼셉션 넷에 기반한 차량의 자동 차선 위치 제어에 관한 연구)

  • 부광석;정문영
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2000.10a
    • /
    • pp.257-257
    • /
    • 2000
  • The objective of this research is to monitor and control the vehicle motion in order to remove existing safety risks based on human-machine cooperative vehicle control. A predictive control method is proposed to control the steering wheel of the vehicle so as to keep the lane. The desired steering wheel angle for controlling the vehicle motion is calculated at every sample step based on vehicle dynamics and the current and estimated pose of the vehicle. The vehicle pose and the road curvature are calculated by geometrically fusing sensor data from the camera image, tachometer, and steering wheel encoder through the Perception Net, in which not only the state variables but also the corresponding uncertainties are propagated in the forward and backward directions so as to satisfy the given constraint conditions, maintain consistency, reduce the uncertainties, and guarantee robustness. A series of experiments was conducted to evaluate the control performance, in which a car-like robot was utilized to avoid unwanted safety problems. As a result, the robot kept a given lane of arbitrary shape very well at moderate speed.
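
The paper's predictive controller and Perception Net fusion are not detailed in the abstract; purely as an illustration of turning an estimated lateral offset, heading error, and road curvature into a steering command, a simple kinematic steering law is sketched below in Python. The wheelbase, steering ratio, and gains are assumed values.

    # Illustrative lane-keeping steering law (not the paper's controller).
    import math

    WHEELBASE = 2.7        # m, assumed
    STEERING_RATIO = 15.0  # steering-wheel angle / road-wheel angle, assumed

    def desired_steering_angle(lateral_offset, heading_error, curvature,
                               k_y=0.5, k_psi=1.2):
        """Return a desired steering-wheel angle in radians."""
        # Feedforward term from road curvature (Ackermann relation)
        feedforward = math.atan(WHEELBASE * curvature)
        # Feedback terms driving lateral offset and heading error to zero
        feedback = k_y * lateral_offset + k_psi * heading_error
        return STEERING_RATIO * (feedforward + feedback)

    # Example: 0.2 m offset, 2 degrees heading error, gentle curve of radius 500 m
    angle = desired_steering_angle(0.2, math.radians(2.0), 1.0 / 500.0)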

Fashion Clothing Image Classification Deep Learning (패션 의류 영상 분류 딥러닝)

  • Shin, Seong-Yoon;Wang, Guangxing;Shin, Kwang-Seong;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.676-677
    • /
    • 2022
  • In this paper, we propose a new method based on a deep learning model with an optimized dynamic decay learning rate and an improved model structure to achieve fast and accurate classification of fashion clothing images. Experiments are performed with the proposed model on the Fashion-MNIST dataset and compared with CNN, LeNet, LSTM, and BiLSTM methods.
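
The abstract does not define the optimized dynamic decay learning rate, so the Python/PyTorch sketch below only shows a generic exponential decay schedule applied while training a small classifier on Fashion-MNIST-sized inputs; the model layout and decay factor are assumptions.

    # Generic decaying-learning-rate training skeleton (illustrative only).
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(64 * 7 * 7, 10),   # 10 Fashion-MNIST classes
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
    criterion = nn.CrossEntropyLoss()

    def train_one_epoch(loader):
        for images, labels in loader:       # images: (N, 1, 28, 28)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()                    # decay the learning rate each epoch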

Layer Segmentation of Retinal OCT Images using Deep Convolutional Encoder-Decoder Network (딥 컨볼루셔널 인코더-디코더 네트워크를 이용한 망막 OCT 영상의 층 분할)

  • Kwon, Oh-Heum;Song, Min-Gyu;Song, Ha-Joo;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.11
    • /
    • pp.1269-1279
    • /
    • 2019
  • In medical image analysis, segmentation is considered a vital process since it partitions an image into coherent parts and extracts objects of interest from the image. In this paper, we consider automatic segmentation of retinal OCT images to find six layer boundaries using convolutional neural networks. Segmenting retinal images by layer boundaries is very important in diagnosing and predicting the progress of eye diseases including diabetic retinopathy, glaucoma, and AMD (age-related macular degeneration). We applied well-known CNN architectures for general image segmentation, namely SegNet, U-Net, and CNN-S, to this problem. We also propose a shortest-path-based algorithm for finding the layer boundaries from the outputs of SegNet and U-Net. We analysed their performance on a public OCT image data set. The experimental results show that SegNet combined with the proposed shortest-path-based boundary finding algorithm outperforms the other two networks.
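
As a sketch of the boundary-finding idea, the NumPy code below runs a column-wise dynamic program over a per-pixel boundary cost map and backtracks a minimal-cost path; this is a generic shortest-path formulation, not necessarily the paper's exact algorithm, and the vertical step limit is an assumption.

    # Column-wise shortest-path boundary extraction over a cost map.
    import numpy as np

    def trace_boundary(cost, max_step=2):
        """cost: (H, W) array, low where the layer boundary is likely.
        Returns one row index per column forming a minimal-cost path."""
        H, W = cost.shape
        acc = np.full((H, W), np.inf)
        back = np.zeros((H, W), dtype=int)
        acc[:, 0] = cost[:, 0]
        for x in range(1, W):
            for y in range(H):
                lo, hi = max(0, y - max_step), min(H, y + max_step + 1)
                prev = acc[lo:hi, x - 1]
                k = int(np.argmin(prev))
                acc[y, x] = cost[y, x] + prev[k]
                back[y, x] = lo + k
        # Backtrack from the cheapest endpoint in the last column
        path = np.zeros(W, dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for x in range(W - 1, 0, -1):
            path[x - 1] = back[path[x], x]
        return path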

A study on the application of the agricultural reservoir water level recognition model using CCTV image data (농업용 저수지 CCTV 영상자료 기반 수위 인식 모델 적용성 검토)

  • Kwon, Soon Ho;Ha, Changyong;Lee, Seungyub
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.4
    • /
    • pp.245-259
    • /
    • 2023
  • The agricultural reservoir is a critical water supply system in South Korea, providing approximately 60% of the agricultural water demand. However, reservoirs face several issues that jeopardize their efficient operation and management. To address these issues, we propose a novel deep-learning-based water level recognition model that uses CCTV image data to accurately estimate water levels in agricultural reservoirs. The model consists of three main parts: (1) dataset construction, (2) image segmentation using the U-Net algorithm, and (3) CCTV-based water level recognition using either a CNN or ResNet. The model was applied to two reservoirs (G-reservoir and M-reservoir) with observed CCTV images and water level time series data. The results show that the performance of the image segmentation model is superior, while the performance of the water level recognition model varies from 50 to 80% depending on the water level classification criteria (i.e., classification guideline) and the complexity of the image data (i.e., variability of the image pixels). The performance of the model could be improved if more data were collected.
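
A minimal Python/PyTorch sketch of the two-stage pipeline described above, segmentation followed by classification of the masked frame, is shown below. Both networks are placeholders standing in for the paper's U-Net and CNN/ResNet, and the number of water-level classes is an assumed value.

    # Two-stage water level recognition sketch: segment, mask, classify.
    import torch
    import torch.nn as nn

    NUM_LEVEL_CLASSES = 5   # assumed classification guideline

    def recognize_water_level(frame, seg_net: nn.Module, cls_net: nn.Module):
        """frame: (1, 3, H, W) CCTV image tensor."""
        with torch.no_grad():
            water_prob = torch.sigmoid(seg_net(frame))   # (1, 1, H, W) assumed
            water_mask = (water_prob > 0.5).float()
            masked = frame * water_mask                  # keep only water pixels
            logits = cls_net(masked)                     # (1, NUM_LEVEL_CLASSES)
        return int(logits.argmax(dim=1))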

Food Detection by Fine-Tuning Pre-trained Convolutional Neural Network Using Noisy Labels

  • Alshomrani, Shroog;Aljoudi, Lina;Aljabri, Banan;Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.7
    • /
    • pp.182-190
    • /
    • 2021
  • Deep learning is an advanced technology for large-scale data analysis, with numerous promising applications such as image processing and object detection. It has become customary to use transfer learning and fine-tune a pre-trained CNN model for most image recognition tasks. People taking photos and tagging them provides a valuable source of data. However, these tags and labels might be noisy, as the people who annotate these images might not be experts. This paper explores the impact of noisy labels on fine-tuning pre-trained CNN models. The effect is measured on a food recognition task using Food101 as a benchmark. Four pre-trained CNN models are included in this study: InceptionV3, VGG19, MobileNetV2, and DenseNet121. Symmetric label noise is added at different ratios. In all cases, models based on DenseNet121 outperformed the other models. When noisy labels were introduced to the data, the performance of all models degraded almost linearly with the amount of added noise.
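
The symmetric-noise setup can be illustrated with a short NumPy sketch that flips a given fraction of labels to a uniformly chosen different class; the function name and seed handling are illustrative, while the 101 classes in the usage line correspond to Food101.

    # Symmetric label-noise injection: with probability noise_ratio, a label
    # is replaced by a different class drawn uniformly at random.
    import numpy as np

    def add_symmetric_noise(labels, num_classes, noise_ratio, seed=0):
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels).copy()
        flip = rng.random(labels.shape[0]) < noise_ratio
        # Draw a random *different* class for every flipped sample
        offsets = rng.integers(1, num_classes, size=flip.sum())
        labels[flip] = (labels[flip] + offsets) % num_classes
        return labels

    # noisy = add_symmetric_noise(clean_labels, num_classes=101, noise_ratio=0.2)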

Smartphone-based structural crack detection using pruned fully convolutional networks and edge computing

  • Ye, X.W.;Li, Z.X.;Jin, T.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.141-151
    • /
    • 2022
  • In recent years, the industry and research communities have focused on developing autonomous crack inspection approaches, which mainly include image acquisition and crack detection. In these approaches, mobile devices such as cameras, drones, or smartphones are utilized as sensing platforms to acquire structural images, and deep learning (DL)-based methods are being developed as important crack detection approaches. However, the process of image acquisition and collection is time-consuming, which delays the inspection. At the same time, present mobile devices such as smartphones can serve not only as sensing platforms but also as computing platforms in which deep neural networks (DNNs) can be embedded to conduct on-site crack detection. Due to the limited computing resources of mobile devices, the size of the DNNs should be reduced to improve computational efficiency. In this study, an architecture called pruned crack recognition network (PCR-Net) was developed for the detection of structural cracks. A dataset containing 11,000 images was established based on raw images from bridge inspections. A pruning method was introduced to reduce the size of the base architecture and optimize the model size. Comparative studies were conducted with image processing techniques (IPTs) and other DNNs to evaluate the performance of the proposed PCR-Net. Furthermore, a modularly designed framework integrating the PCR-Net was developed to realize a DL-based crack detection application for smartphones. Finally, on-site crack detection experiments were carried out to validate the performance of the developed smartphone-based system for detecting structural cracks.
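
The abstract does not specify the pruning criterion used for PCR-Net; as a generic illustration, the Python/PyTorch sketch below ranks a convolution's output channels by the L1 norm of their filters and rebuilds a smaller layer keeping only the strongest ones. Downstream layers would also need their input channels adjusted, which is omitted here.

    # Magnitude-based channel pruning of a single convolution layer.
    import torch
    import torch.nn as nn

    def prune_conv_channels(conv: nn.Conv2d, keep_ratio=0.5):
        """Return a smaller Conv2d keeping the highest-L1-norm output channels."""
        n_keep = max(1, int(conv.out_channels * keep_ratio))
        scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one score per filter
        keep = torch.argsort(scores, descending=True)[:n_keep]
        pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                           stride=conv.stride, padding=conv.padding,
                           bias=conv.bias is not None)
        with torch.no_grad():
            pruned.weight.copy_(conv.weight[keep])
            if conv.bias is not None:
                pruned.bias.copy_(conv.bias[keep])
        return pruned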