• Title/Summary/Keyword: Precision machine


Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks

  • Zhai, Guanghao;Narazaki, Yasutaka;Wang, Shuo;Shajihan, Shaik Althaf V.;Spencer, Billie F. Jr.
    • Smart Structures and Systems / v.29 no.1 / pp.237-250 / 2022
  • Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have conducted studies to develop computer vision and machine learning techniques for SHM purposes, offering the potential to reduce the laborious nature and improve the effectiveness of field inspections. However, high-quality vision data from various types of damaged structures is relatively difficult to obtain because damaged structures occur rarely. The lack of data is particularly acute for fatigue cracks in steel bridge girders. As a result, the lack of training data is one of the main issues hindering wider application of these powerful techniques for SHM. To address this problem, this article proposes using synthetic data to augment the real-world datasets used for training neural networks that identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. Subsequently, this model is used to generate synthetic images for various lighting conditions and camera angles. A fully convolutional network is then trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. By employing synthetic data augmentation in the training process, the crack identification performance of the neural network on the test dataset improves from 35% to 40% intersection over union (IoU) and from 49% to 62% precision, demonstrating the efficacy of the proposed approach.
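
A minimal sketch of the two pixel-wise metrics reported in this abstract, IoU and precision, computed with NumPy for a binary crack mask. The array names and the 0.5 threshold are illustrative assumptions, not taken from the paper.

```python
# Illustrative only: pixel-wise IoU and precision for binary crack segmentation masks.
import numpy as np

def pixel_iou_precision(pred_prob: np.ndarray, gt_mask: np.ndarray, thresh: float = 0.5):
    """Return (IoU, precision) for a predicted probability map vs. a binary ground truth."""
    pred = pred_prob >= thresh            # binarize the network output
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # crack pixels labeled correctly
    fp = np.logical_and(pred, ~gt).sum()  # background pixels labeled as crack
    fn = np.logical_and(~pred, gt).sum()  # crack pixels the network missed
    iou = tp / (tp + fp + fn + 1e-9)
    precision = tp / (tp + fp + 1e-9)
    return iou, precision

# Example with random stand-in data (not real crack images)
rng = np.random.default_rng(0)
iou, prec = pixel_iou_precision(rng.random((256, 256)), rng.random((256, 256)) > 0.9)
print(f"IoU = {iou:.3f}, precision = {prec:.3f}")
```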

GCNXSS: An Attack Detection Approach for Cross-Site Scripting Based on Graph Convolutional Networks

  • Pan, Hongyu;Fang, Yong;Huang, Cheng;Guo, Wenbo;Wan, Xuelin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.4008-4023 / 2022
  • Since machine learning was introduced into cross-site scripting (XSS) attack detection, many researchers have conducted related studies and achieved significant results, such as saving time and labor costs by not maintaining the rule database required by traditional XSS attack detection methods. However, these approaches still face problems such as poor generalization ability and significant false negative rate (FNR) and false positive rate (FPR). Meanwhile, the automatic clustering property of graph convolutional networks (GCN) has attracted the attention of researchers. In the field of natural language processing (NLP), graph embeddings produced by a GCN cluster automatically in space without any training, which means that text data can be classified by the GCN-based embedding process alone; previous methods required training on labeled data after embedding to complete classification. With the help of the GCN auto-clustering feature and labeled data, this research proposes an approach to detect XSS attacks (called GCNXSS) that mines the dependencies between the units constituting an XSS payload. First, GCNXSS transforms a URL into a homogeneous word graph based on word co-occurrence relationships. Then, GCNXSS inputs the graph into the GCN model for graph embedding and obtains the classification results. Experimental results show that GCNXSS achieved accuracy, precision, recall, F1-score, FNR, FPR, and prediction time of 99.97%, 99.75%, 99.97%, 99.86%, 0.03%, 0.03%, and 0.0461 ms, respectively. Compared with existing methods, GCNXSS has lower FNR and FPR and stronger generalization ability.
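
A hedged sketch (not the authors' code) of the first GCNXSS step described above: turning a tokenized payload into a word co-occurrence graph. The whitespace tokenizer and the sliding-window size are assumptions made for illustration.

```python
# Build a symmetric word co-occurrence adjacency matrix over a sliding window.
import numpy as np
from itertools import combinations

def cooccurrence_graph(tokens, window: int = 2):
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    adj = np.zeros((len(vocab), len(vocab)))
    for start in range(len(tokens)):
        for a, b in combinations(tokens[start:start + window], 2):
            if a != b:                       # count each co-occurring pair in both directions
                adj[index[a], index[b]] += 1
                adj[index[b], index[a]] += 1
    return vocab, adj

payload = "<script> alert ( document.cookie ) </script>".split()
vocab, adj = cooccurrence_graph(payload)
print(vocab)
print(adj)
```

This adjacency matrix would then be fed, together with node features, to a GCN for embedding and classification.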

A Design and Implementation of Educational Delivery Robots for Learning of Autonomous Driving

  • Hur, Hwa-La;Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.107-114 / 2022
  • This paper proposes a delivery robot that can be used to teach autonomous driving. The proposed robot is designed for park-type apartment complexes without ground-level parking facilities. Compared to existing complexes with intricate ground and underground routes, park-type complexes have standardized movement paths, allowing the robot to run stably and making them suitable as an initial learning environment for students. The delivery robot is configured to deliver parcels by means of machine learning for route learning and autonomous driving using cameras and LiDAR sensors. In addition, the control MCU was designed in three separate parts to enable learning by level, and operation tests such as autonomous driving and obstacle recognition confirmed that the robot can be used as an educational delivery platform. In the future, we plan to develop it into an educational delivery robot for various delivery services by linking it with precision indoor positioning technology and the apartment complex's public technology platform.
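
Purely illustrative, and not from the paper: a minimal stop/go rule over LiDAR range readings of the kind an obstacle-recognition exercise on such a robot might use. The sensor layout, field of view, and 0.5 m stopping distance are assumptions.

```python
# Decide whether an obstacle lies within the forward field of view of a 2D LiDAR scan.
import numpy as np

def obstacle_ahead(ranges_m: np.ndarray, angles_deg: np.ndarray,
                   fov_deg: float = 30.0, stop_dist_m: float = 0.5) -> bool:
    """Return True if any return inside the forward field of view is closer than stop_dist_m."""
    forward = np.abs(angles_deg) <= fov_deg / 2
    return bool(np.any(ranges_m[forward] < stop_dist_m))

angles = np.linspace(-180.0, 180.0, 361)   # one reading per degree, 0 deg = straight ahead
ranges = np.full(361, 5.0)                 # open space everywhere...
ranges[np.abs(angles) < 5] = 0.3           # ...except a simulated box directly ahead
print(obstacle_ahead(ranges, angles))      # True -> the robot should stop
```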

COVID-19 Diagnosis from CXR images through pre-trained Deep Visual Embeddings

  • Khalid, Shahzaib;Syed, Muhammad Shehram Shah;Saba, Erum;Pirzada, Nasrullah
    • International Journal of Computer Science & Network Security / v.22 no.5 / pp.175-181 / 2022
  • COVID-19 is an acute respiratory syndrome that affects the host's breathing and respiratory system. The first case of the novel disease was reported in 2019; within months it had created a worldwide state of emergency and was declared a global pandemic. The disease also triggered a global socioeconomic crisis. The emergency made it imperative for professionals to take the measures needed for early diagnosis of the disease. The conventional diagnosis for COVID-19 is Polymerase Chain Reaction (PCR) testing. However, in many rural communities these tests are unavailable or take a long time to return results. Hence, we propose a COVID-19 classification system based on machine learning and transfer learning models. The proposed approach identifies individuals with COVID-19 and distinguishes them from healthy individuals with the help of Deep Visual Embeddings (DVE). Five state-of-the-art models, VGG-19, ResNet50, Inceptionv3, MobileNetv3, and EfficientNetB7, were used in this study along with five different pooling schemes to perform deep feature extraction. In addition, the features are normalized using standard scaling, and 4-fold cross-validation is used to validate performance over multiple versions of the validation data. The best results of 88.86% UAR, 88.27% specificity, 89.44% sensitivity, 88.62% accuracy, 89.06% precision, and 87.52% F1-score were obtained using ResNet-50 with average pooling and class-weighted logistic regression as the classifier.
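
A sketch of the classification stage described above: standard scaling, a class-weighted logistic regression, and 4-fold cross-validation scored with balanced accuracy (equivalent to UAR). The random matrix stands in for the ResNet-50 average-pooled embeddings; shapes and labels are assumptions, not real data.

```python
# scikit-learn pipeline over precomputed deep visual embeddings (stand-in data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2048))     # placeholder for ResNet-50 average-pooled features
y = rng.integers(0, 2, size=200)     # placeholder COVID / healthy labels

clf = make_pipeline(StandardScaler(),
                    LogisticRegression(class_weight="balanced", max_iter=1000))
cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
uar = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print("UAR per fold:", uar.round(3))
```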

Development and Performance Improvement of old Aluminum Extruder Remanufacturing Technology (노후된 알루미늄 압출기의 재제조 기술 개발 및 성능 개선)

  • Sang-Min Yoon;Hang-Chul Jung;Man-Seek Kong
    • Journal of the Korean Society of Industry Convergence / v.26 no.1 / pp.95-103 / 2023
  • The domestic remanufacturing industry is concentrated in auto parts, so it needs to expand into other industries. In the domestic aluminum industry, the extrusion process accounts for more than 40% of total production, yet aging extrusion equipment is not properly maintained or managed. In particular, extruders are typically not replaced until major parts are damaged or worn, leading to problems such as lower process precision, productivity, and production efficiency compared to new equipment, as well as high maintenance costs. In this study, the major high-risk parts of an old extruder, identified through Failure Mode and Effects Analysis (FMEA), were remanufactured, and the process capability and performance of the extruder were evaluated before and after remanufacturing. Compared to the existing extruder, the standard deviation of the remanufactured extruder was reduced by 93.5%, 57.9%, and 70.0% for the major process-control items of container temperature, billet temperature, and ram speed, respectively, keeping performance constant. In addition, it became possible to produce products with complex shapes that previously could not be produced because of problems such as dimensional deviations exceeding tolerances. This study presents remanufacturing guidelines derived from the failure mode and effects analysis of the old extruder and confirms the resulting performance improvement.
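
A hedged arithmetic sketch of the comparison reported above: the percent reduction in the standard deviation of a process-control item before versus after remanufacturing. The temperature samples are made up; only the formula is of interest.

```python
# Percent reduction in sample standard deviation, before vs. after remanufacturing.
import numpy as np

def std_reduction_pct(before: np.ndarray, after: np.ndarray) -> float:
    """100 * (1 - std_after / std_before), i.e., how much tighter the process became."""
    return 100.0 * (1.0 - np.std(after, ddof=1) / np.std(before, ddof=1))

before_temp = np.array([418.0, 432.0, 409.0, 445.0, 421.0])  # hypothetical container temps (deg C)
after_temp = np.array([424.0, 426.0, 425.0, 427.0, 425.0])
print(f"{std_reduction_pct(before_temp, after_temp):.1f}% reduction in standard deviation")
```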

Study on the Application of Artificial Intelligence Model for CT Quality Control (CT 정도관리를 위한 인공지능 모델 적용에 관한 연구)

  • Ho Seong Hwang;Dong Hyun Kim;Ho Chul Kim
    • Journal of Biomedical Engineering Research / v.44 no.3 / pp.182-189 / 2023
  • CT is a medical imaging device that acquires images based on the X-ray attenuation coefficients of human organs. Using this principle, it can also reconstruct sagittal and coronal planes and 3D images of the human body, making CT an essential device for routine diagnostic testing. However, the radiation exposure of a CT scan is high, so CT is regulated and managed as special medical equipment, and as such it must undergo quality control. Within quality control, the existing phantom imaging tests for spatial resolution and contrast resolution and the clinical image evaluation are qualitative; because these tests are not objective, they undermine trust in the reliability of the CT system. Therefore, by applying artificial intelligence classification models, we sought to confirm the possibility of quantitatively evaluating the qualitative portion of the phantom test. We used six classification models (VGG19, DenseNet201, EfficientNet B2, inception_resnet_v2, ResNet50V2, and Xception), and fine-tuning was additionally performed during training. As a result, across all classification models the accuracy for spatial resolution was 0.9562 or higher, the precision was 0.9535, the recall was 1, the loss value was 0.1774, and the training time ranged from a maximum of 14 minutes to a minimum of 8 minutes and 10 seconds. From the experimental results, it was concluded that artificial intelligence models can be applied to CT quality control for spatial resolution and contrast resolution.
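
A hedged PyTorch sketch of the fine-tuning step mentioned above: load a pretrained backbone, replace the classification head for the phantom-image classes, and freeze the feature layers. The two-class head, the frozen-backbone choice, and the use of ResNet rather than one of the paper's six models are assumptions for illustration only.

```python
# Fine-tuning sketch: swap the classifier head and train only the new head.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)                    # in practice, ImageNet weights would be loaded
for p in model.parameters():                      # freeze the backbone features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)     # e.g., "pass" vs. "fail" spatial resolution

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

dummy_slices = torch.randn(4, 3, 224, 224)        # stand-in for phantom CT images
labels = torch.tensor([0, 1, 0, 1])
loss = criterion(model(dummy_slices), labels)
loss.backward()
optimizer.step()
print(float(loss))
```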

Multi-classification Sensitive Image Detection Method Based on Lightweight Convolutional Neural Network

  • Yueheng Mao;Bin Song;Zhiyong Zhang;Wenhou Yang;Yu Lan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1433-1449 / 2023
  • In recent years, the rapid development of social networks has led to a rapid increase in the amount of information available on the Internet, which contains a large amount of sensitive content related to pornography, politics, and terrorism. For sensitive image detection, existing machine learning algorithms suffer from problems such as large model size, long training time, and slow detection speed when used for auditing and supervision. In order to detect sensitive images more accurately and quickly, this paper proposes a multi-classification sensitive image detection method based on a lightweight Convolutional Neural Network. Building on the EfficientNet model, the method incorporates the Ghost Module idea of the GhostNet model and adds an SE channel attention mechanism inside the Ghost Module for feature extraction training. Experimental results on the sensitive image dataset constructed in this paper show that the accuracy of the proposed method in sensitive information detection is 94.46%, higher than that of similar methods. The model is then pruned through an ablation experiment, and the activation function is replaced with Hard-Swish, which reduces the parameters of the original model by 54.67%. While maintaining accuracy, the detection time for a single image is reduced from 8.88 ms to 6.37 ms. The experiments demonstrate that the proposed method improves the precision of identifying multi-class sensitive images, significantly decreases the number of model parameters, and achieves higher accuracy than comparable algorithms with a more lightweight model design.
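
A minimal PyTorch sketch of a squeeze-and-excitation (SE) channel-attention block of the kind the paper adds inside the Ghost Module. The reduction ratio, tensor shapes, and the use of Hard-Swish inside the block are illustrative assumptions, not the authors' exact design.

```python
# Squeeze-and-excitation block: global-average "squeeze", two-layer "excite", channel rescaling.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.Hardswish(),                               # Hard-Swish, as used in the pruned model
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                                # excite: rescale each channel

print(SEBlock(32)(torch.randn(2, 32, 16, 16)).shape)      # torch.Size([2, 32, 16, 16])
```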

IoT botnet attack detection using deep autoencoder and artificial neural networks

  • Deris Stiawan;Susanto;Abdi Bimantara;Mohd Yazid Idris;Rahmat Budiarto
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1310-1338 / 2023
  • As Internet of Things (IoT) applications and devices grow rapidly, cyber-attacks on IoT networks/systems are also on an increasing trend, raising the threat to security and privacy. Botnets are among the threats that dominate these attacks, as they can easily compromise devices attached to IoT networks/systems. Compromised devices behave like normal ones, which makes them difficult to recognize. Several intelligent approaches have been introduced to improve the detection accuracy of this type of cyber-attack, including deep learning and machine learning techniques, and dimensionality reduction methods are often applied during the preprocessing stage. This research work proposes a deep autoencoder dimensionality reduction method combined with an Artificial Neural Network (ANN) classifier as a botnet detection system for IoT networks/systems. Experiments were carried out using 3-layer, 4-layer, and 5-layer autoencoders to pre-process data from the MedBIoT dataset. Experimental results show that the 5-layer autoencoder performs best, with an accuracy of 99.72%, precision of 99.82%, sensitivity of 99.82%, specificity of 99.31%, and F1-score of 99.82%. In addition, the 5-layer autoencoder model reduced the dataset size from 152 MB to 12.6 MB (a reduction of 91.2%). Experiments on the N_BaIoT dataset also show a very high level of accuracy, up to 99.99%.
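
A sketch, under assumed layer sizes, of the two-stage idea described above: a deep autoencoder compresses the traffic features, and an ANN classifies the compressed representation as botnet or benign. The feature dimension (100), bottleneck size (16), and layer widths are illustrative, not the paper's configuration.

```python
# Autoencoder for dimensionality reduction, followed by a small ANN classifier on the bottleneck.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(100, 64), nn.ReLU(),
                        nn.Linear(64, 32), nn.ReLU(),
                        nn.Linear(32, 16))
decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                        nn.Linear(32, 64), nn.ReLU(),
                        nn.Linear(64, 100))
classifier = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))  # botnet vs. benign

x = torch.randn(8, 100)                                       # placeholder flow features
recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)   # autoencoder trained on reconstruction
logits = classifier(encoder(x).detach())                       # classifier trained on the compressed codes
print(recon_loss.item(), logits.shape)
```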

A Hybrid Semantic-Geometric Approach for Clutter-Resistant Floorplan Generation from Building Point Clouds

  • Kim, Seongyong;Yajima, Yosuke;Park, Jisoo;Chen, Jingdao;Cho, Yong K.
    • International conference on construction engineering and project management / 2022.06a / pp.792-799 / 2022
  • Building Information Modeling (BIM) technology is a key component of modern construction engineering and project management workflows. As-is BIM models that represent the spatial reality of a project site offer crucial information to stakeholders for construction progress monitoring, error checking, and building maintenance. Geometric methods for automatically converting raw scan data into BIM models (Scan-to-BIM) often fail to make use of higher-level semantic information in the data, whereas semantic segmentation methods only output labels at the point level without creating the object-level models necessary for BIM. To address these issues, this research proposes a hybrid semantic-geometric approach for clutter-resistant floorplan generation from laser-scanned building point clouds. The input point clouds are first pre-processed by normalizing the coordinate system and removing outliers. Then, a semantic segmentation network based on PointNet++ is used to label each point as ceiling, floor, wall, door, stair, or clutter. The clutter points are removed, whereas the wall, door, and stair points are used for 2D floorplan generation. A region-growing segmentation algorithm paired with geometric reasoning rules is applied to group the points into individual building elements. Finally, a 2-fold Random Sample Consensus (RANSAC) algorithm is applied to parameterize the building elements into 2D lines, which are used to create the output floorplan. The proposed method is evaluated using the metrics of precision, recall, Intersection-over-Union (IOU), Betti error, and warping error.
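
Not the authors' implementation: a toy RANSAC line fit of the kind used in the final step to parameterize wall points into 2D lines. The iteration count, inlier tolerance, and synthetic "wall" points are assumptions for illustration.

```python
# Fit a 2D line to noisy points with a simple RANSAC loop.
import numpy as np

def ransac_line(points, n_iter: int = 200, inlier_tol: float = 0.05, seed: int = 0):
    """Return (point_on_line, unit_direction, inlier_mask) for the best candidate line."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p1, d = points[i], points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        normal = np.array([-d[1], d[0]])           # unit normal of the candidate line
        dist = np.abs((points - p1) @ normal)      # perpendicular distance of every point
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p1, d)
    return best_model[0], best_model[1], best_inliers

rng = np.random.default_rng(1)
pts = np.column_stack([np.linspace(0.0, 4.0, 60), 1.5 + 0.01 * rng.standard_normal(60)])
origin, direction, mask = ransac_line(pts)
print("direction:", direction.round(3), "| inliers:", int(mask.sum()))
```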


EDNN based prediction of strength and durability properties of HPC using fibres & copper slag

  • Gupta, Mohit;Raj, Ritu;Sahu, Anil Kumar
    • Advances in concrete construction / v.14 no.3 / pp.185-194 / 2022
  • The construction field has been encouraged to use industrial waste and other secondary materials for producing cement and concrete, since doing so decreases the consumption of natural resources. At the same time, analyses of the strength and durability properties of such cement and concrete are required to ensure quality. Existing research has focused on predicting the strength and other properties of High-Performance Concrete (HPC) with optimization and machine learning algorithms, but these methods still suffer from error and accuracy issues. Therefore, this research uses an Enhanced Deep Neural Network (EDNN) to predict the strength and durability of HPC. In the proposed work, data is first gathered and then pre-processed by eliminating missing values and applying normalization. Features are extracted from the pre-processed data and input to the EDNN algorithm, which predicts the strength and durability properties of the given mix designs. The weights of the EDNN are initialized using the Switched Multi-Objective Jellyfish Optimization (SMOJO) algorithm, and the Gaussian radial function is used as the activation function. In the experimental analysis, the performance of the proposed EDNN is compared with existing DNN, CNN, ANN, and SVM methods based on the RMSE, MAE, MAPE, and R2 metrics, and the proposed EDNN performs better according to these metrics. The effectiveness of the proposed EDNN is further examined using the accuracy, precision, recall, and F-measure metrics. The fitness of the proposed SMOJO algorithm is also compared with existing algorithms, i.e., JO, GWO, PSO, and GA, and SMOJO achieves a higher fitness value than the existing algorithms.
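
A hedged sketch of the "Gaussian radial function" activation mentioned in this abstract, wrapped in a small regression network. The layer widths, the width parameter sigma, and the two output targets are assumptions; the paper's EDNN architecture and SMOJO weight initialization are not reproduced here.

```python
# Small feed-forward regressor using a Gaussian radial activation exp(-x^2 / (2*sigma^2)).
import torch
import torch.nn as nn

class GaussianActivation(nn.Module):
    def __init__(self, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return torch.exp(-(x ** 2) / (2 * self.sigma ** 2))

ednn_like = nn.Sequential(
    nn.Linear(12, 64), GaussianActivation(),
    nn.Linear(64, 32), GaussianActivation(),
    nn.Linear(32, 2),                      # e.g., predicted strength and durability targets
)
print(ednn_like(torch.randn(5, 12)).shape)  # torch.Size([5, 2])
```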