• Title/Summary/Keyword: neural network classification


Real-time automated detection of construction noise sources based on convolutional neural networks

  • Jung, Seunghoon; Kang, Hyuna; Hong, Juwon; Hong, Taehoon; Lee, Minhyun; Kim, Jimin
    • International conference on construction engineering and project management / 2020.12a / pp.455-462 / 2020
  • Noise, which is unwanted sound, is a serious pollutant that can affect human health as well as the working and living environment. However, noise management on construction projects is currently conducted only after the noise exceeds the regulatory standard, which increases conflicts with inhabitants near the construction site and threatens the safety and productivity of construction workers. To overcome the limitations of current noise management methods, the activities of construction equipment, the main source of construction noise, need to be managed in real time throughout the construction period. Therefore, this paper proposes a framework for automatically detecting noise sources on construction sites in real time based on convolutional neural networks (CNNs), in four steps: (i) Step 1: definition of the noise sources; (ii) Step 2: data preparation; (iii) Step 3: noise source classification using the audio CNN; and (iv) Step 4: noise source detection using the visual CNN. The short-time Fourier transform (STFT) and temporal image processing are used to capture the temporal features of the audio and visual data, and the AlexNet and You Only Look Once v3 (YOLOv3) algorithms are adopted to classify and detect the noise sources in real time. As a result, the proposed framework is expected to immediately identify the construction activities that are the current noise sources in video of the construction site. It could help environmental construction managers identify and control noise efficiently by automatically detecting the noise sources among the many activities carried out by various types of construction equipment. Thereby, not only can conflicts between inhabitants and construction companies caused by construction noise be prevented, but the noise-related health risks and productivity degradation for construction workers and nearby inhabitants can also be minimized.
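A minimal sketch of the audio branch (Step 3) follows: an STFT log-spectrogram fed into a small CNN classifier. The paper uses AlexNet; the generic network and the equipment class list here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the audio branch (Step 3): STFT spectrogram -> small CNN.
# The paper uses AlexNet; this generic CNN and the class list are illustrative only.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

CLASSES = ["excavator", "breaker", "dump_truck", "background"]  # hypothetical labels

def spectrogram(waveform: np.ndarray, fs: int = 16000) -> torch.Tensor:
    """Log-magnitude STFT, shaped (1, freq_bins, time_frames) for a CNN."""
    _, _, Z = stft(waveform, fs=fs, nperseg=512, noverlap=256)
    logmag = np.log1p(np.abs(Z)).astype(np.float32)
    return torch.from_numpy(logmag).unsqueeze(0)

class AudioCNN(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
        )
        self.fc = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# One second of fake site audio stands in for a microphone frame.
x = spectrogram(np.random.randn(16000)).unsqueeze(0)   # (batch, 1, F, T)
probs = AudioCNN()(x).softmax(dim=1)
print(CLASSES[int(probs.argmax())])
```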


Mobbing-Value Algorithm based on User Profile in Online Social Network (온라인 소셜 네트워크에서 사용자 프로파일 기반의 모빙지수(Mobbing-Value) 알고리즘)

  • Kim, Guk-Jin; Park, Gun-Woo; Lee, Sang-Hoon
    • The KIPS Transactions: Part D / v.16D no.6 / pp.851-858 / 2009
  • Mobbing is not restricted to young people; an even bigger problem has recently emerged in workplaces. According to ILO reports and domestic cases, mobbing in the workplace is increasing sharply, from 9.1% ('03) to 30.7% ('08), bringing both personal and social losses. The proposed algorithm makes it possible to identify not only current mobbing victims but also potential ones through user profiles, contributing to efficient personnel management. First, this paper extracts a mobbing-related user profile by selecting seven factors and fifty attributes related to the issue. Next, each extracted attribute is expressed as '1' if it applies to the user and '0' otherwise, and a similarity function is applied to the attribute sums within each factor to calculate the similarity between users. Third, optimized weights for the factors and their attributes are calculated by applying the neural network algorithm of SPSS Clementine, and through this weighted summation the Mobbing-Value (MV) is computed. Finally, by mapping the MV of online social network users onto the G2 mobbing propensity classification model designed in this paper (4 groups: ideal group of the online social network, bullies, aggressive victims, victims), the mobbing propensity of users can be determined, which will contribute to efficient personnel management.
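The shape of this computation can be illustrated as follows. This is a rough sketch under stated assumptions: the paper's exact similarity function and the weights trained with SPSS Clementine are not reproduced here, so random placeholders stand in for both.

```python
# Illustrative sketch of the MV computation: binary factor/attribute profiles,
# a per-factor similarity between users, and a weighted sum. The actual seven
# factors, fifty attributes, similarity function, and neural-network-trained
# weights are defined in the paper; the values below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_FACTORS, ATTRS_PER_FACTOR = 7, 7                      # paper: 7 factors, 50 attributes
weights = rng.dirichlet(np.ones(N_FACTORS))             # stand-in for trained weights

def profile():
    return rng.integers(0, 2, size=(N_FACTORS, ATTRS_PER_FACTOR))  # 1 = attribute applies

def factor_similarity(a, b):
    """Fraction of matching attribute flags within each factor (assumed form)."""
    return (a == b).mean(axis=1)

def mobbing_value(user, others):
    sims = np.array([factor_similarity(user, o) for o in others])  # (n_users, n_factors)
    return float(sims.mean(axis=0) @ weights)            # weighted summation -> MV

me, peers = profile(), [profile() for _ in range(20)]
print(f"MV = {mobbing_value(me, peers):.3f}")            # map MV onto the G2 groups
```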

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and laborious, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. This paper focuses on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which carries more information about the image; when the three fully connected layer features are concatenated, the resulting representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy since they come from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. The experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, the proposed approach achieved 75.6% accuracy versus the 73.9% of the FC7 layer on Caltech-256, 73.1% versus the 69.2% of the FC8 layer on VOC07, and 52.2% versus the 48.7% of the FC7 layer on SUN397. It also achieved superior performance over existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
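A rough sketch of this three-step pipeline, assuming torchvision's pre-trained AlexNet as the feature extractor and scikit-learn for PCA and the linear classifier; the toy images, labels, and PCA dimensionality are placeholders, not the paper's setup.

```python
# Sketch of the pipeline: concatenate the FC6/FC7/FC8 activations of a
# pre-trained AlexNet (4096+4096+1000 = 9192 dims), reduce with PCA, then
# train a linear classifier. Dataset and PCA size are placeholders.
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights="IMAGENET1K_V1").eval()
taps = [m for m in alexnet.classifier if isinstance(m, torch.nn.Linear)]
acts = []
for layer in taps:                                  # capture FC6, FC7, FC8 outputs
    layer.register_forward_hook(lambda _m, _i, out: acts.append(out))

@torch.no_grad()
def multi_layer_features(images: torch.Tensor):     # images: (N, 3, 224, 224)
    acts.clear()
    alexnet(images)
    return torch.cat(acts, dim=1).numpy()           # (N, 9192)

# Toy stand-in for a real dataset such as Caltech-256.
X = multi_layer_features(torch.randn(8, 3, 224, 224))
y = [0, 1] * 4
Z = PCA(n_components=4).fit_transform(X)            # keep salient components
clf = LinearSVC().fit(Z, y)
print(clf.predict(Z[:2]))
```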

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.157-173 / 2011
  • As Internet use has exploded, malicious attacks and hacking against networked systems have become frequent, and such intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS): security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well in normal situations but poorly when they encounter new or unknown patterns of network attack. For this reason, several recent studies have adopted artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations, such as the risk of overfitting, the need for large sample sizes, and the opacity of the prediction process (the black-box problem). As a result, the most recent IDS studies have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model to improve the predictive ability of IDS; the model also considers asymmetric error costs by optimizing the classification threshold. There are two common forms of error in intrusion detection. The first is the false-positive error (FPE), in which normal activity is misjudged as an attack, which may result in unnecessary countermeasures. The second is the false-negative error (FNE), in which malicious activity is misjudged as normal. Compared to the FPE, the FNE is more fatal, so when considering the total misclassification cost in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, the proposed model optimizes the classification threshold to minimize the total misclassification cost. Conventional SVM cannot be applied directly here because it generates discrete output (a class label). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which generates probability estimates. To validate the practical applicability of the model, we applied it to a real-world network intrusion dataset collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. The SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, the ANN using NeuroShell 4.0, and the SVM using LIBSVM v2.90, a freeware tool for training SVM classifiers.
Empirical results showed that the proposed SVM-based model outperformed all the comparative models in detecting network intrusions in terms of accuracy, and that it reduced the total misclassification cost compared to the ANN-based intrusion detection model. The intrusion detection model proposed in this paper is therefore expected not only to enhance the performance of IDS but also to lead to better management of FNEs.
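The threshold-optimization idea can be sketched with scikit-learn, whose SVC(probability=True) applies Platt-style probability calibration; the 10:1 cost ratio and the synthetic data are illustrative assumptions, not values from the paper.

```python
# Sketch of the threshold-optimization idea: fit an SVM with Platt-scaled
# probabilities, then pick the cutoff that minimizes an asymmetric cost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)   # Platt (2000) scaling
p_attack = svm.predict_proba(X_te)[:, 1]

C_FNE, C_FPE = 10.0, 1.0          # assumed: missing an intrusion costs far more
def total_cost(threshold):
    pred = (p_attack >= threshold).astype(int)
    fne = np.sum((pred == 0) & (y_te == 1))   # attack judged normal
    fpe = np.sum((pred == 1) & (y_te == 0))   # normal judged attack
    return C_FNE * fne + C_FPE * fpe

grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmin([total_cost(t) for t in grid])]
print(f"cost-minimizing threshold: {best:.2f} (vs. default 0.50)")
```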

Research on Text Classification of Research Reports using Korea National Science and Technology Standards Classification Codes (국가 과학기술 표준분류 체계 기반 연구보고서 문서의 자동 분류 연구)

  • Choi, Jong-Yun; Hahn, Hyuk; Jung, Yuchul
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.1 / pp.169-177 / 2020
  • In South Korea, the results of science and technology R&D are submitted to the National Science and Technology Information Service (NTIS) as reports tagged with Korea national science and technology standard classification codes (K-NSCC). However, given that there are more than 2,000 sub-categories, it is non-trivial to choose the correct classification codes without a clear understanding of the K-NSCC. In addition, there are few studies of automatic document classification based on the K-NSCC, and no training data are available in the public domain. To the best of our knowledge, this study is the first attempt to build a high-performing K-NSCC classification system based on NTIS report meta-information from the last five years (2013-2017). To this end, about 210 mid-level categories were selected, and preprocessing was conducted considering the characteristics of research report metadata. More specifically, we propose a convolutional neural network (CNN) technique that uses only task names and keywords, which are the most influential fields. The proposed model is compared with several machine learning methods (e.g., a linear support vector classifier, CNN, gated recurrent unit) that perform well in text classification, and it shows a performance advantage of 1% to 7% in terms of top-three F1 score.
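A minimal sketch of such a text CNN follows; the vocabulary size, embedding width, and tokenization are assumptions, with only the roughly 210 mid-level categories and the top-three evaluation taken from the abstract.

```python
# Minimal sketch of a text CNN over task-name/keyword tokens, in the spirit of
# the proposed model. Vocabulary and embedding sizes are placeholders.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab=20000, emb=128, n_classes=210, widths=(2, 3, 4)):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.convs = nn.ModuleList(nn.Conv1d(emb, 64, w) for w in widths)
        self.fc = nn.Linear(64 * len(widths), n_classes)

    def forward(self, ids):                      # ids: (batch, seq_len)
        x = self.emb(ids).transpose(1, 2)        # (batch, emb, seq_len)
        pooled = [c(x).relu().max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1)) # logits over 210 categories

ids = torch.randint(1, 20000, (4, 30))           # 4 reports, 30 tokens each
logits = TextCNN()(ids)
top3 = logits.topk(3, dim=1).indices             # top-three F1 uses these
print(top3)
```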

Evaluating the prediction models of leaf wetness duration for citrus orchards in Jeju, South Korea (제주 감귤 과수원에서의 이슬지속시간 예측 모델 평가)

  • Park, Jun Sang; Seo, Yun Am; Kim, Kyu Rang; Ha, Jong-Chul
    • Korean Journal of Agricultural and Forest Meteorology / v.20 no.3 / pp.262-276 / 2018
  • Models to predict leaf wetness duration (LWD) were evaluated using meteorological and dew data observed at 11 citrus orchards in Jeju, South Korea, from 2016 to 2017. The sensitivity and prediction accuracy of four models were evaluated: Number of Hours of Relative Humidity (NHRH), Classification And Regression Tree/Stepwise Linear Discriminant (CART/SLD), Penman-Monteith (PM), and Deep-learning Neural Network (DNN). Model sensitivity was evaluated with respect to rainfall and seasonal changes. When data from rainy days were excluded from the whole dataset, the LWD models had a smaller average error (root mean square error (RMSE) of about 1.5 hours). The seasonal error of the DNN model was of similar magnitude (RMSE about 3 hours) in all seasons except winter, whereas the other models had their greatest error in summer (RMSE about 9.6 hours) and their lowest in winter (RMSE about 3.3 hours). The models were also evaluated by statistical error analysis and by regression analysis of the mean squared deviation (MSD). The DNN model had the best performance by statistical error, whereas the CART/SLD model had the worst prediction accuracy. MSD is a method of analyzing the linearity of a model with three components: squared bias (SB), nonunity slope (NU), and lack of correlation (LC); better model performance is indicated by lower SB, NU, and LC. The MSD analysis indicated that the DNN model provides the best performance, followed by the PM, NHRH, and CART/SLD models in that order. These results suggest that machine learning models can be useful for improving the accuracy of agricultural information derived from meteorological data.
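The MSD decomposition can be sketched as follows, using the standard Gauch-style formulation in which SB, NU, and LC sum exactly to the mean squared deviation; the synthetic hourly LWD values stand in for model output and sensor observations.

```python
# Sketch of the MSD decomposition used to compare the LWD models: mean squared
# deviation split into squared bias (SB), nonunity slope (NU), and lack of
# correlation (LC). The data below are synthetic stand-ins.
import numpy as np

def msd_components(pred: np.ndarray, obs: np.ndarray):
    px, ox = pred - pred.mean(), obs - obs.mean()
    sxx, syy, sxy = (px**2).mean(), (ox**2).mean(), (px * ox).mean()
    slope = sxy / sxx                      # regression of obs on pred
    r2 = sxy**2 / (sxx * syy)
    sb = (pred.mean() - obs.mean()) ** 2   # squared bias
    nu = (1 - slope) ** 2 * sxx            # nonunity slope
    lc = (1 - r2) * syy                    # lack of correlation
    return sb, nu, lc

rng = np.random.default_rng(1)
obs = rng.uniform(0, 12, 200)              # observed LWD (hours)
pred = 0.8 * obs + 1.0 + rng.normal(0, 1.5, 200)
sb, nu, lc = msd_components(pred, obs)
print(f"MSD = {sb + nu + lc:.3f} (SB {sb:.3f} + NU {nu:.3f} + LC {lc:.3f})")
assert np.isclose(sb + nu + lc, np.mean((pred - obs) ** 2))
```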

Analysis of Surface Urban Heat Island and Land Surface Temperature Using Deep Learning Based Local Climate Zone Classification: A Case Study of Suwon and Daegu, Korea (딥러닝 기반 Local Climate Zone 분류체계를 이용한 지표면온도와 도시열섬 분석: 수원시와 대구광역시를 대상으로)

  • Lee, Yeonsu; Lee, Siwoo; Im, Jungho; Yoo, Cheolhee
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1447-1460 / 2021
  • Urbanization increases the amount of impervious surface and artificial heat emission, resulting in the urban heat island (UHI) effect. Local climate zones (LCZ) are a classification scheme for urban areas that considers urban land cover characteristics and the geometry and structure of buildings, and they can be used to analyze the UHI effect in detail. This study examined the UHI effect by urban structure in Suwon and Daegu using the LCZ scheme. First, LCZ maps were generated for the two cities from Landsat 8 images using convolutional neural network (CNN) deep learning. Then, the surface UHI (SUHI), the land surface temperature (LST) difference between urban and rural areas, was analyzed by LCZ class. The overall accuracies of the CNN models for LCZ classification were relatively high: 87.9% for Suwon and 81.7% for Daegu. In general, Daegu had higher LST than Suwon for all LCZ classes. For both cities, LST tended to increase with increasing building density where building height was relatively low. For both cities, SUHI intensity was very high in summer regardless of LCZ class, and relatively high, except for a few classes, in spring and fall. In winter, SUHI intensity was low, with negative values for many LCZ classes. This implies that the UHI effect is very strong in summer and that some urban areas are often colder than rural areas in winter. These findings demonstrate the applicability of LCZ data for SUHI analysis and can provide a basis for establishing timely strategies to respond to ongoing climate change in urban areas.
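The SUHI calculation itself reduces to masked averages over the LCZ map, as in the following sketch; the class codes, the choice of rural reference class, and the random rasters are illustrative assumptions.

```python
# Sketch of the SUHI calculation: for each LCZ class, mean LST minus the mean
# LST of a rural reference class. Rasters and the LCZ "D" (low plants)
# reference are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
lcz = rng.choice(["LCZ2", "LCZ5", "LCZ8", "D"], size=(100, 100))  # classified map
lst = rng.normal(30, 2, size=(100, 100))                          # LST in degC
lst[lcz != "D"] += 3.0                                            # urban classes warmer

rural_mean = lst[lcz == "D"].mean()
for cls in ["LCZ2", "LCZ5", "LCZ8"]:
    suhi = lst[lcz == cls].mean() - rural_mean
    print(f"{cls}: SUHI = {suhi:+.2f} degC")
```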

Image-Based Automatic Bridge Component Classification Using Deep Learning (딥러닝을 활용한 이미지 기반 교량 구성요소 자동분류 네트워크 개발)

  • Cho, Munwon; Lee, Jae Hyuk; Ryu, Young-Moo; Park, Jeongjun; Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.6 / pp.751-760 / 2021
  • Most bridges in Korea are over 20 years old, and many problems linked to their deterioration are being reported. Current practice for bridge inspection depends mainly on expert evaluation, which can be subjective. Recent studies have introduced data-driven methods using building information modeling, which can be more efficient and objective, but these methods require manual procedures that consume time and money. To overcome this, this study developed an image-based automatic bridge component classification network to reduce the time and cost of converting the visual information of bridges into a digital model. The proposed method comprises two convolutional neural networks: the first estimates the type of bridge based on the superstructure, and the second classifies the bridge components. In a validation test, the proposed system automatically classified the components of 461 bridge images with 96.6% accuracy. The proposed approach is expected to contribute to current bridge maintenance practice.
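The two-stage structure can be sketched as follows; the tiny CNNs and both label sets are generic stand-ins for the networks trained in the paper.

```python
# Structural sketch of the two-stage design: one CNN predicts the bridge type
# from the superstructure, a second classifies components. Networks and labels
# are illustrative placeholders.
import torch
import torch.nn as nn

BRIDGE_TYPES = ["girder", "arch", "cable_stayed"]          # hypothetical
COMPONENTS = ["deck", "pier", "abutment", "railing"]       # hypothetical

def tiny_cnn(n_out):
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        nn.Flatten(), nn.Linear(16 * 4 * 4, n_out),
    )

type_net = tiny_cnn(len(BRIDGE_TYPES))
component_net = tiny_cnn(len(COMPONENTS))

def classify(image: torch.Tensor):
    """Stage 1 fixes the bridge type; stage 2 labels the component in view."""
    with torch.no_grad():
        b_type = BRIDGE_TYPES[int(type_net(image).argmax())]
        part = COMPONENTS[int(component_net(image).argmax())]
    return b_type, part

print(classify(torch.randn(1, 3, 224, 224)))
```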

Web Attack Classification Model Based on Payload Embedding Pre-Training (페이로드 임베딩 사전학습 기반의 웹 공격 분류 모델)

  • Kim, Yeonsu; Ko, Younghun; Euom, Ieckchae; Kim, Kyungbaek
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.4 / pp.669-677 / 2020
  • As the number of Internet users has exploded, attacks on the web have increased, and attack patterns have diversified to bypass existing defense techniques. Traditional web firewalls have difficulty detecting attacks with unknown patterns, so detecting abnormal behavior with artificial intelligence has been studied as an alternative. Specifically, attempts have been made to apply natural language processing techniques, because the exploited scripts and queries consist of text. However, because scripts and queries contain many unknown words, natural language processing requires a different approach. In this paper, we propose a new classification model that uses byte pair encoding (BPE) to learn embedding vectors for the tokens that often appear in web attack payloads, and an attention-based Bi-GRU neural network to extract a set of tokens while learning their order and importance. For major web attacks such as SQL injection, cross-site scripting, and command injection, the accuracy of the proposed classification method is about 0.9990, outperforming the model suggested in a previous study.
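A minimal sketch of the classifier is given below, assuming BPE tokenization happens upstream; the vocabulary size, hidden width, and attack classes are placeholders.

```python
# Sketch of the classifier head: BPE token IDs -> embedding -> Bi-GRU ->
# additive attention over time steps -> attack-class logits. BPE tokenization
# itself (a learned merge table over payload bytes) is assumed upstream.
import torch
import torch.nn as nn

class AttnBiGRU(nn.Module):
    def __init__(self, vocab=8000, emb=64, hidden=64,
                 classes=("normal", "sqli", "xss", "cmdi")):
        super().__init__()
        self.classes = classes
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.gru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, len(classes))

    def forward(self, ids):                         # ids: (batch, seq_len)
        h, _ = self.gru(self.emb(ids))              # (batch, seq, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # token importance weights
        context = (w * h).sum(dim=1)                # attention-weighted summary
        return self.fc(context)

payload_ids = torch.randint(1, 8000, (2, 50))       # two tokenized payloads
print(AttnBiGRU()(payload_ids).softmax(dim=1))
```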

Performance Evaluation on the Learning Algorithm for Automatic Classification of Q&A Documents (고객 질의 문서 자동 분류를 위한 학습 알고리즘 성능 평가)

  • Choi, Jung-Min; Lee, Byoung-Soo
    • The KIPS Transactions: Part D / v.13D no.1 s.104 / pp.133-138 / 2006
  • Electronic commerce, surpassing traditional commerce, has appeared before the public and is currently leading change in the management of enterprises. To establish and maintain good relations with customers, electronic commerce provides various channels for understanding what customers want and responding to it. Among them, the bulletin board and e-mail are inbound channels through which enterprises can listen directly to customers' opinions, and they differ in character from the other channels. Enterprises can manage the bulletin board and e-mail effectively by understanding as many customers' ideas as possible and providing them with optimal answers. This is one of the important factors in improving the reliability of the bulletin board and e-mail, as well as of electronic commerce as a whole. Therefore, this paper investigates methods for automatically classifying various kinds of documents in electronic commerce; such methods can solve the existing problems of the bulletin board and e-mail and support effective operation and systematic management. Moreover, it investigates which algorithm is most suitable for the automatic classification of Q&A documents by experimentally comparing the classification performance of Naive Bayesian, TFIDF, Neural Network, and k-NN classifiers.
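The comparison can be sketched with scikit-learn as follows; NearestCentroid over TF-IDF features stands in for the Rocchio-style TFIDF classifier, and the toy Q&A corpus is a placeholder.

```python
# Sketch of the comparison the paper runs: the same Q&A texts classified by
# Naive Bayes, a TFIDF/centroid classifier, a neural network, and k-NN.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = ["my order never arrived", "refund my payment please",
        "how do I reset my password", "login page shows an error"] * 10
labels = ["delivery", "delivery", "account", "account"] * 10

learners = {
    "NaiveBayes": MultinomialNB(),
    "TFIDF(centroid)": NearestCentroid(),
    "NeuralNet": MLPClassifier(max_iter=500, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=3),
}
for name, clf in learners.items():
    pipe = make_pipeline(TfidfVectorizer(), clf).fit(docs, labels)
    print(name, pipe.predict(["password reset email not working"]))
```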