• Title/Summary/Keyword: neural network learning


The Implementable Functions of the CoreNet of a Multi-Valued Single Neuron Network (단층 코어넷 다단입력 인공신경망회로의 함수에 관한 구현가능 연구)

  • Park, Jong Joon
    • Journal of IKEEE, v.18 no.4, pp.593-602, 2014
  • One of the purposes of an artificial neural network (ANNet) is to implement as many functions as possible with the smallest number of nodes and layers. This paper presents a CoreNet, which has a multi-leveled input value and a multi-leveled output value in a 2-layered ANNet, the basic structure of an ANNet. I suggest an equation for the capacity of a CoreNet with a p-leveled input and a q-leveled output: $a_{p,q}={\frac{1}{2}}p(p-1)q^2-{\frac{1}{2}}(p-2)(3p-1)q+(p-1)(p-2)$. I applied this CoreNet to the simulation model 1(5)-1(6), which has 5 input levels and 6 output levels with no hidden layers. The simulation of this model gives a maximum of 219 convergences for the number of implementable functions using the cot(${\sqrt{x}}$) input leveling method. I also show that the 27 functions which diverged in the simulation are implementable by calculating the weight values (w, ${\theta}$) with multi-threshold lines in the weight space. Therefore, 246 functions are implementable in the 1(5)-1(6) model, which coincides with the value from the above equation, $a_{5,6}=246$. I also present the implementable-function numbering method in the weight space.
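
For readers who want to check the capacity formula numerically, a minimal Python sketch (not from the paper) evaluates $a_{p,q}$ and reproduces the 1(5)-1(6) count:

```python
def corenet_capacity(p: int, q: int) -> int:
    """Number of implementable functions a_{p,q} for a CoreNet with a
    p-leveled input and a q-leveled output, per the paper's formula."""
    return (p * (p - 1) * q**2 // 2
            - (p - 2) * (3 * p - 1) * q // 2
            + (p - 1) * (p - 2))

# The 1(5)-1(6) model: 5 input levels, 6 output levels.
print(corenet_capacity(5, 6))  # 246 = 219 (converged) + 27 (multi-threshold)
```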

A Study on Classification of CNN-based Linux Malware using Image Processing Techniques (영상처리기법을 이용한 CNN 기반 리눅스 악성코드 분류 연구)

  • Kim, Se-Jin;Kim, Do-Yeon;Lee, Hoo-Ki;Lee, Tae-Jin
    • Journal of the Korea Academia-Industrial cooperation Society, v.21 no.9, pp.634-642, 2020
  • With the proliferation of Internet of Things (IoT) devices, use of the Linux operating system across various architectures has increased. Security threats against Linux-based IoT devices are also increasing, and variants of existing malware are constantly appearing. In this paper, we propose a system in which the binary data of a visualized Executable and Linkable Format (ELF) file is processed with the Local Binary Pattern (LBP) technique and a median filter, and the resulting images are classified by a Convolutional Neural Network (CNN). The original image showed the highest accuracy and F1-score at 98.77%, and the highest recall at 98.55%. The median filter gave the highest precision at 99.19% and the lowest false positive rate at 0.008%. The LBP technique produced overall lower results than applying the median filter to the original ELF file. When the results of the image processing techniques applied to the original file were combined by majority vote, the accuracy, precision, F1-score, and false positive rate were all better than with the median filter alone. In future work, the proposed system will be used to classify malware families, or other image processing techniques will be added to improve the accuracy of the majority-vote classification.
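
As a rough illustration of the visualization pipeline, the following Python sketch reshapes ELF binary data into a grayscale image and applies the two preprocessing variants; the image width, LBP parameters, and stand-in byte data are placeholder assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import local_binary_pattern

def bytes_to_image(raw: bytes, width: int = 256) -> np.ndarray:
    """Reshape raw ELF bytes into a 2-D grayscale image (trailing bytes dropped)."""
    data = np.frombuffer(raw, dtype=np.uint8)
    height = len(data) // width
    return data[: height * width].reshape(height, width)

# Stand-in for open("sample.elf", "rb").read() on a real ELF file.
img = bytes_to_image(np.random.bytes(65536))
lbp = local_binary_pattern(img, P=8, R=1)   # LBP-transformed variant
med = median_filter(img, size=3)            # median-filtered variant
# Each variant (original / LBP / median) is then fed to the CNN classifier.
```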

A Research about Time Domain Estimation Method for Greenhouse Environmental Factors based on Artificial Intelligence (인공지능 기반 온실 환경인자의 시간영역 추정)

  • Lee, JungKyu;Oh, JongWoo;Cho, YongJin;Lee, Donghoon
    • Journal of Bio-Environment Control, v.29 no.3, pp.277-284, 2020
  • To increase the utilization of intelligent methodologies in smart farm management, estimation modeling techniques are required to assess crop and environmental changes in real time. For an essential environmental factor such as CO2, it is challenging to establish a reliable time-domain estimation model in indoor agricultural facilities, where many correlated variables are tightly coupled. This study therefore developed an artificial neural network that reduces time complexity by using environmental information from temporally adjacent periods as input, with CO2 as the output variable. The environmental factors in the smart farm were measured continuously with sensor-integrated measuring devices. To predict CO2, Model 1 was trained on the mean data of the experimental period and Model 2 was trained on day-by-day data. Model 2, trained on the previous day's data, performed better than Model 1, trained on the 60-day average. Up to 30 days, most cases showed a coefficient of determination between 0.70 and 0.88, with Model 2 about 0.05 higher. After 30 days, however, both models showed low coefficients of determination below 0.50. Comparing the coefficients of determination across modeling approaches showed that data from adjacent time periods gave relatively high performance at the points requiring prediction, rather than a fixed neural network model.
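
A minimal sketch of the lagged-input idea, assuming a generic scikit-learn MLP and stand-in sensor arrays; the paper's actual architecture, variables, and window length are not specified here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(env: np.ndarray, co2: np.ndarray, lag: int = 6):
    """Build inputs from adjacent time steps; CO2 is the prediction target."""
    X = np.stack([env[t - lag:t].ravel() for t in range(lag, len(env))])
    y = co2[lag:]
    return X, y

# env: (T, n_sensors) readings (temperature, humidity, ...); co2: (T,)
env, co2 = np.random.rand(500, 4), np.random.rand(500)   # stand-in data
X, y = make_lagged(env, co2)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
```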

Empirical Research on Search model of Web Service Repository (웹서비스 저장소의 검색기법에 관한 실증적 연구)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.173-193, 2010
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web services repositories not only be well-structured but also provide efficient tools for an environment supporting reusable software components for both service providers and consumers. As the potential of Web services for service-oriented computing is becoming widely recognized, the demand for an integrated framework that facilitates service discovery and publishing is concomitantly growing. In our research, we propose a framework that facilitates Web service discovery and publishing by combining clustering techniques and leveraging the semantics of the XML-based service specification in WSDL files. We believe that this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the Web service domain. We have developed a Web service discovery tool based on the proposed approach using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web services repositories. We believe that both service providers and consumers in a service-oriented computing environment can benefit from our Web service discovery approach.
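
The abstract does not name the specific unsupervised network; a self-organizing map is one common choice for this kind of clustering. A minimal sketch under that assumption, using the third-party minisom package on TF-IDF vectors of WSDL text (all data and parameters below are placeholders):

```python
from minisom import MiniSom                      # third-party SOM package
from sklearn.feature_extraction.text import TfidfVectorizer

wsdl_texts = ["... operation getQuote stock price ...",   # stand-in WSDL
              "... operation sendMail email message ..."]  # contents
X = TfidfVectorizer(max_features=200).fit_transform(wsdl_texts).toarray()

som = MiniSom(4, 4, input_len=X.shape[1], sigma=1.0, learning_rate=0.5)
som.train_random(X, num_iteration=1000)
clusters = [som.winner(x) for x in X]   # SOM grid cell = service cluster
```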

CNN-Based Hand Gesture Recognition for Wearable Applications (웨어러블 응용을 위한 CNN 기반 손 제스처 인식)

  • Moon, Hyeon-Chul;Yang, Anna;Kim, Jae-Gon
    • Journal of Broadcast Engineering, v.23 no.2, pp.246-252, 2018
  • Hand gestures are attracting attention as a Natural User Interface (NUI) for wearable devices such as smart glasses. Recently, to support efficient media consumption in IoT (Internet of Things) and wearable environments, the standardization of IoMT (Internet of Media Things) has been in progress in MPEG. IoMT assumes that hand gesture detection and recognition are performed on separate devices, and thus provides an interoperable interface between these modules. Meanwhile, deep learning based hand gesture recognition has been actively studied to improve recognition performance. In this paper, we propose a CNN (Convolutional Neural Network) based hand gesture recognition method for applications such as media consumption on wearable devices, one of the use cases of IoMT. The proposed method detects the hand contour from stereo images acquired by smart glasses using depth and color information, constructs datasets to train the CNN, and then recognizes gestures from input hand-contour images. Experimental results show that the proposed method achieves an average hand gesture recognition rate of 95%.
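
A minimal Keras sketch of a small CNN classifier for hand-contour images; the input size, depth, and class count are illustrative assumptions, not the paper's architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical sizes: 64x64 single-channel contour images, 10 gesture classes.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),   # one output per gesture class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```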

Convergence of Artificial Intelligence Techniques and Domain Specific Knowledge for Generating Super-Resolution Meteorological Data (기상 자료 초해상화를 위한 인공지능 기술과 기상 전문 지식의 융합)

  • Ha, Ji-Hun;Park, Kun-Woo;Im, Hyo-Hyuk;Cho, Dong-Hee;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society, v.12 no.10, pp.63-70, 2021
  • Generating super-resolution meteorological data with a deep neural network can support precise research and useful real-life services. We propose a new technique for generating improved training data for super-resolution deep neural networks. To generate high-resolution meteorological data with domain-specific knowledge, Lambert conformal conic projection and objective analysis were applied to observation data and ERA5 reanalysis field data from specialized institutions. As a result, temperature and humidity analysis data based on domain-specific knowledge showed RMSE improvements of up to 42% and 46%, respectively. Next, a super-resolution generative adversarial network (SRGAN), one of the artificial intelligence techniques, was used to automate the manual data generation procedure described above. Experiments were conducted to generate 1 km resolution data from global model data with 10 km resolution. The results generated with SRGAN have a higher resolution than the global model input and show an analysis pattern similar to the manually generated high-resolution analysis data, though with smoother boundaries.
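
A rough sketch of an SRGAN-style generator with residual blocks and sub-pixel upsampling; it uses two ×2 upsampling stages (×4 overall) for illustration, whereas the paper's 10 km to 1 km task implies a ×10 factor and a different configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x):
    """Conv-Conv residual block, as in SRResNet/SRGAN generators."""
    y = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(64, 3, padding="same")(y)
    return layers.Add()([x, y])

def sr_generator(channels: int = 1, n_res: int = 4) -> tf.keras.Model:
    inp = layers.Input(shape=(None, None, channels))   # low-res field
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
    for _ in range(n_res):
        x = residual_block(x)
    for _ in range(2):                                 # two x2 upsamplings
        x = layers.Conv2D(256, 3, padding="same")(x)
        x = layers.Lambda(lambda t: tf.nn.depth_to_space(t, 2))(x)
        x = layers.Activation("relu")(x)
    out = layers.Conv2D(channels, 3, padding="same")(x)
    return tf.keras.Model(inp, out)

generator = sr_generator()   # trained adversarially against a discriminator
```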

A Development for Sea Surface Salinity Algorithm Using GOCI in the East China Sea (GOCI를 이용한 동중국해 표층 염분 산출 알고리즘 개발)

  • Kim, Dae-Won;Kim, So-Hyun;Jo, Young-Heon
    • Korean Journal of Remote Sensing, v.37 no.5_2, pp.1307-1315, 2021
  • The Changjiang Diluted Water (CDW) spreads over the East China Sea every summer and significantly affects sea surface salinity changes in the seas around Jeju Island and the southern coast of the Korean Peninsula. Its effect sometimes extends to the eastern coast of the Korean Peninsula through the Korea Strait. The CDW has a significant impact on marine physics and ecology and causes damage to fisheries and aquaculture. However, due to limited field surveys, continuous in-situ observation of the CDW in the East China Sea is practically difficult, so many studies have used satellite measurements to monitor the CDW distribution in near-real time. In this study, an algorithm for estimating Sea Surface Salinity (SSS) in the East China Sea was developed using the Geostationary Ocean Color Imager (GOCI). A Multilayer Perceptron Neural Network (MPNN) was employed, with Soil Moisture Active Passive (SMAP) SSS data as the output. Whereas a previous study trained a GOCI-based SSS algorithm on 2016 observation data only, here the training period was extended from 2015 to 2020 to improve performance. Validation against the National Institute of Fisheries Science (NIFS) serial oceanographic observation data from 2011 to 2019 shows a coefficient of determination (R²) of 0.61 and a root mean square error (RMSE) of 1.08 psu. This study developed an algorithm for monitoring the surface salinity of the East China Sea using GOCI and is expected to contribute to the development of SSS estimation algorithms for GOCI-II.
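
A minimal sketch of the regression setup, assuming GOCI band reflectances as input and matched SMAP SSS as the training target; the arrays, band count, and network size below are stand-ins, not the paper's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Stand-in data: per-pixel GOCI reflectances (8 bands) and matched SMAP SSS.
X = np.random.rand(1000, 8)          # GOCI band reflectances
y = 28 + 6 * np.random.rand(1000)    # SMAP SSS labels (psu)

X_std = StandardScaler().fit_transform(X)
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X_std, y)
sss_estimate = mlp.predict(X_std)    # estimated surface salinity (psu)
```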

Dimensionality Reduction of Feature Set for API Call based Android Malware Classification

  • Hwang, Hee-Jin;Lee, Soojin
    • Journal of the Korea Society of Computer and Information, v.26 no.11, pp.41-49, 2021
  • All application programs, including malware, call the Application Programming Interface (API) upon execution. Using this characteristic, attempts to detect and classify malware based on API call information have recently been actively studied. However, datasets containing API call information incur substantial computational cost and processing time, and information that does not significantly affect malware classification may degrade the learning model's accuracy. Therefore, in this paper we propose a method of extracting an essential feature set after reducing the dimensionality of API call information with various feature selection methods. We used CICAndMal2020, a recently released Android malware dataset, for the experiments. After extracting the essential feature set with the various feature selection methods, Android malware classification was conducted using a CNN (Convolutional Neural Network) and the results were analyzed. The selected feature set and the weight priorities varied with the feature selection method. In binary classification, malware was classified with 97% accuracy even when the feature set was reduced to 15% of its original size; in multiclass classification, an average accuracy of 83% was achieved with the feature set reduced to 8% of its original size.
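
A minimal sketch of the dimensionality reduction step with one generic feature selection method (mutual information via scikit-learn); the feature counts and data are stand-ins, and the paper evaluates several such methods:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Stand-in API-call frequency matrix: 1000 samples x 400 API features.
X = np.random.randint(0, 50, size=(1000, 400))
y = np.random.randint(0, 2, size=1000)          # 0 = benign, 1 = malware

# Keep roughly 15% of the features, matching the paper's binary setting.
k = int(0.15 * X.shape[1])
X_reduced = SelectKBest(mutual_info_classif, k=k).fit_transform(X, y)
print(X_reduced.shape)                          # (1000, 60) -> CNN input
```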

Seq2Seq model-based Prognostics and Health Management of Robot Arm (Seq2Seq 모델 기반의 로봇팔 고장예지 기술)

  • Lee, Yeong-Hyeon;Kim, Kyung-Jun;Lee, Seung-Ik;Kim, Dong-Ju
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.12 no.3, pp.242-250, 2019
  • In this paper, we propose a method to predict failures of an industrial robot using a Seq2Seq (Sequence to Sequence) model, an artificial neural network model that transforms one time series into another. The proposed method uses the joint current and angle values that the robot itself can measure, requiring no additional sensor for fault diagnosis. After preprocessing the measured data, the Seq2Seq model was trained to convert current to angle. The degree of abnormality used for fault diagnosis is the RMSE (Root Mean Squared Error) between the predicted and actual angles over a unit time window. The proposed method was evaluated on test data measured under both normal and faulty robot conditions. When the degree of abnormality exceeded a threshold, the case was classified as a fault; the fault diagnosis accuracy in the experiment was 96.67%. The proposed method has the merit of performing fault prediction without additional sensors, and the experiments confirmed that high diagnostic performance and efficiency are attainable without deep expert knowledge of the robot.
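
A minimal sketch of the abnormality score: windowed RMSE between the Seq2Seq-predicted and measured angles, with a placeholder threshold (the paper's window length and threshold value are not given here):

```python
import numpy as np

def abnormal_degree(pred_angle, true_angle, window: int = 100):
    """Windowed RMSE between predicted and measured joint angles."""
    err = (np.asarray(pred_angle) - np.asarray(true_angle)) ** 2
    n = len(err) // window
    return np.sqrt(err[: n * window].reshape(n, window).mean(axis=1))

# Stand-in sequences; in practice pred_angle comes from the Seq2Seq model.
scores = abnormal_degree(np.random.rand(1000), np.random.rand(1000))
faults = scores > 0.5    # threshold is a placeholder, tuned on normal data
```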

Line-Segment Feature Analysis Algorithm for Handwritten-Digits Data Reduction (필기체 숫자 데이터 차원 감소를 위한 선분 특징 분석 알고리즘)

  • Kim, Chang-Min;Lee, Woo-Beom
    • KIPS Transactions on Software and Data Engineering, v.10 no.4, pp.125-132, 2021
  • As artificial neural networks grow deeper and the dimensionality of their input data increases, learning and recognition in a neural network (NN) require a rapidly growing number of high-speed arithmetic operations. This study therefore proposes a method for reducing the dimensionality of the NN's input data. The proposed Line-segment Feature Analysis (LFA) algorithm applies a gradient-based edge detection algorithm with median filtering to analyze the line-segment features of the objects in an image. From the extracted edge image, response values corresponding to eight kinds of line segments are calculated using 3×3 or 5×5 detection filters whose coefficients are drawn from [0, 1, 2, 4, 8, 16, 32, 64, 128]. Two one-dimensional 256-element vectors are produced by accumulating identical response values computed with each detection filter, and the two vectors are summed to give LFA256 data; two LFA256 vectors are then merged to produce 512-element LFA512 data. To evaluate the proposed LFA algorithm for data dimensionality reduction in handwritten digit recognition, a comparative experiment using the PCA technique and the AlexNet model showed recognition rates of 98.7% for LFA256 and 99% for LFA512.
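
The exact filter construction is not fully specified in the abstract; the powers-of-two coefficients suggest encoding each edge pixel's 3×3 neighborhood as an 8-bit code and accumulating a 256-bin histogram. A minimal sketch under that assumption (the weight layout and edge map below are illustrative guesses):

```python
import numpy as np

WEIGHTS = np.array([[1,   2,  4],
                    [128, 0,  8],
                    [64, 32, 16]])   # powers of two, 0 at the center

def lfa256(edge: np.ndarray) -> np.ndarray:
    """Accumulate 8-bit neighborhood codes of edge pixels into 256 bins."""
    h, w = edge.shape
    hist = np.zeros(256, dtype=np.int64)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if edge[i, j]:
                code = int((edge[i-1:i+2, j-1:j+2] * WEIGHTS).sum())
                hist[code] += 1
    return hist

edge = (np.random.rand(28, 28) > 0.8).astype(np.uint8)  # stand-in edge map
print(lfa256(edge).shape)   # (256,) -> one LFA256 component
```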