• Title/Summary/Keyword: Neural Tensor Network


Discernment of Android User Interaction Data Distribution Using Deep Learning

  • Ho, Jun-Won
    • International Journal of Internet, Broadcasting and Communication / v.14 no.3 / pp.143-148 / 2022
  • In this paper, we employ a deep neural network (DNN) to discern the Android user interaction data distribution from artificial data distributions. We evaluate our DNN design on a real Android user interaction trace dataset collected from [1]. In particular, we use a sequential model with 4 dense hidden layers and 1 dense output layer in TensorFlow and Keras, deploying a sigmoid activation function for the 1-neuron dense output layer and a ReLU activation function for each 32-neuron dense hidden layer. Our evaluation shows that our DNN design achieves a high test accuracy of at least 0.9955 and a low test loss of at most 0.0116 in all cases of artificial data distributions.
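
A minimal Keras sketch of the architecture this abstract describes, assuming a binary label (real vs. artificial interaction data); the layer widths and activations follow the abstract, while the input dimension, optimizer, and loss are illustrative assumptions rather than details from the paper.

```python
# Hypothetical sketch of the DNN described above: 4 dense hidden layers of 32 ReLU
# neurons and a 1-neuron sigmoid output layer. Input dimension, optimizer, and loss
# are assumptions, not taken from the paper.
import tensorflow as tf

def build_interaction_classifier(input_dim: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # real vs. artificial
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```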

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • As content continues to proliferate, selecting high-quality information that matches users' interests and needs is becoming increasingly important. Rather than treating an information request as a simple string, efforts are being made to better reflect user intent in search results, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora for different fields with the same algorithm and to extract high-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the unsupervised nature of the learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network (NTN) and evaluates the results. Unlike prior work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to address the problems of previous research and to enhance its effectiveness. The study has three significances: first, a practical and simple automatic knowledge extraction method that can be readily applied; second, a simple problem definition that makes performance evaluation possible; and third, increased expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. An empirical analysis and an objective performance evaluation method are also presented. For the empirical study, analysts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. When a new entity from the testing set appears, its score is computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, prediction power and the quality of the score functions are assessed by computing the hit ratio over all reports in the testing set. The model achieves a 69.3% hit ratio on a testing set of 2,526 reports, which is meaningfully high given the constraints of the study. Looking at per-stock performance, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show markedly lower performance than average, possibly due to interference from similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to retrieve information relevant to a user's investment intent. Graph data are generated using only a named entity recognition tool and fed to the neural tensor network without a field-specific corpus or word vectors. The empirical test confirms the effectiveness of the presented model, although some limits remain; in particular, the notably poor performance on a few stocks indicates the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to semantically match new text information with related stocks.
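
The abstract does not spell out the exact form of the per-stock score functions, so the following is a hedged sketch in the spirit of the standard neural tensor network formulation: each stock gets its own parameter set, a candidate entity vector is scored by every stock's function, and the stock with the highest score is predicted. The dimensions, initialization, and the use of a single entity vector per score are assumptions, not the authors' code.

```python
# Illustrative NTN-style score function; one scorer per stock, highest score wins.
import numpy as np

class NTNScorer:
    def __init__(self, dim: int, k: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # W: bilinear tensor slices, V: linear weights, b: bias, u: output weights
        self.W = rng.normal(scale=0.1, size=(k, dim, dim))
        self.V = rng.normal(scale=0.1, size=(k, dim))
        self.b = np.zeros(k)
        self.u = rng.normal(scale=0.1, size=k)

    def score(self, e: np.ndarray) -> float:
        bilinear = np.einsum("i,kij,j->k", e, self.W, e)  # e^T W[k] e for each slice
        hidden = np.tanh(bilinear + self.V @ e + self.b)
        return float(self.u @ hidden)

def predict_stock(entity_vec: np.ndarray, scorers: dict) -> str:
    # scorers maps stock name -> trained NTNScorer; pick the stock with the highest score
    return max(scorers, key=lambda stock: scorers[stock].score(entity_vec))
```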

AB9: A neural processor for inference acceleration

  • Cho, Yong Cheol Peter;Chung, Jaehoon;Yang, Jeongmin;Lyuh, Chun-Gi;Kim, HyunMi;Kim, Chan;Ham, Je-seok;Choi, Minseok;Shin, Kyoungseon;Han, Jinho;Kwon, Youngsu
    • ETRI Journal / v.42 no.4 / pp.491-504 / 2020
  • We present AB9, a neural processor for inference acceleration. AB9 consists of a systolic tensor core (STC) neural network accelerator designed to accelerate artificial intelligence applications by exploiting the data reuse and parallelism inherent in neural networks while providing fast access to large on-chip memory. Complementing the hardware is an intuitive and user-friendly development environment that includes a simulator and an implementation flow providing a high degree of programmability with a short development time. Along with a 40-TFLOP STC that includes 32k arithmetic units and over 36 MB of on-chip SRAM, our baseline implementation of AB9 consists of a 1-GHz quad-core setup with various other industry-standard peripheral intellectual properties. The acceleration performance and power efficiency were evaluated using YOLOv2, and the results show that AB9 achieves superior performance and power efficiency compared to a general-purpose graphics processing unit implementation. AB9 has been taped out in the TSMC 28-nm process with a chip size of 17 × 23 mm². Delivery is expected later this year.

A Method of License Plate Location and Character Recognition based on CNN

  • Fang, Wei;Yi, Weinan;Pang, Lin;Hou, Shuonan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3488-3500 / 2020
  • As the economy continues to flourish, private cars have become the transport of choice for many people, and license plate recognition has therefore become an indispensable part of intelligent transportation, with both research and application value. In recent years, convolutional neural networks for image classification have been a prominent application of deep learning to image processing. This paper proposes a strategy to improve the YOLO model by studying deep learning convolutional neural networks (CNNs) and related target detection methods, and combines the OpenCV and TensorFlow frameworks to achieve efficient recognition of license plate characters. The experimental results show that the YOLO-based target detection method shortens the training process while achieving a good level of accuracy.
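
A hedged sketch of the plate-localization step using OpenCV's dnn module with Darknet-format YOLO weights; the file paths, input size, and thresholds are assumptions, and the character-recognition CNN from the paper is not reproduced here.

```python
# Hypothetical YOLO-based plate localization with OpenCV; crops would then be passed
# to a separate character-recognition network. Paths and thresholds are assumptions.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolo-plate.cfg", "yolo-plate.weights")  # assumed files
layer_names = net.getUnconnectedOutLayersNames()

def detect_plates(image, conf_threshold=0.5):
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(layer_names):
        for det in output:
            scores = det[5:]
            if scores.max() > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes  # crop these regions and feed them to the character-recognition CNN
```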

Optimizing 2-stage Tiling-based Matrix Multiplication in FPGA-based Neural Network Accelerator (FPGA기반 뉴럴네트워크 가속기에서 2차 타일링 기반 행렬 곱셈 최적화)

  • Jinse, Kwon;Jemin, Lee;Yongin, Kwon;Jeman, Park;Misun, Yu;Taeho, Kim;Hyungshin, Kim
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.6 / pp.367-374 / 2022
  • Neural network acceleration has become an important topic in computer vision, and an accelerator is essential even for lightweight models. Most accelerator-supported operators focus on direct convolution; if the accelerator does not provide a GEMM operation, it is usually replaced by a CPU operation. In this paper, we propose an optimization technique for 2-stage tiling-based GEMM routines on VTA. We improve the performance of the matrix multiplication routine by maximizing the reusability of the input matrix and optimizing operation pipelining. In addition, we apply the proposed technique to the DarkNet framework to verify the improvement of the matrix multiplication routine. The proposed GEMM method shows a performance improvement of more than 2.4 times over the non-optimized GEMM method, and the inference performance of our DarkNet framework also improves by at least 2.3 times.
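
For illustration, the following is a plain NumPy sketch of two-level (2-stage) tiling for GEMM: outer tiles model what would fit in an accelerator's on-chip buffer, inner tiles model the compute-core granularity. The tile sizes are assumptions and this is not the authors' VTA implementation.

```python
# Two-level tiled GEMM sketch; outer tiles approximate buffer-level reuse,
# inner tiles approximate the compute-core blocking. Tile sizes are illustrative.
import numpy as np

def gemm_two_stage_tiled(A, B, outer=64, inner=16):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i0 in range(0, M, outer):                      # stage 1: outer tiles
        for j0 in range(0, N, outer):
            for k0 in range(0, K, outer):
                for i1 in range(i0, min(i0 + outer, M), inner):   # stage 2: inner tiles
                    for j1 in range(j0, min(j0 + outer, N), inner):
                        for k1 in range(k0, min(k0 + outer, K), inner):
                            ie = min(i1 + inner, i0 + outer, M)
                            je = min(j1 + inner, j0 + outer, N)
                            ke = min(k1 + inner, k0 + outer, K)
                            C[i1:ie, j1:je] += A[i1:ie, k1:ke] @ B[k1:ke, j1:je]
    return C
```

For random matrices, the result matches `A @ B` up to floating-point rounding.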

Muscular Adaptations and Novel Magnetic Resonance Characterizations of Spinal Cord Injury

  • Lim, Woo-Taek
    • Physical Therapy Korea / v.22 no.2 / pp.70-80 / 2015
  • The spinal cord is highly complex, consisting of a specialized neural network comprising both neuronal and non-neuronal cells. Any injury or insult to the spinal cord triggers a series of damaging events that result in motor and/or sensory deficits below the level of injury. As a result, muscle paralysis (or paresis) leading to muscle atrophy, along with changes in muscle fiber type and contractile properties, has been observed. Traditionally, histology has been the gold standard for characterizing spinal cord injury (SCI)-induced adaptations in the spinal cord and skeletal muscle. However, histological measurement is invasive and cannot be used for longitudinal analysis. Conventional magnetic resonance imaging (MRI) is therefore promoted as a non-invasive alternative that allows repeated measurements over time and avoids radiation exposure by using radiofrequency pulses. Currently, many of the pathological changes and adaptations occurring after SCI can be measured by MRI, specifically 3-dimensional MRI with the advanced diffusion tensor imaging technique. Both techniques have proven sensitive in measuring morphological and structural changes in skeletal muscle and the spinal cord.

Time Series Classification of Cryptocurrency Price Trend Based on a Recurrent LSTM Neural Network

  • Kwon, Do-Hyung;Kim, Ju-Bong;Heo, Ju-Sung;Kim, Chan-Myung;Han, Youn-Hee
    • Journal of Information Processing Systems / v.15 no.3 / pp.694-706 / 2019
  • In this study, we applied the long short-term memory (LSTM) model to classify cryptocurrency price time series. We collected historical cryptocurrency price time series data and preprocessed them to prepare clean training and target data. After preprocessing, the price time series data were systematically encoded into a three-dimensional price tensor representing the past price changes of cryptocurrencies. We also presented our LSTM model structure and how to use such a price tensor as input to the LSTM model. In particular, a grid search-based k-fold cross-validation technique was applied to find the most suitable LSTM model parameters. Lastly, through a comparison of f1-score values, our study showed that the LSTM model outperforms the gradient boosting model, a general machine learning model known to have relatively good prediction performance, for time series classification of the cryptocurrency price trend. With the LSTM model, we obtained a performance improvement of about 7% compared to the GB model.
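
An illustrative Keras sketch of an LSTM classifier that consumes a 3-D price tensor of shape (samples, timesteps, features), as described in the abstract; the layer size, optimizer, and number of trend classes are assumptions, and the paper's grid-searched hyperparameters are not reproduced.

```python
# Hypothetical LSTM trend classifier over a (samples, timesteps, features) price tensor.
# Hidden size, optimizer, and class count are assumptions for illustration.
import tensorflow as tf

def build_trend_classifier(timesteps: int, features: int, n_classes: int = 2) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, features)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # price-trend classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```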

Design of detection method for malicious URL based on Deep Neural Network (뉴럴네트워크 기반에 악성 URL 탐지방법 설계)

  • Kwon, Hyun;Park, Sangjun;Kim, Yongchul
    • Journal of Convergence for Information Technology / v.11 no.5 / pp.30-37 / 2021
  • As a wide variety of devices connect to the Internet, Internet-based attacks keep occurring. Among them are attacks that use malicious URLs to lure users to phishing sites or to distribute malware, so detecting malicious URL attacks is an important security issue. Among recent deep learning technologies, neural networks show good performance in image recognition, speech recognition, and pattern recognition, and they can be applied to analyzing and detecting patterns in malicious URL characteristics. In this paper, we analyze the performance of malicious URL detection with neural networks under various parameters, changing the activation function, learning rate, and network structure. The experimental data were built by crawling the Alexa top 1 million list and Whois, and TensorFlow was used as the machine learning library. The experiments show that with 4 layers, a learning rate of 0.005, and 100 nodes per layer, the model achieves an accuracy of 97.8% and an F1 score of 92.94%.
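
A sketch of the best-performing configuration reported above (4 layers of 100 nodes, learning rate 0.005), read here as 4 hidden layers; the URL featurization, the ReLU activation, and the loss are assumptions made for illustration.

```python
# Hypothetical malicious-URL classifier with the reported hyperparameters.
# Input features (e.g., a numeric encoding of the URL) and activation are assumptions.
import tensorflow as tf

def build_url_classifier(n_features: int) -> tf.keras.Model:
    layers = [tf.keras.Input(shape=(n_features,))]
    layers += [tf.keras.layers.Dense(100, activation="relu") for _ in range(4)]
    layers += [tf.keras.layers.Dense(1, activation="sigmoid")]  # malicious vs. benign
    model = tf.keras.Sequential(layers)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```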

Extracting Neural Networks via Meltdown (멜트다운 취약점을 이용한 인공신경망 추출공격)

  • Jeong, Hoyong;Ryu, Dohyun;Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1031-1041 / 2020
  • Cloud computing technology plays an important role in the deep learning industry, as deep learning services are frequently deployed on top of cloud infrastructure. In such cloud environments, virtualization technology provides logically independent and isolated computing space for each tenant. However, recent studies demonstrate that by leveraging vulnerabilities of virtualization techniques and shared processor architectures in cloud systems, various side channels can be established between cloud tenants. In this paper, we propose a novel attack scenario that can steal internal information of deep learning models by exploiting the Meltdown vulnerability in a multi-tenant system environment. In our experiments, the proposed attack method extracted internal information of a TensorFlow deep-learning service with 92.875% accuracy at an extraction speed of 1.325 kB/s.

Prediction of pollution loads in the Geum River upstream using the recurrent neural network algorithm

  • Lim, Heesung;An, Hyunuk;Kim, Haedo;Lee, Jeaju
    • Korean Journal of Agricultural Science / v.46 no.1 / pp.67-78 / 2019
  • The purpose of this study was to predict water quality using an RNN (recurrent neural network) and LSTM (long short-term memory). These are advanced machine learning algorithms that are better suited to time-series learning than plain artificial neural networks, but they had not previously been investigated for water quality prediction. Three water quality indexes, BOD (biochemical oxygen demand), COD (chemical oxygen demand), and SS (suspended solids), are predicted by the RNN and LSTM. TensorFlow, an open-source library developed by Google, was used to implement the machine learning algorithms. The Okcheon observation point in the Geum River basin in the Republic of Korea was selected as the target point for water quality prediction. Ten years of daily observed meteorological (daily temperature and daily wind speed) and hydrological (water level and flow discharge) data were used as inputs, and irregularly observed water quality (BOD, COD, and SS) data were used as the learning targets. The irregularly observed water quality data were converted into daily data with the linear interpolation method, and the water quality one day ahead was predicted by the machine learning algorithms. The results show that highly accurate prediction is possible, compared to existing physical modeling results, for BOD, COD, and SS, which are strongly non-linear. The sequence length and iteration count were varied to compare the performance of the algorithms.
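
An illustrative preprocessing step for the irregularly observed water-quality series described above: reindex the observations to a daily frequency and fill gaps by linear interpolation. The column names and the pandas-based workflow are assumptions, not the authors' code.

```python
# Convert sparse, irregular water-quality observations to a daily series via
# linear interpolation, ready to be windowed into RNN/LSTM training sequences.
import pandas as pd

def to_daily_linear(obs: pd.DataFrame, value_cols=("BOD", "COD", "SS")) -> pd.DataFrame:
    """obs: DataFrame indexed by observation date with sparse water-quality columns."""
    daily_index = pd.date_range(obs.index.min(), obs.index.max(), freq="D")
    daily = obs.reindex(daily_index)                       # introduce NaN for missing days
    daily[list(value_cols)] = daily[list(value_cols)].interpolate(method="linear")
    return daily
```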