• Title/Summary/Keyword: Tensor Flow


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from overflowing content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are also focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is generated constantly and the fresher the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the result. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study makes three contributions. First, it presents a practical and simple automatic knowledge extraction method that can actually be applied. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using a named entity recognition tool, KKMA. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, as many score functions as there are stocks are trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite the constraints under which the research was conducted. Looking at the model's prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, that are needed to search related information according to the user's investment intention. Graph data are generated using only a named entity recognition tool and fed to the neural tensor network without learning a field-specific corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: notably, the model's especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used to match new text information semantically with the related stocks.
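The scoring step described above, one trained score function per stock applied to one-hot entity vectors, follows the standard neural tensor network form g(e1, R, e2) = uᵀ tanh(e1ᵀ W e2 + V[e1; e2] + b). Below is a minimal NumPy sketch with hypothetical, randomly initialized parameters; it illustrates the form of the score function, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4  # entity vector size and number of tensor slices (illustrative values)

# Hypothetical parameters for one stock's score function
W = rng.normal(size=(k, d, d))   # bilinear tensor slices
V = rng.normal(size=(k, 2 * d))  # linear term over the concatenated entities
b = rng.normal(size=k)
u = rng.normal(size=k)

def ntn_score(e1, e2):
    """g(e1, R, e2) = u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# One-hot encoded entities, as in the abstract's preprocessing step
e1, e2 = np.eye(d)[0], np.eye(d)[3]
score = ntn_score(e1, e2)
```

In the paper's setup, a new entity would be scored by every stock's function and assigned to the stock whose function returns the highest score.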

Measurement of Two-Dimensional Velocity Distribution of Spatio-Temporal Image Velocimeter using Cross-Correlation Analysis (상호상관법을 이용한 시공간 영상유속계의 2차원 유속분포 측정)

  • Yu, Kwonkyu;Kim, Seojun;Kim, Dongsu
    • Journal of Korea Water Resources Association
    • /
    • v.47 no.6
    • /
    • pp.537-546
    • /
    • 2014
  • Surface image velocimetry was introduced as an efficient and safe alternative to conventional river flow measurement methods during floods. Conventional surface image velocimetry uses a pair of images to estimate velocity fields through cross-correlation analysis. This method is appropriate for analyzing images taken at a short time interval. It has some drawbacks, however: analyzing images for the average velocity over long time intervals takes a while, and the method is prone to errors or uncertainties due to flow characteristics and/or image-taking conditions. Methods using spatio-temporal images, called STIV, were developed to overcome these drawbacks. The grayscale-gradient tensor method, one of various STIV techniques, has been shown to reduce analysis time effectively and is fairly insusceptible to measurement noise. Unfortunately, it can be applied only along the main flow direction, which means it cannot measure a two-dimensional flow field, e.g., flow in the vicinity of river structures or flow around river bends. The present study aimed to develop a new method for analyzing spatio-temporal images in two dimensions using cross-correlation analysis. Unlike conventional STIV, the developed method can measure substantially two-dimensional flow. The method also has very high spatial resolution and reduces analysis time. A verification test using artificial images of lid-driven cavity flow showed that the maximum error of the method is less than 10% and the average error is less than 5%, indicating that the developed scheme is fairly accurate even for two-dimensional flow.
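The cross-correlation step underlying both conventional surface image velocimetry and the method above can be illustrated in a few lines: the displacement between two image patches is the peak of their (here circular) cross-correlation, and dividing by the frame interval gives the velocity. A minimal NumPy sketch on synthetic data, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic surface-image patch and a copy shifted by (dy, dx) = (3, 5) pixels,
# standing in for two frames taken one interval apart
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 3, axis=0), 5, axis=1)

# Circular cross-correlation via FFT; the peak location gives the displacement
corr = np.fft.ifft2(np.fft.fft2(img).conj() * np.fft.fft2(shifted)).real
est_dy, est_dx = np.unravel_index(np.argmax(corr), corr.shape)

# Surface velocity would then be displacement * pixel_size / frame_interval
```

Real imagery would use windowed (non-circular) correlation with sub-pixel peak fitting, but the peak-finding principle is the same.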

Particle Based Discrete Element Modeling of Hydraulic Stimulation of Geothermal Reservoirs, Induced Seismicity and Fault Zone Deformation (수리자극에 의한 지열저류층에서의 유도지진과 단층대의 변형에 관한 입자기반 개별요소법 모델링 연구)

  • Yoon, Jeoung Seok;Hakimhashemi, Amir;Zang, Arno;Zimmermann, Gunter
    • Tunnel and Underground Space
    • /
    • v.23 no.6
    • /
    • pp.493-505
    • /
    • 2013
  • This numerical study investigates seismicity and fault slip induced by fluid injection in a deep geothermal reservoir with pre-existing fractures and a fault. Particle Flow Code 2D is used with an additionally implemented hydro-mechanically coupled fluid flow algorithm and an acoustic emission moment tensor inversion algorithm. The output of the model includes the spatio-temporal evolution of induced seismicity (hypocenter locations and magnitudes) and fault deformation (failure and slip) in relation to the fluid pressure distribution. The model is applied to a case of fluid injection at constant rates changing in three steps, using fluids of different character, i.e., viscosity, and different injection locations. In a fractured reservoir, the spatio-temporal distribution of the induced seismicity differs significantly depending on the viscosity of the fracturing fluid: injection of a low-viscosity fluid results in a larger induced seismicity cloud, as the fluid can migrate easily through the reservoir, and causes induced seismicity of greater number and magnitude in the post-shut-in period. In a faulted reservoir, fault deformation (co-seismic failure and aseismic slip) can be induced by a small perturbation of the fracturing fluid (<0.1 MPa) when the injection location is set close to the fault. The presented numerical modeling technique can be used practically in the geothermal industry to predict the induced seismicity pattern and magnitude distribution resulting from hydraulic stimulation of geothermal reservoirs prior to the actual injection operation.

AB9: A neural processor for inference acceleration

  • Cho, Yong Cheol Peter;Chung, Jaehoon;Yang, Jeongmin;Lyuh, Chun-Gi;Kim, HyunMi;Kim, Chan;Ham, Je-seok;Choi, Minseok;Shin, Kyoungseon;Han, Jinho;Kwon, Youngsu
    • ETRI Journal
    • /
    • v.42 no.4
    • /
    • pp.491-504
    • /
    • 2020
  • We present AB9, a neural processor for inference acceleration. AB9 consists of a systolic tensor core (STC) neural network accelerator designed to accelerate artificial intelligence applications by exploiting the data reuse and parallelism characteristics inherent in neural networks while providing fast access to large on-chip memory. Complementing the hardware is an intuitive and user-friendly development environment that includes a simulator and an implementation flow providing a high degree of programmability with a short development time. Along with a 40-TFLOP STC that includes 32k arithmetic units and over 36 MB of on-chip SRAM, our baseline implementation of AB9 consists of a 1-GHz quad-core setup with various other industry-standard peripheral intellectual properties. The acceleration performance and power efficiency were evaluated using YOLOv2, and the results show that AB9 has superior performance and power efficiency to those of a general-purpose graphics processing unit implementation. AB9 has been taped out in the TSMC 28-nm process with a chip size of 17 × 23 ㎟. Delivery is expected later this year.
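The data reuse a systolic tensor core exploits can be seen in the accumulation pattern of an output-stationary multiply-accumulate array: each operand is fetched once and reused by a whole row or column of processing elements. The NumPy sketch below simulates that pattern only; it is not AB9's actual microarchitecture, and the dimensions are illustrative.

```python
import numpy as np

# Toy output-stationary systolic multiply: each PE (i, j) accumulates
# A[i, k] * B[k, j] as operands stream through the array.
rng = np.random.default_rng(2)
A, B = rng.random((4, 3)), rng.random((3, 5))

acc = np.zeros((4, 5))       # one accumulator per processing element
for k in range(A.shape[1]):  # one streaming step per shared index k
    # Column k of A is broadcast along rows and row k of B along columns,
    # so each fetched element feeds an entire row/column of PEs at once.
    acc += np.outer(A[:, k], B[k, :])
```

After the final streaming step, every accumulator holds one entry of the product A·B, which is why such arrays avoid re-reading operands from memory for each multiply.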

A nonlinear Co-rotational Quasi-Conforming 4-node Shell Element Using Ivanov-Ilyushin Yield Criteria (이바노브-율리신 항복조건을 이용한 4절점 비선형 준적합 쉘요소)

  • Panot, Songsak Pramin;Kim, Ki Du
    • Journal of Korean Society of Steel Construction
    • /
    • v.20 no.3
    • /
    • pp.409-419
    • /
    • 2008
  • A co-rotational quasi-conforming formulation of four-node stress-resultant shell elements using the Ivanov-Ilyushin yield criterion is presented for the nonlinear analysis of plate and shell structures. The formulation of the geometric stiffness is based on the full definition of the Green strain tensor and is efficient for analyzing stability problems of moderately thick plates and shells, as it incorporates the bending moment and transverse shear resultant force. As a result of the explicit integration of the tangent stiffness matrix, this formulation is computationally very efficient in incremental nonlinear analysis. The formulation also integrates elasto-plastic material behaviour using the Ivanov-Ilyushin yield condition with isotropic strain hardening and its associated flow rules. Ivanov-Ilyushin plasticity, which avoids multi-layer integration, is computationally efficient in large-scale modeling of elasto-plastic shell structures. The numerical examples herein show satisfactory concordance with tested and published references.
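The geometric stiffness above is built from the full Green strain tensor; for reference, its standard definition in terms of the deformation gradient F and displacement gradient ∇u (textbook form, not reproduced from the paper) is:

```latex
\mathbf{E} \;=\; \tfrac{1}{2}\left(\mathbf{F}^{\mathsf T}\mathbf{F} - \mathbf{I}\right)
\;=\; \tfrac{1}{2}\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T}
      + (\nabla\mathbf{u})^{\mathsf T}\,\nabla\mathbf{u}\right)
```

Retaining the quadratic term, rather than linearizing it away, is what makes such a formulation valid for the moderately large rotations encountered in stability problems.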

The Study on Implementation of Crime Terms Classification System for Crime Issues Response

  • Jeong, Inkyu;Yoon, Cheolhee;Kang, Jang Mook
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.3
    • /
    • pp.61-72
    • /
    • 2020
  • The fear of crime, discussed in the United States since the early 1960s, is a psychological response, such as anxiety or concern about crime, of a potential crime victim. These anxiety factors burden individuals seeking psychological stability and impose indirect social costs of crime. Fear of crime is harmful and needs to be kept, through policy alongside crime response and resolution, from being exaggerated and distorted, because fear of crime does as much harm as the damage caused by criminal acts themselves. Eric Pawson has argued that the popular impression of violent crime is formed not by media reports but by official statistics. The police should therefore watch and analyze news related to fear of crime, in order to reduce its social cost and prepare a preemptive response policy before the public develops 'fear of crime'. In this paper, we propose a deep learning-based news classification system that helps the police cope with crimes reported in the media efficiently, quickly, and precisely. The goal is to establish a system that can quickly identify rapidly growing security issues by categorizing crime-related news among news articles. To construct the system, crime data were learned so that news could be classified according to crime type, applying deep learning with Google TensorFlow. In the future, research should continue on the importance of keywords for early detection of issues growing rapidly by crime type and on the influence of the press, and the crime-related corpus needs to be supplemented continually.
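The pipeline the abstract describes, vectorize crime-related news text and then train a classifier to assign a crime type, can be sketched minimally. The toy corpus, labels, and plain softmax classifier below are illustrative stand-ins (the paper itself uses a deep model in TensorFlow); they show only the shape of the pipeline.

```python
import numpy as np

# Toy corpus of (headline, crime-type label) pairs -- hypothetical examples
docs = [("burglary reported downtown", 0),
        ("phone scam targets elderly", 1),
        ("apartment burglary suspect arrested", 0),
        ("online scam ring dismantled", 1)]

# Bag-of-words vectorization
vocab = sorted({w for text, _ in docs for w in text.split()})
widx = {w: i for i, w in enumerate(vocab)}

def bow(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in widx:
            v[widx[w]] += 1.0
    return v

X = np.stack([bow(t) for t, _ in docs])
y = np.array([label for _, label in docs])

# Softmax classifier trained by gradient descent on cross-entropy
W = np.zeros((len(vocab), 2))
onehot = np.eye(2)[y]
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * X.T @ (p - onehot) / len(docs)

# Classify an unseen headline
pred = np.argmax(bow("burglary suspect caught") @ W)
```

In a production system the softmax layer would be replaced by a deep network and the toy corpus by a large labeled news corpus, but the vectorize-train-classify structure is the same.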

Experiment and Implementation of a Machine-Learning Based k-Value Prediction Scheme in a k-Anonymity Algorithm (k-익명화 알고리즘에서 기계학습 기반의 k값 예측 기법 실험 및 구현)

  • Muh, Kumbayoni Lalu;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.1
    • /
    • pp.9-16
    • /
    • 2020
  • The k-anonymity scheme has been widely used to protect private information when big data are distributed to a third party for research purposes. When the scheme is applied, determining an optimal k value is one of the difficult problems to resolve because many factors must be considered. Currently, the determination is done almost manually by human experts using their intuition, which degrades anonymization performance and costs them much time and effort. To overcome this problem, a simple idea based on machine learning has been proposed. This paper describes implementations and experiments realizing that idea. In this work, a deep neural network (DNN) is implemented using TensorFlow libraries and is trained and tested on an input dataset. The experiment results show that the trend of training errors follows a typical DNN pattern, but the validation errors of our model follow a different pattern from that seen in a typical training process. The advantage of the proposed approach is that it can reduce the time and cost for experts to determine the k value, because the determination can be done semi-automatically.
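The semi-automatic determination amounts to a regression problem: map features describing a dataset to a suitable k value. The sketch below uses a small hand-written one-hidden-layer network on synthetic data rather than the authors' TensorFlow DNN and dataset, which are not specified here; the feature meanings and target relation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset-level features (e.g. scaled record count and
# quasi-identifier cardinality) and a synthetic "suitable k" target
X = rng.random((200, 2))
y = (2.0 * X[:, 0] + 3.0 * X[:, 1]).reshape(-1, 1)

# One-hidden-layer network trained by plain gradient descent on MSE
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

mse0 = float(np.mean((forward(X)[1] - y) ** 2))  # error before training
for _ in range(2000):
    h, out = forward(X)
    g = 2.0 * (out - y) / len(X)          # dMSE/dout
    gW2, gb2 = h.T @ g, g.sum(axis=0)
    gh = (g @ W2.T) * (1.0 - h ** 2)      # backpropagate through tanh
    gW1, gb1 = X.T @ gh, gh.sum(axis=0)
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2
mse1 = float(np.mean((forward(X)[1] - y) ** 2))  # error after training
```

A predicted continuous value would then be rounded to the nearest admissible k before anonymization.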

Solidification Process of a Binary Mixture with Anisotropy of the Mushy Region (머시영역의 비등방성을 고려한 2성분혼합물의 응고과정)

  • 유호선
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.17 no.1
    • /
    • pp.162-171
    • /
    • 1993
  • This paper deals with the anisotropy of the mushy region during the solidification process of a binary mixture. A theoretical model that specifies the permeability tensor in terms of its principal values is proposed. Also, the governing equations are modified into forms convenient for numerical analysis with the existing algorithm. Test computations are performed for the solidification of an aqueous ammonium chloride solution contained in a square cavity. The results show not only that the present model is capable of resolving fundamental characteristics of the transport phenomena, but also that the anisotropy significantly affects the interdendritic flow structure, i.e., the double-diffusive convection and macrosegregation patterns.
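The permeability model outlined above, a tensor specified by principal values, can be written as a hedged sketch in assumed two-dimensional notation, with K₁ the principal permeability along the primary dendrite arms, K₂ across them, and R(θ) the rotation from the principal axes into the global frame:

```latex
\mathbf{K} \;=\; \mathbf{R}(\theta)
\begin{pmatrix} K_1 & 0 \\ 0 & K_2 \end{pmatrix}
\mathbf{R}(\theta)^{\mathsf T},
\qquad
\mathbf{u} \;=\; -\,\frac{\mathbf{K}}{\mu}\,\nabla p
\quad \text{(Darcy-type interdendritic flow)}
```

With K₁ = K₂ the usual isotropic mushy-zone model is recovered; unequal principal values are what allow the model to capture the directional effects on double-diffusive convection and macrosegregation reported above.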

Trend Analysis of Korea Papers in the Fields of 'Artificial Intelligence', 'Machine Learning' and 'Deep Learning' ('인공지능', '기계학습', '딥 러닝' 분야의 국내 논문 동향 분석)

  • Park, Hong-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.4
    • /
    • pp.283-292
    • /
    • 2020
  • Artificial intelligence, one of the representative images of the 4th industrial revolution, has been highly recognized since 2016. This paper analyzes domestic paper trends for 'Artificial Intelligence', 'Machine Learning', and 'Deep Learning' among the domestic papers provided by the Korea Academic Education and Information Service. Approximately 10,000 retrieved papers are analyzed using word count analysis, topic modeling, and semantic network analysis. The analysis of the extracted papers shows that, compared to 2015, publications in 2016 increased by 600% in the field of artificial intelligence, 176% in machine learning, and 316% in deep learning. In machine learning, support vector machine models have been widely studied, and in deep learning, convolutional neural networks built with TensorFlow are widely used. This paper can help set future research directions in the fields of 'artificial intelligence', 'machine learning', and 'deep learning'.

Development of water elevation prediction algorithm using unstructured data : Application to Cheongdam Bridge, Korea (비정형화 데이터를 활용한 수위예측 알고리즘 개발 : 청담대교 적용)

  • Lee, Seung Yeon;Yoo, Hyung Ju;Lee, Seung Oh
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2019.05a
    • /
    • pp.121-121
    • /
    • 2019
  • As localized heavy rainfall, in which rain falls intensively over a specific area, occurs more frequently, the risk of flooding of social infrastructure near rivers is increasing. Flood risk is usually judged from water level information, and water levels are mostly predicted with numerical models. In this study, water levels were predicted using a big-data-based RNN (Recurrent Neural Network) algorithm. The study area was the whole Han River, which is strongly affected by the tide. A survey of actual flood damage records for the 10 years 2008-2018 showed high rates of flood damage at Jamsu Bridge, Hangang Bridge, and Cheongdam Bridge, and in unstructured data such as SNS (Social Network Services) posts Cheongdam Bridge was tagged most often, so Cheongdam Bridge was set as the study site. The water level prediction algorithm was implemented using the TensorFlow library in Python. Both structured and unstructured data were used: the structured data were water level and rainfall records for the last 10 years (2008-2018) provided by the Han River Flood Control Office and the Korea Meteorological Administration, and the unstructured data were civilian information collected from SNS, which was combined with the structured data to build the full dataset. Through sensitivity analysis, optimal values were set for the number of hidden layers (5), the learning rate (0.02), and the number of iterations (100), and the water level 3 hours ahead was predicted from 24 hours of data. Data from 2008 to 2017 were used for training, and water levels for 2018 were predicted and evaluated. Compared with the observed 2018 water levels, more than 90% of the data showed errors within 10%, and peak water levels were also predicted fairly accurately. If various factors beyond water level and rainfall are considered in the future, faster and more accurate prediction information is expected.
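The data arrangement described above (24 hours of input predicting the level 3 hours ahead) can be sketched independently of the RNN itself. The sketch below builds the sliding-window pairs on a synthetic tide-like series and fits them with ordinary least squares as a stand-in predictor; the series, window construction, and regressor are illustrative assumptions, not the study's model or data.

```python
import numpy as np

# Synthetic hourly water-level series (tide-like cycle plus a slow trend),
# standing in for the Cheongdam Bridge record
t = np.arange(500)
level = 3.0 + np.sin(2 * np.pi * t / 24) + 0.01 * t

# Build (24-hour input window -> level 3 hours ahead) pairs, as in the study
win, lead = 24, 3
n = len(level) - win - lead
X = np.stack([level[i:i + win] for i in range(n)])
y = np.array([level[i + win + lead - 1] for i in range(n)])

# Stand-in predictor: least-squares linear map from each window to the
# future level (the study feeds the same windows into an RNN instead)
w = np.linalg.lstsq(X, y, rcond=None)[0]
mse = float(np.mean((X @ w - y) ** 2))
```

In the study itself these windows feed the recurrent network, with the 2008-2017 portion as the training set and 2018 held out for evaluation.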
