• Title/Summary/Keyword: rule-based model


The Impact of an Ontological Knowledge Representation on Information Retrieval: An Evaluation Study of OCLC's FRBR-Based FictionFinder (정보검색에 온톨로지 지식 표현이 미치는 영향에 대한 연구: OCLC의 FRBR기반 FictionFinder의 평가를 중심으로)

  • Cho, Myung-Dae
    • Journal of the Korean Society for Information Management / v.25 no.2 / pp.183-198 / 2008
  • With the purpose of enriching existing catalogues with FRBR (the Functional Requirements for Bibliographic Records) in mind, this paper evaluates the impact of a bibliographic ontology on overall system performance in the field of literature. OCLC's FictionFinder (http://fictionfinder.oclc.org) was selected and qualitatively evaluated. Forty university seniors assessed three aspects using the 'transferring thoughts onto paper' method: 1) In which ways is this FRBR-aware bibliographic ontology helpful? 2) Does it actually help with the things it set out to help? 3) Would users seeking one particular work also see all other related works? In conclusion, this study revealed that, as Cutter claimed in his 2nd rule of the library, collocation adds value for users, and the ontology overall provides a better interface and greater usefulness. It also showed that evaluating a system with a qualitative methodology helps build a full picture of the system and grasp users' information needs during development. Qualitative evaluations could therefore serve as indicators for the evaluation of any information retrieval system.

Performance Comparison of Clustering Using Discretization Algorithm (이산화 알고리즘을 이용한 계층적 클러스터링의 실험적 성능 평가)

  • Won, Jae Kang;Lee, Jeong Chan;Jung, Yong Gyu;Lee, Young Ho
    • Journal of Service Research and Studies / v.3 no.2 / pp.53-60 / 2013
  • Various data-mining techniques have been developed to obtain information from large data sets. Pattern recognition and machine learning are among the most sought-after areas in recent years, and most existing learning algorithms build a rule or decision model from categorical attributes. Real-world data, however, often consist of numeric attributes, or mix numeric values in with the usual categorical attributes. In such cases a preprocessing step is required before the data can be used for learning: discretization, in which the domain of each numeric attribute is divided into several segments. This paper describes such discretization techniques together with clustering, another data-mining technique. Clustering splits a large database into smaller groups of records with similar characteristics, partitioning a finite set of patterns in the pattern space so that patterns close to one another form a cluster; it extracts patterns from the given data without specifying a particular category in advance. We describe this clustering technique for grouping and classifying similar data.
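The discretization step described above, dividing the domain of a numeric attribute into several segments so that learners built for categorical attributes can use it, can be sketched with equal-width binning (a minimal sketch; the bin count and sample values are illustrative, not from the paper):

```python
def equal_width_bins(values, n_bins):
    """Split the numeric domain [min, max] into n_bins equal-width segments
    and map each value to its bin index (a categorical label)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1  # guard against a constant attribute
    # clamp the maximum value into the last bin
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

ages = [18, 22, 35, 41, 58, 63]          # a numeric attribute
labels = equal_width_bins(ages, 3)       # -> [0, 0, 1, 1, 2, 2]
```

The resulting bin indices can then be fed to any rule or decision-model learner that expects categorical attributes.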


Longitudinal Behavior of Prestressed Steel-Box-Girder Bridge (프리스트레스를 도입한 강합성형 교량의 교축방향 거동)

  • Park, Nam Hoi;Kang, Young Jong;Lee, Man Seop;Go, Seok Bong
    • Journal of Korean Society of Steel Construction / v.15 no.3 / pp.321-329 / 2003
  • To use the cross section of concrete decks effectively, analytical and experimental studies on prestressed steel-box-girder bridges were performed. The method of applying prestress was determined in the analytical study, and the longitudinal behavior of the prestressed steel-box-girder bridge was examined in the experimental study. The object model for these studies was a two-span continuous bridge. The prestressing method determined herein has two parts: one applies prestress to the concrete deck at the intermediate support, and the other applies prestress to the lower flange of the steel box girder at the end supports. The prototype bridge for the experiment was scaled based on the rule of similitude and fabricated following construction steps chosen to apply prestress effectively. The experimental results demonstrated that the prestressed steel-box-girder bridge outperforms the general steel-box-girder bridge in terms of the increase in design live load, the reduction of tensile stress in the concrete deck at the intermediate support, and the reduction of displacement.

Numerical Studies on Combined VM Loading and Eccentricity Factor of Circular Footings on Sand (모래지반에서 원형기초의 수직-모멘트 조합하중 지지력과 편심계수에 대한 수치해석 연구)

  • Kim, Dong-Joon;Youn, Jun-Ung;Jee, Sung-Hyun;Choo, Yun Wook
    • Journal of the Korean Geotechnical Society / v.30 no.3 / pp.59-72 / 2014
  • The combined vertical-moment loading capacity of circular rigid footings with a rough base on sand was studied by three-dimensional numerical modelling. A Mohr-Coulomb plasticity model with the associated flow rule was used for the soil. The swipe loading method, which can construct the interaction diagram with a smaller number of analyses, was compared with the probe loading method, which can simulate the load paths of conventional load tests, and the two methods were found to give similar results. Conventional methods based on the effective width or area concept, and results expressed via the eccentricity factor ($e_{\gamma}$), were reviewed, and the numerical results of this study were compared with those of previous studies. The combined vertical (V) - moment (M) loading capacity was barely affected by the internal friction angle, and the effective width concept, expressed in the form of an eccentricity factor, was found to be applicable to circular footings. The numerical results of this study were smaller than previous experimental results, and the differences increased with eccentricity and moment load. The reasons for these disparities between the numerical and experimental results are discussed, and areas for further research are noted.
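The effective width concept mentioned above can be illustrated with the classical strip-footing reduction $B' = B - 2e$, which expressed as an eccentricity factor reads $e_{\gamma} = B'/B = 1 - 2e/B$ (a textbook sketch only, where $e = M/V$; the circular-footing form is what the paper derives numerically):

```python
def eccentricity_factor(B, e):
    """Classical effective-width reduction for a footing of width B
    under a load with eccentricity e = M / V. Returns B' / B."""
    if e >= B / 2:
        raise ValueError("resultant falls outside the footing")
    return 1.0 - 2.0 * e / B

# eccentricity e = M/V = 0.25 m on a B = 2 m footing -> factor 0.75
f = eccentricity_factor(2.0, 0.25)
```

As the abstract notes, the factor obtained from numerical modelling of circular footings is smaller than such simple estimates at large eccentricities.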

A Proposal of Urban Park Design Using DT Cafe in Post-COVID Era (포스트 코로나 시대에 드라이브 스루 카페를 활용한 도시공원 디자인 제안)

  • Kil, Sue Yeon;Shin, Hae Min;Choi, Joo Hyun;Kim, Yoo Sun
    • Journal of the Korean Society of Floral Art and Design / no.45 / pp.31-46 / 2021
  • With the advent of the post-COVID-19 era, people must maintain social distancing for quarantine, but this rule deprives them of freedom. This study therefore proposes a new-normal plan for urban park design that uses a drive-thru to recreate space where people can maintain and enjoy their previous lives while complying with quarantine rules. Olympic Park has a large floating population and is one of the places where a drive-thru is feasible, so the study designed it as the only cafe that could operate if other cafes were shut down under social distancing. The cafe in the park was designed as five spaces based on the Olympic flag motif. The results were as follows. The cafe's name is CUPPY (Cup + Coffee), and each letter of the logo is rendered in one of the colors of the Olympic flag: blue, yellow, black, green, and red. The cafe spaces were divided into the five continents symbolized by the Olympic flag (Europe, Asia, Africa, Oceania, and America), with the driving route shaped like the Olympic logo to match the five spaces. Human beings need change and adaptation in many fields to live in a post-COVID-19 era they have never experienced before. Just as the New Normal changes with the times, research is essential for presenting a New Normal in urban park design that reflects the disaster situation following the COVID-19 crisis. On this very point, we expect this research to serve as a reference for urban park design, and continuous suggestions and research will be necessary to apply the model to more diverse environments.

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data-mining technology has evolved to handle unstructured document representations in a variety of applications. Sentiment analysis, an important technology for distinguishing poor from high-quality content through the text data of products, has proliferated within text mining. It mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and has been studied from many angles in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. Sentiment analysis is in fact one of the most active research topics in natural language processing and is widely studied in text mining. In marketing, real-world information from customers is gathered from websites rather than surveys: depending on whether a website's posts are positive or negative, the customer response is reflected in sales, and firms try to identify that information. Many reviews on a website, however, are not always good and can be difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. A lack of accuracy remains, however, because sentiment calculations change with the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review data set.
First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting were adopted as comparative models. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). A CNN can process a sentence in vector format much like a bag of words, but does not consider the sequential attributes of the data. An RNN handles order well because it takes the time information of the data into account, but it suffers from long-term dependency problems in its memory; LSTM is used to solve this long-term dependence problem. For comparison, CNN and LSTM were chosen as the simple deep learning models, and the classical machine learning algorithms, CNN, LSTM, and the integrated model were all analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to understand how well, and in what way, these models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text. The reasons for combining the two algorithms are as follows. A CNN can extract features for classification automatically through its convolution layers and massively parallel processing, while an LSTM is not capable of highly parallel processing. Like faucets, an LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to model long-term dependencies.
Furthermore, when an LSTM is attached after the CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN alone but faster than the LSTM alone, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can offset the weaknesses of each individual model, with the added advantage of layer-by-layer learning through the LSTM's end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
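The integrated architecture described above, a convolution extracting local n-gram features, pooling compressing them, and the LSTM reading the pooled sequence end-to-end, can be sketched as pure shape bookkeeping (layer sizes here are illustrative, not the paper's hyperparameters):

```python
def conv1d_out_len(seq_len, kernel, stride=1):
    """Output length of a 'valid' 1-D convolution over a token sequence."""
    return (seq_len - kernel) // stride + 1

def maxpool_out_len(seq_len, pool):
    """Output length after non-overlapping max pooling."""
    return seq_len // pool

# An IMDB-style review padded to 200 tokens, embedded into vectors,
# then passed through CNN -> pooling -> LSTM as in the integrated model.
seq = 200
seq = conv1d_out_len(seq, kernel=5)   # local 5-gram features -> 196 steps
seq = maxpool_out_len(seq, pool=2)    # compressed to 98 time steps
# The LSTM then consumes the 98 remaining steps, and its final hidden
# state feeds the positive/negative classifier.
```

Pooling halves the sequence the LSTM must traverse, which is one reason the combination runs faster than the LSTM alone while keeping its temporal modelling.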

Stock-Index Invest Model Using News Big Data Opinion Mining (뉴스와 주가 : 빅데이터 감성분석을 통한 지능형 투자의사결정모형)

  • Kim, Yoo-Sin;Kim, Nam-Gyu;Jeong, Seung-Ryul
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.143-156 / 2012
  • People readily believe that news and the stock index are closely related: securing news before anyone else, they think, can help them forecast stock prices and enjoy great profit, or capture an investment opportunity. However, it is no easy feat to determine to what extent the two are related, to make investment decisions based on news, or to find out whether such investment information is valid. If the significance of news and its impact on the stock market can be analyzed, it becomes possible to extract information that assists investment decisions. The reality, however, is that the world is inundated with a massive wave of real-time news, and news is not patterned text. This study proposes a stock-index investment model based on opinion mining of "news big data" that systematically collects, categorizes, and analyzes news to create investment information. To verify the validity of the model, the relationship between the opinion-mining results and the stock index was empirically analyzed using statistics. The mining steps that convert news into information for investment decision-making are as follows. First, news supplied in real time by a news provider is indexed: not only the contents but also information such as the medium, time, and news type is collected, classified, and reworked into variables from which investment decisions can be inferred. Next, the text of each news item is separated into morphemes to derive words whose polarity can be judged, and each word is tagged with positive/negative polarity by comparison with a sentiment dictionary. Third, the positive/negative polarity of each news item is judged using the indexed classification information and a scoring rule, and the final investment decision-making information is derived according to daily scoring criteria.
For this study, the KOSPI index and its fluctuation range were collected for the 63 days the stock market was open during the three months from July to September 2011 on the Korea Exchange, and news data were collected by parsing 766 articles from economic news medium M carried in the stock information > news > main news section of the portal site Naver.com. During the three months, the stock price index rose on 33 days and fell on 30 days; the news comprised 197 articles published before the market opened, 385 during the session, and 184 after the market closed. Mining the collected news and comparing it with stock prices showed that the positive/negative opinion of news content had a significant relationship with stock prices, and that changes in the stock price index were better explained when news opinion was derived as a positive/negative ratio rather than as a simplified positive-or-negative judgment. To check whether news affected, or at least preceded, stock price fluctuations, changes in stock prices were also compared only with news published before the market opened, and this relationship was verified to be statistically significant as well. In addition, because the news covered various types of information - social, economic, and overseas news, corporate earnings, industry conditions, market outlook, current market conditions, and so on - the influence on the stock market was expected to differ by news type. Each type of news was therefore compared with stock price fluctuations, and market conditions, outlook, and overseas news proved the most useful in explaining them.
On the contrary, news about individual companies was not statistically significant, but its opinion-mining value tended to move opposite to the stock price; the reason is thought to be the appearance of promotional and planned news intended to keep stock prices from falling. Finally, multiple regression analysis and logistic regression analysis were carried out to derive an investment decision function from the relationship between the positive/negative opinion of news and stock prices. The regression equation using pre-opening market conditions, outlook, and overseas news as variables was statistically significant, and the classification accuracy of the logistic regression was 70.0% for rises in the stock price, 78.8% for falls, and 74.6% on average. This study first analyzed the relationship between news and stock prices by quantifying the sentiment of atypical news content using opinion mining, a big-data analysis technique, and then proposed and verified an intelligent investment decision-making model that systematically carries out opinion mining and derives and supports investment information. This shows that news can be used as a variable to predict the stock price index for investment, and the model is expected to serve as a real investment support system if implemented and verified in the future.
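The tagging and scoring steps above can be sketched with a toy sentiment dictionary (the lexicon, tokenizer, and ratio rule here are illustrative; the study used Korean morpheme analysis, its own dictionary, and daily scoring criteria):

```python
# Toy lexicon, not the study's sentiment dictionary.
LEXICON = {"surge": +1, "gain": +1, "record": +1,
           "loss": -1, "fall": -1, "slump": -1}

def polarity_ratio(tokens):
    """Tag each token against the lexicon and reduce the article to a
    (positive_ratio, negative_ratio) pair over the lexicon hits."""
    scores = [LEXICON[t] for t in tokens if t in LEXICON]
    if not scores:
        return 0.0, 0.0
    pos = sum(1 for s in scores if s > 0)
    return pos / len(scores), (len(scores) - pos) / len(scores)

news = "kospi extends gain despite fall in exporters".split()
ratio = polarity_ratio(news)   # one positive hit, one negative hit
```

Keeping the result as a ratio rather than collapsing it to a single positive/negative label is exactly the refinement the abstract reports as explaining index changes better.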

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models; two of its important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities, but since the release of Tensorflow, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense the predecessor of the other two. The most common and important function of a deep learning framework is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs consisting of nodes and edges; the partial derivative along each edge of the graph can be obtained, and with those partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, convenience of coding ranks CNTK first, then Tensorflow, then Theano. The criterion is simply code length; the learning curve and ease of coding were not the main concern.
By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow we need to define weight variables and biases explicitly; the reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as Theano's, we can implement and test any new deep learning model or any new search method we can think of. As for execution speed, our assessment is that there is no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important, and for someone learning deep learning, the availability of enough examples and references matters as well.
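The automatic-differentiation mechanism described above, partial derivatives on the edges of a computational graph combined by the chain rule, can be reduced to a toy reverse-mode sketch (framework-independent; not code from any of the three libraries):

```python
class Node:
    """One node of a computational graph: a value plus the edges
    (parent, local partial derivative) that produced it."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __mul__(self, other):
        # d(xy)/dx = y, d(xy)/dy = x : the partials on the two edges
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

def backward(node, upstream=1.0):
    """Chain rule: accumulate upstream * local partial along every edge."""
    node.grad += upstream
    for parent, local in node.parents:
        backward(parent, upstream * local)

x, y = Node(2.0), Node(3.0)
z = x * y + x          # z = xy + x
backward(z)            # dz/dx = y + 1 = 4, dz/dy = x = 2
```

This is the core every framework in the comparison shares; they differ mainly in how much abstraction (layers, optimizers, GPU kernels) they build on top of it.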

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.109-125 / 2020
  • The Ministry of National Defense is pursuing the Defense Acquisition Program to build strong defense capabilities, spending more than 10 trillion won annually on defense improvement. Because the Defense Acquisition Program is directly related to national security as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the program has made it difficult for many working-level officials to carry it out smoothly; many reportedly discover only mid-task that there are relevant regulations they were unaware of. In addition, statutory statements related to the Defense Acquisition Program can cause serious issues even when only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system that corrects such issues in real time have been minimal. This paper therefore proposes an implementation plan for a "Comparison System between the Statements of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program documents and the corresponding statutory provisions, classify the risk of illegality, and make users aware of the consequences. Several artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentences" (taken from actual statutes) and "Edited Sentences" (modified sentences derived from the originals).
Among the many statutes related to the program, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentences" are the 83 clauses of these acts most accessible to working-level officials in their work. For each such clause, the "Edited Sentences" comprise 30 to 50 similar sentences likely to appear, in modified form, in military reports. The edited sentences were produced by modifying the original sentences according to 12 fixed rules, in proportion to the number of rules applied. After 1:1 sentence-similarity performance evaluation experiments, each "Edited Sentence" could be classified as legal or illegal with considerable accuracy. The "Edited Sentence" dataset used to train the neural network models, however, is characterized by those 12 rules: when trained only on the "Original Sentence" and "Edited Sentence" data, the models could not effectively classify other sentences that appear in actual military reports, since the dataset is not ample enough for them to handle new incoming sentences. Hence, the models' performance was reassessed on an additional 120 newly written sentences that better resemble those in actual military reports while remaining associated with the original sentences, and their performance was confirmed to surpass a certain level even when trained merely on the "Original Sentence" and "Edited Sentence" data.
If sufficient learning is achieved by improving and expanding the training data with sentences that actually appear in reports, the models will be able to classify sentences from military reports as legal or illegal more reliably. Based on the experimental results, this study confirms the possibility and value of building a real-time automated comparison system between military documents and related laws. The approach verified in this experiment can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence appearing in a Defense Acquisition Program-related military report, which helps determine whether the contents of the report sentence carry a risk of illegality when compared with the law clauses.
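The 1:1 sentence-similarity comparison at the heart of the proposed system can be illustrated, at its very simplest, with cosine similarity between bag-of-words vectors; `encode` below is a hypothetical stand-in for the Siamese network's shared-weight encoder (the paper uses Bi-LSTM/Self-Attention encoders), and the 0.95 threshold is illustrative:

```python
import math
from collections import Counter

def encode(sentence):
    """Stand-in sentence encoder: a bag-of-words vector. In a Siamese
    setup the same encoder (shared weights) embeds both sentences."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

original = "the project shall be approved by the minister"   # statute clause
edited = "the project shall be reviewed by the minister"     # report sentence
sim = cosine(encode(original), encode(edited))
flagged = sim < 0.95   # low similarity to the statute -> flag for review
```

A trained Siamese encoder replaces the bag-of-words step, so that legally significant single-word edits, which barely move a count vector, produce a clearly separated similarity score.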

Water Shortage Assessment by Applying Future Climate Change for Boryeong Dam Using SWAT (SWAT을 이용한 기후변화에 따른 보령댐의 물부족 평가)

  • Kim, Won Jin;Jung, Chung Gil;Kim, Jin Uk;Kim, Seong Joon
    • Journal of Korea Water Resources Association / v.51 no.12 / pp.1195-1205 / 2018
  • In this study, the water shortage of the Boryeong Dam watershed ($163.6 km^2$) was evaluated under a future climate change scenario. The Soil and Water Assessment Tool (SWAT) was used, with future dam release derived from multiple linear regression (MLR) analysis. SWAT was calibrated and verified using daily observed dam inflow and storage over 12 years (2005 to 2016), with average Nash-Sutcliffe efficiencies of 0.59 and 0.91, respectively, and the monthly dam release modeled by the 12-year MLR showed a coefficient of determination ($R^2$) above 0.57. Among the 27 RCP 4.5 and 26 RCP 8.5 GCM (General Circulation Model) scenarios, the RCP 8.5 BCC-CSM1-1-M scenario was selected as the future extreme drought scenario by analyzing SPI severity, duration, and the longest dry period. Under this scenario, yearly dam storage changed by -23.6%, with large changes of -34.0% and -24.1% in spring and winter dam storage for the 2037~2046 period compared with the 2007~2016 period. Based on runs theory for analyzing severity and magnitude, the frequency of 5- to 10-year droughts increased from 3 in 2007~2016 to 5 in 2037~2046. Considering the shortened future water-shortage return period and the large decreases in winter and spring dam storage, a new dam operation rule starting in autumn will be necessary for possible future water shortage conditions.
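The calibration statistic quoted above, the Nash-Sutcliffe efficiency, is defined as $NSE = 1 - \sum(O_i - S_i)^2 / \sum(O_i - \bar{O})^2$; a minimal sketch with illustrative inflow values (not the study's data):

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean_obs)^2).
    1.0 is a perfect fit; <= 0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# illustrative daily dam-inflow series (m^3/s), not the study's data
obs = [10.0, 12.0, 14.0, 16.0, 18.0]
sim = [10.5, 11.5, 14.0, 16.5, 17.5]
nse = nash_sutcliffe(obs, sim)   # close to 1 -> good calibration
```

In the study this statistic was computed separately for dam inflow (0.59) and dam storage (0.91) over the 2005-2016 calibration/verification period.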