• Title/Summary/Keyword: rule-based model

Search Results: 1,019

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.19 no.3 / pp.51-67 / 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the user's future context correctly is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for inferring future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, increases accuracy, and decreases the elapsed time for service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using context history. Then a pattern, consisting of the results of reasoning over the individual rules, is developed for pattern learning. If at least one context property matches, labeled R, the pattern is regarded as right. If the pattern is new, the right pattern is added, the values of the mismatched properties are set to 0, the frequency is set to 1, and w(R, 1) is recorded. Otherwise, the frequency of the matched right pattern is increased by 1 and w(R, freq) is set. After training, if the frequency is greater than a threshold value, the right pattern is saved in the knowledge base. Conversely, if at least one context property mismatches, labeled W, the pattern is regarded as wrong. If the pattern is new, the result is modified into the corrected answer, the wrong pattern is added, and the frequency is set to 1 with w(W, 1). Otherwise, the matched wrong pattern's frequency is increased by 1 and w(W, freq) is set. After training, if the frequency is greater than a threshold level, the wrong pattern is saved in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context. Second, find matched patterns among the right patterns. If no pattern matches, find a matching pattern among the wrong patterns. If a matching pattern is still not found, choose the one context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context history from travelers who had visited the largest amusement park in Korea. As a result, 400 context records were collected in 2009. We then randomly selected 70% of the records as training data; the rest were used as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and the performance was compared with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time, which shows that the methodology is valid and scalable. As a second round of the experiment, we compared a full model to a partial model.
A full model uses both right and wrong patterns for reasoning about the future context, while a partial model reasons only with right patterns, as is generally done in the legacy alignment-prediction method. It turned out that the full model is better in terms of accuracy, while the partial model is better in terms of elapsed time. As a last experiment, we took into consideration potential privacy problems that might arise among users. To mitigate such concerns, we excluded context properties such as the date of the tour and user-profile attributes such as gender and age. The outcome shows that the cost of preserving privacy is endurable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods in prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Our future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
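
To make the pattern bookkeeping above concrete, here is a minimal Python sketch of the right/wrong pattern learning and the three-stage prediction order, assuming contexts are tuples of nominal property values. The class name, the threshold value, and the fallback function are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

class PatternLearner:
    """Sketch of the right/wrong pattern learning described above
    (names and threshold are assumptions, not the paper's)."""

    def __init__(self, threshold=2):
        self.threshold = threshold      # keep a pattern only above this frequency
        self.right = defaultdict(int)   # (context, pattern) -> freq, i.e. w(R, freq)
        self.wrong = defaultdict(int)   # (context, correction) -> freq, i.e. w(W, freq)

    def train(self, context, predicted, actual):
        if any(p == a for p, a in zip(predicted, actual)):
            # Right pattern: zero the mismatched properties, bump the frequency.
            pattern = tuple(p if p == a else 0 for p, a in zip(predicted, actual))
            self.right[(context, pattern)] += 1
        else:
            # Wrong pattern: replace the result with the corrected answer.
            self.wrong[(context, actual)] += 1

    def knowledge_base(self):
        # After training, keep only patterns above the frequency threshold.
        keep = lambda d: {k: f for k, f in d.items() if f > self.threshold}
        return keep(self.right), keep(self.wrong)

def predict(context, right_kb, wrong_kb, fallback):
    """Prediction order from the abstract: right patterns first, then wrong
    patterns, then the single most predictable context property."""
    for (ctx, pattern), _freq in sorted(right_kb.items(), key=lambda kv: -kv[1]):
        if ctx == context:
            return pattern
    for (ctx, correction), _freq in sorted(wrong_kb.items(), key=lambda kv: -kv[1]):
        if ctx == context:
            return correction
    return fallback(context)
```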

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.1-7 / 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
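
The distance-based switching rule sketched in the abstract can be rendered as a few lines of Python. This is only a schematic under assumed names, with a 1-arcminute visual-acuity stand-in for the perceptual depth criterion; the paper's actual criterion and thresholds may differ.

```python
import math

ONE_ARCMIN = math.pi / (180 * 60)   # ~2.9e-4 rad, assumed acuity limit

def depth_perceptible(obj_depth, distance):
    """Does the object's internal depth subtend more than one resolvable
    visual angle at this distance? (Stand-in for the paper's criterion.)"""
    return 2 * math.atan(obj_depth / (2 * distance)) > ONE_ARCMIN

def select_representation(has_3d, obj_depth, distance, interacting,
                          far_range=100.0):
    """3D model up close or during interaction, billboard at mid range,
    environment-map entry at far range."""
    if has_3d and (interacting or depth_perceptible(obj_depth, distance)):
        return "3d_model"
    return "billboard" if distance < far_range else "environment_map"
```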

Scalable RDFS Reasoning using Logic Programming Approach in a Single Machine (단일머신 환경에서의 논리적 프로그래밍 방식 기반 대용량 RDFS 추론 기법)

  • Jagvaral, Batselem;Kim, Jemin;Lee, Wan-Gon;Park, Young-Tack
    • Journal of KIISE / v.41 no.10 / pp.762-773 / 2014
  • As the web of data increasingly produces large RDFS datasets, it becomes essential to build scalable reasoning engines over large triple sets. Many studies have used expensive distributed frameworks, such as Hadoop, to reason over large RDFS triple sets. However, in many cases we are only required to handle millions of triples. In such cases, it is not necessary to deploy expensive distributed systems, because logic-programming-based reasoners on a single machine can produce reasoning performance similar to that of a distributed reasoner using Hadoop. In this paper, we propose a scalable RDFS reasoner using logic-programming methods on a single machine and compare our empirical results with those of distributed systems. We show that our logic-programming-based reasoner on a single machine performs as well as an expensive distributed reasoner on up to 200 million RDFS triples. In addition, we designed a metadata structure that decomposes the ontology triples into separate sectors. Instead of loading all the triples into a single model, we select an appropriate subset of the triples for each ontology reasoning rule. Unification makes it easy to handle conjunctive queries for RDFS schema reasoning; therefore, we designed and implemented the RDFS axioms using logic-programming unification and efficient conjunctive-query handling mechanisms. The throughput of our approach reached 166K triples/sec over LUBM1500 with 200 million triples, which is comparable to that of WebPIE, a distributed reasoner using Hadoop and MapReduce, which performs at 185K triples/sec. We show that it is unnecessary to use a distributed system for up to 200 million triples, at which scale the performance of a logic-programming-based reasoner on a single machine is comparable to that of an expensive distributed reasoner employing the Hadoop framework.
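
As a flavor of how one entailment rule operates over a selected subset of the triples, below is a Python rendering of RDFS rule rdfs9 (class-membership propagation along rdfs:subClassOf). The paper implements such rules as logic-programming clauses with unification; this sketch only mirrors the per-rule triple-selection idea, and a full reasoner would iterate all rules to a fixpoint.

```python
TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def rdfs9(triples):
    """rdfs9: (x, rdf:type, C1) and (C1, rdfs:subClassOf, C2) => (x, rdf:type, C2)."""
    # Load only the sectors this rule needs, not the whole model.
    subclass_of = {}
    for s, p, o in triples:
        if p == SUBCLASS:
            subclass_of.setdefault(s, set()).add(o)
    inferred = set()
    for s, p, o in triples:
        if p == TYPE:
            for sup in subclass_of.get(o, ()):
                inferred.add((s, TYPE, sup))
    return inferred - set(triples)

triples = {("alice", TYPE, "Student"), ("Student", SUBCLASS, "Person")}
print(rdfs9(triples))   # {('alice', 'rdf:type', 'Person')}
```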

Assessing the Positioning Accuracy of High density Point Clouds produced from Rotary Wing Quadrocopter Unmanned Aerial System based Imagery (회전익 UAS 영상기반 고밀도 측점자료의 위치 정확도 평가)

  • Lee, Yong Chang
    • Journal of Korean Society for Geospatial Information Science / v.23 no.2 / pp.39-48 / 2015
  • Lately, Unmanned Aerial Vehicles (UAVs) and Unmanned Aerial Systems (UAS), often known as drones, are becoming attractive as data acquisition platforms and measurement instruments for many photogrammetric surveying applications, especially the generation of high-density point clouds (HDPC). This paper presents the performance evaluation of a low-cost rotary-wing quadrocopter UAS for HDPC generation in a test-bed environment. Its performance was assessed by comparing the coordinates of the UAS-based HDPC with the results of Network RTK GNSS surveying at 62 ground check points. The results indicate that the position RMSE of the check points is σH = ±0.102 m in the horizontal plane and σV = ±0.209 m in the vertical, and the maximum elevation deviation was 0.570 m within the block area of the ortho-photo mosaic. Therefore, the level of accuracy required by the NGII for the production of ortho-image mosaics at a scale of 1:1000 was reached, and UAS-based imagery was found usable for updating 1:1000-scale maps. Also, since these results are less than or equal to the levels required by the NGII working rule agreement for airborne laser scanning surveying for Digital Elevation Model generation on 1 m × 1 m grids at 1:1000 scale, the approach could be applied to the production of topographic maps and ortho-image mosaics at scales of 1:1000~1:2500 over small areas.
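
The accuracy assessment reported here reduces to computing horizontal and vertical RMSE of the point-cloud coordinates against the GNSS check points; a minimal sketch (array layout assumed) follows.

```python
import numpy as np

def checkpoint_rmse(pc_xyz, gnss_xyz):
    """RMSE of point-cloud coordinates against RTK GNSS check points.
    Both inputs are (n, 3) arrays with columns X, Y (horizontal) and Z."""
    d = np.asarray(pc_xyz) - np.asarray(gnss_xyz)
    sigma_h = np.sqrt(np.mean(d[:, 0]**2 + d[:, 1]**2))   # horizontal RMSE
    sigma_v = np.sqrt(np.mean(d[:, 2]**2))                # vertical RMSE
    return sigma_h, sigma_v, np.abs(d[:, 2]).max()        # + max elevation deviation
```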

A Study on Application of Resource Types of RDA to KCR4 (RDA 자원유형의 KCR4 적용에 관한 연구)

  • Lee, Mi-Hwa
    • Journal of the Korean Society for Information Management / v.28 no.3 / pp.103-121 / 2011
  • This study seeks to apply the resource types of RDA to KCR4. It is difficult to choose appropriate terms and to embody the FRBR model because the GMD of KCR4 is a mixture of content-based and carrier-based vocabularies, and the SMD needs to reflect current technological terms. The resource types of RDA were developed to overcome the limitations of AACR2's GMD and will affect the worldwide cataloging environment; therefore, the RDA resource types need to be applied to the Korean cataloging rules. For this study, a case study and a survey were used. In the case study, all GMD terms in one university library were scanned programmatically to grasp librarians' and users' potential needs. The survey of cataloging librarians examined the current description of resource types in university libraries and tested the RDA resource types. As a result, the vocabulary needed to be revised into an obvious, user-understandable list, and the correction rate in the RDA test differed by resource type. Based on the case study and the survey, the RDA resource types were applied to the KCR4 resource list by adding terms such as computer game to the content types and inserting terms such as DVD, CD-ROM, Blu-Ray, and computer file into the carrier types. The RDA description method and display means were also applied to KCR4. This study applies RDA resource types to KCR4 and will contribute to revising the KCR4 resource types.

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An adaptive clustering-based collaborative filtering technique is proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, scalability problems, and data sparsity. Previous collaborative filtering techniques make recommendations based on the user's predicted preference for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly and creating the similar-item and similar-user subsets becomes more difficult. In addition, as the scale of the service increases, the time needed to create these subsets grows geometrically, increasing the response time of the recommendation system. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts a condition to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and the users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this method, the run time for creating a similar-item or similar-user subset can be reduced, the reliability of the recommendation system can be made higher than when using only user preference information to create these subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them. In this phase, a list of items is made for a user by examining the item clusters in decreasing order of the inter-cluster preference of the cluster to which the user belongs, and selecting and ranking the items according to the predicted or recorded user preference information. Using this method, the creation of the recommendation model bears the highest load of the recommendation system, which minimizes the load at run time. Therefore, the scalability problem is addressed, and a large-scale recommendation system can run highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction; this paper improves Hao Ji's idea of using both. The reliability of the recommendation service can be improved by combining the predictive values of both techniques under the conditions of the recommendation model. By predicting user preferences based on the item or user clusters, the time required for prediction can be reduced, and missing user preferences can be predicted at run time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback; this phase applies normalized user feedback to the item and user feature vectors.
This method can mitigate the problems caused by adopting concepts from context-based filtering, such as item and user feature vectors based on the user profile and item properties; these problems stem from the difficulty of quantifying the qualitative features of items and users. Therefore, the elements of the user and item feature vectors are matched one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. Verification of this method was accomplished by comparing its performance with existing hybrid filtering techniques, using two measures: MAE (Mean Absolute Error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommendation system; by response time, it was found suitable for a large-scale recommendation system. This paper has suggested an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, but it has some limitations. The technique focused on reducing time complexity; hence, a large improvement in reliability was not expected. The next step will be to improve this technique with rule-based filtering.
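
A minimal sketch of the first and third steps (clustering users and items on their feature vectors, then reading predictions off an inter-cluster preference matrix) is given below, using scikit-learn's KMeans; all names, cluster counts, and the rating layout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_model(user_feats, item_feats, ratings, k_users=8, k_items=8):
    """ratings: dict {(user_idx, item_idx): preference value}."""
    u_lab = KMeans(n_clusters=k_users, n_init=10).fit_predict(user_feats)
    i_lab = KMeans(n_clusters=k_items, n_init=10).fit_predict(item_feats)
    pref_sum = np.zeros((k_users, k_items))
    pref_cnt = np.zeros((k_users, k_items))
    for (u, i), r in ratings.items():
        pref_sum[u_lab[u], i_lab[i]] += r
        pref_cnt[u_lab[u], i_lab[i]] += 1
    # Inter-cluster preference: mean rating between each user/item cluster pair.
    inter = np.divide(pref_sum, pref_cnt,
                      out=np.zeros_like(pref_sum), where=pref_cnt > 0)
    return u_lab, i_lab, inter

def predict_missing(u, i, u_lab, i_lab, inter):
    # A missing preference is read off the user/item cluster pair at run time.
    return inter[u_lab[u], i_lab[i]]
```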

Selection Model of System Trading Strategies using SVM (SVM을 이용한 시스템트레이딩전략의 선택모형)

  • Park, Sungcheol;Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.59-71 / 2014
  • System trading has recently become more popular among Korean traders. System traders use automatic order systems driven by system-generated buy and sell signals, which come from predetermined entry and exit rules coded by the traders. Most research on system trading has focused on designing profitable entry and exit rules using technical indicators. However, market conditions, strategy characteristics, and money management also influence the profitability of system trading. Unexpected price deviations from the predetermined trading rules can incur large losses, so most professional traders use strategy portfolios rather than a single strategy. Building a good strategy portfolio is important because trading performance depends on it. Despite the importance of designing strategy portfolios, rule-of-thumb methods have typically been used to select trading strategies. In this study, we propose an SVM-based strategy portfolio management system. SVMs were introduced by Vapnik and are known to be effective in data mining; they can build good portfolios within a very short period of time. Since the SVM minimizes structural risk, it is well suited to the futures trading market, in which prices do not move exactly as they did in the past. Our system trading strategies include a moving-average cross system, MACD cross system, trend-following system, buy-dips-and-sell-rallies system, DMI system, Keltner channel system, Bollinger Bands system, and Fibonacci system. These strategies are well known and frequently used by many professional traders. We program these strategies to generate automated entry and exit signals, and we propose an SVM-based strategy selection system together with a portfolio construction and order routing system. The strategy selection system is a portfolio training system: it generates training data and builds an SVM model from the optimal portfolio. We construct an m×n data matrix by dividing the KOSPI 200 index futures data into equal periods. The optimal strategy portfolio is derived by analyzing each strategy's performance, and the SVM model is built from this data and the optimal strategy portfolio. We use 80% of the data for training and the remaining 20% for testing. For training, we select the two strategies that show the highest profit on the next day: selection method 1 always selects two strategies, while method 2 selects at most two strategies whose profit exceeds 0.1 point. We use the one-against-all method, which has a fast processing time. We analyze the daily data of KOSPI 200 index futures contracts from January 1990 to November 2011, using price change rates over 50 days as the SVM input. The training period runs from January 1990 to March 2007 and the test period from March 2007 to November 2011. We suggest three benchmark portfolios: BM1 holds two contracts of KOSPI 200 index futures for the testing period; BM2 consists of the two strategies with the largest cumulative profit during the 30 days before testing starts; BM3 consists of the two strategies with the best profits during the testing period. Trading costs include brokerage commissions and slippage. The proposed strategy portfolio management system shows more than double the profit of the benchmark portfolios: after deducting trading costs, BM1 shows 103.44 points of profit, BM2 shows 488.61 points, and BM3 shows 502.41 points.
The best benchmark is the portfolio of the two most profitable strategies during the test period. The proposed system 1 shows 706.22 points of profit and proposed system 2 shows 768.95 points after deducting trading costs. The equity curves for the entire period show a stable pattern. With higher profits, this suggests a good trading direction for system traders; we can build more stable and more profitable portfolios by adding a money management module to the system.
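
A compressed sketch of the selection model follows: 50-day price-change rates as features, the next day's best-profit strategy as the label, and a one-against-all SVM for selection. The kernel and C are assumptions, and the paper's two-strategy targets are simplified to a single best-strategy label here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

WINDOW = 50   # price-change rates over 50 days, as in the abstract

def make_dataset(prices, strategy_profits):
    """prices: (T,) daily closes; strategy_profits: (T, n_strategies),
    where row t holds each strategy's next-day profit."""
    rates = np.diff(prices) / prices[:-1]                        # (T-1,)
    X = np.lib.stride_tricks.sliding_window_view(rates, WINDOW)  # (T-WINDOW, WINDOW)
    y = strategy_profits[WINDOW:].argmax(axis=1)                 # best strategy next day
    return X, y

def fit_selector(X, y):
    # One-against-all, as in the paper, for fast processing.
    return OneVsRestClassifier(SVC(kernel="rbf", C=1.0)).fit(X, y)

def select_top2(model, latest_rates):
    """Trade the two strategies with the highest decision scores."""
    scores = model.decision_function(latest_rates.reshape(1, -1))[0]
    return np.argsort(scores)[-2:]
```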

Application of Linear Tracking to the Multi-reservoir System Operation in Han River for Hydro-power Maximization (한강수계 복합 저수지 시스템의 최적 수력발전 운영을 위한 LINEAR TRACKING의 적용)

  • Yu, Ju-Hwan;Kim, Jae-Han;Jeong, Gwan-Su
    • Journal of Korea Water Resources Association / v.32 no.5 / pp.579-591 / 1999
  • The operation of a reservoir system is necessary both for establishing operation rules and for designing reservoirs in water resources planning and management. Increasingly complex water resource systems require more advanced operation techniques, and various techniques have been introduced and applied to date. In this study, a Linear Tracking model based on optimal control theory is applied to the operation of the largest multi-reservoir system in the Han river, and its applicability is demonstrated. This system normally supplies the water resources required downstream for hydro-power and plays a role in satisfying the water demand of the Capital region. For the optimal use of the water resources, the Linear Tracking model is designed with the objective of maximizing hydro-power energy subject to the water supply demand. The multi-reservoir system includes the seven main reservoirs in the Han river: Hwachon, Soyanggang, Chunchon, Uiam, Cheongpyong, Chungju, and Paldang. These reservoirs have been operated monthly for the past 21 years. Operation results are analyzed with respect to both hydro-power energy and water supply, and the efficiency of the technique is assessed.
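
For readers unfamiliar with linear tracking, the sketch below shows the finite-horizon discrete-time tracking controller it rests on: storages x_k follow a reference trajectory r_k (e.g., head-maximizing storage targets) under x_{k+1} = A x_k + B u_k + w_k, solved by a backward Riccati recursion. All matrices, weights, and the inflow handling are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def lq_tracking_gains(A, B, Q, R, refs):
    """Backward Riccati recursion minimizing
    sum_k (x_k - r_k)' Q (x_k - r_k) + u_k' R u_k."""
    P, q = Q.copy(), -Q @ refs[-1]
    Ks, ks = [], []
    for k in range(len(refs) - 2, -1, -1):
        G = np.linalg.inv(R + B.T @ P @ B)
        K = G @ B.T @ P @ A              # feedback gain on storage state
        kff = G @ B.T @ q                # feedforward term tracking r_k
        Ks.append(K); ks.append(kff)
        Acl = A - B @ K
        P = Q + A.T @ P @ Acl
        q = Acl.T @ q - Q @ refs[k]
    return Ks[::-1], ks[::-1]

def simulate(A, B, x0, inflows, Ks, ks):
    x, traj = x0, [x0]
    for k in range(len(Ks)):
        u = -Ks[k] @ x - ks[k]           # monthly release decision
        x = A @ x + B @ u + inflows[k]   # storage balance with inflow
        traj.append(x)
    return np.array(traj)
```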

Orthophoto and DEM Generation in Small Slope Areas Using Low Specification UAV (저사양 무인항공기를 이용한 소규모 경사지역의 정사영상 및 수치표고모델 제작)

  • Park, Jin Hwan;Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.3 / pp.283-290 / 2016
  • Even though existing methods for orthophoto production in traditional photogrammetry are effective over large areas, they are inefficient for detecting changes in geometric features and producing imagery over short time periods in small areas. In recent years, the UAV (Unmanned Aerial Vehicle), equipped with various sensors, has developed rapidly and been applied in many ways throughout the geospatial information field. Data and imagery of specific areas can be acquired quickly by UAVs at low cost and with frequent updates, and the redundancy of geospatial information data can be minimized in UAV-based orthophoto generation. In this paper, an orthophoto and DEM (Digital Elevation Model) are generated using a standard low-end UAV in small sloped areas, which have rather low accuracy compared with flat areas. The RMSE of the check points is σH = ±0.12 m in the horizontal plane and σV = ±0.09 m in the vertical. As a result, the maximum and mean RMSE satisfy the working rule agreement for the airborne laser scanning surveying of the NGII (National Geographic Information Institute) for a 1/500-scale digital map. Through this study, we verify the feasibility of orthophoto generation in small sloped areas using a general-purpose, low-specification UAV rather than a high-cost surveying UAV.

Design and Performance Evaluation of Low-Temperature Vacuum Blackbody System (저온-진공 흑체시스템의 설계 및 성능 평가)

  • Kim, Ghiseok;Chang, Ki Soo;Lee, Sang-Yong;Kim, Geon-Hee;Kim, Dong-Ik
    • Journal of the Korean Society for Nondestructive Testing / v.33 no.4 / pp.336-341 / 2013
  • In this paper, the design concept of a low-temperature vacuum blackbody is described, and a thermophysical model of the blackbody is evaluated numerically. The working performance of the low-temperature vacuum blackbody was also evaluated using an infrared camera system. The blackbody system was constructed to operate under high-vacuum conditions (2.67 × 10⁻² Pa) to reduce the temperature uncertainty caused by vapor condensation at low temperatures, usually below 273 K. In addition, a heat sink and a heat shield, including a cold shield, were installed around the radiator to prevent heat loss from the blackbody. A simplified mathematical model of the blackbody radiator was analyzed using a modified Stefan-Boltzmann rule, and the infrared radiant performance of the blackbody was evaluated using an infrared camera. Based on the measurement and simulation results, the temperature stability of the low-temperature vacuum blackbody demonstrated that the system can serve as a highly stable reference source for the calibration of an infrared optical system.
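
As a baseline for the "modified Stefan-Boltzmann rule" mentioned above, the unmodified law gives the radiator's exitance directly; the snippet below (emissivity value assumed) shows the scale on which small temperature instabilities matter.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(T_kelvin, emissivity=0.99):
    """Unmodified baseline M = eps * sigma * T^4; the paper's modified
    rule adds corrections (e.g., surroundings, cavity effects) not shown."""
    return emissivity * SIGMA * T_kelvin**4

# Near 250 K the radiator emits about 220 W/m^2, and a 0.1 K drift shifts
# that by only ~0.35 W/m^2 -- the scale on which stability is judged.
print(radiant_exitance(250.0))
```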