• Title/Summary/Keyword: Rule-Based Model


Rule-Inferring Strategies for Abductive Reasoning in the Process of Solving an Earth-Environmental Problem (지구환경적 문제 해결 과정에서 귀추적 추론을 위한 규칙 추리 전략들)

  • Oh, Phil-Seok
    • Journal of The Korean Association For Science Education
    • /
    • v.26 no.4
    • /
    • pp.546-558
    • /
    • 2006
  • The purpose of this study was to identify heuristically how abduction was used in the context of solving an earth-environmental problem. Thirty-two groups of participants with different institutional backgrounds, i.e., inservice earth science teachers, preservice science teachers, and high school students, solved an open-ended earth-environmental problem and produced group texts in which their ways of solving the problem were written. The inferential processes in the texts were rearranged according to the syllogistic form of abduction and then analyzed iteratively so as to find the thinking strategies used in the abductive reasoning. The results showed that abduction was employed in the process of solving the earth-environmental problem and that several thinking strategies were used for inferring the rules from which abductive conclusions were drawn. The strategies found included data reconstruction, chained abduction, adapting novel information, model construction and manipulation, causal combination, elimination, case-based analogy, and the existential strategy. It was suggested that abductive problems could be used to enhance students' thinking abilities and their understanding of the nature of earth science and earth-environmental problems.

A Study on the Reduction of the Vibration in PKM Using a Propeller Damper (프로펠러 감쇄기를 이용한 고속정 진동 감소방안 연구)

  • Kim, Hye-Jin;Lee, Heun-Hwa;Seong, Woo-Jae;Pyo, Sang-Woo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.3
    • /
    • pp.103-110
    • /
    • 2008
  • Strategically, the Patrol Killer Medium (PKM) must run at high speed, which produces large propeller-induced vibration; consequently, the vibration gradually deteriorates the crew's working conditions and increases the possibility of SONAR detection. In this paper, we propose a propeller damper, one way to reduce the vibration induced by the propeller, and simulate the ability of the damper numerically. The propeller damper was designed to be applied to an isolated plate at the bottom flat board of the ship, which is directly affected by the fluctuating pressure. The dynamic pressure for the stern part of the PKM is calculated using the DnV rule, and the numerical analysis, with and without the propeller damper, is performed with ANSYS on the simplified isolated plate. From the analysis, the damping effect of the proposed propeller damper is confirmed, and the reduction ratio for each compartment is estimated based on the experimental data from the PKM.

Image Analysis Fuzzy System

  • Abdelwahed Motwakel;Adnan Shaout;Anwer Mustafa Hilal;Manar Ahmed Hamza
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.1
    • /
    • pp.163-177
    • /
    • 2024
  • Fingerprint image quality relies on the clearness of ridges separated by valleys and the uniformity of that separation. The condition of the skin still dominates the overall quality of the fingerprint, and the identification performance of such a system is very sensitive to the quality of the captured fingerprint image. Fingerprint image quality analysis and enhancement are therefore useful in improving the performance of fingerprint identification systems. A fuzzy technique is introduced in this paper for both fingerprint image quality analysis and enhancement. First, quality analysis is performed by extracting four features from a fingerprint image: the local clarity score (LCS), global clarity score (GCS), ridge-valley thickness ratio (RVTR), and global contrast factor (GCF). A fuzzy logic technique that uses the Mamdani fuzzy rule model is designed. The fuzzy inference system is able to analyse and determine the fingerprint image type (oily, dry, or neutral) based on the extracted feature values and the fuzzy inference rules. The accuracy of the test fuzzy inference system for each type is as follows: 81.33% for dry fingerprints, 54.75% for oily, and 68.48% for neutral. Secondly, fuzzy morphology is applied to enhance the dry and oily fingerprint images. The fuzzy morphology method improves the quality of a fingerprint image, thus improving the performance of the fingerprint identification system significantly. All experimental work for both quality analysis and image enhancement was done using the DB_ITS_2009 database, a private database collected by the Department of Electrical Engineering, Institute of Technology Sepuluh Nopember, Surabaya, Indonesia. The performance evaluation was done using the Feature Similarity Index (FSIM), an image quality assessment (IQA) metric that uses computational models to measure image quality consistently with subjective evaluations. The proposed system outperformed the classical system by 900% for the dry fingerprint images and 14% for the oily fingerprint images.
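The Mamdani-style inference described above can be sketched in miniature. The paper's actual rule base, membership functions, and thresholds are not given in the abstract, so the triangular membership parameters below, and the restriction to two of the four features (RVTR and GCF), are hypothetical stand-ins; a full Mamdani system would also include defuzzification:

```python
# Minimal Mamdani-style fuzzy classifier sketch (hypothetical membership
# parameters; the paper's actual rule base and thresholds are not given).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(rvtr, gcf):
    # Fuzzify two of the four features (RVTR, GCF) into low/high sets.
    low_rvtr  = tri(rvtr, -0.5, 0.0, 0.5)
    high_rvtr = tri(rvtr,  0.5, 1.0, 1.5)
    low_gcf   = tri(gcf,  -0.5, 0.0, 0.5)
    high_gcf  = tri(gcf,   0.5, 1.0, 1.5)
    # Mamdani rules: AND = min, rule aggregation = max over rule strengths.
    strength = {
        "dry":     min(high_rvtr, high_gcf),   # thin ridges, high contrast
        "oily":    min(low_rvtr, low_gcf),     # merged ridges, low contrast
        "neutral": min(tri(rvtr, 0.2, 0.5, 0.8), tri(gcf, 0.2, 0.5, 0.8)),
    }
    return max(strength, key=strength.get)

print(classify(0.9, 0.9))   # -> dry
```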

Analysis of Human Body Suitability for Mattresses by Using the Level of PsychoPhysiological Relaxation and Development of Regression Model

  • Min, Seung Nam;Kim, Jung Yong;Kim, Dong Joon;Park, Yong Duck;Kim, Seoung Eun;Lee, Ho Sang
    • Journal of the Ergonomics Society of Korea
    • /
    • v.34 no.3
    • /
    • pp.199-215
    • /
    • 2015
  • Objective: The purpose of this study is to find the level of physical relaxation of individual subjects by monitoring psychophysiological biofeedback on different types of mattresses, and to find a protocol for selecting the best mattress based on the measured information. Background: In Korea, an increasing number of people use western-style beds. However, they are often fastidious in choosing the right mattress. In fact, people rely on their past experience with their old mattress, as well as the spontaneous experience they encounter in a show room, to decide which bed to buy. Method: A total of five mattresses were tested in this study. After measuring the elasticity of the mattresses, they were sorted into five different classes. Physiological and psychological variables including electromyography (EMG), heart rate (HR), and oxygen saturation (SaO2) were used. In addition, the peak body pressure concentration rate was used to find uncomfortably pressured body parts. Finally, personal factors and subjective satisfaction were also examined. A protocol was made to select the best mattress for each individual subject; the selection rule for the protocol considered all the variables tested in this study. Results: The study yielded a psychological comfort range of 0.68 to 0.95, a dermal comfort range of 3.15 to 6.07, a back muscle relaxation range of 0.25 to 1.64, and a personal habit range of 2.0 to 3.4. A regression model was also developed to predict biofeedback with minimal use of biofeedback devices. Moreover, results from the proposed protocol with the regression equation and subjective satisfaction were compared with each other for validation. Ten out of twenty subjects recorded the same level of relaxation, eight subjects showed a one-level difference, and two subjects showed a two-level difference. Conclusion: The psychophysiological variables and suitability selection process used in this study can be used for selecting and assessing ergonomic products mechanically or emotionally. Application: This regression model can be applied in the mattress industry to estimate back muscle relaxation from dermal, psychophysiological, and personal habit values.
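The kind of regression described in the Application section can be illustrated with a small sketch. The feature rows, the exact linear rule used to generate the synthetic responses, and the `predict` helper are all invented for illustration; the paper's actual data and coefficients are not given in the abstract:

```python
import numpy as np

# Illustrative only: back-muscle relaxation is generated here from an exact
# linear rule (0.2*dermal + 0.5*psychological) so the fit is fully recoverable.
X = np.array([
    [3.2, 0.70, 2.1],   # dermal comfort, psychological comfort, personal habit
    [4.8, 0.85, 2.8],
    [6.0, 0.92, 3.3],
    [3.9, 0.75, 2.4],
    [5.5, 0.90, 3.0],
])
y = 0.2 * X[:, 0] + 0.5 * X[:, 1]     # synthetic back-muscle relaxation

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(dermal, psych, habit):
    return coef[0] + coef[1] * dermal + coef[2] * psych + coef[3] * habit

print(round(predict(5.0, 0.88, 2.9), 2))   # -> 1.44
```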

Doubly-robust Q-estimation in observational studies with high-dimensional covariates (고차원 관측자료에서의 Q-학습 모형에 대한 이중강건성 연구)

  • Lee, Hyobeen;Kim, Yeji;Cho, Hyungjun;Choi, Sangbum
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.3
    • /
    • pp.309-327
    • /
    • 2021
  • Dynamic treatment regimes (DTRs) are decision-making rules designed to provide personalized treatment to individuals in multi-stage randomized trials. Unlike classical methods, in which all individuals are prescribed the same type of treatment, DTRs prescribe patient-tailored treatments that take into account individual characteristics that may change over time. The Q-learning method, one of the regression-based algorithms for finding optimal treatment rules, has become popular because it is easy to implement. However, the performance of the Q-learning algorithm relies heavily on correct specification of the Q-function for the response, especially in observational studies. In this article, we examine a number of doubly-robust weighted least-squares estimating methods for Q-learning in high-dimensional settings, where treatment models for the propensity score and penalization for sparse estimation are also investigated. We further consider flexible ensemble machine learning methods for the treatment model to achieve double robustness, so that the optimal decision rule can be correctly estimated as long as at least one of the outcome model or the treatment model is correct. Extensive simulation studies show that the proposed methods work well with practical sample sizes. The practical utility of the proposed methods is demonstrated with a real-data example.
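A single-stage sketch of the core idea, fitting a propensity model first and then estimating the Q-function by inverse-propensity-weighted least squares, might look as follows. The data-generating process, the Newton-based logistic fit, and the blip-based decision rule are illustrative assumptions, not the authors' estimator (which also handles penalization and ensemble treatment models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-stage observational data (illustrative; not the paper's).
n = 500
x = rng.normal(size=n)                       # covariate
p_true = 1 / (1 + np.exp(-x))                # true propensity P(A=1|x)
a = rng.binomial(1, p_true)                  # observed treatment
y = 1 + x + a * (0.5 + x) + rng.normal(scale=0.5, size=n)   # outcome

# Step 1: fit a logistic propensity model via Newton iterations.
def fit_logistic(X, t, iters=25):
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ b))
        W = mu * (1 - mu)
        b = b + np.linalg.solve((X * W[:, None]).T @ X, X.T @ (t - mu))
    return b

Xp = np.column_stack([np.ones(n), x])
phat = 1 / (1 + np.exp(-Xp @ fit_logistic(Xp, a)))

# Step 2: inverse-propensity-weighted least squares for the Q-function
# Q(x, a) = b0 + b1*x + a*(c0 + c1*x); the weights correct for confounding.
w = a / phat + (1 - a) / (1 - phat)
D = np.column_stack([np.ones(n), x, a, a * x])
sw = np.sqrt(w)
theta, *_ = np.linalg.lstsq(D * sw[:, None], y * sw, rcond=None)

# Step 3: the estimated optimal rule treats when the "blip" c0 + c1*x > 0.
c0, c1 = theta[2], theta[3]

def rule(xi):
    return int(c0 + c1 * xi > 0)

print(rule(1.0), rule(-2.0))   # treat for x = 1.0, withhold for x = -2.0
```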

A Study for BIM based Evaluation and Process for Architectural Design Competition -Case Study of Domestic and International BIM-based Competition (BIM기반의 건축설계경기 평가 및 절차에 관한 연구 -국내외 BIM기반 건축설계경기 사례를 기반으로-)

  • Park, Seung-Hwa;Hong, Chang-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.2
    • /
    • pp.23-30
    • /
    • 2017
  • In the AEC (Architecture, Engineering and Construction) industry, BIM (Building Information Modeling) technology not only helps express design intent efficiently, but also realizes an object-oriented design including the building's life-cycle information. Thus it can manage all data created in each building stage, and the roles of BIM are greatly expanded. Contractors and designers have been trying to adopt BIM in design competitions and to validate it for the best result in various aspects. Via computational simulation, which differs from the existing process, effective evaluation can be done. This process requires a modeling guideline for each kind of BIM tool and a validation system for confidential assessment. This paper explains a new process for design evaluation methods and procedures using BIM technologies, following the new paradigm in the construction industry, through points for improvement drawn from the competition for the Korea Power Exchange (KPX) headquarters office. In conclusion, this paper provides a basic data-input guideline based on open BIM for automatic assessment and interoperability between different BIM systems, and suggests a practical usage of the rule-based Model Checker.

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the user's future context correctly is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, if only because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, and to increase accuracy and decrease elapsed time for service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using context history. Then a pattern, consisting of the results of reasoning with the individual rules, is developed for pattern learning. If at least one context property matches, say R, then the pattern is regarded as right. If the pattern is new, add it as a right pattern, set the value of mismatched properties to 0, set freq = 1, and set w(R, 1). Otherwise, increase the frequency of the matched right pattern by 1 and then set w(R, freq). After training, if the frequency is greater than a threshold value, save the right pattern in the knowledge base. On the other hand, if at least one context property mismatches, say W, then the pattern is regarded as wrong. If the pattern is new, record the result as a wrong answer, add it as a wrong pattern, and set the frequency to 1 and w(W, 1). Otherwise, increase the matched wrong pattern's frequency by 1 and then set w(W, freq). After training, if the frequency value is greater than a threshold level, save the wrong pattern in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context. Second, find matched patterns among the right patterns. If no pattern matches, find a matching pattern among the wrong patterns. If a matching pattern is still not found, choose the one context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context history from travelers who had visited the largest amusement park in Korea. As a result, 400 context records were collected in 2009. We then randomly selected 70% of the records as training data; the rest were used as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and the performance was compared with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time, which shows that the methodology is relatively valid and scalable. As a second round of the experiment, we compared a full model to a partial model. A full model uses both right and wrong patterns for reasoning about the future context, whereas a partial model performs the reasoning only with right patterns, as is generally done in the legacy alignment-prediction method. It turned out that the full model is better in terms of accuracy, while the partial model is better when considering elapsed time. As a last experiment, we took into consideration potential privacy problems that might arise among the users. To mitigate such concerns, we excluded context properties such as date of tour and user-profile attributes such as gender and age. The outcome shows that the cost of preserving privacy is endurable. The contributions of this paper are as follows: first, academically, we have improved sequential matching methods in prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system successfully applying the proposed method can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Our future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users in order to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
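The right/wrong pattern bookkeeping described above can be sketched minimally. The pattern encoding (a tuple of per-attribute rule outputs), the threshold value, and the toy context attributes are hypothetical, standing in for the paper's freq/w(R, freq) machinery:

```python
from collections import defaultdict

# Simplified sketch of the right/wrong pattern store: a pattern is a tuple of
# per-attribute rule outputs; "right" patterns predicted the next context
# correctly, "wrong" patterns did not. Only patterns whose frequency reaches a
# threshold survive into the knowledge base.
THRESHOLD = 2   # hypothetical frequency cutoff for saving a pattern

right = defaultdict(int)
wrong = defaultdict(int)

def train(pattern, predicted, actual):
    # Increment the frequency of the matched right or wrong pattern.
    (right if predicted == actual else wrong)[pattern] += 1

def knowledge_base():
    # Keep only patterns whose frequency reached the threshold.
    return ({p for p, f in right.items() if f >= THRESHOLD},
            {p for p, f in wrong.items() if f >= THRESHOLD})

# Toy history: ("sunny", "weekend") predicted "crowded" correctly twice,
# while ("rainy", "weekday") predicted "crowded" wrongly twice.
train(("sunny", "weekend"), "crowded", "crowded")
train(("sunny", "weekend"), "crowded", "crowded")
train(("rainy", "weekday"), "crowded", "quiet")
train(("rainy", "weekday"), "crowded", "quiet")

r, w = knowledge_base()
print(r)   # -> {('sunny', 'weekend')}
print(w)   # -> {('rainy', 'weekday')}
```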

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.6 no.4
    • /
    • pp.1-7
    • /
    • 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
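The representation-switching rule can be sketched as a simple selector. The distance thresholds below are hypothetical stand-ins for the perceptual criterion (the distance at which the object's internal depth becomes noticeable) described above:

```python
# Distance-based LOD selection as described above: 3D model up close, billboard
# at mid range, environment map far away; during interaction the 3D model is
# used regardless of distance, if one exists. Thresholds are illustrative.

MODEL_RANGE, BILLBOARD_RANGE = 10.0, 50.0   # metres, hypothetical

def select_lod(distance, interacting=False, has_model=True):
    if interacting and has_model:
        return "3d_model"            # interaction always prefers real geometry
    if distance < MODEL_RANGE and has_model:
        return "3d_model"            # internal depth is perceptible here
    if distance < BILLBOARD_RANGE:
        return "billboard"
    return "environment_map"

print(select_lod(5.0))                       # -> 3d_model
print(select_lod(30.0))                      # -> billboard
print(select_lod(100.0))                     # -> environment_map
print(select_lod(100.0, interacting=True))   # -> 3d_model
```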


Scalable RDFS Reasoning using Logic Programming Approach in a Single Machine (단일머신 환경에서의 논리적 프로그래밍 방식 기반 대용량 RDFS 추론 기법)

  • Jagvaral, Batselem;Kim, Jemin;Lee, Wan-Gon;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.41 no.10
    • /
    • pp.762-773
    • /
    • 2014
  • As the web of data increasingly produces large RDFS datasets, it becomes essential to build scalable reasoning engines over large triple sets. Many studies have used expensive distributed frameworks, such as Hadoop, to reason over large RDFS triples. However, in many cases we are required to handle only millions of triples. In such cases, it is not necessary to deploy expensive distributed systems, because logic-program-based reasoners on a single machine can produce reasoning performance similar to that of a distributed reasoner using Hadoop. In this paper, we propose a scalable RDFS reasoner using logic programming methods on a single machine and compare our empirical results with those of distributed systems. We show that our logic-programming-based reasoner on a single machine performs as well as an expensive distributed reasoner does, up to 200 million RDFS triples. In addition, we designed a metadata structure by decomposing the ontology triples into separate sectors. Instead of loading all the triples into a single model, we selected an appropriate subset of the triples for each ontology reasoning rule. Unification makes it easy to handle conjunctive queries for RDFS schema reasoning; therefore, we designed and implemented the RDFS axioms using logic programming unification and efficient conjunctive query handling mechanisms. The throughput of our approach reached 166K triples/sec over LUBM1500 with 200 million triples. This is comparable to that of WebPIE, a distributed reasoner using Hadoop and MapReduce, which performs at 185K triples/sec. We show that it is unnecessary to use a distributed system for up to 200 million triples, and that the performance of a logic-programming-based reasoner on a single machine is comparable to that of an expensive distributed reasoner employing the Hadoop framework.
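For illustration, two of the RDFS entailment rules such a reasoner must implement (subclass transitivity, rdfs11, and type propagation, rdfs9) can be sketched as a naive forward-chaining loop. The paper's actual reasoner uses logic-programming unification and per-rule triple subsets rather than this brute-force fixpoint iteration:

```python
# Tiny forward-chaining sketch of two RDFS entailment rules over triples:
#   rdfs11: (A subClassOf B), (B subClassOf C) -> (A subClassOf C)
#   rdfs9 : (x type A),       (A subClassOf B) -> (x type B)

SCO, TYPE = "rdfs:subClassOf", "rdf:type"

def closure(triples):
    facts = set(triples)
    changed = True
    while changed:                     # iterate to a fixpoint
        changed = False
        new = set()
        for s, p, o in facts:
            for s2, p2, o2 in facts:
                if p == SCO and p2 == SCO and o == s2:
                    new.add((s, SCO, o2))     # rdfs11
                if p == TYPE and p2 == SCO and o == s2:
                    new.add((s, TYPE, o2))    # rdfs9
        if not new <= facts:
            facts |= new
            changed = True
    return facts

kb = {("Student", SCO, "Person"), ("Person", SCO, "Agent"),
      (":alice", TYPE, "Student")}
inferred = closure(kb)
print((":alice", TYPE, "Agent") in inferred)   # -> True
```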

Assessing the Positioning Accuracy of High density Point Clouds produced from Rotary Wing Quadrocopter Unmanned Aerial System based Imagery (회전익 UAS 영상기반 고밀도 측점자료의 위치 정확도 평가)

  • Lee, Yong Chang
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.23 no.2
    • /
    • pp.39-48
    • /
    • 2015
  • Lately, Unmanned Aerial Vehicles (UAVs) and Unmanned Aerial Systems (UAS), also often known as drones, are becoming attractive as data acquisition platforms and measurement instruments for many photogrammetric surveying applications, especially the generation of high-density point clouds (HDPC). This paper presents the performance evaluation of a low-cost rotary-wing quadrocopter UAS for generation of the HDPC in a test-bed environment. Its performance was assessed by comparing the coordinates of the UAS-based HDPC with the results of Network RTK GNSS surveying at 62 ground check points. The results indicate that the position RMSE of the check points is σ_H = ±0.102m in the horizontal plane and σ_V = ±0.209m in the vertical, and that the maximum deviation of elevation was 0.570m within the block area of the ortho-photo mosaic. The required level of accuracy at the NGII for production of ortho-image mosaics at a scale of 1:1,000 was therefore reached, and UAS-based imagery was found usable for updating 1:1,000-scale maps. Also, since these results are less than or equal to the level required by the NGII's working-rule agreement for airborne laser scanning surveying, for Digital Elevation Model generation on 1m × 1m grids at 1:1,000 scale, the approach could be applied to the production of topographic maps and ortho-image mosaics at scales of 1:1,000 to 1:2,500 over small areas.
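The horizontal and vertical RMSE figures quoted above are computed from per-check-point residuals in the standard way; a sketch with toy residuals (not the paper's data) is:

```python
import math

# Horizontal and vertical RMSE of check points, as in the accuracy assessment
# above. Residuals are (dE, dN, dH) per check point, in metres; toy values.
residuals = [
    (0.05, -0.08, 0.15),
    (-0.10, 0.06, -0.20),
    (0.07, 0.09, 0.25),
]

n = len(residuals)
# Horizontal RMSE combines the easting and northing components.
rmse_h = math.sqrt(sum(de**2 + dn**2 for de, dn, _ in residuals) / n)
# Vertical RMSE uses the height component alone.
rmse_v = math.sqrt(sum(dh**2 for *_, dh in residuals) / n)
max_dev_v = max(abs(dh) for *_, dh in residuals)

print(round(rmse_h, 3), round(rmse_v, 3), max_dev_v)   # -> 0.109 0.204 0.25
```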