• Title/Summary/Keyword: Rule-based approach

Search Results: 545

Axisymmetric vibration analysis of a sandwich porous plate in thermal environment rested on Kerr foundation

  • Zhang, Zhe;Yang, Qijian;Jin, Cong
    • Steel and Composite Structures
    • /
    • v.43 no.5
    • /
    • pp.581-601
    • /
    • 2022
  • The main objective of this research is to investigate the free vibration behavior of annular sandwich plates resting on the Kerr foundation under thermal conditions. The sandwich configuration is composed of two FGM face sheets serving as coating layers and a porous GPLRC (GPL-reinforced composite) core. The GPL nanofillers and the porosity coefficient are assumed to vary continuously along the core thickness direction. To model the closed-cell FG porous material reinforced with GPLs, Halpin-Tsai micromechanical modeling in conjunction with a Gaussian random field scheme is used, while the Poisson's ratio and density are computed by the rule of mixtures. The material properties of the two FGM face sheets also change continuously through the thickness according to a power-law distribution. To capture the fundamental frequencies of the annular sandwich plate resting on the Kerr foundation in a thermal environment, the analysis employs Reddy's higher-order shear deformation plate theory (HSDT) to derive and solve the equations of motion and boundary conditions. The governing equations and associated boundary conditions are discretized using the generalized differential quadrature (GDQ) method in the spatial domain. Numerical results are compared with those published in the literature to verify the accuracy and validity of the present approach. A parametric solution for the temperature variation across the thickness of the sandwich plate is employed, taking into account the thermal conductivity, the inhomogeneity parameter, and the sandwich scheme. The numerical results indicate the influence of the volume fraction index, GPL volume fraction, porosity coefficient, the three independent coefficients of the Kerr elastic foundation, and the temperature difference on the free vibration behavior of the annular sandwich plate. This study provides essential information to engineers seeking innovative ways to apply composite structures in practice.
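The Halpin-Tsai and rule-of-mixtures steps mentioned in the abstract can be sketched as follows; all material values and the geometry parameter xi below are hypothetical placeholders, not the paper's data.

```python
# Illustrative sketch (not the paper's code): Halpin-Tsai estimate of the
# effective Young's modulus of a GPL-reinforced matrix, with density and
# Poisson's ratio from the rule of mixtures.

def halpin_tsai_modulus(E_f, E_m, V_f, xi):
    """Effective modulus for reinforcement geometry parameter xi."""
    eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
    return E_m * (1.0 + xi * eta * V_f) / (1.0 - eta * V_f)

def rule_of_mixtures(p_f, p_m, V_f):
    """Linear rule of mixtures for density or Poisson's ratio."""
    return p_f * V_f + p_m * (1.0 - V_f)

# Hypothetical GPL/matrix values (Pa, kg/m^3, dimensionless)
E_gpl, E_matrix = 1.01e12, 3.0e9
rho_gpl, rho_matrix = 1062.5, 1200.0
nu_gpl, nu_matrix = 0.186, 0.34
V_gpl = 0.01  # GPL volume fraction

E_eff = halpin_tsai_modulus(E_gpl, E_matrix, V_gpl, xi=2.0)
rho_eff = rule_of_mixtures(rho_gpl, rho_matrix, V_gpl)
nu_eff = rule_of_mixtures(nu_gpl, nu_matrix, V_gpl)
```

Even a small GPL volume fraction raises the effective stiffness noticeably, which is the mechanism behind the frequency trends the paper studies.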

Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.127-138
    • /
    • 2011
  • The core service of most research portal sites is providing research papers that match each researcher's interests. This kind of service is effective and easy to use only when a user can provide correct and concrete information about a paper, such as its title, authors, and keywords. Unfortunately, most users of such services are not acquainted with concrete bibliographic information, so they inevitably go through repeated trial and error with keyword-based search. Retrieving a relevant research paper is especially difficult when a user is a novice in the research domain and does not know appropriate keywords. In this case, a user must search iteratively: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) search again with the acquired keywords. This usage pattern implies that a portal site's service quality and user satisfaction depend strongly on its keyword management and search mechanism. To overcome this inefficiency, some leading research portal sites have adopted association rule mining-based keyword recommendation, similar to the product recommendations of online shopping malls. However, keyword recommendation based only on association analysis is limited to showing simple, direct relationships between two keywords; the association analysis itself cannot present the complex relationships among many keywords in adjacent research areas. To overcome this limitation, we propose a hybrid approach for establishing an association network among the keywords used in research papers.
The keyword association network is established in the following phases: i) the set of keywords specified in a paper is regarded as a set of co-purchased items, ii) association analysis is performed on the keywords to extract frequent keyword patterns that satisfy predefined thresholds of confidence, support, and lift, and iii) the frequent keyword patterns are schematized as a network to show the core keywords of each research area and the connecting keywords among two or more research areas. To assess the practical applicability of our approach, we performed a simple experiment with 600 keywords extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used SAS Enterprise Miner for association analysis and the R software for social network analysis. As the final outcome, we present a network diagram and a cluster dendrogram for the keyword association network; the results are summarized in Section 4 of this paper. The main contributions of the proposed approach are the following: i) the keyword network can provide an initial roadmap of a research area to researchers who are new to the domain, ii) a researcher can grasp the distribution of the many keywords neighboring a given keyword, and iii) researchers can get ideas for converging different research areas by observing the connecting keywords in the keyword association network. Further studies should include the following. First, the current version of our approach does not implement a standard meta-dictionary; for practical use, homonym, synonym, and multilingual problems should be resolved with one. Additionally, clearer guidelines for clustering research areas and defining core and connecting keywords should be provided. Finally, intensive experiments on international as well as Korean research papers should be performed.
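The support/confidence/lift computation in phases i)-ii) can be sketched with a toy corpus; the keyword sets below are invented, and a real implementation would use a tool such as SAS Enterprise Miner, as in the paper.

```python
# Toy sketch: each paper's keyword set is a "transaction"; compute
# support, confidence, and lift for keyword pairs.
from itertools import combinations
from collections import Counter

papers = [  # hypothetical keyword sets
    {"data mining", "association rule", "recommendation"},
    {"data mining", "social network", "keyword"},
    {"association rule", "recommendation"},
    {"social network", "keyword", "recommendation"},
]

n = len(papers)
item_count = Counter(k for p in papers for k in p)
pair_count = Counter(frozenset(c) for p in papers
                     for c in combinations(sorted(p), 2))

def metrics(a, b):
    sup = pair_count[frozenset((a, b))] / n   # P(a, b)
    conf = sup / (item_count[a] / n)          # P(b | a)
    lift = conf / (item_count[b] / n)         # P(b | a) / P(b)
    return sup, conf, lift

sup, conf, lift = metrics("data mining", "association rule")
```

Pairs passing the thresholds would then become edges of the keyword network in phase iii), with high-degree nodes read as core keywords and bridging nodes as connecting keywords.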

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.99-118
    • /
    • 2015
  • Object-oriented programming languages are widely used for developing modern information systems. Object-oriented (OO) programming concepts have reduced the effort of reusing pre-existing code and have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support features of object-oriented programming. The Unified Modeling Language (UML) has become a de facto standard for information system designers because it provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider the relationships between classes. Based on an explicit and clear representation of classes, the conceptual model built in UML captures the attributes and methods needed to guide software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. Representing part-whole relationships is natural in real-world domains, since many physical objects are perceived in part-whole terms, and even abstract concepts such as roles are easily identified through part-whole perception. A part-whole representation in UML is therefore reasonable and useful. However, the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregation or a composition. Research on developing such procedural knowledge is meaningful and timely, because misperceptions of part-whole relationships are hard to filter out during initial conceptual modeling and thus degrade system usability. The current method of identifying and classifying part-whole relationships relies mainly on linguistic expressions.
This simple approach is rooted in the idea that a phrase expressing a has-a relation constructs a part-whole perception between objects: if the relationship is strong, the association is classified as a composition; otherwise, it is an aggregation. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not yet accumulated sufficient results to resolve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches, which aim to establish criteria for evaluating the results of conceptual modeling, this study expands the scope to the modeling process itself. Traditional theories on evaluating part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations.
Such qualification remains important, but without a practical alternative, posterior inspection offers limited help to modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. It is also recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of the guidelines, including plugins for integrated development environments.
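The aggregation/composition distinction the guidelines target can be illustrated with a small sketch; the classes below are hypothetical examples, not taken from the study's experiment.

```python
# Composition: the part's lifecycle is bound to the whole.
# Aggregation: the whole merely references independently existing parts.

class Engine:          # part
    def __init__(self, power):
        self.power = power

class Car:             # whole (composition): creates and owns its Engine
    def __init__(self, power):
        self.engine = Engine(power)   # part cannot outlive the whole

class Player:          # part
    def __init__(self, name):
        self.name = name

class Team:            # whole (aggregation): parts exist independently
    def __init__(self, players):
        self.players = list(players)  # shared, externally created parts

alice = Player("Alice")
team_a = Team([alice])
team_b = Team([alice])   # the same Player may belong to two wholes
```

The linguistic test alone ("a car has an engine", "a team has players") reads identically in both cases, which is exactly why a purely has-a heuristic can misclassify the association kind.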

Ligand Based Pharmacophore Identification and Molecular Docking Studies for Grb2 Inhibitors

  • Arulalapperumal, Venkatesh;Sakkiah, Sugunadevi;Thangapandian, Sundarapandian;Lee, Yun-O;Meganathan, Chandrasekaran;Hwang, Swan;Lee, Keun-Woo
    • Bulletin of the Korean Chemical Society
    • /
    • v.33 no.5
    • /
    • pp.1707-1714
    • /
    • 2012
  • Grb2 is an adapter protein involved in signal transduction and cell communication. Grb2 is responsible for initiating kinase signaling through Ras activation, which leads to changes in transcription. A ligand-based pharmacophore approach was applied to build a suitable pharmacophore model for Grb2. The best pharmacophore model was selected based on its statistical values and then validated by Fischer's randomization method and a test set. Hypo1 was selected as the best pharmacophore model based on statistical values such as its high cost difference (182.22), lowest RMSD (1.273), and total cost (80.68). It contains four chemical features: one hydrogen bond acceptor (HBA), two hydrophobic (HY), and one ring aromatic (RA). The Fischer's randomization results also show that Hypo1 has a 95% significance level. The correlation coefficient for the test set was 0.97, close to the training set value (0.94). Hypo1 was therefore used in virtual screening to find potent inhibitors in various chemical databases. The screened compounds were filtered by Lipinski's rule of five and ADMET properties and subjected to molecular docking studies. In total, 11 compounds were selected as the most promising leads from the docking studies, based on the consensus scoring function and critical interactions with the amino acids in the Grb2 active site.
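The Lipinski rule-of-five filtering step can be sketched as follows; the descriptor values and compound names are hypothetical, and a real workflow would compute descriptors with a cheminformatics toolkit.

```python
# Illustrative sketch of the Lipinski "rule of five" filter used to
# screen virtual-screening hits before docking.

def passes_lipinski(mol_weight, logp, h_donors, h_acceptors, max_violations=1):
    """Return True if the compound violates at most `max_violations` rules."""
    violations = sum([
        mol_weight > 500.0,   # molecular weight <= 500 Da
        logp > 5.0,           # logP <= 5
        h_donors > 5,         # H-bond donors <= 5
        h_acceptors > 10,     # H-bond acceptors <= 10
    ])
    return violations <= max_violations

# Hypothetical screened compounds: (MW, logP, HBD, HBA)
hits = {
    "cmpd_A": (342.4, 2.1, 2, 5),
    "cmpd_B": (612.7, 6.3, 4, 12),
}
drug_like = [name for name, d in hits.items() if passes_lipinski(*d)]
```

Only compounds surviving this drug-likeness filter (and ADMET checks) would proceed to the docking stage described in the abstract.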

Enhanced Variable Structure Control With Fuzzy Logic System

  • Charnprecharut, Veeraphon;Phaitoonwattanakij, Kitti;Tiacharoen, Somporn
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.999-1004
    • /
    • 2005
  • An algorithm for a hybrid controller, consisting of a sliding mode control part and a fuzzy logic part, is proposed for nonlinear systems. The sliding mode part of the solution is based on an "eigenvalue/vector"-type controller used in a backstepping approach for tracking errors. The fuzzy logic part is a Mamdani fuzzy model, designed by applying the sliding mode control (SMC) method to the dynamic model. The main objective is to keep the update dynamics in a stable region by using SMC. The plant behavior is then used to train an adaptive neuro-fuzzy inference system (ANFIS); the ANFIS architecture is determined and the relevant formulation for the approach is given. The controller uses the error (e) and the rate of error (de), which arise from the difference between the desired output (yd) and the actual output (y) of the system. A dynamic adaptation law is proposed and proved for the particular chosen form of the adaptation strategy. VSC then creates a sliding mode in the plant behavior while the parameters of the controller are also in a sliding mode (stable trainer). This study considers an ANFIS structure with a first-order Sugeno model containing nine rules; bell-shaped membership functions with the product inference rule are used at the fuzzification level. Finally, a Mamdani fuzzy logic controller based on the ANFIS structure is designed. In the transfer from ANFIS to the Mamdani fuzzy model, the membership functions of the input values (e, de) and the actual output value (y) are changed to trapezoidal and triangular functions by tuning the membership-function parameters and rule base. These adjustments balance the contributions of fuzzy control and variable structure control to the overall control value. As an application example, control of a mass-damper system is considered, with simulations performed in MATLAB.
Three controllers are considered: a backstepping sliding-mode controller, the hybrid controller, and an adaptive backstepping sliding-mode controller. A numerical example is simulated to verify the performance of the proposed control strategy, and the simulation results show that the designed controller is more effective than the adaptive backstepping sliding-mode controller.
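A minimal sketch of sliding mode control applied to a mass-damper plant, in the spirit of the paper's simulation example; the gains, plant parameters, and the use of Python rather than MATLAB are assumptions of this sketch, not the paper's hybrid design.

```python
# Plain SMC regulating a mass-damper system m*x'' + c*x' = u to the origin.
# Sliding surface s = de + lam*e; switching control u = -k*sign(s).

m, c = 1.0, 0.5          # hypothetical mass and damping
lam, k = 2.0, 5.0        # sliding-surface slope and switching gain
dt, steps = 0.001, 20000  # Euler integration for 20 s

x, v = 1.0, 0.0          # initial position and velocity
for _ in range(steps):
    e, de = x, v                                   # error w.r.t. setpoint 0
    s = de + lam * e                               # sliding variable
    u = -k * (1 if s > 0 else -1 if s < 0 else 0)  # switching control
    a = (u - c * v) / m                            # plant dynamics
    v += a * dt
    x += v * dt
```

Once the trajectory reaches the surface s = 0, the state decays like exp(-lam*t) regardless of the damping value; the chattering of the sign-based control is the defect that fuzzy blending, as in the paper's hybrid controller, is meant to soften.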


Identification of Emerging Research at the national level: Scientometric Approach using Scopus (국가적 차원의 유망연구영역 탐색: Scopus 데이터베이스를 이용한 과학계량학적 접근)

  • Yeo, Woon-Dong;Sohn, Eun-Soo;Jung, Eui-Seob;Lee, Chang-Hoan
    • Journal of Information Management
    • /
    • v.39 no.3
    • /
    • pp.95-113
    • /
    • 2008
  • In today's environment, where science and technology change faster than ever, companies must monitor and search for emerging technologies to remain competitive, and many nations attempt the same. Most use a Delphi approach based on expert review as the search method, but expert review has been criticized for its risk of bias and related problems, since it rests solely on experts' subjective judgment. To overcome such problems, we used a scientometric method to identify emerging technologies, a task customarily done by Delphi. We made three particular efforts to improve the quality of the result. First, we chose between the SCI and Scopus databases, selecting the one expected to give evenly distributed results across a wide range of active fields. Second, we used fractional citation counting when counting citations in the linear regression analysis stage. Lastly, we verified the scientometric results against expert opinion to minimize probable errors. As a result, we derived 290 emerging technologies from the scientometric analysis of the Scopus database and visualized them on a two-dimensional map with KnowledgeMatrix, a data mining system developed by KISTI.
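The fractional citation counting step can be sketched as follows; the papers and subject categories below are hypothetical.

```python
# Fractional citation counting: each paper's citations are split equally
# among its subject categories, so multi-category papers do not inflate
# field totals the way whole counting does.
from collections import defaultdict

papers = [  # hypothetical records
    {"citations": 10, "categories": ["Materials", "Physics"]},
    {"citations": 6,  "categories": ["Materials"]},
    {"citations": 3,  "categories": ["Physics", "Chemistry", "Materials"]},
]

field_score = defaultdict(float)
for p in papers:
    share = p["citations"] / len(p["categories"])   # fractional share
    for cat in p["categories"]:
        field_score[cat] += share
```

Under whole counting, "Materials" would receive all 19 citations; fractional counting credits it with 12, giving the more even cross-field distribution the study was aiming for.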

Sentiment Analysis and Issue Mining on All-Solid-State Battery Using Social Media Data (소셜미디어 분석을 통한 전고체 배터리 감성분석과 이슈 탐색)

  • Lee, Ji Yeon;Lee, Byeong-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.10
    • /
    • pp.11-21
    • /
    • 2022
  • All-solid-state batteries are among the promising candidates for next-generation batteries and are drawing attention as a key component of the future electric vehicle industry. This study analyzes 10,280 comments on Reddit, a global social media platform, to identify policy issues and public interest related to all-solid-state batteries from 2016 to 2021. Text mining techniques such as frequency analysis, association rule analysis, and topic modeling, together with sentiment analysis, are applied to the collected global data to grasp global trends, compare them with the South Korean government's all-solid-state battery development strategy, and suggest policy directions for national research and development. Overall sentiment toward all-solid-state battery issues was positive, with 50.5% positive and 39.5% negative comments. The detailed emotion analysis found that the public has trust in and expectations for all-solid-state batteries, although feelings of concern about unresolved problems coexist. This study makes an academic and practical contribution by presenting a text mining method for deriving key issues related to all-solid-state batteries, and a more comprehensive trend analysis that employs both a top-down approach based on government policy analysis and a bottom-up approach that analyzes public perception.
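A toy sketch of lexicon-based sentiment labeling in the spirit of the study's analysis; the lexicon and comments below are invented stand-ins, and the paper's actual pipeline is more sophisticated.

```python
# Count positive vs. negative lexicon hits per comment and label each
# comment by the majority polarity.

POS = {"promising", "trust", "breakthrough", "great"}   # hypothetical lexicon
NEG = {"concern", "delay", "expensive", "problem"}

comments = [  # hypothetical stand-ins for Reddit comments
    "solid state batteries look promising, a real breakthrough",
    "still a concern about cost, too expensive today",
    "great progress but the timeline is a problem",
]

def label(comment):
    words = set(comment.lower().replace(",", " ").split())
    pos, neg = len(words & POS), len(words & NEG)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

labels = [label(c) for c in comments]
pos_share = labels.count("positive") / len(labels)
```

Aggregating such labels over the whole corpus yields the positive/negative shares (50.5% vs. 39.5% in the study) that summarize public sentiment.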

Wild Boar (Sus scrofa coreanus Heude) Habitat Modeling Using GIS and Logistic Regression (GIS와 로지스틱 회귀분석을 이용한 멧돼지 서식지 모형 개발)

  • 서창완;박종화
    • Spatial Information Research
    • /
    • v.8 no.1
    • /
    • pp.85-99
    • /
    • 2000
  • Accurate information on the habitat distribution of protected fauna is essential for habitat management in Korea, a country under very high development pressure. The objectives of this study were to develop a habitat suitability model for wild boar based on GIS and logistic regression, to create a habitat distribution map, and to prepare a basis for managing the habitats of Korea's endangered and protected species. The modeling process consisted of the following steps. First, a GIS database of environmental factors related to the use and availability of wild boar habitat was built, with wild boar locations collected by radio telemetry and GPS. Second, the environmental factors affecting wild boar habitat use and availability were identified through chi-square tests. Third, a habitat suitability model based on logistic regression was developed, and the validity of the model was tested. Finally, a habitat assessment map was created using a rule-based approach. The results were as follows. First, distinct differences in wild boar habitat use by season and habitat type were found, but no differences by sex or activity type. Second, habitat availability analysis showed that elevation, aspect, forest type, and forest age were significant natural environmental factors affecting wild boar habitat selection, while the effects of slope, ridge/valley, water, and solar radiation could not be identified. Finally, the habitat map was produced at a cutoff value of 0.5; model validation showed classification accuracies of 73.07% for total habitat and 80.00% for cover habitat at the inside validation site, and 75.00% for total habitat at the outside validation site.
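The logistic-regression scoring and 0.5-cutoff classification can be sketched as follows; the coefficients and cell attributes are hypothetical, not the study's fitted model.

```python
# A fitted logistic model scored over environmental factors per map cell,
# thresholded at 0.5 to label cells as suitable habitat.
import math

# Hypothetical coefficients for (intercept, elevation_km, forest_age_yr)
B = (-2.0, 3.5, 0.04)

def habitat_probability(elev_km, forest_age):
    z = B[0] + B[1] * elev_km + B[2] * forest_age
    return 1.0 / (1.0 + math.exp(-z))   # logistic link

cells = [(0.8, 40), (0.1, 5)]           # (elevation, forest age) per cell
suitable = [habitat_probability(e, a) >= 0.5 for e, a in cells]
```

Applying this per-cell scoring across the GIS raster produces the habitat distribution map, and comparing predicted labels against telemetry locations gives the classification accuracies reported above.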


Dynamic traffic assignment based on arrival time-based OD flows (도착시간 기준 기종점표를 이용한 동적통행배정)

  • Kim, Hyeon-Myeong
    • Journal of Korean Society of Transportation
    • /
    • v.27 no.1
    • /
    • pp.143-155
    • /
    • 2009
  • Dynamic traffic assignment (DTA) has recently been applied in many practical projects. The core of a dynamic model is the inclusion of the time dimension; if the time dimension is excluded, the framework of a DTA model is similar to that of a static model. Like a static model, given exogenous travel demand, a DTA model loads vehicles onto the network and finds an optimal solution satisfying a predefined route-choice rule. In most DTA models, the departure pattern of the given travel demand is predefined and assumed fixed, although drivers' departure patterns change with network traffic conditions. Especially for the morning peak commute, where most drivers have a preferred arrival time, departure time should be modeled as an endogenous variable. In this paper, the authors point out some shortcomings of current DTA models and propose an alternative approach to overcome them. The traditional definition of the time-dependent OD table is replaced by a new one in which the table is defined by arrival time. In addition, the authors develop a new DTA model that can find an equilibrium departure pattern without using schedule delay functions. Three types of objective function for the new DTA framework are proposed, and solution algorithms for the three objective functions are explained.
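The arrival-time-based OD idea can be sketched minimally: given desired arrival times and current travel times, departure times become derived quantities rather than fixed inputs. The OD pairs and times below are hypothetical.

```python
# Arrival-time-based OD table: each OD pair carries a desired arrival
# time; departures are backed out from current network travel times.

od_arrivals = {           # (origin, destination) -> desired arrival, minutes
    ("A", "B"): 540,      # arrive at 9:00
    ("A", "C"): 510,      # arrive at 8:30
}
travel_time = {("A", "B"): 35, ("A", "C"): 50}   # current network condition

departures = {od: od_arrivals[od] - travel_time[od] for od in od_arrivals}
```

As travel times change during equilibration, the implied departures shift while the arrival-time targets stay fixed, which is how departure time becomes endogenous without a schedule delay function.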

Improved Sentence Boundary Detection Method for Web Documents (웹 문서를 위한 개선된 문장경계인식 방법)

  • Lee, Chung-Hee;Jang, Myung-Gil;Seo, Young-Hoon
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.6
    • /
    • pp.455-463
    • /
    • 2010
  • In this paper, we present an approach to sentence boundary detection for web documents that builds on statistical methods and uses rule-based correction. The proposed system uses a classification model learned offline from a training set of human-labeled web documents. Web documents contain many word-spacing errors and frequently lack the punctuation marks that signal sentence boundaries. As sentence boundary candidates, the proposed method therefore considers every sentence-final ending (eomi) as well as punctuation marks. We optimized engine performance by selecting the best features, the best training data, and the best classification algorithm. For evaluation, we built two test sets: Set1, consisting of articles and blog documents, and Set2, consisting of web community documents, and used the F-measure to compare results. Detecting only periods as sentence boundaries, our baseline engine scored 96.5% on Set1 and 56.7% on Set2. We improved the baseline by adapting the features and the boundary-search algorithm. For the final evaluation, we compared the adapted engine with the baseline on Set2; the adapted engine improved on the baseline by 39.6%, demonstrating the effectiveness of the proposed method for sentence boundary detection.
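A simplified sketch of candidate-based boundary detection with rule-based correction; the ending list and abbreviation exceptions below are illustrative, not the paper's learned model.

```python
# Split text at boundary candidates (punctuation and, for Korean,
# sentence-final endings), then suppress splits at known abbreviations.

ABBREV = {"e.g.", "i.e.", "Dr."}         # hypothetical exception list
ENDINGS = ("다.", "요.", ".", "!", "?")  # boundary candidates

def split_sentences(text):
    out, buf = [], []
    for tok in text.split(" "):
        buf.append(tok)
        if tok.endswith(ENDINGS) and tok not in ABBREV:
            out.append(" ".join(buf))    # close the sentence here
            buf = []
    if buf:
        out.append(" ".join(buf))        # trailing fragment
    return out

sents = split_sentences("Dr. Lee wrote this. It works! Really?")
```

In the paper's system, the accept/reject decision at each candidate is made by a trained classifier over contextual features rather than a fixed exception list, which is what makes it robust to the spacing errors of web text.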