• Title/Summary/Keyword: Knowledgebase

Search Results: 29

The Optimization of Knowledgebase for Swimming Pool Temperature Control Systems using Genetic Algorithms (Genetic 알고리즘을 이용한 풀 온도 제어 시스템의 지식베이스 최적화)

  • Kim, Seong-Hak
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.3
    • /
    • pp.319-326
    • /
    • 1994
  • Automatic control has mostly been applied to linear systems, where the control problem can be approximately formalized. When a mathematical model of the controlled object cannot be clearly established, manual control strategies that rely on human judgment are required. In this paper, we construct an FLC (Fuzzy Logic Controller) to replace manual control with automatic control in the domain of swimming pool temperature control, which has been almost entirely dependent on a skilled operator's experience. Genetic algorithms refine the knowledge acquired from the human expert and used by the FLC, so that the knowledge base is kept close to optimal. We also design an algorithm that modifies the rule base and the membership functions simultaneously, and we show that it can achieve better results than human operators. (An illustrative sketch of such a genetic tuning loop follows this entry.)

  • PDF
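
The abstract above describes tuning an FLC's rule base and membership functions with a genetic algorithm. The sketch below is not the paper's implementation; it is a minimal Python illustration of evolving the breakpoints of triangular membership functions against a toy pool-temperature model, where the chromosome encoding, fitness function, and GA parameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tri(x, pts):
    """Membership degree of x in a triangular fuzzy set; breakpoints are sorted."""
    a, b, c = sorted(pts)
    if x <= a or x >= c or b == a or b == c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flc_output(error, chrom):
    """Toy one-input FLC: three fuzzy sets over the temperature error, each
    firing a heater level; the output is their weighted average."""
    sets, levels = chrom[:9].reshape(3, 3), chrom[9:]
    mu = np.array([tri(error, s) for s in sets])
    return float(mu @ levels / mu.sum()) if mu.sum() > 0 else 0.0

def fitness(chrom, setpoint=28.0, steps=200):
    """Negative accumulated tracking error on a crude pool thermal model."""
    temp, cost = 20.0, 0.0
    for _ in range(steps):
        heat = float(np.clip(flc_output(setpoint - temp, chrom), 0.0, 1.0))
        temp += 0.05 * heat - 0.01 * (temp - 15.0)   # heating minus heat loss
        cost += abs(setpoint - temp)
    return -cost

def evolve(pop_size=40, gens=60, dims=12):
    """Simple GA: tournament selection, blend crossover, Gaussian mutation."""
    pop = rng.uniform(-10.0, 10.0, size=(pop_size, dims))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        children = [pop[scores.argmax()].copy()]                 # elitism
        while len(children) < pop_size:
            i, j, k, l = rng.integers(0, pop_size, size=4)
            p1 = pop[i] if scores[i] >= scores[j] else pop[j]    # tournament
            p2 = pop[k] if scores[k] >= scores[l] else pop[l]
            alpha = rng.random(dims)
            child = alpha * p1 + (1.0 - alpha) * p2              # blend crossover
            children.append(child + rng.normal(0.0, 0.3, dims))  # mutation
        pop = np.stack(children)
    return pop[np.argmax([fitness(ind) for ind in pop])]

best_membership_and_rule_genes = evolve()
```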

Design of Conceptual Software Process Database, Using Ontology (온톨로지를 이용한 개념형 소프트웨어 프로세스 데이터베이스 설계 및 구현)

  • Lee, Jun-Ha;Park, Young-Beom
    • The KIPS Transactions:PartD
    • /
    • v.14D no.2
    • /
    • pp.203-210
    • /
    • 2007
  • An ontology can be used as a formal, demonstrable knowledge base that expresses human reasoning processes. A software development process is a collection of best practices and a procedural system performed by mature, highly capable organizations. Due to this complexity, however, a software development process often obstructs the introduction and improvement of even simple process activities. When introducing and improving a software development process, applying an ontology to the complex process becomes more approachable by showing deductive results for the relationships between ISO/IEC 15504 and CMMI. In this paper, we demonstrate a methodology that utilizes an improved conceptual process database, mapping between ISO/IEC 15504 and CMMI using an ontology.
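
The abstract above treats an ontology as a conceptual process database relating ISO/IEC 15504 processes to CMMI process areas. As a rough sketch only (the paper's actual vocabulary and mappings are not reproduced here), an rdflib graph with a hypothetical `mapsTo` property could look like this:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace; the paper's own ontology vocabulary is not shown here.
SPO = Namespace("http://example.org/software-process#")

g = Graph()
g.bind("spo", SPO)

# The two process frameworks as classes.
g.add((SPO.ISO15504Process, RDF.type, RDFS.Class))
g.add((SPO.CMMIProcessArea, RDF.type, RDFS.Class))

# Illustrative instances and one illustrative correspondence.
g.add((SPO.RequirementsElicitation, RDF.type, SPO.ISO15504Process))
g.add((SPO.RequirementsElicitation, RDFS.label, Literal("ENG.1 Requirements elicitation")))
g.add((SPO.RequirementsDevelopment, RDF.type, SPO.CMMIProcessArea))
g.add((SPO.mapsTo, RDF.type, RDF.Property))
g.add((SPO.RequirementsElicitation, SPO.mapsTo, SPO.RequirementsDevelopment))

# Query the conceptual process database for ISO/IEC 15504 - CMMI pairs.
query = """
PREFIX spo: <http://example.org/software-process#>
SELECT ?iso ?cmmi WHERE {
    ?iso a spo:ISO15504Process ;
         spo:mapsTo ?cmmi .
    ?cmmi a spo:CMMIProcessArea .
}
"""
for iso, cmmi in g.query(query):
    print(iso, "->", cmmi)
```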

The first record of the rare fern Pteris griffithii (Polypodiales: Pteridaceae: Pteridoideae) in the Bhutan Himalayas

  • DORJI, Rinchen;DEMA, Sangay;NIROLA, Mani Prasad;GYELTSHEN, Choki
    • Korean Journal of Plant Taxonomy
    • /
    • v.52 no.1
    • /
    • pp.24-28
    • /
    • 2022
  • Pteris griffithii Hook., one of the rarest fern species on the Indian subcontinent, is reported from Bhutan for the first time. The identity of this species was confirmed through morphological determination at the National Herbarium (THIM) of the National Biodiversity Centre (NBC) of Bhutan. It was found in only one location, in Gyelpozhing in eastern Bhutan, at an elevation of 521 m a.s.l. on 10 January 2016. Given that only very limited studies of this species have been conducted, the knowledge baseline regarding its distribution is poor. It has also been reported that the species has not been found for several years. The species is considered very rare or critically endangered in some countries; however, there is no assessment on the International Union for Conservation of Nature (IUCN) Red List for this particular species. This paper attempts to provide baseline information considering its rarity and data deficiency. The species is also reported as very rare from the adjacent Indian state of Arunachal Pradesh, and also from Myanmar; however, confirmation of its presence in China is not clear at this time. Therefore, considering its data-deficient status, we attempt to document it scientifically to create a knowledgebase pertaining to this particular species. Concurrently, this species merits further research to understand its distribution patterns in Bhutan and any related anthropogenic threats.

Physical Characteristics of Two Types of EUV Coronal Jets Observed by SDO/AIA

  • Kim, Il-Hoon;Moon, Yong-Jae;Lee, Jin-Yi;Lee, Kyoung-Sun;Sung, Suk-Kyung;Kim, Kap-Sung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.38 no.1
    • /
    • pp.63.2-63.2
    • /
    • 2013
  • We have investigated EUV coronal jets observed by the Solar Dynamics Observatory (SDO) / Atmospheric Imaging Assembly (AIA). From the Heliophysics Events Knowledgebase (HEK), we consider all 40 recorded EUV jets in 171 Å from May 2010 to July 2011 and use the 19 jets whose locations can be clearly identified, excluding limb events because of the ambiguity of their positions. According to the positions of their roots, these coronal jets are classified into two types: bright point jets (BPJ, 9 jets) and active region boundary jets (ABJ, 10 jets). BPJs are located at the tops of bright points and ABJs at the boundaries of active regions. There are significant differences in speed and size between the two types. Here the speed and size of a jet are taken as its maximum values when the jet has several ejections. The average speed and size of the 9 BPJs are about 110 km/s and 69,000 km, respectively. The average speed and size of the 10 ABJs are about 660 km/s and 194,000 km, respectively. The speed distribution of ABJs has two peaks, at about 270 km/s and 1700 km/s. It is very interesting to note that three ABJs have very high speeds, larger than 1600 km/s, and that they all consist of groups of recurrent jets with low and high speeds at the same location. In addition, we are investigating these events in other wavelengths and comparing their characteristics. (An illustrative sketch of the event retrieval and class statistics follows this entry.)

  • PDF
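
For reference, coronal-jet entries can be pulled from HEK programmatically; the sketch below uses SunPy's HEK client (assuming HEK's "CJ" event code for coronal jets) and then computes per-class averages. The per-jet speed/size measurements come from the AIA image analysis described above and are not HEK fields, so the table here only reuses the class averages quoted in the abstract to exercise the summary code.

```python
import numpy as np
from sunpy.net import attrs as a
from sunpy.net import hek

# Query HEK for coronal-jet (CJ) events in the study's time range.
client = hek.HEKClient()
jets = client.search(a.Time("2010-05-01", "2011-07-31"),
                     a.hek.EventType("CJ"))
print(len(jets), "coronal-jet events recorded in HEK")

# Per-class (speed km/s, size km) values would be filled in from the AIA
# analysis; only the class averages quoted above are used as placeholders.
measurements = {
    "BPJ": [(110.0, 69_000.0)],
    "ABJ": [(660.0, 194_000.0)],
}
for label, values in measurements.items():
    speeds, sizes = np.array(values).T
    print(f"{label}: mean speed {speeds.mean():.0f} km/s, "
          f"mean size {sizes.mean():.0f} km")
```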

Relationship Between EUV Coronal Jets and Bright Points Observed by SDO/AIA

  • Kim, Il-Hoon;Lee, Kyoung-Sun;Lee, Jin-Yi;Moon, Yong-Jae;Sung, Suk-Kyung;Kim, Kap-Sung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.37 no.2
    • /
    • pp.112.1-112.1
    • /
    • 2012
  • We have investigated the relationship between EUV coronal jets and bright points observed by the Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA). For this we consider 39 EUV coronal jets from May 2010 to July 2011 in 171 Å identified by the Heliophysics Events Knowledgebase (HEK), which provides automatic identification of coronal jets. We look for coronal jet-bright point pairs as follows. First, we select an event area of 360 arcsec × 360 arcsec with the coronal jet located at its center. Second, we select jet-bright point pairs that are located at the same position or immediately adjacent. Third, we select jet-bright point pairs that are connected to each other by loops. Otherwise, we pair each jet with its nearest bright point. As a result, we present 19 coronal jet-bright point pairs. The mean distance of these pairs is 77.24 arcsec. According to their distance and morphological connection, we classify them into three groups: 1) adjacent (6 events), 2) loop-connected (5 events), and 3) not connected in appearance (8 events). The histogram of mutual distance has two peaks; the first peak corresponds to the first group and the other to the second group. We compare these events with previous observations and theoretical models and discuss possible physical connections between jets and bright points. (An illustrative sketch of the nearest-neighbour pairing follows this entry.)

  • PDF
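
A minimal sketch of the nearest-bright-point pairing and the distance-based grouping described above; the coordinates are hypothetical and the 30-arcsec adjacency threshold is an assumption, not a value from the paper.

```python
import numpy as np

def nearest_pairs(jets, bright_points):
    """Pair each jet (x, y in arcsec) with its nearest bright point,
    returning (jet_index, bp_index, distance_arcsec) triples."""
    jets = np.asarray(jets, dtype=float)
    bps = np.asarray(bright_points, dtype=float)
    pairs = []
    for i, jet in enumerate(jets):
        d = np.hypot(*(bps - jet).T)          # Euclidean distances in arcsec
        j = int(d.argmin())
        pairs.append((i, j, float(d[j])))
    return pairs

def classify(distance, loop_connected, adjacent_thresh=30.0):
    """Rough three-way grouping; the adjacency threshold is assumed."""
    if distance <= adjacent_thresh:
        return "adjacent"
    if loop_connected:
        return "loop-connected"
    return "not connected in appearance"

# Hypothetical coordinates purely to exercise the functions.
jets = [(100.0, 50.0), (-300.0, 120.0)]
bright_points = [(110.0, 60.0), (-150.0, 100.0)]
for i, j, d in nearest_pairs(jets, bright_points):
    print(f"jet {i} <-> BP {j}: {d:.1f} arcsec ->",
          classify(d, loop_connected=False))
```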

Semantic-based Keyword Search System over Relational Database (관계형 데이터베이스에서의 시맨틱 기반 키워드 탐색 시스템)

  • Yang, Younghyoo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.12
    • /
    • pp.91-101
    • /
    • 2013
  • One general issue with keyword search is its ambiguity, which can ultimately impact the effectiveness of the search in terms of the quality of the results. This ambiguity is primarily due to the ambiguity of the contextual meaning of each term in the query. In addition to the query ambiguity itself, the relationships between the keywords in the search results are crucial for the proper interpretation of the results by the user and should be clearly presented. We address the keyword search ambiguity issue by adapting some of the existing approaches for mapping query terms to schema terms and instances. The approaches we have adapted for term mapping capture both the syntactic similarity between the query keywords and the schema terms and the semantic similarity of the two, giving better mappings and ultimately about 50% more accurate results. Finally, to address the lack of clear relationships among the terms appearing in the search results, our system leverages Semantic Web technologies to enrich the knowledgebase and to discover the relationships between the keywords.
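
The abstract above combines syntactic and semantic similarity to map query keywords onto schema terms. The sketch below only illustrates that idea under assumed ingredients: edit-distance similarity, a toy synonym table standing in for the semantic resource, and an arbitrary 50/50 weighting.

```python
from difflib import SequenceMatcher

# Hypothetical schema vocabulary and a toy synonym table standing in for a
# semantic resource (e.g. a thesaurus or ontology).
SCHEMA_TERMS = ["employee", "department", "salary", "project"]
SYNONYMS = {
    "staff": {"employee"},
    "worker": {"employee"},
    "pay": {"salary"},
    "wage": {"salary"},
}

def syntactic_sim(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(keyword: str, term: str) -> float:
    """1.0 if the synonym table links the keyword to the term, else 0.0."""
    return 1.0 if term in SYNONYMS.get(keyword.lower(), set()) else 0.0

def map_keyword(keyword: str, alpha: float = 0.5) -> tuple[str, float]:
    """Map a query keyword to the best schema term by a weighted blend of
    syntactic and semantic similarity (alpha is an assumed weight)."""
    scored = [(term, alpha * syntactic_sim(keyword, term)
                     + (1 - alpha) * semantic_sim(keyword, term))
              for term in SCHEMA_TERMS]
    return max(scored, key=lambda t: t[1])

for kw in ["emplyee", "staff", "salery"]:
    print(kw, "->", map_keyword(kw))
```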

A Global-Interdependence Pairwise Approach to Entity Linking Using RDF Knowledge Graph (개체 링킹을 위한 RDF 지식그래프 기반의 포괄적 상호의존성 짝 연결 접근법)

  • Shim, Yongsun;Yang, Sungkwon;Kim, Hong-Gee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.3
    • /
    • pp.129-136
    • /
    • 2019
  • Natural language contains a variety of entities, such as people, organizations, places, and products, and these entities can have various meanings. Entity ambiguity is a very challenging problem in the field of natural language processing. Entity Linking (EL) is the task of linking an entity mention in text to the appropriate entity in a knowledge base. The pairwise approach, a representative method for EL, resolves entities using the association between two entities in a sentence. This method considers only the interdependence between entities appearing in the same sentence and thus cannot capture global interdependence. In this paper, we developed an Entity2vec model that applies Word2vec to an RDF knowledge base in order to solve EL. We applied algorithms using the generated model and ranked each candidate entity. To overcome the limitations of the pairwise approach, we devised a pairwise approach based on global interdependence and compared it with the existing method.
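
A rough sketch of the Entity2vec idea described above: treat RDF triples as short "sentences", train gensim's Word2vec over them, and rank candidate entities by similarity to already-linked context entities. The triples, hyperparameters, and ranking score are illustrative assumptions, not the authors' configuration.

```python
from gensim.models import Word2Vec

# Tiny illustrative triples; a real run would stream them from an RDF store.
triples = [
    ("Seoul", "capitalOf", "South_Korea"),
    ("South_Korea", "locatedIn", "Asia"),
    ("Busan", "locatedIn", "South_Korea"),
    ("Seoul", "locatedIn", "South_Korea"),
]

# Treat each (subject, predicate, object) triple as a short "sentence" so
# that entities sharing relations end up with similar vectors.
corpus = [[s, p, o] for s, p, o in triples]

model = Word2Vec(corpus, vector_size=50, window=2, min_count=1,
                 sg=1, epochs=200, seed=1)

# Rank candidate entities for an ambiguous mention by similarity to the
# context entities already linked in the sentence (a pairwise-style score).
context = ["South_Korea"]
candidates = ["Seoul", "Busan", "Asia"]
ranked = sorted(candidates,
                key=lambda c: model.wv.n_similarity(context, [c]),
                reverse=True)
print(ranked)
```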

The Validation Study of Normality Distribution of Aquatic Toxicity Data for Statistical Analysis (수생태 독성자료의 정규성 분포 특성 확인을 통해 통계분석 시 분포 특성 적용에 대한 타당성 확인 연구)

  • OK, Seung-yeop;Moon, Hyo-Bang;Ra, Jin-Sung
    • Journal of Environmental Health Sciences
    • /
    • v.45 no.2
    • /
    • pp.192-202
    • /
    • 2019
  • Objectives: According to the central limit theorem, the samples in a population can be considered to follow a normal distribution if a large number of samples is available. Once we assume that a toxicity dataset follows a normal distribution, we can process the data statistically to calculate genus or species mean values with standard deviations. However, little is known, and only limited studies have been conducted, about whether toxicity datasets actually follow a normal distribution. Therefore, the purpose of this study is to evaluate the generally accepted normality hypothesis for aquatic toxicity datasets. Methods: We selected 8 chemicals, consisting of 4 organic and 4 inorganic compounds, considering data availability for the development of species sensitivity distributions. Toxicity data were collected from the US EPA ECOTOX Knowledgebase by a simple search for the target chemicals. The toxicity data were rearranged into a proper format based on endpoint and test duration, and normality was tested with the Shapiro-Wilk test. We also investigated the degree of normality after a simple log transformation of the toxicity data. Results: Despite the central limit theorem, only one of the 25 large datasets (n>25) followed a normal distribution. After log transformation, seven more large datasets showed normality. For the small datasets (n<25), log transformation of the toxicity values generally increased normality. For organic and inorganic chemicals, normality improved for 26 and 30 species, respectively. These 56 species showing improved normality after log transformation belong to taxonomic groups including amphibians (1), crustaceans (21), fish (22), insects (5), rotifers (2), and worms (5). In contrast, for molluscs, normality decreased for 1 of the 23 species that originally showed normality. Conclusions: Large toxicity datasets did not always satisfy normality as expected from the central limit theorem. The normality of those data could be improved through log transformation. Therefore, care should be taken when using toxicity data to derive, for example, mean values for risk assessment.
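
A minimal sketch of the normality check described in the Methods above: the Shapiro-Wilk test applied to raw and log-transformed values. The values below are synthetic placeholders, not ECOTOX Knowledgebase data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Placeholder toxicity values (e.g. LC50 in mg/L); real data would come from
# the US EPA ECOTOX Knowledgebase. Lognormal values mimic the typical skew.
toxicity = rng.lognormal(mean=1.0, sigma=1.2, size=30)

def shapiro_report(values, label):
    """Run the Shapiro-Wilk test and report the verdict at alpha = 0.05."""
    stat, p = stats.shapiro(values)
    verdict = "normal" if p > 0.05 else "not normal"
    print(f"{label:>6}: W={stat:.3f}, p={p:.4f} -> {verdict}")

shapiro_report(toxicity, "raw")
shapiro_report(np.log10(toxicity), "log10")
```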

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.79-92
    • /
    • 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and assisting users in navigating the integrated data through a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center, i.e., the other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology is used to represent simple relationships between the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map then enables us to infer further relationships, such as co-author and co-topic relationships. To extract the relationships between the integrated data, a Relational Data-to-Triples transformer is implemented. A topic modeling approach is also introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in the area of knowledge management to store, manage, and process an organization's data as knowledge; the other is used for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), which are the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map. Using a lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data via author relationships and performer relationships. A knowledge map for displaying researchers' networks is created, where the researchers' network is derived from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system has three goals: 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based information search on the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information, such as research papers, research reports, patents, and GTB data, is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and integrated into a unified database. The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them. The topic modeling approach enables us to extract these relationships and topic keywords based on semantics rather than simple keyword matching. Lastly, we show an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and the knowledge map services created on top of the knowledge base are also introduced.
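
Two of the building blocks described above, an RDB-to-Triples transform and topic modeling for document-topic links, might be sketched as follows. The table rows, ontology namespace, and property names are hypothetical, and LDA stands in for whichever topic model the system uses.

```python
from rdflib import Graph, Literal, Namespace, RDF
from gensim import corpora
from gensim.models import LdaModel

RND = Namespace("http://example.org/rnd#")   # hypothetical R&D ontology

# --- Relational data to triples (a minimal "RDB-to-Triples" transform) ---
rows = [  # hypothetical rows: (project_id, paper_id, author)
    ("P001", "DOC1", "Kim"),
    ("P001", "DOC2", "Lee"),
]
g = Graph()
for project, paper, author in rows:
    g.add((RND[paper], RDF.type, RND.Paper))
    g.add((RND[paper], RND.outputOf, RND[project]))
    g.add((RND[paper], RND.hasAuthor, Literal(author)))
print(len(g), "triples")

# --- Topic modeling to attach document-topic relationships ---
docs = [["knowledge", "map", "ontology", "service"],   # tokenized abstracts
        ["topic", "modeling", "lda", "knowledge"]]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(bow, num_topics=2, id2word=dictionary, passes=10, random_state=0)

# Link each document to its dominant topic keyword in the graph.
for doc_id, doc_bow in zip(("DOC1", "DOC2"), bow):
    topic, _ = max(lda.get_document_topics(doc_bow), key=lambda t: t[1])
    keyword = lda.show_topic(topic, topn=1)[0][0]
    g.add((RND[doc_id], RND.hasTopicKeyword, Literal(keyword)))
```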