• Title/Summary/Keyword: Knowledge-based platform


Proposal of Process Model for Research Data Quality Management (연구데이터 품질관리를 위한 프로세스 모델 제안)

  • Na-eun Han
• Journal of the Korean Society for Information Management
    • /
    • v.40 no.1
    • /
    • pp.51-71
    • /
    • 2023
  • This study analyzed the government data quality management model, the big data quality management model, and the data lifecycle model for research data management, and identified the components common to each. These quality management models are designed either along the data lifecycle or on the PDCA (Plan-Do-Check-Act) cycle, according to the characteristics of the target data, and they commonly include the components of planning, collection and construction, operation and utilization, and preservation and disposal. Based on this, the study proposed a process model for research data quality management. In particular, the quality management to be performed across the series of processes from collection to service on a research data platform was discussed in the stages of planning, construction and operation, and utilization. This study is significant in providing a knowledge base for implementing research data quality management.

X-TOP: Design and Implementation of TopicMaps Platform for Ontology Construction on Legacy Systems (X-TOP: 레거시 시스템상에서 온톨로지 구축을 위한 토픽맵 플랫폼의 설계와 구현)

  • Park, Yeo-Sam;Chang, Ok-Bae;Han, Sung-Kook
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.2
    • /
    • pp.130-142
    • /
    • 2008
  • Unlike other ontology languages, Topic Maps can integrate large numbers of heterogeneous information resources through their locational information, without any transformation of the resources themselves. Although many editors have been developed for topic maps, they are standalone tools intended only for writing XTM documents. As a result, these tools require too much time to handle large-scale data and pose practical problems when integrating with legacy systems, which are mostly based on relational databases. In this paper, we model a large-scale topic map structure based on XTM 1.0 as an RDB structure to minimize processing time and build ontologies within legacy systems. We implement a topic map platform called X-TOP that enhances the efficiency of ontology construction and provides interoperability between XTM documents and databases. Moreover, conventional SQL tools and other application development tools can be used for topic map construction in X-TOP. X-TOP has a 3-tier architecture to support flexible user interfaces and diverse DBMSs. This paper shows the usability of X-TOP through a comparison with conventional tools and an application to healthcare cancer ontology management.
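The core idea of the paper — storing XTM topic map constructs in relational tables so that ordinary SQL tools can query them — can be illustrated with a minimal sketch. The schema below is hypothetical (the abstract does not describe X-TOP's actual tables); it only shows one plausible mapping of topics, occurrences, and associations into SQLite.

```python
import sqlite3

# Hypothetical relational mapping of XTM 1.0 constructs;
# the actual X-TOP schema is not described in the abstract.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE topic (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL              -- baseNameString
);
CREATE TABLE occurrence (
    topic_id INTEGER REFERENCES topic(id),
    locator  TEXT                   -- resourceRef: locational info only
);
CREATE TABLE association (
    id   INTEGER PRIMARY KEY,
    type TEXT
);
CREATE TABLE assoc_member (
    assoc_id INTEGER REFERENCES association(id),
    topic_id INTEGER REFERENCES topic(id),
    role     TEXT
);
""")

# Topics reference external resources by locator, without transforming them.
conn.execute("INSERT INTO topic VALUES (1, 'cancer')")
conn.execute("INSERT INTO topic VALUES (2, 'chemotherapy')")
conn.execute("INSERT INTO occurrence VALUES (1, 'http://example.org/cancer.html')")
conn.execute("INSERT INTO association VALUES (10, 'treated-by')")
conn.executemany("INSERT INTO assoc_member VALUES (?, ?, ?)",
                 [(10, 1, 'disease'), (10, 2, 'treatment')])

# A conventional SQL tool can now traverse the topic map directly.
rows = conn.execute("""
    SELECT t1.name, a.type, t2.name
    FROM assoc_member m1
    JOIN assoc_member m2 ON m1.assoc_id = m2.assoc_id
                        AND m1.topic_id != m2.topic_id
    JOIN association a ON a.id = m1.assoc_id
    JOIN topic t1 ON t1.id = m1.topic_id
    JOIN topic t2 ON t2.id = m2.topic_id
    WHERE m1.role = 'disease'
""").fetchall()
print(rows)  # [('cancer', 'treated-by', 'chemotherapy')]
```

Because the map lives in ordinary tables, joins like the one above replace XTM document parsing, which is the efficiency gain the paper claims for legacy-system integration.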

Clustering of Smart Meter Big Data Based on KNIME Analytic Platform (KNIME 분석 플랫폼 기반 스마트 미터 빅 데이터 클러스터링)

  • Kim, Yong-Gil;Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.2
    • /
    • pp.13-20
    • /
    • 2020
  • One of the major issues surrounding big data is the availability of massive time-based or telemetry data. The appearance of low-cost capture and storage devices has made it possible to collect very detailed time series for further analysis. We can use these data to learn more about the underlying system or to predict future events with higher accuracy. In particular, it is very important to define custom-tailored contract offers for the many households and businesses with smart meter records, and to predict future electricity usage to protect electricity companies from power shortages or surpluses. Identifying a few groups with common electricity behavior is required to make the creation of customized contract offers worthwhile. This study presents a big data transformation step and a clustering technique for understanding electricity usage patterns, using open smart meter data and KNIME, an open-source data analytics platform that provides a user-friendly graphical workbench for the entire analysis process. While the big data components are not open source, they are available for trial if required. After importing, cleaning, and transforming the smart meter big data, each meter's data can be interpreted in terms of electricity usage behavior through a dynamic time warping method.
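The abstract's KNIME workflow cannot be reproduced in text, but the dynamic time warping distance at its core can be sketched in pure Python. The meter readings below are hypothetical half-hourly loads, chosen only to show that DTW treats time-shifted peaks as similar behavior.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two load profiles."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Hypothetical half-hourly meter readings (kWh); the peaks are the same
# shape but shifted in time.
day_shift   = [0, 1, 5, 1, 0, 0]
night_shift = [0, 0, 1, 5, 1, 0]
flat        = [1, 1, 1, 1, 1, 1]

print(dtw_distance(day_shift, night_shift))  # 0.0: same shape, time-shifted
print(dtw_distance(day_shift, flat))         # 7.0: different usage behavior
```

A clustering step (e.g. hierarchical clustering over the pairwise DTW matrix) would then group meters with similar usage behavior, which is what the study does inside KNIME.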

Suggestion of Customized Professional Guide Services through Domestic and Foreign Travel Platforms (국내외 여행 플랫폼을 통한 맞춤형 전문 가이드 서비스 제안)

  • Kim, Seung-In;Lee, Kaha
    • Journal of Digital Convergence
    • /
    • v.17 no.9
    • /
    • pp.421-428
    • /
    • 2019
  • This study proposes services that connect users with verified professional guides in six areas requiring expertise - health care, exhibitions and performances, restaurants, shopping, business, and daily life - as well as conventional trip-based tours, while solving existing problems that can occur on domestic and overseas trips. Previous research found that the services most needed for overseas travel were air tickets, accommodation, transportation, and guides; as Korea enters an aging society, everyday demand for professional guide services is also expected to increase. Based on this, the service technology, user scenarios, and brand development were presented. The proposed service provides users with a personalized guide to knowledge or experiences previously unknown to them, offering a richer experience. In addition, guides can develop new fields of expertise and improve their professional capabilities. Finally, the platform service is meaningful in that it creates jobs by enabling everyone to engage in economic activity: users can provide services through their own capabilities while also enjoying the platform's convenience.

Analysis of the factors related to the infection control practice of 119 emergency medical service providers based on the PRECEDE model (PRECEDE 모형에 기반한 119구급대원의 감염관리 수행 관련 요인 분석)

  • Yang, Yeunsoo;Kimm, Heejin;Jee, Sun Ha;Hong, Seok-Hwan;Han, Sang-Kyun
    • The Korean Journal of Emergency Medical Services
    • /
    • v.24 no.1
    • /
    • pp.7-24
    • /
    • 2020
  • Purpose: Emergency medical service (EMS) personnel are at high risk of spreading infection. In this study, we used the PRECEDE model to identify the knowledge, practice status, and barriers to infection control among Korean paramedics, to provide basic infection control data. Methods: A total of 164 respondents were analyzed. A questionnaire was administered and collected through an online self-response platform. Descriptive analysis, t-test, ANOVA, multiple regression, and logistic regression analyses were performed in SAS 9.4 to determine infection control practices and associated factors. To identify the pathways and the direct, indirect, and total effects based on the PRECEDE model, we used AMOS 26.0. Results: Highly rated self-efficacy (OR 8.82, 95% CI: 3.23-24.09), awareness (OR 6.05, 95% CI: 2.06-17.72), and enabling factors (OR 3.23, 95% CI: 1.18-8.78) led to superior infection control. The structural model analysis showed that highly rated enabling factors and awareness led to superior practice patterns. Conclusion: Practice is related to self-efficacy, awareness, and enabling factors; however, further research is needed to develop strategies for infection control. In particular, institutional arrangements are needed to improve the enabling factors. Improving infection control performance may lead to better infection control and enhanced protection of EMS personnel and patients against infection risks.
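The odds ratios and confidence intervals reported above follow directly from the logistic regression coefficients: OR = exp(β), with a 95% CI of exp(β ± 1.96·SE). The coefficient and standard error below are hypothetical values chosen only to roughly reproduce the reported self-efficacy OR; they are not the study's raw output.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval (OR = exp(beta))."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for 'high self-efficacy'; the abstract reports
# only the final OR of 8.82 (95% CI: 3.23-24.09), not beta or SE.
beta, se = 2.177, 0.512
or_, lo, hi = odds_ratio_ci(beta, se)
print(f"OR {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

An OR above 1 with a CI that excludes 1 (as for all three factors reported) indicates the factor is significantly associated with superior infection control practice.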

Land Use Feature Extraction and Sprawl Development Prediction from Quickbird Satellite Imagery Using Dempster-Shafer and Land Transformation Model

  • Saharkhiz, Maryam Adel;Pradhan, Biswajeet;Rizeei, Hossein Mojaddadi;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.1
    • /
    • pp.15-27
    • /
    • 2020
  • Accurate knowledge of land use/land cover (LULC) features and their changes over time is essential for sustainable urban management. Urban sprawl has always been a worldwide concern that needs careful monitoring, particularly in developing countries where unplanned building construction has been expanding at a high rate. Recently, remotely sensed imagery with very high spatial/spectral resolution and state-of-the-art machine learning approaches have taken urban classification and growth monitoring to a higher level. In this research, we classified Quickbird satellite imagery by object-based image analysis with Dempster-Shafer theory (OBIA-DS) for the years 2002 and 2015 at Karbala, Iraq. The real LULC changes between these years, including residential sprawl expansion, were identified via a change detection procedure. Based on the extracted LULC features and the detected urban pattern trend, future LULC dynamics were simulated using the land transformation model (LTM) in a geospatial information system (GIS) platform. Both the classification and prediction stages were validated using ground control points (GCPs) through the Kappa coefficient accuracy metric, which indicated 0.87 and 0.91 for the 2002 and 2015 classifications, respectively, and 0.79 for the prediction. Detailed results revealed substantial growth in built-up area over the fifteen years, mostly replacing agricultural and orchard fields. The prediction scenario of LULC sprawl development for 2030 revealed a substantial decline in green and agricultural land as well as an extensive increase in built-up area, especially on the outskirts of the city, without following residential pattern standards. The proposed method helps urban decision-makers identify the detailed temporal-spatial growth patterns of highly populated cities like Karbala. Additionally, the results of this study can be considered a probable future map for designing sufficient social services and amenities for local inhabitants.
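The Kappa coefficients cited for validation (0.87, 0.91, 0.79) compare observed agreement at the ground control points against chance agreement. A minimal sketch of the computation, on a hypothetical two-class confusion matrix (not the study's actual GCP counts):

```python
def cohens_kappa(matrix):
    """Cohen's kappa for a square confusion matrix
    (rows: reference GCP labels, columns: classified labels)."""
    total = sum(sum(row) for row in matrix)
    # Observed agreement: fraction on the diagonal.
    observed = sum(matrix[i][i] for i in range(len(matrix))) / total
    # Chance agreement: product of marginal row/column proportions.
    expected = sum(
        sum(matrix[i]) * sum(row[i] for row in matrix)
        for i in range(len(matrix))
    ) / (total * total)
    return (observed - expected) / (1 - expected)

# Hypothetical accuracy assessment (built-up vs. other), 100 GCPs.
confusion = [[45, 5],
             [5, 45]]
print(round(cohens_kappa(confusion), 2))  # 0.8
```

Values in the 0.8-0.9 range, like those reported, are conventionally read as strong agreement beyond chance.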

U-City Construction Process Modeling based on UML (UML을 이용한 U-City건설사업 프로세스 모델링)

  • Lee, Sung-Pyo;Shin, Yong-Jin
    • Korea Society of IT Services: Conference Proceedings
    • /
    • 2009.05a
    • /
    • pp.467-470
    • /
    • 2009
  • U-City is a key element of the future knowledge-based society, in which every citizen will benefit from the most advanced information technology. This research reengineers the old process to define a rational and efficient process for the U-City construction project (BPR: Business Process Reengineering). To achieve this goal, we identify and diagnose problem points, outline an improvement plan, and readjust the management system and process to propose a better process. Finally, we redefine the most efficient process through simple modeling with UML (Unified Modeling Language). The goal of this systematic proposal is to contribute to a foundation for green growth through the development of a sustainable U-City.


A Review of the Application of Constructed Wetlands as Stormwater Treatment Systems

  • Reyes, Nash Jett;Geronimo, Franz Kevin;Guerra, Heidi;Jeon, Minsu;Kim, Lee-Hyung
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.162-162
    • /
    • 2022
  • Stormwater management is an essential component of land-use planning and development. Due to the additional challenges posed by climate change and urbanization, various stormwater management schemes have been developed to limit flood damage and ease water quality concerns. Nature-based solutions (NBS) are increasingly used as cost-effective measures to manage stormwater runoff from various land uses. Specifically, constructed wetlands are already considered socially acceptable green stormwater infrastructure widely used in different countries. There is a large collection of published literature on the effectiveness of constructed wetlands in treating stormwater runoff; however, metadata analyses using bibliographic information are very limited or seldom explored. This study was conducted to determine publication trends for stormwater treatment wetlands using a bibliometric analysis approach. Moreover, the research productivity of various countries, authors, and institutions was also identified. The Web of Science (WoS) database was used to retrieve bibliographic information. The keywords ("constructed wetland*" OR "treatment wetland*" OR "engineered wetland*" OR "artificial wetland*") AND ("stormwater*" OR "storm water*") were used to retrieve publications related to stormwater treatment wetlands from 1990 up to 2021. The keyword co-occurrence network map was generated with the VOSviewer software, and the contingency matrices were obtained using the Cortext platform (www.cortext.net). The results revealed the areas of research that have been adequately explored by past studies. Furthermore, the extensive collection of published scientific literature enabled the identification of existing knowledge gaps in the field of stormwater treatment wetlands.
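The keyword co-occurrence map the study builds in VOSviewer boils down to counting how often keyword pairs appear in the same record. A minimal sketch, using hypothetical author-keyword lists rather than the study's actual WoS export:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count how often each keyword pair appears in the same record,
    as when building a keyword co-occurrence network."""
    pairs = Counter()
    for keywords in records:
        # Sort so each unordered pair gets one canonical key.
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical author-keyword lists from three retrieved records.
records = [
    ["constructed wetland", "stormwater", "nutrient removal"],
    ["constructed wetland", "stormwater", "urban runoff"],
    ["treatment wetland", "stormwater"],
]
links = cooccurrence(records)
print(links[("constructed wetland", "stormwater")])  # 2
```

Tools like VOSviewer then draw these pair counts as weighted network edges; sparsely connected keywords point to the knowledge gaps the study reports.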


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help industry develop or use deep learning open source software. This study thus attempts to derive a strategy for adopting such a framework through case studies. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies increase the number of deep learning research developers, the ability to use the framework, and the supply of GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework was proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework. Once these are clear, the next two steps (using the framework in the enterprise and spreading it across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five factors above are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

A Survey of Ecological Knowledge and Information for Climate Change Adaptation in Korea - Focused on the Risk Assessment and Adaptation Strategy to Climate Change - (기후변화 적응정책 관련 생태계 지식정보 수요와 활용도 증진 방향 - 생태계 기후변화 리스크 평가 및 적응대책을 중심으로 -)

  • Yeo, Inae;Hong, Seungbum
    • Journal of Environmental Impact Assessment
    • /
    • v.29 no.1
    • /
    • pp.26-36
    • /
    • 2020
  • This study investigated current research and the knowledge base on climate change adaptation in the ecosystem sector, and analyzed the status of basic ecosystem information that serves as the evidence base for adaptation, in order to derive suggestions for the future development of biodiversity knowledge and information. A questionnaire survey titled "the ecological knowledge base and information needs for climate change adaptation" was conducted with researchers engaged in adaptation studies for biodiversity at ecosystem-related research institutes, including agencies affiliated with the national government and the 17 regional local governments in Korea. The results cover the current status of ecological information supporting climate change adaptation strategy, future needs for adaptation knowledge and ecological information, and ways to activate the use of ecological information. The majority of respondents (90.7%) replied that ecological information is highly relevant to research on climate change adaptation. However, only half of the respondents (53.2%) agreed that current information is actually usable for adaptation research. In particular, the urgent priority for researchers was intensifying the knowledge base and constructing related information on ecosystem changes caused by climate change (productivity, community structure, food chain, phenology, range distribution, and number of individuals), together with overall improvement of information content and quality. The respondents emphasized the need to conduct field surveys of local ecosystems and construct ecosystem inventories, advance monitoring designs for climate change in ecosystems, and carry out case studies of regional ecosystem changes, with guidance or guidelines for monitoring ecosystem change, in order to enhance the quality of adaptation research and the information it produces. To activate the use of ecological information, national and local adaptation networks should operate on an integrated ecological platform that supports exchanges of knowledge and information and expands ecosystem types in temporal and spatial dimensions.