• Title/Summary/Keyword: Knowledge-based systems


Information Technology and Environmental Decision-Making (정보 기술과 환경 의사 결정)

  • Woo Chung-Gyoo
    • Journal of Science and Technology Studies / v.1 no.2 s.2 / pp.371-398 / 2001
  • Science and technology are the sources of today's highly developed civilizations and cultures and have enhanced the quality of human life. Yet they have dark sides as well as bright ones, and we are conscious of the environmental crisis and the destruction of life they have caused. Hence there are criticisms of human-centered or technology-centered views from nature-centered perspectives such as deep ecology. However, if people wish to continue enjoying the present quality of life, they should try to develop and improve pro-environmental technologies. In this vein, it is necessary to make environmental decisions and to solve environmental problems with information technologies. Since the second half of the last century, 'environment' has been a key word because of our heightened environmental consciousness. Just as we solve human problems by making decisions about actions, we must face environmental decisions in order to solve our environmental problems. With a better understanding of the nature of information, the role of information technology, and the relation between information technology and decision-making, we can design environmental systems and implement optimal interfaces among their environmental components. For this purpose, several useful technologies must be combined, including GIS, DSS, knowledge-based systems, and artificial neural networks. The development of and cooperation among these fields in environmental decision-making will enable us to live in better and more comfortable surroundings in the near future.


New Service System Model According to Evolution of Service Concept (서비스 개념의 진화에 따른 신(新) 서비스 시스템 모델)

  • Lee, JeungSun;Kim, Hyunsoo
    • Journal of Service Research and Studies / v.7 no.2 / pp.1-16 / 2017
  • Service, long regarded as a non-productive activity or a mere auxiliary to manufacturing, has become a driver of customer demand in its own right. The service base is expanding and evolving rapidly, so examining changes in the concept of service is important for understanding service systems. Because a service system itself has a cyclic nature grounded in the concept of service, tracing how the system changes as the service concept evolves can inform both the study and the practice of service. In the service economy era, the ability to organize and utilize relationships is considered an important capability for managers. However, corporate attention tends to focus on internal capabilities, while external resources (the knowledge and competence of customers) receive relatively little attention. Through case studies of each service type, we analyzed the interactions between service providers and consumers within service relationships and constructed a new service system model that emphasizes intangible value and long-term outcomes. This study is worthwhile in that it re-examines the role of customers in today's service economy era and actively applies the new service model to business performance.

A Dynamic Recommendation System Using User Log Analysis and Document Similarity in Clusters (사용자 로그 분석과 클러스터 내의 문서 유사도를 이용한 동적 추천 시스템)

  • 김진수;김태용;최준혁;임기욱;이정현
    • Journal of KIISE:Software and Applications / v.31 no.5 / pp.586-594 / 2004
  • Because web documents are created and disappear rapidly, users need a recommendation system that lets them browse web documents conveniently and accurately. One largely untapped source of knowledge about large data collections lies in the cumulative experience of individuals finding useful information in those collections. Recommendation systems attempt to extract such knowledge by capturing and mining one or more measures of the usefulness of the data. Existing information-filtering systems have the shortcoming that they require a user profile, while collaborative-filtering systems require users to rate each web document first; in high-quantity, low-quality environments, users may cover only a tiny percentage of the available documents. Dynamic recommendation systems based solely on user browsing patterns also tend to present users with unrelated web documents. This paper classifies web documents by type using inter-document similarity and extracts a sequential browsing-pattern database from users' session information based on the web server log file. When a user accesses a web document, the proposed dynamic recommendation system recommends the Top-N set of associated web documents with high similarity to the current document, together with a set exhibiting sequential specificity, using the extracted information and the user's session information.
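
A minimal sketch (hypothetical documents and identifiers, not the paper's system) of the similarity side of this method: TF-IDF vectors over web documents and a Top-N recommendation for the page a user is currently viewing. The sequential-pattern step mined from server logs is only indicated in a comment.

```python
# Top-N content similarity sketch; document texts and ids are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "news/economy-1": "interest rates inflation central bank policy",
    "news/economy-2": "stock market inflation earnings forecast",
    "news/sports-1":  "league final score championship team",
    "news/tech-1":    "web server log mining recommendation system",
}
ids = list(docs)
tfidf = TfidfVectorizer().fit_transform(docs.values())
sim = cosine_similarity(tfidf)

def top_n(current_id, n=2):
    """Return the n documents most similar to the one being viewed."""
    i = ids.index(current_id)
    ranked = sorted(((sim[i, j], ids[j]) for j in range(len(ids)) if j != i),
                    reverse=True)
    return [doc for _, doc in ranked[:n]]

print(top_n("news/economy-1"))
# The full method additionally combines this list with sequential browsing
# patterns mined from the web-server log (user session sequences).
```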

Classification of BcN Vulnerabilities Based on Extended X.805 (X.805를 확장한 BcN 취약성 분류 체계)

  • Yoon Jong-Lim;Song Young-Ho;Min Byoung-Joon;Lee Tai-Jin
    • The KIPS Transactions:PartC / v.13C no.4 s.107 / pp.427-434 / 2006
  • Broadband Convergence Network (BcN) is a critical infrastructure that provides wired and wireless high-quality multimedia services by converging communication and broadcasting systems. However, because of this convergence, the damage from an intrusion incident within an individual network may spread to the whole network, and new threats arise with the advent of various services roaming vertically and horizontally. In order to cope with these new threats, we need to analyze the vulnerabilities of BcN from a system-architecture perspective, classify them systematically, and use the results to prepare proper countermeasures. In this paper, we propose a new classification of vulnerabilities that extends ITU-T Recommendation X.805, which defines security-related architectural elements. The new classification covers the system elements to be protected for each service, possible attack strategies, the resulting damage and its criticality, and effective countermeasures. The classification method is compared with the existing CVE (Common Vulnerabilities and Exposures) and CERT/CC (Computer Emergency Response Team/Coordination Center) approaches, and the paper presents the result of applying it to a typical service, VoIP (Voice over IP), along with the development of a vulnerability database and its management software tool. The research presented here is expected to contribute to the integration of security knowledge and to the identification of newly required security techniques.
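
A minimal sketch of a vulnerability record covering the dimensions the proposed classification describes (protected element per service, attack strategy, damage, criticality, countermeasure). The field names and the example values are hypothetical illustrations, not the paper's actual X.805-extended schema or database layout.

```python
# Illustrative record structure only; not the authors' schema.
from dataclasses import dataclass

@dataclass
class BcnVulnerability:
    service: str            # e.g., "VoIP"
    protected_element: str  # system element to be protected
    attack_strategy: str
    damage: str
    criticality: str        # e.g., "high" / "medium" / "low"
    countermeasure: str

record = BcnVulnerability(
    service="VoIP",
    protected_element="SIP proxy server",
    attack_strategy="REGISTER flooding",
    damage="call-setup denial of service",
    criticality="high",
    countermeasure="rate limiting and SIP-aware filtering",
)
print(record)
```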

Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.101-125 / 2022
  • Recently, with the development of computing technology and the improvement of cloud environments, deep learning technology has advanced, and attempts to apply deep learning to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomaly, contextual anomalies, which require an understanding of the overall situation, are especially difficult to detect. In general, anomaly detection in image data is performed with a model pre-trained on large datasets. However, since such pre-trained models are built with a focus on object classification, they are limited when applied to anomaly detection that requires understanding complex situations created by multiple objects. Therefore, in this study we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning so as to understand not only individual objects but also the complex situations they create. Specifically, the proposed methodology transfers the knowledge of a pre-trained model that has learned object classification on ImageNet data to an image captioning model, and uses captions that describe the situation represented by each image. The weights obtained by learning situational characteristics through images and captions are then extracted and fine-tuned to produce the anomaly detection model. To evaluate the proposed methodology, an anomaly detection experiment was performed on 400 situational images; the results show that the proposed methodology outperforms the conventional pre-trained model in terms of anomaly detection accuracy and F1-score.
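
A minimal PyTorch sketch of the two-step idea described above, not the authors' code: an ImageNet-pretrained CNN is reused as the visual encoder of a captioning model, and the caption-trained encoder is then fine-tuned with a small head for normal/abnormal classification. The LSTM decoder, vocabulary size, and head dimensions are assumed placeholder values.

```python
import torch
import torch.nn as nn
from torchvision import models

# Step 0: ImageNet-pretrained backbone (object-level knowledge).
backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()            # expose 2048-d image features

# Step 1: captioning head (situation-level knowledge); placeholder decoder.
class CaptionModel(nn.Module):
    def __init__(self, encoder, vocab_size=10000, embed_dim=512):
        super().__init__()
        self.encoder = encoder
        self.project = nn.Linear(2048, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.project(self.encoder(images)).unsqueeze(1)  # (B, 1, D)
        tokens = self.embed(captions)                            # (B, T, D)
        hidden, _ = self.decoder(torch.cat([feats, tokens], dim=1))
        return self.out(hidden)

# Step 2: fine-tune the caption-trained encoder for anomaly classification.
class AnomalyClassifier(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, images):
        return self.head(self.encoder(images))

caption_model = CaptionModel(backbone)                 # trained on image-caption pairs
detector = AnomalyClassifier(caption_model.encoder)    # fine-tuned on labeled situations
```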

Diagnostic Image Feature and Performance of CT and Gadoxetic Acid Disodium-Enhanced MRI in Distinction of Combined Hepatocellular-Cholangiocarcinoma from Hepatocellular Carcinoma

  • Kim, Hyunghu;Kim, Seung-seob;Lee, Sunyoung;Lee, Myeongjee;Kim, Myeong-Jin
    • Investigative Magnetic Resonance Imaging / v.25 no.4 / pp.313-322 / 2021
  • Purpose: To identify diagnostic image features, to compare the diagnostic performance of multiphase CT versus gadoxetic acid disodium-enhanced MRI (GAD-MRI), and to evaluate the impact of analyzing Liver Imaging Reporting and Data System (LI-RADS) imaging features for distinguishing combined hepatocellular-cholangiocarcinoma (CHC) from hepatocellular carcinoma (HCC). Materials and Methods: Ninety-six patients with pathologically proven CHC (n = 48) or HCC (n = 48), diagnosed from June 2008 to May 2018, were retrospectively analyzed in random order by three radiologists with different experience levels. In the first analysis, the readers independently determined the probability of CHC based on their own knowledge and experience. In the second analysis, they evaluated imaging features defined in LI-RADS 2018. Area under the curve (AUC) values for CHC diagnosis were compared between CT and MRI, and between the first and second analyses. Interobserver agreement was assessed using Cohen's weighted κ values. Results: Targetoid LR-M imaging features showed better specificities and positive predictive values (PPV) than the others. Among them, rim arterial phase hyperenhancement had the highest specificity and PPV. Average sensitivity, specificity, and AUC values were higher for MRI than for CT in both the first (P = 0.008, 0.005, 0.002, respectively) and second (P = 0.017, 0.026, 0.036) analyses. Interobserver agreement was higher for MRI in both analyses (κ = 0.307 for CT vs. κ = 0.332 for MRI in the first analysis; κ = 0.467 for CT vs. κ = 0.531 for MRI in the second analysis), with greater agreement in the second analysis for both CT (P = 0.001) and MRI (P < 0.001). Conclusion: Rim arterial phase hyperenhancement on GAD-MRI can be a good indicator suggesting CHC rather than HCC. GAD-MRI may provide greater accuracy than CT for distinguishing CHC from HCC. Interobserver agreement can be improved for both CT and MRI by analyzing LI-RADS imaging features.
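
A minimal sketch of the two summary statistics reported above, reader AUC and inter-observer agreement as Cohen's weighted κ. The data are invented for illustration, and the quadratic weighting scheme is an assumption; the study does not state which weighting was used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

truth = np.array([1, 1, 0, 0, 1, 0, 1, 0])                      # 1 = CHC, 0 = HCC (hypothetical)
reader1_prob = np.array([0.9, 0.7, 0.2, 0.4, 0.8, 0.1, 0.6, 0.3])  # reader 1 probability of CHC
reader1_grade = np.array([5, 4, 1, 3, 4, 1, 4, 2])               # 5-point confidence grades
reader2_grade = np.array([5, 4, 2, 3, 5, 1, 3, 2])

print("Reader 1 AUC:", roc_auc_score(truth, reader1_prob))
print("Weighted kappa (reader 1 vs reader 2):",
      cohen_kappa_score(reader1_grade, reader2_grade, weights="quadratic"))
```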

KHistory: A System for Automatic Generation of Multiple Choice Questions on the History of Korea (KHistory: 한국사 객관식 문제 자동 생성 시스템)

  • Kim, Seong-Won;Jung, Hae-Seong;Jin, Jae-Hwan;Lee, Myung-Joon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.1 / pp.253-263 / 2017
  • As the need for knowledge of Korean history and public interest in it have rapidly increased, various smartphone applications for learning the history have appeared in recent years. These applications provide multiple-choice questions to users through their own question banks. However, since the questions are drawn from a fixed, previously stored set of problems, users' learning efficiency inevitably decreases when they use the applications repeatedly. In this paper, we present a question-generation system named KHistory, which automatically generates multiple-choice questions using a database on the history of Korea. In addition, we describe the development of the application Korean History Infinite Challenge as a learning application for Korean history. To develop KHistory, we classify typical types of learning problems by examining various problems based on Korean history learning materials, and propose algorithms that generate problems according to the identified types. With the developed techniques, learning systems can reduce the cost of creating questions while increasing users' learning efficiency.
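
A minimal sketch of one common generation pattern consistent with the description above: turn a stored fact into a question stem and sample distractors from other records of the same category. The fact table, field names, and question template are hypothetical, not the KHistory database schema or its actual problem types.

```python
import random

facts = [
    {"event": "훈민정음 반포", "person": "세종", "category": "king"},
    {"event": "한산도 대첩",   "person": "이순신", "category": "general"},
    {"event": "위화도 회군",   "person": "이성계", "category": "king"},
    {"event": "살수 대첩",     "person": "을지문덕", "category": "general"},
]

def make_question(fact, pool, n_choices=4):
    """Build one multiple-choice question: stem, shuffled choices, answer."""
    distractors = [f["person"] for f in pool
                   if f["person"] != fact["person"] and f["category"] == fact["category"]]
    if len(distractors) < n_choices - 1:  # fall back to any other person
        distractors += [f["person"] for f in pool if f["person"] != fact["person"]]
    choices = random.sample(list(dict.fromkeys(distractors)), n_choices - 1) + [fact["person"]]
    random.shuffle(choices)
    stem = f"'{fact['event']}'과(와) 관련된 인물은 누구인가?"
    return stem, choices, fact["person"]

stem, choices, answer = make_question(facts[0], facts)
print(stem, choices, "answer:", answer)
```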

Blind Rhythmic Source Separation (블라인드 방식의 리듬 음원 분리)

  • Kim, Min-Je;Yoo, Ji-Ho;Kang, Kyeong-Ok;Choi, Seung-Jin
    • The Journal of the Acoustical Society of Korea / v.28 no.8 / pp.697-705 / 2009
  • An unsupervised (blind) method is proposed for extracting rhythmic sources from commercial polyphonic music whose number of channels is limited to one. Commercial music signals usually provide no more than two channels, yet they often contain multiple instruments including singing voice. Therefore, instead of relying on conventional models of the mixing environment or statistical characteristics, we introduce source-specific characteristics for separating or extracting sources in these underdetermined environments. In this paper, we concentrate on extracting rhythmic sources from mixtures that also contain harmonic sources. An extension of nonnegative matrix factorization (NMF), called nonnegative matrix partial co-factorization (NMPCF), is used to analyze relationships between the spectral and temporal properties of the given input matrices. Moreover, the temporal repeatability of rhythmic sound sources is exploited as a rhythmic property common to segments of the input mixture signal. The proposed method shows separation quality that is acceptable, though not superior, compared with prior knowledge-based drum-source-separation systems, but it is more broadly applicable because of its blind operation, for example when no prior information is available or the target rhythmic source is irregular.
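
A minimal sketch of plain NMF with multiplicative updates on a magnitude spectrogram, as a baseline for the NMPCF extension described above. The "spectrogram" here is random stand-in data, not real audio, and the update rules are the standard Lee-Seung rules for the Euclidean cost, not the paper's co-factorization objective.

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((513, 200)))   # |STFT|: freq bins x time frames
K = 8                                         # number of spectral components
W = np.abs(rng.standard_normal((513, K)))     # spectral basis
H = np.abs(rng.standard_normal((K, 200)))     # temporal activations
eps = 1e-10

for _ in range(200):                          # multiplicative updates (Euclidean cost)
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# In NMPCF, some factors would additionally be shared across several input
# matrices (e.g., segments of the song), so that repeating rhythmic
# components are co-factorized while harmonic parts are not.
print("reconstruction error:", np.linalg.norm(V - W @ H))
```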

Proof-of-principle Experimental Study of the CMA-ES Phase-control Algorithm Implemented in a Multichannel Coherent-beam-combining System (다채널 결맞음 빔결합 시스템에서 CMA-ES 위상 제어 알고리즘 구현에 관한 원리증명 실험적 연구)

  • Minsu Yeo;Hansol Kim;Yoonchan Jeong
    • Korean Journal of Optics and Photonics / v.35 no.3 / pp.107-114 / 2024
  • In this study, the feasibility of using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm in a multichannel coherent-beam-combining (CBC) system was experimentally verified. We constructed a multichannel CBC system utilizing a spatial light modulator (SLM) as a multichannel phase-modulator array, along with a coherent light source at 635 nm, implemented the stochastic-parallel-gradient-descent (SPGD) and CMA-ES algorithms on it, and compared their performances. In particular, we evaluated the characteristics of the CMA-ES and SPGD algorithms in the CBC system in both 16-channel rectangular and 19-channel honeycomb formats. The evaluation showed that the performances of the two algorithms were, on average, similar under the given conditions; however, the CMA-ES algorithm operated more stably than the SPGD algorithm, as the former showed less variation with the initial phase setting than the latter. To the best of our knowledge, this study is the first proof-of-principle demonstration of the CMA-ES phase-control algorithm in a multichannel CBC system, and it is expected to be useful for future experimental studies of the effects of additional channel-number increments or external phase noise in multichannel CBC systems based on the CMA-ES phase-control algorithm.
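
A minimal sketch of driving a phase-modulator array with CMA-ES, not the experimental code: the objective below is a simulated on-axis combined-beam intensity for an assumed 16-channel system with unknown static phase errors, and the `cma` package plays the role of the phase-control loop.

```python
import numpy as np
import cma

n_channels = 16
rng = np.random.default_rng(1)
true_error = rng.uniform(0, 2 * np.pi, n_channels)   # unknown piston phase errors

def combined_intensity(phase_cmd):
    """Normalized on-axis intensity of the coherently combined beams."""
    field = np.exp(1j * (true_error - np.asarray(phase_cmd)))
    return np.abs(field.sum()) ** 2 / n_channels ** 2

es = cma.CMAEvolutionStrategy(n_channels * [0.0], 0.5)
while not es.stop():
    candidates = es.ask()                                       # candidate phase commands
    costs = [1.0 - combined_intensity(x) for x in candidates]   # minimize 1 - intensity
    es.tell(candidates, costs)

print("final combined intensity:", combined_intensity(es.result.xbest))
```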

Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.227-240 / 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on the concepts and ideas in published papers or registered patents to develop new theories and technologies, which in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based approaches: the former lacks the ability to analyze technological information in detail, while the latter cannot identify the relationships between technologies. To overcome the limitations of both, this study blends the two methods and proposes a keyword-network-based analysis methodology. We collected significant technology information from patents related to Light Emitting Diodes (LED) through text mining, built a keyword network, and performed a community network analysis on the collected data. The results of the analysis are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties; it ranges between 0 and 1, with higher values indicating denser networks and lower values sparser ones. In real-world networks the density varies with network size, and increasing the size of a network generally decreases its density. The clustering coefficient is a network-level measure that illustrates the tendency of nodes to cluster into densely interconnected modules; it captures the small-world property, in which a network can be highly clustered while still having a small average distance between nodes despite a large number of nodes. The low density of the patent keyword network therefore means that its nodes are connected only sparsely overall, while the high clustering coefficient shows that nodes are closely connected to one another locally. Second, the cumulative degree distribution of the patent keyword network, like other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is likely to attain further new links as the network evolves. Unlike normal distributions, a power-law distribution has no representative scale: one cannot pick a representative or average value because there is always a considerable probability of finding much larger values. Networks with power-law distributions are therefore often referred to as scale-free networks. The presence of a heavy-tailed, scale-free distribution is the fundamental signature of the emergent collective behavior of the actors who form the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution implies that preferential attachment explains the origin of heavy-tailed distributions in the growing patent keyword network. Third, we found that among keywords flowing into a particular field, the vast majority of keywords with new links join existing keywords in the associated community to form the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses. Furthermore, the keyword-combination information derived from the proposed methodology enables one to forecast which concepts will combine to form a new patent dimension and to refer to those concepts when developing a new patent.
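
A minimal sketch of the three network measures discussed above (density, clustering coefficient, and the degree distribution inspected for power-law behavior), using networkx on a toy edge list; the keyword co-occurrence edges are invented, not the LED patent data.

```python
import networkx as nx
from collections import Counter

# hypothetical co-occurrence edges between patent keywords
edges = [("LED", "phosphor"), ("LED", "substrate"), ("LED", "epitaxy"),
         ("phosphor", "substrate"), ("epitaxy", "GaN"), ("GaN", "sapphire"),
         ("LED", "GaN"), ("substrate", "sapphire")]
G = nx.Graph(edges)

print("density:", nx.density(G))                        # ties / possible ties
print("clustering coefficient:", nx.average_clustering(G))

# cumulative degree distribution (inspected on log-log axes for a power law)
degrees = [d for _, d in G.degree()]
counts = Counter(degrees)
total = G.number_of_nodes()
cumulative = {k: sum(v for d, v in counts.items() if d >= k) / total
              for k in sorted(counts)}
print("cumulative degree distribution:", cumulative)
```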