Proceedings of the Korea Intelligent Information System Society Conference (한국지능정보시스템학회:학술대회논문집)
Korea Intelligent Information System Society
- Semi Annual
Domain
- Information/Communication > Information Processing Theory
- Economics/Management > Management Information/e-Business
2001.01a
-
The explosion of the Internet, and most recently e-commerce, has caused great interest in agent technologies. The development of virtual environments has also increased in the last few years. A growing number of real-time applications use graphics with photorealistic quality, especially in the field of training, but also in the areas of design and ergonomic research. We describe an attempt to develop a framework that provides customers with multimedia information and interactive experiences in a virtual shopping environment. The application presented consists of a virtual visit to a music store where the user is guided by an intelligent agent named Lizeth, which responds in real time to the user's requests with precise information about music, artists' biographies, prices, and related products to help the user make decisions. The potential of UML and the Java programming language is discussed to show their application in the field of intelligent agents as mediators in shopping processes. We conclude that the proposed framework leads to the creation of applications with a potentially significant impact on the development of e-commerce systems embedded in virtual environments.
-
There is no doubt that agents play an increasingly predominant role in e-commerce, whether in business-to-consumer or business-to-business applications. However, most current e-commerce agents only support a single bid for a product at a fixed price. Although price is an important factor, it is not the only concern of either businesses or consumers, and it is doubtful whether such agents satisfy both parties. Negotiation on a variety of issues is needed in order to reach an agreement. In this paper, a computational agent negotiation (CAN) model is proposed to facilitate multiple-issue negotiation via an agent. The main contribution of the CAN model is that it enables an agent to participate actively in the negotiation with various kinds of feedback instead of simply an agreement or rejection.
-
Many studies have investigated how to find the dispatching rule that is most compatible with a given manufacturing system state, and most of them select the dispatching rule using simulation results. This paper addresses two research topics: a clustering method for manufacturing system states using simulation, and a search method for the dispatching rule most compatible with a manufacturing system state. The manufacturing system state variables are given to an ART II neural network as input, and the network is trained to cluster the system states. After training, the ART II neural network classifies any system state as one of the clustered states. The simulation results using the clustered system state information are compared with those of various dispatching rules, and the dispatching rule most compatible with each system state is identified. Finally, two knowledge bases are constructed. Simulation experiments compare the proposed methods with other scheduling methods, and the results show the superiority of the proposed knowledge base.
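The classification step described above can be sketched in a few lines. This is a heavily simplified, ART-like nearest-centroid assignment with a vigilance threshold, not the paper's actual ART II network; the function name, the Euclidean distance, and the `vigilance` value are all illustrative assumptions.

```python
# Hedged sketch: an ART-like clustering step for manufacturing system
# states. A state joins the nearest existing cluster if it is within
# the vigilance threshold; otherwise it founds a new cluster.
import math

def classify_state(state, clusters, vigilance=0.5):
    """Return the index of the cluster the state is assigned to.
    `clusters` is a mutable list of cluster centers (lists of floats)."""
    best_i, best_d = None, float("inf")
    for i, center in enumerate(clusters):
        d = math.dist(state, center)          # Euclidean distance
        if d < best_d:
            best_i, best_d = i, d
    if best_i is not None and best_d <= vigilance:
        return best_i                         # close enough: reuse cluster
    clusters.append(list(state))              # novel state: open a new cluster
    return len(clusters) - 1
```

Once states are clustered this way, each cluster index can key a knowledge base entry that records the best-performing dispatching rule found by simulation.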
-
The number of hidden neurons in a feed-forward neural network is generally decided on the basis of experience. This usually results in a lack or a redundancy of hidden neurons, causing either insufficient capacity for storing information or over-learning. This research proposes a new method for optimizing the number of hidden neurons based on information entropy. First, an initial neural network with enough hidden neurons is trained on a set of training samples. Second, the activation values of the hidden neurons are calculated by feeding in the training samples that the trained network identifies correctly. Third, all candidate partitions are tried and their information gains are calculated, and a decision tree correctly dividing the whole sample space is constructed. Finally, the important and relevant hidden neurons included in the tree are found by searching the whole tree, and the remaining redundant hidden neurons are deleted. Thus, the number of hidden neurons is decided. The proposed method is applied to building a neural network with the best number of hidden units for tea quality evaluation, and the result shows that the method is effective.
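The information-gain calculation at the heart of the third step can be sketched as follows. This is a minimal illustration, assuming a binary split of one hidden neuron's activation values at a threshold; the paper's actual partitioning scheme may differ.

```python
# Hedged sketch: score a candidate partition of one hidden neuron's
# activation values by its information gain over the class labels.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(activations, labels, threshold):
    """Gain from splitting the samples on activation <= threshold."""
    left = [y for a, y in zip(activations, labels) if a <= threshold]
    right = [y for a, y in zip(activations, labels) if a > threshold]
    n = len(labels)
    split_entropy = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - split_entropy
```

Neurons whose activations yield high-gain splits end up in the decision tree and are kept; the rest are pruned as redundant.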
-
This paper describes the modeling of a work environment for the extraction of abstract operation rules for cooperative work with multiple agents. We propose a modeling method using a potential field and apply it to a box-pushing problem, in which a box must be moved from a start to a goal by multiple agents. The agents follow the potential values when they move and work in the environment, which is represented as a grid space. The potential field is generated by a Genetic Algorithm (GA) for each agent: the GA explores the positions of peak potential values in the grid space, and the potential is then spread across the grid by a potential diffusion function in each cell. Because it is difficult to find a suitable setting for the peak positions by hand coding, we use an evolutionary computation approach, which makes it possible to explore the large search space. We conduct experiments on environment modeling using the proposed method and verify the performance of the GA exploration. We then classify several types from the acquired environment models and extract abstract operation rules. As a result, we identify several types of environment models and operation rules by observation, and the performance of the GA exploration is almost the same as that of a hand-coded setting, as both show nearly the same performance in terms of the agents' energy consumption and the number of work steps from the start point to the goal point.
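The diffusion step that spreads each peak across the grid can be sketched as below. The decay function (exponential attenuation by Manhattan distance) and the max-combination of peaks are illustrative assumptions; the paper's actual potential diffusion function is not specified here.

```python
# Hedged sketch: spread peak potential values across a grid so that
# agents can follow the gradient toward the peaks.
def diffuse_potential(width, height, peaks, decay=0.5):
    """peaks: list of (x, y, value) tuples placed by the GA.
    Each cell takes the maximum over all peaks of the peak value
    attenuated by decay**distance (Manhattan distance)."""
    field = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for px, py, v in peaks:
                d = abs(x - px) + abs(y - py)
                field[y][x] = max(field[y][x], v * decay ** d)
    return field
```

A GA individual would then encode the peak positions, and its fitness would be evaluated by simulating agents that climb this field.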
-
The Korean government and companies have put a great deal of effort into exchanging electronic documents with their partners. As a result, Korean EDI standards were established by the Korean EDIFACT Committee, and the standards have been used by companies and governmental organizations in Korea. However, the Korean export customs clearance EDI system is based on VANs (Value Added Networks), and one VAN company has a monopolistic right to relay EDI documents to the Korean Customs Service. This leads to a number of problems such as inconvenient software, expensive transmission fees, and difficulty connecting with the in-house systems of user companies. To solve these problems, several solutions and systems have been suggested, one of which is Internet EDI. We suggest a new export customs clearance EDI system running on the Web. This system is basically an Internet EDI system, but we have developed it using XML instead of HTML; XML is a new markup language with merits such as separating data from document style. The system consists of several modules: schema/style/template management, XML/EDI document management, XML/EDI transformation, EDI transmission, certification management, and log management. The system can also be used with traditional EDI systems based on UN/EDIFACT standards. We discuss the advantages and disadvantages of an XML/EDI system for customs clearance. The development of this system will be a leading study for XML/EDI standards in export clearance EDI systems.
-
Modern networks are increasingly moving towards peer-to-peer architectures in which routing tasks are not limited to dedicated routers; instead, all computers in a network take part in routing. Since there are no specialized routers, each node performs some routing tasks, and information passes from one neighbouring node to another not as dumb data but as intelligent virtual agents, or active code, that perform tasks by executing at intermediate nodes along their itinerary. User applications can keep running, free to do other tasks while the agents take care of routing. Because of their inherent 'intelligence', mobile agents are better able to execute complex routing tasks and handle unexpected situations than traditional routing techniques. In a modern dynamic network, users connect frequently, change neighbours, and disconnect at a rapid pace, and unexpected link failures can occur as well. A mobile agent based routing system should be able to react to these situations in a fast and efficient manner, so that information about topology changes propagates quickly while the network does not become burdened with traffic. We intend to build such a system.
-
The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the branch of machine learning in which an agent learns, from indirect and delayed rewards, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment, yet they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments, due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to the rewards. Therefore, in addition to handling large state spaces effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to the dynamic positioning of robot soccer agents.
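The weighted-combination idea that distinguishes AMMQL from fixed-weight MQL can be sketched as follows. The weight-update rule shown here (reward-proportional adjustment with renormalization) is an illustrative assumption, not the paper's exact mediation mechanism.

```python
# Hedged sketch of the AMMQL core: each module keeps its own Q-table,
# and module outputs are combined with weights that adapt to each
# module's contribution to recent rewards.
def select_action(q_tables, weights, state, actions):
    """Pick the action maximizing the weighted sum of module Q-values.
    Each q_table maps (state, action) -> value; missing entries are 0."""
    def combined_q(a):
        return sum(w * q.get((state, a), 0.0)
                   for w, q in zip(weights, q_tables))
    return max(actions, key=combined_q)

def update_weights(weights, contributions, lr=0.1):
    """Shift weight toward modules that contributed more reward,
    then renormalize so the weights remain a distribution."""
    raw = [w + lr * c for w, c in zip(weights, contributions)]
    total = sum(raw)
    return [r / total for r in raw]
```

In fixed MQL the `weights` would stay constant; the adaptive update is what lets the combined policy track environmental change.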
-
We consider a hyper-redundant system that consists of many uniform units. The system has many degrees of freedom and can accomplish various tasks. Applying reinforcement learning to a hyper-redundant system is very attractive because various behaviors for various tasks can be acquired automatically. In this paper we present a new reinforcement learning algorithm, "Q-learning with propagation of motion", designed for multi-agent systems with strong connections. The proposed algorithm needs only one small Q-table even for a large-scale system, so the hyper-redundant system can learn effective behavior with it. In this algorithm, only one leader agent learns its own behavior using its local information, and the motion of the leader is propagated to the other agents with a time delay. The reward of the leader agent is computed using whole-system information, so as the effective behavior of the leader is learned, the effective behavior of the system is acquired. We apply the proposed algorithm to a snake-like hyper-redundant robot. The condition necessary for the system to be a Markov decision process is discussed, and a computer simulation of learning locomotion is demonstrated. From the simulation results we find that the task of moving the robot to a desired point is learned and a winding motion is acquired. We conclude that our proposed system, and our analysis of the condition under which the system is a Markov decision process, are valid.
-
In this paper, we propose a virtual auction system based on information agents, which we call MultiHammer. MultiHammer can be used for studying and analyzing online auctions; it provides functions for implementing a meta online auction site and an experiment environment. We have been using MultiHammer as an experiment environment for BiddingBot, which aims at assisting users in bidding simultaneously in multiple online auctions. In order to analyze the behavior of BiddingBot, we would need to purchase many items, and it is hard for us to prepare sufficient funds to demonstrate the usability and advantages of BiddingBot; MultiHammer enables us to analyze its behavior effectively. MultiHammer consists of three types of agents, for information collecting, data storing, and auctioning. Agents for information collecting work as wrappers. To make agents work as wrappers, we need to realize software modules for each online auction site, and implementing these modules requires a lot of time and patience; to address this problem, we designed a support mechanism for developing the modules. Agents for data storing record the data gathered by the agents for information collecting. Agents for auctioning provide online auction services using the data recorded by the agents for data storing. By recording the activities in auction sites, MultiHammer can recreate any situation and trace auctions for experimentation. Users can participate in virtual auctions using the same information as in real online auctions, and can also participate in real auctions via the wrapper agents for information collecting.
-
Most current Internet auction systems are simple server programs that partly automate the functions of a conventional auction house. These systems do not provide sufficient independence, distribution, and parallelism among the functions of the conventional auction house. Some functions are automated by the server program, but users still need to perform the repetitive tasks of monitoring the dynamic progress of an auction, deciding a proper bid price, and submitting the bid. Another problem is that they support only single auctions such as the English auction and the Dutch auction, but not double auctions, which are superior to single auctions with respect to speed, efficiency, and the fair distribution of profit. In this paper, we present the design and implementation of an agent-based continuous double auction system, called CoDABot, in order to mitigate the limitations of current auction systems. CoDABot supports continuous double auctions, provides various bidding agents for users to select from, and has been implemented as a multi-agent system that realizes more independent, distributed, and parallel subsystems.
-
To get the items that a buyer wants in Internet auctions, the buyer must search for the items through several auction sites. When the bidding starts, the buyer needs to connect to these auction sites frequently to monitor the bid status and re-bid. A reserve-price auction reduces the number of connections, but it limits the user's bidding strategy. Another problem is equity between the buyer and the seller: both should profit together within proper limits. In this paper, we propose an auction agent system using a collaborative mobile agent and a brokering mechanism, called MoCAAS (Mobile Collaborative Auction Agent System), which mediates between the buyer and the seller and executes bidding asynchronously and autonomously. This reduces connection costs, offers more intelligent bidding, and solves the equity problem.
-
Agent-mediated electronic markets have been a growing area of agent research and development in recent years. Many e-commerce sites exist on the Internet (e.g., Priceline.com, Amazon.com), and these sites have proposed new business models for effective and efficient commerce activity. Intelligent agents have been studied very widely in the field of artificial intelligence; for the purposes of this paper, an agent can act autonomously and collaboratively in a network environment on behalf of its users. It is hard for people to effectively and efficiently monitor, buy, and sell at multiple e-commerce sites, but if we introduce agent technologies into e-commerce systems, we can expect to further enhance the intelligence of their support. In this paper, we propose a new cooperation mechanism among seller agents, based on exchanging their goods, in our agent-mediated electronic market system, G-Commerce. In G-Commerce, seller agents and buyer agents negotiate with each other. In our model, seller agents cooperatively negotiate in order to effectively sell goods in stock, while buyer agents cooperatively form coalitions in order to buy goods at discount prices. Our current experiments show that the exchanging mechanism enables seller agents to effectively sell goods in stock, and we also present the Pareto optimality of the exchanging mechanism.
-
Fraud detection is a difficult problem, requiring huge computational resources and complicated search activities, and researchers have long struggled with it. Even though a few research approaches have claimed that their solutions are much better than others, the research community has not found 'the best solution' that fits every fraud. Because of the evolving nature of frauds, a novel and self-adapting method should be devised. In this research, a new approach is suggested for detecting frauds in insurance claims and credit card transactions. Based on evolutionary computing, the method is self-adjusting and evolves enough to generate a new set of decision-making rules. We believe that this new approach will provide a promising alternative to conventional ones in terms of computational performance and classification accuracy.
-
In this paper, a simple and systematic control design method is proposed for a discrete-time Takagi-Sugeno (TS) fuzzy system. It employs parallel distributed compensation (PDC) to determine the structure of a fuzzy controller so as to make all the Lyapunov exponents of the controlled TS fuzzy system strictly positive. This approach is proven to be mathematically rigorous for the anticontrol of chaos in a TS fuzzy system, in the sense that any given discrete-time TS fuzzy system can be made chaotic by the designed PDC controller along with the mod operation. A numerical example is included to visualize the anticontrol effect.
-
Prediction of the top (service) speeds of high-speed trains and the configuration design of their trainsets have been studied using a neural network system. The traction system of a high-speed train is composed of transformers, motor blocks, and traction motors, whose locations and numbers in the trainset formation must be determined in the early stage of conceptual train design. The components of the traction system are the heaviest parts of a train, so they strongly influence the top speed of a high-speed train. Prediction of the top speeds has been performed mainly with data associated with the traction system, using the widely used backpropagation neural network. The neural network has been trained with data from high-speed trains such as the TGV, ICE, and Shinkansen. Configuration design of the trainset determines the number of motor cars and traction motors, and the weight and power of the train. Configuration results from the neural network are more accurate if the network is trained with data from the same type of train as the one being designed.
-
This study investigates the effectiveness of time delay neural networks (TDNN) in time-dependent prediction domains. Although it is a well-known fact that the back-propagation neural network (BPN) performs well in pattern recognition tasks, the method is limited in that it can only learn an input mapping of static (or spatial) patterns that are independent of time or sequence. Our preliminary results show that the accuracy of the TDNN is higher than that of a standard BPN with time lags. The proposed approaches are demonstrated in the stock market prediction domain.
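The input encoding that distinguishes a TDNN from a static BPN can be sketched as below: each training pattern is a sliding window of delayed values, so temporal context becomes part of the input vector. The function name and the one-step-ahead target are illustrative assumptions.

```python
# Hedged sketch: build sliding-window training pairs for a
# time-delay network from a 1-D series.
def make_windows(series, delay):
    """Return (window, next_value) pairs, where each window holds
    `delay` consecutive past values and the target is the next value."""
    pairs = []
    for t in range(delay, len(series)):
        pairs.append((series[t - delay:t], series[t]))
    return pairs
```

For stock data, `series` would be a price or return sequence; the windows feed the network's input layer in place of a single static pattern.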
-
In this paper, we evaluate and contrast four neural network rule extraction approaches for credit scoring. Experiments are carried out on three real-life credit scoring data sets, and both the continuous and the discretised versions of all data sets are analysed. The rule extraction algorithms, Neurolinear, Neurorule, Trepan, and Nefclass, have different characteristics with respect to their perception of the neural network and their way of representing the generated rules or knowledge. It is shown that Neurolinear, Neurorule, and Trepan are able to extract very concise rule sets or trees with high predictive accuracy when compared to classical decision tree (rule) induction algorithms like C4.5(rules). In particular, Neurorule extracted easy-to-understand and powerful propositional if-then rules for all discretised data sets. Hence, the Neurorule algorithm may offer a viable alternative for rule generation and knowledge discovery in the domain of credit scoring.
-
Since the computing environment changes very rapidly, estimating software effort is difficult because it is not easy to collect a sufficient number of relevant cases from historical data. If we pinpoint the cases, the number of cases becomes too small; if we adopt too many cases, the relevance declines. In this paper we therefore attempt to balance the number of cases and their relevance. Since much research on software effort estimation has shown that neural network models perform at least as well as other approaches, we selected the neural network model as the basic estimator. We propose a search method that finds the right level of relevant cases for the neural network model. For a selected case set, eliminating the qualitative input factors with identical values can reduce the scale of the neural network model. Since there is a multitude of combinations of case sets, we need to search for the optimal reduced neural network model and the corresponding case set. To find the quasi-optimal model from the hierarchy of reduced neural network models, we adopted the beam search technique and devised the Case-Set Selection Algorithm. This algorithm can be adopted in case-adaptive software effort estimation systems.
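The beam-search backbone of such a case-set selection algorithm can be sketched generically. The `expand` and `score` callbacks are assumptions: in the paper's setting, a candidate would be a case set, expansion would refine it, and the score would reflect the estimation accuracy of the corresponding reduced network.

```python
# Hedged sketch: generic beam search over a hierarchy of candidates,
# keeping only the best `beam_width` candidates at each level.
def beam_search(initial, expand, score, beam_width=2, depth=3):
    """expand(candidate) -> list of successor candidates.
    score(candidate) -> number to maximize. Returns the best candidate seen."""
    beam = [initial]
    best = initial
    for _ in range(depth):
        candidates = [c for b in beam for c in expand(b)]
        if not candidates:
            break                                # nothing left to refine
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]           # prune to the beam width
        if score(beam[0]) > score(best):
            best = beam[0]
    return best
```

The beam width trades search cost against the risk of pruning away the path to the true optimum, which is why the result is described as quasi-optimal.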
-
A Controlled Ecological Life Support System (CELSS) is essential for humans to live for a long time in a closed space such as a lunar or Mars base. Such a system may be extremely complex, with many facilities circulating multiple substances, so controlling the whole CELSS is a very difficult task. By regarding the facilities constituting the CELSS as agents, and their status and actions as information, the whole CELSS can be treated as a multi-agent system (MAS). Regarding a CELSS as a MAS brings three advantages: first, the MAS does not need a central computer; second, the expandability of the CELSS increases; and third, its fault tolerance rises. However, it is difficult to describe the cooperation protocol among agents in a MAS. Therefore, in this study we propose to apply reinforcement learning (RL), because RL enables an agent to acquire a control rule automatically. To show that MAS and RL are effective methods, we have implemented the system in Java, which easily provides the distributed environment that is a characteristic feature of agents. In this paper, we report the simulation results for material circulation control of the CELSS by the MAS and RL.
-
Mobile agent technology has been the subject of much attention in the last few years, mainly due to the proliferation of distributed software technologies combined with the distributed AI research field. In this paper, we present a design of communication networks of agents that cooperate with each other in forwarding messages to a specific mobile agent, in order to make the overall system location transparent. To make the material accessible to general intelligent systems researchers, we present the general ideas abstractly in terms of graph theory. In particular, a proxy network is defined as a directed acyclic graph satisfying certain structural conditions. It turns out that the definition ensures a kind of reliability of the network, in the sense that as long as at most one proxy agent is abnormal, there exists a communication path from every proxy agent to the target agent that does not pass through the abnormal proxy. As the basis for the implementation of this scheme, an appropriate initial proxy network is specified, and the dynamic nature of the network is represented by a set of graph transformation rules. It is shown that these rules are sound, in the sense that all graphs created from the initial proxy network by zero or more applications of the rules are guaranteed to be proxy networks. Finally, we discuss some implementation issues.
-
Business forecasting is vital to the success of business, and there has been an increasing demand for business forecasting software systems that assist humans in forecasting. However, the uncertain and complex nature of forecasting makes it challenging to analyze, design, and implement software solutions for it. Traditional forecasting systems, whose models are trained on small collections of historical data, cannot meet such challenges given the information explosion on the Internet. This paper presents an agent-oriented approach for building intelligent business forecasting software systems with high reusability. Although agents have been applied successfully to many application domains, little work has been reported on using the emerging agent-oriented technology for business forecasting. The contribution of this paper is that it explores how agents can be used to help humans manage the various business forecasting processes in the whole business forecasting life cycle.
-
This paper introduces an agent system that assists users' work on the Internet. First, the agent receives requests from the user or from other agents. Since there are various kinds of requests, it is difficult to describe a complete set of request-handling rules in advance; therefore, the agent makes a plan by referring to old cases. The agent then executes the plan, which is a sequence of basic operations. If the agent fails to execute a basic operation or to create a plan, it makes a new plan by interacting with the user or other agents. Finally, the agent stores this new case, together with the user's evaluation score, in the case base.
-
It is not easy for business workers to find the proper information for their jobs, because of the following problems: (1) information exists in various internal and external information systems with diverse formats, (2) a worker cannot estimate the relevance of all the information, and (3) a worker cannot select the proper information from a large volume of information. The requirements for supporting systems are therefore: (1) searching for relevant information across various internal and external information systems, (2) providing an integrated view of information for workers, and (3) seamlessly integrating information processing and decision making for business workers. We propose an agent system to fulfill these requirements. We focus on the purchasing job and have developed a prototype system, PIPA (Purchasing Information Process Agent). PIPA performs not only information processing jobs (information search and gathering, display, and business form making) but also decision-making support jobs (evaluation of supplier candidates and application of sourcing policy) for the sourcing manager. The framework of PIPA can be applied to other information processing and decision-making jobs.
-
Mobile agents have the unique ability to transport themselves from one system in a network to another. This ability allows a mobile agent to move to a system that contains the services it wants to interact with, and then to take advantage of being in the same host or network as those services. However, most conventional mobile agent systems require the user or programmer to give the mobile agent a detailed behavioral script for accomplishing its task, and at runtime such agents simply behave according to the fixed script given by the user. It is therefore impossible for conventional mobile agents to autonomously build and execute their own plans in consideration of their ultimate goals and the dynamic world state. One way to overcome these limitations is to develop an intelligent mobile agent system embedding a reactive planner. In this paper, we design both a model of agent mobility and a model of inter-agent communication based upon the representative reactive planning agent architecture JAM. We then develop IMAS, an intelligent mobile agent system with reactive planning capability, by implementing additional basic actions for agent movement and inter-agent communication within JAM according to the predefined models. Unlike conventional mobile agents, IMAS agents can adapt their behaviors to dynamic changes in their environments as well as build their own plans autonomously, and thus show higher flexibility and robustness than conventional agents.
-
Mobile agent technologies are becoming popular as a means of accessing network resources efficiently. In order for mobile agents to be accepted as a reliable technology for applications such as e-commerce, a proper framework for mobile databases should be established. In this paper, we first discuss the weak points of current mobile computing systems, which mostly result from the limitations of current mobile computing technology, including frequent disconnection, limited battery capacity, low-bandwidth communication, and reduced storage capacity. These weak points also cause transaction problems when mobile devices issue transactions. In order to eliminate these transaction problems in the mobile environment, we propose a mobile database framework, Proxyne, based on proxies and SyncML.
-
A recommendation system tracks the past actions of a group of users to make recommendations to individual members of the group. Computer-mediated marketing and commerce have grown rapidly, so interest in various recommendation procedures is increasing. We introduce a recommendation methodology by which e-commerce sites can suggest products or services to their customers. The suggested methodology is based on web log analysis, product taxonomy, and association rule mining. A product recommendation system was developed based on the suggested methodology and applied to a Korean Internet shopping mall. The validity of the recommendation system is discussed through the analysis of a real Internet shopping mall case.
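The association-rule-mining step of such a methodology can be sketched as follows. This is a minimal illustration restricted to single-item rules "if a customer bought X, recommend Y"; the thresholds, the item names in the usage example, and the single-antecedent restriction are all simplifying assumptions.

```python
# Hedged sketch: mine one-to-one association rules from purchase
# transactions, keeping rules whose support and confidence clear
# the given thresholds.
from itertools import permutations

def mine_rules(transactions, min_support=0.3, min_confidence=0.6):
    """transactions: list of item sets. Returns (x, y, support, confidence)
    tuples meaning 'customers who bought x also bought y'."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    rules = []
    for x, y in permutations(items, 2):
        support_x = sum(1 for t in transactions if x in t) / n
        support_xy = sum(1 for t in transactions if x in t and y in t) / n
        if support_xy >= min_support and support_xy / support_x >= min_confidence:
            rules.append((x, y, support_xy, support_xy / support_x))
    return rules
```

In the full methodology, the product taxonomy would group items into categories before mining, and web logs would supply implicit transactions in addition to purchases.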
-
Recently, the wireless Internet has been spreading extensively, and people spend a large part of their time accessing information using mobile devices. With the rapid growth of online electronic commerce, the use of mobile devices creates a new paradigm that provides users with location-independent, real-time service. Although this new paradigm has advantages, the limited processing speed, low bandwidth, and low battery capacity of mobile devices, together with the high rate of wireless network errors, cause a great deal of overhead during service interactions with the server. In this paper, we suggest an autonomous service delivery system that provides mobile agent capability to users who cannot maintain a connection. We have developed the system based on Java mobile agent technology. Using this system, we can provide more effective service when a user sends service requests through a mobile device with limited resources, and we can manage the contact server dynamically when new services are added.
-
One of the most frequent uses of the Internet is data gathering. The data can concern many themes, but perhaps one of the most demanded fields is tourist information. Normally, the databases that support these systems are maintained manually. However, there is another approach: to extract data automatically, for instance from textual public information existing on the Web. This approach consists of extracting data from textual sources (public or not) and serving it, totally or partially, to users in the form they want. The obtained data can automatically maintain the databases that support different systems, such as WAP mobile telephones or commercial systems accessed through natural language interfaces. This process has three main actors. The first is the information itself, which is present in a particular context. The second is the information supplier (extracting data from the existing information), and the third is the user or information searcher. This added-value chain reuses and gives value to existing data, even when the data were not originally intended for this final use. The main advantage of this approach is that it makes the information source independent of the information user: the original information belongs to a particular context, which is not necessarily the context of the user. This paper describes an application based on this approach, developed by the authors in the FLEX ESPRIT IV project no. EP29158, in the work package "Knowledge Extraction & Data Mining", where information captured from digital newspapers is extracted and reused in a tourist information context.
-
This paper introduces a method of constructing expert systems in an integrated environment for automatic software design. This integrated environment is applicable from top-level system architecture design and data flow diagram design down to flow charts and coding. The system integrates three CASE tools, FSD (Functional Structure Diagram), DFD (Data Flow Diagram), and the structured chart PAD (Problem Analysis Diagram), with respective expert systems capable of automatic design by reusing past designs. These expert systems are constructed through the systematic acquisition of design knowledge stemming from the systematic design work process of well-matured developers. The design knowledge is automatically acquired from the respective documents and stored in the respective knowledge bases; by reusing it, a similar software system may be designed automatically. To develop these expert systems in a short period, the design knowledge is expressed in a unified frame structure, and the functions of the expert system units are partitioned into mono-functions and then standardized as components. As a result, the design cost of an expert system can be reduced to standard work procedures. Another feature of this paper is the integrated environment for automatic software design itself. The system features an essentially zero start-up cost for automatic design, resulting in substantial savings of design man-hours over the design life cycle, and an expected increase in software productivity after enough design experience has been accumulated.
-
Within a mechanical system such as an automobile, the number of standard machine parts is increasing, so parts selection is becoming more important than ever before. Selecting appropriate bearings in the preliminary design phase of a machine is likewise important. In this paper, three decision-making approaches are compared to find a model appropriate to the bearing selection problem. An artificial neural network, trained with real design cases, is used to select a bearing mechanism in the first step. Then, the subtype of the bearing is selected by the weighting factor method. Finally, the types of peripherals, such as lubrication methods, are determined by a rule-based expert system.
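The weighting factor method used in the second step amounts to a weighted sum over selection criteria. A minimal sketch, with purely illustrative weights, criteria, and ratings (not values from the paper):

```python
def weighted_score(ratings, weights):
    """Weighted sum of criterion ratings (ratings on a 0-10 scale)."""
    return sum(weights[c] * ratings[c] for c in weights)

# Hypothetical criteria weights and candidate bearing-subtype ratings
weights = {"load_capacity": 0.4, "speed_limit": 0.3, "cost": 0.3}
candidates = {
    "deep_groove_ball": {"load_capacity": 6, "speed_limit": 9, "cost": 8},
    "cylindrical_roller": {"load_capacity": 9, "speed_limit": 6, "cost": 5},
}
# Pick the subtype with the highest weighted score
best = max(candidates, key=lambda name: weighted_score(candidates[name], weights))
print(best)
```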
-
When a semiconductor package is assembled, various materials such as die attach adhesive, lead frame, EMC (Epoxy Molding Compound), and gold wire are used. For better preconditioning performance, it is important both to select superior packaging materials and to combine them by studying the compatibility of their properties. However, finding proper packaging material sets is not easy, since factors such as package design, substrate design, substrate size, substrate treatment, die size, die thickness, die passivation, and customer requirements must all be considered. This research applies the case-based reasoning (CBR) technique to solve this problem, utilizing prior experienced cases. Our particular interest lies in building a decision support model to aid the selection of a proper die attach adhesive. The preliminary results show that this approach is promising.
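The retrieval step of CBR can be sketched as weighted attribute matching against past cases; the attributes, weights, and adhesive names below are hypothetical, for illustration only:

```python
def retrieve_case(case_base, query, weights):
    """Return the past case most similar to the query.
    Similarity is a weighted count of matching attribute values."""
    def similarity(case):
        return sum(w for attr, w in weights.items() if case[attr] == query[attr])
    return max(case_base, key=similarity)

# Toy case base of previously experienced packaging configurations
cases = [
    {"die_size": "small", "substrate": "BGA", "adhesive": "epoxy-A"},
    {"die_size": "large", "substrate": "leadframe", "adhesive": "film-B"},
]
weights = {"die_size": 0.6, "substrate": 0.4}
query = {"die_size": "large", "substrate": "BGA"}
# The retrieved case's adhesive is reused as the recommendation
print(retrieve_case(cases, query, weights)["adhesive"])
```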
-
In this paper, we explore the possibilities of embedding Qualitative Reasoning techniques, specifically the Qualitative Process Theory (QPT), in the field of inorganic chemistry. The target field of implementation is qualitative chemical analysis and laboratory simulation. By embedding this technique in educational software, we aim to combine theory and practice in a single package. The system is able to generate reasoning and explanations based on chemical theories, helping students master basic chemistry knowledge and practical skills. We also review the suitability of embedding QPT techniques in chemistry in general by comparing examples from both fields.
-
Automated essay grading has been proposed for over thirty years, but only recently have practical implementations been constructed and tested. This paper investigates the role of the nearest-neighbour algorithm within information retrieval as a way of grading essays automatically, in a system called the Automated Essay Grading System. It is intended to offer teachers individualized assistance in grading students' essays. The system involves several processes: indexing, structuring the model answer, and grade processing. The indexing process comprises document indexing and query processing, which represent the documents and the query. Structuring the model answer amounts to preparing the marking scheme, and grade processing is the process of assessing the essay. To test the effectiveness of the developed algorithms, they were tested against history texts in Malay. The results show that information retrieval and the nearest-neighbour algorithm are a practical combination that offers acceptable performance for grading essays.
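A minimal sketch of nearest-neighbour grading over term-frequency vectors, assuming cosine similarity as the retrieval metric (the marked essays and grades below are invented, and a real system would use the marking scheme rather than raw word counts):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two term-frequency vectors (Counters)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(essay, marked_essays):
    """Assign the grade of the most similar already-marked essay (1-NN)."""
    vec = Counter(essay.lower().split())
    best = max(marked_essays,
               key=lambda m: cosine(vec, Counter(m[0].lower().split())))
    return best[1]

marked = [
    ("the treaty ended the war in 1945", "A"),
    ("trade routes expanded across the region", "C"),
]
print(grade("the war ended with a treaty", marked))
```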
-
Different kinds of information are used when solving the tasks that arise in the life cycle of an applied knowledge-based system (KBS). Many of these tasks are still under investigation, and their solving methods are often researched independently of each other due to the complexity of the tasks. As a result, systems that realize these methods turn out to be incompatible and therefore cannot be used together in the life cycle of a KBS. The following problem arises: how to support the full life cycle of a KBS. This paper introduces a class of computer knowledge banks intended to support the full life cycle of KBSs. The primary tasks that arise in the full life cycle of a KBS are analyzed. The architecture of a knowledge bank of the introduced class is presented, including an Information Content, a Shell of the Information Content, and a Software Content. General requirements on these components are formulated on the basis of the analysis; these requirements depend on the current state of understanding of the life cycle of KBSs.
-
A methodology of ontology design and a computer system supporting ontology design are needed. Our research goals include the development of a methodology for ontology design and its support environment. Although several systems for building ontologies have been implemented, they do not take ontological theory into much consideration. We discuss how to apply the "role-concept" and "relationship" notions in our environment, named Hozo, for creating and using ontologies. We present the architecture, the functionality of its modules, its interface, and some experiences in the design and use of ontologies.
-
It is highly desirable for research in the artificial intelligence area to be able to manage knowledge as human beings do. One remarkable property of human knowledge management is that it is active: human beings actively manage their knowledge, resolve conflicts, and make inferences. This makes a major difference from artificial intelligent systems. This paper focuses on the features of human knowledge systems that underlie this active nature. With these features extracted, further research can construct a suitable infrastructure to support them and build a man-made active knowledge management system. This paper proposes 10 features that human beings follow in maintaining their knowledge. We believe that realizing these features with suitable knowledge representation/decision models and software agent technology will advance the evolution of active knowledge management systems.
-
In this paper, we describe a new method for selecting information sources in a distributed environment. Recently, there has been much research on distributed information retrieval, that is, information retrieval (IR) based on a multi-database model in which the existence of multiple sources is modeled explicitly. In distributed IR, a method is needed for selecting appropriate sources for users' queries. Most existing methods use statistical data such as document frequency, and they may select inappropriate sources if a query contains polysemous words. In this paper, we describe an information-source selection method using two types of thesaurus: one automatically constructed from the documents in a source, and the other a hand-crafted general-purpose thesaurus (e.g., WordNet). The terms used in the documents of each source differ from one another, and the meanings of a term differ depending on the situation in which it is used; this difference is a characteristic of the source. In our method, the meanings of a term are distinguished by the relationships between the term and other terms, and these relationships appear in the co-occurrence-based thesaurus. We describe an algorithm for evaluating the usefulness of a source for a query based on a thesaurus. As a practical application of our method, we have developed Papits, a multi-agent-based information sharing system. A selection experiment shows that our method is effective for selecting appropriate sources.
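A co-occurrence-based thesaurus and its use for scoring a source against the intended sense of a polysemous query term might be sketched as follows; the documents and sense terms are illustrative and are not Papits internals:

```python
from collections import Counter
from itertools import combinations

def build_thesaurus(documents):
    """Co-occurrence 'thesaurus': counts of term pairs appearing in the same document."""
    co = Counter()
    for doc in documents:
        terms = set(doc.lower().split())
        co.update(combinations(sorted(terms), 2))
    return co

def source_score(thesaurus, query_term, sense_terms):
    """Score a source by how strongly the query term co-occurs with
    terms characterizing the intended sense (e.g., taken from WordNet)."""
    def pair(a, b):
        return tuple(sorted((a, b)))
    return sum(thesaurus[pair(query_term, s)] for s in sense_terms)

finance_docs = ["the bank raised interest rates", "deposit money at the bank"]
geo_docs = ["the river bank eroded", "fishing on the bank of the river"]
sense = {"money", "interest", "deposit"}  # the monetary sense of "bank"

fin = source_score(build_thesaurus(finance_docs), "bank", sense)
geo = source_score(build_thesaurus(geo_docs), "bank", sense)
print(fin > geo)  # the finance source better matches this sense
```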
-
The discovery of tacit knowledge from domain experts is one of the most exciting challenges in today's knowledge management. The decision knowledge for determining the quality of a firm's short-term liquidity is full of abstraction, ambiguity, and incompleteness, and presents a typical tacit knowledge extraction problem. In dealing with knowledge discovery of this nature, we propose a scheme that integrates both knowledge elicitation and knowledge discovery in the knowledge engineering process. The knowledge elicitation component applies Verbal Protocol Analysis to establish industrial cases as the basic knowledge data set. The knowledge discovery component then applies fuzzy clustering to the data set to build a fuzzy knowledge-based system, which consists of a set of fuzzy rules representing the decision knowledge and membership functions of each decision factor for verifying the linguistic expressions in the rules. The experimental results confirm that the proposed scheme can effectively discover an expert's tacit knowledge, and that it works as a feedback mechanism for human experts to fine-tune the process of converting tacit knowledge into explicit knowledge.
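The membership functions and fuzzy rules such a scheme produces could look like the following sketch; the decision factors, triangular breakpoints, and the min t-norm are assumptions for illustration, not the paper's elicited values:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative membership functions for two liquidity decision factors
def current_ratio_high(x):
    return tri(x, 1.0, 2.0, 3.0)

def quick_ratio_high(x):
    return tri(x, 0.5, 1.5, 2.5)

def rule_good_liquidity(current_ratio, quick_ratio):
    """Fuzzy rule: IF current ratio is high AND quick ratio is high
    THEN liquidity is good. AND is taken as min, a common t-norm."""
    return min(current_ratio_high(current_ratio), quick_ratio_high(quick_ratio))

# Firing strength of the rule for a sample firm
print(round(rule_good_liquidity(1.8, 1.2), 2))
```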
-
This paper describes a natural language question answering system that students can use to get solutions to their queries. Unlike AI question answering systems that focus on generating new answers, the present system retrieves existing ones from question-answer files. Unlike information retrieval approaches that rely on a purely lexical metric of similarity between query and document, it uses a semantic knowledge base (WordNet) to improve its ability to match questions. The paper describes the design and current implementation of the system as an intelligent tutoring system. The main drawback of existing tutoring systems is that the computer poses a question to the students and guides them toward the solution. In the present approach, a student asks any question related to the topic and gets a suitable reply: based on the query, the student receives either a direct answer or a set of questions (at most 3 or 4) that bear the greatest resemblance to the input. We further analyze application fields for such a system and discuss the scope for future research in this area.
-
In learning to write in a second/foreign language, a learner has to acquire not only lexical and syntactic knowledge but also the skill of choosing suitable words for the content s/he is interested in. To support this, a learning system should infer the learner's intention and give example phrases concerning that content. However, a learner cannot always express the content of his/her desired phrase as input to the system. Therefore, the system should be equipped with a function for diagnosing the learner's intention, as well as an analysis function that scores the similarity between the learner's intention and the phrases stored in the system at both the syntactic and the idiomatic level, in order to present appropriate example phrases. In this paper, we propose the architecture of an interactive support method for English writing learning based on an analogical search of sample phrases from corpora. Our system can show candidate variations or next phrases to write, as well as analogous sentences from the corpora that express what the learner wants to say.
-
This research aims to unravel the significant features of the human immune system that could be successfully employed in a novel network intrusion detection model. Several salient features of the human immune system, which detects intruding pathogens, are carefully studied, and the possibility and advantages of adopting these features for network intrusion detection are reviewed and assessed.
-
We focus on an approach that handles general Web pages as resources in order to support a student's self-directed learning. We are developing a Web-based learning environment, "Web-Retracer", that turns Web pages into teaching materials through user annotation. Although learners can share the Web resources that others have used in this environment, resources unsuitable for a student's needs hinder his/her self-directed learning. In this paper, we propose a method for recommending resources suited to a student's needs on the basis of the student's learning and Web browsing history. The method analyzes the features peculiar to each resource and extracts the resources whose features agree with the student's needs.
-
When multiple video sources are transmitted together through a channel of fixed bandwidth, an efficient picture quality control method is necessary. This paper presents a picture quality control method that keeps the distortion level the same across the video sources. We first find a model of distortion and bitrate for the multiplexing system of multiple sources. Then we obtain the bitrate for each source that yields the same distortion level among the sources, using approximated model parameters for simple implementation.
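Under a simple hyperbolic rate-distortion model, D_i = a_i / R_i, equalizing distortion across sources makes each rate proportional to its complexity parameter: R_i = a_i * R_total / sum(a_j). This is only a sketch of the idea; the paper's actual model and parameters may differ:

```python
def allocate_bitrate(complexities, total_rate):
    """Allocate rates so every source has the same distortion under D_i = a_i / R_i.
    Setting D_1 = ... = D_N with sum(R_i) = total_rate gives R_i proportional to a_i."""
    s = sum(complexities)
    return [a * total_rate / s for a in complexities]

a = [2.0, 1.0, 1.0]                 # hypothetical per-source complexity parameters
rates = allocate_bitrate(a, 8.0)    # fixed channel bandwidth of 8 units
distortions = [ai / ri for ai, ri in zip(a, rates)]
print(rates)        # the more complex source gets proportionally more rate
print(distortions)  # all sources end up at the same distortion level
```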
-
This paper tests the existence of a precautionary saving motive under health uncertainty, using household-level panel data from Korea. For this purpose, it considers a dynamic health capital model with health uncertainty and derives testable equations for changes in consumption and medical expenditures. Under this framework, households facing future health uncertainty will exhibit precautionary behavior by depressing consumption or increasing investment in health. To test this hypothesis, the paper uses the conditional variance of health, obtained by estimating a multinomial logit model, as a direct measure of health uncertainty. Empirical results using the Korean Household Panel Study (KHPS, 1993-1997) suggest that elderly Korean households exhibit precautionary behavior to insure against future health risk.
-
The occurrence of occupational illness and injury has been seriously underestimated in Korea. Surveillance systems for occupational diseases have recently emerged as important strategies for controlling occupational hazards and implementing intervention programs to protect workers. However, health service providers do not actively diagnose occupational diseases and are unwilling to report them. With the rapid growth of Internet usage in Korea, the computer network has become the predominant means of communicating and sharing information. Therefore, we developed a web-based, continuously updated information and education network to assist health service providers in reporting occupational diseases. Information systems for occupational disease surveillance were also designed to support occupational disease reporting. Commonly available database systems, such as web databases, are useful for managing occupational disease data efficiently. Standardized case definitions and reporting guidelines were also established, covering cumulative trauma disorder, occupational asthma, occupational contact dermatitis, and occupational cancer. This system may provide the basis of an efficient and continuously updated source of educational information, and provide specific information concerning the occurrence of occupational diseases in specific areas. Background information on occupational diseases obtained in this way will be invaluable for preventing hazards and enforcing occupational disease prevention programs. Moreover, our experience in establishing these information systems will be of great use in other countries and settings.
-
One of the most important problems with rule induction methods is that they cannot extract rules that plausibly represent experts' decision processes: rule induction methods induce probabilistic rules whose description length is too short compared with experts' rules, while the construction of Bayesian networks generates rules that are too lengthy. In this paper, the characteristics of experts' rules are closely examined, and a new approach to extracting plausible rules is introduced, consisting of three procedures. First, the characterization of decision attributes (given classes) is extracted from databases, and the classes are classified into several groups with respect to this characterization. Then, two kinds of sub-rules are induced: characterization rules for each group and discrimination rules for each class in the group. Finally, the two parts are integrated into one rule for each decision attribute. The proposed method was evaluated on a medical database; the experimental results show that the induced rules correctly represent experts' decision processes.
-
World practice shows that computer systems providing intelligent support for medical activities, such as the examination of patients, diagnosis, and therapy, are among the most effective means for attaining a high level of physician qualification. Such systems must contain large knowledge bases consistent with the modern level of science and practice. To form large knowledge bases for such systems, it is necessary to have a medical ontology model reflecting contemporary notions of medicine. This paper presents a description of an observation ontology, a knowledge base for the general practitioner, and the architecture, functions, and implementation of a problem-independent shell for a system that intelligently supports patient examination, together with a mathematical model of the dialog. The system can be used by the following specialists: therapeutist, surgeon, gynecologist, urologist, otolaryngologist, ophthalmologist, endocrinologist, neuropathologist, and immunologist. The system supports a high level of patient examination, frees doctors from the routine work of filling in case records, and automatically forms a computer archive of case records. The archive can be used for statistical data processing, for producing reports, and for debugging the knowledge bases of expert systems. In addition, the system can be used to raise the level of medical education of students, doctors in internship, staff physicians, and postgraduate students.
-
This study presents an analysis of healthcare quality indicators using data mining for developing quality improvement strategies. Specifically, important factors influencing inpatient mortality were identified using a decision tree method, based on 8,405 patients discharged from the study hospital between December 1, 2000 and January 31, 2001. The important factors for inpatient mortality were length of stay, disease class, discharge department, and age group. The optimal range of the target group for the inpatient healthcare quality indicators was identified from the gains chart. In addition, a decision support system was developed in Visual Basic 6.0 to analyze and monitor trends in the quality indicators; guidelines and a tutorial for quality improvement activities were also included in the system. In the future, other quality indicators should be analyzed to effectively support hospital-wide continuous quality improvement (CQI) activities, and the decision support system should be integrated with the hospital OCS (Order Communication System) to support concurrent review.
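The core of the decision tree step is choosing the attribute whose split most reduces outcome impurity. A minimal Gini-based sketch on synthetic discharge records (not the study's data or its exact algorithm):

```python
def gini(labels):
    """Gini impurity of a list of outcome labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(records, outcome="died"):
    """Return (attribute, gain): the attribute whose split most reduces impurity."""
    base = gini([r[outcome] for r in records])
    best = None
    for attr in records[0]:
        if attr == outcome:
            continue
        groups = {}
        for r in records:
            groups.setdefault(r[attr], []).append(r[outcome])
        weighted = sum(len(g) / len(records) * gini(g) for g in groups.values())
        gain = base - weighted
        if best is None or gain > best[1]:
            best = (attr, gain)
    return best

# Synthetic discharge records, illustrative only
records = [
    {"age_group": "65+", "long_stay": True,  "died": True},
    {"age_group": "65+", "long_stay": True,  "died": True},
    {"age_group": "65+", "long_stay": False, "died": False},
    {"age_group": "<65", "long_stay": True,  "died": True},
    {"age_group": "<65", "long_stay": False, "died": False},
    {"age_group": "<65", "long_stay": False, "died": False},
]
attr, gain = best_split(records)
print(attr)  # the most informative factor on this toy data
```

Applying such splits recursively yields the tree; ranking leaves by mortality rate gives the gains chart used to pick target ranges.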
-
Although data mining promises a new paradigm for discovering medical knowledge from a database, many problems must be solved before real application is feasible. We had the chance to provide a data set to be analyzed as a discovery challenge, using various data mining techniques, at the PKDD conference. As data providers, we evaluated and discussed the results and clarified the problems.
-
As the population of persons over the age of sixty-five grows rapidly, the number of senior people living alone at home is growing in Japan. This situation has raised the social issue of how to support their healthy lives. There have been some projects aimed at improving their quality of life and supporting their health, but most focus on methods of measuring vital signs and observing behavior; few report how to utilize the measured data. The aim of our project is to detect emergencies involving aged people at home. As an emergency differs greatly from regular life behavior, we have to recognize it as such. We propose the concept of a human behavior model and present several types of human behavior knowledge constructed from observed human behavior. This idea is based on the habitual nature of human life. We also discuss the possibility of detecting emergencies using this knowledge together with observed data.
-
This study examines the factors that influence service quality performance in university hospitals by systematically investigating the conditions of service quality. A synthesis of health care quality is conducted to identify the physical quality, operating process quality, and human resources quality that relate to both overall satisfaction and the intention to revisit. Based on the proposed hypotheses, the relationships between the service quality factors and performance are examined using data collected from 167 patients in three hospitals in Korea. Reliability and validity tests are performed to examine the relationships with service quality in health care systems. In total, eight independent variables covering the three service quality levels and two dependent variables for performance are identified. The results provide health care managers with managerial insight into planning for performance through service quality in health care systems, as well as in other operational settings (business, government, or other service organizations). Implications of the study for theory, future studies, and practice are discussed.
-
The purpose of this study was to develop an expert system supporting the diagnosis of diffuse interstitial lung disease by high-resolution computed tomography. CLIPS (C Language Integrated Production System) with rule-based reasoning was used to develop the system. Development proceeded in three stages: knowledge acquisition, knowledge representation, and reasoning. Knowledge was obtained and integrated from the tables and figure legends of a representative textbook in the domain, High-Resolution CT of the Lung by Webb WR, Mueller NL, and Naidich DP. The acquired knowledge was analyzed to form a knowledge base: overlapping knowledge was eliminated, similar pieces of knowledge were combined, and professional terms were defined. The most important findings were then selected for each disease. After grouping the combined findings, the disease groups were analyzed sequentially to determine final diagnoses. The system was based on 69 diseases, 185 findings, 73 conditions, 387 statuses, and 62 rules, and was set up to determine diagnoses from combinations of findings using forward reasoning. In an empirical trial, the system was applied to support the diagnosis of 40 cases of diffuse interstitial lung disease. The performance of two doctors with the support of the system was compared to that of another two doctors without it; the doctors with the support of the system made more accurate diagnoses. The system is believed to be useful for the diagnosis of rare diseases and for cases with many possible differential diagnoses. In conclusion, an expert system supporting the high-resolution computed tomographic diagnosis of diffuse interstitial lung disease was developed, and the system is thought to be useful for medical practice.
-
This paper proposes a meta-modeling technique that permits a KBS to be described along three axes: the object-of-reuse axis, the levels-of-granularity axis, and the reuse-process axis. The object-of-reuse axis allows a KBS to be seen as a set of inter-related components for reuse purposes. The levels-of-granularity axis allows the KBS components to be described at different levels of granularity for clarity and reuse purposes. The reuse-process axis allows the KBS components to be seen as (re)usable components.
-
The purpose of this paper is to show how KM was implemented and executed to reform the organization from a change management perspective, and how its current KM can be developed in the future, mainly with respect to the organizational system, business processes, and information systems related to KM. This paper is a longitudinal case study of knowledge management at LG-EDS Systems. The effective approach to undertaking KM is phased: first, it is structured to share explicit knowledge for better performance, and then implicit knowledge for best performance. However, this method had some limitations, so LG-EDS Systems integrated its KMS, the Knowledge Portal, to facilitate cooperation and improve content quality.
-
In the conceptual design of engineering devices, a designer decomposes a required function into sub-functions, so-called functional decomposition, using a kind of functional knowledge representing achievement relations among functions. However, such knowledge about the functionality of engineering devices is usually left implicit because each designer possesses it individually. Even when such knowledge is found in documents, it is often scattered across technical domains and lacks consistency. Aiming at capturing such functional knowledge explicitly and sharing it within design teams, we discuss its systematic description based on functional ontologies, which provide common concepts for its consistent and generic description. We propose a new concept named "ways of achievement" as a key concept for capturing such functional knowledge. The categorization of typical representations of the knowledge and its organization as is-a hierarchies are also discussed. The generic concepts representing the functionality of a device in the functional knowledge are provided by the functional concept ontology, which makes the functional knowledge consistent and applicable to other domains. We also discuss the development of a design support system using the systematized knowledge, called a functional ways server. It helps human designers redesign an existing engineering device by providing a wide range of alternative ways of achieving the required function, in a manner suited to the viewpoint of each designer, and thus facilitates innovative design.
-
A primitive conceptualization is defined as the set of all intended situations. A non-primitive conceptualization is defined as the set of all pairs, each of which consists of an intended knowledge system and the set of all situations admitted by that knowledge system. The reality of a domain is considered to be the set of all situations that have ever taken place in the past, are taking place now, or will take place in the future. A conceptualization is defined as precise if the set of intended situations is equal to the domain reality. The representation of various elements of a domain ontology in a model of the ontology is considered. These elements are terms for situation description and situations themselves, terms for knowledge description and knowledge systems themselves, mathematical terms and constructions, auxiliary terms, and ontological agreements. It is shown that any ontology representing a conceptualization has to be non-primitive if (1) the conceptualization contains intended situations of different structures, or (2) it contains concepts designated by terms for knowledge description, or (3) it contains concept classes and determines properties of the concepts belonging to these classes, while the concepts themselves are introduced by domain knowledge, or (4) some restrictions on the meanings of terms for situation description in the conceptualization depend on the meaning of terms for knowledge description.
-
In this paper, a method for selecting the optimal feature vectors is proposed for the classification of closed 2D shapes using the bispectrum of a contour sequence. The bispectrum, based on third-order cumulants, is applied to the contour sequences of the images to extract feature vectors for each planar image. These bispectral feature vectors, which are invariant to translation, rotation, and scale transformation, can represent two-dimensional planar images, but there has been no clear criterion for selecting the feature vectors that give optimal classification of closed 2D shapes. In this paper, a new method is proposed for selecting the optimal bispectral feature vectors based on their variances. Experimental results are presented using eight different shapes of aircraft images, bispectral feature vectors of dimension five to fifteen, and a weighted-mean fuzzy classifier.
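A variance-based selection of bispectral feature components might be sketched as follows; the feature matrix is synthetic, and the paper's exact variance criterion may differ from this plain per-component ranking:

```python
def select_features(feature_matrix, k):
    """Pick the k feature indices with the largest variance across samples.
    The intuition: components that barely vary cannot separate shape classes."""
    n = len(feature_matrix)
    dims = len(feature_matrix[0])
    variances = []
    for j in range(dims):
        col = [row[j] for row in feature_matrix]
        mean = sum(col) / n
        variances.append(sum((x - mean) ** 2 for x in col) / n)
    return sorted(range(dims), key=lambda j: variances[j], reverse=True)[:k]

# Hypothetical bispectral feature vectors for four shapes:
# component 0 varies strongly, component 2 is constant (uninformative)
features = [
    [0.9, 0.50, 0.1],
    [0.1, 0.51, 0.1],
    [0.8, 0.49, 0.1],
    [0.2, 0.50, 0.1],
]
print(select_features(features, 2))
```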
-
Knowledge discovery in databases (KDD) is the process of extracting valid, novel, potentially useful, and understandable knowledge from real data. There are many academic and industrial activities around new technologies and application areas. In particular, data mining is the core step of the KDD process, consisting of many algorithms performing clustering, pattern recognition, and rule induction functions. The main goals of these algorithms are prediction and description: prediction means the assessment of unknown variables, while description is concerned with providing understandable results to human users in a compatible format. We introduce an efficient data mining algorithm with both predictive and descriptive capability. Reasonable patterns are derived from real-world data by a revised neural network model, and a proposed fuzzy rule extraction technique is applied to obtain understandable knowledge. The proposed neural network model is a hierarchical self-organizing system. The rule base is compatible with decision makers' perception because the generated fuzzy rule set reflects the human information process. Results from a real-world application are analyzed to evaluate the system's performance.
-
The development of web-aware knowledge discovery systems has received a great deal of attention in recent years and plays a key enabling role for competitive businesses in the e-commerce era. One of the challenges in developing web-aware knowledge discovery systems is to integrate and coordinate existing standalone or legacy knowledge discovery applications in a seamless manner, so that cost-effective systems can be developed without costly proprietary products. In this paper, we present an approach for developing a framework for web-aware interoperable knowledge discovery systems to achieve this purpose. The approach applies RMI and high-level code wrappers from Java distributed object computing to address interoperability in heterogeneous environments, covering programming language, platform, and visual object model. The effectiveness of the proposed framework is demonstrated through the integration and extension of two well-known standalone knowledge discovery tools, SOM_PAK and Nenet. It confirms that a variety of interoperable knowledge discovery systems can be constructed efficiently on the basis of the framework to meet various requirements of knowledge discovery tasks.
-
The problem of query rewriting using views has been of interest in the context of data integration, where source data are described by views on global relations. When the query and views are conjunctive queries, the rewriting is a union of conjunctive queries, each of which is contained in the original query and consists only of views. Most previous methods for query rewriting using views are two-step algorithms: in the first step they identify the views that are useful in rewriting, and in the second step they construct all correct rewritings by combining the views obtained in the first step. The larger the number of views selected in the first step, the larger the number of candidate rewritings in the second step. We aim to minimize the number of views selected in the first step by defining stringent conditions for a view to participate in rewritings. In this paper, we first give a necessary condition for the existence of a rewriting that includes a given view. For the common case in which predicate repetitions are not allowed in the bodies of views, we show that our algorithm for testing the condition runs in polynomial time. Second, we give an algorithm to construct contained rewritings using the view instances computed in the first step. The exponential containment-mapping test of the second step is not needed in our algorithm.
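For context, the containment-mapping test whose exponential cost the paper's algorithm avoids can be sketched as follows. A conjunctive query is modelled here as a pair of head variables and body atoms; the representation and the lowercase-means-variable convention are illustrative assumptions, not the paper's notation.

```python
from itertools import product

def find_containment_mapping(q1, q2):
    """Sketch: test whether conjunctive query q1 contains q2 by searching
    for a containment mapping from q1's atoms onto q2's atoms.

    A query is (head_vars, body), where body is a list of
    (predicate, args) atoms; lowercase strings are variables, anything
    else is a constant.  Exponential in the worst case -- exactly the
    cost that motivates avoiding this test in the second step.
    """
    def is_var(t):
        return isinstance(t, str) and t.islower()

    head1, body1 = q1
    head2, body2 = q2
    # candidate targets: q2 atoms with the same predicate as each q1 atom
    candidates = [[a2 for a2 in body2 if a2[0] == a1[0]] for a1 in body1]
    for choice in product(*candidates):
        sub, ok = {}, True
        for (_, args1), (_, args2) in zip(body1, choice):
            for t1, t2 in zip(args1, args2):
                if is_var(t1):
                    if sub.setdefault(t1, t2) != t2:   # inconsistent binding
                        ok = False
                        break
                elif t1 != t2:                          # constants must match
                    ok = False
                    break
            if not ok:
                break
        # the mapping must send q1's head onto q2's head position-wise
        if ok and [sub.get(v, v) for v in head1] == list(head2):
            return sub
    return None
```

A rewriting candidate is correct exactly when such a mapping exists from the original query into the candidate's expansion, which is why pruning views early, as the paper proposes, pays off.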
-
Recently, many enterprises have attempted to capitalize knowledge assets on a data warehouse (DW). This has been recognized as a strategic core process for creating corporate competitive advantage and implementing enterprise e-biz strategies. However, most approaches to representing knowledge and decision processes are limited in their consideration of the various knowledge types, their relationships, and continuity in knowledge formulation. In addition, they tend toward one side, such as concept-oriented or technology-oriented frameworks, and lack universal, wide-ranging features. This paper presents a comprehensive methodology for accumulating knowledge capital on a DW via a properly grained hypermedia model. The methodology consists of three phases: knowledge requirement elicitation, hypermedia modeling, and system implementation. A real-life case of medical DW development is presented to demonstrate the usefulness of the proposed methodology. The methodology is effective when an organization accumulates knowledge assets to put a corporate e-biz or cre-biz strategy into practice.
-
A Post-Analysis of Decision Trees to Detect the Change of Customer Behavior in an Internet Shopping Mall

Understanding and adapting to changes of customer behavior in an internet shopping mall is important for survival in a continuously changing environment. This paper develops a methodology based on decision tree algorithms to detect changes of customer behavior automatically from customer profiles and sales data at different time snapshots. We first define three types of change: the emerging pattern, the unexpected change, and the added/perished rule. Then, similarity and difference measures for rule matching are developed to detect all types of change. Finally, a degree-of-change measure is defined to evaluate the amount of change. A Korean internet shopping mall case is evaluated to show the performance of our methodology, and practical business implications of the methodology are also provided.
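The rule-matching idea above can be sketched with a simple similarity measure between extracted decision-tree rules. The Jaccard-style score and the thresholding are illustrative assumptions, not the paper's exact definitions.

```python
def rule_similarity(rule_a, rule_b):
    """Sketch of a similarity measure between two decision-tree rules.

    A rule is modelled as (conditions, outcome), where conditions is a
    set of (attribute, operator, value) triples.  The score is the
    Jaccard overlap of the condition sets, zeroed when the outcomes
    differ -- an illustrative measure, not the paper's definition.
    """
    conds_a, out_a = rule_a
    conds_b, out_b = rule_b
    if out_a != out_b:
        return 0.0
    union = conds_a | conds_b
    if not union:
        return 1.0
    return len(conds_a & conds_b) / len(union)

def degree_of_change(rules_t1, rules_t2, threshold=0.5):
    """Fraction of old rules with no sufficiently similar new rule."""
    changed = sum(
        1 for r1 in rules_t1
        if all(rule_similarity(r1, r2) < threshold for r2 in rules_t2)
    )
    return changed / len(rules_t1) if rules_t1 else 0.0
```

A rule from the earlier snapshot with no match above the threshold in the later snapshot is a "perished" rule; the symmetric check in the other direction flags "added" rules.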
-
Customer relationship management (CRM) is an advanced marketing support system that analyzes customers' transaction data and classifies or targets customer groups to effectively increase market share and profit. Many engines have been developed to implement these functions, and those for classification and clustering are considered the core ones. In this study, an improved clustering method based on self-organizing maps (SOM) is proposed. The proposed clustering method finds the optimal number of clusters so that the effectiveness of clustering is increased, and it handles all the data types found in CRM data warehouses. In particular, an adaptive algorithm applying the concepts of degeneration and fusion is used to find the optimal number of clusters. The feasibility and efficiency of the proposed method are demonstrated through simulation with simplified customer data.
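The degeneration and fusion steps can be sketched as a post-processing pass over a set of cluster centroids: degeneration drops centroids that attract too few points, fusion merges centroids that lie close together. The thresholds and the midpoint merge rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def degenerate_and_fuse(centroids, data, min_size=2, fuse_dist=0.5):
    """Sketch of the degeneration/fusion idea for adapting the number
    of clusters: drop centroids with fewer than min_size members
    (degeneration), then greedily merge centroid pairs closer than
    fuse_dist into their midpoint (fusion).
    """
    centroids = np.asarray(centroids, dtype=float)
    data = np.asarray(data, dtype=float)
    # assign each point to its nearest centroid
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # degeneration: keep only centroids with enough members
    keep = [i for i in range(len(centroids)) if (labels == i).sum() >= min_size]
    centroids = centroids[keep]
    # fusion: greedily merge close pairs into their midpoint
    merged, used = [], set()
    for i in range(len(centroids)):
        if i in used:
            continue
        c = centroids[i]
        for j in range(i + 1, len(centroids)):
            if j not in used and np.linalg.norm(c - centroids[j]) < fuse_dist:
                c = (c + centroids[j]) / 2
                used.add(j)
        merged.append(c)
    return np.array(merged)
```

Iterating SOM training with such a pass lets the number of clusters settle at a value supported by the data rather than being fixed in advance.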
-
An artificial neural network (ANN) can identify relationships even when some of the input data are complex, ill-defined and ill-structured; one of its advantages is that it can discriminate linearly inseparable data. This study presents an application of ANNs to classifying and predicting the symptomatic status of HIV/AIDS patients. Although ANN techniques have been applied to a variety of areas, this study makes a substantial contribution to HIV/AIDS care and prevention planning. An ANN model for classifying both the HIV and the AIDS status of HIV/AIDS patients is developed and analyzed, and its diagnostic accuracy in both tasks is evaluated. Several different ANN topologies are applied to the AIDS Cost and Services Utilization Survey (ACSUS) datasets to demonstrate the model's capability. It is of interest to see what influence different ANN designs have on the classification of HIV/AIDS-related persons.
-
In this study, we developed a prototype clinical decision support system (CDSS) for diagnosing neurogenic bladder and compared its predicted diagnoses with the actual diagnoses of 92 patients' urodynamic study cases. The CDSS was developed in Visual Basic, based on evidence-based rules extracted from guidelines and other references on the diagnosis of neurogenic bladder. To compare with the 92 final diagnoses made by doctors at the Yonsei Rehabilitation Center, we classified all diagnoses into five groups. The predictive rates of the CDSS were 48.0% for areflexic neurogenic bladder; 60.0% for hyperreflexic neurogenic bladder in a spinal shock recovery stage; 72.9% for hyperreflexic neurogenic bladder; and 80.0% for areflexic neurogenic bladder in a spinal shock stage, the highest predictive rate. There were only 2 cases of hyperreflexic neurogenic bladder with well-controlled detrusor activity, and its predictive rate was 0%. The results showed that a CDSS for diagnosing neurogenic bladder could provide helpful advice for doctors' decision-making. The findings also suggest that physicians should be involved in all development stages to ensure that such systems are developed in a fashion that maximizes their beneficial effect on patient care and that they are acceptable to both professionals and patients. Future studies will concentrate on further validating the system.
-
This paper describes an implementation of an interval-based expert system for the syndrome differential diagnosis of Oriental Traditional Medicine (OTM). An approximate reasoning model using fuzzy logic for syndrome differential diagnosis is proposed. Based on this model, we implemented a system that performs Eight-rule diagnosis and organ diagnosis, and then the final differential syndrome of OTM. After carrying out the inference process, the system provides the patient's syndrome differentiation diagnosis as intervals and gives an explanation that helps the user understand the obtained conclusions.
-
An expert system for the diagnosis and indication of hypertension is implemented through HTML-based backward inference. HTML-based backward inference is performed using the hypertext function of HTML, and the many HTML files, hyperlinked to each other according to the backward rules, must be prepared beforehand. The development and maintenance of the HTML files are conducted automatically using a decision graph. Still, drawing and entering the decision graph is a time-consuming and tedious job if done manually, so an automatic generator of the decision graph for the diagnosis and indication of hypertension was implemented. HTML-based backward inference gives the expert system accessibility, multimedia facilities, fast response, stability, ease of use, and platform independence. This research shows that the HTML-based inference approach can be used for many Web-based intelligent sites with fast and stable performance.
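The generation of hyperlinked HTML pages from a decision graph can be sketched as follows. The graph representation, page naming scheme, and sample question are illustrative assumptions; the actual generator described in the abstract was built for the hypertension domain.

```python
def generate_html_pages(graph):
    """Sketch: turn a decision graph into hyperlinked HTML pages so
    that following links performs the backward inference.

    graph maps a node id either to (question, {answer: target_node})
    or to a terminal conclusion string.  Node ids double as file names.
    """
    pages = {}
    for node, content in graph.items():
        if isinstance(content, str):                      # leaf: a conclusion
            body = "<p>Conclusion: %s</p>" % content
        else:
            question, links = content
            items = "".join(
                '<li><a href="%s.html">%s</a></li>' % (target, answer)
                for answer, target in links.items()
            )
            body = "<p>%s</p><ul>%s</ul>" % (question, items)
        pages["%s.html" % node] = "<html><body>%s</body></html>" % body
    return pages
```

Because each answer is just a hyperlink to the next page, any plain browser can execute the inference with no server-side logic, which is the source of the fast response and platform independence the abstract claims.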
-
Adopting a user-weighted fuzzy mathematics method, the authors completed the project "Study on Expert System of Chicken's Common Diseases Diagnostics", which can properly diagnose 30 common chicken diseases; its accordance rate reached 80%, verified on 244 disease cases. On this basis, multimedia technology was further adopted to establish a system integrating the input, display, query, and processing of sound, pictures and text. Combined with the previous chicken disease diagnostic expert system, it makes the computer's output information richer and more comprehensive, and the accordance rate of disease diagnosis can be improved. The system consists of a database, a knowledge base, and a graphics and picture base. It is easy to operate, and its interface is vivid and intuitive. It can output a diagnostic result and a prescription rapidly, so it is suited not only to large and medium chicken farms but also to grass-roots veterinary stations for health care and disease diagnosis. The system has a wide prospect of application.
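The user-weighted fuzzy matching idea can be sketched as a weighted scoring of observed symptom membership degrees against per-disease symptom profiles. The disease names, symptom names, and weights below are invented for illustration only.

```python
def fuzzy_diagnose(symptoms, disease_profiles):
    """Sketch of weighted fuzzy-mathematics diagnosis: each disease
    profile assigns a weight to its characteristic symptoms; the score
    is the weighted sum of observed membership degrees, normalised by
    the profile's total weight.  Returns the best disease and all
    scores.  Weights and names are illustrative assumptions.
    """
    scores = {}
    for disease, profile in disease_profiles.items():
        total = sum(profile.values())
        matched = sum(w * symptoms.get(s, 0.0) for s, w in profile.items())
        scores[disease] = matched / total if total else 0.0
    best = max(scores, key=scores.get)
    return best, scores
```

Letting the user adjust the weights is what makes the scheme "user-weighted": an experienced veterinarian can emphasise the symptoms they consider most decisive.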
-
Many intelligent agent systems are known to incorporate the BDI architecture for cognitive reasoning. Since this architecture contains all the knowledge of the world model and the reasoning rules, it is very complex and difficult to handle. This paper describes a methodology for designing and implementing an XML-based BDI architecture, BDIAXml, for multi-agent systems. This XML-based BDI architecture is smaller than other BDI architectures because it separates knowledge for reasoning from domain knowledge and enables knowledge sharing through XML technology. Knowledge for the BDI mental state and reasoning is composed of specific XML files, which are stored on a dedicated knowledge server; systems using the BDIAXml architecture can access knowledge from this server. We apply BDIAXml to the domain of hospital information systems and show that the architecture performs more efficiently than other BDI architecture systems in terms of knowledge sharing, system size, and ease of use.
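Storing the mental state as XML makes it straightforward for any agent to load beliefs, desires, and intentions from the shared knowledge server. The sketch below uses Python's standard library XML parser; the tag names (`mentalState`, `belief`, `desire`, `intention`) and the example content are illustrative assumptions, not the BDIAXml schema.

```python
import xml.etree.ElementTree as ET

def parse_mental_state(xml_text):
    """Sketch: load a BDI mental state from an XML document of the
    kind a BDIAXml knowledge server might serve.  Tag names are
    illustrative assumptions, not the paper's actual schema.
    """
    root = ET.fromstring(xml_text)
    return {
        "beliefs":    [b.text for b in root.findall("belief")],
        "desires":    [d.text for d in root.findall("desire")],
        "intentions": [i.text for i in root.findall("intention")],
    }
```

Because the mental state lives in plain XML files rather than inside each agent, several agents can share one knowledge base, which is the source of the size and sharing advantages the abstract reports.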