• Title/Summary/Keyword: Intelligence level

Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.227-240 / 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on existing concepts and ideas in published papers or registered patents to develop new theories and technologies that in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based analyses. The former lacks the ability to analyze individual technologies in detail, while the latter is unable to identify the relationships between such technologies. In order to overcome the limitations of network-based and keyword-based analyses, this study suggests a keyword network based analysis methodology that blends the two methods. In this study, we collected significant technology information from each patent related to Light Emitting Diodes (LED) through text mining, built a keyword network, and then executed a community network analysis on the collected data. The results of the analysis are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties. The value ranges between 0 and 1, with higher values indicating denser networks and lower values indicating sparser networks. In real-world networks, the density varies depending on the size of the network; increasing the size of a network generally leads to a decrease in density. The clustering coefficient is a network-level measure that illustrates the tendency of nodes to cluster in densely interconnected modules. This measure captures the small-world property, in which a network can be highly clustered and still have a small average distance between nodes despite its large number of nodes. Therefore, the low density of the patent keyword network means that its nodes are connected only sparsely, while the high clustering coefficient shows that nodes in the network are closely connected to one another. Second, the cumulative degree distribution of the patent keyword network, like other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is likely to attract further new links as the network evolves. Unlike a normal distribution, a power-law distribution does not have a representative scale. This means that one cannot pick a representative or average value, because there is always a considerable probability of finding much larger values. Networks with power-law distributions are therefore often referred to as scale-free networks. The presence of a heavy-tailed, scale-free distribution is the fundamental signature of an emergent collective behavior of the actors who contribute to forming the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution thus implies that the preferential attachment mechanism explains the origin of heavy-tailed distributions in the growing patent keyword network.
Third, we found that among keywords that flowed into a particular field, the vast majority of keywords with new links joined existing keywords in the associated community to form the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses. Furthermore, the keyword combination information derived from the suggested methodology enables one to forecast which concepts will combine to form a new patent dimension and to refer to those concepts when developing a new patent.
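To make the network measures in this abstract concrete, here is a minimal sketch, assuming Python with networkx and a toy keyword co-occurrence network; the patent keyword lists are hypothetical placeholders, not data from the study.

```python
# A minimal sketch (not the authors' code) of the measures discussed above:
# network density, average clustering coefficient, and the degree sequence
# whose cumulative distribution would be inspected for a power law.
import itertools
import networkx as nx

patents = [
    ["LED", "phosphor", "substrate"],
    ["LED", "substrate", "heat sink"],
    ["phosphor", "quantum dot"],
]

G = nx.Graph()
for keywords in patents:
    # Connect every pair of keywords that co-occur in the same patent.
    G.add_edges_from(itertools.combinations(keywords, 2))

# Density: ties present divided by all possible ties (0 = sparse, 1 = complete).
density = nx.density(G)

# Average clustering coefficient: tendency of keywords to form tight modules.
clustering = nx.average_clustering(G)

# Degree sequence for the (cumulative) degree distribution.
degrees = sorted((d for _, d in G.degree()), reverse=True)

print(f"density={density:.3f}, clustering={clustering:.3f}, degrees={degrees}")
```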

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.43-57 / 2012
  • To enhance its competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information. Thus, providing accurate information plays a prominent role in management's decision making. Intuitively, historical data can provide a feasible estimate through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as personnel, parts, and other facilities. In addition, the production department can prepare a load map for improving its product quality. Therefore, obtaining an accurate service forecast appears to be critical for manufacturing companies. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average models. However, these methods are only effective for data that are seasonal or cyclical; if the data are influenced by the special characteristics of a product, they are not feasible. In our research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining Case-based Reasoning (CBR) with an unsupervised artificial neural network based clustering analysis (i.e., Self-Organizing Maps; SOM). We believe that this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the service forecasting domain. Our proposed approach has several appealing features: (1) we applied CBR and SOM to a new forecasting domain, namely service demand forecasting, and (2) we combined CBR and SOM in order to overcome the limitations of traditional statistical forecasting methods, and we developed a service forecasting tool based on the proposed approach using an unsupervised artificial neural network and Case-based Reasoning. In this research, we conducted an empirical study on a real digital TV manufacturer (i.e., Company A) and empirically evaluated the proposed approach and tool using the company's real sales and service-related data. In our empirical experiments, we explored the performance of the proposed service forecasting framework compared with two other service forecasting methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecasting 144 times; each time, input data were randomly sampled for each service forecasting framework. To evaluate the accuracy of the forecasting results, we used the Mean Absolute Percentage Error (MAPE) as the primary performance measure. We conducted a one-way ANOVA test with the 144 measurements of MAPE for the three service forecasting approaches. The F-ratio of MAPE for the three approaches is 67.25 and the p-value is 0.000, which means that the difference between the MAPE of the three approaches is significant at the 0.000 level. Since there is a significant difference among the approaches, we conducted Tukey's HSD post hoc test to determine exactly which means of MAPE are significantly different from one another.
In terms of MAPE, Tukey's HSD post hoc test grouped the three service forecasting approaches into three different subsets in the following order: our proposed approach > the traditional CBR-based service forecasting approach > the existing forecasting approach used by Company A. Consequently, our empirical experiments show that the proposed approach outperformed both the traditional CBR-based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows. Section 2 provides research background information, including summaries of CBR and SOM. Section 3 presents a hybrid service forecasting framework based on Case-based Reasoning and Self-Organizing Maps, while the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
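As an illustration of the evaluation step described above (not the paper's actual code), the sketch below assumes Python with NumPy, SciPy, and statsmodels, and uses placeholder MAPE values to show how the MAPE metric, a one-way ANOVA, and Tukey's HSD post hoc test fit together.

```python
# A hedged sketch of the evaluation pipeline: MAPE per forecasting run,
# one-way ANOVA across the three approaches, and Tukey's HSD post hoc test.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

print("example MAPE:", round(mape([100, 120], [90, 130]), 2))  # about 9.17 percent

rng = np.random.default_rng(0)
# 144 MAPE measurements per approach (placeholder values, not the study's results).
mape_proposed = rng.normal(8, 2, 144)
mape_cbr = rng.normal(12, 2, 144)
mape_company_a = rng.normal(15, 2, 144)

f_stat, p_value = f_oneway(mape_proposed, mape_cbr, mape_company_a)
print(f"F={f_stat:.2f}, p={p_value:.4f}")

scores = np.concatenate([mape_proposed, mape_cbr, mape_company_a])
groups = ["proposed"] * 144 + ["CBR"] * 144 + ["CompanyA"] * 144
print(pairwise_tukeyhsd(scores, groups))  # which pairs of means differ
```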

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.23-45 / 2020
  • Big data is being created in a wide variety of fields, such as medical care, manufacturing, logistics, sales sites, and SNS, and dataset characteristics are correspondingly diverse. In order to secure competitiveness, companies need to improve their decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm is appropriate for the characteristics of a dataset has been a task requiring expertise and effort. This is because the relationship between the characteristics of datasets (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class datasets. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, the meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among them, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, in the meta-features to replace the Imbalanced Ratio (IR). We also developed a new index, the Reverse ReLU Silhouette Score, and added it to the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected. The class of each dataset was classified using the classification algorithms selected in the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, we applied 10-fold cross validation; oversampling of 10% to 100% was applied to each fold, and the meta-features of the dataset were measured. The selected meta-features are HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of linear classifier, and hub score. The F1-score was selected as the dependent variable. The results of this study showed that the six meta-features, including the Reverse ReLU Silhouette Score and HHI proposed in this study, have a significant effect on classification performance. (1) HHI, the meta-feature proposed in this study, was significant for classification performance. (2) The number of variables has a significant effect on classification performance and, unlike the number of classes, its effect is positive. (3) The number of classes has a negative effect on classification performance. (4) Entropy has a significant effect on classification performance. (5) The Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level. (6) The nonlinearity of linear classifiers has a significant negative effect on classification performance. In addition, the results of the analysis by classification algorithm were also consistent, except that in the regression analysis by algorithm, the number of variables does not have a significant effect for the Naïve Bayes algorithm, unlike the other algorithms.
This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. The practical contributions are as follows: (1) the findings can be utilized in developing a system that recommends classification algorithms according to the characteristics of a dataset; (2) because data characteristics differ, many data scientists repeatedly test algorithms while adjusting their parameters to find the optimal algorithm for the situation, and in this process excessive resources are wasted in terms of hardware, cost, time, and manpower, which the present findings can help reduce. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion and discussion.
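To make two of the class-distribution meta-features concrete, the following sketch, under our own assumptions (the paper may scale HHI differently, and the label vector is a hypothetical placeholder), computes HHI and class entropy in Python.

```python
# A minimal sketch of two class-distribution meta-features discussed above:
# the Herfindahl-Hirschman Index (HHI) over class proportions and class entropy.
import numpy as np

def hhi(labels):
    """Sum of squared class proportions; higher values indicate a more
    concentrated (imbalanced) class distribution."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p ** 2))

def class_entropy(labels):
    """Shannon entropy of the class distribution, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

y = np.array([0] * 70 + [1] * 20 + [2] * 10)  # hypothetical 3-class dataset
print(f"HHI={hhi(y):.3f}, entropy={class_entropy(y):.3f}, "
      f"classes={len(np.unique(y))}")
```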

The Behavior Analysis of Exhibition Visitors using Data Mining Technique at the KIDS & EDU EXPO for Children (유아교육 박람회에서 데이터마이닝 기법을 이용한 전시 관람 행동 패턴 분석)

  • Jung, Min-Kyu;Kim, Hyea-Kyeong;Choi, Il-Young;Lee, Kyoung-Jun;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.77-96 / 2011
  • An exhibition is defined as a market event of specific duration at which exhibitors present their main products to business or private visitors, and it plays a key role as an effective marketing channel. As exhibitions have become more and more important, the domestic exhibition industry has achieved great quantitative growth. In contrast, its qualitative growth has not kept pace. In order to improve the quality of exhibitions, we need to understand the preferences and behavioral characteristics of visitors and, through this understanding, increase visitors' attention and satisfaction. In this paper, we therefore used the observation survey method, a kind of field research, to understand visitors and collect real data for the analysis of behavior patterns. The proposed methodology framework consists of three steps. The first step is to select a suitable exhibition to which our method can be applied. The second step is to implement the observation survey method and collect real data for further analysis. In this paper, we conducted the observation survey at the KIDS & EDU EXPO for Children in SETEC; the survey covered 160 visitors and 78 booths from November 4th to 6th, 2010. The last step is to analyze the data recorded through observation. In this step, we first analyze the features of the exhibition using the demographic characteristics collected by the observation survey, and then analyze the features of individual booths based on the records of visited booths. Through the analysis of individual booth features, we can figure out what kinds of events attract visitors' attention and what kinds of marketing activities affect their behavior patterns. However, since previous research considered only the individual features influenced by an exhibition, little research has been performed on the correlations among features. In this research, additional analysis is therefore carried out to supplement the existing research with data mining techniques, and we analyze the relations among booths to understand visitors' behavior patterns. Among data mining techniques, we make use of two: clustering analysis and association rule mining (ARM). In the clustering analysis, we use the K-means algorithm to figure out the correlations among booths. Through these data mining techniques, we found that there are two important features that affect visitors' behavior patterns in an exhibition: the geographical features of booths and the exhibit contents of booths. These features should be considered when the organizer plans the next exhibition. The results of our analysis are therefore expected to provide a guideline for understanding visitors and some valuable insights for the exhibition from the earlier phases of exhibition planning. This research would also be a good way to increase visitor satisfaction: visitors' movement paths, booth locations, and the distances between booths can be considered when planning the next exhibition in advance. This research was conducted at the KIDS & EDU EXPO for Children in SETEC (Seoul Trade Exhibition & Convention), so it has some constraints on being applied directly to other exhibitions. The results were also derived from a limited number of data samples.
In order to obtain more accurate and reliable results, it is necessary to conduct more experiments based on larger data samples and exhibitions of various genres.
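As a rough illustration of the two data mining techniques named above, the sketch below assumes Python with scikit-learn and mlxtend and a hypothetical visitor-booth matrix; it clusters booths with K-means and mines association rules over booth visits, and is not the study's implementation.

```python
# An illustrative sketch of K-means clustering of booths and association rule
# mining over visitors' booth visits. The visit records are placeholders.
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori, association_rules

# One row per visitor; columns mark whether a booth was visited.
visits = pd.DataFrame(
    [[1, 1, 0, 1], [1, 1, 0, 0], [0, 1, 1, 1], [1, 0, 1, 1], [0, 1, 1, 0]],
    columns=["booth_A", "booth_B", "booth_C", "booth_D"],
).astype(bool)

# Cluster booths by their visitor profiles (transpose: booths become samples).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(visits.T.astype(int))
print(dict(zip(visits.columns, kmeans.labels_)))

# Mine rules such as "visitors of booth_A also tend to visit booth_D".
frequent = apriori(visits, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```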

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.39-60 / 2011
  • Since the introduction of web 2.0 technology, social network services have been recognized as the foundation of an important future information technology. The advent of web 2.0 has changed who creates content: in the existing web, content creators were service providers, whereas in the recent web they are service users. Users share experiences with other users, improving content quality and thereby increasing the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services are not merely confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated from social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. First, the representational power of objects in the social network is insufficient. Second, the diverse connections among users cannot be fully expressed. Third, it is difficult to reflect dynamic changes in the social network caused by changes in user interests. Lastly, there is no method capable of integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. However, solving the second and third problems requires a novel technique that reflects dynamic changes in user interests and relations. In this paper, we propose a novel method to overcome the above problems of existing social network extraction methods by applying FOAF (a tool for describing user profiles) and RSS (a web content syndication mechanism) to an OLAP system in order to dynamically update and manage FOAF. We exploit data interoperability, which is an important characteristic of FOAF, and we use RSS to reflect changes such as the flow of time and user interests; RSS provides a standard vocabulary for distributing web site contents in the form of RDF/XML. In this paper, we collect personal information and relations of users by utilizing FOAF, and collect user contents by utilizing RSS. Finally, the collected data are inserted into a database organized as a star schema. The proposed system generates an OLAP cube from the data in the database, and the 'Dynamic FOAF Management Algorithm' processes the generated cube. The algorithm consists of two functions: find_id_interest() and find_relation(). find_id_interest() is used to extract user interests during the input period, and find_relation() extracts users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests of users. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com as input. The implemented result shows that the foaf:interest of users increased by an average of 19 percent over four weeks.
In proportion to the change in foaf:interest, the number of foaf:knows relations of users grew by an average of 9 percent over the same four weeks. Since FOAF and RSS, which are widely supported in web 2.0 and social network services, are used as the basic data, the method has a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language and type of computer. By using the method suggested in this paper, we can provide better services that cope with rapid changes in user interests through the automatic application of FOAF.
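For concreteness, here is a hedged sketch of what the two functions of the Dynamic FOAF Management Algorithm might look like over a simple (user, interest, week) fact table, with a pandas DataFrame standing in for the OLAP cube; the data and the code are our own illustration, not the authors' C# implementation.

```python
# A sketch of interest extraction and relation matching over a star-schema-like
# fact table; a pandas DataFrame stands in for the OLAP cube.
import pandas as pd

facts = pd.DataFrame({
    "user":     ["alice", "alice", "bob", "bob", "carol"],
    "interest": ["semantic web", "rss", "rss", "olap", "semantic web"],
    "week":     [1, 2, 2, 3, 2],
})

def find_id_interest(cube, user, start_week, end_week):
    """Return the interests a user expressed during the input period."""
    mask = (cube["user"] == user) & cube["week"].between(start_week, end_week)
    return set(cube.loc[mask, "interest"])

def find_relation(cube, interests, user):
    """Return other users whose interests overlap the given interest set."""
    others = cube[cube["user"] != user]
    matched = others[others["interest"].isin(interests)]
    return set(matched["user"])

interests = find_id_interest(facts, "alice", 1, 2)  # {'semantic web', 'rss'}
print(find_relation(facts, interests, "alice"))     # {'bob', 'carol'}, order may vary
```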

Comparative Analysis of ViSCa Platform-based Mobile Payment Service with other Cases (스마트카드 가상화(ViSCa) 플랫폼 기반 모바일 결제 서비스 제안 및 타 사례와의 비교분석)

  • Lee, June-Yeop;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.163-178 / 2014
  • The following research proposes "Virtualization of Smart Cards (ViSCa)", a security system that aims to provide a multi-device platform for the deployment of services that require a strong security protocol, both for access and authentication and for the execution of its applications, and focuses on analyzing a ViSCa platform-based mobile payment service by comparing it with other similar cases. At present, the appearance of new ICT, the diffusion of new user devices (such as smartphones and tablet PCs), and the growth of internet penetration rates are creating many ground-breaking services. Yet in most of these applications private information has to be shared, which means that security breaches and illegal access to that information are real threats that have to be addressed. Mobile payment service, one of these innovative services, has the same issues, which are real threats for users, because mobile payment services sometimes require user identification, an authentication procedure, and confidential data sharing. Thus, an extra layer of security is needed in their communication and execution protocols. The ViSCa concept is a holistic approach and centralized management for a security system that aims to provide a ubiquitous multi-device platform for the deployment of mobile payment services that demand a powerful security protocol, both for access and authentication and for the execution of its applications. In this sense, ViSCa offers full interoperability and full access from any user device without any loss of security. The concept prevents possible attacks by third parties, guaranteeing the confidentiality of personal data, bank accounts, and private financial information. The ViSCa concept is split into two different phases: the execution of the user authentication protocol on the user device, and the cloud architecture that executes the secure application. Thus, secure service access is guaranteed at any time, anywhere, and through any device supporting the previously required security mechanisms. The security level is improved by using virtualization technology in the cloud. In the ViSCa platform-based mobile payment service, terminal virtualization is used to virtualize smart card hardware, and the virtualized smart cards are managed as a whole through mobile cloud technology; this entire process is referred to as Smart Card as a Service (SCaaS). The ViSCa platform-based mobile payment service virtualizes the smart card, which is used as the means of payment, and loads it into the mobile cloud. Authentication takes place through the application, which helps the user log on to the mobile cloud and choose one of the virtualized smart cards as a payment method. To decide the scope of the research, namely comparing the ViSCa platform-based mobile payment service with other similar cases, we categorized the mobile payment services from prior research into groups by distinct features and service types. Both groups store credit card data in the mobile device and settle the payment process at the offline market. Based on where the electronic financial transaction information (data) is stored, the groups can be categorized into two main service types. The first is the "App Method", which loads the data onto a server connected to the application.
The second, the "Mobile Card Method", stores the data in an integrated circuit (IC) chip that holds financial transaction data and is built into the mobile device's secure element (SE). From prior research on the acceptance factors of mobile payment services and their market environment, we derived six key factors for the comparative analysis: economy, generality, security, convenience (ease of use), applicability, and efficiency. Within the chosen group, we compared and analyzed the selected cases and the ViSCa platform-based mobile payment service.
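The two-phase ViSCa flow described above can be pictured with the following conceptual sketch; every class name, credential check, and token here is our own simplifying assumption for illustration, not the ViSCa specification.

```python
# A conceptual sketch of the two-phase SCaaS flow: authentication on the device,
# then payment executed by a virtualized smart card inside the mobile cloud.
from dataclasses import dataclass, field

@dataclass
class VirtualSmartCard:
    card_id: str
    def execute_payment(self, amount: float) -> str:
        # In the ViSCa concept the secure application runs in the cloud, not on the device.
        return f"payment of {amount} approved by virtual card {self.card_id}"

@dataclass
class MobileCloud:
    cards: dict = field(default_factory=dict)   # user -> list of virtualized cards
    def pay(self, user: str, card_id: str, amount: float, token: str) -> str:
        if token != f"token-{user}":            # placeholder for the real session check
            raise PermissionError("authentication required")
        card = next(c for c in self.cards[user] if c.card_id == card_id)
        return card.execute_payment(amount)

def authenticate_on_device(user: str, secret: str) -> str:
    """Phase 1: runs on the user device; returns a session token on success."""
    if secret != "correct-horse":               # placeholder credential check
        raise PermissionError("bad credentials")
    return f"token-{user}"

cloud = MobileCloud(cards={"alice": [VirtualSmartCard("visa-01")]})
token = authenticate_on_device("alice", "correct-horse")
print(cloud.pay("alice", "visa-01", 19.99, token))
```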

Intents of Acquisitions in Information Technology Industries (정보기술 산업에서의 인수 유형별 인수 의도 분석)

  • Cho, Wooje;Chang, Young Bong;Kwon, Youngok
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.123-138 / 2016
  • This study investigates the intents of acquisitions in information technology industries. Mergers and acquisitions are strategic decisions at the corporate level and have been an important tool for a firm to grow. Plenty of firms in information technology industries have acquired startups over the last decades to increase production efficiency, expand their customer base, or improve quality. For example, Google has made about 200 acquisitions since 2001, Cisco has acquired about 210 firms since 1993, Oracle has made about 125 acquisitions since 1994, and Microsoft has acquired about 200 firms since 1987. Although many existing papers theoretically study the intents or motivations of acquisitions, few papers investigate them empirically, mainly because it is challenging to measure and quantify the intents of M&As. This study examines acquisition intent by measuring specific intents for M&A transactions. Using our measures of acquisition intent, we compare intents across four acquisition types: (1) a hardware firm acquires a hardware firm, (2) a hardware firm acquires a software/IT service firm, (3) a software/IT service firm acquires a hardware firm, and (4) a software/IT service firm acquires a software/IT service firm. We presume that there are differences in the reasons why a hardware firm acquires another hardware firm, why a hardware firm acquires a software firm, why a software/IT service firm acquires a hardware firm, and why a software/IT service firm acquires another software/IT service firm. Using data on M&As in US IT industries, we identified the major intents of the M&As. The acquisition intents were identified from the press releases of M&A announcements and measured in four categories. First, an acquirer may intend to save operating costs by sharing common resources between the acquirer and the target; the cost savings can accrue from economies of scope and scale. Second, an acquirer may intend product enhancement or development: knowledge and skills transferred from the target may enable the acquirer to enhance product quality or expand product lines. Third, an acquirer may intend to gain an additional customer base in order to expand the market, penetrate the market, or enter a foreign market. Fourth, a firm may acquire a target with the intent of expanding its customer channels; by complementing its existing customer channels, the firm can increase its revenue. Our results show that acquirers have intents of cost saving more often in acquisitions between hardware companies than in acquisitions between software companies. Hardware firms are more likely than software firms to acquire with intents of product enhancement or development. Overall, the intent of product enhancement/development is the most frequent intent across all four acquisition types, and the intent of customer base expansion is the second most frequent. We also analyze our data with a classification into production-side and customer-side intents, based on the activities of a firm's value chain: intents of cost saving in operations and of product enhancement/development can be viewed as production-side intents, while intents of customer base expansion and of expanding customer channels can be viewed as customer-side intents.
Our analysis shows that the ratio of the number of customer-side intents to that of production-side intents is higher in acquisitions where a software firm is the acquirer than in acquisitions where a hardware firm is the acquirer. This study contributes to the IS literature. First, it provides insights for understanding M&As in IT industries by answering the question of why an IT firm intends to acquire another IT firm. Second, it also provides the distribution of acquisition intents across acquisition types.

The perception of undergraduates of the college of education on the importance of trainee teacher certification areas and sub-factors (사범대학 재학생의 예비 교사 인증 영역 및 하위 요소에 대한 중요도 인식 분석)

  • Kim, Tae-Hoon;Lee, Tae-Ho
    • 대한공업교육학회지 / v.39 no.1 / pp.164-188 / 2014
  • The purpose of this study is to investigate the perceptions of undergraduates of the college of education on the importance of the certification areas and factors suggested by the certification system, at the level of each department as well as the college as a whole, in order to come up with measures for further improvement. The specific objectives of this study are, first, to verify differences in the perceived importance of certification areas and factors by department and, second, to verify differences in the perceived importance of certification areas and factors by grade. The population of this study is the undergraduates of the college of education at A University, and a survey on perceived importance was conducted on 758 students of 10 departments. In total, 800 copies of the survey were distributed, and 299 copies (37.3%) were retrieved. First, it was found that undergraduates of the college of education at A University highly recognize the necessity of a new system to produce excellent teachers. With regard to differences by department, in the area of teaching personality, there are differences in the perceived importance of the teaching aptitude test and the completion of the social intelligence development program. In the area of teaching expertise, there are differences in the perceived importance of completing curriculum education subjects per major, completing curriculum contents per major, and participating in the teaching demonstration contest. In the area of student guidance expertise, there are differences by department in the completion of creative character development related education programs and the "teaching practice" course. In the area of communication skills in the information society, the minimum score requirement for a second foreign language is considered less important than the other factors. Second, as for differences by grade, freshmen recognize more highly than other grades the importance of the validity of the teacher training course, the integrity of the course, the validity of the teacher training course in producing excellent teachers, the development of graduates' job performance ability as teachers, and the appropriateness of the curriculum of the college of education in producing excellent teachers. In particular, seniors recognize the necessity of a new system to produce excellent teachers most highly.

A Study on Design of Agent based Nursing Records System in Attending System (에이전트기반 개방병원 간호기록시스템 설계에 관한 연구)

  • Kim, Kyoung-Hwan
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.73-94 / 2010
  • The attending system is a medical system that allows doctors in clinics to use the extra equipment of hospitals (beds, laboratories, operating rooms, etc.) for their patients' care under a contract between the doctors and hospitals. The system is therefore very beneficial in terms of the efficient use of medical resources. However, it is necessary to develop a strong support system to address its weaknesses and build on its merits. If doctors use hospital beds under the attending system, they are able to check a patient's condition often and provide nursing care services. However, the current attending system lacks delivery and assistance support. Thus, for the successful performance of the attending system, a networking system should be developed to facilitate communication between the doctors and nurses. In particular, the nursing records in the attending system could help doctors monitor the patient's condition and the provision of nursing care services. A nursing record is the formal documentation associated with nursing care. It is not merely a data repository that helps nurses track their activities; nursing records also represent a resource of primary information that can be reused. In order to maximize their usefulness, nursing records have been introduced as part of computerized patient records. However, nursing records are internal data that are not disclosed by hospitals, and the lack of standardization of the record list makes it difficult to share nursing records. Under the attending system, nurses want to minimize the amount of effort they have to put into maintaining additional records; they would therefore try to maintain the current level of nursing records in the form of record lists and record attributes, while doctors would require more detailed and real-time information about their patients in order to monitor their condition. Therefore, this study developed a system for assisting in the maintenance and sharing of nursing records under the attending system. In contrast to previous research on the functionality of computer-based nursing records, we emphasize the practical usefulness of nursing records from the viewpoint of the actual implementation of the attending system. We suggest that nurses design a nursing record dictionary for their convenience, and that doctors and nurses confirm the definitions they look up in the dictionary through negotiations with intelligent agents. Such an agent-based system could facilitate networking among medical institutes. Multi-agent systems are a widely accepted paradigm for the distribution and sharing of computation workloads in the scientific community, and agent-based systems have been developed with differences in functional cooperation, coordination, and negotiation. To support such communication, a framework for a multi-agent based system is proposed in this study. The agent-based approach is useful for developing a system that promotes trade-offs in transactions involving multiple attributes. A brief summary of our contributions follows. First, we propose an efficient and accurate utility representation and acquisition mechanism based on a preference scale while minimizing user interactions with the agent; trade-offs between various transaction attributes can also be easily computed.
Second, by providing a multi-attribute negotiation framework based on the attribute utility evaluation mechanism, we allow both the doctors in charge and the nurses to negotiate over the various transaction attributes in the nursing record lists defined by the latter. Third, we designed the architecture of the nursing record management server and a system of agents that supports doctors and nurses with regard to the framework and mechanisms proposed above. A formal protocol was also developed to create and control the communication required for negotiations. We verified the realization of the system by developing a web-based prototype, implemented using ASP and IIS 5.1.
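To illustrate the multi-attribute utility idea underlying the negotiation framework, here is a sketch under our own assumptions, with hypothetical attributes, weights, and preference scales rather than the paper's; an agent would compare such utilities when deciding whether to accept an offer or propose a counter-offer.

```python
# A minimal sketch of weighted additive multi-attribute utility over a
# preference scale, used to compare alternative nursing-record configurations.
def utility(alternative, weights, scales):
    """Weighted sum of attribute preference scores (each scaled to 0..1)."""
    return sum(weights[a] * scales[a][alternative[a]] for a in weights)

weights = {"detail_level": 0.5, "update_frequency": 0.3, "entry_effort": 0.2}
scales = {
    "detail_level":     {"summary": 0.3, "full": 1.0},
    "update_frequency": {"daily": 0.4, "per_shift": 1.0},
    "entry_effort":     {"high": 0.2, "low": 1.0},   # low effort is preferred
}

doctor_offer = {"detail_level": "full", "update_frequency": "per_shift",
                "entry_effort": "high"}
nurse_offer = {"detail_level": "summary", "update_frequency": "daily",
               "entry_effort": "low"}

print(utility(doctor_offer, weights, scales),   # 0.84 under these weights
      utility(nurse_offer, weights, scales))    # 0.47 under these weights
```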

The Effect of Patent Citation Relationship on Business Performance : A Social Network Analysis Perspective (특허 인용 관계가 기업 성과에 미치는 영향 : 소셜네트워크분석 관점)

  • Park, Jun Hyung;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.127-139 / 2013
  • With the advent of the recent knowledge-based society, interest in intellectual property has increased. Firms have tried to produce productive outcomes through continuous innovative activities. In particular, ICT firms, which lead the high-tech industry, have tried to manage intellectual property more systematically. Firms' interest in patents has increased as a means of managing innovative activities and knowledge property. A patent involves not only simple information but also important value as information on technology, management, and rights. Moreover, as a patent contains detailed contents regarding technology development activities, it is regarded as valuable data. Patents, which reflect technology diffusion and research outcomes, are closely interrelated with business performance, as patents are considered a significant indicator of a firm's level of innovation. As patent information, which represents companies' intellectual capital, is accumulated continuously, quantitative analysis has become possible. An advantage of patents is that information on related industries and standardized information can be easily obtained. Through patents, the flow of knowledge can be traced, and patent information can be analyzed at various levels, from individual patents to nations. Patent information is used to analyze technological status and its effects on performance. A patent with a high citation frequency is regarded as having high technological value. Analysis of patent information includes both citation index analysis using the number of citations and network analysis using citation relationships. Network analysis can provide information on the flows of knowledge and technological changes, and it can indicate future research directions. Studies using patent citation analysis vary academically and practically: for citation index research, studies analyzing influential major patents have been conducted, and for network analysis research, studies identifying the flows of technology in certain industries have been conducted. Social network analysis is applied not only in sociology but also in management consulting and corporate knowledge management. Research on how a company's network position affects business performance has been conducted from various perspectives in the field of network analysis. Social network analysis can be based on visual forms, while network indicators are available through quantitative analysis. Social network analysis is used to analyze outcomes in terms of network position, focusing largely on centrality and structural holes. Centrality indicates that actors occupying central positions among other actors have an advantage in exerting stronger influence in exchange relationships; degree centrality, betweenness centrality, and closeness centrality are used for centrality analysis. Structural holes refer to empty places in a social structure and are measured by efficiency and constraint. This study analyzes firms' networks in terms of patents and examines how network characteristics influence business performance. For this purpose, seventy-four ICT companies listed in the S&P 500 were chosen as the sample. UCINET 6 was used to analyze network structural characteristics such as out-degree centrality, betweenness centrality, and efficiency. Then, regression analysis was conducted to find out how these network characteristics are related to business performance.
It is found that each network index has a significant impact on net income, i.e., business performance. However, efficiency is negatively associated with business performance: as efficiency increases, net income decreases. Furthermore, betweenness centrality alone is statistically significant in the multiple regression analysis with the three network indexes. The patent citation network analysis reveals the flows of knowledge between firms, and it can be expected to contribute to companies' management strategies by analyzing their structural positions in the network.
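As a rough companion to the analysis outlined above (the study itself used UCINET 6), the sketch below assumes Python with networkx, pandas, and statsmodels, builds a toy firm-level citation network with placeholder net-income figures, and regresses performance on out-degree centrality, betweenness centrality, and a structural-hole efficiency measure.

```python
# A hedged sketch of the network-and-regression analysis described above.
# Firms, citation edges, and net incomes are hypothetical placeholders.
import networkx as nx
import pandas as pd
import statsmodels.api as sm

# Directed edge u -> v means a patent of firm u cites a patent of firm v.
citations = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "A"),
             ("B", "D"), ("E", "A"), ("E", "C"), ("F", "B"), ("D", "F"),
             ("G", "A"), ("C", "G"), ("H", "C"), ("G", "H")]
G = nx.DiGraph(citations)

eff_size = nx.effective_size(G)
features = pd.DataFrame({
    "out_degree":  nx.out_degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    # Burt's efficiency: effective size divided by the number of alters.
    "efficiency":  {n: eff_size[n] / len(set(nx.all_neighbors(G, n))) for n in G},
})
features["net_income"] = [120.0, 95.0, 150.0, 80.0, 60.0, 45.0, 70.0, 55.0]

# Regress the performance proxy on the three network indexes.
X = sm.add_constant(features[["out_degree", "betweenness", "efficiency"]])
model = sm.OLS(features["net_income"], X).fit()
print(model.summary())
```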