• Title/Summary/Keyword: Ubiquitous School


Classification of the Architectures of Web based Expert Systems (웹기반 전문가시스템의 구조 분류)

  • Lim, Gyoo-Gun
    • Journal of Intelligence and Information Systems / v.13 no.4 / pp.1-16 / 2007
  • With the expansion of Internet use and of e-business, a growing number of studies address intelligence-based systems in preparation for the ubiquitous environment. Expert systems, in turn, have evolved from stand-alone types to web-based client-server types that are now used in various Internet environments. In this paper, we investigate the development environment of web-based expert systems, classify and analyze them by type, and suggest general models of web-based expert systems and their architectures. We classify web-based expert systems from two perspectives. First, based on load balancing between client and server, we distinguish the Server Oriented model and the Client Oriented model. Second, based on the degree of knowledge and inference sharing, we distinguish the No Sharing, Server Sharing, Client Sharing, and Client-Server Sharing models. By combining them, we derive eight types of web-based expert systems. We also analyze the placement of Knowledge Bases, Fact Bases, and Inference Engines on the Internet, along with the pros and cons, required technologies, design considerations, and service types of each model. With the framework proposed in this study, more efficient expert systems can be developed for future environments.

  • PDF
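The abstract's two classification axes combine into eight architecture types. A minimal sketch of that cross-product (the axis and model names follow the abstract; the enumeration itself is only illustrative):

```python
from itertools import product

# Axis 1: load balancing between client and server (from the abstract)
load_balancing = ["Server Oriented", "Client Oriented"]

# Axis 2: degree of knowledge/inference sharing (from the abstract)
sharing = ["No Sharing", "Server Sharing", "Client Sharing", "Client-Server Sharing"]

# The cross product yields the paper's eight web-based expert system types
architectures = [f"{lb} / {sh}" for lb, sh in product(load_balancing, sharing)]

for arch in architectures:
    print(arch)
```

For each of the eight combinations, the paper then asks where the Knowledge Base, Fact Base, and Inference Engine should be located on the network.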

Technical problems of Li-Fi wireless network (무선 네트워크 기술 Li-Fi의 문제점)

  • Park, Hyun Uk;Kim, Hyun Ho;Lee, Hoon Jae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.186-188 / 2014
  • In recent years, Wi-Fi and LTE have been the most widely used wireless networks in Korea. Mobile-centered services such as SNS, applications (apps), and file downloads have made everyday life more convenient, and as the amount of requested data grows, mobile users want their connections to be both fast and safe. Among cellular networks (3G, 4G (LTE), LTE-A) and Wi-Fi standards (802.11n at 2.4 GHz, 802.11ac at 5 GHz), users mainly rely on 4G (LTE) and 802.11n 2.4 GHz Wi-Fi. Li-Fi is a technology developed to make wireless networking faster and safer: it communicates using visible light and is about 100 times faster than Wi-Fi (802.11n, 2.4 GHz) and about 66 times faster than LTE-A. However, significant obstacles currently stand in the way of commercializing Li-Fi. In this paper, we comparatively analyze the problems in commercializing Li-Fi against the widely used Wi-Fi.

  • PDF

Postsurgical Wound Infection Caused by Mycobacterium conceptionense Identified by Sequencing of 16S rRNA, hsp65, and rpoB Genes in an Immunocompetent Patient (16S rRNA, hsp65, 및 rpoB 염기순서분석으로 동정한 Mycobacterium conceptionense에 의한 면역능이 정상인 환자에서 발생한 수술후 창상감염)

  • Lee, Ja Young;Kim, Si Hyun;Shin, Jeong Hwan;Lee, Hyun-Kyung;Lee, Young Min;Song, Sae Am;Bae, Il Kwon;Kim, Chang-Ki;Jun, Kyung Ran;Kim, Hye Ran;Lee, Jeong Nyeo;Chang, Chulhun L.
    • Annals of Clinical Microbiology / v.17 no.1 / pp.23-27 / 2014
  • Rapidly growing mycobacteria are ubiquitous in the environment and are increasingly being recognized as opportunistic pathogens. Recently, a new species, Mycobacterium conceptionense, has been validated from the Mycobacterium fortuitum third biovariant complex by molecular analysis. However, there are few reports, and postsurgical wound infection by this species is rare. We report a case of postsurgical wound infection caused by M. conceptionense in an immunocompetent patient, identified by sequencing analysis of the 16S rRNA, hsp65, and rpoB genes.

An Exploratory Study of Purchasing Decision Making and Adoption on the RFID Purchasing Customer (RFID 구매고객의 구매 의사결정과 수용에 대한 탐색적 연구)

  • Seo, Pil-Su;Jang, Jang-Yi;Shim, Kyeng-Su
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.3 no.4 / pp.89-116 / 2008
  • RFID (Radio Frequency Identification) is regarded as a core technology of ubiquitous computing. Although it faces technical limitations, such as the standardization of RFID tags, as well as economic limitations, many companies around the world have already adopted RFID to improve their management efficiency. This study finds that the adoption of RFID technology brings opportunities to improve companies' operational processes and to strengthen customer satisfaction. The research focuses on helping suppliers who want to sell RFID products to customers build marketing strategy by analyzing the purchasing process. The findings are as follows. First, buying center members usually weigh product reliability and the precision of technical specifications most heavily in a new-task buying situation, while they put their first purchasing priority on price in a straight rebuy. Second, in both new-task and straight-rebuy situations, purchasing personnel obtain information about new products through product performance tests, organizational engineers, opinions from purchasing personnel at other companies, and sample checks. Third, regarding purchasing risk at first purchase, the persons in charge of material purchasing are most aware of technical risks, followed by financial risks and time-delay risks; in addition to the risks mentioned above, repeat purchasers also consider the risk of an opportunity loss for better products. Fourth, the role of the departments concerned does not differ across purchasing stages, so marketers need to strengthen differentiated strategies to persuade their customers. Fifth, purchasing decision making is strongly influenced by the final users, so suppliers should perform the most active marketing at the first stage of purchasing through various resources. Finally, suppliers that have built close relationships with their customers need to provide them with consistent information so that the customers have less motivation to purchase from competitors.

  • PDF

A Study on Purchasing Decision Making and Adoption : Focused on the RFID Purchasing Customer (구매의사 결정과 수용에 대한 연구 : RFID 구매고객 중심으로)

  • Seo, Pil-Su;Jang, Jang-Yi;Shim, Kyeng-Su
    • Proceedings of the Korea Society of Venture & Entrepreneurship Conference / 2008.11a / pp.257-282 / 2008
  • RFID (Radio Frequency Identification) is regarded as a core technology of ubiquitous computing. Although it faces technical limitations, such as the standardization of RFID tags, as well as economic limitations, many companies around the world have already adopted RFID to improve their management efficiency. This study finds that the adoption of RFID technology brings opportunities to improve companies' operational processes and to strengthen customer satisfaction. The research focuses on helping suppliers who want to sell RFID products to customers build marketing strategy by analyzing the purchasing process. The findings are as follows. First, buying center members usually weigh product reliability and the precision of technical specifications most heavily in a new-task buying situation, while they put their first purchasing priority on price in a straight rebuy. Second, in both new-task and straight-rebuy situations, purchasing personnel obtain information about new products through product performance tests, organizational engineers, opinions from purchasing personnel at other companies, and sample checks. Third, regarding purchasing risk at first purchase, the persons in charge of material purchasing are most aware of technical risks, followed by financial risks and time-delay risks; in addition to the risks mentioned above, repeat purchasers also consider the risk of an opportunity loss for better products. Fourth, the role of the departments concerned does not differ across purchasing stages, so marketers need to strengthen differentiated strategies to persuade their customers. Fifth, purchasing decision making is strongly influenced by the final users, so suppliers should perform the most active marketing at the first stage of purchasing through various resources. Finally, suppliers that have built close relationships with their customers need to provide them with consistent information so that the customers have less motivation to purchase from competitors.

  • PDF

Separation of Nanomaterials Using Flow Field-Flow Fractionation (흐름 장-흐름 분획기를 이용한 나노물질의 분리)

  • Kim, Sung-Hee;Lee, Woo-Chun;Kim, Soon-Oh;Na, So-Young;Kim, Hyun-A;Lee, Byung-Tae;Lee, Byoung-Cheun;Eom, Ig-Chun
    • Journal of Korean Society of Environmental Engineers / v.35 no.11 / pp.835-860 / 2013
  • Recently, the consumption of nanomaterials has increased significantly in both industrial and commercial sectors as a result of steady advances in nanotechnology. This ubiquitous use of nanomaterials has raised the concern that their release into the environment may harm human health as well as natural ecosystems, and it is necessary to characterize their behavior in various environmental media and to evaluate their ecotoxicity. To accomplish such assessments, methods to effectively separate nanomaterials from diverse media and to quantify their properties must first be developed. Among the separation techniques developed so far, this study focuses on Field-Flow Fractionation (FFF) because of its strengths, such as relatively little disturbance of samples and simple pretreatment, and we review domestic and international literature on the separation of nanomaterials using FFF. In particular, studies using Flow Field-Flow Fractionation (FlFFF) are highlighted because it is the most frequently applied FFF technique. The basic principle of FlFFF is briefly introduced, and the studies conducted so far are classified and examined by target nanomaterial in order to furnish practical data and information for researchers in this field. The literature review suggests that operational conditions, such as pretreatment, selection of membrane and carrier solution, and the rate (velocity) of each flow, should be optimized to separate nanomaterials effectively from various matrices using FFF. Moreover, coupling or hyphenating the FFF with detectors and analyzers appears to be a prerequisite for quantifying particle properties after separation. However, applications to date have been restricted in the types of target nanomaterials and environmental media, and domestic literature on both the separation and the characterization of nanomaterials is extremely limited. Considering the rapidly increasing consumption of nanomaterials, research efforts in this area are urgently needed.
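For reference, the size-based separation in FlFFF discussed above rests on the classic parallel-plate retention relation; this is a textbook sketch, not taken from the paper itself ($w$ is the channel thickness, $V_c$ the cross-flow rate, $V_{out}$ the channel outlet flow rate, $D$ the particle diffusion coefficient, $\eta$ the carrier viscosity, and $d_h$ the hydrodynamic diameter):

```latex
% Retention time in flow field-flow fractionation (FlFFF), normal mode
t_r \approx \frac{w^2}{6D}\,\ln\!\left(1 + \frac{V_c}{V_{out}}\right),
\qquad
D = \frac{k_B T}{3\pi\eta d_h}
```

Because $D$ falls as $d_h$ grows (Stokes-Einstein), larger particles elute later, which is what makes FlFFF a size-separation technique in the normal mode.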

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.109-130 / 2011
  • One of the major problems in data mining is the size of the data, as most data sets are huge these days. Streams of data are normally accumulated into data storages or databases: transactions on the internet, mobile devices, and the ubiquitous environment produce streams of data continuously. Some data sets are simply buried unused inside huge storage because of their size; others are lost as soon as they are created because they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in data mining. Stream data is a data set that is accumulated into storage from a data source continuously, and its size in many cases becomes increasingly large over time. Mining information from this massive data takes many resources, such as storage, money, and time. These characteristics make it difficult and expensive to store all the stream data accumulated over time; yet if one mines only recent or partial data, valuable information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from each data set in the stream and accumulated into a master rule set storage, which also serves as a model for real-time decision making. One main advantage of this method is that it takes much smaller storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used directly as a prediction model: prompt response to user requests is possible at any time, as the rule set is always ready to make decisions. This makes real-time decision making possible, which is the greatest advantage of this method. Based on theories of ensemble approaches, combining many different models can produce a better prediction model; the consolidated rule set covers all of the data, while the traditional sampling approach covers only part of it. This study uses stock market data, which is heterogeneous in that the characteristics of the data vary over time: stock market indexes fluctuate whenever an event influences them, so the variance of each variable is large compared with a homogeneous data set, and prediction is naturally much more difficult. This study tests two general mining approaches and compares their prediction performance with that of the suggested method. The first approach induces a rule set from only the recent data to predict new data; the second induces a rule set from all the data accumulated from the beginning every time a prediction is needed. Neither is as good as the accumulated rule set in performance. Furthermore, the study experiments with different prediction models: the first builds a prediction model using only the more important rules, while the second uses all the rules, weighting each rule by its performance; the second approach performs better. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. The method has the limitation that its application here is bound to stock market data; a more dynamic real-time stream data set is desirable for further application. Another open problem is that, as the number of rules grows over time, special rules such as redundant or conflicting rules must be managed efficiently.
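The accumulate-and-vote idea described above can be sketched in a few lines. The rule structure, the stubbed-out miner, and the fixed initial weight are all illustrative assumptions, not the paper's actual algorithm:

```python
# Sketch of the accumulated-rule-set method: mine rules from each stream
# batch, merge them into a master rule set, and predict by weighted vote.

def mine_rules(batch):
    """Stub miner: derive one threshold rule from a batch of (x, label) pairs."""
    avg = sum(x for x, _ in batch) / len(batch)
    return [{"threshold": avg}]

def accumulate(master, new_rules, weight=1.0):
    """Add newly mined rules to the master rule set with an initial weight.
    (The paper adjusts weights by rule performance; a constant is used here.)"""
    for rule in new_rules:
        master.append({**rule, "weight": weight})
    return master

def predict(master, x):
    """Ensemble-style weighted vote of all accumulated rules."""
    score = sum(r["weight"] * (1 if x > r["threshold"] else -1) for r in master)
    return 1 if score > 0 else 0

master = []
for batch in [[(1.0, 0), (2.0, 1)], [(3.0, 1), (4.0, 1)]]:   # two stream batches
    master = accumulate(master, mine_rules(batch))

print(predict(master, 5.0))  # 1: above both accumulated thresholds
```

Only the compact master rule set is kept between batches, which is the storage saving the abstract emphasizes; the raw batches can be discarded once mined.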

Comparative Analysis of ViSCa Platform-based Mobile Payment Service with other Cases (스마트카드 가상화(ViSCa) 플랫폼 기반 모바일 결제 서비스 제안 및 타 사례와의 비교분석)

  • Lee, June-Yeop;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.163-178 / 2014
  • This research proposes "Virtualization of Smart Cards (ViSCa)," a security system that aims to provide a multi-device platform for deploying services that require a strong security protocol, both for access and authentication and for the execution of applications, and analyzes a ViSCa platform-based mobile payment service by comparing it with similar cases. Today, new ICT, the diffusion of new user devices (smartphones, tablet PCs, and so on), and growing internet penetration are creating many world-shaking services, yet most of these applications must share private information, which means that security breaches and illegal access to that information are real threats to be solved. Mobile payment, one of these innovative services, has the same issues, because it sometimes requires user identification, an authentication procedure, and confidential data sharing; thus an extra layer of security is needed in its communication and execution protocols. The ViSCa concept is a holistic approach with centralized management: a security system that seeks to provide a ubiquitous multi-device platform for mobile payment services that demand a powerful security protocol. In this sense, ViSCa offers full interoperability and full access from any user device without any loss of security, preventing attacks by third parties and guaranteeing the confidentiality of personal data, bank accounts, and private financial information. The ViSCa concept is split into two phases: the execution of the user authentication protocol on the user device, and the cloud architecture that executes the secure application. Thus, secure service access is guaranteed at any time, anywhere, through any device supporting the required security mechanisms. The security level is improved by using virtualization technology in the cloud: terminal virtualization is used to virtualize the smart card hardware, and the virtualized smart cards are managed as a whole through mobile cloud technology. This entire process is referred to as Smart Card as a Service (SCaaS). The ViSCa platform-based mobile payment service virtualizes the smart card used as a means of payment and loads it into the mobile cloud; authentication takes place through an application, which logs the user on to the mobile cloud, where one of the virtualized smart cards is chosen as the payment method. To set the scope of the comparison, we categorized the mobile payment services of prior research into groups by distinct features and service type. Both groups store credit card data on the mobile device and settle payment at the offline market. By the location where the electronic financial transaction data is stored, the services fall into two main types: the "App Method," which loads the data on a server connected to the application, and the "Mobile Card Method," which stores the data in an Integrated Circuit (IC) chip built into the mobile device's secure element (SE). From prior research on the acceptance factors of mobile payment services and their market environment, we derived six key factors for comparative analysis: economy, generality, security, convenience (ease of use), applicability, and efficiency. Within the chosen group, we compared and analyzed the selected cases and the ViSCa platform-based mobile payment service.

A Study on Actual Usage of Information Systems: Focusing on System Quality of Mobile Service (정보시스템의 실제 이용에 대한 연구: 모바일 서비스 시스템 품질을 중심으로)

  • Cho, Woo-Chul;Kim, Kimin;Yang, Sung-Byung
    • Asia Pacific Journal of Information Systems / v.24 no.4 / pp.611-635 / 2014
  • Information systems (IS) have become ubiquitous and have changed every aspect of how people live. While some IS have been successfully adopted and widely used, others have failed to be adopted and have been crowded out despite remarkable progress in technology. Both the Technology Acceptance Model (TAM) and the IS Success Model (ISSM), among many others, have contributed to explaining success as well as failure in IS adoption and usage. While the TAM suggests that intention to use and perceived usefulness lead to actual IS usage, the ISSM indicates that information quality, system quality, and service quality affect IS usage and user satisfaction. Upon literature review, however, we found a significant void in the theoretical development and application of the two models, and we raise research questions. First, despite the causal relationship between intention to use and actual usage, most previous studies employed only intention to use as a dependent variable, without overtly explaining its relationship with actual usage. Moreover, even in the few studies that employed actual IS usage as a dependent variable, the degree of actual usage was measured from users' perceptual responses to survey questionnaires. Such measurement may not capture 'actual' usage in a strict sense, since responses may be distorted by selective perception or stereotypes; by the same token, the degree of system quality that IS users perceive may not be 'real' quality either. This study seeks to fill this void by measuring actual usage and system quality using 'fact' data such as system logs and the specifications of users' information and communications technology (ICT) devices. More specifically, we propose an integrated research model that brings together the TAM and the ISSM, composed of variables measured with fact data as well as survey data. With this model, we expect to reveal the difference between real and perceived system quality and to investigate the relationship between the perception-based measure of intention to use and the fact-based measure of actual usage. We also aim to add empirical findings on the general research question: what factors influence actual IS usage, and how? To address these questions and examine the research model, we selected a mobile campus application (MCA) and collected both fact data and survey data. From the system logs we retrieved such information as menu usage counts, device performance, display size, and operating system version; at the same time, we surveyed university students who use the MCA and collected 180 valid responses. A partial least squares (PLS) method was employed to validate the research model. Of nine hypotheses, five were supported and four were not. The relationships between (1) perceived system quality and perceived usefulness, (2) perceived system quality and intention to use, (3) perceived usefulness and intention to use, (4) quality of device platform and actual IS usage, and (5) intention to use and actual IS usage were significant; the relationships between (1) quality of device platform and perceived system quality, (2) quality of device platform and perceived usefulness, (3) quality of device platform and intention to use, and (4) perceived system quality and actual IS usage were not. The results reveal notable differences from previous studies. First, although intention to use shows a positive effect on actual IS usage, its explanatory power is very weak ($R^2$=0.064). Second, fact-based system quality (quality of the user's device platform) directly affects actual IS usage without the mediating role of intention to use. Lastly, the relationships between perceived system quality and other constructs differ completely from those between quality of device platform (fact-based system quality) and other constructs. In a post-hoc analysis, IS users' past behavior was added to the research model to investigate the low explanatory power for actual IS usage: past IS usage has a strong positive effect on current IS usage while intention to use does not, implying that IS usage has already become a habitual behavior. This study provides several implications. First, fact-based data (i.e., system logs of real usage records) are more likely to reflect actual usage than perception-based data. Second, by identifying the direct impact of device platform quality on actual IS usage (without any mediating role of attitude or intention), this study invites further research on other factors that may directly influence actual usage. Finally, the results imply that organizations equipped with high-quality systems may directly expect a high level of system usage.
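The post-hoc contrast above, namely that past usage explains actual usage while stated intention barely does, can be illustrated with a simple least-squares fit. The data below are synthetic and only the qualitative pattern (high vs. low $R^2$) is the point; the study itself used PLS on real log and survey data:

```python
# Illustrative comparison of explanatory power (R^2) for actual usage:
# survey-based intention vs. fact-based past usage.

def r_squared(xs, ys):
    """R^2 of a simple least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

actual    = [10, 12, 30, 28, 55, 60]    # log-based usage counts (synthetic)
intention = [4, 5, 3, 5, 4, 5]          # survey scale, weakly related
past      = [9, 13, 27, 30, 52, 61]     # previous-period usage, habitual

print(r_squared(intention, actual) < r_squared(past, actual))  # True
```

The habitual-usage pattern yields an $R^2$ near 1 for past usage and near 0 for intention, mirroring the paper's finding that usage has become habitual.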

Dynamic Decision Making using Social Context based on Ontology (상황 온톨로지를 이용한 동적 의사결정시스템)

  • Kim, Hyun-Woo;Sohn, M.-Ye;Lee, Hyun-Jung
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.43-61 / 2011
  • In this research, we propose dynamic decision making using social context based on an ontology. Dynamic adaptation, defined as the creation of proper information using contexts that depend on the decision maker's state of affairs in a ubiquitous computing environment, is adopted for high-quality decision making. The context for dynamic adaptation is classified as static, dynamic, or social. Static context contains explicit personal information such as demographic data. Dynamic context, such as weather or traffic information, is provided by external information service providers. Social context implies much more tacit knowledge, such as social relationships, than the other two types, but it is not easy to extract implied tacit knowledge or generalized rules from such information, so social context has rarely been applied to dynamic adaptation. In this light, we apply social context to dynamic adaptation to generate context-appropriate personalized information. A modeling methodology is necessary to adopt dynamic adaptation using this context; the proposed context model uses ontologies and cases, which are well suited to representing tacit and unstructured knowledge such as social context. Case-based reasoning (CBR) and constraint satisfaction are applied in the dynamic decision-making system. CBR uses cases to represent the social, dynamic, and static context and to extract personalized knowledge from a personalized case base. Constraint satisfaction is used when the case selected through CBR needs dynamic adaptation, which is common because the context can change over time with the state of the environment. CBR adopts a problem-context ontology for effective representation of static, dynamic, and social context, using a case structure with an index and a solution together with the decision maker's problem ontology. Cases are stored in the case base as a repository of the decision maker's personal experience and knowledge. Constraint satisfaction uses a solution ontology, extracted from collective intelligence generalized from the solutions of many decision makers; the solution ontology is retrieved to find a proper solution for the decision maker's context when necessary, and dynamic adaptation is applied to adapt the selected case using it. The decision-making process comprises the following steps. First, whenever the system becomes aware of a new context, it converts the context into the problem-context ontology with a case structure; any context is defined as a case with a formal knowledge representation, so social context, as implicit knowledge, is also represented in a formal form. Second, a proper case is selected as a decision-making solution from the decision maker's personal case base; the selected case should be the best one given the context of the decision maker's current status and requirements. However, the environment and context around the decision maker can change, making it necessary to adapt the selected case. Third, if the selected case is not available, or the decision maker is not satisfied given the newly arrived context, constraint satisfaction and the solution ontology are applied to derive a new solution, using the previously selected case and the solution ontology to adapt it. The proposed methodology is verified by searching for a meeting place according to the decision maker's requirements and context; the extracted solution is satisfactory with respect to the meeting purpose.
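The retrieve-then-adapt loop described in the three steps above can be sketched as follows. The case structure, the attribute-match similarity measure, and the constraints are illustrative assumptions standing in for the paper's ontologies, not its actual model:

```python
# Sketch of CBR retrieval followed by constraint-based adaptation,
# using the meeting-place scenario from the abstract.

case_base = [
    {"context": {"purpose": "business", "time": "lunch"},  "solution": "cafe A"},
    {"context": {"purpose": "social",   "time": "evening"}, "solution": "pub B"},
]

def similarity(ctx, case_ctx):
    """Count matching context attributes (static, dynamic, and social alike)."""
    return sum(ctx.get(k) == v for k, v in case_ctx.items())

def retrieve(ctx):
    """CBR step: pick the most similar stored case for the new context."""
    return max(case_base, key=lambda c: similarity(ctx, c["context"]))

def adapt(case, constraints, alternatives):
    """Constraint-satisfaction step: if the retrieved solution violates a
    constraint, substitute the first alternative satisfying all constraints."""
    for sol in [case["solution"]] + alternatives:
        if all(check(sol) for check in constraints):
            return sol
    return None

ctx = {"purpose": "business", "time": "lunch"}
case = retrieve(ctx)                                   # best match: "cafe A"
solution = adapt(case,
                 [lambda s: s != "cafe A"],            # e.g. cafe A just closed
                 alternatives=["cafe C"])
print(solution)  # cafe C
```

In the paper, the alternatives would come from the solution ontology built from collective intelligence rather than from a hard-coded list.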