• Title/Summary/Keyword: School Information Publish System


Design and Evaluation of a Fault-tolerant Publish/Subscribe System for IoT Applications (IoT 응용을 위한 결함 포용 발행/구독 시스템의 설계 및 평가)

  • Bae, Ihn-Han
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1101-1113 / 2021
  • The rapid growth of sense-and-respond applications and the emerging cloud computing model present a new challenge: providing publish/subscribe middleware as a scalable and elastic cloud service. The publish/subscribe interaction model is a promising solution for scalable data dissemination over wide-area networks. In addition, there has been work on the publish/subscribe messaging paradigm that guarantees reliability and availability in the face of node and link failures. Such publish/subscribe systems are commonly used in information-centric networks and in the edge-fog-cloud infrastructure with which the IoT efficiently processes the massive amounts of sensing data collected from the surrounding environment. In this paper, we propose a quorum-based hierarchical fault-tolerant publish/subscribe system (QHFPS) to enable reliable delivery of messages in the presence of link and node failures. QHFPS efficiently distributes IoT messages to the publish/subscribe brokers in fog overlay layers on the basis of a proposed extended stepped-grid (xS-grid) quorum, which provides tolerance to node failures and network partitions. We evaluate the performance of QHFPS with an analytical model in three respects: the number of transmitted pub/sub messages, the average subscription delay, and the subscription delivery rate.
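
The xS-grid quorum construction itself is defined in the paper; as a rough illustration of why quorum-based broker placement tolerates failures, the sketch below builds plain grid quorums (one full row plus one broker from each other row) and checks the pairwise-intersection property on which reliable pub/sub delivery rests. The 3x3 layout and all names are illustrative assumptions, not the paper's design.

```python
# Minimal grid-quorum sketch: any two quorums intersect, so a publication
# written to one quorum is always visible to a subscription read from another.
from itertools import product

def grid_quorums(brokers, rows, cols):
    """Arrange `brokers` in a rows x cols grid and yield, for each row,
    every quorum formed by that full row plus one broker from each other
    row."""
    assert len(brokers) == rows * cols
    grid = [brokers[r * cols:(r + 1) * cols] for r in range(rows)]
    for r in range(rows):
        other_rows = [grid[i] for i in range(rows) if i != r]
        for picks in product(*other_rows):
            yield set(grid[r]) | set(picks)

quorums = list(grid_quorums(list(range(9)), 3, 3))
# The fault-tolerance core: every pair of quorums shares at least one
# broker, so individual broker failures cannot silently drop a message.
assert all(a & b for a in quorums for b in quorums)
print(len(quorums), "quorums; one example:", sorted(quorums[0]))
```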

ICT-based Cooperative Model for Transparent and Sustainable Scholarly Publishing Ecosystem

  • Jung, Youngim;Seo, Tae-Sul
    • Journal of Contemporary Eastern Asia / v.21 no.1 / pp.53-71 / 2022
  • The overall purposes of this study are to identify actions taken to counter predatory publishing practices and to propose an ICT-based model for detecting such practices. The need to raise quantitative performance metrics to support career goals has created immense pressure on researchers to publish as frequently as possible. This "publish or perish" syndrome appears to be fueling a rise in scholarly journals and conferences that provide quicker and easier routes to publication. However, such avenues sometimes involve questionable academic practices with important ethical ramifications. One notable example is the proliferation of predatory publishing, including predatory journals and fake conferences. The widening impact of such activities is beginning to prompt academic societies, publishers, and institutions to take measures. This paper discusses the issues surrounding predatory publishing practices and some of the actions taken by various stakeholders to address them. To build a transparent and sustainable scholarly publishing ecosystem, this study highlights multi-dimensional, specific solutions, including reforms to research ethics codes, research management rules, and legal protection from exploitative practices. Finally, it proposes an ICT-based cooperative model for monitoring predatory publishers as a potential solution for creating a sustainable, transparent infrastructure for scholarly publication that guards against misconduct in publishing practices.

Web Service Proxy Architecture using WS-Eventing for Reducing SOAP Traffic

  • Terefe, Mati Bekuma;Oh, Sangyoon
    • Journal of Information Technology and Architecture / v.10 no.2 / pp.159-167 / 2013
  • Web Services offer many benefits over other types of middleware in distributed computing. However, using Web Services consumes considerable network bandwidth, because they rely on an XML-based protocol that is heavier than binary protocols. Although there has been much research on minimizing the network traffic and bandwidth usage of Web Services messages, none of it has solved the problem completely. In this paper, we propose a transparent proxy with a cache that avoids the transfer of repeated SOAP data sent by a Web Service to an application. To maintain cache consistency, we introduce the publish/subscribe paradigm, using WS-Eventing between the proxy and the Web Service. A system implemented on our proposed architecture does not compromise Web Service standards. The evaluation of our system shows that caching SOAP messages not only reduces network traffic but also decreases request delays.
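
As a rough sketch of the proposed architecture (not the paper's implementation), the proxy below caches SOAP responses keyed by a hash of the request body and evicts entries when the service publishes a change notification. The WS-Eventing subscription plumbing is reduced to a plain callback, and `fetch_from_service` is a hypothetical stand-in for forwarding the request.

```python
import hashlib

class CachingProxy:
    """Transparent proxy: identical SOAP requests are served from cache,
    and pub/sub notifications (WS-Eventing in the paper) evict entries
    whose source data has changed."""

    def __init__(self, fetch_from_service):
        self._fetch = fetch_from_service   # forwards a request to the service
        self._cache = {}                   # request hash -> SOAP response
        self._topics = {}                  # event topic -> dependent hashes

    def request(self, topic, soap_body):
        key = hashlib.sha256(soap_body.encode()).hexdigest()
        if key not in self._cache:         # miss: forward once, then reuse
            self._cache[key] = self._fetch(soap_body)
            self._topics.setdefault(topic, set()).add(key)
        return self._cache[key]

    def on_notification(self, topic):
        """Subscriber callback: drop every response derived from `topic`."""
        for key in self._topics.pop(topic, set()):
            self._cache.pop(key, None)

proxy = CachingProxy(lambda body: f"<response for {body}>")
proxy.request("quotes", "<getQuote>ACME</getQuote>")   # forwarded to service
proxy.request("quotes", "<getQuote>ACME</getQuote>")   # served from cache
proxy.on_notification("quotes")                        # service published a change
```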

A wireless sensor network approach to enable location awareness in ubiquitous healthcare applications

  • Singh, Vinay Kumar;Lim, Hyo-Taek;Chung, Wan-Young
    • Journal of Sensor Science and Technology / v.16 no.4 / pp.277-285 / 2007
  • In this paper, we outline the research issues we are pursuing toward building location-aware environments, mainly for ubiquitous healthcare applications. Such location-aware applications can report what is happening in a given space. To locate an object, such as a patient or an elderly person, active ceiling-mounted reference beacons are placed throughout the building. The reference beacons periodically publish location information over RF and ultrasonic signals, allowing applications running on mobile or static nodes to determine their physical location. Once the object-carried passive listener receives this information, it determines its location from the reference beacons. The cost of the system is low, and in our experiments its indoor location accuracy was fairly fine-grained, between 7 and 12 cm, using only sensor nodes and wireless sensor network technology. The passive architecture used here protects user privacy, while privacy at the server is secured through authentication using the Geopriv approach. Information from the sensor nodes is forwarded to a base station, where further computation determines the object's current position.
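
The RF-plus-ultrasound ranging described above can be sketched as follows: the RF pulse arrives effectively instantly, so the delay until the matching ultrasonic pulse gives the distance to each beacon, and three or more distances are trilaterated. The beacon layout and delays below are invented for illustration, and the paper's own calibration and filtering are omitted.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def range_from_delay(dt_seconds):
    """Distance to a beacon: the RF pulse is effectively instantaneous,
    so the RF-to-ultrasound arrival gap times the speed of sound is the
    range."""
    return SPEED_OF_SOUND * dt_seconds

def trilaterate(beacons, dists):
    """Least-squares 2-D position. Subtracting the first sphere equation
    |x - b_i|^2 = d_i^2 from the others linearizes the system to A x = b."""
    beacons = np.asarray(beacons, dtype=float)
    dists = np.asarray(dists, dtype=float)
    b0, d0 = beacons[0], dists[0]
    A = 2.0 * (beacons[1:] - b0)
    rhs = (d0**2 - dists[1:]**2
           + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]  # ceiling positions, metres
delays = (0.00729, 0.00933, 0.00729)            # RF->ultrasound gaps, seconds
print(trilaterate(beacons, [range_from_delay(dt) for dt in delays]))
```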

A Study on the Trend of Collaborative Research Using Korean Health Panel Data: Focusing on the Network Structure of Co-authors (한국의료패널 데이터를 활용한 공동연구 동향 분석: 공동 연구자들 연결망 구조를 중심으로)

  • Um, Hyemi;Lee, Hyunju;Choi, Sung Eun
    • Journal of Information Technology Applications and Management / v.25 no.4 / pp.185-196 / 2018
  • This study investigates the social network among authors in order to improve the quality of panel research. The Korea Health Panel (KHP), implemented as a collaboration between KIHASA (Korea Institute for Health and Social Affairs) and NHIC (National Health Insurance Service) since 2008, provides a critical infrastructure for policy making and management of the insurance system and healthcare services. Using bibliographic data extracted from academic databases, eighty articles published in domestic and international journals between 2008 and April 2014 were collected. The data were analyzed with NetMiner 4.0, a social network analysis package, to identify the extent to which authors are involved in healthcare-use research and the patterns of collaboration between them. The analysis reveals that most authors publish a very small number of articles and collaborate within tightly knit circles. Centrality measures confirm these findings by revealing that only a small percentage of the authors are structurally dominant and influence the flow of communication among the others. This leads to the discovery of dependencies among elements of the co-author network, such as affiliations within health panel communities. Based on these findings, we recommend that the Korea Health Panel cultivate a wider base of influential authors and promote broader collaborations.
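
The paper runs its network analysis in NetMiner 4.0; the same kind of computation can be sketched with Python's networkx, where each article contributes a tie between every pair of its authors and centrality scores flag the structurally dominant ones. The toy article list below is invented for illustration.

```python
from itertools import combinations
import networkx as nx

articles = [  # hypothetical author lists, one per article
    ["Kim", "Lee", "Park"],
    ["Kim", "Choi"],
    ["Lee", "Park"],
    ["Choi", "Jung", "Kim"],
]

G = nx.Graph()
for authors in articles:
    for a, b in combinations(authors, 2):  # one co-authorship tie per pair
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1         # count repeated collaborations
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality: who collaborates widely.
# Betweenness centrality: who bridges otherwise separate author circles.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
```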

A Study on the Disclosure Method of Major Topics in Response to the ESG Management Disclosure Transition-Focused on the Oil and Gas Industry (ESG경영 공시전환에 대응하는 중대토픽 공시방법 연구-석유와 가스산업 중심으로)

  • Park, TaeYang
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.1 / pp.53-70 / 2022
  • Recently, owing to the change to the SASB (Sustainability Accounting Standards Board) standards and GRI (Global Reporting Initiative) Standards 2021, the paradigm for non-financial information disclosure is changing significantly: from a method in which companies autonomously select material topics to one in which the ESG topics and indicators that must be disclosed are prescribed by industry. This study reveals that the number of compulsory topics for the oil and gas industry under GRI Standards 2021 is up to 2.4 times the average number of material topics disclosed when domestic companies publish sustainability reports under GRI Standards 2020. For the oil and gas industry, the similarities and differences between GRI Standards 2021 and the ESG topics covered by SASB are analyzed across the environmental, social, economic, and governance areas. In addition, the materiality test process, which differs under GRI Standards 2021, is introduced; the issues included in ten representative ESG-related initiatives are condensed into 62 items, and improvements are suggested for the materiality test applied to the topic pool.

The Development of Policy toward the Students' Access to Their Own Information on NEIS (나이스의 학생개인정보서비스 제공을 위한 인식조사)

  • Jang, Soon-Sun;Lee, Ok-Hwa
    • Journal of The Korean Association of Information Education / v.14 no.2 / pp.261-271 / 2010
  • NEIS (National Education Information System) provides educational information to parents, while students are unable to access their own academic information directly. Students filed a request with the National Human Rights Agency. The objective of this study is to investigate the feasibility of students' direct access to their own information and how to provide it. In detail, an online survey was conducted with 3,300 students, teachers, and parents, the stakeholders of the student information service on NEIS, asking whether they favor such access and, if so, how it should be provided. The results indicated that students, parents, and teachers are all positive, in that order. It was unexpected that teachers, who must input student information and are responsible for its accuracy and reliability, also favor it. Although human rights law suggests that all students have the right of direct access to their own information, the service can be introduced starting at the secondary school level if it cannot be launched for all levels at once. The range of information provided through this service is, in principle, the same as what parents can access, except for counseling information.


An Investigation on Expanding Co-occurrence Criteria in Association Rule Mining (연관규칙 마이닝에서의 동시성 기준 확장에 대한 연구)

  • Kim, Mi-Sung;Kim, Nam-Gyu;Ahn, Jae-Hyeon
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.23-38 / 2012
  • There is a large difference between purchasing patterns in an online shopping mall and in an offline market. This difference is caused mainly by the difference in accessibility of the two channels: the interval between the initial purchasing decision and its realization is relatively short in an online shopping mall, because a customer can place an order immediately. Because of this short interval, an online shopping mall transaction usually contains fewer items than an offline market transaction. In an offline market, customers usually keep some items in mind and buy them all at once a few days after deciding to buy them, instead of buying each item individually and immediately. By contrast, more than 70% of online shopping mall transactions contain only one item. This statistic implies that traditional data mining techniques cannot be applied directly to online market analysis, because hardly any association rule can survive at an acceptable level of support when there are so many null transactions. Most market basket analyses of online shopping mall transactions, therefore, have been performed by expanding the co-occurrence criterion of traditional association rule mining. While the traditional co-occurrence criterion defines the items purchased in one transaction as concurrently purchased, the expanded criterion regards items purchased by a customer during some predefined period (e.g., a day) as concurrently purchased. In studies using expanded co-occurrence criteria, however, the criterion has been defined arbitrarily by researchers, without theoretical grounds or agreement. The lack of clear grounds for adopting a particular criterion degrades the reliability of the analytical results, and it is hard to derive new meaningful findings by combining the outcomes of previous individual studies. In this paper, we compare expanded co-occurrence criteria and propose a guideline for selecting an appropriate one. First, we compare the accuracy of association rules discovered under various co-occurrence criteria; through this experiment, we expect to provide a guideline for selecting a criterion that corresponds to the purpose of the analysis. Additionally, we perform similar experiments with several groups of customers segmented by average duration between orders, attempting to discover the relationship between the optimal co-occurrence criterion and that duration. Through this series of experiments, we expect to provide basic guidelines for developing customized recommendation systems. Our experiments use a real dataset acquired from one of the largest internet shopping malls in Korea: 66,278 transactions of 3,847 customers conducted over the last two years. Overall, the results show that the accuracy of association rules for frequent shoppers (whose average duration between orders is relatively short) is higher than that for casual shoppers. In addition, we discover that, for frequent shoppers, the accuracy of association rules is very high when the co-occurrence criterion of the training set corresponds to that of the validation (i.e., target) set. This implies that the co-occurrence criterion for frequent shoppers should be set according to the application-purpose period. For example, an analyst should use a day as the co-occurrence criterion when offering a coupon valid for only a day, and a month when publishing a coupon book usable for a month. For casual shoppers, the accuracy of association rules appears unaffected by the application-purpose period; it becomes higher as a longer co-occurrence criterion is adopted. This implies that, for casual shoppers, the analyst should set the co-occurrence criterion as long as possible, regardless of the application-purpose period.
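
A minimal sketch of the expanded criterion, assuming a toy purchase log: instead of mining each one-item online order as its own basket, all items a customer buys within a chosen window (a day, a week, a month) are merged into a single basket first, and pair supports are counted over those baskets. Any real association-rule miner could consume the same baskets; the log and window bucketing below are illustrative only.

```python
from collections import Counter, defaultdict
from datetime import date
from itertools import combinations

log = [  # (customer, order_date, item) -- invented one-item online orders
    ("c1", date(2012, 3, 1), "milk"), ("c1", date(2012, 3, 1), "bread"),
    ("c1", date(2012, 3, 20), "milk"), ("c2", date(2012, 3, 2), "eggs"),
    ("c2", date(2012, 3, 5), "milk"), ("c2", date(2012, 3, 28), "bread"),
]

def baskets(log, window_days):
    """Merge each customer's purchases into window_days-wide baskets:
    the expanded co-occurrence criterion."""
    groups = defaultdict(set)
    for cust, d, item in log:
        groups[(cust, d.toordinal() // window_days)].add(item)
    return list(groups.values())

for window in (1, 7, 30):  # day / week / month co-occurrence criteria
    bs = baskets(log, window)
    pair_support = Counter(
        pair for b in bs for pair in combinations(sorted(b), 2))
    print(f"{window:>2}-day windows: {len(bs)} baskets, {dict(pair_support)}")
```

Note how, with a one-day window, most baskets stay single-item (the null-transaction problem), while wider windows let pairs accumulate support.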

CIA-Level Driven Secure SDLC Framework for Integrating Security into SDLC Process (CIA-Level 기반 보안내재화 개발 프레임워크)

  • Kang, Sooyoung;Kim, Seungjoo
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.909-928 / 2020
  • From the early 1970s, the US government began to recognize that penetration testing alone could not assure the security quality of products: the results of penetration testing, such as identified vulnerabilities and faults, can vary with the capabilities of the team. In other words, no penetration team can give such assurance, because "no vulnerabilities were found" is not the same as "the product has no vulnerabilities." The US government therefore realized that, to improve the security quality of products, the development process itself should be managed systematically and strictly, and from the 1980s it began to publish various standards for development methodology and evaluation/procurement systems embedding the "security-by-design" concept. Security-by-design means reducing a product's complexity by considering security from the initial phases of the development lifecycle, such as requirements analysis and design, to ultimately achieve product trustworthiness. Since 2002, the security-by-design concept has spread to the private sector under the name Secure SDLC, led by Microsoft and IBM, and it is currently used in fields ranging from automotive to advanced weapon systems. The problem, however, is that Secure SDLC is not easy to implement in the field, because the related standards and guidelines contain only abstract, declarative content. In this paper, therefore, we present a new framework for specifying the level of Secure SDLC an enterprise desires. Our proposed CIA (functional Correctness, safety Integrity, security Assurance)-level-based security-by-design framework combines an evidence-based security approach with the existing Secure SDLC. Using our methodology, first, the gap in Secure SDLC process level between a company and its competitors can be shown quantitatively; second, the framework is very useful for building a Secure SDLC in the field, because the detailed activities and documents needed to reach the desired level can be derived easily.

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through changes in the credit ratings published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally charge a large fee for the service, and the periodically provided ratings sometimes do not reflect a company's default risk at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper credit rating model. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more rating categories; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest. The professional rating agencies emphasize the importance of analysts' subjective judgments in determining credit ratings. In practice, however, a mathematical model using companies' financial variables plays an important role, since it is convenient to apply and cost-efficient. These financial variables include ratios representing a company's leverage, liquidity, and profitability. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of control parameters and the number of processing elements per layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for generating accurate predictions. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared toward multiclass classification tasks such as credit rating, and researchers have tried to extend the original SVM to multiclass classification. A variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature, but only a few types of MSVM have been tested in prior studies applying MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of them to a real-world case of credit rating in Korea: corporate bond rating, the most frequently studied area of credit rating for specific debt issues and other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional credit rating methods, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for predicting bond ratings. We also found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
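
Three of the MSVM schemes the paper compares can be sketched with scikit-learn's wrappers around a binary SVC, as below. The bond-rating data is proprietary, so the iris dataset stands in as a placeholder multiclass problem; DAGSVM and the Weston-Watkins / Crammer-Singer formulations need dedicated implementations and are omitted here.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import (OneVsOneClassifier, OneVsRestClassifier,
                                OutputCodeClassifier)
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # stand-in for the bond-rating data
schemes = {
    # one binary SVM per pair of classes, majority vote
    "one-against-one": OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")),
    # one binary SVM per class vs. the rest
    "one-against-all": OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")),
    # error-correcting output codes over binary SVMs
    "ECOC": OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                                 code_size=2, random_state=0),
}
for name, clf in schemes.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:>15}: {acc:.3f} mean CV accuracy")
```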