• Title/Summary/Keyword: Information Exchange Standard


Extending a WebDAV Protocol to Efficiently Support the Management of User Properties (사용자 속성 관리의 효율적 지원을 위한 WebDAV 프로토콜의 확장)

  • Jung Hye-Young;Kim Dong-Ho;Ahn Geon-Tae;Lee Myung-Joon
    • The KIPS Transactions:PartC / v.12C no.7 s.103 / pp.1057-1066 / 2005
  • WebDAV (Web-based Distributed Authoring and Versioning), a protocol which supports web-based distributed authoring and versioning, provides a standard infrastructure for asynchronous collaboration on various contents through the Internet. WebDAV property management is the facility for setting and managing the main information of resources as properties, and a user property, one kind of WebDAV property, can be freely defined by users. This free definition of user properties makes it very useful for developing web-based applications such as WebDAV-based collaboration systems. However, with the existing WebDAV property management scheme, there are limits to developing various applications. This paper describes DavUP (WebDAV User property design Protocol), a protocol which extends the original WebDAV, and its utilization to efficiently support the management of WebDAV user properties. DavUP requires the definition of the collection structure and type definition properties for an application. To do this, we added a new header and appropriate WebDAV methods to the WebDAV protocol. To show the usefulness of DavUP, we extended our DAVinci WebDAV server to support the DavUP protocol and experimentally implemented a general Open Workspace on DAVinci, which provides effective functions for sharing and exchanging open data among general users.
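
As a rough illustration of the user-property mechanism that DavUP extends, the following sketch sets and reads a user-defined property with plain WebDAV PROPPATCH/PROPFIND requests using Python's requests library; the resource URL, namespace, and property name are illustrative, and the DavUP-specific header and methods described in the paper are not reproduced here.

```python
# Hypothetical resource URL and property namespace/name, for illustration only.
import requests

URL = "http://localhost:8080/dav/report.doc"

PROPPATCH_BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:propertyupdate xmlns:D="DAV:" xmlns="urn:example:workspace">
  <D:set>
    <D:prop><review-status>approved</review-status></D:prop>
  </D:set>
</D:propertyupdate>"""

PROPFIND_BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:" xmlns="urn:example:workspace">
  <D:prop><review-status/></D:prop>
</D:propfind>"""

# Set the user-defined (dead) property on the resource.
requests.request("PROPPATCH", URL, data=PROPPATCH_BODY,
                 headers={"Content-Type": "application/xml"})

# Read it back; Depth: 0 limits the PROPFIND to the resource itself.
resp = requests.request("PROPFIND", URL, data=PROPFIND_BODY,
                        headers={"Content-Type": "application/xml", "Depth": "0"})
print(resp.status_code, resp.text)
```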

VLSI Design of Interface between MAC and PHY Layers for Adaptive Burst Profiling in BWA System (BWA 시스템에서 적응형 버스트 프로파일링을 위한 MAC과 PHY 계층 간 인터페이스의 VLSI 설계)

  • Song Moon Kyou;Kong Min Han
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.1 / pp.39-47 / 2005
  • The range of hardware implementation in communication systems increases as high-speed processing is required for high data rates. In the broadband wireless access (BWA) system based on IEEE Standard 802.16, the functions of the upper part of the MAC layer, which provide the data needed for generating MAC PDUs, are implemented in software, while the tasks from formatting MAC PDUs with those data to transmitting the messages to the modem are implemented in hardware. In this paper, the interface hardware for efficient message exchange between the MAC and PHY layers in the BWA system is designed. The hardware performs the following functions, including those of the transmission convergence (TC) sublayer: (1) formatting TC PDUs (protocol data units) from/to MAC PDUs, (2) Reed-Solomon (RS) encoding/decoding, and (3) resolving the DL-MAP and UL-MAP, so that it controls transmission slots and uplink/downlink traffic according to the modulation scheme of the burst profile. It also provides various control signals for the PHY modem. In addition, the truncated binary exponential backoff (TBEB) algorithm is implemented in the subscriber station to avoid collisions in contention-based message transmission. The VLSI architecture performing all these functions is implemented and verified in VHDL.
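
The truncated binary exponential backoff mentioned above can be summarized in a few lines; the sketch below shows the generic TBEB rule (double the contention window on each collision, up to a cap), with window sizes chosen for illustration rather than taken from an 802.16 UCD message.

```python
import random

def tbeb_backoff_slots(attempt, initial_window=8, maximum_window=1024):
    """Truncated binary exponential backoff: the contention window doubles
    after each collision but is capped (truncated) at maximum_window.
    Returns how many contention slots to defer before retrying."""
    window = min(initial_window * (2 ** attempt), maximum_window)
    return random.randrange(window)

# Slots deferred after successive collisions of a contention-based request.
for attempt in range(6):
    print(attempt, tbeb_backoff_slots(attempt))
```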

Rollback Dependency Detection and Management with Data Consistency in Collaborative Transactional Workflows (협력 트랜잭셔널 워크플로우에서 데이터 일관성을 고려한 철회 종속성 감지 및 관리)

  • Byun, Chang-Woo;Park, Seog
    • Journal of KIISE:Databases / v.30 no.2 / pp.197-208 / 2003
  • Workflow technology is not readily applied to the coordinated execution of the applications (steps) that make up a business process, such as a collaborative series of tasks, because of the lack of network infrastructure, standards for information exchange, and management of data consistency for shared data accessed in conflicting modes. In particular, the problems that can arise from shared data accessed in conflicting modes have received little attention. In this paper, to handle data consistency during rollback for failure handling or recovery, we classify rollback dependency into three types: implicit rollback dependency within a transactional workflow, implicit rollback dependency across collaborative transactional workflows, and explicit rollback dependency across collaborative transactional workflows. We also propose a rollback dependency compiler that determines these three types of rollback dependency. A workflow designer specifies the workflow schema and the resources accessed by the steps from a global database of resources, and the rollback dependency compiler generates an enhanced workflow schema with the rollback dependency specification. The run-time system interprets this specification and executes the rollback policy while preserving data consistency if a step fails. As a result, the approach offers better correctness and performance than state-of-the-art WFMSs.
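
As a loose illustration of what a rollback-dependency compiler might detect, the following sketch flags a step as rollback-dependent on an earlier step when it reads data that the earlier step wrote; the step names and the single rule used here are hypothetical, and the paper's three-way classification across collaborative workflows is considerably richer.

```python
# Hypothetical steps with their read/write sets on shared data items.
steps = {
    "reserve_stock": {"reads": set(),          "writes": {"inventory"}},
    "bill_customer": {"reads": {"inventory"},  "writes": {"invoice"}},
    "ship_order":    {"reads": {"invoice"},    "writes": {"shipment"}},
}

def rollback_dependencies(steps):
    """Flag (later, earlier) pairs where the later step reads data written by
    the earlier one, so rolling back the earlier step forces the later one
    to roll back as well. This is only one simple detection rule."""
    names = list(steps)
    deps = []
    for i, earlier in enumerate(names):
        for later in names[i + 1:]:
            if steps[earlier]["writes"] & steps[later]["reads"]:
                deps.append((later, earlier))
    return deps

print(rollback_dependencies(steps))
# [('bill_customer', 'reserve_stock'), ('ship_order', 'bill_customer')]
```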

The study of RFID Tag read range test with RFID Emulator (RFID Emulator를 이용한 Tag 인식거리 시험 연구)

  • Joo, Hae-Jong;Kim, Young-Choon;Lee, Eu-Soo;Cho, Moon-Taek
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.10 / pp.4536-4542 / 2011
  • RFID technology uses radio-wave communication to transfer data between a reader and an electronic tag attached to an object for the purpose of identification and tracking. RFID technology can be applied to various service areas, such as position determination, remote processing management, and information exchange between objects, by collecting, storing, processing, and tracing information from the tags attached to objects, using radio waves to recognize the information and environment of those objects. To revitalize these services, however, it is important to test RFID tag performance. There are few institutions that possess the RFID emulator technology needed to organize an internationally standard RFID test environment, and not many manufacturers are aware of the exact RFID test standards and requirements of the international standards. In this paper, the construction of tag performance test environments and test methods required by EPCglobal and ISO/IEC is proposed. The RFID tag performance test items specified in ISO/IEC FDIS 18046-3 are explained, tests are performed for each measured item, and results are derived for RFID tag performance against the international standards.

The Effect of Corporate Social Responsibilities on the Quality of Corporate Reporting (기업의 사회책임이 기업경영보고의 질에 미치는 영향)

  • Jeong, Kap-Soo;Park, Cheong-Kyu
    • Journal of Distribution Science / v.14 no.6 / pp.75-80 / 2016
  • Purpose - A growing demand for sustainability reporting has placed pressure on firms regarding non-financial information that affects firm valuation, growth, and development. In particular, a number of researchers have investigated various topics in Corporate Social Responsibility (CSR), a form of non-financial information. Prior studies suggest that CSR may affect corporate outcomes such as corporate reporting, financial performance, and disclosures. However, prior results are not clear on whether CSR affects corporate outcomes, partly because of measurement issues with CSR. In this study, we examine whether CSR affects the quality of corporate reporting, one of the popular measures of corporate outcomes, and find evidence that CSR positively affects the quality of corporate reporting. Research design, data, and methodology - We collected a unique dataset of CSR ratings from MSCI. A total of 169 firms listed on the Korean Stock Exchange from 2011 to 2014 were collected and analysed together with the detailed CSR reports. Using a correlation test, we found a weak association between CSR and the quality of corporate reporting. However, the regression tests showed a strong relationship between CSR and the quality of corporate reporting after controlling for other variables that may affect it. Additionally, we calculated the t-statistics based on heteroskedasticity-consistent standard errors (White, 1980). Results - Before running the regression tests, we sorted the measures of the two dependent variables by CSR rating (from AAA to CCC). The results indicate that the quality-of-reporting measures, discretionary accruals and performance-matched discretionary accruals, monotonically decrease as the CSR ratings increase, which supports our hypothesis. In the regression tests, the coefficient on MJDA (PMDA) is -0.183 (-0.173) and significant at the 5% level. We interpret these results as CSR affecting the quality of corporate reporting in positive ways. The coefficients on the control variables are consistent with prior studies; for example, the coefficients on both LOSS and LEV are positive and significant at conventional levels, meaning that firms in financial difficulty may have lower-quality corporate reporting. Conclusion - We found evidence that CSR is positively associated with the quality of corporate reporting. This study contributes to the literature in several ways. First, it extends the line of CSR research by providing additional evidence in the setting of ethical behavior by management, which is consistent with the hypothesis and supports the results of prior studies. Second, to the best of our knowledge, this is the first study using the MSCI CSR ratings; in contrast with prior studies using different measures of CSR, the MSCI CSR ratings allow us to provide an in-depth analysis. Third, the additional measure of the dependent variable (PMDA) improves the robustness of our results. Overall, this study extends the findings of prior studies by providing incremental evidence.
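
A minimal sketch of the regression design described above, using simulated data: discretionary accruals regressed on a CSR rating plus LOSS and LEV controls, with White (1980) heteroskedasticity-consistent standard errors via statsmodels. The variable names follow the abstract, but the generated numbers are not the paper's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 169                                   # sample size matching the abstract
df = pd.DataFrame({
    "MJDA": rng.normal(0.05, 0.03, n),    # discretionary accruals proxy (simulated)
    "CSR":  rng.integers(1, 8, n),        # CSR rating mapped CCC=1 .. AAA=7 (simulated)
    "LOSS": rng.integers(0, 2, n),        # loss-firm indicator (simulated)
    "LEV":  rng.uniform(0.1, 0.9, n),     # leverage (simulated)
})

model = smf.ols("MJDA ~ CSR + LOSS + LEV", data=df)
result = model.fit(cov_type="HC0")        # White heteroskedasticity-consistent SEs
print(result.summary())
```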

Implementation of XML Query Processing System Using the Materialized View Cache-Answerability (실체뷰 캐쉬 기법을 이용한 XML 질의 처리 시스템의 구현)

  • Moon, Chan-Ho;Park, Jung-Kee;Kang, Hyun-Chul
    • The KIPS Transactions:PartD / v.11D no.2 / pp.293-304 / 2004
  • Recently, caching for database-backed web applications has received much attention. The results of frequent queries can be cached for repeated reuse or for efficient processing of related queries. Since the emergence of XML as a standard for data exchange on the web, today's web applications retrieve information from remote XML sources across the network, and thus it is desirable to maintain XML query results in a cache for such applications. In this paper, we describe the implementation of an XML query processing system that supports cache-answerability of XML queries and evaluate its performance. The XML path expression, one of the core features of XML query languages including XQuery, XPath, and XQL, was considered as the XML query, and its result is maintained as an XML materialized view in the XML cache. The algorithms proposed in [13] for rewriting a given XML path expression using a relevant materialized view were implemented with an RDBMS as the XML store. The major implementation issues are described in detail. The results of performance experiments conducted with the implemented system showed the effectiveness of cache-answerability of XML queries. A comparison with previous research in terms of performance is also provided.
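
The core idea of cache-answerability for simple path expressions can be sketched as follows: if a materialized view caches the result of a prefix of the query path, the query is rewritten to evaluate only the remaining suffix over the view. The function below is a toy illustration under that assumption; the rewriting algorithms from [13] that the system actually implements handle richer path expressions.

```python
def rewrite_with_view(query_path, view_path):
    """Return the residual path to evaluate over the cached view, or None
    if the view cannot answer the query (simple prefix containment only)."""
    q = query_path.strip("/").split("/")
    v = view_path.strip("/").split("/")
    if q[:len(v)] == v:                    # the view path is a prefix of the query
        rest = q[len(v):]
        return "view_result" + ("/" + "/".join(rest) if rest else "")
    return None

print(rewrite_with_view("/site/people/person/name", "/site/people/person"))
# -> 'view_result/name'
print(rewrite_with_view("/site/regions/item", "/site/people/person"))
# -> None
```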

A Filtering Technique of Streaming XML Data based Postfix Sharing for Partial matching Path Queries (부분매칭 경로질의를 위한 포스트픽스 공유에 기반한 스트리밍 XML 데이타 필터링 기법)

  • Park Seog;Kim Young-Soo
    • Journal of KIISE:Databases / v.33 no.1 / pp.138-149 / 2006
  • As sensor networks and ubiquitous computing environments have emerged, there is growing demand for handling continuous, fast data such as streaming data. As work on streaming data has begun, the management of streaming data in publish-subscribe systems has become a research topic, and the recent emergence of XML as a standard for information exchange on the Internet has increased interest in publish-subscribe systems. Filtering techniques for streaming XML data in existing publish-subscribe systems use automaton-based schemes, and YFilter, one such filtering technique, is very popular. YFilter exploits commonality among path queries by sharing the common prefixes of the paths, so that they are processed at most once, using a top-down approach. However, because partial-matching path queries break the common prefix sharing and do not evaluate from the root, YFilter's throughput decreases. We instead share commonality among path queries through the common postfixes of the paths and use a bottom-up approach rather than a top-down one. This filtering technique is called PoSFilter, and we verify it by comparing its throughput with that of YFilter.
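
A small sketch of the postfix-sharing idea, assuming queries of the partial-matching form //a/b: reversed query paths are stored in a trie so that queries ending in the same steps share nodes, and matching walks bottom-up from the current element toward the root. PoSFilter's streaming, automaton-based implementation is more elaborate than this illustration.

```python
def build_postfix_trie(queries):
    """Store each query's steps in reverse, so queries with a common postfix
    share trie nodes (e.g. //person/name and //item/name share 'name')."""
    trie = {}
    for q in queries:
        node = trie
        for step in reversed(q.strip("/").split("/")):
            node = node.setdefault(step, {})
        node["$match"] = q
    return trie

def match_bottom_up(trie, element_path):
    """Walk from the current element toward the root, reporting every query
    whose postfix matches the tail of the element's path."""
    node, matched = trie, []
    for step in reversed(element_path.strip("/").split("/")):
        if step not in node:
            break
        node = node[step]
        if "$match" in node:
            matched.append(node["$match"])
    return matched

trie = build_postfix_trie(["//person/name", "//item/name", "//person/address"])
print(match_bottom_up(trie, "/site/people/person/name"))   # ['//person/name']
```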

$AB^2$ Semi-systolic Architecture over $GF(2^m)$ ($GF(2^m)$상에서 $AB^2$ 연산을 위한 세미시스톨릭 구조)

  • 이형목;전준철;유기영;김현성
    • Journal of the Korea Institute of Information Security & Cryptology / v.12 no.2 / pp.45-52 / 2002
  • In this contribution, we propose a new MSB (most significant bit) algorithm based on the AOP (All One Polynomial) and two parallel semi-systolic architectures to compute $AB^2$ over the finite field $GF(2^m)$. The proposed architectures are based on the standard basis and use the property of the irreducible AOP, whose coefficients are all 1. The proposed parallel semi-systolic architecture (PSM) has a critical path of $D_{AND2} + D_{XOR2}$ per cell and a latency of $m+1$. The modified parallel semi-systolic architecture (MPSM) has a critical path of $D_{XOR2}$ per cell and the same latency as PSM. The two proposed architectures, PSM and MPSM, have lower latency and smaller hardware complexity than previous architectures. They can be used as basic architectures for exponentiation, division, and inversion. Since the proposed architectures have regularity, modularity, and concurrency, they are suitable for VLSI implementation. They can serve as basic building blocks for algorithms that require exponentiation, such as the Diffie-Hellman key exchange scheme, the Digital Signature Algorithm (DSA), and the ElGamal encryption scheme, and can be applied to implementing cryptosystems based on elliptic curves.
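
For reference, the arithmetic that the systolic array realizes in hardware can be sketched in software as below: a carry-less multiplication reduced modulo the degree-4 all-one polynomial, applied twice to obtain $AB^2$. The hardware scheduling, cell structure, and latency results are of course not captured by this sketch.

```python
M = 4
AOP = 0b11111            # x^4 + x^3 + x^2 + x + 1, all coefficients equal to 1

def gf_mul(a, b, mod=AOP, m=M):
    """Carry-less (GF(2)) multiplication of a and b, reduced modulo the AOP."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:          # degree reached m: subtract (XOR) the modulus
            a ^= mod
    return r

def ab2(a, b):
    """Compute A*B^2 as two field multiplications."""
    return gf_mul(a, gf_mul(b, b))

# Example: A = x^3 + 1, B = x + 1  ->  A*B^2 = x^3 + x^2 in GF(2^4) with this AOP.
print(bin(ab2(0b1001, 0b0011)))   # 0b1100
```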

A Queriable XML Compression using Inferred Data Types (추론한 데이타 타입을 이용한 질의 가능 XML 압축)

  • ;;Chung Chin-Wan
    • Journal of KIISE:Databases / v.32 no.4 / pp.441-451 / 2005
  • HTML is mostly stored in native file systems rather than in specialized repositories such as databases. Like HTML, XML, the standard for the exchange and representation of data on the Internet, mostly resides in native file systems. However, since XML data is irregular and verbose, disk space and network bandwidth are wasted compared to regularly structured data. To overcome this inefficiency, research on the compression of XML data has been conducted. Among recently proposed XML compression techniques, some do not support querying compressed data, while others that do support it blindly encode data values using predefined encoding methods without considering the types of the data values, which necessitates partial decompression for processing range queries. As a result, query performance on compressed XML data is degraded. This research therefore proposes an XML compression technique that supports direct and efficient evaluation of queries on compressed XML data. The technique adopts dictionary encoding to encode each tag of the XML data and applies appropriate encoding methods to data values according to their inferred types. Through the implementation and performance evaluation of the proposed technique, it is shown that the implemented XML compressor efficiently compresses real-life XML data sets and achieves significant improvements in query performance on compressed XML data.
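
A toy sketch of the two encoding ideas, under the assumption that only integer and string types are inferred: tag names are dictionary-encoded to small integers, and values that parse as non-negative integers are stored as order-preserving fixed-width bytes so that range comparisons need no decompression. The paper's compressor infers more types and handles full XML structure.

```python
import struct

class TagDictionary:
    """Map each distinct tag name to a small integer code."""
    def __init__(self):
        self.codes, self.names = {}, []
    def encode(self, tag):
        if tag not in self.codes:
            self.codes[tag] = len(self.names)
            self.names.append(tag)
        return self.codes[tag]

def encode_value(text):
    """Encode a value by inferred type: non-negative integers become 4-byte
    big-endian (order-preserving) binary, everything else stays UTF-8 text."""
    if text.isdigit():
        return b"I" + struct.pack(">I", int(text))
    return b"S" + text.encode("utf-8")

tags = TagDictionary()
records = [("price", "42"), ("price", "7"), ("title", "XML Compression")]
compressed = [(tags.encode(t), encode_value(v)) for t, v in records]

# Range comparisons work directly on the encoded bytes, without decompression.
print(compressed[1][1] < compressed[0][1])   # True: 7 < 42
print(tags.names)                            # ['price', 'title']
```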

Study of matching user operation name and operation classification code (ICD-9-CM) (Through OCS program use facilitation at operating room) (사용자 수술명과 수술분류 code (ICD-9-CM) 일치율 향상에 관한 연구 (수술실 OCS program 사용 활성화를 통하여))

  • Choi, Hyang-Ha;Kim, Mi-Young;Kim, Do-Jin;Yu, Ji-Won;Chang, Jung-Hwa;Park, Su-Jung;Park, Jae-Sung
    • Quality Improvement in Health Care / v.12 no.1 / pp.104-112 / 2006
  • Background: The need to unify and standardize the codes used in hospitals has been emphasized since the OCS (Order Communicating System) was adopted. The purpose of this study was therefore to standardize operation codes through continuous training on the ICD-9-CM codes used as the standard codes in the operating room OCS program. Method: For 400 operations, the operation codes entered in the operating room OCS program were compared with the operation names recorded in the medical records. In addition, for 3,710 cases, the matching rate between the operation codes entered by the medical records department and the operation codes entered in the computer system was compared for each department. User operation names were matched to operation codes, and major diagnoses by operating department were matched to operation names. Results: User operation names were reflected in the operation classification codes in detail, and user-entered operation codes were registered. The input rate and matching rate of operation codes gradually improved after the improvement activity. In particular, the matching rate was high in ophthalmology, where operation names are finely segmented, while plastic surgery and orthopedics, which have many emergency operations and broad operation names, showed low input rates. Conclusions: As the medical field progresses in computerization, awareness of information exchange and sharing grows. Among the codes used for classification in medical institutions, codes related to surgical operations differ by hospital and department, so computerization and standardization are essential. When standardization efforts continue in alliance with individual hospitals and institutions, the preparation of medical policy data at the national level will be accelerated.
