• Title/Summary/Keyword: information processing scope

Search Result 172, Processing Time 0.025 seconds

Parsing the Wh-Interrogative Construction in Korean

  • Yang, Jaehyung;Kim, Jong-Bok
    • Language and Information
    • /
    • v.17 no.2
    • /
    • pp.51-66
    • /
    • 2013
  • Korean is a wh-in-situ language where the wh-expression stays in situ with an obligatory Q-particle marking its interrogative scope. This paper briefly reviews some basic properties of the wh-question construction in Korean and shows how a typed feature structure grammar, HPSG (Pollard and Sag 1994, Sag et al. 2003), together with the notions of 'type hierarchy' and 'constructions', can provide a robust basis for parsing the wh-construction in the language. We show that this system induces robust syntactic structures as well as enriched semantic representations for real-time applications such as machine translation, which require deep processing of the phenomena concerned.

  • PDF
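The abstract above describes parsing via a type hierarchy plus percolation of the wh-expression's interrogative feature up to the clause where the Q-particle marks scope. A minimal sketch of that idea follows; the type names and the QUE-set bookkeeping are illustrative assumptions, not the grammar of the paper.

```python
# Toy HPSG-style type hierarchy and QUE-feature percolation for a
# wh-in-situ question; types and features here are assumptions.

HIERARCHY = {
    "wh-int-cl": "interrogative-cl",
    "interrogative-cl": "clause",
    "clause": "sign",
    "sign": None,
}

def subtype(t, super_t):
    """Walk parent links to test whether t is a subtype of super_t."""
    while t is not None:
        if t == super_t:
            return True
        t = HIERARCHY.get(t)
    return False

def combine(head, dependents):
    """Percolate the QUE set from the daughters to the mother node."""
    que = set(head.get("QUE", set()))
    for d in dependents:
        que |= d.get("QUE", set())
    return {"TYPE": "clause", "QUE": que}

# The wh-object stays in situ; the Q-particle discharges QUE at the clause.
wh_obj = {"TYPE": "sign", "QUE": {"who"}}
verb = {"TYPE": "sign", "QUE": set()}
clause = combine(verb, [wh_obj])
if clause["QUE"]:                 # Q-particle marks interrogative scope here
    clause["TYPE"] = "wh-int-cl"
    clause["QUE"] = set()
```

The point of the sketch is only the mechanism: a nonlocal feature collected bottom-up and discharged at the scope-marking clause type.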

A Survey of Cryptocurrencies based on Blockchain

  • Kim, Junsang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.2
    • /
    • pp.67-74
    • /
    • 2019
  • Since the announcement of Bitcoin, new cryptocurrencies have been launched steadily, and blockchain technology has been evolving along with them. In particular, security-related technologies such as consensus algorithms and hash algorithms have been improved, and transaction processing speed has been drastically improved to a level that can replace a centralized system. In addition, the advent of smart contract technology and DApp platforms provides a means for cryptocurrency to decentralize social services beyond mere payment. In this paper, we first describe the technologies for implementing cryptocurrencies, and then describe the major cryptocurrencies with a focus on their technical characteristics. Finally, because advances in cryptocurrency technology are expanding its scope of use, we introduce a variety of cryptocurrencies.

The Design of a Multiplexer for Multiview Image Processing

  • Kim, Do-Kyun;Lee, Yong-Joo;Koo, Gun-Seo;Lee, Yong-Surk
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.682-685
    • /
    • 2002
  • In this paper, we define the necessary operations and functional blocks of a multiplexer for 3-D video systems and present our multiplexer design. We adopted ITU-T Recommendation H.222.0 to define the operations and functions of the multiplexer, and we explain the data structures and details of the design for multiview image processing. The data structures of the TS (Transport Stream) and PES (Packetized Elementary Stream) in ITU-T Recommendation H.222.0 do not fit our multiview image processing system, because the recommendation covers a wide scope of transmission of non-telephone signals. Therefore, we modified these TS and PES stream structures: the TS is modified into the DSS (3D System Stream), and the PES into the SPDU (DSS Program Data Unit). We constructed the multiplexer with these modified DSS and SPDU structures. The number of multiview image channels is nine, and the image class employed is MPEG-2 SD (Standard Definition) level, which requires a bandwidth of 2~6 Mbps. The required clock speed must be faster than 54 (= 6 × 9) MHz, which is the outer interface clock speed. The inside of the multiplexer requires a clock speed of only 1/8 of 54 MHz, since it operates on units of bytes. We used ALTERA Quartus II and FPGA verification for the simulation.

  • PDF
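The clock figures quoted in the abstract above follow directly from the channel count and per-channel bandwidth, and can be checked with a few lines (the assumption is a bit-serial outer interface and a byte-wide internal datapath, as the abstract states):

```python
# Back-of-the-envelope check of the multiplexer clock figures:
# nine MPEG-2 SD channels at up to 6 Mbps each.

CHANNELS = 9
MAX_RATE_MBPS = 6                            # per-channel SD bandwidth
outer_clock_mhz = CHANNELS * MAX_RATE_MBPS   # bit-serial outer interface
inner_clock_mhz = outer_clock_mhz / 8        # byte-wide internal datapath

print(outer_clock_mhz)   # 54
print(inner_clock_mhz)   # 6.75
```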

Transformation of Continuous Aggregation Join Queries over Data Streams

  • Tran, Tri Minh;Lee, Byung-Suk
    • Journal of Computing Science and Engineering
    • /
    • v.3 no.1
    • /
    • pp.27-58
    • /
    • 2009
  • Aggregation join queries are an important class of queries over data streams. These queries involve both join and aggregation operations, with window-based joins followed by an aggregation on the join output. All existing research addresses join query optimization and aggregation query optimization as separate problems. We observe that, by putting them within the same scope of query optimization, more efficient query execution plans become possible through more versatile query transformations. The enabling idea is to perform aggregation before the join so that the join execution time may be reduced. Some research has been done on such query transformations in relational databases, but none in data streams, where the incremental and continuous arrival of tuples brings new challenges. These challenges are addressed in this paper. Specifically, we first present a query processing model geared to facilitate query transformations and propose a query transformation rule specialized to work with streams. The rule is simple and yet covers all possible cases of transformation. Then we present a generic query processing algorithm that works with all alternative query execution plans made possible by the transformation, and we develop cost formulas for the query execution plans. Based on the processing algorithm, we validate the rule theoretically by proving the equivalence of the query execution plans. Finally, through extensive experiments, we validate the cost formulas and study the performance of the alternative query execution plans.
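The aggregate-before-join idea in the abstract above can be illustrated on finite windows of two streams. This is a sketch, not the paper's transformation rule: the `(key, value)` schema, the SUM aggregate, and the equality `SUM(A.v) over join = SUM_A(key) × COUNT_B(key)` are illustrative assumptions for this simple case.

```python
# Two equivalent plans for SUM(A.value) per key over a window join.
from collections import defaultdict

A = [("x", 2), ("x", 3), ("y", 5)]   # window of stream A: (key, value)
B = [("x", 1), ("x", 1), ("y", 1)]   # window of stream B

# Plan 1: join first, then aggregate SUM(A.value) per key.
plan1 = defaultdict(int)
for ka, va in A:
    for kb, _ in B:
        if ka == kb:
            plan1[ka] += va

# Plan 2: pre-aggregate each input, then join the small summaries.
# Each A-value appears once per matching B-tuple, hence the COUNT.
sum_a = defaultdict(int)
for k, v in A:
    sum_a[k] += v
cnt_b = defaultdict(int)
for k, _ in B:
    cnt_b[k] += 1
plan2 = {k: sum_a[k] * cnt_b[k] for k in sum_a if k in cnt_b}
```

Plan 2 joins one summary tuple per key instead of every raw tuple, which is the source of the cost reduction the abstract describes.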

Technical Protection Measures for Personal Information in Each Processing Phase in the Korean Public Sector

  • Shim, Min-A;Baek, Seung-Jo;Park, Tae-Hyoung;Seol, Jeong-Seon;Lim, Jong-In
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.3 no.5
    • /
    • pp.548-574
    • /
    • 2009
  • Personal information (hereinafter referred to as "PI") infringement has recently emerged as a serious social problem in Korea. PI infringement is common in both the public and private sectors: there were 182,666 cases of PI infringement across 2,624 public organizations during the last three years, and online infringement cases have increased. PI leakage causes moral and economic damage and impedes public confidence in public organizations seeking to manage e-government and maintain open and aboveboard administration; thus, it is an important matter. Most cases of PI leakage result from unsatisfactory security management, errors in home page design, and insufficient system protection management. Protection measures such as encryption and management of access logs should be reinforced urgently, but it is difficult to determine the scope of practical technical management that satisfies legislation and regulations. Substantial protective countermeasures, such as access control, certification, log management, and encryption, need to be established, and massive leakage of PI is hard to handle through security management alone. Therefore, in this study, we analyzed the conditions for technical protection measures during each processing phase of PI and classified the standard control items of protective measures suited to public circumstances. This study thus provides a standard and checklist by which staff in public organizations can protect PI through technical management activities that comply with laws and ordinances, and it can lead to more detailed and clearer instructions on how to carry out technical protection measures and evaluate their current status.

The Project and Prospects of Old Documents Information Systems in Korea (한국 고문헌 정보시스템의 구축 및 전망)

  • Kang Soon-Ae
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.31 no.4
    • /
    • pp.83-112
    • /
    • 1997
  • The purpose of this paper is to describe how to plan the best information system for Korean old books. It analyzes: i) the range of definitions of old books, ii) their characteristics and the current state of processing old documents, iii) the scope of automation and the building up of the library institution, iv) the construction of Korean old-book information systems, v) case studies, and vi) the evaluation and vision of such systems. The old-document information system has been organized on the basis of a library network with the National Central Library as leader; the implemented system has subsystems such as a cataloging system, an annotation system, a full-text or image-based system, and a retrieval system. As case studies, two examples are presented that were built at the National Central Library and Sung Kyun Kwan University. Finally, evaluation criteria and a vision are provided for libraries that design old-document information systems.

  • PDF

Complex Field Network Coding with MPSK Modulation for High Throughput in UAV Networks

  • Mingfei Zhao;Rui Xue
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2281-2297
    • /
    • 2024
  • Employing multiple drones as a swarm to complete missions can sharply improve working efficiency and expand the scope of investigation. Remote UAV swarms utilize satellites as relays to forward investigation information. The increasing amount of data demands higher transmission rates, and complex field network coding (CFNC) is deemed an effective solution for data return. CFNC applied to UAV swarms enhances transmission efficiency by occupying only two time slots, fewer than other network coding schemes. However, conventional CFNC applied to UAVs is combined with a constant coding and modulation scheme, which wastes spectrum resources when channel conditions are good. In order to avoid wasting the power resources of the relay satellite and to further improve spectral efficiency, a CFNC transmission scheme with MPSK modulation is proposed in this paper. In the proposed scheme, the satellite relay no longer directly forwards information but transmits it after processing according to the current channel state. The proposed transmission scheme not only maintains the throughput advantage of CFNC but also enhances spectral efficiency, which yields higher throughput performance. The symbol error probability (SEP) and throughput results, corroborated by Monte Carlo simulation, show that the proposed transmission scheme improves spectral efficiency several-fold compared to conventional CFNC schemes. In addition, the proposed transmission scheme enhances throughput performance for different topology structures while keeping the SEP below a certain value.
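The core CFNC mechanism the abstract relies on, a relay forwarding one complex linear combination of the source symbols, can be sketched in a noiseless two-source setting. The coding coefficients, the QPSK order, and the assumption that the destination also hears source 1 directly are illustrative, not the paper's system model.

```python
# Noiseless sketch of complex field network coding (CFNC) for two
# sources: the relay sends a1*s1 + a2*s2 in a single slot, and the
# destination solves a small linear system to recover both symbols.
import cmath

def mpsk(index, M=4):
    """Map an integer 0..M-1 to an M-PSK constellation point."""
    return cmath.exp(2j * cmath.pi * index / M)

s1, s2 = mpsk(1), mpsk(3)      # QPSK symbols from the two UAVs
a1, a2 = 1 + 0j, 0 + 1j        # complex-field coding coefficients

y_direct = s1                  # destination hears source 1 directly
y_relay = a1 * s1 + a2 * s2    # relay forwards the combination

# Back-substitute to recover both symbols.
s1_hat = y_direct
s2_hat = (y_relay - a1 * s1_hat) / a2
```

Because both symbols travel in two slots total (sources, then relay), the scheme keeps the throughput advantage the abstract attributes to CFNC.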

Design of cache mechanism in distributed directory environment (분산 디렉토리 환경 하에서 효율적인 캐시 메카니즘 설계)

  • 이강우;이재호;임해철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.2
    • /
    • pp.205-214
    • /
    • 1997
  • In this paper, we suggest a cache mechanism to improve the speed of query processing in a distributed directory environment. For this, requests and results concerning objects at remote sites are stored in the cache of the local site. The cache mechanism is developed through six phases: 1) information stored in the distributed directory system is classified as application data, system data, and meta data; 2) a cache system architecture is designed according to the classified information; 3) cache schemas are designed for each kind of cached information; 4) Least-TTL replacement algorithms, which use weighted values of geographical information and access frequency, are developed for the data caches (application cache and system cache); 5) operational algorithms are developed for the meta-data cache, which holds a meta-data tree; this tree is based on the information of past queries and improves the speed of query processing by reducing the scope of the search space; 6) finally, performance evaluations are carried out by comparing the proposed cache mechanism with other mechanisms.

  • PDF
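A Least-TTL policy of the kind described in phase 4 above can be sketched as follows. The exact weighting is an assumption made for illustration: here an entry's effective TTL grows with its access frequency and with its distance from the remote site (far-away objects are costlier to refetch), and the entry with the smallest score is evicted.

```python
# Sketch of a Least-TTL cache: evict the entry with the smallest
# TTL score, weighted by access frequency and geographical distance.
# The score formula is an illustrative assumption, not the paper's.

class LeastTTLCache:
    def __init__(self, capacity, base_ttl=100.0):
        self.capacity = capacity
        self.base_ttl = base_ttl
        self.entries = {}   # key -> {"value", "freq", "distance"}

    def score(self, e):
        # Frequently used, far-away objects deserve a longer TTL.
        return self.base_ttl * e["freq"] * e["distance"]

    def put(self, key, value, distance):
        if key in self.entries:
            self.entries[key]["freq"] += 1
            return
        if len(self.entries) >= self.capacity:
            victim = min(self.entries,
                         key=lambda k: self.score(self.entries[k]))
            del self.entries[victim]
        self.entries[key] = {"value": value, "freq": 1,
                             "distance": distance}

    def get(self, key):
        e = self.entries.get(key)
        if e is None:
            return None
        e["freq"] += 1
        return e["value"]
```

For example, with capacity 2, a nearby rarely-used entry is evicted before a distant frequently-used one.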

Implementation of a C++ IDL Compiler (C++ IDL 컴파일러 구현)

  • Park, Chan-Mo;Lee, Joon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.5 no.5
    • /
    • pp.970-976
    • /
    • 2001
  • In this paper, the IDL CFE (compiler front end) provided by SunSoft is used to take IDL definitions as input and parse them, and omniORB3 is introduced to support the functionality of the ORB. Sun's CFE produces an AST after parsing the input; the nodes of the AST are instances of classes derived from the CFE classes. As the compiler back end visits the nodes of the AST using the iterator class UTL_ScopeActiveIterator, it dumps the output code. During processing, two files are generated: the code-generating routines are invoked by BE_produce.cc, and code is produced while visiting the root of the AST, idl_global->root(). The dump* functions, which dump the code, are called according to the type of each node. In this paper, the C++ mapping of IDL definitions is tested, and the results are the same as those of omniidl, which is provided by omniORB3. The resulting code behaves correctly on omniORB3. In the future, we are interested in optimizing the performance of marshalling code generated via the IDL compiler.

  • PDF
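The back-end pattern the abstract describes, visiting AST nodes and calling a type-specific dump routine for each, can be sketched in miniature. The node classes and the generated C++ stub below are simplified assumptions; the real compiler walks SunSoft CFE node types with UTL_ScopeActiveIterator.

```python
# Toy IDL-compiler back end: walk a tiny AST and dump C++ stub code,
# dispatching on node type as the real dump* routines do.

class Interface:
    def __init__(self, name, ops):
        self.name, self.ops = name, ops

class Operation:
    def __init__(self, ret, name):
        self.ret, self.name = ret, name

def dump_operation(op):
    # One pure-virtual method per IDL operation.
    return f"  virtual {op.ret} {op.name}() = 0;"

def dump_interface(node):
    lines = [f"class {node.name} {{", "public:"]
    lines += [dump_operation(op) for op in node.ops]   # visit children
    lines.append("};")
    return "\n".join(lines)

# e.g. the IDL "interface Echo { void ping(); };"
ast_root = Interface("Echo", [Operation("void", "ping")])
cpp_stub = dump_interface(ast_root)
```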

Comparative Study of Exposure Assessment of Dust in Building Materials Enterprises Using ART and Monte Carlo

  • Wei Jiang;Zonghao Wu;Mengqi Zhang;Haoguang Zhang
    • Safety and Health at Work
    • /
    • v.15 no.1
    • /
    • pp.33-41
    • /
    • 2024
  • Background: Dust generated during processing in building materials enterprises can pose a serious health risk. This study aimed to compare and analyze the results of ART and the Monte Carlo model for dust exposure assessment in building materials enterprises, in order to derive the scope of application of the two models. Methods: First, ART and the Monte Carlo model were each used to assess dust exposure in 15 building materials enterprises. Then, a comparative analysis of the exposure assessment results was conducted. Finally, the model factors were analyzed using correlation analysis, and the scope of application of the models was determined. Results: The results show that ART is mainly influenced by four factors, namely localized controls, segregation, dispersion, and surface contamination and fugitive emissions, and applies to scenarios in which the workplace information of the building materials enterprises is specific and the average dust concentration is greater than or equal to 1.5 mg/m3. The Monte Carlo model is mainly influenced by the dust concentration in the workplace and is suitable for scenarios in which the dust concentration is relatively uniform and the average dust concentration is less than or equal to 6 mg/m3. Conclusion: ART is most accurate when workplace information is specific and the average dust concentration is ≥ 1.5 mg/m3, whereas the Monte Carlo model is best when the dust concentration is homogeneous and the average dust concentration is ≤ 6 mg/m3.
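A Monte Carlo exposure assessment of the kind compared in the abstract above can be sketched in a few lines: sample shift concentrations from an assumed distribution and estimate the probability of exceeding a limit. The lognormal distribution, its parameters, and the 1.5 mg/m3 limit are illustrative assumptions, not the paper's fitted values.

```python
# Monte Carlo sketch: estimate the probability that a sampled dust
# concentration exceeds an occupational exposure limit (OEL).
import math
import random

random.seed(42)

def simulate_exceedance(gm, gsd, oel, n=100_000):
    """Fraction of n sampled shift concentrations above the OEL,
    drawing from a lognormal with geometric mean gm and geometric
    standard deviation gsd (both in mg/m3)."""
    mu, sigma = math.log(gm), math.log(gsd)
    over = sum(1 for _ in range(n)
               if random.lognormvariate(mu, sigma) > oel)
    return over / n

p_exceed = simulate_exceedance(gm=1.0, gsd=2.0, oel=1.5)
```

In practice the distribution parameters would be fitted to measured workplace concentrations before the simulation is run.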