• Title/Summary/Keyword: vendors

Search results: 471

Run-time Memory Optimization Algorithm for the DDMB Architecture (DDMB 구조에서의 런타임 메모리 최적화 알고리즘)

  • Cho, Jeong-Hun;Paek, Yun-Heung;Kwon, Soo-Hyun
    • The KIPS Transactions:PartA / v.13A no.5 s.102 / pp.413-420 / 2006
  • Most vendors of digital signal processors (DSPs) support a Harvard architecture, which has two or more memory buses, one for program and one or more for data, allowing the processor to access multiple words of data from memory in a single instruction cycle. We already addressed how to efficiently assign data to multiple memory banks in our previous work. This paper reports on our recent attempt to optimize run-time memory. The run-time environment for dual data memory banks (DDMBs) requires two run-time stacks to control the activation records located in the two memory banks for calling procedures. However, the two activation records of a procedure, one per memory bank, may have different sizes. As a consequence, the dual run-time stacks can become unbalanced whenever a procedure is called. This imbalance between the two memory banks means that the usage of one memory bank can exceed the on-chip memory area even though there is free area in the other memory bank. In this paper, we attempt to balance the dual run-time stacks to improve the utilization of on-chip memory. The experimental results reveal that although our algorithm is relatively simple, it still utilizes run-time memory efficiently, enabling our compiler to run extremely fast while minimizing the usage of run-time memory in the target code.
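
To make the imbalance described above concrete, here is a minimal Python sketch under invented numbers; the frame sizes, bank capacity, and the "give the larger half to the shorter stack" heuristic are all illustrative assumptions, not the algorithm from the paper.

```python
# Illustrative sketch only (not the paper's algorithm): two run-time stacks,
# one per data memory bank, where each procedure call pushes a frame whose
# two halves may differ in size, so the stacks can grow unevenly.

BANK_CAPACITY = 200          # hypothetical on-chip words available per bank

def simulate(call_frames, balance=False):
    """call_frames: list of (bank_x_words, bank_y_words) pushed per call."""
    top_x = top_y = 0
    for fx, fy in call_frames:
        if balance:
            # Naive stand-in heuristic: give the larger half of each frame
            # to whichever stack is currently shorter.
            if top_x > top_y:
                fx, fy = min(fx, fy), max(fx, fy)
            else:
                fx, fy = max(fx, fy), min(fx, fy)
        top_x += fx
        top_y += fy
    overflow = max(top_x, top_y) > BANK_CAPACITY
    return top_x, top_y, overflow

calls = [(40, 10), (50, 5), (60, 20), (70, 15)]      # hypothetical frame sizes
print(simulate(calls))                # (220, 50, True)  -> bank X overflows
print(simulate(calls, balance=True))  # (120, 150, False) -> both banks fit
```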

Odysseus/IR: a High-Performance ORDBMS Tightly-Coupled with IR Features (오디세우스/IR: 정보 검색 기능과 밀결합된 고성능 객체 관계형 DBMS)

  • Whang Kyu-Young;Lee Min-Jae;Lee Jae-Gil;Kim Min-Soo;Han Wook-Shin
    • Journal of KIISE:Computing Practices and Letters / v.11 no.3 / pp.209-215 / 2005
  • Conventional ORDBMS vendors provide extension mechanisms for adding user-defined types and functions to their own DBMSs. Here, the extension mechanisms are implemented using a high-level interface. We call this technique loose-coupling. The advantage of loose-coupling is that it is easy to implement. However, it is not preferable for implementing new data types and operations in large databases when high performance is required. In this paper, we propose to use the notion of tight-coupling to satisfy this requirement. In tight-coupling, new data types and operations are integrated into the core of the DBMS engine. Thus, they are supported in a consistent manner with high performance. This tight-coupling architecture is being used to incorporate information retrieval (IR) features and spatial database features into the Odysseus/IR ORDBMS that has been under development at KAIST/AITrc. In this paper, we introduce Odysseus/IR and explain its tightly-coupled IR features (U.S. patented). We then demonstrate a web search engine that is capable of managing 20 million web pages in a non-parallel configuration using Odysseus/IR.
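
For readers unfamiliar with the loose-coupling being contrasted above, the sketch below uses Python's sqlite3 module (not Odysseus) purely as an analogy: a user-defined operation is registered through a high-level extension interface, so the engine treats it as an opaque callback rather than an integrated, optimizable operator.

```python
import sqlite3

# Loose-coupling analogy: a "keyword match" operation added through the DBMS's
# high-level extension interface rather than built into the engine core.
def match_count(text, keyword):
    return text.lower().split().count(keyword.lower())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs(id INTEGER, body TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [(1, "vendors ship loosely coupled extensions"),
                  (2, "tight coupling integrates IR into the engine")])

# Registration via the high-level interface; the engine cannot optimize this
# callback the way a built-in (tightly-coupled) IR operator could.
conn.create_function("match_count", 2, match_count)

for row in conn.execute(
        "SELECT id FROM docs WHERE match_count(body, 'coupling') > 0"):
    print(row)
```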

Data Processing Architecture for Cloud and Big Data Services in Terms of Cost Saving (비용절감 측면에서 클라우드, 빅데이터 서비스를 위한 대용량 데이터 처리 아키텍쳐)

  • Lee, Byoung-Yup;Park, Jae-Yeol;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.15 no.5 / pp.570-581 / 2015
  • In recent years, many institutions have predicted that cloud services and big data will be popular IT trends in the near future, and a number of leading IT vendors are focusing on practical solutions and services for cloud and big data. Cloud has the advantage of unrestricted resource selection for business models built on a variety of internet-based technologies, which is why provisioning and virtualization technologies for active resource expansion have been attracting attention as leading technologies. Big data has taken data prediction models to another level by providing the basis for analyzing unstructured data that could not be analyzed in the past. Since what cloud services and big data have in common is services and analysis based on massive amounts of data, efficient operation and design for mass data have become critical issues from the early stages of development. Thus, in this paper, we establish a data processing architecture based on the technological requirements of mass data for cloud and big data services. In particular, we introduce the requirements that a distributed file system must meet to be used in cloud computing, efficient compression technology requirements for mass data in big data and cloud computing in terms of cost saving, and the technological requirements of open-source systems available in cloud computing, such as the Hadoop ecosystem's distributed file system and in-memory databases.
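
As a rough illustration of the cost-saving trade-off between compression ratio and CPU time alluded to above, the sketch below compares three Python standard-library codecs on a synthetic log-like payload; the payload and the codec choice are illustrative assumptions, not the paper's benchmark or requirements.

```python
import bz2, gzip, lzma, time

# Synthetic, repetitive "log-like" payload purely for illustration; real
# cloud/big-data workloads and the paper's cost model are not reproduced here.
payload = b"2015-05-01 INFO request served from cache node-07\n" * 50_000

for name, codec in (("gzip", gzip), ("bz2", bz2), ("lzma", lzma)):
    start = time.perf_counter()
    packed = codec.compress(payload)
    elapsed = time.perf_counter() - start
    ratio = len(payload) / len(packed)
    print(f"{name:5s} ratio={ratio:6.1f}x  time={elapsed:.3f}s")
```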

Probe Vehicle Data Collecting Intervals for Completeness of Link-based Space Mean Speed Estimation (링크 공간평균속도 신뢰성 확보를 위한 프로브 차량 데이터 적정 수집주기 산정 연구)

  • Oh, Chang-hwan;Won, Minsu;Song, Tai-jin
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.19 no.5 / pp.70-81 / 2020
  • Point-by-point data, abundantly collected by vehicles with embedded GPS (Global Positioning System), generate useful information. These data facilitate decisions by transportation jurisdictions, and private vendors can monitor and investigate micro-scale driver behavior, traffic flow, and roadway movements. The information is applied to develop app-based route guidance and business models. Among these data, speed plays a vital role in developing key parameters and providing agent-based information and services. Nevertheless, link speed values require different levels of physical storage and fidelity depending on both the collection and the reporting interval. Given these circumstances, this study aimed to establish an appropriate collection interval for efficiently utilizing space mean speed information from vehicles with embedded GPS. We compared probe-vehicle data with image-based vehicle data to quantify the percentage error (PE). According to the results, the PE of the probe-vehicle data remained within the 95% confidence level up to an 8-second interval, which was chosen as the appropriate collection interval for probe-vehicle data. It is our hope that the developed guidelines will help C-ITS and autonomous driving service providers use more reliable space mean speed data to develop better C-ITS and autonomous driving services.
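
A minimal sketch of the kind of comparison described above, assuming space mean speed is approximated by the harmonic mean of sampled spot speeds; the speed values are invented, and the paper's exact estimator, reference data, and 8-second result are not reproduced here.

```python
from statistics import harmonic_mean

# Hypothetical 1-second spot speeds (km/h) for one vehicle traversing a link;
# purely illustrative, not data from the study.
spot_speeds = [52, 50, 47, 45, 44, 46, 49, 51, 53, 54, 52, 50,
               48, 46, 45, 47, 50, 52, 53, 51, 49, 48, 47, 49]

def space_mean_speed(speeds, interval_s):
    """Harmonic mean of the speeds kept when sampling every interval_s seconds
    (a common space-mean-speed approximation; the paper's estimator may differ)."""
    return harmonic_mean(speeds[::interval_s])

reference = space_mean_speed(spot_speeds, 1)   # densest available sampling
for interval in (2, 4, 8, 16):
    estimate = space_mean_speed(spot_speeds, interval)
    pe = abs(estimate - reference) / reference * 100
    print(f"interval={interval:2d}s  SMS={estimate:5.1f} km/h  PE={pe:4.2f}%")
```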

Evaluation of Compaction and Thermal Characteristics of Recycled Aggregates for Backfilling Power Transmission Pipeline (송배전관로 되메움재로 활용하기 위한 국내 순환골재의 다짐 및 열적 특성 평가)

  • Wi, Ji-Hae;Hong, Sung-Yun;Lee, Dae-Soo;Park, Sang-Woo;Choi, Hang-Seok
    • Journal of the Korean Geotechnical Society / v.27 no.7 / pp.17-33 / 2011
  • Recently, the utilization of recycled aggregates for backfilling a power transmission pipeline trench has been considered because of the push for eco-friendly construction and a lack of natural aggregate resources. It is important to identify the physical and thermal properties of domestic recycled aggregates that can be used as a backfill material. This paper evaluated the thermal properties of concrete-based recycled aggregates with various particle size distributions. The thermal properties of the recycled aggregates and river sand provided by local vendors were measured using the transient hot wire method and the transient needle probe method after performing the standard compaction test. The needle probe method considerably overestimated the thermal resistivity of the recycled aggregates, especially on the dry side of the optimum water content, because of the disturbance experienced while the needle probe is inserted into the specimen. Similar to silica sand, the thermal resistivity of the recycled aggregates decreased as the water content increased at a given dry density. This paper also evaluated some of the existing prediction models for the thermal resistivity of recycled aggregates against the experimental data and developed a new prediction model for recycled aggregates. This study shows that recycled aggregates can be a promising backfill material substituting for natural aggregates when backfilling the power transmission pipeline trench.

Simple Credit Card Payment Protocols Based on SSL and Passwords (SSL과 패스워드 기반의 신용카드 간편결제 프로토콜)

  • Kim, Seon Beom;Kim, Min Gyu;Park, Jong Hwan
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.3 / pp.563-572 / 2016
  • Recently, many credit card payment protocols have been proposed in Korea. Common features of the proposed protocols include using passwords for user authentication instead of official certificates, and no longer requiring users to download additional security modules via ActiveX onto their devices. In this paper, we suggest two new credit card payment protocols that use both SSL (Secure Sockets Layer), as a standardized secure transaction protocol, and password authentication to perform online shopping and payment. The first is for the case where the online shopping mall is different from the PG (Payment Gateway) and can be compared to PayPal-style payment methods; the second is for the case where the online shopping mall is the same as the PG and can thus be compared to Amazon-like methods. The two proposed protocols do not require users to perform any pre-registration separate from the underlying shopping process; instead, users can complete both shopping and payment in a single, convenient process. Also, users are asked to input a distinct payment password, which increases the level of security of the payment protocols. We believe the two proposed protocols can help readers better understand the recent payment protocols suggested by various vendors and analyze their security.
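
The sketch below is not the paper's protocol; it only illustrates, under invented names and parameters, the general idea of authorizing an order with a password-derived key inside an already-established SSL/TLS session.

```python
import hashlib, hmac, os

# Illustrative only: NOT the protocols proposed in the paper, just a sketch of
# password-based authorization of an order inside an SSL/TLS session.

def derive_key(payment_password: str, salt: bytes) -> bytes:
    # Password-derived key; the PBKDF2 parameters here are arbitrary.
    return hashlib.pbkdf2_hmac("sha256", payment_password.encode(), salt, 100_000)

def authorize(order: str, payment_password: str, salt: bytes) -> bytes:
    key = derive_key(payment_password, salt)
    return hmac.new(key, order.encode(), hashlib.sha256).digest()

# Client side (over TLS): the user enters a distinct payment password at checkout.
salt = os.urandom(16)                        # assumed shared at enrolment
order = "order=1234&amount=35000&merchant=shop.example"
tag = authorize(order, "correct horse battery", salt)

# PG side: recomputes the tag from its own record of the password-derived key.
expected = authorize(order, "correct horse battery", salt)
print("payment accepted" if hmac.compare_digest(tag, expected) else "rejected")
```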

Analysis of HTML5 to Reduce Web Page Loading Time (웹 페이지 로딩시간 감축을 위한 HTML 5 분석)

  • Yun, Jun-soo;Park, Jin-tae;Hwang, Hyun-seo;Phyo, Gyung-soo;Moon, Il-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.775-778 / 2015
  • The HTML5-based Web platform has been established as a next-generation national standard, and Web service providers are competing to develop technologies that support HTML5-based apps on smart media devices and smart TVs. Under the W3C, the international Web standards development organization, various Web browser vendors such as Microsoft, Apple, Mozilla, Google, and Opera are participating in the standardization. As the importance of HTML5 grows, HTML5-based Web pages need fast load times even when they contain a large amount of information. Therefore, as an initial study toward reducing Web page loading time, this paper configures the same Web page for each browser and measures the initial loading time. It also removes HTML5 tags and CSS properties one by one to analyze which tags and attributes account for a large proportion of the initial load time. Based on the results, we aim to provide a process that can reduce Web page loading time.
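
A sketch of one way such per-variant load-time measurements could be scripted, assuming Selenium with a local ChromeDriver and the browser's Navigation Timing API; the page variants and URLs are placeholders, and the paper's actual measurement setup is not described here.

```python
# Assumes Selenium and a local ChromeDriver are installed; URLs are placeholders.
from selenium import webdriver

VARIANTS = {
    "baseline":      "http://localhost:8000/page.html",
    "no-video-tag":  "http://localhost:8000/page_no_video.html",
    "no-box-shadow": "http://localhost:8000/page_no_box_shadow.html",
}

driver = webdriver.Chrome()
try:
    for name, url in VARIANTS.items():
        driver.get(url)
        # Navigation Timing API: ms from navigation start to load event end.
        load_ms = driver.execute_script(
            "return performance.timing.loadEventEnd"
            " - performance.timing.navigationStart;")
        print(f"{name:13s} {load_ms} ms")
finally:
    driver.quit()
```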

A Study on OLE/COM-based GIS Data Provider Component Development Toward Application System Development (응용시스템 구축을 위한 OLE/COM 기반의 GIS 데이터 제공자 컴포넌트 시스템에 관한 연구)

  • 김민수;김광수;오병우;이기원
    • Spatial Information Research / v.7 no.2 / pp.175-190 / 1999
  • Recently, as GIS technology has rapidly improved and stabilized, there is a growing need to reuse pre-developed, proven GIS technology. GIS standardization based on components and open interfaces has become a way to achieve this reusability. This GIS standardization currently focuses on building the GIS Data Infrastructure that is being deployed globally. In particular, the OpenGIS Consortium, which is mainly made up of leading international GIS vendors, has been announcing GIS abstract specifications and implementation specifications. This study focuses on how to design and implement OLE/COM-based data provider components on top of various DBMSs or file systems, how these data provider components can be used for an enterprise UIS (Urban Information System), and how GIS data in various formats can be shared in one system. Some practical problems encountered during the implementation of the data provider components are also listed, along with their solutions. Furthermore, UML (Unified Modeling Language) design and analysis were carried out as part of the data provider component development task, and this UML methodology can serve as a standardized model for newly developed data provider components.

A Taxonomy of Workflow Architectures

  • Kim, Kwang-Hoon;Paik, Su-Ki
    • Proceedings of the Korea Database Society Conference / 1998.09a / pp.525-543 / 1998
  • This paper proposes a conceptual taxonomy of architectures for workflow management systems. The systematic classification work is based on a framework for workflow architectures. The framework, consisting of generic-level, conceptual-level, and implementation-level architectures, provides common architectural principles for designing a workflow management system. We define the taxonomy by considering the possibilities for centralization or distribution of data, control, and execution. That is, we take into account three criteria. How are the major components of a workflow model and system, like activities, roles, actors, and workcases, concretized in the workflow architecture? Which of the components are represented as software modules of the workflow architecture? And how are they configured and operated in the architecture? The workflow components might be embodied as active (process or thread) modules or as passive (data) modules in the software architecture of a workflow management system. A single component or a combination of components might become a software module in the software architecture. Finally, they might be centralized or distributed; the distribution of the components is broken into three kinds: vertically, horizontally, and fully distributed. Through the combination of these aspects, we can conceptually generate about 64 software architectures for a workflow management system. That is, it should be possible to comprehend and characterize all kinds of software architectures for workflow management systems, including current existing systems as well as future systems. We believe that this taxonomy is a significant contribution because it adds clarity, completeness, and "global perspective" to workflow architectural discussions. The vocabulary suggested here includes workflow levels and aspects, allowing very different architectures to be discussed, compared, and contrasted. Added clarity is obtained because similar architectures from different vendors that used different terminology and techniques can now be seen to be identical at a higher level, and much of the complexity can be removed by thinking of workflow systems in terms of these levels and aspects. Therefore, the taxonomy can be used to categorize existing workflow architectures and to suggest a plethora of new workflow architectures. Finally, it can be used for sorting out gems and stones amongst the architectures possibly generated. Thus, it might serve as a guideline not only for characterizing existing workflow management systems, but also for addressing the long-term and short-term architectural research issues, such as dynamic changes in workflow, transactional workflow, dynamically evolving workflow, large-scale workflow, etc., that have been proposed in the literature.
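
One reading of the "about 64" figure that is consistent with the abstract: data, control, and execution can each be centralized or distributed vertically, horizontally, or fully, giving 4^3 = 64 combinations. The enumeration below is that reading, not a table taken from the paper.

```python
from itertools import product

# Each of the three aspects can be centralized or distributed in one of three
# ways, per the abstract's criteria; 4 ** 3 = 64 candidate architectures.
ASPECTS = ("data", "control", "execution")
PLACEMENTS = ("centralized", "vertically distributed",
              "horizontally distributed", "fully distributed")

architectures = list(product(PLACEMENTS, repeat=len(ASPECTS)))
print(len(architectures))                     # 64

# e.g. one candidate architecture in the taxonomy:
print(dict(zip(ASPECTS, architectures[37])))
```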

A Taxonomy of Workflow Architectures

  • Kim, Kwang-Hoon;Paik, Su-Ki
    • The Journal of Information Technology and Database / v.5 no.1 / pp.97-108 / 1998
  • This paper proposes a conceptual taxonomy of architectures for workflow management systems. The systematic classification work is based on a framework for workflow architectures. The framework, consisting of generic-level, conceptual-level, and implementation-level architectures, provides common architectural principles for designing a workflow management system. We define the taxonomy by considering the possibilities for centralization or distribution of data, control, and execution. That is, we take into account three criteria. How are the major components of a workflow model and system, like activities, roles, actors, and workcases, concretized in the workflow architecture? Which of the components are represented as software modules of the workflow architecture? And how are they configured and operated in the architecture? The workflow components might be embodied as active (process or thread) modules or as passive (data) modules in the software architecture of a workflow management system. A single component or a combination of components might become a software module in the software architecture. Finally, they might be centralized or distributed; the distribution of the components is broken into three kinds: vertically, horizontally, and fully distributed. Through the combination of these aspects, we can conceptually generate about 64 software architectures for a workflow management system. That is, it should be possible to comprehend and characterize all kinds of software architectures for workflow management systems, including current existing systems as well as future systems. We believe that this taxonomy is a significant contribution because it adds clarity, completeness, and global perspective to workflow architectural discussions. The vocabulary suggested here includes workflow levels and aspects, allowing very different architectures to be discussed, compared, and contrasted. Added clarity is obtained because similar architectures from different vendors that used different terminology and techniques can now be seen to be identical at a higher level, and much of the complexity can be removed by thinking of workflow systems in terms of these levels and aspects. Therefore, the taxonomy can be used to categorize existing workflow architectures and to suggest a plethora of new workflow architectures. Finally, it can be used for sorting out gems and stones amongst the architectures possibly generated. Thus, it might serve as a guideline not only for characterizing existing workflow management systems, but also for addressing the long-term and short-term architectural research issues, such as dynamic changes in workflow, transactional workflow, dynamically evolving workflow, large-scale workflow, etc., that have been proposed in the literature.
