• Title/Summary/Keyword: Architecture (아키텍쳐)

Search results: 684

TV Anytime and MPEG-21 DIA based Ubiquitous Consumption of TV Contents in Digital Home Environment (TV Anytime 및 MPEG-21 DIA 기반 콘텐츠 이동성을 이용한 디지털 홈 환경에서의 유비쿼터스 TV 콘텐츠 소비)

  • Kim Munjo;Yang Chanseok;Lim Jeongyeon;Kim Munchurl;Park Sungjin;Kim Kwanlae;Oh Yunje
    • Journal of Broadcast Engineering, v.10 no.4 s.29, pp.557-575, 2005
  • Much research on core technologies has been done to enable ubiquitous video services over various kinds of user information terminals, anytime and anywhere, in the way users want to consume them. In this paper, we design a prototype system architecture for ubiquitous, preference-based consumption of TV program content via various kinds of intelligent information terminals in a digital home environment, and present implementation and test results for the prototype. For the system design, we use the TV Anytime specification for the consumption of TV program content based on user preferences, together with the MPEG-21 DIA (Digital Item Adaptation) tools, which provide representation schema formats for describing context information such as the user environment, terminal characteristics, and user characteristics, enabling universal access to and consumption of preferred TV program content. The proposed ubiquitous content mobility prototype is designed so that a single user or multiple users can seamlessly continue consuming the TV program content they watch together across various kinds of user terminals. The prototype system in the digital home environment consists of a home server, a display TV terminal, and an intelligent information terminal. We test the prototype with 42 TV programs in eight genres from four TV channels.
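
To make the role of the MPEG-21 DIA context descriptions and the TV Anytime-style preference filtering more concrete, the sketch below models a simplified subset of that information in Python. The class and field names are illustrative assumptions, not the normative DIA schema elements or the paper's actual implementation.

```python
from dataclasses import dataclass

# Simplified stand-ins for DIA usage-environment descriptions: what the
# terminal can display and what the user likes to watch. Field names are
# hypothetical, chosen only to illustrate the kind of context involved.
@dataclass
class TerminalCapabilities:
    display_width: int       # pixels
    display_height: int      # pixels
    max_bitrate_kbps: int

@dataclass
class UserPreferences:
    preferred_genres: list[str]
    preferred_language: str = "ko"

def filter_by_preference(prefs: UserPreferences, catalogue: list[dict]) -> list[dict]:
    """TV Anytime-style selection: keep programs in the user's preferred genres."""
    return [p for p in catalogue if p["genre"] in prefs.preferred_genres]

def pick_resolution(terminal: TerminalCapabilities,
                    source_resolutions: list[tuple[int, int]]) -> tuple[int, int]:
    """Choose the largest source resolution the terminal can display, so a
    program started on the TV can continue on a smaller device."""
    fitting = [(w, h) for (w, h) in source_resolutions
               if w <= terminal.display_width and h <= terminal.display_height]
    return max(fitting) if fitting else min(source_resolutions)

if __name__ == "__main__":
    pda = TerminalCapabilities(display_width=640, display_height=480, max_bitrate_kbps=768)
    prefs = UserPreferences(preferred_genres=["news", "drama"])
    catalogue = [{"title": "Evening News", "genre": "news"},
                 {"title": "Music Show", "genre": "music"}]
    print(filter_by_preference(prefs, catalogue))                         # keeps only the news program
    print(pick_resolution(pda, [(1920, 1080), (640, 480), (320, 240)]))   # -> (640, 480)
```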

Odysseus/Parallel-OOSQL: A Parallel Search Engine using the Odysseus DBMS Tightly-Coupled with IR Capability (오디세우스/Parallel-OOSQL: 오디세우스 정보검색용 밀결합 DBMS를 사용한 병렬 정보 검색 엔진)

  • Ryu, Jae-Joon;Whang, Kyu-Young;Lee, Jae-Gil;Kwon, Hyuk-Yoon;Kim, Yi-Reun;Heo, Jun-Suk;Lee, Ki-Hoon
    • Journal of KIISE: Computing Practices and Letters, v.14 no.4, pp.412-429, 2008
  • As the amount of electronic documents increases rapidly with the growth of the Internet, a parallel search engine capable of handling a large number of documents is becoming ever more important. To implement a parallel search engine, we need to partition the inverted index and search the partitions in parallel. There are two methods of partitioning the inverted index: 1) document-identifier based partitioning and 2) keyword-identifier based partitioning. Each method alone, however, has drawbacks. The former is convenient for inserting documents and has high throughput, but performs poorly for top-k query processing. The latter performs well for top-k query processing, but is inconvenient for inserting documents and has low throughput. In this paper, we propose a hybrid partitioning method that compensates for the drawbacks of each method. We design and implement a parallel search engine that supports the hybrid partitioning method using the Odysseus DBMS tightly coupled with information retrieval capability. We first introduce the architecture of the parallel search engine, Odysseus/Parallel-OOSQL, and then show the effectiveness of the proposed system through systematic experiments. The experimental results show that the query processing time of the document-identifier based partitioning method is approximately inversely proportional to the number of blocks in the partition of the inverted index, and that the keyword-identifier based partitioning method performs well in top-k query processing. The proposed parallel search engine can be optimized for performance by customizing the inverted-index partitioning method according to the application environment. The Odysseus/Parallel-OOSQL search engine is capable of indexing, storing, and querying 100 million web documents per node, or tens of billions of web documents for the entire system.
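
To clarify the trade-off between the two partitioning schemes described above, here is a minimal Python sketch. The node count and hash routing are illustrative assumptions; the actual Odysseus/Parallel-OOSQL engine implements this inside a tightly coupled DBMS, and its hybrid scheme mixes the two routings rather than using either pure case.

```python
import zlib

NUM_NODES = 4  # illustrative cluster size

def doc_id_partition(doc_id: int) -> int:
    """Document-identifier based partitioning: all postings of one document go
    to one node. Inserts touch a single node (high throughput), but a top-k
    query must be broadcast to every node and the partial results merged."""
    return doc_id % NUM_NODES

def keyword_id_partition(keyword: str) -> int:
    """Keyword-identifier based partitioning: the whole posting list of a
    keyword lives on one node. A top-k query on that keyword hits one node,
    but inserting one document scatters writes across many nodes."""
    return zlib.crc32(keyword.encode("utf-8")) % NUM_NODES

def insert_document(index: dict, doc_id: int, terms: list[str], scheme: str) -> None:
    """Route each (keyword -> doc_id) posting to its partition under the chosen scheme."""
    for term in terms:
        node = doc_id_partition(doc_id) if scheme == "doc" else keyword_id_partition(term)
        index.setdefault(node, {}).setdefault(term, []).append(doc_id)

if __name__ == "__main__":
    index = {}
    insert_document(index, 17, ["parallel", "search", "engine"], "doc")      # one node touched
    insert_document(index, 18, ["parallel", "inverted", "index"], "keyword")  # writes spread out
    print(index)
```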

A Construction of the C_MDR(Component_MetaData Registry) for the Environment of Exchanging the Component (컴포넌트 유통환경을 위한 컴포넌트 메타데이타 레지스트리 구축 : C_MDR)

  • Song, Chee-Yang;Yim, Sung-Bin;Baik, Doo-Kwon;Kim, Chul-Hong
    • Journal of KIISE: Computing Practices and Letters, v.7 no.6, pp.614-629, 2001
  • As the information-intensive society of the 21st century, built on the global Internet, advances, software is becoming larger and more complex, and demand for it is growing rapidly. Activating reuse by developing and exchanging standardized components has therefore become an important issue in both academia and industry. Foreign marketplaces currently provide information services on commercial components as company products, but the components serviced in each marketplace are inconsistent, insufficient, and unstandardized; that is, no component data registry based on ISO 11179 has been constructed. The national government has accordingly stepped up plans to release public components in 2001, and systems that serve as tools for sharing and exchanging data must support meta-information on standardized components. In this paper, we propose the C_MDR system: a tool for registering and managing standardized meta-information, based on ISO 11179, for commercialized common components. The purpose of the system is to systematically share and exchange data and thereby accelerate component reuse. We present a specification platform for component meta-information, define the meta-information according to this platform, and represent it in XML to enhance interoperability with other systems. We also show that a three-layered representation keeps the modeling simple and understandable. The implementation is a prototype system for component meta-information on the web, using ASP as the development language and Oracle as the RDBMS on a PC. We expect this work to standardize exchanged component metadata and to be applicable to component exchange and reuse tools.
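
As a rough illustration of what registering and exchanging component meta-information involves, the following Python sketch stores a metadata record and serializes it as XML for exchange. The attribute names are hypothetical and do not reproduce the actual C_MDR or ISO 11179 element names.

```python
import xml.etree.ElementTree as ET

def register_component(registry: dict, meta: dict) -> None:
    """Store one component's metadata record, keyed by its identifier."""
    registry[meta["identifier"]] = meta

def to_xml(meta: dict) -> str:
    """Serialize a metadata record as XML so other systems can import it."""
    root = ET.Element("Component", identifier=meta["identifier"])
    for field in ("name", "version", "vendor", "description"):
        ET.SubElement(root, field).text = str(meta.get(field, ""))
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    registry = {}
    record = {"identifier": "CMP-0001", "name": "PaymentGateway",
              "version": "1.2", "vendor": "ExampleSoft",
              "description": "Reusable payment-processing component"}
    register_component(registry, record)
    print(to_xml(registry["CMP-0001"]))
```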

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering, v.2 no.2, pp.81-90, 2013
  • During the past decade, many changes have taken place and new technologies have been attempted and continue to be developed in the computing area. The brick walls in computing, especially the power wall, have shifted the computing paradigm from hardware, including processors and system architecture, toward programming environments and application usage. The high performance computing (HPC) area in particular has experienced dramatic changes and is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result systems of tens of PetaFLOPS are now prevalent. Korea's ICT industry is well developed and the country is considered one of the leading nations in the world, but not in supercomputing. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bioinformatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, the file system, system software, and bio-applications. It is designed to exploit heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance per dollar, per area, and per watt. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture and consists of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed to be user-friendly and easy to use, based on integrated system management components such as bio-workflow management, integrated cluster management, and heterogeneous resource management. The MAHA supercomputing system was first installed in December 2011, with a theoretical performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes. The system will be upgraded to 100 TeraFLOPS in January 2013.
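
A quick back-of-the-envelope check of the figures quoted in the abstract (50 TeraFLOPS theoretical peak, 30.3 TeraFLOPS measured, 32 nodes):

```python
# Derived only from the numbers stated in the abstract above.
theoretical_tflops = 50.0
measured_tflops = 30.3
nodes = 32

efficiency = measured_tflops / theoretical_tflops   # ~0.61 -> about 61% of peak
per_node_tflops = measured_tflops / nodes            # ~0.95 TFLOPS measured per node

print(f"efficiency: {efficiency:.1%}, per node: {per_node_tflops:.2f} TFLOPS")
```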