• Title/Summary/Keyword: DB기술 (DB technology)

Search Results: 1,189

To Establish the Infrastructure of Broadcasting Education Information Based on the Digital Library (디지털도서관기술을 이용한 방송교육 정보 인프라 구축)

  • 이종문
    • Journal of Korean Library and Information Science Society
    • /
    • v.31 no.3
    • /
    • pp.71-88
    • /
    • 2000
  • This paper proposes an infrastructure for broadcasting-education information based on digital library technology, as one way to resolve the problems of conventional broadcast education: its static nature, uniformity, one-way transmission, and transience. If each broadcaster establishes a digital library to build its own broadcasting-education information infrastructure, and an 「Integrated Virtual Broadcasting Education Center」 is set up on the Internet to integrate them and provide an interface for integrated search, the goals of broadcast education can be greatly amplified. The proposed approach consists of designing a DB program based on an analysis of the informational attributes of broadcast-education programs, building a multimedia-based DB in which bibliographic information and full-text information are interlinked, and designing the service to be delivered through Internet web-service technology.

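As a rough illustration of the interlinked bibliographic/full-text DB this entry describes, the sketch below (hypothetical table and column names, SQLite for brevity; not the author's design) joins a bibliographic record for a broadcast-education program to its multimedia content so that one integrated query can return both:

```python
import sqlite3

# Minimal sketch: one table of bibliographic records for broadcast-education
# programs, one table of full-text/multimedia assets, linked by program_id.
# All table and column names are hypothetical illustrations.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE program_biblio (
    program_id  INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    broadcaster TEXT,           -- producing broadcaster
    subject     TEXT,           -- educational subject area
    aired_on    TEXT            -- broadcast date
);
CREATE TABLE program_content (
    content_id INTEGER PRIMARY KEY,
    program_id INTEGER REFERENCES program_biblio(program_id),
    media_type TEXT,            -- 'video', 'transcript', ...
    uri        TEXT             -- location of the multimedia object
);
""")
conn.execute("INSERT INTO program_biblio VALUES (1, 'Intro to Physics', 'EBS', 'science', '2000-03-02')")
conn.execute("INSERT INTO program_content VALUES (1, 1, 'video', 'http://example.org/physics.mp4')")

# Integrated search: a bibliographic hit joined to its full-text/multimedia record.
for row in conn.execute("""
    SELECT b.title, c.media_type, c.uri
    FROM program_biblio b JOIN program_content c USING (program_id)
    WHERE b.subject = ?""", ("science",)):
    print(row)
```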

A Multi-speaker Speech Synthesis System Using X-vector (x-vector를 이용한 다화자 음성합성 시스템)

  • Jo, Min Su;Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.675-681
    • /
    • 2021
  • With the recent growth of the AI speaker market, the demand for speech synthesis technology that enables natural conversation with users is increasing. Therefore, there is a need for a multi-speaker speech synthesis system that can generate voices of various tones. Synthesizing natural speech requires training with a large-capacity, high-quality speech DB, but collecting such a database uttered by many speakers is very difficult in terms of recording time and cost. It is therefore necessary to train the speech synthesis system on a speech DB covering a very large number of speakers with only a small amount of training data per speaker, and a technique for naturally expressing the tone and prosody of multiple speakers is required. In this paper, we propose a technique that builds a speaker encoder by applying the deep-learning-based x-vector method used in speaker recognition, and synthesizes a new speaker's tone from a small amount of data through that encoder. In the multi-speaker speech synthesis system, the module that synthesizes a mel-spectrogram from input text is based on Tacotron2, and the vocoder that generates the synthesized speech is a WaveNet with a mixture of logistic distributions. The x-vector extracted from the trained speaker-embedding neural network is added to Tacotron2 as an input to express the desired speaker's tone.
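
A minimal sketch of the conditioning idea in PyTorch (not the authors' implementation; all dimensions and module names are illustrative): a fixed-dimensional speaker embedding standing in for the x-vector is projected and added to the text-encoder outputs, so the decoder sees the speaker identity at every time step:

```python
import torch
import torch.nn as nn

class SpeakerConditionedEncoder(nn.Module):
    """Toy stand-in for a Tacotron2-style text encoder conditioned on an
    x-vector-like speaker embedding (dimensions are illustrative)."""
    def __init__(self, vocab=64, enc_dim=256, xvec_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, enc_dim)
        self.rnn = nn.LSTM(enc_dim, enc_dim, batch_first=True)
        self.spk_proj = nn.Linear(xvec_dim, enc_dim)  # project x-vector to encoder dim

    def forward(self, tokens, xvector):
        h, _ = self.rnn(self.embed(tokens))      # (B, T, enc_dim)
        s = self.spk_proj(xvector).unsqueeze(1)  # (B, 1, enc_dim)
        return h + s                             # broadcast speaker identity over time

enc = SpeakerConditionedEncoder()
tokens = torch.randint(0, 64, (2, 20))   # batch of 2 toy "sentences"
xvec = torch.randn(2, 512)               # x-vectors from a speaker encoder
print(enc(tokens, xvec).shape)           # torch.Size([2, 20, 256])
```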

Multi-sensor Fusion based Autonomous Return of SUGV (다중센서 융합기반 소형로봇 자율복귀에 대한 연구)

  • Choi, Ji-Hoon;Kang, Sin-Cheon;Kim, Jun;Shim, Sung-Dae;Jee, Tae-Yong;Song, Jae-Bok
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.15 no.3
    • /
    • pp.250-256
    • /
    • 2012
  • Unmanned ground vehicles (UGVs) may be operated autonomously or by a remote-control unit over a wireless link. However, autonomous technology is still challenging and imperfect, and for one reason or another the wireless link is not always available. If communication is abruptly disconnected, the UGV becomes nothing but a lump of junk; worse, it can be captured by the enemy. This paper suggests an autonomous-return technique with which the UGV can go back to a safer position along the reverse path. The suggested technique is based on the creation and matching of a DB of multi-correlated information. While the SUGV moves under remote control, this DB is created from multi-sensor information: the absolute position along the trajectory is stored when GPS is available, and a hybrid map based on the fusion of VISION and LADAR is stored with the corresponding relative position when GPS is unavailable. During autonomous return, the SUGV returns based on the DB: it follows the trajectory using GPS-based absolute positions when GPS is available; otherwise, its current position is first estimated from the relative position using multi-sensor fusion, followed by matching between the query and the DB. A return path is then created on the map, and the SUGV returns automatically along it. Experimental results on a pre-built trajectory show the possibility of successful autonomous return.
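
A toy sketch of the trajectory-DB idea under stated assumptions (the data layout, matching rule, and coordinates are illustrative, not the authors' method): waypoints are recorded with their sensor source while driving out, and on return the current position estimate is matched to the nearest stored waypoint, from which the path is walked back in reverse:

```python
import math

# Toy sketch of the trajectory-DB idea: while driving out under remote
# control, each waypoint is stored together with the sensor source that
# produced it (GPS when available, VISION+LADAR fusion otherwise).
trajectory_db = []

def record(pos, gps_ok):
    trajectory_db.append({"pos": pos, "source": "gps" if gps_ok else "vision+ladar"})

def nearest_index(query):
    """Match the current position estimate to the closest stored waypoint."""
    return min(range(len(trajectory_db)),
               key=lambda i: math.dist(trajectory_db[i]["pos"], query))

# Outbound leg: GPS available for the first half only.
for t in range(10):
    record((float(t), 0.5 * t), gps_ok=t < 5)

# Return leg: estimate the current pose (e.g. from multi-sensor fusion),
# match it into the DB, then follow the stored trajectory in reverse.
start = nearest_index((8.7, 4.2))
return_path = [wp["pos"] for wp in reversed(trajectory_db[:start + 1])]
print(return_path[:3])
```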

Development of Life Cycle Inventory (LCI) Database for Production of Liquid CO2 (액체 이산화탄소의 전과정목록(LCI) DB 구축에 관한 연구)

  • Lee, Soo-Sun;Kim, Young Sil;Ahn, Joong Woo
    • Clean Technology
    • /
    • v.21 no.1
    • /
    • pp.33-38
    • /
    • 2015
  • In this research, a life cycle inventory database (LCI DB) was developed for liquid CO2 using the life cycle assessment (LCA) methodology. As a result of the characterization and normalization steps, the production of liquid CO2 exerts environmental impacts in the order of resource depletion, global warming, acidification, eutrophication, and photochemical oxidation, and among the wide variety of inputs, electricity contributes most across the impact categories. Air emissions play a key role in acidification and eutrophication, while ammonia has the greatest effect on ozone depletion. It is anticipated that the development of the liquid CO2 LCI DB will help activate national environmental strategies, including the environmental labeling scheme.
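
The characterization and normalization steps mentioned in the abstract reduce to weighted sums; the sketch below shows that standard LCA arithmetic with made-up inventory values and factors (none of these numbers are from the paper):

```python
# Characterization: impact = sum(emission_i * characterization_factor_i).
# Normalization: divide each characterized impact by a reference value.
# All numbers below are illustrative, not from the paper.
inventory = {"CO2": 1000.0, "CH4": 2.0, "SO2": 1.5}   # kg per functional unit
gwp_factors = {"CO2": 1.0, "CH4": 28.0, "SO2": 0.0}   # kg CO2-eq per kg
acid_factors = {"CO2": 0.0, "CH4": 0.0, "SO2": 1.0}   # kg SO2-eq per kg

def characterize(factors):
    return sum(inventory[s] * factors[s] for s in inventory)

gwp = characterize(gwp_factors)    # global warming, kg CO2-eq
acid = characterize(acid_factors)  # acidification, kg SO2-eq

reference = {"gwp": 5.5e3, "acid": 4.0e1}  # hypothetical normalization references
print({"gwp": gwp / reference["gwp"], "acid": acid / reference["acid"]})
```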

A Study on Monitoring System Architecture for Calculation of Practical Recycling Rate of End of Life Vehicle (폐자동차의 실질적 재활용률 산정을 위한 모니터링 체계에 관한 연구)

  • Park, Jung Whan;Yi, Hwa-Cho;Park, Myon Woong;Sohn, Young Tae
    • Clean Technology
    • /
    • v.18 no.4
    • /
    • pp.373-378
    • /
    • 2012
  • End-of-life vehicles (ELV) are important recycling sources, and recycling involves several stages such as dismantling, shredding, and treatment of automobile shredder residue (ASR). The legal recycling rate must be at least 95% from 2015 onward, so a proper system is needed to monitor the recycling of ELV components and to calculate the practical recycling rate. This paper suggests a monitoring system that calculates the practical recycling rates of dismantled components using a database of standard recycling rates together with web-based monitoring linked to the Eco Assurance system for electric & electronic equipment and vehicles (EcoAS). The system also supports the dismantling and monitoring process by incorporating a standard vehicular-component database, which facilitates not only recording dismantled weight data but also monitoring the dismantled components.
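
One plausible reading of the calculation such a monitoring system performs is a weighted sum of dismantled component weights and standard recycling rates over the total vehicle weight; the formula and every value below are assumptions for illustration, not the paper's actual logic:

```python
# Illustrative practical-recycling-rate calculation: weight of each dismantled
# component times its standard recycling rate, summed and divided by the total
# vehicle weight. All rates and weights are made-up example values.
standard_rate_db = {"engine": 0.95, "battery": 0.90, "bumper": 0.60, "asr": 0.30}

dismantled = [  # (component, weight in kg) recorded during dismantling
    ("engine", 180.0), ("battery", 15.0), ("bumper", 8.0), ("asr", 250.0),
]
vehicle_weight = 1200.0

recycled = sum(w * standard_rate_db[c] for c, w in dismantled)
print(f"practical recycling rate: {recycled / vehicle_weight:.1%}")
```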

A Study about Performance Evaluation of Various NoSQL Databases (다양한 NoSQL 데이터베이스의 성능 평가 연구)

  • Park, Hong-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.3
    • /
    • pp.298-305
    • /
    • 2016
  • Various NoSQL databases handle large volumes of big data better than existing relational databases such as MySQL, PostgreSQL, and Oracle. Among widely used NoSQL databases, the performance of HBase, Cassandra, MongoDB, and Redis was comparatively assessed. For distributed processing of large amounts of data, 12 servers were connected through a switching hub, with Ubuntu installed as the operating system. YCSB was used as the benchmark tool. Read-to-update ratios were set to 50%:50%, 95%:5%, and finally 100%:0%, and each case was assessed with workloads scaled from 200,000 up to 1,200,000 commands. Cassandra performed best in transactions processed per second, while MongoDB performed best in the number of operations completed per unit time.
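
For readers unfamiliar with YCSB, a run like the 95%/5% read/update case might be driven as sketched below. Property names follow YCSB's CoreWorkload; the binding name ("mongodb"), paths, and counts are illustrative and depend on the local installation:

```python
import subprocess, pathlib

# Sketch of driving a YCSB read/update workload like the one in the paper.
# Property names follow YCSB's CoreWorkload; the binding, paths, and counts
# are illustrative and depend on the local installation.
workload = """
workload=site.ycsb.workloads.CoreWorkload
recordcount=200000
operationcount=1200000
readproportion=0.95
updateproportion=0.05
requestdistribution=zipfian
"""
pathlib.Path("workload_95_5").write_text(workload)

# Load the records, then run the mixed read/update phase.
subprocess.run(["bin/ycsb", "load", "mongodb", "-s", "-P", "workload_95_5"], check=True)
subprocess.run(["bin/ycsb", "run", "mongodb", "-s", "-P", "workload_95_5"], check=True)
```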

Numerical Web Model for Quality Management of Concrete based on Compressive Strength (압축강도 기반의 콘크리트 품질관리를 위한 웹 전산모델 개발)

  • Lee, Goon-Jae;Kim, Hak-Young;Lee, Hye-Jin;Hwang, Seung-Hyeon;Yang, Keun-Hyeok
    • Journal of the Korea Institute of Building Construction
    • /
    • v.21 no.3
    • /
    • pp.195-202
    • /
    • 2021
  • Concrete quality is mainly managed through reliable prediction and control of compressive strength. Although related industries have accumulated relevant datasets of mixture proportions and compressive strength gain, these have not been shared for various reasons, including concerns over technology leakage. Consequently, excessive cost and effort have been spent on quality control. This study aimed to develop a web-based numerical model that presents diverse optimal values, including concrete strength predictions, to the user, and to establish a sustainable database (DB) collection system by channeling the data entered by users into the DB. The system handles the overall technology related to concrete. In particular, it predicts compressive strength with a mean accuracy of 89.2% by applying an artificial neural network modeled on extensive DBs.
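
A toy sketch of the strength-prediction idea (the features, synthetic data, and network size are illustrative; the paper's model is trained on far more extensive DBs): a small multilayer perceptron maps mixture proportions and age to compressive strength:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy sketch only: features are cement, water, fine aggregate, coarse
# aggregate (kg/m3), and age (days); the target is synthetic, loosely shaped
# like strength rising with cement and age and falling with the w/c ratio.
rng = np.random.default_rng(0)
X = rng.uniform([250, 140, 600, 900, 3], [450, 200, 900, 1100, 91], size=(200, 5))
y = 0.12 * X[:, 0] - 60 * (X[:, 1] / X[:, 0]) + 8 * np.log(X[:, 4])

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[350, 165, 750, 1000, 28]]))  # predicted strength for one mix
```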

A study on the collection of river water usage using IRDiMS (IRDiMS를 활용한 하천수 사용량 수집 방안에 관한 연구)

  • Sang Uk Cho;Dong Heon Oh;Jong Seok Baek;Jeong Hwan Cheon
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.440-440
    • /
    • 2023
  • Under the River Act, river water usage data (for agricultural water, usage of 8,000 m3/s or more) are reported monthly: users either enter them directly into the River Water Use Management System of the Han River Flood Control Office or submit them by mail or official document. However, data collected at monthly intervals are of limited use for statistical analysis of river water usage or for the river flow management needed for efficient water management, making accurate management of river water usage difficult. This study therefore proposes an improvement in which field-measured data, transmitted in a standardized format, are entered into the River Water Use Management System in real time through IRDiMS (Integrated Real-time Discharge Measurement System), which the Han River Flood Control Office already operates to collect and manage river discharge data in real time with advanced measurement equipment. Since IRDiMS already collects discharge data from measurement instruments and manages them after quality control, real-time river water usage data can reuse the same processing pipeline. Moreover, data processed by IRDiMS reside, both physically and in software terms, in the same space as the internal DB of the Han River Flood Control Office, which is advantageous for linking them not only to the River Water Use Management System but also to the National Hydrological DB and other systems. In the proposed scheme, river water usage data measured and computed in the field in real time are transmitted in a fixed format every 10 minutes to the IRDiMS receiving server; the collected data pass quality control on the processing server and are stored in the automatic-discharge DB. The stored data are then linked automatically to the National Hydrological DB according to predefined rules and are stored in real time in the River Water Use Management System DB through a link server on the external network.

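A toy sketch of the proposed real-time pipeline (all names, checks, and the record format are illustrative): a fixed-format usage record arrives every 10 minutes, passes quality control, is stored in the automatic-discharge DB, and is then forwarded to the downstream systems named in the abstract:

```python
import time

# Toy sketch: a fixed-format usage record arrives every 10 minutes, passes
# quality control, is stored, then forwarded downstream. All names are
# illustrative stand-ins, not IRDiMS internals.
automatic_discharge_db = []

def quality_control(record):
    """Reject obviously bad measurements (illustrative checks only)."""
    return record["usage_m3"] >= 0 and record["station"] is not None

def forward(record, system):
    print(f"forwarded to {system}: {record}")

def ingest(record):
    if not quality_control(record):
        return
    automatic_discharge_db.append(record)
    forward(record, "national_hydrological_db")     # rule-based linkage
    forward(record, "river_water_use_mgmt_system")  # via external link server

# One 10-minute cycle (time.sleep(600) would pace a real loop).
ingest({"station": "HAN-001", "usage_m3": 42.5, "ts": time.time()})
```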

A Study on the Database Integration Methodology using XML (XML을 이용한 데이터베이스 통합방안에 관한 연구)

  • Oh Se-Woong;Lee Hong-Girl;Lee Chul-Young;Park Jong-Min;Suh Sang-Hyung
    • Journal of Navigation and Port Research
    • /
    • v.29 no.10 s.106
    • /
    • pp.883-890
    • /
    • 2005
  • Database integration has been recognized as a critical issue for effective logistics services in the logistics environment. However, research on effective methodologies for it has remained largely theoretical, focused on DB schema integration, and is insufficient on the side of system realization. The aim of this paper is to present a schema integration technique that integrates DBs using XML (eXtensible Markup Language) for practical DB integration, together with a quantitative methodology for identifying conflicts, a representative problem in database integration. To achieve this aim, we extract entity names and attribute names from DB schemas and suggest a quantitative methodology, based on the level of semantic similarity between attributes and entities, for easily finding the name conflicts that frequently cause trouble during schema integration.
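
A sketch of the conflict-flagging step (the paper scores semantic similarity; plain string similarity via difflib is used here only as a stand-in, and the schemas and threshold are illustrative): pairs of entity/attribute names that are similar but not identical are flagged as possible synonym conflicts:

```python
from difflib import SequenceMatcher
from itertools import product

# Sketch of flagging candidate name conflicts between two DB schemas. The
# paper scores *semantic* similarity; plain string similarity stands in here.
schema_a = {"Customer": ["cust_name", "cust_addr"], "Order": ["order_no"]}
schema_b = {"Client":   ["client_name", "address"], "Order": ["order_num"]}

def sim(x, y):
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

for (ea, attrs_a), (eb, attrs_b) in product(schema_a.items(), schema_b.items()):
    for a, b in product(attrs_a, attrs_b):
        score = sim(a, b)
        if 0.6 <= score < 1.0:  # similar but not identical: possible synonym conflict
            print(f"{ea}.{a} ~ {eb}.{b}  (similarity {score:.2f})")
```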

A Study on the Database Integration Methodology using XML (XML을 이용한 데이터베이스 통합방안에 관한 연구)

  • Oh Se-Woong;Lee Hong-Girl;Lee Chul-Young;Park Jong-Min;Suh Sang-Hyung
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2005.10a
    • /
    • pp.353-360
    • /
    • 2005
  • Database integration has been recognized as a critical issue for effective logistics services in the logistics environment. However, research on effective methodologies for it has remained largely theoretical, focused on DB schema integration, and is insufficient on the side of system realization. The aim of this paper is to present a schema integration technique that integrates DBs using XML (eXtensible Markup Language) for practical DB integration, together with a quantitative methodology for identifying conflicts, a representative problem in database integration. To achieve this aim, we extract entity names and attribute names from DB schemas and suggest a quantitative methodology, based on the level of semantic similarity between attributes and entities, for easily finding the name conflicts that frequently cause trouble during schema integration.
