• Title/Summary/Keyword: research resources

Search Results: 329

A Study on the Digital Filter Design for Radio Astronomy Using FPGA (FPGA를 이용한 전파천문용 디지털 필터 설계에 관한 기본연구)

  • Jung, Gu-Young;Roh, Duk-Gyoo;Oh, Se-Jin;Yeom, Jae-Hwan;Kang, Yong-Woo;Lee, Chang-Hoon;Chung, Hyun-Soo;Kim, Kwang-Dong
    • Journal of the Institute of Convergence Signal Processing / v.9 no.1 / pp.62-74 / 2008
  • In this paper, we propose the design of a symmetric digital filter core for use in radio astronomy. The FIR filter core is designed in VHDL for the Data Acquisition System (DAS) of the Korean VLBI Network (KVN), targeting the Xilinx Virtex-4 SX55 FPGA. The designed digital filter has a symmetric structure that increases system efficiency by sharing the digital filter coefficients. The SFFU (Symmetric FIR Filter Unit) uses parallel processing to handle data efficiently under the constrained system clock. For the effective design of the SFFU, the unified synthesis software ISE Foundation and Core Generator, which provide an excellent GUI environment, were used for overall IP core synthesis and experiments. The synthesis results show that resource usage, such as Slice LUTs, is less than 40%, and that the maximum operating frequency exceeds 260 MHz. Simulation with Mentor Graphics ModelSim 6.1a confirmed that the SFFU operates without error. To further verify the function of the SFFU, additional simulation experiments were carried out using pseudo signals generated in Matlab. Comparison of the simulation results with the designed digital FIR filter confirmed that the filter performs its basic function correctly, verifying the effectiveness of the symmetric FIR digital filter designed with FPGA and VHDL.

  • PDF
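
The coefficient-sharing idea behind the symmetric (linear-phase) FIR structure can be sketched in software. This is a minimal illustration of the folding trick that halves the number of multiplications, not the KVN DAS design itself; the function name and tap values are hypothetical:

```python
# Minimal sketch (assumed parameters, not the SFFU design): a symmetric
# FIR filter that shares coefficients by folding the delay line, so each
# coefficient multiplies a pre-added sample pair -- the same idea used to
# save FPGA multiplier resources.

def symmetric_fir(samples, half_taps):
    """Filter `samples` with a symmetric FIR whose full impulse response
    is half_taps + reversed(half_taps) (even length 2*M)."""
    m = len(half_taps)
    n_taps = 2 * m
    delay = [0.0] * n_taps              # delay line, newest sample first
    out = []
    for x in samples:
        delay = [x] + delay[:-1]
        # Fold: pair the sample at tap k with the one at tap (N-1-k),
        # so the shared coefficient is applied once per pair.
        acc = 0.0
        for k in range(m):
            acc += half_taps[k] * (delay[k] + delay[n_taps - 1 - k])
        out.append(acc)
    return out

# An impulse recovers the full symmetric impulse response:
# symmetric_fir([1, 0, 0, 0], [0.5, 0.25]) -> [0.5, 0.25, 0.25, 0.5]
```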

Research for the Element to Analyze the Performance of Modern-Web-Browser Based Applications (모던 웹 브라우저(Modern-Web-Browser) 기반 애플리케이션 성능분석을 위한 요소 연구)

  • Park, Jin-tae;Kim, Hyun-gook;Moon, Il-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.278-281 / 2018
  • Early web technology served to display text information through a browser. As web technology has advanced, however, it has become possible to present large amounts of multimedia data through browsers. Web technologies are now applied in a variety of fields such as sensor networks, hardware control, and data collection and analysis for big data and AI services. As a result, standards have been prepared for the Internet of Things, which typically controls sensors via HTTP communication and provides information to users by installing a web browser on the interface of IoT devices. In addition, the recent development of WebAssembly has enabled 3D objects and virtual/augmented reality content that previously could not run in web browsers to be written in native languages such as C. Factors used to evaluate the performance of existing web applications include performance, network resources, and security. However, since web applications are now applied in many areas, it is time to revisit and review these factors. In this paper, we analyze the factors that assess the performance of a web application. We intend to establish indicators for the development of web-based applications by reviewing each factor, its main points, and what needs to be supplemented.

  • PDF

A Rule-based Reasoning Engine supporting Hierarchical Taxonomy (계층적 분류체계를 지원하는 규칙기반 추론엔진)

  • Kim, Tae-Hyun;Kim, Jae-Ho;Won, Kwang-Ho;Lee, Ki-Hyuk;Sohn, Ki-Rack
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.148-154 / 2008
  • In a ubiquitous computing environment, a ubiquitous smart space is required to help devices provide intelligent services. A smart space embedded with mobile devices should be capable of collecting data and refining the data into context. Unfortunately, context information in a ubiquitous smart space has many ambiguous characteristics. Therefore, it is necessary to adopt a standard taxonomy for context information in the smart space and to implement an inference technique for context information based on that taxonomy. Rule-based inference engines such as CLIPS and Jess have been employed to provide situation-aware services. However, these engines are difficult to use on resource-limited mobile devices. In this paper, we propose a lightweight inference engine that provides autonomous situation-aware services in mobile environments. It can be utilized on personal mobile devices such as mobile phones, PMPs, and navigation systems. It also supports both generalized rules and specialized rules by using hierarchical taxonomy information.
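
The way a hierarchical taxonomy lets one generalized rule also fire for specialized context types can be sketched as follows. The taxonomy, rule format, and all names here are illustrative assumptions, not the paper's actual engine:

```python
# Illustrative sketch (hypothetical names, not the proposed engine): a tiny
# rule matcher where a rule written against a general taxonomy node also
# fires for facts of its specialized descendant types.

TAXONOMY = {                        # child -> parent
    "gps_location": "location",
    "wifi_location": "location",
    "location": "context",
    "temperature": "context",
}

def is_a(node, ancestor):
    """True if `node` equals `ancestor` or lies below it in the taxonomy."""
    while node is not None:
        if node == ancestor:
            return True
        node = TAXONOMY.get(node)
    return False

def infer(facts, rules):
    """facts: list of (type, value); rules: list of (condition_type,
    predicate, action_label). Returns the labels of fired rules."""
    fired = []
    for cond_type, pred, action in rules:
        for ftype, value in facts:
            if is_a(ftype, cond_type) and pred(value):
                fired.append(action)
                break
    return fired

rules = [
    # A generalized rule on "location" matches any specialized location fact.
    ("location", lambda v: v == "home", "turn_on_lights"),
    ("temperature", lambda v: v > 28, "start_fan"),
]
facts = [("gps_location", "home"), ("temperature", 31)]
# infer(facts, rules) -> ["turn_on_lights", "start_fan"]
```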

Design of an SpO2 Transmission Agent based on ISO/IEEE 11073 Standard Protocol (ISO/IEEE 11073 표준 프로토콜 기반의 산소포화도 전송 에이전트 설계)

  • Pak, Ju-Geon;Im, Sung-Hyun;Park, Kee-Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.10a / pp.462-465 / 2011
  • A pulse oximeter is a device which provides a non-invasive estimate of the percentage oxygen saturation of haemoglobin (SpO2). Due to the limited resources of personal health devices (PHDs), including pulse oximeters, they generally transmit the estimated data to a remote monitoring server through a close manager (e.g. a mobile device or PC). Therefore, the communication protocol between PHDs and a manager is an important research topic in terms of interoperability. In this paper, we present the design of an SpO2 transmission agent based on the ISO/IEEE 11073 (X73) protocol, an international standard for PHDs. The agent is an embedded program which generates X73 messages from the estimated pulse rates and SpO2 values, and transmits the messages to a close manager. The agent consists of a Session Handler, a Message Handler, and a Memory Handler. The Session Handler manages a communication session with the manager, and the Message Handler generates and analyzes the exchanged messages according to the X73 protocol. The Memory Handler extracts the pulse rates and SpO2 values stored in the memory of the pulse oximeter. The SpO2 transmission agent allows pulse oximeters to communicate with managers based on the X73 standard; consequently, interoperability between the pulse oximeters and the managers is guaranteed.

  • PDF
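
The three-handler decomposition described above can be sketched structurally. The message format below is a simplified stand-in, not real ISO/IEEE 11073 APDUs, and all class and field names are hypothetical:

```python
# Structural sketch of the Session/Message/Memory handler split. The wire
# format here is a simplified stand-in, NOT real ISO/IEEE 11073 APDUs.

class MemoryHandler:
    """Plays the role of reading stored measurements from the oximeter."""
    def __init__(self, stored):
        self.stored = stored            # list of (pulse_rate, spo2)
    def read_all(self):
        return list(self.stored)

class MessageHandler:
    """Builds and parses (simplified) measurement messages."""
    def build(self, pulse, spo2):
        return f"MEAS|pulse={pulse}|spo2={spo2}"
    def parse(self, msg):
        _, p, s = msg.split("|")
        return int(p.split("=")[1]), float(s.split("=")[1])

class SessionHandler:
    """Tracks association state with the manager before data can flow."""
    def __init__(self):
        self.associated = False
    def associate(self):
        self.associated = True

class SpO2Agent:
    def __init__(self, memory, message, session):
        self.memory, self.message, self.session = memory, message, session
    def transmit(self):
        if not self.session.associated:
            self.session.associate()    # open the session first
        return [self.message.build(p, s) for p, s in self.memory.read_all()]

agent = SpO2Agent(MemoryHandler([(72, 97.5)]), MessageHandler(), SessionHandler())
# agent.transmit() -> ["MEAS|pulse=72|spo2=97.5"]
```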

Design and Performance Analysis of ENUM Directory Service (ENUM 디렉터리 서비스 설계 및 성능 평가)

  • 이혜원;윤미연;신용태;신성우;송관우
    • Journal of KIISE: Information Networking / v.30 no.4 / pp.559-571 / 2003
  • ENUM (tElephone NUmber Mapping) is a protocol that brings convergence between PSTN networks and IP networks, using the unique worldwide E.164 telephone number as an identifier across different communication infrastructures. The mechanism provides a bridge between two completely different environments via the E.164 number: IP-based application services used in PSTN networks, and PSTN-based application services used in IP networks. We propose a new way to organize and handle ENUM Tier 2 name servers to improve the performance of the name resolution process in ENUM-based application services. We build an ENUM-based network model in which the NAPTR (Naming Authority PoinTeR) resource record is registered and managed by area code at the initial registration step. ENUM promises convenience and flexibility to both PSTN and IP users, yet there is no evidence of how much patience is required when users decide to use ENUM instead of non-ENUM-based applications. We estimated ENUM response time and showed that performance improves by up to 3 times when resources are managed by the proposed mechanism. The results of this paper favorably influence users and help establish a policy for Tier 2 name server management.
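
The name-resolution process that ENUM performance hinges on starts from a fixed E.164-to-domain mapping, standardized in RFC 6116: strip non-digits, reverse the digits, dot-separate them, and append the e164.arpa suffix. A Tier 2 name server then holds the NAPTR records at that name. A minimal sketch:

```python
# The E.164-to-domain conversion underlying ENUM resolution (RFC 6116).

def e164_to_enum_domain(number: str) -> str:
    """Map an E.164 telephone number to its ENUM domain name."""
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

# e164_to_enum_domain("+44-20-7946-0123")
#   -> "3.2.1.0.6.4.9.7.0.2.4.4.e164.arpa"
```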

A Study on the Development of Textile Design Contents Reflecting The Cultural Characteristics of Multi-cultural Society - Focused on Folk Paintings in China, Vietnam and Japan - (다문화사회의 문화적 특성을 반영한 텍스타일디자인 콘텐츠 개발 연구 - 중국, 베트남, 일본의 민화를 중심으로 -)

  • Park, Sang Oh
    • Korea Science and Art Forum / v.30 / pp.119-127 / 2017
  • Multi-cultural societies in the era of globalization are now a common phenomenon all over the world. Since Korea has already entered a multi-cultural society, we can no longer stay within the ideology of a single nation. However, current national policies and research related to multi-cultural society in Korea are limited to institutional aspects and unilateral education in Korean culture. Therefore, this study aims to overcome these practical limitations. The purpose of this study is to acquire design resources from folk paintings reflecting the culture of each country, and to develop textile design content that can be applied to the textile products most closely related to daily life. Through this, it aims to raise awareness of various cultures and to suggest a method of communication through cultural exchange. Accordingly, this study developed color and textile pattern design content through an analysis of the characteristics of folk paintings from China, Vietnam, and Japan, the three countries of origin most frequent among domestic marriage immigrants, and attempted to apply it immediately to various textile products. The results and contents of the study are as follows. First, the domestic multi-cultural society was formed through international marriage, and the largest numbers of marriage immigrants came from China, Vietnam, Japan, the Philippines, Cambodia, Thailand, and Mongolia. Second, folk paintings are suitable for developing textile design content as an important factor implied by the different cultures of different countries. Third, we developed a pattern and coloring DB and textile pattern design content using the folk paintings of China, Vietnam, and Japan. As a result, we could verify the usability of content reflecting the cultural characteristics of each country and the possibility of commercialization.
Based on these results, we hope to contribute to harmonization in the emotional and artistic aspects that naturally share culture among members of a multi-cultural society, and to the development of differentiated related products.

Analysis of Metadata Standards of Record Management for Metadata Interoperability From the viewpoint of the Task model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • The Korean Journal of Archival Studies / no.32 / pp.127-176 / 2012
  • Metadata is well recognized as one of the foundational factors in archiving and long-term preservation of digital resources. There are several metadata standards for records management, archives and preservation, e.g. ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is important when selecting appropriate metadata standards in order to design a metadata schema that meets the requirements of a particular archival system, and interoperability of metadata with other systems should be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle; more detailed analysis was left for future study. This paper proposes to analyze the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe properties of a resource in accordance with the purposes of description, e.g. finding aids, records management, preservation and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards. There are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources in each stage of the lifecycle.
This model is created as a task-centric model to identify features of metadata standards and to create mappings among elements of those standards. It is important to categorize the elements in order to limit the semantic scope of mapping among elements and to decrease the number of element combinations for mapping. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g. in news articles. As performing a task on a resource causes an event, and metadata elements are used in that event, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, as well as an attribute set extracted from the DPC decision flow. Then, we perform element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each of the 5W1H categories, which typically appear in the definition of an element, and used those terms to categorize the elements. For example, if the definition of an element includes terms such as person and organization, which denote a subject that contributes to creating or modifying a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. Thus, we categorized every element of the metadata standards using the 5W1H model and then carried out mapping among the elements in each category. We conclude that the Task Model provides a new viewpoint for metadata schemas and helps us understand the features of metadata standards for records management and archives. The 5W1H model, defined on the basis of the Task Model, provides a core set of categories for semantically classifying metadata elements from the viewpoint of an event caused by a task.
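
The term-based 5W1H categorization step can be sketched as a simple keyword match over element definitions. The trigger vocabularies below are illustrative assumptions, not the paper's actual term sets:

```python
# Sketch of term-based 5W1H categorization: each category gets a small
# vocabulary of trigger terms (illustrative, not the paper's term sets),
# and an element definition is assigned every category whose terms it
# mentions. An element may land in more than one category.

CATEGORY_TERMS = {
    "Who":   {"person", "organization", "agent", "creator"},
    "What":  {"content", "format", "title"},
    "Why":   {"purpose", "reason"},
    "When":  {"date", "time"},
    "Where": {"location", "place", "repository"},
    "How":   {"method", "process", "procedure"},
}

def categorize(definition: str):
    """Return the sorted 5W1H categories triggered by a definition."""
    words = set(definition.lower().replace(",", " ").split())
    return sorted(cat for cat, terms in CATEGORY_TERMS.items()
                  if words & terms)

# A definition mentioning both an agent and a date lands in two categories:
# categorize("The person or organization and date of creation")
#   -> ["When", "Who"]
```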

Nd, Sr and Noble Gas Isotopic Compositions of Alkali Basaltic Rocks and Mantle Xenoliths in the Baegryongdo (백령도에 분포하는 알칼리 현무암과 맨틀 포획암의 Nd-Sr과 영족기체 동위원소 조성)

  • ;Nagao Keisuke;;Sumino Hirochika
    • Economic and Environmental Geology / v.35 no.6 / pp.523-532 / 2002
  • The rare earth element (REE) contents and the Nd, Sr and noble gas isotopic compositions (3He/4He, 40Ar/36Ar) of the Quaternary alkali basaltic rocks, and of the mantle xenoliths within them, from Baegryongdo were investigated to decipher the origin of the alkali basaltic magma and the xenoliths beneath the Sino-Korean craton. The analytical results are summarized as follows. (1) The alkali volcanic rocks, which carry voluminous xenoliths represented by Mg-olivine and clinopyroxene dominant spinel-lherzolite, consist mainly of basalt-mugearite and basaltic andesite. (2) The REE pattern of the alkali basaltic rocks, characterized by high HREE, is similar to that of oceanic island basalt (OIB). The relatively concordant REE patterns of the basaltic rocks suggest that the alkali basaltic magma was formed from identical source materials. (3) The Nd-Sr isotopic data suggest that the alkali basaltic magma originated from a depleted mantle source with slight contamination by continental crustal materials. (4) The 3He/4He ratios in olivines of the xenoliths, ranging from 5.0±1.1 Ra to 6.7±1.3 Ra, are lower than that of MORB (ca. 8.0 Ra), suggesting that the xenoliths were derived from the subcontinental lithospheric mantle. However, the high 3He/4He value of 16.8±3.1 Ra in the 1800°C fraction (sample OL-7) might result from post-eruptive cosmogenic 3He. The 40Ar/36Ar ratios in olivines of the mantle xenoliths are comparable to that of atmospheric argon and much lower than that of the MORB-type mantle. These facts lead to the conclusion that the olivine of the xenoliths in Baegryongdo was affected by post-eruptive atmospheric contamination during a slow degassing process.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. Failures in IT facilities in particular occur irregularly, owing to interdependence, and their causes are difficult to identify. Previous studies predicting failure in data centers treated each server as a single, independent state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and we focused on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring inside servers are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure in one server may cause failures in other servers, or be triggered by failures elsewhere. In other words, while existing studies analyzed failures on the assumption that servers do not affect one another, this study assumes that failures propagate between servers.
To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on a specific device, any failure occurring on another device within 5 minutes is defined as occurring simultaneously. After constructing sequences of the devices that failed at the same time, five devices that frequently failed together within the constructed sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used in consideration of the fact that the contribution of each server to a complex failure differs. This algorithm increases prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was modeled both as single-server states and as multi-server states, and the results were compared and analyzed. The second experiment improved prediction accuracy for complex failures by optimizing a threshold for each server.
In the first experiment, which assumed single servers and multiple servers respectively, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas the multi-server model predicted failures for all five servers. This result supports the hypothesis that servers affect one another. This study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. We expect that the occurrence of failures can be prevented in advance using these results.
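
The 5-minute co-occurrence rule used to define simultaneous failures can be sketched as a windowed grouping over time-sorted events. Device names and data layout here are illustrative assumptions, not the paper's dataset:

```python
# Sketch of the 5-minute co-occurrence rule: failure events are sorted
# chronologically, and events on different devices starting within 300
# seconds of a group's first event are treated as one simultaneous-failure
# set. Device names are illustrative.

from datetime import datetime, timedelta

def group_cooccurring(events, window=timedelta(minutes=5)):
    """events: list of (timestamp, device). Returns lists of devices whose
    failures started within `window` of the group's first event."""
    events = sorted(events)                 # chronological order
    groups, current, start = [], [], None
    for ts, device in events:
        if start is None or ts - start > window:
            if current:
                groups.append(current)
            current, start = [], ts         # open a new group
        current.append(device)
    if current:
        groups.append(current)
    return groups

events = [
    (datetime(2020, 1, 1, 0, 0), "server_A"),
    (datetime(2020, 1, 1, 0, 3), "server_B"),   # within 5 min of A
    (datetime(2020, 1, 1, 1, 0), "db_node"),    # a separate incident
]
# group_cooccurring(events) -> [["server_A", "server_B"], ["db_node"]]
```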