• Title/Summary/Keyword: Information Source

The Performance Evaluation of the Adaptive UPC Mechanism in ATM (ATM망에서의 적응적 UPC 메커니즘의 성능 평가)

  • 안옥정
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 1994.10a
    • /
    • pp.11-11
    • /
    • 1994
  • To regulate traffic flow and optimize the use of ATM network resources, appropriate control is needed to prevent performance degradation caused by congestion. Conventional usage parameter control (UPC) mechanisms perform only a rigid preventive function regardless of network conditions and have the limitation of increasing cell delay due to buffering. This paper proposes an adaptive UPC mechanism that reacts actively to the state of the network by using OAM cells, and aims to further improve the quality of service while taking buffer-induced delay into account. The proposed scheme judges the condition of the network from the information carried by OAM cells and adjusts the leaky rate and the buffer threshold so that the quality of service requested by the user can be considered. When the network is busy, the leaky rate is lowered and the buffer is reduced to block cells from entering the network; when the network is idle, the leaky rate is raised and the buffer is enlarged so that cells can flow into the network quickly. When congestion occurs, cell inflow is blocked and the spacer is suspended until the congestion is resolved. The traffic source of the proposed UPC mechanism is modeled as an IPP (Interrupted Poisson Process), and simulations were carried out for voice and high-speed data traffic. Comparing the simulation results for voice and high-speed data with the conventional scheme shows that buffer delay is greatly reduced for voice and the cell loss rate is reduced for high-speed data. Therefore, the proposed scheme maintains the quality of service requested by the user while using network resources efficiently.

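The adaptive behavior described in the abstract above can be illustrated with a small sketch. The following Python fragment is a minimal, hypothetical leaky-bucket policer whose leak rate and buffer threshold are adjusted from a congestion/busy indication (as would be carried by OAM cells); the class name, parameter values, and adjustment factors are illustrative assumptions, not the mechanism defined in the paper.

```python
class AdaptiveLeakyBucketUPC:
    """Minimal sketch of a UPC policer whose leak rate and buffer
    threshold adapt to the network state reported by OAM cells."""

    def __init__(self, leak_rate=10.0, buffer_threshold=50):
        self.leak_rate = leak_rate          # cells drained per unit time
        self.buffer_threshold = buffer_threshold
        self.bucket = 0.0                   # current bucket occupancy
        self.congested = False              # congestion flag (from OAM cells)

    def on_oam_report(self, congested: bool, busy: bool) -> None:
        """Tighten parameters when the network is busy, relax them when idle,
        and block inflow entirely while congestion persists."""
        self.congested = congested
        if busy:
            self.leak_rate = max(1.0, self.leak_rate * 0.5)
            self.buffer_threshold = max(10, self.buffer_threshold // 2)
        else:
            self.leak_rate = min(100.0, self.leak_rate * 2.0)
            self.buffer_threshold = min(200, self.buffer_threshold * 2)

    def on_cell_arrival(self, elapsed: float) -> bool:
        """Return True if the arriving cell is admitted, False if policed."""
        if self.congested:
            return False                    # suspend admission during congestion
        self.bucket = max(0.0, self.bucket - self.leak_rate * elapsed)
        if self.bucket + 1 > self.buffer_threshold:
            return False                    # nonconforming cell
        self.bucket += 1
        return True
```
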

A Study on Participatory Digital Archives (참여형 디지털 아카이브 활성화 방안 연구)

  • Park, Jinkyung;Kim, You-seung
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.28 no.2
    • /
    • pp.219-243
    • /
    • 2017
  • This study aims to provide alternative strategies for promoting active user engagement in participatory archives. It focuses on users and their active participation in digital archives, beyond merely providing opportunities to participate. To this end, the study reviewed relevant literature analyzing the interpretation and development of participatory digital archives, and examined several cases of participatory digital archives with respect to how they handle user participation, policy, and service. As general properties, the main participants, duration, and technology of each archive were examined; technology was further subdivided into open-source software, availability of an Open API, availability of a mobile web, and offline archives. Participation methods were divided into active, hub, and passive participation according to the degree of user participation, and the participation functions provided by each archive were compared and analyzed. In the policy area, the terms of use, personal information processing policy, copyright policy, collection policy, major collections, scope of collections, classification methods, and descriptive elements of each archive were discussed. Services were divided into content, search, and communication areas. Based on this analysis, the study proposes ways of promoting active user engagement in participatory digital archives in terms of participation, policy, content service, and communication service.

A Study on the Thermal Stability of Metal Silicide Nanocrystal Nonvolatile Memory Devices Using $SiO_2/Si_3N_4/SiO_2$ and $Si_3N_4/SiO_2/Si_3N_4$ Tunnel Barriers ($SiO_2/Si_3N_4/SiO_2$ 및 $Si_3N_4/SiO_2/Si_3N_4$ 터널 장벽을 사용한 금속 실리사이드 나노입자 비휘발성 메모리소자의 열적 안정성에 관한 연구)

  • Lee, Dong-Uk;Kim, Seon-Pil;Han, Dong-Seok;Lee, Hyo-Jun;Kim, Eun-Gyu;Yu, Hui-Uk;Jo, Won-Ju
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2010.02a
    • /
    • pp.139-139
    • /
    • 2010
  • Metal silicide nanocrystals have excellent thermal and chemical stability, and the work-function difference with the surrounding dielectric forms a deep quantum-well structure, which makes them suitable for nonvolatile memory devices. However, when a single $SiO_2$ tunnel oxide is used, its physical thickness limits how far the charge retention and the program/erase speed can be improved. In this study, through tunnel-barrier engineering, we fabricated metal silicide nanocrystal nonvolatile memories with $SiO_2/Si_3N_4/SiO_2$ and $Si_3N_4/SiO_2/Si_3N_4$ tunnel barriers that are physically thicker than a single $SiO_2$ layer but whose effective tunneling thickness, as seen by electrons under the electric field applied during program/erase operations, is reduced, thereby improving the operating speed. For fabrication, a 100 nm thick poly-Si layer was first deposited on a p-type silicon wafer, and the source and drain regions were defined by lithography to form the transistor channel. On top of the channel, $SiO_2/Si_3N_4/SiO_2$ (2 nm/2 nm/3 nm) and $Si_3N_4/SiO_2/Si_3N_4$ (2 nm/3 nm/3 nm) stacks were formed by chemical vapor deposition, and 2~5 nm thick $WSi_2$ and $TiSi_2$ films were then deposited by direct-current magnetron sputtering. To form the nanocrystals, the samples were annealed in a rapid thermal annealing (RTA) system at $800{\sim}1000^{\circ}C$ in a nitrogen ($N_2$) ambient for 1~5 minutes. A 30 nm $SiO_2$ control oxide layer was subsequently deposited by radio-frequency magnetron sputtering, followed by post-annealing in the RTA system at $900^{\circ}C$ for 30 seconds in an $N_2$ ambient. Finally, 200 nm Al electrodes were deposited with a thermal evaporator, and nonvolatile memory devices with channel width/length of $2{\sim}5{\mu}m$ were completed through lithography and etching. The electrical characteristics of the fabricated devices were measured with an HP 4156A semiconductor parameter analyzer and an Agilent 81101A pulse generator, and their thermal stability was studied by varying the measurement temperature among $25^{\circ}C$, $85^{\circ}C$, and $125^{\circ}C$.

A Security Nonce Generation Algorithm Scheme Research for Improving Data Reliability and Anomaly Pattern Detection of Smart City Platform Data Management (스마트시티 플랫폼 데이터 운영의 이상패턴 탐지 및 데이터 신뢰성 향상을 위한 보안 난수 생성 알고리즘 방안 연구)

  • Lee, Jaekwan;Shin, Jinho;Joo, Yongjae;Noh, Jaekoo;Kim, Jae Do;Kim, Yongjoon;Jung, Namjoon
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.4 no.2
    • /
    • pp.75-80
    • /
    • 2018
  • The smart city manages urban resources in an integrated way to operate energy systems efficiently and to support growth and a low-carbon society. However, the smart city cannot effectively verify or detect anomalous patterns when existing security technologies (authentication, integrity, confidentiality) are used with a fixed security key that is not renewed as big data is generated. For improvement, this paper proposes a security nonce generation scheme for detecting the anomalous patterns of an adversary, in which the safety of the keys is raised by generating them through a KDC (Key Distribution Center). The proposed scheme distributes the generated security nonces and authentication keys to each facility system via the KDC. Security can be further enhanced by detecting abnormal external patterns and switching to new security keys using the distributed nonces and keys. Therefore, the anomaly pattern detection and the safety of the keys improve the security and reliability of the data managed by the smart city platform.

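As a rough illustration of the KDC-based idea in the abstract above, the following Python sketch generates a fresh security nonce with the standard library and derives a per-facility authentication key from it. The function names, the HMAC-based derivation, and the facility identifiers are assumptions made for illustration, not the algorithm defined in the paper.

```python
import hashlib
import hmac
import secrets

def generate_security_nonce(num_bytes: int = 16) -> bytes:
    """Generate a cryptographically strong random nonce (KDC side)."""
    return secrets.token_bytes(num_bytes)

def derive_facility_key(master_key: bytes, facility_id: str, nonce: bytes) -> bytes:
    """Derive a per-facility authentication key from the KDC master key,
    the facility identifier, and the distributed nonce (illustrative)."""
    return hmac.new(master_key, facility_id.encode() + nonce, hashlib.sha256).digest()

# Example: the KDC issues a new nonce and derived keys for each facility system,
# so keys change whenever a new nonce is distributed (e.g., after an anomaly).
master_key = secrets.token_bytes(32)
nonce = generate_security_nonce()
keys = {fid: derive_facility_key(master_key, fid, nonce)
        for fid in ("substation-01", "meter-17", "ev-charger-03")}
```
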
Development of an open source-based APT attack prevention Chrome extension (오픈소스 기반 APT 공격 예방 Chrome extension 개발)

  • Kim, Heeeun;Shon, Taeshik;Kim, Duwon;Han, Gwangseok;Seong, JiHoon
    • Journal of Platform Technology
    • /
    • v.9 no.3
    • /
    • pp.3-17
    • /
    • 2021
  • Advanced persistent threat (APT) attacks are attacks aimed at a particular entity as a set of latent and persistent computer hacking processes. APT attacks are usually carried out through various methods, including spam mail and disguised banner advertising. Most are distributed via spam mail with attachments disguised as invoices, shipment documents, and purchase orders, and the file names follow the same disguise. Such Infostealer attacks were the most frequently discovered malicious code in the first week of February 2021. CDR, 'Content Disarm & Reconstruction', is a technology that can prevent the risk of malware infection by removing potential security threats from files and recombining them into safe files. Gartner, a global IT advisory organization, recommends CDR as a solution to attacks in the form of attachments. 'Dangerzone' is an open-source program that uses CDR techniques. It supports most document file extensions, but not the HWP format that is widely used in Korea. In addition, while Gmail blocks malicious URLs in advance, mail services such as Naver and Daum do not, so malicious URLs can be distributed easily. Based on these problems, we extended the 'Dangerzone' program to support the HWP extension for APT attack prevention, and developed a Chrome extension that checks URLs in Naver and Daum mail and blocks banner advertisements.

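The extension itself is a Chrome extension, but the URL-checking step it performs on links in Naver and Daum mail can be sketched in Python as below. The blocklist contents and helper name are hypothetical; a real deployment would query a reputation service rather than a hard-coded set.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known-malicious domains; in the actual extension the
# check would run before a link in Naver/Daum mail is opened.
MALICIOUS_DOMAINS = {"evil.example", "phish.example.net"}

def is_url_blocked(url: str) -> bool:
    """Return True if the URL's host (or a parent domain) is on the blocklist."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in MALICIOUS_DOMAINS)

print(is_url_blocked("http://login.phish.example.net/reset"))  # True
print(is_url_blocked("https://mail.naver.com/"))               # False
```
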
An Implementation of the OTB Extension to Produce RapidEye Surface Reflectance and Its Accuracy Validation Experiment (RapidEye 영상정보의 지표반사도 생성을 위한 OTB Extension 개발과 정확도 검증 실험)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_1
    • /
    • pp.485-496
    • /
    • 2022
  • This study implements software to generate atmospheric and surface reflectance products from RapidEye satellite imagery. The software is an extension of Orfeo Toolbox (OTB), an open-source remote sensing package, and includes calibration modules that use an absolute atmospheric correction algorithm. To verify the performance of the program, the accuracy of the product was validated with a test image over a Radiometric Calibration Network (RadCalNet) site. In addition, surface reflectance products generated from a KOMPSAT-3A image and from Landsat Analysis Ready Data (ARD) of the same site, acquired on near dates, were compared with the RapidEye-based product. A comparative study was also carried out against the processing results of the QUick Atmospheric Correction (QUAC) and Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) tools provided by a commercial package for the same image. Like the KOMPSAT-3A-based surface reflectance product, the results obtained from the RapidEye extension agreed with the RadCalNet data to within 5%, and they showed better accuracy in all band images than the results of the QUAC or FLAASH tools. As the importance of the Red-Edge band in agricultural, forest, and environmental applications is increasingly emphasized, the use of surface reflectance products of RapidEye imagery produced with this program is also expected to increase.

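The extension described above performs absolute atmospheric correction inside OTB; as a simplified stand-in, the sketch below converts a calibrated radiance to top-of-atmosphere reflectance with the standard formula $\rho = \pi L d^2 / (ESUN \cos\theta_s)$. This is only the common first step, not the surface-reflectance algorithm of the extension, and all numeric values are placeholders.

```python
import math

def toa_reflectance(radiance: float, esun: float, sun_elevation_deg: float,
                    earth_sun_distance_au: float = 1.0) -> float:
    """Top-of-atmosphere reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s)),
    where theta_s is the solar zenith angle (90 deg minus sun elevation)."""
    theta_s = math.radians(90.0 - sun_elevation_deg)
    return math.pi * radiance * earth_sun_distance_au ** 2 / (esun * math.cos(theta_s))

# Placeholder values for a single Red-Edge pixel (illustrative only).
print(toa_reflectance(radiance=45.0, esun=1560.0, sun_elevation_deg=52.0,
                      earth_sun_distance_au=1.01))
```
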
Sequential Use of COMSOL Multiphysics® and PyLith for Poroelastic Modeling of Fluid Injection and Induced Earthquakes (COMSOL Multiphysics®와 PyLith의 순차 적용을 통한 지중 유체 주입과 유발지진 공탄성 수치 모사 기법 연구)

  • Jang, Chan-Hee;Kim, Hyun Na;So, Byung-Dal
    • The Journal of Engineering Geology
    • /
    • v.32 no.4
    • /
    • pp.643-659
    • /
    • 2022
  • Geologic sequestration technologies such as CCS (carbon capture and storage), EGS (enhanced geothermal systems), and EOR (enhanced oil recovery) have been widely implemented in recent years, prompting evaluation of the mechanical stability of storage sites. As fluid injection can stimulate mechanical instability in storage layers by perturbing the stress state and pore pressure, poroelastic models considering various injection scenarios are required. In this study, we calculate the pore pressure, stress distribution, and vertical displacement along the surface using commercial finite element software (COMSOL); fault slips are subsequently simulated using PyLith, an open-source finite element code. The displacement fields obtained from PyLith are transferred back to COMSOL to determine changes in coseismic stresses and surface displacements. Our sequential use of COMSOL-PyLith-COMSOL for poroelastic modeling of fluid injection and induced earthquakes reveals large variations of pore pressure, vertical displacement, and Coulomb failure stress change during injection periods. On the other hand, the residual stress diffuses into the remote field after injection stops. This pattern suggests the necessity of numerical modeling and long-term monitoring even after injection has stopped. We found that the time at which the Coulomb failure stress reaches the critical point varies greatly with the hydraulic and poroelastic properties (e.g., permeability and Biot-Willis coefficient) of the fault and injection layer. We suggest that an understanding of the detailed physical properties of the surrounding layers is important in selecting an injection site. Our numerical results, showing the surface displacement and deviatoric stress distribution for different amounts of fault slip, highlight the need to test more variable fault slip scenarios.

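A small sketch of the quantity tracked in the abstract above: the Coulomb failure stress change, $\Delta CFS = \Delta\tau + \mu(\Delta\sigma_n + \Delta P)$, written here with normal stress positive in tension so that a pore-pressure increase (unclamping) pushes the fault toward failure. The function and numbers are illustrative and are not part of the COMSOL/PyLith workflow itself.

```python
def coulomb_failure_stress_change(d_shear: float, d_normal: float,
                                  d_pore_pressure: float, friction: float = 0.6) -> float:
    """Delta CFS = d_tau + mu * (d_sigma_n + d_P).

    Sign convention: d_shear is the shear stress change in the slip direction,
    d_normal is positive in tension (unclamping), and a pore-pressure increase
    reduces the effective normal compression, so both terms promote failure.
    Units: any consistent stress unit (e.g., MPa)."""
    return d_shear + friction * (d_normal + d_pore_pressure)

# Illustrative values (MPa): small shear loading plus injection-driven pressurization.
print(coulomb_failure_stress_change(d_shear=0.05, d_normal=-0.02, d_pore_pressure=0.3))
```
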
Evaluation method for interoperability of weapon systems applying natural language processing techniques (자연어처리 기법을 적용한 무기체계의 상호운용성 평가방법)

  • Yong-Gyun Kim;Dong-Hyen Lee
    • Journal of The Korean Institute of Defense Technology
    • /
    • v.5 no.3
    • /
    • pp.8-17
    • /
    • 2023
  • Current weapon systems operate as complex systems to which various standards and protocols are applied, so there is a risk that smooth information exchange will fail during combined and joint operations on the battlefield. The interoperability of weapon systems, which enables precise strikes on key targets through rapid situational judgment between systems, is a key element in the conduct of war. Since these systems entered service in the Korean military, there has been a continuing need to change configurations and improve the performance of a large amount of software and hardware, but there is no system for verifying the impact of such changes on interoperability, and no related test tools or facilities exist. In addition, during combined and joint training, errors frequently occur after the detailed operating methods and software of weapon and power support systems are changed arbitrarily. Periodic verification of interoperability between weapon systems is therefore necessary. Rather than having people schedule an evaluation period and conduct the evaluation once, AI should continuously evaluate the interoperability between weapon and power support systems 24 hours a day to advance warfighting capabilities. To this end, preliminary research was conducted to improve defense interoperability capabilities by applying natural language processing techniques (① the Word2Vec model, ② the FastText model, ③ the Swivel model), using published algorithms and source code. Based on the results of this experiment, we present a methodology (automated evaluation of interoperability requirements and level measurement through natural language processing models) for implementing an automated defense interoperability evaluation tool that does not rely on humans.

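To make the embedding step concrete, the sketch below trains Word2Vec and FastText models with gensim on a toy corpus of tokenized "requirement" sentences and compares term similarities. The corpus and parameters are made up for illustration, and Swivel, the third model named in the paper, is not available in gensim and is omitted here.

```python
from gensim.models import FastText, Word2Vec

# Toy corpus of tokenized interoperability-requirement sentences (illustrative only).
corpus = [
    ["link16", "message", "exchange", "between", "weapon", "systems"],
    ["data", "link", "interoperability", "requirement", "verification"],
    ["message", "format", "standard", "compliance", "test"],
]

w2v = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)
ft = FastText(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

# Cosine similarity between two terms under each embedding model.
print("word2vec:", w2v.wv.similarity("message", "link"))
print("fasttext:", ft.wv.similarity("message", "link"))
```
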
Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization. This problem has drawn the interest of many researchers and created demand for professionals capable of classifying relevant information; hence text classification is introduced. Text classification is a challenging task in modern data analysis that assigns a text document to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts propose a new algorithm or modify an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, instead of proposing or modifying an algorithm, we focus on a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, and such noisy data can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and the target data differ, the features appearing in the two datasets may also differ. We attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data coming from various sources are likely to be formatted differently, traditional machine learning algorithms have difficulty recognizing different data representations at once and generalizing over them. Therefore, to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier. Therefore, we further propose a method called Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.

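RSESLA itself is the paper's contribution and is not reproduced here; the sketch below only shows the generic semi-supervised ingredient mentioned in the abstract, namely self-training over TF-IDF features with unlabeled documents marked as -1, using scikit-learn. The toy corpus stands in for the news/Twitter/blog data and all labels and thresholds are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Toy heterogeneous corpus: labeled news-like texts plus unlabeled blog/tweet-like texts.
docs = [
    "stock market closes higher on tech earnings",      # label 0 (finance)
    "central bank raises interest rates again",          # label 0
    "team wins championship after overtime thriller",    # label 1 (sports)
    "star striker transfers to rival club",              # label 1
    "rates and inflation worry small investors",         # unlabeled
    "fans celebrate the championship parade downtown",   # unlabeled
]
labels = np.array([0, 0, 1, 1, -1, -1])  # -1 marks unlabeled documents

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Self-training: confident pseudo-labels from the base classifier are added iteratively.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.6)
model.fit(X, labels)

new_doc = vectorizer.transform(["injury forces captain out of the final match"])
print(model.predict(new_doc))
```
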
Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services
    • /
    • v.19 no.1
    • /
    • pp.77-86
    • /
    • 2018
  • For the past few years, KISTI has been operating an online simulation execution platform, called EDISON, that allows users to conduct simulations of various scientific applications supplied by diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue arising when conducting those simulations on an online platform stems from the fact that many users simultaneously submit simulation requests (or jobs) with the same (or almost unchanged) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs consume duplicate computing and storage resources at an undesirably fast pace. To overcome excessive resource usage by such identical simulation requests, in this paper we introduce a novel framework, called IceSheet, to efficiently manage simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are utilized not only for inspecting duplicate simulation requests but also for searching existing simulation results via an open-source search engine, ElasticSearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search and reuse of the stored simulation results. We implemented the proposed framework as a prototype using the search engine in conjunction with the online simulation execution platform. Our evaluation of the framework was performed on real simulation execution-provenance records collected on the platform. Once the prototyped IceSheet framework fully functions with the platform, users will be able to quickly search for past parameter values entered into desired simulation software and, if available, receive existing results for the same input parameter values. Therefore, we expect the proposed framework to contribute to eliminating duplicate resource consumption and significantly reducing execution time for requests identical to previously executed simulations.
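
A minimal sketch of the provenance-based duplicate check described above: each simulation request is reduced to a canonical key built from the solver name and its input parameters, and a previously stored result is returned instead of re-executing. The paper's IceSheet framework keeps such records in ElasticSearch; here a plain dictionary stands in for the index, and all names and fields are illustrative.

```python
import hashlib
import json

# In-memory stand-in for the provenance index (ElasticSearch in the paper).
provenance_index: dict[str, dict] = {}

def provenance_key(solver: str, parameters: dict) -> str:
    """Canonical key for a simulation request: solver name plus sorted parameters."""
    canonical = json.dumps({"solver": solver, "parameters": parameters}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def submit_simulation(solver: str, parameters: dict) -> dict:
    """Reuse an existing result when an identical request was already executed."""
    key = provenance_key(solver, parameters)
    if key in provenance_index:
        return provenance_index[key]          # duplicate request: reuse stored result
    result = {"status": "executed", "solver": solver, "parameters": parameters}
    provenance_index[key] = result            # record provenance for future reuse
    return result

first = submit_simulation("cfd_solver", {"reynolds": 5000, "mesh": "fine"})
second = submit_simulation("cfd_solver", {"mesh": "fine", "reynolds": 5000})
print(second is first)  # True: the identical second request reuses the stored result
```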