• Title/Summary/Keyword: Internet Flow (인터넷 플로우)

The Dynamic Interface Representation of Web Sites using EMFG (EMFG를 이용한 웹사이트의 동적 인터페이스 표현)

  • Kim, Eun-Sook;Yeo, Jeong-Mo
    • The KIPS Transactions: Part D / v.15D no.5 / pp.691-698 / 2008
  • Web designers generally use a storyboard, a site map, a flow chart, or a combination of these to represent web sites. These methods, however, have difficulty representing the entire architecture of a web site and are not well suited to describing the detailed flow of web pages. To address these problems, recent work has used the EMFG (Extended Mark Flow Graph). The conventional EMFG representation, however, covers only the static parts of a web site and therefore cannot represent its dynamic interface. Internet use is growing rapidly, and we can hardly imagine work, study, or business without it; at the same time, web sites now have not only more complex and varied architectures but also web pages with dynamic interfaces. We therefore propose an EMFG-based method for representing such web sites, for example sites whose pages vary with time or whose page status or contents vary with mouse operations. We expect our work to be helpful for the design and maintenance of web sites.
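
As a rough illustration of the abstract's idea, the sketch below models a site's pages and event-driven transitions as a plain graph. It is a minimal approximation only: EMFG's actual notation (marks, boxes, arcs) is richer, and the page and event names here are invented for illustration.

```python
# Minimal sketch: a website's dynamic interface as a transition graph.
# EMFG's real notation is richer; node and event names are illustrative.

class Page:
    def __init__(self, name):
        self.name = name
        self.transitions = []          # (event, target) pairs

    def on(self, event, target):
        self.transitions.append((event, target))
        return self

home, gallery, popup = Page("home"), Page("gallery"), Page("popup")
home.on("click:menu", gallery)         # static navigation
home.on("hover:banner", popup)         # mouse-driven state change
gallery.on("after_seconds:5", home)    # time-driven page change

for page in (home, gallery, popup):
    for event, target in page.transitions:
        print(f"{page.name} --[{event}]--> {target.name}")
```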

Security Verification of Korean Open Crypto Source Codes with Differential Fuzzing Analysis Method (차분 퍼징을 이용한 국내 공개 암호소스코드 안전성 검증)

  • Yoon, Hyung Joon;Seo, Seog Chung
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1225-1236 / 2020
  • Fuzzing is an automated software-testing methodology that dynamically tests the security of software by feeding it randomly generated inputs outside the expected range. KISA releases open source code for standard cryptographic algorithms, and many crypto module developers build their modules on this code. If the open source code contains a vulnerability, every cryptographic library that refers to it inherits a potential vulnerability, which may lead to security incidents causing enormous losses later. In this study, we therefore established an appropriate security policy for verifying the safety of block cipher source code such as SEED, HIGHT, and ARIA, and verified that safety using differential fuzzing. In total, 45 vulnerabilities were found in the memory-bug and error-handling categories, and we propose an improvement plan to resolve them.
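
The core of differential fuzzing is feeding identical random inputs to two implementations of the same algorithm and flagging any divergence. The sketch below shows that loop under stated assumptions: the implementation names (e.g. a reference and an alternative SEED encryptor) are hypothetical stand-ins, and the paper's actual harness and security policy are more elaborate.

```python
# Differential-fuzzing sketch: identical random (key, block) pairs go to two
# implementations; any crash or output mismatch is reported as a finding.
import os

def differential_fuzz(impl_a, impl_b, iterations=10_000):
    for i in range(iterations):
        key, block = os.urandom(16), os.urandom(16)   # 128-bit key and block
        try:
            out_a = impl_a(key, block)
        except Exception as e:        # a crash in one implementation is itself a bug
            print(f"[{i}] impl_a raised {e!r}"); continue
        try:
            out_b = impl_b(key, block)
        except Exception as e:
            print(f"[{i}] impl_b raised {e!r}"); continue
        if out_a != out_b:            # behavioral divergence -> potential vulnerability
            print(f"[{i}] divergence: key={key.hex()} block={block.hex()}")

# usage (hypothetical implementations):
# differential_fuzz(seed_encrypt_ref, seed_encrypt_alt)
```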

Policy-based In-Network Security Management using P4 Network Data Plane Programmability (P4 프로그래머블 네트워크를 통한 정책 기반 인-네트워크 보안 관리 방법)

  • Cho, Buseung
    • Convergence Security Journal / v.20 no.5 / pp.3-10 / 2020
  • The Internet and networks are now regarded as essential infrastructure of society, and security threats against them have constantly increased. However, the network switches that actually forward packets can counter these threats only through firewalls or network access control based on fixed rules, so effective defense within the network itself is extremely limited and far from proactive. In this paper, we propose an in-network security framework that uses the high-level data plane programming language P4 (Programming Protocol-independent Packet Processors) to handle DDoS and IP-spoofing attacks at the network level, monitoring all flows in the network in real time and processing the packets of specific attacks at the P4 switch. In addition, by letting the P4 switch apply a network user's or administrator's policy through an SDN (Software-Defined Network) controller, the framework can reflect the various security requirements of the network application environment.
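
To make the per-flow monitoring idea concrete, here is a control-plane flavored sketch in Python: count packets per flow and drop flows exceeding a policy threshold. A real P4 data plane would express this at line rate with registers and match-action tables; the flow key, threshold value, and missing per-interval reset logic here are all simplifications.

```python
# Sketch of the in-network policy: per-flow packet counting with a
# controller-set threshold. Reset logic per time interval is omitted.
from collections import Counter

PACKETS_PER_INTERVAL_LIMIT = 10_000   # policy value pushed via the SDN controller

flow_counts = Counter()               # per-flow packet counters

def on_packet(src_ip: str, dst_ip: str, proto: int) -> str:
    flow = (src_ip, dst_ip, proto)    # simplified 3-tuple flow key
    flow_counts[flow] += 1
    if flow_counts[flow] > PACKETS_PER_INTERVAL_LIMIT:
        return "DROP"                 # suspected DDoS flow: drop in-network
    return "FORWARD"
```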

A Scheme to Support QoS based-on Differentiated Services in MPLS Network (MPLS망에서 Differentiated Services 기반 QoS 지원 방안)

  • 박천관;정원일
    • The Journal of Information Technology / v.4 no.3 / pp.87-100 / 2001
  • The IETF has proposed the Integrated Services model (Int-Serv) and the Differentiated Services model (Diff-Serv) to provide IP QoS in the Internet [1][2]. The Int-Serv model keeps state information for each IP flow, so it can satisfy QoS according to traffic characteristics, but the amount of flow state grows with the number of flows. Diff-Serv instead uses PHBs (Per-Hop Behaviors) with well-defined classes, giving differentiated traffic different services according to its delay and loss sensitivity; because it keeps no per-flow state or signaling information, Diff-Serv can provide diverse services at Internet scale. Since MPLS forwards packets based on labels, a high-performance forwarding engine is easy to implement; MPLS can also set up paths with different and variable bandwidth and assign each path to a particular CoS (Class of Service). It is therefore well suited to supporting the Diff-Serv model among the IETF's IP QoS models. In this paper we propose a scheme that accommodates the Diff-Serv model to provide QoS, and we evaluate system performance under a scheduling plan that differentiates traffic classes.
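
The class-based scheduling the abstract describes can be sketched as strict priority among Diff-Serv-style classes, serving delay-sensitive traffic first. This is a minimal sketch under assumptions: the class names map onto standard Diff-Serv PHBs (EF, AF, best effort), but the queueing discipline is a simplification of the paper's actual scheduling plan.

```python
# Strict-priority sketch over Diff-Serv-style classes: EF (delay-sensitive)
# is always served before AF (loss-sensitive), and AF before best effort.
from collections import deque

queues = {"EF": deque(), "AF": deque(), "BE": deque()}   # priority order

def enqueue(packet, dscp_class):
    queues[dscp_class].append(packet)

def dequeue():
    for cls in ("EF", "AF", "BE"):    # serve higher-priority classes first
        if queues[cls]:
            return queues[cls].popleft()
    return None                       # all queues empty
```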

A Framework of Intelligent Middleware for DNA Sequence Analysis in Cloud Computing Environment (DNA 서열 분석을 위한 클라우드 컴퓨팅 기반 지능형 미들웨어 설계)

  • Oh, Junseok;Lee, Yoonjae;Lee, Bong Gyou
    • Journal of Internet Computing and Services / v.15 no.1 / pp.29-43 / 2014
  • The development of NGS technologies and supporting tools such as scientific workflows has reduced the time required for decoding DNA sequences. Although these automated technologies are changing the genome sequence analysis environment, limited computing resources still hamper analysis. Most scientific workflow systems are pre-built platforms and are highly complex because many functions are implemented in a single platform, and it is difficult to reuse components of pre-built systems in a new system in the cloud environment. Cloud computing technologies can be applied to such systems to reduce analysis time and enable simultaneous analysis of massive DNA sequence data, and Web service techniques can improve interoperability between DNA sequence analysis systems. This paper proposes a workflow-based middleware that supports Web services, a DBMS, and cloud computing, aiming to reduce analysis time with lightweight virtual instances. It uses the DBMS to manage pipeline status and to support the creation of lightweight virtual instances in the cloud environment, and it applies RESTful Web services with simple URIs and XML content to improve interoperability. A performance test of the system, comparing its results with other DNA analysis services, remains to be conducted at the stabilization stage.
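
A RESTful status interface with a simple URI and XML content, as the abstract describes, might look like the standard-library sketch below. The URI scheme (/pipelines/&lt;id&gt;/status) and the in-memory status store are illustrative assumptions standing in for the paper's DBMS-backed middleware interface.

```python
# Minimal RESTful sketch: a simple URI returns pipeline status as XML.
from http.server import BaseHTTPRequestHandler, HTTPServer

PIPELINE_STATUS = {"42": "ALIGNING"}      # stand-in for the DBMS-backed state

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/")   # e.g. pipelines/42/status
        if len(parts) == 3 and parts[0] == "pipelines" and parts[2] == "status":
            status = PIPELINE_STATUS.get(parts[1], "UNKNOWN")
            body = f'<pipeline id="{parts[1]}"><status>{status}</status></pipeline>'
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StatusHandler).serve_forever()
```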

Lesion Detection in Chest X-ray Images based on Coreset of Patch Feature (패치 특징 코어세트 기반의 흉부 X-Ray 영상에서의 병변 유무 감지)

  • Kim, Hyun-bin;Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.23 no.3 / pp.35-45 / 2022
  • Even in recent years, treatment of emergency patients is still often delayed by the shortage of medical resources in marginalized areas. Research on automating the analysis of medical data is ongoing as a way to address the inaccessibility of medical services and the shortage of medical personnel. Computer-vision-based automation of medical inspection requires large data collection and labeling costs for training, and these costs stand out when classifying lesions that are rare, or pathological features and pathogeneses that are hard to define clearly by visual inspection. Anomaly detection is attracting attention as a method that can greatly reduce data collection costs by adopting an unsupervised learning strategy. In this paper, building on existing anomaly detection techniques, we propose the following method for detecting abnormal chest X-ray images: (1) normalize the brightness range of medical images resampled to an optimal resolution; (2) select feature vectors with high representative power from the set of intermediate-level patch features extracted from lesion-free images; (3) measure the difference from the selected lesion-free feature vectors using a nearest-neighbor search algorithm. The proposed system performs anomaly classification and localization simultaneously for each image. We measure and report its anomaly detection performance on PA-projection chest X-ray images under detailed conditions, demonstrating the effect of anomaly detection for medical images with a classification AUROC of 0.705 on a random subset of the PadChest dataset. The proposed system can be used to improve the clinical diagnosis workflow of medical institutions and can effectively support early diagnosis in medically underserved areas.
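
Steps (2) and (3) above, coreset selection over patch features followed by nearest-neighbor scoring, can be sketched in a few lines of numpy. This is a minimal sketch under assumptions: k-center greedy selection is a common choice for such coresets, the random vectors stand in for CNN patch features, and the paper's feature extractor and evaluation setup are omitted.

```python
# Coreset + nearest-neighbor anomaly scoring over patch features.
import numpy as np

def greedy_coreset(features, k):
    """k-center greedy selection over an (n, d) array of patch features."""
    idx = [0]
    dists = np.linalg.norm(features - features[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))            # farthest point from current coreset
        idx.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return features[idx]

def anomaly_score(test_feature, coreset):
    """Distance to nearest lesion-free coreset feature (higher = more anomalous)."""
    return float(np.min(np.linalg.norm(coreset - test_feature, axis=1)))

rng = np.random.default_rng(0)
normal = rng.normal(size=(1000, 128))          # stand-in lesion-free patch features
coreset = greedy_coreset(normal, k=100)
print(anomaly_score(rng.normal(size=128), coreset))             # in-distribution
print(anomaly_score(rng.normal(loc=5.0, size=128), coreset))    # clearly anomalous
```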

FinBERT Fine-Tuning for Sentiment Analysis: Exploring the Effectiveness of Datasets and Hyperparameters (감성 분석을 위한 FinBERT 미세 조정: 데이터 세트와 하이퍼파라미터의 효과성 탐구)

  • Jae Heon Kim;Hui Do Jung;Beakcheol Jang
    • Journal of Internet Computing and Services / v.24 no.4 / pp.127-135 / 2023
  • This paper explores the application of FinBERT, a BERT-based model pre-trained on the financial domain, to sentiment analysis in that domain, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to using FinBERT effectively for accurate sentiment analysis by employing various datasets and fine-tuning hyperparameters. We outline the architecture and workflow of the proposed fine-tuning approach, emphasizing how different datasets and hyperparameters perform on sentiment analysis tasks, and we verify the reliability of GPT-3 as an annotator by using it for sentiment-labeling tasks. Our results show that the fine-tuned FinBERT model excels across a range of datasets and that the optimal combination, a learning rate of 5e-5 and a batch size of 64, performs consistently well across all of them. Furthermore, based on the significantly larger performance improvement FinBERT achieved on our general-domain Twitter data than on our general-domain news data, we question the value of further pre-training the model only on financial news data. We simplify the complex process of determining the optimal approach to FinBERT and provide guidelines for selecting additional training datasets and hyperparameters within the fine-tuning process of financial sentiment analysis models.
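
The reported best configuration (learning rate 5e-5, batch size 64) maps directly onto a Hugging Face fine-tuning setup. The sketch below is hedged: "ProsusAI/finbert" is a commonly used FinBERT checkpoint and the two-example toy dataset is an invented placeholder; the paper's exact checkpoint, datasets, and GPT-3 labeling step are not reproduced.

```python
# Fine-tuning sketch with the study's best hyperparameters (lr 5e-5, batch 64).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert")
model = AutoModelForSequenceClassification.from_pretrained(
    "ProsusAI/finbert", num_labels=3)          # positive / negative / neutral

raw = Dataset.from_dict({                      # toy placeholder dataset
    "text": ["Shares rallied after earnings beat.", "Guidance cut; stock slides."],
    "label": [0, 1],
})
tokenized = raw.map(lambda b: tokenizer(b["text"], truncation=True, padding=True),
                    batched=True)

args = TrainingArguments(
    output_dir="finbert-sentiment",
    learning_rate=5e-5,                        # best-performing rate in the study
    per_device_train_batch_size=64,            # best-performing batch size
    num_train_epochs=3,
)
Trainer(model=model, args=args, train_dataset=tokenized).train()
```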

An Analysis of Collective Intelligence Structures by Mode of Knowledge Production: Focusing on Naver 지식iN and Wikipedia (지식 생산 방식에 따른 집단지성 구조 분석 -네이버 지식IN과 위키피디아를 중심으로-)

  • Han, Chang-Jin
    • Proceedings of the Korean HCI Society Conference (한국HCI학회 학술대회논문집) / 2009.02a / pp.1363-1373 / 2009
  • Based on the structural and experiential differences between the two most representative collective-intelligence services, Naver 지식iN and Wikipedia, this study sets up models of the production cycle, production participants, and products at the level of production, verifies them against newly created knowledge, and derives a comprehensive model for each service that reflects final knowledge-consumption behavior. Collective intelligence has become an everyday part of the Web. As the medium of knowledge acquisition shifted from mass media to the Internet, portals and search sites opened the possibility of knowledge production being reorganized from an expert paradigm to a consumer-centered one, and this change in the mode of production is also changing the concept of "knowledge" itself. That is, we can derive the theoretical hypothesis that collective intelligence, a new Web 2.0 phenomenon, changes the mode of knowledge production, and that the changed mode of production changes "knowledge" itself. To analyze these new phenomena, this study starts from the need to define the concept of collective intelligence more rigorously, building a more precise model of collective intelligence by comparing the rapidly growing wiki-style and knowledge-search-style Internet services that are both currently called collective intelligence. The differences between wiki-type and knowledge-search-type collective intelligence can also be clearly confirmed empirically. Based on these empirical differences and findings from the existing literature, the study divides the knowledge-production modes of the two services into three areas (production flow, producer characteristics, and product/knowledge characteristics), sets up a hypothetical model for each, verifies the models against selected query terms, and derives final models. Knowledge-search-type collective intelligence has a "question-answer-adoption" structure, within which a unit of knowledge called a "K-let" is produced through the stages of questioning, answering, and ordering. Produced K-lets accumulate in the knowledge-search service's database and are retrieved and consumed by consumers via shared query terms. Multiple answers exist for each question, and answerers divide broadly into expert-type answerers, oriented toward expertise and systematicity, and conversational answerers, oriented toward experience and opinion. Because knowledge production proceeds through the participation of many netizens, questions were likewise modeled as spanning a broad spectrum of facts, opinions, and experiences. Wiki-type collective intelligence, by contrast, takes the form of an encyclopedia built on an open platform, producing a never-finished article, a "W-let", through the initial registration of a concept term followed by numerous editing activities. A W-let stabilizes through intensive content entry by a few contributors early in its life, then stays alive through steady revision by many, thereby reflecting actual changes in knowledge. Produced W-lets accumulate in the wiki service's database and are all connected through internal links. An article explaining a single concept term in encyclopedia form consists only of factual knowledge, but was modeled as gaining a broad spectrum through internal and external links. With these models in place, common query and concept terms were selected and submitted to each service, and the data needed to verify each model was extracted from the accumulated databases over a fixed period and analyzed. The results show that in knowledge-search-type collective intelligence, many participants within the "question-answer-adoption" structure continuously produce completed K-lets arranged as question / adopted answer / other answers, and K-lets of similar character are produced repeatedly and accumulate in the database. Knowledge consumers retrieve various K-lets through query search, compare and review them, and the arrangement of the selected K-lets is then dismantled and rearranged by the consumers. Knowledge-search-type collective intelligence can thus be defined as a process of knowledge customization, in which knowledge produced and accumulated by the many is dismantled and rearranged through consumers' searching and selection. In wiki-type collective intelligence, by contrast, living W-lets are created within a "content entry - fine revision" structure; W-lets are organized like an encyclopedia, connected through internal links and extended through external links, and knowledge consumers reach an initial W-let through search and then expand their knowledge by following links. Wiki-type collective intelligence can therefore be defined as a process of expanded reproduction of knowledge, in which knowledge produced and organized by the many is extended without limit through consumers' searching and linking. In the end, today's collective intelligence means a process in which knowledge, produced through the participation of the many, is customized to individuals and continuously reproduced on an expanding scale, and this mode of collective intelligence will gradually extend beyond the present domain of knowledge into all areas of society, including politics and the economy. Future research should focus on what new models today's collective intelligence, in which the two models are intermixed, will create as it extends into other domains.

A Study on the Security Vulnerability Analysis of an Open Automated Demand Response System (개방형 자동 수요 반응 시스템 보안 취약성 분석에 관한 연구)

  • Chae, Hyeon-Ho;Lee, June-Kyoung;Lee, Kyoung-Hak
    • Journal of Digital Convergence / v.14 no.5 / pp.333-339 / 2016
  • Internet-based demand-management technology that optimizes the use and supply of electric power between consumers and suppliers has been on the rise in the smart grid power market. OpenADR 2.0b is the Open Automated Demand Response protocol that can deliver the demand-response signals needed for power demand management to electricity suppliers, system suppliers, and even end users. This paper uses the most credible and widely distributed EPRI open source implementation and analyzes the variety of security vulnerabilities that VEN and VTN systems developed from it may have. Using a simulator that attacks the OpenADR protocol, we attacked the VEN/VTN system implemented from the EPRI open source in a variety of ways. The analysis shows that the VEN/VTN system is vulnerable to parameter-tampering attacks and service-flow falsification attacks. In conclusion, anyone implementing an OpenADR 2.0b protocol system in an open or two-way-communication smart grid network should consider these various vulnerabilities and be sure to adopt appropriate security technologies and services.
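
A parameter-tampering probe of the kind the abstract describes might look like the sketch below: malformed values are substituted into one field of a poll request and the server's responses are watched for anomalies. Everything here is an assumption for illustration: the test URL, the pared-down payload skeleton (styled after OpenADR 2.0b), and the tampered values; the paper's attack simulator and the EPRI VEN/VTN internals are not reproduced.

```python
# Parameter-tampering sketch against a (hypothetical, local) VTN endpoint.
import requests

VTN_URL = "http://localhost:8080/OpenADR2/Simple/2.0b/OadrPoll"  # assumed test URL

TAMPERED_VEN_IDS = ["ven1", "ven1'--", "A" * 65536, "\x00\x00", "../../etc"]

for ven_id in TAMPERED_VEN_IDS:
    payload = (f"<oadrPayload><oadrSignedObject><oadrPoll>"
               f"<venID>{ven_id}</venID>"
               f"</oadrPoll></oadrSignedObject></oadrPayload>")
    try:
        r = requests.post(VTN_URL, data=payload,
                          headers={"Content-Type": "application/xml"}, timeout=5)
        print(len(ven_id), r.status_code)    # unexpected 5xx hints at a handling bug
    except requests.RequestException as e:
        print(len(ven_id), f"transport error: {e!r}")
```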

eXtensible Rule Markup Language (XRML): Design Principles and Application (확장형 규칙 표식 언어(eXtensible Rule Markup Language) : 설계 원리 및 응용)

  • 이재규;손미애;강주영
    • Journal of Intelligence and Information Systems / v.8 no.1 / pp.141-157 / 2002
  • eXtensible Markup Language (XML) is a new markup language for data exchange on the Internet. In this paper, we propose the eXtensible Rule Markup Language (XRML), an extension of XML. The implicit rules embedded in Web pages should be identifiable, interchangeable in a structured rule format, and finally accessible by various applications; XRML makes this possible. In this light, Web-based Knowledge Management Systems (KMS) can be integrated with rule-based expert systems. To this end, we propose six design criteria: expressional completeness, relevance linkability, polymorphous consistency, applicative universality, knowledge integrability, and interoperability. We further propose three components, RIML (Rule Identification Markup Language), RSML (Rule Structure Markup Language), and RTML (Rule Triggering Markup Language), together with their Document Type Definitions (DTDs). We have designed XRML version 0.5 as described and developed a prototype named Form/XRML, an automated form-processing application for disbursement of research funds at the Korea Advanced Institute of Science and Technology (KAIST). Since XRML allows both humans and software agents to use the rules, its application potential is huge. We expect XRML to contribute to the progress of Semantic Web platforms, making knowledge management and e-commerce more intelligent. Since many research groups and vendors are investigating this issue, it will not be long before commercial XRML products appear, and mature XRML applications may change the way information and knowledge systems are designed in the near future.
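
The central idea, that a rule embedded in a Web page becomes structured markup a program can evaluate, can be sketched as below. The tag set is an invented approximation for illustration only; the actual RIML/RSML/RTML schemas are defined by the paper's DTDs.

```python
# Sketch of the RSML idea: a structured rule evaluated directly by software.
import xml.etree.ElementTree as ET

RSML_RULE = """
<rule name="disbursement-approval">
  <if variable="amount" op="le" value="1000000"/>
  <then action="approve"/>
  <else action="require-director-signature"/>
</rule>
"""

def evaluate(rule_xml, facts):
    rule = ET.fromstring(rule_xml)
    cond = rule.find("if")
    holds = facts[cond.get("variable")] <= float(cond.get("value"))  # op "le" only
    branch = rule.find("then" if holds else "else")
    return branch.get("action")

print(evaluate(RSML_RULE, {"amount": 450000}))    # -> approve
print(evaluate(RSML_RULE, {"amount": 2500000}))   # -> require-director-signature
```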
