• Title/Summary/Keyword: LINUX

Design and Implementation of a Metadata Structure for Large-Scale Shared-Disk File System (대용량 공유디스크 파일 시스템에 적합한 메타 데이타 구조의 설계 및 구현)

  • 이용주;김경배;신범주
    • Journal of KIISE:Computer Systems and Theory / v.30 no.1 / pp.33-49 / 2003
  • Recently, the demand for storage capable of handling multimedia data has grown rapidly. One major approach to meeting this demand is the SAN (Storage Area Network), which serves local file requests directly from shared-disk storage, eliminates server bottlenecks in performance and availability, and improves network latency and bandwidth through new channel interfaces such as FC (Fibre Channel). However, traditional local and distributed file systems are not well suited to an efficient storage network like a SAN, and little research exists on metadata structures for large-scale inode objects such as files and directories. In this paper, we describe the architecture and design issues of our shared-disk file system, which provides an efficient bitmap for well-formed block allocation on each host, an extent-based semi-flat structure for storing large files, and a two-phase directory structure based on extendible hashing. We also describe a detailed algorithm for implementing the file system's device driver in the Linux kernel, and compare our file system with a general-purpose file system (EXT2) and a shared-disk file system (GFS) in terms of file creation, directory creation, and I/O rate.
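
The two-phase directory structure mentioned above rests on extendible hashing: a top-level directory of 2^d slots selects a small bucket, which is then scanned for the exact name. The sketch below illustrates only that lookup path; the bucket layout, hash function, and names are assumptions made here for illustration, and bucket splitting, directory doubling, and the paper's on-disk format are all omitted.

```c
/* Minimal sketch of a directory lookup with extendible hashing, the idea
 * behind the paper's two-phase directory structure.  Layout and names are
 * illustrative only; splitting and directory doubling are omitted.        */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BUCKET_CAP 4

struct dirent_rec {                 /* directory entry: name -> inode number */
    char     name[32];
    uint64_t ino;
};

struct bucket {
    int               count;
    struct dirent_rec recs[BUCKET_CAP];
};

struct ext_dir {
    int             global_depth;   /* directory has 2^global_depth slots   */
    struct bucket **slots;          /* several slots may share one bucket   */
};

static uint64_t name_hash(const char *name)   /* FNV-1a over the file name */
{
    uint64_t h = 1469598103934665603ULL;
    for (; *name; name++) {
        h ^= (unsigned char)*name;
        h *= 1099511628211ULL;
    }
    return h;
}

/* Phase 1: pick a bucket by the low global_depth bits of the name hash.
 * Phase 2: scan the small bucket linearly for the exact name.             */
static const struct dirent_rec *ext_dir_lookup(const struct ext_dir *d,
                                               const char *name)
{
    uint64_t mask = ((uint64_t)1 << d->global_depth) - 1;
    const struct bucket *b = d->slots[name_hash(name) & mask];

    for (int i = 0; i < b->count; i++)
        if (strcmp(b->recs[i].name, name) == 0)
            return &b->recs[i];
    return NULL;                    /* name not present in this directory */
}

int main(void)
{
    struct bucket  shared = { 1, { { "readme.txt", 1234 } } };
    struct bucket *slots[2] = { &shared, &shared };  /* both slots share one bucket */
    struct ext_dir dir = { 1, slots };               /* global depth 1: two slots   */

    const struct dirent_rec *e = ext_dir_lookup(&dir, "readme.txt");
    printf("readme.txt -> inode %llu\n", e ? (unsigned long long)e->ino : 0ULL);
    return 0;
}
```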

The Design and Implementation of High Performance Intrusion Prevention Algorithm based on Signature Hashing (시그너처 해싱 기반 고성능 침입방지 알고리즘 설계 및 구현)

  • Wang, Jeong-Seok;Jung, Yun-Jae;Kwon, Hui-Ung;Chung, Kyu-Sik;Kwak, Hu-Keun
    • The KIPS Transactions:PartC / v.14C no.3 s.113 / pp.209-220 / 2007
  • An IPS (Intrusion Prevention System), installed inline in a network, protects the network from outside attacks by inspecting incoming/outgoing packets and sessions, dropping packets or closing sessions when an attack is detected. In signature-based filtering, the payload of a packet passing through the IPS is matched against attack patterns called signatures, and the packet is dropped if a match is found. As the number of signatures grows, the pattern-matching time per packet grows accordingly, so it becomes difficult to build a high-performance IPS that works without packet delay. In this paper, we propose a high-performance IPS based on signature hashing that makes the pattern-matching time independent of the number of signatures. We implemented the proposed scheme as a Linux kernel module on a PC and tested it using a worm generator, a packet generator, and a network performance measurement instrument (SmartBits). Experimental results show that the performance of the existing method degrades as the number of signatures increases, whereas the performance of the proposed scheme does not.
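
The core idea of signature hashing, as described here and in the follow-up paper further down this list, is to index each signature by a 2-byte sample so that only the rules in the matching hash slot need a full pattern match. The sketch below, with hypothetical structure and function names, illustrates that dispatch; it is not the paper's kernel-module implementation.

```c
/* Minimal sketch of signature hashing for payload inspection: each rule
 * contributes a 2-byte sample that indexes a hash table, so only the rules
 * hanging off the matching slot are pattern-matched against the payload.
 * All structures and names are illustrative.                              */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define HASH_SLOTS 65536          /* one slot per possible 2-byte value */

struct sig_rule {
    const char      *pattern;     /* full signature to match              */
    struct sig_rule *next;        /* rules sharing the same 2-byte sample */
};

static struct sig_rule *slot[HASH_SLOTS];

static unsigned hash2(const unsigned char *p)
{
    return (unsigned)p[0] << 8 | p[1];
}

/* Register a rule under the hash of its first two bytes. */
static void add_rule(struct sig_rule *r)
{
    unsigned h = hash2((const unsigned char *)r->pattern);
    r->next = slot[h];
    slot[h] = r;
}

/* Slide a 2-byte window over the payload; only rules in the matching slot
 * are candidates for the (expensive) full pattern match at that offset.   */
static const struct sig_rule *inspect(const unsigned char *payload, size_t len)
{
    for (size_t i = 0; i + 1 < len; i++) {
        for (struct sig_rule *r = slot[hash2(payload + i)]; r; r = r->next) {
            size_t plen = strlen(r->pattern);
            if (i + plen <= len && memcmp(payload + i, r->pattern, plen) == 0)
                return r;          /* attack signature found */
        }
    }
    return NULL;                   /* payload is clean */
}

int main(void)
{
    static struct sig_rule r = { "EVILPATTERN", NULL };
    const unsigned char payload[] = "GET /index.html?q=EVILPATTERN HTTP/1.0";

    add_rule(&r);
    printf("%s\n", inspect(payload, sizeof(payload) - 1) ? "attack detected"
                                                         : "clean");
    return 0;
}
```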

A Study on ICS Security Information Collection Method Using CTI Model (CTI 모델 활용 제어시스템 보안정보 수집 방안 연구)

  • Choi, Jongwon;Kim, Yesol;Min, Byung-gil
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.2 / pp.471-484 / 2018
  • Recently, cyber threats have frequently targeted the ICS (industrial control systems) of government agencies, infrastructure operators, and manufacturing companies. To cope with such threats, CTI (Cyber Threat Intelligence) needs to be applied to ICS, which in turn requires a security-information collection system. However, it is difficult to install security solutions on control devices such as PLCs, and therefore difficult to collect security information from ICS. In addition, the security information generated by different assets comes in different formats. In this paper, we therefore propose an efficient method for collecting ICS security information. We utilize the CybOX/STIX/TAXII CTI models, which are straightforward to apply to ICS, and use them to design formats for collecting the security information of ICS assets. We created formats for the system logs, IDS logs, and EWS application logs of ICS assets running Windows and Linux. We also designed and implemented a security-information collection system that reflects these formats. The system can be used to apply monitoring and CTI to future ICS.
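
As a rough illustration of what a unified collection format might look like, the sketch below defines a single normalized record type covering system, IDS, and EWS application logs and serializes it for transport. The field names and the key=value serialization are assumptions made here; the paper's actual CybOX/STIX/TAXII-based formats are not reproduced.

```c
/* Minimal sketch of a normalized security-information record for ICS log
 * collection; all field names are illustrative.                           */
#include <stdio.h>
#include <time.h>

enum ics_log_source { LOG_SYSTEM, LOG_IDS, LOG_EWS_APP };

struct ics_sec_record {
    enum ics_log_source source;     /* which collector produced the record */
    char   asset_id[32];            /* e.g. an EWS or HMI host identifier  */
    char   os[16];                  /* "Windows" or "Linux"                */
    time_t observed_at;             /* observation timestamp               */
    char   category[32];            /* e.g. "auth-failure", "ids-alert"    */
    char   detail[256];             /* free-form message from the raw log  */
};

/* Serialize one record as a line of key=value pairs for the transport
 * channel (illustrative only).                                            */
static void emit_record(FILE *out, const struct ics_sec_record *r)
{
    static const char *names[] = { "system", "ids", "ews-app" };
    fprintf(out, "source=%s asset=%s os=%s time=%ld category=%s detail=\"%s\"\n",
            names[r->source], r->asset_id, r->os,
            (long)r->observed_at, r->category, r->detail);
}

int main(void)
{
    struct ics_sec_record r = {
        .source = LOG_IDS, .asset_id = "EWS-01", .os = "Linux",
        .observed_at = time(NULL), .category = "ids-alert",
        .detail = "suspicious Modbus write to PLC"
    };
    emit_record(stdout, &r);
    return 0;
}
```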

An Improved Signature Hashing Algorithm for High Performance Network Intrusion Prevention System (고성능 네트워크 침입방지시스템을 위한 개선된 시그니처 해싱 알고리즘)

  • Ko, Joong-Sik;Kwak, Hu-Keun;Wang, Jeong-Seok;Kwon, Hui-Ung;Chung, Kyu-Sik
    • The KIPS Transactions:PartC / v.16C no.4 / pp.449-460 / 2009
  • The signature hashing algorithm [9] provides fast pattern matching for a network IPS (Intrusion Prevention System) by using a hash table. It selects 2 bytes from each signature rule and links the rule into the hash table by the resulting hash value. This improves performance because it reduces the number of rules inspected during pattern matching. However, performance drops if many rules map to the same hash value, which happens when the number of rules is large and the correlation among rules is strong. In this paper, we propose a method that distributes all rules evenly over the hash table regardless of the number of rules and the correlation among them, overcoming this disadvantage of the signature hashing algorithm. Before a new rule is linked to a hash value in the table, the proposed method checks whether another rule is already assigned to that value. If not, the new rule is linked to the hash value; otherwise, the method recalculates a hash value to place the rule in another position. We implemented the proposed method as a Linux module on a PC and performed experiments using Iperf as a network performance measurement tool. The original signature hashing method shows a performance drop when many rules share a hash value, whereas the proposed method shows no performance drop regardless of the number of rules and the correlation among them.
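
The improvement described above amounts to resolving hash-table collisions at rule-insertion time by recomputing the hash until the rule lands in a free slot. A minimal sketch of that insertion step follows; the probing scheme (a hash-derived odd step) and all names are illustrative assumptions, not the paper's exact method.

```c
/* Minimal sketch of the collision-avoiding insertion idea: if the slot for a
 * rule's 2-byte sample is taken, rehash (probe with a hash-derived odd step)
 * until a free slot is found, so rules stay spread one-per-slot.           */
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 65536u                /* power of two                    */

struct rule { const char *pattern; };

static const struct rule *table[TABLE_SIZE];

static uint32_t primary_hash(const struct rule *r)
{
    const unsigned char *p = (const unsigned char *)r->pattern;
    return ((uint32_t)p[0] << 8 | p[1]) & (TABLE_SIZE - 1);
}

/* Insert with rehashing: an odd step guarantees every slot is reachable. */
static int insert_rule(const struct rule *r)
{
    uint32_t h    = primary_hash(r);
    uint32_t step = (primary_hash(r) * 2654435761u | 1u) & (TABLE_SIZE - 1);

    for (uint32_t i = 0; i < TABLE_SIZE; i++) {
        uint32_t slot = (h + i * step) & (TABLE_SIZE - 1);
        if (table[slot] == NULL) {
            table[slot] = r;
            return (int)slot;            /* placed without forming a chain */
        }
    }
    return -1;                           /* table full */
}

int main(void)
{
    static const struct rule r1 = { "ABCD" }, r2 = { "ABXY" };   /* same 2-byte prefix */
    printf("r1 -> slot %d\n", insert_rule(&r1));
    printf("r2 -> slot %d\n", insert_rule(&r2));                 /* lands elsewhere    */
    return 0;
}
```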

A File System for User Special Functions using Speed-based Prefetch in Embedded Multimedia Systems (임베디드 멀티미디어 재생기에서 속도기반 미리읽기를 이용한 사용자기능 지원 파일시스템)

  • Choe, Tae-Young;Yoon, Hyeon-Ju
    • Journal of KIISE:Computing Practices and Letters / v.14 no.7 / pp.625-635 / 2008
  • Portable multimedia players have properties that differ from those of general multimedia file servers: single-user ownership, relatively low hardware performance, I/O bursts caused by special user functions, and short software development cycles. General multimedia file systems, though well suited to serving many user requests at a time, are not efficient for special user functions such as fast forward/backward. Some methods have been proposed to improve performance and functionality by letting application programs give prediction hints to the file system; unfortunately, they require modifying and recompiling every application. In this paper, we present a file system that efficiently supports special user functions in embedded multimedia systems using file block allocation, buffer cache, and prefetching. The prefetch algorithm, SPRA (SPeed-based PRefetch Algorithm), predicts the next block from the observed I/O pattern instead of relying on hints from applications, and it resides in the file system, so it does not affect the application development process. An experimental implementation and comparison with Linux readahead-based algorithms show that the proposed system achieves 4.29%-52.63% of their turnaround time and 1.01 to 3.09 times their throughput on average.
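
SPRA's key point is that the prefetch target is derived from the observed access stride (which encodes the playback speed) rather than from application hints. The user-level sketch below, with hypothetical names and a fixed prefetch depth, illustrates that idea; the real algorithm runs inside the file system.

```c
/* Minimal sketch of speed-based prefetching: estimate the current playback
 * "speed" as the stride between recently accessed blocks and prefetch along
 * that stride, so fast-forward/backward I/O patterns are still predicted.  */
#include <stdio.h>

#define PREFETCH_DEPTH 4

struct speed_prefetcher {
    long last_block;    /* most recently accessed block number           */
    long stride;        /* estimated blocks per request (signed: rewind) */
    int  primed;        /* have we seen at least one access?             */
};

/* Record an access and return the estimated stride (playback speed). */
static long sp_access(struct speed_prefetcher *sp, long block)
{
    if (sp->primed)
        sp->stride = block - sp->last_block;   /* e.g. 1 = normal, 4 = 4x FF */
    sp->last_block = block;
    sp->primed = 1;
    return sp->stride;
}

/* Issue prefetches along the estimated stride. */
static void sp_prefetch(const struct speed_prefetcher *sp,
                        void (*readahead)(long block))
{
    if (!sp->primed || sp->stride == 0)
        return;
    for (int i = 1; i <= PREFETCH_DEPTH; i++)
        readahead(sp->last_block + i * sp->stride);
}

static void fake_readahead(long block) { printf("prefetch block %ld\n", block); }

int main(void)
{
    struct speed_prefetcher sp = { 0, 1, 0 };
    long trace[] = { 100, 104, 108 };          /* 4x fast-forward pattern     */
    for (int i = 0; i < 3; i++)
        sp_access(&sp, trace[i]);
    sp_prefetch(&sp, fake_readahead);          /* prefetches 112,116,120,124  */
    return 0;
}
```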

Design and Implementation of a Hardware-based Transmission/Reception Accelerator for a Hybrid TCP/IP Offload Engine (하이브리드 TCP/IP Offload Engine을 위한 하드웨어 기반 송수신 가속기의 설계 및 구현)

  • Jang, Han-Kook;Chung, Sang-Hwa;Yoo, Dae-Hyun
    • Journal of KIISE:Computer Systems and Theory / v.34 no.9 / pp.459-466 / 2007
  • TCP/IP processing imposes a heavy load on the host CPU when it is performed by the host CPU on a very high-speed network. Recently the TCP/IP Offload Engine (TOE), which processes TCP/IP on a network adapter instead of the host CPU, has become an attractive solution for reducing this load. There have been two approaches to implementing a TOE: the software TOE, in which TCP/IP is processed by an embedded processor, and the hardware TOE, in which TCP/IP is processed by a dedicated ASIC. The software TOE has poor performance, and the hardware TOE is neither flexible nor expandable enough to add new features. In this paper we designed and implemented a hybrid TOE architecture, in which TCP/IP is processed by cooperation between hardware and software, based on an FPGA that has two embedded processor cores. The hybrid TOE achieves high performance by processing time-critical operations, such as creating and processing data packets, in hardware. Software based on embedded Linux performs operations that are not time-critical, such as connection establishment, flow control, and congestion control, so the hybrid TOE retains sufficient flexibility and expandability. To improve the performance of the hybrid TOE, we developed a hardware-based transmission/reception accelerator that handles important operations such as creating data packets. In the experiments the hybrid TOE shows a minimum latency of about 19 μs, a CPU utilization below 6%, and a maximum bandwidth of about 675 Mbps.

Development of Haplotype Reconstruction System Using Public Resources (공개용 리소스를 활용한 Haplotype 재조합 시스템 개발)

  • Kim, Ki-Bong
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.2 / pp.720-726 / 2010
  • Haplotype-based research has become increasingly important in the field of personalized medicine, since a haplotype reflects a set of SNPs (Single Nucleotide Polymorphisms) that are genetically associated and inherited together. Currently, the most widely used in silico applications for haplotype reconstruction are PL-EM, Haplotyper, PHASE, and HAP. PL-EM, Haplotyper, and PHASE are command-line applications running on Linux or UNIX systems, while HAP is a web-based client-server application. This paper describes an integrated haplotype reconstruction system built on PL-EM and Haplotyper, which were selected by an accuracy test of the public software against experimentally verified data. The integrated system is a client-server application with a user-friendly web interface and provides end users with high-quality haplotype analysis. SNP genotype data of length 5 from 5 people and SNP genotype data of length 13 from 15 people were used to test the analysis results of Haplotyper and PL-EM, respectively. The system was confirmed to provide systematic and easy-to-understand analysis results consisting of two main parts: individual haplotype information and haplotype pool information. In this respect, the integrated system can serve as a useful tool for discovering disease-related genes and developing personalized drugs by facilitating the reconstruction of haplotype maps.

Attacking OpenSSL Shared Library Using Code Injection (코드 주입을 통한 OpenSSL 공유 라이브러리의 보안 취약점 공격)

  • Ahn, Woo-Hyun;Kim, Hyung-Su
    • Journal of KIISE:Computer Systems and Theory / v.37 no.4 / pp.226-238 / 2010
  • OpenSSL is an open-source library implementing SSL, a secure communication protocol. However, the library has a severe vulnerability: its security information can easily be exposed to malicious software when it is used in the form of a shared library on Linux and UNIX operating systems. We propose a scheme that attacks this vulnerability of the OpenSSL library. The scheme injects code into a running client program to carry out the following attacks on an SSL handshake. First, when the client sends the server the list of cryptographic algorithms it is willing to support, our scheme replaces all algorithms in the list with a specific algorithm, which causes the server to select that algorithm. Second, the scheme steals the key for data encryption and decryption when the key is generated and sends it to an outside attacker. The outside attacker then decrypts the encrypted data transmitted between the client and the server using the specific algorithm and the stolen key. To show that our scheme is realizable, we perform an experiment in which encrypted login data sent by an FTP client using the OpenSSL shared library is collected and then decrypted.

Wireless u-PC: Personal workspace on an Wireless Network Storage (Wireless u-PC : 무선 네트워크 스토리지를 이용한 개인 컴퓨팅 환경의 이동성을 지원하는 서비스)

  • Sung, Baek-Jae;Hwang, Min-Kyung;Kim, In-Jung;Lee, Woo-Joong;Park, Chan-Ik
    • Journal of KIISE:Computing Practices and Letters / v.14 no.9 / pp.916-920 / 2008
  • A personal workspace consists of a user-specified computing environment: the user profile, applications and their configurations, and user data. Mobile computing devices (cellular phones, PDAs, laptop computers, Ultra Mobile PCs) keep getting smaller and lighter in order to provide the personal workspace ubiquitously. At the same time, various workspace-mobility solutions (cf. VMware Pocket ACE [1], MojoPac [2], u-PC [3], etc.) have appeared with the advance of virtualization and portable storage technology, allowing the personal workspace to be loaded on a public PC. In our previous work, we proposed a framework called the ubiquitous personal computing environment (u-PC) that supports mobility of the personal workspace based on wireless iSCSI network storage. However, the previous u-PC could support only a limited set of applications, because it used IRP (I/O Request Packet) forwarding at the filter-driver level on Windows. In this paper, we implement OS-level virtualization using system call hooking on Windows. It supports personal workspace mobility and removes the limitation of the previous u-PC. It also avoids the workspace loading overhead that limits other solutions (e.g., VMware Pocket ACE, MojoPac). We implemented a prototype consisting of a Windows XP-based host PC and a Linux-based mobile device connected via the WiNET protocol of UWB, and we present several use-case models of our framework to demonstrate its usability.
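
The prototype hooks Windows system calls, for which no code is given here. As a rough Linux-flavored analogy of the same interposition idea, the sketch below uses LD_PRELOAD to redirect open() calls from a local home directory to a workspace mounted from network storage; all paths and the build/run commands are hypothetical, and this is not the paper's Windows implementation.

```c
/* redirect_open.c -- illustrative LD_PRELOAD interposer (Linux analogy to the
 * paper's Windows system-call hooking): open() calls under /home/user are
 * redirected to a workspace mounted from network storage at /mnt/u-pc.
 * Paths and names are hypothetical.
 *
 * Build:  gcc -shared -fPIC redirect_open.c -o redirect_open.so -ldl
 * Use:    LD_PRELOAD=./redirect_open.so some_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

#define LOCAL_PREFIX  "/home/user/"
#define REMOTE_PREFIX "/mnt/u-pc/"

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...) = NULL;
    char redirected[4096];
    mode_t mode = 0;
    va_list ap;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    if (flags & O_CREAT) {               /* open() takes a mode only with O_CREAT */
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    if (strncmp(path, LOCAL_PREFIX, strlen(LOCAL_PREFIX)) == 0) {
        snprintf(redirected, sizeof(redirected), REMOTE_PREFIX "%s",
                 path + strlen(LOCAL_PREFIX));
        path = redirected;               /* send the access to the u-PC workspace */
    }

    return real_open(path, flags, mode);
}
```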

Design and Implementation of Static Program Analyzer Finding All Buffer Overrun Errors in C Programs (C 프로그램의 버퍼 오버런(buffer overrun) 오류를 찾아 주는 정적 분석기의 설계와 구현)

  • Yi Kwang-Keun;Kim Jae-Whang;Jung Yung-Bum
    • Journal of KIISE:Software and Applications / v.33 no.5 / pp.508-524 / 2006
  • We present our experience of combining, in a realistic setting, a static analyzer with a statistical analysis. The combination reduces the inevitable false alarms of a domain-unaware static analyzer. Our analyzer, named Airac (Array Index Range Analyzer for C), collects all the true buffer-overrun points in ANSI C programs. Soundness is maintained, and the cost-accuracy of the analysis is improved by techniques that the static analysis community has long accumulated. For the false alarms that still inevitably remain and that depend on the particular C program (e.g., Airac raised 970 buffer-overrun alarms on commercial C programs totaling 5.3 million lines, of which 737 were false), we use a statistical post-analysis. Given the analysis results (alarms), the statistical analysis sifts out probable false alarms and prioritizes true alarms by estimating the probability of each alarm being true. The probabilities are used in two ways: 1) only the alarms whose true-alarm probability exceeds a threshold are reported to the user; 2) the alarms are sorted by probability before reporting, so that the user can check highly probable errors first. In our experiments with Linux kernel sources, when the risk of missing a true error was set to be about 3 times greater than that of raising a false alarm, 74.83% of the false alarms could be filtered out; only 15.17% of the false alarms were encountered before the user had observed 50% of the true alarms.
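
The post-analysis reduces to a Bayesian-style decision rule: report an alarm only if its estimated true-alarm probability exceeds a threshold set by the relative cost of missing a true error versus raising a false alarm (a 3:1 risk ratio gives a 0.25 threshold), and sort what is reported by probability. The sketch below illustrates just this ranking and filtering step with made-up probabilities; how Airac estimates the probabilities is not reproduced.

```c
/* Minimal sketch of the post-analysis decision rule: alarms carry estimated
 * true-alarm probabilities, are sorted by that probability, and only those
 * above a cost-derived threshold are reported.  Numbers are made up.       */
#include <stdio.h>
#include <stdlib.h>

struct alarm {
    const char *site;     /* source location of the reported buffer overrun */
    double      p_true;   /* estimated probability that the alarm is real   */
};

static int by_prob_desc(const void *a, const void *b)
{
    double pa = ((const struct alarm *)a)->p_true;
    double pb = ((const struct alarm *)b)->p_true;
    return (pa < pb) - (pa > pb);           /* descending by probability */
}

int main(void)
{
    /* If missing a true error costs c_miss and a false alarm costs c_fa,
     * reporting pays off when p*c_miss > (1-p)*c_fa, i.e. when
     * p > c_fa / (c_fa + c_miss).  With c_miss = 3*c_fa this gives 0.25.   */
    const double c_fa = 1.0, c_miss = 3.0;
    const double threshold = c_fa / (c_fa + c_miss);

    struct alarm alarms[] = {
        { "lib/foo.c:120", 0.82 },
        { "net/bar.c:88",  0.12 },
        { "fs/baz.c:301",  0.47 },
    };
    size_t n = sizeof(alarms) / sizeof(alarms[0]);

    qsort(alarms, n, sizeof(alarms[0]), by_prob_desc);

    for (size_t i = 0; i < n; i++)
        if (alarms[i].p_true > threshold)
            printf("report %-16s p=%.2f\n", alarms[i].site, alarms[i].p_true);
    return 0;
}
```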