• Title/Summary/Keyword: IP address management system

Design of an Intrusion Detection System for Defense in Depth (계층적 방어를 위한 침입탐지 시스템 설계)

  • Koo, Min-Jeong;Han, Woo-Chul;Chang, Young-Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2010.07a / pp.525-526 / 2010
  • Since the large-scale DDoS attacks of 2000, a large-scale cyber attack was carried out on July 7, 2009 in first, second, and third waves against the websites of major national government agencies, Internet portals, and financial institutions. Against DDoS attacks, whose behavior continues to evolve, this paper designs a layered intrusion detection system. To analyze network packets, packet and protocol analysis tools such as e-Watch and NetworkMiner are used to analyze attacks at each TCP/IP layer, and then packet inflow volume, log information, connection information, port, and address information are analyzed so that defense against layered intrusion is performed. Through layered defense against DDoS (Distributed Denial of Service) packet transmission, the system achieves more stable packet reception.
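The layer-by-layer analysis described above can be illustrated with a minimal sketch of one detection layer: counting packets per source IP address in a time window and flagging addresses that exceed a limit. The window contents and the limit are illustrative assumptions, not values from the paper.

```python
from collections import Counter

def flag_heavy_senders(packets, limit):
    """packets: iterable of (src_ip, dst_port) tuples from one time window.
    Returns the source IPs whose packet count exceeds the limit."""
    counts = Counter(ip for ip, _ in packets)
    return sorted(ip for ip, n in counts.items() if n > limit)

# One hypothetical window: one noisy sender, one normal sender.
window = [("10.0.0.5", 80)] * 120 + [("10.0.0.7", 443)] * 3
print(flag_heavy_senders(window, limit=100))  # ['10.0.0.5']
```

A real layered system would run a check like this per TCP/IP layer and combine the results with log and connection information.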

Design of Intrusion Responsible System For Enterprise Security Management (통합보안 관리를 위한 침입대응 시스템 설계)

  • Lee, Chang-Woo;Sohn, Woo-Yong;Song, Jung-Gil
    • Convergence Security Journal / v.5 no.2 / pp.51-56 / 2005
  • As the number of users increases, the network environment of the Internet gradually grows more complex, and the requirements of offered services and users become more varied, keeping service operation management stable and effective becomes increasingly constrained. To solve this problem, this paper proposes an intrusion response system based on log analysis. Storing log files in XML form makes searching them easy and fast, and allows the log files of a system to be analyzed and managed in a unified way according to the structure of the data. The created log files can also be sorted by Internet Protocol address, by port number, and by intrusion type, and converted into various forms such as time-ordered logs, so that they can be compared and analyzed against log files created by other intrusion detection systems.
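The XML log idea described above can be sketched as follows: store log entries as XML elements, then sort them by IP address (or similarly by port or intrusion type). The element and attribute names are assumptions for illustration, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

def build_log(entries):
    """Store log entries as an XML tree for unified searching and management."""
    root = ET.Element("log")
    for e in entries:
        ET.SubElement(root, "entry", ip=e["ip"], port=str(e["port"]),
                      type=e["type"])
    return root

def sort_by_ip(root):
    """Sort <entry> elements numerically by their IP address octets."""
    return sorted(root.findall("entry"),
                  key=lambda el: tuple(map(int, el.get("ip").split("."))))

root = build_log([
    {"ip": "10.0.0.9", "port": 22, "type": "scan"},
    {"ip": "10.0.0.2", "port": 80, "type": "dos"},
])
print([el.get("ip") for el in sort_by_ip(root)])  # ['10.0.0.2', '10.0.0.9']
```

The same tree can be re-sorted by any attribute, which is the advantage the abstract claims for the XML form.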

An Automatic Portscan Detection System with Adaptive Threshold Setting

  • Kim, Sang-Kon;Lee, Seung-Ho;Seo, Seung-Woo
    • Journal of Communications and Networks / v.12 no.1 / pp.74-85 / 2010
  • For the purpose of compromising hosts, attackers, including infected hosts, initially perform a portscan using IP addresses in order to find vulnerable hosts. Considerable research related to portscan detection has been done and many algorithms have been proposed and implemented in network intrusion detection systems (NIDS). In order to distinguish portscanners from remote hosts, most portscan detection algorithms use a fixed threshold that is manually managed by the network manager. Because the threshold is a constant, even though the network environment or the characteristics of traffic can change, many false positives and false negatives are generated by the NIDS. This reduces the efficiency of the NIDS and imposes a high processing burden on the network management system (NMS). In this paper, to address this problem, we propose an automatic portscan detection system using a fast increase slow decrease (FISD) scheme, which automatically and adaptively sets the threshold based on statistical data for traffic during prior time periods. In particular, we focus on reducing false positives rather than false negatives, while the threshold is adaptively set within a range between minimum and maximum values. We also propose a new portscan detection algorithm, rate of increase in the number of failed connection requests (RINF), which is much more suitable for our system and shows better performance than other existing algorithms. In terms of the implementation, we compare our scheme with two other simple threshold estimation methods for an adaptive threshold setting scheme. We also compare our detection algorithm with three other existing approaches to portscan detection using a real traffic trace. In summary, we show that FISD results in fewer false positives than other schemes and that RINF can quickly and accurately detect portscanners. We also show that the proposed system, including our scheme and algorithm, provides good performance in terms of the rate of false positives.
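The fast increase slow decrease behavior described above might be sketched as follows: the threshold moves quickly upward toward a rising traffic statistic and decays slowly when the statistic falls, clamped to a [min, max] range. The update rule and the factor values are assumptions inferred from the abstract, not the paper's actual formula.

```python
class FISDThreshold:
    """Adaptive threshold: fast increase, slow decrease, clamped to a range."""

    def __init__(self, t_min=5.0, t_max=100.0, alpha_up=0.5, alpha_down=0.05):
        self.t_min, self.t_max = t_min, t_max
        self.alpha_up = alpha_up      # fast-increase factor (hypothetical)
        self.alpha_down = alpha_down  # slow-decrease factor (hypothetical)
        self.value = t_min

    def update(self, observed):
        """Move the threshold toward the observed per-period statistic."""
        alpha = self.alpha_up if observed > self.value else self.alpha_down
        self.value += alpha * (observed - self.value)
        self.value = max(self.t_min, min(self.t_max, self.value))
        return self.value

th = FISDThreshold()
th.update(50.0)  # climbs quickly toward the higher statistic
th.update(10.0)  # decays only slightly, damping false positives
```

Keeping the decay slow is what biases the system toward fewer false positives: a brief dip in traffic does not immediately pull the threshold down.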

Big Data Management System for Biomedical Images to Improve Short-term and Long-term Storage

  • Qamar, Shamweel;Kim, Eun Sung;Park, Peom
    • Journal of the Korean Society of Systems Engineering / v.15 no.2 / pp.66-71 / 2019
  • In digital pathology, storage of files is a major constraint on an electronic system in the biomedical domain, and because all analysis and annotation take place manually at every user end, managing the data shared inside an enterprise becomes even harder. We therefore need a storage system that is not only big enough to hold all the data but can also manage it and make communicating that data easier without losing its true form. A virtual server setup is one technique that can solve this issue. We set up a main server that is the main storage for all the virtual machines (those being used at the user end), and that main server is controlled through a hypervisor, so that changes to the overall storage or to the main server itself can be made remotely from anywhere by using the server's IP address. The server in our case exposes an XML-RPC based API whose calls are transmitted between computers using the HTTP protocol. A Java API connects to the HTTP/HTTPS protocol through the Java Runtime Environment and sits on top of other SDK web services to boost the productivity of the running application. To manage the server easily, we use the Tkinter library to develop the GUI, together with the Pmw megawidgets library, which is also used through Tkinter. For managing, monitoring, and performing operations on virtual machines, we use a Python binding to the XML-RPC based API. After all these settings, we make the system user friendly by building a GUI for the main server. Using that GUI, a user can perform administrative functions such as restarting, suspending, or resuming a virtual machine. Users can also log on to a slave host of the pool in case of emergency and, if needed, filter virtual machines by host. Network monitoring can be performed on multiple virtual machines at the same time in order to detect any loss of network connectivity.
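The management flow described above, a Python binding to an XML-RPC API reached over HTTP by the server's IP address, might look like the sketch below. The endpoint path and the method names (`login`, `VM.restart`) are hypothetical placeholders, not the actual hypervisor API used in the paper.

```python
import xmlrpc.client

def endpoint(server_ip, port=80):
    """Build the XML-RPC endpoint URL for the main server."""
    return f"http://{server_ip}:{port}/RPC2"

def restart_vm(server_ip, user, password, vm_name):
    """Log in to the main server and restart one virtual machine.
    Method names are hypothetical; a real hypervisor API will differ."""
    proxy = xmlrpc.client.ServerProxy(endpoint(server_ip))
    session = proxy.login(user, password)      # hypothetical method
    return proxy.VM.restart(session, vm_name)  # hypothetical method

print(endpoint("192.168.0.10"))  # http://192.168.0.10:80/RPC2
```

A GUI such as the Tkinter one in the paper would call functions like `restart_vm` from its button handlers.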

Pre-Processing of Query Logs in Web Usage Mining

  • Abdullah, Norhaiza Ya;Husin, Husna Sarirah;Ramadhani, Herny;Nadarajan, Shanmuga Vivekanada
    • Industrial Engineering and Management Systems / v.11 no.1 / pp.82-86 / 2012
  • For the past few years, query log data has been collected to find users' behavior in using a site. Many studies have examined the usage of query logs to extract user preferences, recommend personalization, improve caching and pre-fetching of Web objects, build better adaptive user interfaces, and improve Web search for a search engine application. A query log contains data such as the client's IP address, the time and date of the request, the resource or page requested, the status of the request, the HTTP method used, and the type of browser and operating system. A query log can offer valuable insight into web site usage. A proper compilation and interpretation of query logs can provide a baseline of statistics that indicate the usage levels of a website and can be used as a tool to assist decision making in management activities. In this paper we discuss the tasks performed on query logs in the pre-processing stage of web usage mining. We use query logs from an online newspaper company. The query logs undergo a pre-processing stage in which the clickstream data is cleaned and partitioned into a set of user interactions representing the activities of each user during their visits to the site. The query logs undergo the essential pre-processing tasks of data cleaning and user identification.
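The two pre-processing tasks named above, data cleaning and user identification, can be sketched on common-log-format lines. The field layout and cleaning rules here are illustrative assumptions, not the newspaper company's actual log schema.

```python
import re

# Common-log-format line: ip ident user [time] "method path proto" status size
LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) \S+')

def clean(lines):
    """Data cleaning: keep successful page requests, drop assets and errors."""
    records = []
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        ip, ts, method, path, status = m.groups()
        if status != "200" or path.endswith((".gif", ".jpg", ".css", ".js")):
            continue
        records.append({"ip": ip, "time": ts, "path": path})
    return records

def identify_users(records):
    """Naive user identification: group cleaned records by client IP address."""
    users = {}
    for r in records:
        users.setdefault(r["ip"], []).append(r["path"])
    return users
```

Real user identification usually also considers the user agent and session timeouts; grouping by IP alone is the simplest first cut.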

Managing Duplicate Memberships of Websites : An Approach of Social Network Analysis (웹사이트 중복회원 관리 : 소셜 네트워크 분석 접근)

  • Kang, Eun-Young;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.153-169 / 2011
  • Today, use of the Internet environment is considered absolutely essential for establishing corporate marketing strategy. Companies have promoted their products and services through various on-line marketing activities, such as providing gifts and points to customers in exchange for participating in events, based on customers' membership data. Since companies can use these membership data to enhance their marketing efforts through various data analyses, appropriate website membership management may play an important role in increasing the effectiveness of an on-line marketing campaign. Despite the growing interest in proper membership management, however, it has been difficult to identify inappropriate members who can weaken on-line marketing effectiveness. In the on-line environment, customers tend not to reveal themselves as clearly as in the off-line market. Customers with malicious intent are able to create duplicate IDs by using others' names illegally or faking login information when joining. Since duplicate members are likely to intercept gifts and points that should be sent to the legitimate customers who deserve them, this can result in ineffective marketing efforts. Considering that the number of website members and the related marketing costs are increasing significantly, companies need efficient ways to screen out and exclude the troublemakers who hold duplicate memberships. With this motivation, this study proposes an approach for managing duplicate memberships based on social network analysis and verifies its effectiveness using membership data gathered from real websites. A social network is a social structure made up of actors called nodes, which are tied by one or more specific types of interdependency. Social networks represent the relationships between the nodes and show the direction and strength of those relationships.
Various analytical techniques have been proposed based on these social relationships, such as centrality analysis, structural holes analysis, structural equivalence analysis, and so on. Component analysis, one of the social network analysis techniques, deals with the sub-networks that form meaningful information in the group connection. We propose a method for managing duplicate memberships using component analysis. The procedure is as follows. The first step is to identify membership attributes that will be used for analyzing relationship patterns among memberships. Membership attributes include ID, telephone number, address, posting time, IP address, and so on. The second step is to compose social matrices based on the identified membership attributes and aggregate the values of each social matrix into a combined social matrix. The combined social matrix represents how strongly pairs of nodes are connected. When a pair of nodes is strongly connected, we expect that those nodes are likely to be duplicate memberships. The combined social matrix is transformed into a binary matrix with cell values of '0' or '1' using a relationship criterion that determines whether the membership is duplicate or not. The third step is to conduct a component analysis of the combined social matrix in order to identify component nodes and isolated nodes. The fourth step is to identify the number of real memberships and calculate the reliability of website membership based on the component analysis results. The proposed procedure was applied to three real websites operated by a pharmaceutical company. The empirical results showed that the proposed method was superior to the traditional database approach using simple address comparison. In conclusion, this study is expected to shed some light on how social network analysis can enhance reliable on-line marketing performance by efficiently and effectively identifying duplicate memberships of websites.
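The procedure above can be sketched end to end: build a combined social matrix from attribute matches, binarize it with a relationship criterion, and collect connected components, where each multi-node component suggests one real member behind several IDs. The member records, attribute weights, and criterion value are illustrative assumptions.

```python
def combined_matrix(members, attrs, weights):
    """Aggregate per-attribute match matrices into one combined social matrix."""
    n = len(members)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            s = sum(w for a, w in zip(attrs, weights)
                    if members[i][a] == members[j][a])
            m[i][j] = m[j][i] = s
    return m

def components(matrix, criterion):
    """Binarize with the criterion, then collect connected components (DFS)."""
    n = len(matrix)
    seen, comps = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(u for u in range(n)
                         if matrix[v][u] >= criterion and u not in seen)
        comps.append(sorted(comp))
    return comps

members = [
    {"phone": "010-1", "addr": "Seoul", "ip": "1.1.1.1"},  # suspected
    {"phone": "010-1", "addr": "Seoul", "ip": "1.1.1.1"},  # duplicates
    {"phone": "010-2", "addr": "Busan", "ip": "2.2.2.2"},
]
m = combined_matrix(members, ["phone", "addr", "ip"], [1.0, 0.5, 1.0])
print(components(m, criterion=2.0))  # [[0, 1], [2]]
```

The number of real memberships is then simply the number of components, which is the quantity the fourth step of the procedure computes.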

A Study on Intuitive IoT Interface System using 3D Depth Camera (3D 깊이 카메라를 활용한 직관적인 사물인터넷 인터페이스 시스템에 관한 연구)

  • Park, Jongsub;Hong, June Seok;Kim, Wooju
    • The Journal of Society for e-Business Studies / v.22 no.2 / pp.137-152 / 2017
  • The decline in the price of IT devices and the development of the Internet have created a new field called the Internet of Things (IoT). IoT, which creates new services by connecting all the objects of everyday life to the Internet, is pioneering new forms of business, not seen before, in combination with Big Data. Its potential uses can be said to be unlimited. In addition, standardization organizations are actively studying the smooth connection of these IoT devices. However, there is an aspect these studies overlook. In order to control IoT equipment or acquire information from it, interworking issues (IP address, Wi-Fi, Bluetooth, NFC, etc.) and the related application software or apps must be developed separately. To solve these problems, existing research has used augmented reality with GPS or markers. However, a separate marker is required, and the marker is recognized only in its vicinity. In addition, studies using a GPS address with a 2D-based camera could not recognize the distance to the target device, so it was difficult to implement an active interface. In this study, we use a 3D depth recognition camera installed on a smartphone and calculate spatial coordinates automatically by combining distance measurement with the sensor information of the phone, without a separate marker. A coordinate inquiry finds the IoT equipment and enables information acquisition and control of that equipment. Therefore, from the user's point of view, the burden of interworking with IoT equipment and installing apps can be reduced. Furthermore, if this technology is used in the field of public services and smart glasses, it will reduce duplicated investment in software development and expand public services.
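The coordinate calculation described above can be sketched with a standard pinhole camera model: back-project a pixel and its measured depth into 3-D camera coordinates, then offset by the phone's position from its sensors. The intrinsic parameters (fx, fy, cx, cy) are illustrative assumptions, not the paper's calibration.

```python
def pixel_to_world(u, v, depth, fx, fy, cx, cy, phone_pos=(0.0, 0.0, 0.0)):
    """Back-project pixel (u, v) with measured depth (meters) into 3-D
    coordinates via the pinhole model, then shift by the phone's position."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    px, py, pz = phone_pos
    return (x + px, y + py, z + pz)

# A pixel at the optical center maps straight ahead of the camera.
print(pixel_to_world(320, 240, 2.0, fx=500, fy=500, cx=320, cy=240))
# → (0.0, 0.0, 2.0)
```

Looking up the resulting coordinates against a registry of IoT device positions would then identify which device the user is pointing at, which is the inquiry step the abstract describes.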