• Title/Summary/Keyword: Internet Computing


Clustering Analysis by Customer Feature based on SOM for Predicting Purchase Pattern in Recommendation System (추천시스템에서 구매 패턴 예측을 위한 SOM기반 고객 특성에 의한 군집 분석)

  • Cho, Young Sung;Moon, Song Chul;Ryu, Keun Ho
    • Journal of the Korea Society of Computer and Information / v.19 no.2 / pp.193-200 / 2014
  • With the advent of the ubiquitous computing environment, it is becoming part of everyday life, and tremendous amounts of information accumulate rapidly. Against this trend, finding the exact information users need within large data sets has become a very important technology. Collaborative filtering, a method based on other users' preferences, has been widely used in practice, but it cannot fully reflect the exact attributes of a user and still suffers from sparsity and scalability problems. In this paper, we propose a clustering method over customer features based on SOM for predicting purchase patterns in u-Commerce. To find items with the same propensity quickly within a cluster, clusters must group users with similar features so that the attributes of the customer information are reflected. The proposed method clusters feature vectors built from user information and RFM factors derived from purchase history data. To verify the improved performance of the proposed system, we run experiments on a dataset collected from a cosmetics Internet shopping mall.
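
The abstract describes clustering customers by feature vectors that combine profile attributes with RFM (recency, frequency, monetary) factors derived from purchase history. The paper's exact feature encoding and SOM configuration are not given, so the following is only a minimal sketch of SOM-based clustering over toy RFM vectors; the grid size, learning rate, and decay schedule are assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny self-organizing map and return its weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.argwhere(np.ones((rows, cols)))            # grid coordinates

    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                     # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)               # decaying neighborhood width
        for x in data[rng.permutation(len(data))]:
            # best-matching unit = grid cell whose weight vector is closest to x
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # pull the BMU and its grid neighbors toward the sample
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2)).reshape(rows, cols, 1)
            weights += lr * h * (x - weights)
    return weights

def assign_cluster(weights, x):
    """Map a customer vector to the coordinates of its best-matching unit."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Toy customers: [recency, frequency, monetary], already scaled to [0, 1].
customers = np.array([[0.9, 0.1, 0.2], [0.8, 0.2, 0.1],
                      [0.1, 0.9, 0.8], [0.2, 0.8, 0.9]])
som = train_som(customers)
print([assign_cluster(som, c) for c in customers])
```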

Implementation of a Network Simulator for Cyber Attacks and Detections based on SSFNet (SSFNet 기반 사이버 공격 및 탐지를 위한 네트워크 시뮬레이터의 구현)

  • Shim, Jae-Hong;Jung, Hong-Ki;Lee, Cheol-Won;Choi, Kyung-Hee;Park, Seung-Kyu;Jung, Gi-Hyun
    • Journal of KIISE:Computing Practices and Letters / v.8 no.4 / pp.457-467 / 2002
  • In order to simulate cyber attacks and predict network behavior under attack, the simulation model must represent the attributes of network components and express the characteristics of systems that carry out various cyber attacks and defend against them. To simulate how network load changes under cyber attacks, we extended SSF [9, 10], a process-based, event-oriented simulation system. We added a firewall class and a packet manipulator to SSFNet, a component of SSF. The firewall class, which is related to security, is used to simulate cyber attacks, and the packet manipulator is a set of functions for writing attack programs for the simulation. The extended SSFNet makes it possible to simulate a network equipped with security systems, and makes it easy to port existing attack programs and apply them to the simulation environment. We built a virtual network model to verify the operation of the added classes, simulated a smurf attack, a representative denial-of-service attack, and observed the network behavior under the attack. The results showed that the firewall class and the packet manipulator developed in this paper worked correctly.
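
SSFNet itself is a Java framework, and the abstract does not reproduce the firewall class or the packet manipulator API; the sketch below merely illustrates the underlying idea of a rule-based firewall dropping matching packets inside a toy discrete-event loop, with a smurf-style ICMP broadcast flood as input. All class names, rule formats, and packet fields here are hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    time: float
    packet: dict = field(compare=False)

class Firewall:
    """Toy rule-based firewall: drops packets that match any deny rule."""
    def __init__(self, deny_rules):
        self.deny_rules = deny_rules       # e.g. [{"proto": "icmp", "dst": "10.0.0.255"}]

    def allows(self, packet):
        return not any(all(packet.get(k) == v for k, v in rule.items())
                       for rule in self.deny_rules)

def run(events, firewall):
    """Minimal event loop: pop events in time order and filter at the firewall."""
    delivered, dropped = [], []
    heapq.heapify(events)
    while events:
        ev = heapq.heappop(events)
        (delivered if firewall.allows(ev.packet) else dropped).append(ev.packet)
    return delivered, dropped

# A smurf-style flood sends ICMP echo requests to a broadcast address.
flood = [Event(t * 0.01, {"proto": "icmp", "dst": "10.0.0.255", "src": "victim"})
         for t in range(5)]
normal = [Event(0.02, {"proto": "tcp", "dst": "10.0.0.7", "src": "client"})]
fw = Firewall(deny_rules=[{"proto": "icmp", "dst": "10.0.0.255"}])
print(run(flood + normal, fw))   # normal traffic delivered, flood packets dropped
```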

The e-Business Component Construction based on Distributed Component Specification (분산 컴포넌트 명세를 통한 e-비즈니스 컴포넌트 구축)

  • Kim, Haeng-Gon;Choe, Ha-Jeong;Han, Eun-Ju
    • The KIPS Transactions:PartD / v.8D no.6 / pp.705-714 / 2001
  • Today's computing systems have expanded business trade and distributed business processes over the Internet. More and more systems are developed from components with reusability, independence, and portability in mind. Component-based development (CBD) focuses on these higher-level concepts rather than on the passive manipulation of source code in a class library. However, component construction in CBD can lead to additional cost when new components have to be reconstructed, and it is difficult to serve component information rapidly and exactly because no normalization model has been established, while frequent user logins on the Web cause overload. Many difficult issues and aspects of component-based development have to be investigated in order to build good component-based products. This paper elaborates on those aspects of web applications so that user requirements can be met exactly and rapidly. The distributed components discussed here are used at the finest granularity on the network, and we suggest a network-addressable interface based on the business domain. We also discuss internal and external specifications for capturing the internal and external relations of components derived from the analyzed user requirements. The specifications are stored behind Servlets after dividing the information between session and entity beans, i.e., EJB (Enterprise JavaBeans), which are reusable units in the business domain. These reusable units are retrieved as business components through queries. As a major contribution, we propose a system model for component registration, auto-arrangement, search, test, and download, which covers component reusability and component customization.
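
The proposed system model (registration, auto-arrangement, search, test, and download of components) is described only at the architectural level. The sketch below is a hypothetical in-memory registry that illustrates the register/search/download cycle for business components split into session and entity kinds; the class names and fields are assumptions, not the paper's interfaces.

```python
from dataclasses import dataclass

@dataclass
class ComponentSpec:
    """Minimal external specification of a business component."""
    name: str
    domain: str                 # business domain, e.g. "order", "billing"
    kind: str                   # "session" or "entity", following the EJB split
    interface: list             # exposed operations
    artifact: bytes = b""       # packaged implementation to download

class ComponentRegistry:
    def __init__(self):
        self._store = {}

    def register(self, spec: ComponentSpec):
        self._store[spec.name] = spec

    def search(self, domain=None, kind=None):
        """Query reusable components by business domain and/or kind."""
        return [s for s in self._store.values()
                if (domain is None or s.domain == domain)
                and (kind is None or s.kind == kind)]

    def download(self, name: str) -> bytes:
        return self._store[name].artifact

registry = ComponentRegistry()
registry.register(ComponentSpec("OrderEntry", "order", "session",
                                ["createOrder", "cancelOrder"], b"<jar bytes>"))
print([s.name for s in registry.search(domain="order")])
```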


Spark based Scalable RDFS Ontology Reasoning over Big Triples with Confidence Values (신뢰값 기반 대용량 트리플 처리를 위한 스파크 환경에서의 RDFS 온톨로지 추론)

  • Park, Hyun-Kyu;Lee, Wan-Gon;Jagvaral, Batselem;Park, Young-Tack
    • Journal of KIISE / v.43 no.1 / pp.87-95 / 2016
  • Recently, due to the development of the Internet and electronic devices, there has been an enormous increase in the amount of available knowledge and information. As this growth has proceeded, studies on large-scale ontology reasoning have been actively carried out. In general, a machine learning program or a knowledge engineer measures and provides a degree of confidence for each triple in a large ontology. However, the collected ontology data contains a certain amount of uncertainty, and reasoning over such data can make the reasoning results vague. To address this uncertainty issue, we propose an RDFS reasoning approach that utilizes confidence values indicating the degree of uncertainty in the collected data. Unlike conventional reasoning approaches, which do not take data uncertainty into account, our approach uses the in-memory cluster computing framework Spark to compute confidence values for the data inferred through RDFS-based reasoning by applying uncertainty estimation methods. The computed confidence values thus represent the uncertainty of the inferred data. To evaluate our approach, ontology reasoning was carried out over the LUBM standard benchmark data sets with arbitrary confidence values added to the ontology triples. Experimental results indicate that the proposed system can process the largest data set, LUBM3000, in 1179 seconds, inferring 350K triples.
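
The abstract does not state how confidence values are combined during inference, so the following PySpark sketch applies a single RDFS rule (rdfs9: class membership propagates along rdfs:subClassOf) and attaches a derived confidence taken as the minimum of the two input confidences; that combination rule, the toy triples, and the application name are assumptions.

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdfs9-confidence-sketch")

# (subject, predicate, object, confidence) triples -- toy data, not LUBM.
triples = sc.parallelize([
    ("alice", "rdf:type", "GraduateStudent", 0.9),
    ("GraduateStudent", "rdfs:subClassOf", "Student", 0.8),
])

# Split schema (subClassOf) triples from instance (rdf:type) triples.
sub_class = (triples.filter(lambda t: t[1] == "rdfs:subClassOf")
                    .map(lambda t: (t[0], (t[2], t[3]))))        # (C1, (C2, conf))
type_facts = (triples.filter(lambda t: t[1] == "rdf:type")
                     .map(lambda t: (t[2], (t[0], t[3]))))       # (C1, (x, conf))

# Rule rdfs9: (x rdf:type C1) and (C1 rdfs:subClassOf C2) => (x rdf:type C2).
# The derived confidence here is the minimum of the two inputs -- an assumed
# combination rule, not necessarily the one used in the paper.
inferred = (type_facts.join(sub_class)
            .map(lambda kv: (kv[1][0][0], "rdf:type", kv[1][1][0],
                             min(kv[1][0][1], kv[1][1][1]))))

print(inferred.collect())   # [('alice', 'rdf:type', 'Student', 0.8)]
```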

Design and Implementation of a Physical Network Separation System using Virtual Desktop Service based on I/O Virtualization (입출력 가상화 기반 가상 데스크탑 서비스를 이용한 물리적 네트워크 망분리 시스템 설계 및 구현)

  • Kim, Sunwook;Kim, Seongwoon;Kim, Hakyoung;Chung, Seongkwon;Lee, Sookyoung
    • KIISE Transactions on Computing Practices / v.21 no.7 / pp.506-511 / 2015
  • IOV (I/O virtualization) is a technology that allows one or more virtual desktops to share a single physical device. In general, a virtual desktop uses virtual I/O devices that the virtualization software provides through software emulation. Virtual desktops that rely on software-emulated I/O devices suffer from degraded service quality and performance, and they cannot support high-end applications such as 3D CAD and games. In this paper, we propose a physical network separation system that uses a virtual desktop service based on direct hardware assignment to overcome these problems. The proposed system uses server virtualization technology to provide, within a single physical desktop computer, independent desktops for accessing the intranet and the Internet. In addition, the system supports network separation without the network performance degradation caused by packet inspection in logical network separation, and without the additional desktops required for physical network separation.
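
The architecture is described only at a high level: one physical desktop hosts two virtual desktops, each given a directly assigned physical NIC, one wired to the intranet and one to the Internet. The sketch below is a toy model of that assignment and the isolation property it implies; it is not the paper's implementation and does not use any real hypervisor API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhysicalNic:
    name: str          # e.g. "eth0"
    network: str       # "intranet" or "internet"

@dataclass
class VirtualDesktop:
    name: str
    assigned_nic: PhysicalNic   # HW direct assignment: this VM owns the NIC exclusively

def verify_separation(desktops):
    """Each physical NIC (and thus each network) must belong to exactly one desktop."""
    nics = [d.assigned_nic for d in desktops]
    assert len(nics) == len(set(nics)), "a NIC is shared between desktops"
    return {d.name: d.assigned_nic.network for d in desktops}

host = [
    VirtualDesktop("work-desktop", PhysicalNic("eth0", "intranet")),
    VirtualDesktop("web-desktop",  PhysicalNic("eth1", "internet")),
]
print(verify_separation(host))   # {'work-desktop': 'intranet', 'web-desktop': 'internet'}
```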

An Algorithm to Detect P2P Heavy Traffic based on Flow Transport Characteristics (플로우 전달 특성 기반의 P2P 헤비 트래픽 검출 알고리즘)

  • Choi, Byeong-Geol;Lee, Si-Young;Seo, Yeong-Il;Yu, Zhibin;Jun, Jae-Hyun;Kim, Sung-Ho
    • Journal of KIISE:Information Networking / v.37 no.5 / pp.317-326 / 2010
  • Nowadays, as distributed computing environments spread and various network-based applications are developed, the transmission bandwidth consumed by network traffic is increasing and its types are diversifying, for example peer-to-peer (P2P) and real-time video. However, because P2P traffic occupies a large share of Internet backbone traffic, the transmission bandwidth and quality of service (QoS) of other network applications such as web, FTP, and real-time video cannot be guaranteed. Previous research suggested the port-based technique, which checks well-known port numbers, and the Deep Packet Inspection (DPI) technique, which checks packet payloads, to address the problem of P2P traffic; however, these methods are difficult to apply to P2P traffic detection because P2P applications do not use well-known port numbers and packet payloads may be encrypted. The proposed algorithm, which identifies P2P heavy traffic based on flow transport parameters and behavioral characteristics, avoids the limitations of the port-based and DPI techniques. The focus of this paper is to identify P2P heavy-traffic flows rather than all P2P traffic. P2P traffic consists of two steps: (i) searching for peers that hold some content and (ii) downloading the content from one or more peers. We define P2P flow patterns based on these features of P2P applications and implement a system that classifies P2P heavy traffic.
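
The concrete flow patterns and thresholds used by the paper are not given in the abstract. The sketch below only illustrates the general approach: aggregate packets into flows, derive transport and behavioral features (bytes moved, distinct destination ports contacted, reflecting the peer-search and multi-peer-download steps), and flag heavy P2P hosts with assumed thresholds.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packets into (src, dst, proto) flows and accumulate simple features."""
    flows = defaultdict(lambda: {"bytes": 0, "pkts": 0, "dst_ports": set()})
    for p in packets:                       # p: {"src", "dst", "proto", "dport", "size"}
        f = flows[(p["src"], p["dst"], p["proto"])]
        f["bytes"] += p["size"]
        f["pkts"] += 1
        f["dst_ports"].add(p["dport"])
    return flows

def is_p2p_heavy(host_flows, byte_threshold=10_000_000, peer_threshold=20):
    """Assumed heuristic: a host is P2P-heavy when it both moves many bytes and
    contacts many distinct destination ports (peer search + multi-peer download)."""
    total_bytes = sum(f["bytes"] for f in host_flows)
    distinct_ports = len(set().union(*(f["dst_ports"] for f in host_flows)))
    return total_bytes > byte_threshold and distinct_ports > peer_threshold

# Usage outline: capture packets, aggregate into flows, group flows by source host,
# then report the hosts for which is_p2p_heavy(...) returns True.
```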

Optimal Header Compression of MIPv6 and NEMO Protocol for Mobility Support in 6LoWPAN (6LoWPAN의 이동성 지원을 위한 MIPv6와 NEMO Protocol의 최적 헤더 압축)

  • Ha, Min-Keun;Hong, Sung-Min;Kim, Young-Joo;Kim, Dae-Young
    • Journal of KIISE:Computing Practices and Letters / v.16 no.1 / pp.55-59 / 2010
  • Currently, in the Ubiquitous Sensor Network (USN) research field, mobility support is recognized as an important technology. MIPv6 and the Network Mobility (NEMO) Basic Support Protocol are the standard protocols for supporting mobility on the Internet. However, if they are applied to USN without modification, handoff performance decreases because of the size of their binding messages. An existing lightweight protocol for NEMO has a compatibility problem with the Sequence Number field and does not compress binding messages optimally with respect to the 6LoWPAN network structure and addressing. This paper proposes an optimal header compression that supports both node-based and network-based mobility. Our compression technique compresses the 32-byte binding update (BU) and 12-byte binding acknowledgement (BA) messages of MIPv6 into 13 bytes and 3 bytes, and the 40-byte BU and 12-byte BA messages of the NEMO protocol into 13 bytes and 3 bytes. Compared with the existing protocol, our scheme saves an additional 15 bytes on the NEMO BU and 1 byte on the NEMO BA, and achieves an 8.72% improvement in handoff performance.
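
Only the message sizes and the overall handoff gain are reported in the abstract. The short sketch below just tabulates those sizes and the resulting per-handoff signaling reduction, assuming one BU and one BA per handoff purely for illustration.

```python
# Message sizes in bytes, as reported in the abstract: (uncompressed, compressed).
SIZES = {
    "MIPv6": {"BU": (32, 13), "BA": (12, 3)},
    "NEMO":  {"BU": (40, 13), "BA": (12, 3)},
}

def handoff_signaling(protocol):
    """Bytes exchanged per handoff, assuming one BU and one BA (illustrative only)."""
    before = sum(orig for orig, _ in SIZES[protocol].values())
    after = sum(comp for _, comp in SIZES[protocol].values())
    return before, after, 100 * (before - after) / before

for proto in SIZES:
    before, after, saved = handoff_signaling(proto)
    print(f"{proto}: {before}B -> {after}B per handoff ({saved:.1f}% less signaling)")
```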

An Algorithm Generating All the Playable Transcoding Paths using the QoS Transition Diagram for a Multimedia Presentation Requiring Different QoS between the Source and the Destination (근원지와 목적지에서 서로 다른 서비스 품질(QoS)을 필요로 하는 멀티미디어 연출의 재생을 위한 서비스 품질 전이도 기반의 변환 경로 생성 알고리즘)

  • 전성미;임영환
    • Journal of Korea Multimedia Society / v.6 no.2 / pp.208-215 / 2003
  • When a multimedia presentation is played over the Internet, the presentation QoS (Quality of Service) required at the destination frequently differs from the QoS of the multimedia data at the source. In this case, the multimedia data at the source must be transcoded into multimedia data satisfying the QoS at the destination. Moreover, even a presentation description with the same QoS at both sides may require different transcoding paths due to limitations of the display terminal or the network bandwidth. That is, for a given multimedia description, a proper transcoding path must be regenerated whenever the display terminal or the network environment is decided. The delay introduced by passing through the transcoding path may also affect the playability of the presentation, so it must be checked whether a presentation requiring transcoding can be played in real time. This paper proposes an algorithm that generates all possible transcoding paths for a given multimedia description under a fixed set of transcoders and a given network environment. The algorithm adopts the concept of a QoS transition diagram to prevent a transcoding path from cycling, i.e., from repeating a cyclic path that produces the same QoS of multimedia data as its input QoS. By eliminating all cyclic paths, the algorithm guarantees that the process terminates. For the playability check, methods for computing the transcoding time and the delay between logical data units are also proposed. All of the proposed methods were implemented in the stream engine TransCore and the presentation-authoring tool VIP that we had developed, and test results with sample scenarios are presented at the end.
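
The algorithm itself is described only in outline. The sketch below shows one way to enumerate acyclic transcoding paths over a QoS transition diagram with depth-first search, treating any already-visited QoS state as a cycle to prune and filtering paths against a total-delay budget; the transcoder names, delays, and the delay model are assumptions, not the TransCore implementation.

```python
def all_transcoding_paths(transitions, source_qos, target_qos, delay_budget):
    """Enumerate acyclic paths source_qos -> target_qos in a QoS transition diagram.

    transitions: {qos: [(transcoder_name, next_qos, delay_seconds), ...]}
    A QoS state already on the current path is never revisited, which removes
    cyclic paths and guarantees termination.
    """
    results = []

    def dfs(qos, path, visited, delay):
        if delay > delay_budget:              # playability check (assumed delay model)
            return
        if qos == target_qos:
            results.append(list(path))
            return
        for name, nxt, d in transitions.get(qos, []):
            if nxt not in visited:            # prune cycles in QoS space
                dfs(nxt, path + [name], visited | {nxt}, delay + d)

    dfs(source_qos, [], {source_qos}, 0.0)
    return results

# Toy diagram: QoS states are (codec, resolution) pairs; delays are made up.
transitions = {
    ("mpeg2", "1080p"): [("downscale", ("mpeg2", "480p"), 0.8),
                         ("mpeg2_to_h264", ("h264", "1080p"), 1.2)],
    ("h264", "1080p"):  [("downscale", ("h264", "480p"), 0.7)],
    ("mpeg2", "480p"):  [("mpeg2_to_h264", ("h264", "480p"), 0.5)],
}
print(all_transcoding_paths(transitions, ("mpeg2", "1080p"), ("h264", "480p"), 2.0))
```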


A Filtering Technique of Streaming XML Data based Postfix Sharing for Partial matching Path Queries (부분매칭 경로질의를 위한 포스트픽스 공유에 기반한 스트리밍 XML 데이타 필터링 기법)

  • Park Seog;Kim Young-Soo
    • Journal of KIISE:Databases / v.33 no.1 / pp.138-149 / 2006
  • As environments with sensor networks and ubiquitous computing have emerged, there is great demand for handling continuous, fast data such as streaming data. As work on streaming data began, work on managing streaming data in Publish-Subscribe systems followed. The recent emergence of XML as a standard for information exchange on the Internet has led to more interest in Publish-Subscribe systems. Filtering techniques for streaming XML data in existing Publish-Subscribe systems are mostly based on automata, and YFilter is one of the most popular among them. YFilter exploits commonality among path queries by sharing the common prefixes of the paths so that they are processed at most once, using a top-down approach. However, because partial-matching path queries interrupt this common prefix sharing and cannot be evaluated from the root, the throughput of YFilter decreases. We therefore share commonality among path queries through their common postfixes and use a bottom-up approach instead of the top-down approach. We call this filtering technique PoSFilter, and we verify it by comparing its throughput with that of YFilter.
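
The abstract outlines the postfix-sharing, bottom-up idea without giving data structures. The sketch below is one plausible realization: path queries are inserted into a trie in reverse (postfix) order so that common suffixes are shared, and a document path is matched bottom-up from its last element; the structure and naming are assumptions, not the PoSFilter implementation.

```python
class PostfixTrie:
    """Share common postfixes of path queries (e.g. //b/c and //a/b/c share 'c','b')."""
    def __init__(self):
        self.root = {}            # element name -> child node, walked from the leaf upward

    def add_query(self, query_id, path):
        """path: list of element names, e.g. ['a', 'b', 'c'] for //a/b/c."""
        node = self.root
        for step in reversed(path):               # postfix order: c, b, a
            node = node.setdefault(step, {})
        node.setdefault("$ids", set()).add(query_id)

    def match(self, doc_path):
        """Bottom-up match: walk from the current document element toward the root."""
        matched, node = set(), self.root
        for step in reversed(doc_path):
            node = node.get(step)
            if node is None:
                break
            matched |= node.get("$ids", set())
        return matched

trie = PostfixTrie()
trie.add_query("q1", ["b", "c"])          # //b/c
trie.add_query("q2", ["a", "b", "c"])     # //a/b/c  (shares the c -> b postfix with q1)
print(sorted(trie.match(["root", "a", "b", "c"])))   # ['q1', 'q2']
```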

Protein Interaction Possibility Ranking Method based on Domain Combination (도메인 조합 기반 단백질 상호작용 가능성 순위 부여 기법)

  • Han Dong-Soo;Kim Hong-Song;Jong Woo-Hyuk;Lee Sung-Doke
    • Journal of KIISE:Computing Practices and Letters / v.11 no.5 / pp.427-435 / 2005
  • With the accumulation of proteins and related data on the Internet, many domain-based computational techniques for predicting protein interactions have been developed. However, most of these techniques still have limitations that prevent their use in practice: they usually suffer from low prediction accuracy and do not provide any way to rank the interaction possibility of multiple protein pairs. In this paper, we reevaluate a domain-combination-based protein interaction prediction method and develop an interaction possibility ranking method for multiple protein pairs. Probability equations are devised and proposed within the framework of the domain-combination-based prediction method. Using the ranking method, one can discern which protein pair in a set is more likely to interact than the others. In validating the ranking method, we found correlations between the interaction probability and the prediction precision for groups of protein pairs whose PIP (Primary Interaction Probability) values match in the interacting or non-interacting PIP distributions.
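
The paper's probability equations and PIP definition are not reproduced in the abstract. The sketch below therefore uses a noisy-OR style score over domain combinations purely to illustrate how multiple candidate protein pairs could be ranked; the domain-pair probabilities and the scoring rule are illustrative assumptions, not the paper's equations.

```python
from itertools import product

def interaction_score(domains_a, domains_b, combo_prob):
    """Noisy-OR style score over all domain combinations of two proteins.

    combo_prob[(d1, d2)] is an (assumed) probability that the domain pair interacts;
    the protein pair interacts if at least one of its domain combinations does.
    """
    p_none = 1.0
    for d1, d2 in product(domains_a, domains_b):
        p = combo_prob.get((d1, d2)) or combo_prob.get((d2, d1), 0.0)
        p_none *= (1.0 - p)
    return 1.0 - p_none

def rank_pairs(pairs, combo_prob):
    """Order candidate protein pairs from most to least likely to interact."""
    scored = [(interaction_score(a_doms, b_doms, combo_prob), a, b)
              for (a, a_doms), (b, b_doms) in pairs]
    return sorted(scored, reverse=True)

combo_prob = {("SH2", "PTB"): 0.6, ("SH3", "PRM"): 0.4}      # toy values
pairs = [(("P1", {"SH2", "SH3"}), ("P2", {"PTB", "PRM"})),
         (("P3", {"SH3"}),        ("P4", {"PTB"}))]
print(rank_pairs(pairs, combo_prob))   # P1-P2 ranked above P3-P4
```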