• Title/Summary/Keyword: Electronic and Processing Set


Designing an Efficient and Secure Credit Card-based Payment System with Web Services Based on the ANSI X9.59-2006

  • Cheong, Chi Po;Fong, Simon;Lei, Pouwan;Chatwin, Chris;Young, Rupert
    • Journal of Information Processing Systems
    • /
    • v.8 no.3
    • /
    • pp.495-520
    • /
    • 2012
  • A secure Electronic Payment System (EPS) is essential for the booming online shopping market. A successful EPS supports the transfer of electronic money and sensitive information with security, accuracy, and integrity between the seller and buyer over the Internet. SET, CyberCash, PayPal, and iKP are the most popular Credit Card-Based EPSs (CCBEPSs). Some CCBEPSs use only SSL to provide a secure communication channel. Hence, they prevent only "Man in the Middle" fraud and do not protect sensitive cardholder information, such as the credit card number, from being passed on to the merchant, who may be unscrupulous. Other CCBEPSs use complex mechanisms such as cryptography and certificate authorities to fulfill their security schemes. However, factors such as ease of use for the cardholder and the implementation costs for each party are frequently overlooked. In this paper, we propose a new Web service-based payment system built on ANSI X9.59-2006, with extra features added on top of the standard. X9.59 is an Account-Based Digital Signature (ABDS), consumer-oriented payment system. It utilizes the existing financial network and financial messages to complete the payment process. However, the standard has a number of limitations. This research addresses the limitations of X9.59 by adding a merchant authentication feature to the payment cycle without adding any addenda records to the existing financial messages. We have conducted performance testing on the proposed system via a simulated comparison with SET and X9.59 to analyze their levels of performance and security.

A Study On Recommend System Using Co-occurrence Matrix and Hadoop Distribution Processing (동시발생 행렬과 하둡 분산처리를 이용한 추천시스템에 관한 연구)

  • Kim, Chang-Bok;Chung, Jae-Pil
    • Journal of Advanced Navigation Technology
    • /
    • v.18 no.5
    • /
    • pp.468-475
    • /
    • 2014
  • Real-time recommendation is becoming more difficult for recommender systems as preference data sets grow larger, straining both computing power and the recommendation algorithm. For this reason, research on recommender systems is actively moving toward distributed processing of large preference data sets. This paper studies a distributed processing method for large preference data sets using the Hadoop distributed processing platform and the Mahout machine learning library. The recommendation algorithm uses a co-occurrence matrix, similar to item-based collaborative filtering. The co-occurrence matrix can be processed in a distributed fashion across the many nodes of a Hadoop cluster; although it requires a large amount of computation, distributed processing reduces the per-node computational load. This paper simplifies the distributed processing of the co-occurrence matrix from four stages to three. As a result, it reduces the number of MapReduce jobs needed to generate the recommendation file, achieves faster processing, and reduces the map output data.
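The co-occurrence approach above can be sketched on a single node without Hadoop: build the item-item co-occurrence matrix from a preference matrix and score unseen items. The matrix product and masking below are a minimal stand-in for the paper's MapReduce stages; the toy data is illustrative.

```python
import numpy as np

# Toy user-item preference matrix (rows: users, columns: items);
# 1 means the user expressed a preference for the item.
prefs = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

# Item-item co-occurrence matrix: cooccur[i, j] counts the users
# who preferred both item i and item j.
cooccur = prefs.T @ prefs
np.fill_diagonal(cooccur, 0)  # ignore self co-occurrence

def recommend(user_idx, top_n=2):
    """Score items by multiplying the co-occurrence matrix with the
    user's preference vector, then mask items the user already has."""
    scores = cooccur @ prefs[user_idx]
    scores[prefs[user_idx] > 0] = -np.inf  # exclude known items
    return list(np.argsort(scores)[::-1][:top_n])
```

In the MapReduce version, the `prefs.T @ prefs` product and the final scoring multiplication are the steps distributed across the cluster's nodes.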

Evolutionary Optimized Fuzzy Set-based Polynomial Neural Networks Based on Classified Information Granules

  • Oh, Sung-Kwun;Roh, Seok-Beom;Ahn, Tae-Chon
    • Proceedings of the KIEE Conference
    • /
    • 2005.07d
    • /
    • pp.2888-2890
    • /
    • 2005
  • In this paper, we introduce a new structure of fuzzy-neural networks: Fuzzy Set-based Polynomial Neural Networks (FSPNN). The two underlying design mechanisms of such networks involve genetic optimization and information granulation. The resulting constructs are Fuzzy Polynomial Neural Networks (FPNN) with fuzzy set-based polynomial neurons (FSPNs) regarded as their generic processing elements. First, we introduce a comprehensive design methodology (viz. genetic optimization using Genetic Algorithms) to determine the optimal structure of the FSPNNs. This methodology hinges on the extended Group Method of Data Handling (GMDH) and fuzzy set-based rules. It concerns FSPNN-related parameters such as the number of input variables, the order of the polynomial, the number of membership functions, and the selection of a specific subset of input variables, realized through the mechanism of genetic optimization. Second, the fuzzy rules used in the networks exploit the notion of information granules defined over system variables and formed through the process of information granulation. This granulation is realized with the aid of hard C-Means clustering (HCM). The performance of the network is quantified through experimentation on a number of modeling benchmarks already used in the realm of fuzzy or neurofuzzy modeling.

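The information granulation step relies on hard C-Means (HCM), which is plain k-means clustering: the prototypes it finds play the role of information granules over the system variables. A minimal NumPy sketch on toy data follows; the evenly spread initialization and iteration count are illustrative choices, not the paper's settings.

```python
import numpy as np

def hard_c_means(X, c, n_iter=50):
    """Plain hard C-Means (k-means): alternate nearest-prototype
    assignment and prototype (cluster-mean) update."""
    init_idx = np.linspace(0, len(X) - 1, c).astype(int)
    centers = X[init_idx].astype(float)  # spread initial prototypes
    for _ in range(n_iter):
        # Assign every point to its nearest prototype.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each prototype to the mean of its assigned points.
        for k in range(c):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels

# Two well-separated blobs should yield two clean granules.
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
centers, labels = hard_c_means(X, c=2)
```

Each prototype in `centers` then anchors the membership functions of one fuzzy rule.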

Content-Based Image Retrieval Using Combined Color and Texture Features Extracted by Multi-resolution Multi-direction Filtering

  • Bu, Hee-Hyung;Kim, Nam-Chul;Moon, Chae-Joo;Kim, Jong-Hwa
    • Journal of Information Processing Systems
    • /
    • v.13 no.3
    • /
    • pp.464-475
    • /
    • 2017
  • In this paper, we present a new texture image retrieval method that combines color and texture features extracted from images by a set of multi-resolution multi-direction (MRMD) filters. The chosen MRMD filter set is simple, separates low- and high-frequency information, and provides efficient multi-resolution and multi-direction analysis. The HSV color space is used because it separates images into hue, saturation, and value components, which are easily analyzed and exhibit characteristics similar to the human visual system. The experiments compare retrieval precision vs. recall and feature vector dimensions. The test images include the Corel DB and VisTex DB; the Corel_MR DB and VisTex_MR DB, transformed from the two aforementioned DBs into multi-resolution images; and the Corel_MD DB and VisTex_MD DB, transformed from the two DBs into multi-direction images. According to the experimental results, the proposed method improves upon existing methods in both precision and recall of retrieval, and also reduces feature vector dimensions.
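The HSV color-feature idea can be illustrated with a toy quantized HSV histogram built from stdlib `colorsys`. The bin counts and L1 normalization here are illustrative assumptions, not the paper's actual MRMD-filtered feature.

```python
import colorsys
import numpy as np

def hsv_histogram(rgb_pixels, bins=(8, 4, 4)):
    """Toy color feature: quantize each pixel's hue, saturation, and
    value into a joint histogram, then L1-normalize so images of
    different sizes yield comparable feature vectors."""
    hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in rgb_pixels])
    hist, _ = np.histogramdd(hsv, bins=bins, range=((0, 1), (0, 1), (0, 1)))
    feat = hist.ravel()
    return feat / feat.sum()

# A pure-red toy "image": every pixel lands in the same HSV bin.
feat = hsv_histogram([(1.0, 0.0, 0.0)] * 16)
```

In a retrieval setting, such vectors are compared between the query and each database image, here alongside the texture features from the MRMD filters.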

Efficient Sampling of Graph Signals with Reduced Complexity (저 복잡도를 갖는 효율적인 그래프 신호의 샘플링 알고리즘)

  • Kim, Yoon Hak
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.2
    • /
    • pp.367-374
    • /
    • 2022
  • A sampling set selection algorithm is proposed to reconstruct original graph signals from the sampled signals generated on the nodes in the sampling set. Instead of directly minimizing the reconstruction error, we focus on minimizing an upper bound on the reconstruction error to reduce the algorithm's complexity. The metric is manipulated using QR factorization to produce an upper triangular matrix, and an analytic result is presented that enables greedy selection of the next node at each iteration using the diagonal entries of the upper triangular matrix, leading to an efficient sampling process with reduced complexity. We run experiments on various graphs to demonstrate the competitive reconstruction performance of the proposed algorithm, which runs about 3.5 times faster than one of the previous selection methods.
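The QR-based greedy idea can be sketched as follows: at each step, pick the node whose basis row has the largest component orthogonal to the rows already selected, which is exactly the quantity a QR factorization exposes on the diagonal of its upper triangular factor. The Gram-Schmidt deflation below is a simplified stand-in for the paper's exact bound-based metric.

```python
import numpy as np

def greedy_sample(U, m):
    """Greedily select m sampling nodes.
    U: (n_nodes, k) matrix whose rows are node values of the first k
    graph-Fourier basis vectors (bandlimited-signal assumption)."""
    residual = U.astype(float).copy()
    selected = []
    for _ in range(m):
        norms = np.linalg.norm(residual, axis=1)
        norms[selected] = -1.0  # never re-pick a node
        i = int(norms.argmax())
        selected.append(i)
        # Deflate: remove the chosen row's direction from every row,
        # as Gram-Schmidt (and QR) would.
        q = residual[i] / np.linalg.norm(residual[i])
        residual -= np.outer(residual @ q, q)
    return selected
```

For a toy basis whose first two rows are the only nonzero ones, the algorithm picks exactly those two nodes.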

On-Demand Remote Software Code Execution Unit Using On-Chip Flash Memory Cloudification for IoT Environment Acceleration

  • Lee, Dongkyu;Seok, Moon Gi;Park, Daejin
    • Journal of Information Processing Systems
    • /
    • v.17 no.1
    • /
    • pp.191-202
    • /
    • 2021
  • In an Internet of Things (IoT)-configured system, each device executes on-chip software. Recent IoT devices require fast execution of complex services, such as analyzing large amounts of data, while maintaining low-power computation. As service complexity increases, a service requires high-performance computing and more embedded memory space. However, the low performance of IoT edge devices and their small memory size can hinder the complex and diverse operations of IoT services. In this paper, we propose a remote on-demand software code execution unit that uses cloudification of on-chip code memory to accelerate program execution on an IoT edge device with a low-performance processor. We propose a simulation approach to distribute remotely executed code between the server side and the edge side according to the program's computational and communication needs. Our on-demand remote code execution unit simulation platform, which includes an instruction set simulator based on the 16-bit ARM Thumb instruction set architecture, successfully emulates the architectural behavior of on-chip flash memory, enabling embedded devices to accelerate software execution using remote execution code in the IoT environment.
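The compute-versus-communication trade-off that drives the server/edge distribution decision can be sketched as a simple cost model. The function and all of its parameters are assumptions for illustration only, not the paper's simulator.

```python
def run_remotely(compute_cycles, data_bytes,
                 local_ips, remote_ips, bandwidth_bps):
    """Offload a code block only when remote compute time plus data
    transfer time beats local execution time on the slow edge CPU.
    ips = instructions per second; bandwidth in bytes per second."""
    local_time = compute_cycles / local_ips
    remote_time = compute_cycles / remote_ips + data_bytes / bandwidth_bps
    return remote_time < local_time

# Heavy computation, little data -> worth offloading to the server.
offload_heavy = run_remotely(1e9, 100, local_ips=1e6,
                             remote_ips=1e9, bandwidth_bps=1e6)
# Light computation, lots of data -> keep it on the edge device.
offload_light = run_remotely(1e3, 1e9, local_ips=1e6,
                             remote_ips=1e9, bandwidth_bps=1e6)
```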

A study on characteristics of crystallization according to changes of top structure with phase change memory cell of $Ge_2Sb_2Te_5$ ($Ge_2Sb_2Te_5$ 상변화 소자의 상부구조 변화에 따른 결정화 특성 연구)

  • Lee, Jae-Min;Shin, Kyung;Choi, Hyuck;Chung, Hong-Bay
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2005.11a
    • /
    • pp.80-81
    • /
    • 2005
  • Chalcogenide phase-change memory is a strong candidate for next-generation memory because it is a nonvolatile memory offering high programming speed, low programming voltage, a high sensing margin, low power consumption, and long cycle endurance. We have developed a PRAM sample with a thermally protective layer and investigated its phase-transition behavior as a function of process factors, including the thermal protective layer. As a result, we observed that the set voltage and set duration of the device with the protective layer are improved over those of the device without it.


Object Tracking with the Multi-Templates Regression Model Based MS Algorithm

  • Zhang, Hua;Wang, Lijia
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1307-1317
    • /
    • 2018
  • To deal with the problems of occlusion, pose variations, and illumination changes in object tracking systems, a regression-model-weighted multi-template mean-shift (MS) algorithm is proposed in this paper. Target templates and occlusion templates are extracted to compose a multi-template set. Then, the MS algorithm is applied to the multi-template set to obtain candidate areas. Moreover, a regression model is trained to estimate the Bhattacharyya coefficients between the templates and the candidate areas. Finally, the geometric center of the tracked areas is taken as the object's position. The proposed algorithm is evaluated on several classical videos. The experimental results show that the regression-model-weighted multi-template MS algorithm can track an object accurately under occlusion, illumination changes, and pose variations.
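The Bhattacharyya coefficient that the regression model estimates between template and candidate histograms is simple to compute directly; the histogram values below are toy data.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms after L1
    normalization: 1.0 for identical distributions, 0.0 for fully
    disjoint ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

same = bhattacharyya([2, 2, 4], [1, 1, 2])  # identical once normalized
disjoint = bhattacharyya([1, 0], [0, 1])    # no overlapping bins
```

In MS tracking this coefficient scores how well a candidate region's color histogram matches a template's; here a learned regression model predicts it instead of computing it exhaustively.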

A Prediction Model of the Sum of Container Based on Combined BP Neural Network and SVM

  • Ding, Min-jie;Zhang, Shao-zhong;Zhong, Hai-dong;Wu, Yao-hui;Zhang, Liang-bin
    • Journal of Information Processing Systems
    • /
    • v.15 no.2
    • /
    • pp.305-319
    • /
    • 2019
  • Predicting the sum of containers is very important in the field of container transport. Many influencing factors can affect the prediction results; these factors are usually composed of many variables whose composition is often very complex. In this paper, we use gray relational analysis to set up a proper forecast index system for predicting the sum of containers in foreign trade. To address the low accuracy of traditional prediction models and the difficulty of fully considering all the factors, this paper puts forward a prediction model that combines a back-propagation (BP) neural network with a support vector machine (SVM). First, the BP neural network generates a preliminary forecast from the normalized data. Second, the SVM performs a residual correction on the preliminary results. Practical examples show that the overall relative error of the combined prediction model is no more than 1.5%, which is less than the relative error of either single prediction model. It is hoped that this research can provide a useful reference for predicting the sum of containers and for related studies.
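The two-stage scheme, a preliminary forecast followed by a residual correction, can be sketched with plain least-squares polynomial fits standing in for the BP network (stage 1) and the SVM regressor (stage 2). The synthetic data and model orders are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 * x ** 2 + rng.normal(0.0, 0.01, 50)  # signal + noise

# Stage 1: preliminary forecast (a linear fit misses the curvature,
# just as a single imperfect model leaves systematic error).
prelim = np.polyval(np.polyfit(x, y, 1), x)

# Stage 2: fit the residuals with a second model and add the
# correction back onto the preliminary forecast.
residual = y - prelim
corrected = prelim + np.polyval(np.polyfit(x, residual, 2), x)

err_stage1 = np.mean(np.abs(y - prelim))
err_combined = np.mean(np.abs(y - corrected))
```

The combined error drops below the stage-1 error because the second model absorbs the systematic part of the residual, which is the rationale for the BP+SVM pairing.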

A Review of Computer Vision Methods for Purpose on Computer-Aided Diagnosis

  • Song, Hyewon;Nguyen, Anh-Duc;Gong, Myoungsik;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery
    • /
    • v.3 no.1
    • /
    • pp.1-8
    • /
    • 2016
  • In the field of radiology, Computer-Aided Diagnosis is a technology that provides valuable information for surgical purposes. Given its importance, several computer vision methods are applied to obtain useful information from images acquired by imaging devices such as X-ray, Magnetic Resonance Imaging (MRI), and Computed Tomography (CT). These methods, called pattern recognition, extract features from images and feed them to a machine learning algorithm to find meaningful patterns. The learned model is then used to explore patterns in unseen images. The radiologist can therefore easily find the information needed for surgical planning or the diagnosis of a patient through Computer-Aided Diagnosis. In this paper, we present a review of three widely used methods applied to Computer-Aided Diagnosis. The first is image processing, which enhances meaningful information such as edges and removes noise. Based on the improved image quality, we explain the second method, segmentation, which separates the image into a set of regions. The separated regions, such as bone, tissue, and organs, are then delivered to machine learning algorithms to extract representative information. We expect that this paper gives readers basic knowledge of Computer-Aided Diagnosis and intuition about the computer vision methods applied in this area.
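The segmentation step in such a pipeline can be illustrated with the simplest possible example: intensity thresholding on a synthetic image. Real CAD systems use far more sophisticated methods; the threshold value and toy "scan" below are illustrative.

```python
import numpy as np

def threshold_segment(img, t):
    """Minimal segmentation: label each pixel foreground (1) or
    background (0) by comparing its intensity to a threshold."""
    return (img > t).astype(np.uint8)

# Synthetic "scan": a dark background with one bright square "organ".
img = np.zeros((8, 8))
img[2:6, 2:6] = 200.0
mask = threshold_segment(img, t=100.0)
```

The resulting binary mask separates the image into regions, which would then be passed to a feature extractor and a learning algorithm, mirroring the processing-segmentation-learning pipeline the review describes.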