• Title/Summary/Keyword: Internet Bank

Search Results: 175

Automatic Component Reconfiguration Tool Based on the Feature Configuration and GenVoca Architecture (특성 구성과 GenVoca 아키텍처에 기반한 컴포넌트 재구성 자동화 도구)

  • Choi Seung Hoon
    • Journal of Internet Computing and Services / v.5 no.4 / pp.125-134 / 2004
  • Recently, much research has been conducted on component-based software product lines and on applying generative programming to software product lines. This paper proposes an automatic component reconfiguration tool that can be applied in constructing component-based software product lines. Our tool accepts the reuser's requirements via a feature model, the main result of domain engineering, and derives a feature configuration from those requirements. It then generates the source code of the reconfigured component according to this feature configuration. To accomplish this, the component family in our tool follows the GenVoca architecture, one of the most influential generative programming approaches. In addition, XSLT scripts provide the code templates for the implementation elements that make up the target component. Taking the 'Bank Account' component family as an example, we show that our component reconfiguration tool automatically produces the component source code the reuser wants to create. The results of this paper can be applied broadly to increase the productivity of building software product lines.

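In a GenVoca architecture, as referenced in the abstract above, a component is composed as a stack of feature layers, each refining the layer below it. The following is only an illustrative sketch of that layering idea, written in Python rather than the tool's XSLT templates; the `BankAccountBase` class and the `interest`/`fee` feature names are hypothetical stand-ins for the paper's 'Bank Account' family.

```python
# Illustrative GenVoca-style layering (not the paper's tool): each feature
# is a layer that refines the component beneath it, and a feature
# configuration selects which layers to stack.

class BankAccountBase:
    """Bottom layer of the hypothetical 'Bank Account' family."""
    def __init__(self):
        self.balance = 0.0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount

def with_interest(Component, rate):
    """Layer adding interest accrual on top of any account component."""
    class Interest(Component):
        def accrue(self):
            self.balance *= (1.0 + rate)
    return Interest

def with_fee(Component, fee):
    """Layer charging a fixed fee on every withdrawal."""
    class Fee(Component):
        def withdraw(self, amount):
            super().withdraw(amount + fee)
    return Fee

def build_component(feature_config):
    """Stack layers bottom-up according to a feature configuration,
    mirroring how the tool emits component code from a feature model."""
    Component = BankAccountBase
    if "interest" in feature_config:
        Component = with_interest(Component, rate=0.05)
    if "fee" in feature_config:
        Component = with_fee(Component, fee=1.0)
    return Component
```

For example, `build_component({"interest", "fee"})` yields a class whose `withdraw` charges a fee and which also supports `accrue()`, exactly because both layers were stacked over the base.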

Performance Evaluation for Speed of Mobile Devices in UFMC Systems (UFMC 시스템에서 모바일 장치의 이동속도에 대한 성능평가)

  • Lee, Kyuseop;Choi, Ginkyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.1 / pp.53-58 / 2017
  • UFMC is one of the novel multi-carrier modulation techniques designed to replace OFDM in 5G wireless communication systems. It is a generalized model of OFDM and FBMC that combines their advantages while avoiding their weak points. UFMC is more robust than CP-OFDM under synchronization impairments such as time-frequency misalignment, and it is better suited to bursty uplink transmission such as M2M communications in 5G. In this paper we analyze BER performance over various channels and mobile speeds. The simulation results show that BER performance degrades as mobile devices move faster, and that the degradation is especially noticeable in good channel environments.
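The speed dependence of BER discussed above comes from the maximum Doppler shift, which grows with mobile speed and carrier frequency and shrinks the channel's coherence time. A minimal sketch of the standard textbook relations (this is not the paper's simulator; the 0.423 coherence-time constant is the usual rule of thumb for Clarke's fading model, and the 2 GHz carrier in the test is an assumed example):

```python
import math

def doppler_shift_hz(speed_kmh: float, carrier_hz: float) -> float:
    """Maximum Doppler shift f_d = v * f_c / c for a mobile moving at v."""
    c = 3.0e8                      # speed of light, m/s
    v = speed_kmh / 3.6            # km/h -> m/s
    return v * carrier_hz / c

def coherence_time_s(f_d: float) -> float:
    """Rule-of-thumb channel coherence time T_c ~ 0.423 / f_d."""
    return 0.423 / f_d
```

At 120 km/h and a 2 GHz carrier this gives a Doppler shift around 222 Hz and a coherence time under 2 ms, which is why fast-moving devices see the channel change within a burst and BER degrades.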

Certificate-based SSO Protocol Complying with Web Standard (웹 표준을 준수하는 인증서기반 통합 인증 프로토콜)

  • Yun, Jong Pil;Kim, Jonghyun;Lee, Kwangsu
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.8 / pp.1466-1477 / 2016
  • Public key infrastructure (PKI), the core technology underlying certificates, is a security technology providing functions such as identification, non-repudiation, and anti-forgery of electronic documents on the Internet. Korean government and financial organizations have relied on ActiveX-based PKI authentication to prevent security incidents in Internet services. However, plug-in technologies such as ActiveX are insecure and inconvenient, since they work only in particular browsers; research on HTML5-based authentication systems has therefore been conducted actively. Recently, a domestic bank introduced PKI authentication complying with web standards for the first time, but it remains inconvenient because the same-origin policy of web storage forces users to register a certificate on each website separately. This paper proposes a certificate-based SSO protocol that complies with web standards to provide certificate-based user authentication across several sites by working around the same-origin policy, together with its security proof.
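The general SSO idea behind the abstract above is that one authentication endpoint vouches for the user to many relying origins, so the certificate only needs to be registered once. The sketch below illustrates only that generic pattern, not the paper's actual protocol: an HMAC over a shared key stands in for the certificate-based signature, and all names and the key are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

# Illustrative single-sign-on token flow (NOT the paper's protocol):
# an authentication server issues a signed token bound to a target
# origin, and each relying site verifies it independently, so the
# user's credential is registered only once.

SECRET = b"shared-key-between-idp-and-sites"   # hypothetical trust anchor

def issue_token(user_id: str, origin: str) -> str:
    """Authentication server signs (user, target origin, timestamp)."""
    payload = json.dumps({"sub": user_id, "aud": origin,
                          "iat": int(time.time())}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload) + b"." +
            base64.urlsafe_b64encode(sig)).decode()

def verify_token(token: str, origin: str) -> bool:
    """A relying site checks the signature and that it is the audience."""
    p64, s64 = token.encode().split(b".")
    payload = base64.urlsafe_b64decode(p64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(s64)):
        return False                       # forged or tampered token
    return json.loads(payload)["aud"] == origin
```

Binding the token to an audience (`aud`) is what prevents one site from replaying a token at another, which is the property a cross-origin SSO scheme must preserve while sidestepping per-site certificate registration.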

A Fair Radio Resource Allocation Algorithm for Uplink of FBMC Based CR Systems

  • Jamal, Hosseinali;Ghorashi, Seyed Ali;Sadough, Seyed Mohammad-Sajad
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.6 / pp.1479-1495 / 2012
  • Spectrum scarcity seems to be the most challenging issue to solve for new wireless telecommunication services. It has been shown that spectrum unavailability is mainly due to inefficient spectrum utilization and inappropriate physical-layer execution rather than an actual spectrum shortage. The daily increasing demand for new wireless services with higher data rates and QoS levels makes upgrading physical-layer modulation techniques inevitable. Orthogonal Frequency Division Multiple Access (OFDMA), which uses multicarrier modulation to provide higher data rates with flexible resource allocation, has been widely adopted in current wireless systems and standards, yet it does not seem to be the best candidate for cognitive radio systems. Filter Bank based Multi-Carrier (FBMC) is an evolutionary scheme with several advantages over the widely used OFDM multicarrier technique. In this paper, we focus on improving the total throughput of a cognitive radio network using FBMC modulation. Along with this modulation scheme, we propose a novel uplink radio resource allocation algorithm in which the fairness issue is also considered. Moreover, the average throughput of the proposed FBMC-based cognitive radio is compared with that of a conventional OFDM system to illustrate the efficiency of using FBMC in future cognitive radio systems. Simulation results show that, compared with two state-of-the-art algorithms (namely, those of Shaat and Wang), our proposed algorithm achieves higher throughput and better fairness for cognitive radio applications.
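One common way to make subcarrier assignment fairness-aware, in the spirit of the abstract above, is a greedy max-min rule: each subcarrier goes to the user with the lowest accumulated rate. This is only a generic sketch of that rule; the paper's actual algorithm (and the Shaat and Wang baselines it compares against) differ in detail.

```python
# Greedy max-min fair subcarrier assignment (generic sketch, not the
# paper's algorithm): each subcarrier is given to the user with the
# lowest total rate so far, ties broken by best achievable rate.

def fair_assign(rates):
    """rates[u][s] = achievable rate of user u on subcarrier s.
    Returns (assignment per subcarrier, total rate per user)."""
    n_users = len(rates)
    n_sub = len(rates[0])
    total = [0.0] * n_users
    assign = [-1] * n_sub
    for s in range(n_sub):
        # Poorest user first; among equally poor users, prefer the one
        # with the best channel on this subcarrier.
        u = min(range(n_users), key=lambda u: (total[u], -rates[u][s]))
        assign[s] = u
        total[u] += rates[u][s]
    return assign, total
```

With `rates = [[3, 1], [1, 3]]` the rule gives each user the subcarrier it is strong on, equalizing totals at 3 each, whereas a pure throughput-maximizing rule could starve a user with uniformly poor channels.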

ACCESS CONTROL MODEL FOR DATA STORED ON CLOUD COMPUTING

  • Mateen, Ahmed;Zhu, Qingsheng;Afsar, Salman;Rehan, Akmal;Mumtaz, Imran;Ahmad, Wasi
    • International Journal of Advanced Culture Technology / v.7 no.4 / pp.208-221 / 2019
  • This research focuses on the protection of clients' data in cloud computing, i.e., data storage protection problems and how to limit unauthenticated access to information; existing approaches are surveyed and an access control model is then recommended. Cloud computing can be described as an Internet-based technology offering shared, adaptable resources that clients can use on demand; in essence, software and hardware delivered over the Internet as a service. It is a remarkable technology that has become popular because of its minimal cost, adaptability, and scalability to clients' needs. Despite this popularity, many organizations are reluctant to move to cloud computing because of privacy concerns, particularly over client data: management has expressed worries that confidential and sensitive information may be stored by service providers at arbitrary locations worldwide. Several access control models are available, yet they do not satisfy the protection requirements of service providers, and the cloud is constantly under attack by hackers, compromising data integrity, availability, and protection. This research presents a model that, keeping service providers' requirements in view, upgrades data protection in terms of integrity, availability, and security. The developed model helps reluctant clients decide to move to the cloud while weighing the uncertainty associated with cloud computing.

An XPDL-Based Workflow Control-Structure and Data-Sequence Analyzer

  • Kim, Kwanghoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1702-1721 / 2019
  • A workflow (or business) process management system helps to define, execute, monitor, and manage workflow models deployed in a workflow-supported enterprise; in general, the system is divided into a modeling subsystem and an enacting subsystem. The modeling subsystem's functionality is to discover and analyze workflow models via a theoretical modeling methodology such as ICN, to define them graphically via a notation such as BPMN, and to deploy those graphically defined models onto the enacting subsystem by transforming them into textual models represented in a standardized workflow process definition language such as XPDL. Before deploying the defined workflow models, it is very important to inspect their syntactical correctness as well as their structural properness, to minimize losses of effectiveness and efficiency in managing them. In this paper, we are particularly interested in verifying very large-scale and massively parallel workflow models, and so we need a sophisticated analyzer that automatically analyzes these specialized and complex styles of workflow models. The analyzer devised in this paper analyzes not only structural complexity but also data-sequence complexity. Structural complexity is based on combinational usage of control-structure constructs such as subprocess, exclusive-OR, parallel-AND, and iterative-LOOP primitives, preserving the matched-pairing and proper-nesting properties, whereas data-sequence complexity is based on combinational usage of relevant data repositories, such as data definition sequences and data use sequences. Through the analyzer devised and implemented in this paper, we eventually achieve systematic verification of syntactical correctness as well as effective validation of structural properness for these complicated and large-scale workflow models. As an experimental study, we apply the implemented analyzer to an exemplary large-scale and massively parallel workflow process model, the Large Bank Transaction Workflow Process Model, and show the structural complexity analysis results via a series of operational screens captured from the implemented analyzer.
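The matched-pairing and proper-nesting properties mentioned above are classic balanced-bracket conditions: every split/begin construct must be closed by a join/end of the same kind, in LIFO order. A minimal sketch of such a check (the token names are assumptions; the paper's analyzer works on full XPDL models, not flat token streams):

```python
# Stack-based matched-pairing / proper-nesting check over a linearized
# stream of control-structure tokens (token names are hypothetical).

PAIRS = {"and_join": "and_split",      # parallel-AND
         "xor_join": "xor_split",      # exclusive-OR
         "loop_end": "loop_begin",     # iterative-LOOP
         "subproc_end": "subproc_begin"}  # subprocess
OPENS = set(PAIRS.values())

def is_properly_nested(tokens):
    stack = []
    for t in tokens:
        if t in OPENS:
            stack.append(t)
        elif t in PAIRS:
            # The most recently opened construct must match this closer.
            if not stack or stack.pop() != PAIRS[t]:
                return False           # mismatched pair / improper nesting
    return not stack                   # every open construct was closed
```

For instance, `and_split, xor_split, xor_join, and_join` nests properly, while `and_split, xor_split, and_join, xor_join` interleaves the pairs and is rejected.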

A Protection Profile for E-Document Issuing System (온라인증명서 발급시스템 보호프로파일에 관한 연구)

  • Lee, Hyun-Jung;Won, Dong-Ho
    • Journal of the Korea Institute of Information Security & Cryptology / v.21 no.6 / pp.109-117 / 2011
  • We can use document issuance services provided by a school, bank, hospital, company, etc. either by visiting those facilities or simply by visiting their websites. Services available through the Internet let us use, at home or at the office at any time, the same services we would receive by actually going to those facilities. As much as this saves time and money, a problem also arises: information can be forged on the Internet or on a printed document, so security functions are needed to deal with it. This paper considers the possible security threats and derives the security functions that an online document issuance system should have, based on CC v3.1, so that anyone can use it as a reference when evaluating or introducing such a system.

Designing a low-power L1 cache system using aggressive data of frequent reference patterns

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • Journal of the Korea Society of Computer and Information / v.27 no.7 / pp.9-16 / 2022
  • Today, with the advent of the 4th industrial revolution, IoT (Internet of Things) systems are advancing rapidly, and various high-performance, large-capacity applications are emerging. Computing systems running these applications therefore need low-power, high-performance memory. In this paper, we propose an effective structure for the L1 cache, which consumes the most energy in a computing system. The proposed cache system consists largely of two parts: the L1 main cache and a buffer cache. The main cache has two banks, each organized as a 2-way set-associative cache. On an L1 cache hit, the data is copied into the buffer cache according to the proposed algorithm. According to simulation, the proposed L1 cache system improves the energy-delay product by about 65% compared to an existing 4-way set-associative cache.
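The structure described above can be sketched behaviorally: two banks of 2-way set-associative storage, plus a small buffer that receives a copy of data on an L1 hit so that re-references are served from the cheaper structure. This is a rough functional model under assumed parameters (set count, buffer size, LRU replacement, address interleaving), not the paper's energy-accurate design.

```python
from collections import OrderedDict

# Behavioral sketch of the described L1 organization (assumed sizes and
# policies): 2 banks x 2-way set-associative, with a small buffer cache
# that is filled on L1 hits so hot data is re-served cheaply.

class Bank:
    def __init__(self, n_sets, ways=2):
        self.sets = [OrderedDict() for _ in range(n_sets)]  # tag -> data
        self.ways = ways

    def access(self, tag, set_idx, data=None):
        s = self.sets[set_idx]
        if tag in s:
            s.move_to_end(tag)           # refresh LRU position
            return True, s[tag]
        if len(s) >= self.ways:
            s.popitem(last=False)        # evict least-recently-used way
        s[tag] = data
        return False, data

class L1System:
    def __init__(self, n_sets=64, buffer_entries=8):
        self.banks = [Bank(n_sets), Bank(n_sets)]
        self.n_sets = n_sets
        self.buffer = OrderedDict()      # small fully-associative buffer
        self.buffer_entries = buffer_entries

    def access(self, addr, data=None):
        if addr in self.buffer:          # cheapest path: buffer hit
            self.buffer.move_to_end(addr)
            return "buffer_hit"
        bank = self.banks[addr % 2]      # interleave addresses over banks
        set_idx = (addr // 2) % self.n_sets
        tag = addr // (2 * self.n_sets)
        hit, value = bank.access(tag, set_idx, data)
        if hit:                          # on L1 hit, copy into the buffer
            self.buffer[addr] = value
            if len(self.buffer) > self.buffer_entries:
                self.buffer.popitem(last=False)
            return "l1_hit"
        return "miss"
```

The energy saving in such designs comes from the second reference onward hitting the tiny buffer instead of the larger set-associative arrays.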

GCNXSS: An Attack Detection Approach for Cross-Site Scripting Based on Graph Convolutional Networks

  • Pan, Hongyu;Fang, Yong;Huang, Cheng;Guo, Wenbo;Wan, Xuelin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.4008-4023 / 2022
  • Since machine learning was introduced into cross-site scripting (XSS) attack detection, many researchers have conducted related studies and achieved significant results, such as saving time and labor by not maintaining the rule database required by traditional XSS detection methods. However, these approaches still face problems such as poor generalization ability and significant false negative rates (FNR) and false positive rates (FPR). Meanwhile, the automatic clustering property of graph convolutional networks (GCN) has attracted researchers' attention: in natural language processing (NLP), graph embeddings produced by a GCN cluster automatically in space without any training, meaning text data can be classified by the GCN embedding process alone, whereas earlier methods required training on labeled data after embedding to complete classification. Exploiting this auto-clustering feature together with labeled data, this research proposes an XSS detection approach, called GCNXSS, that mines the dependencies between the units constituting an XSS payload. First, GCNXSS transforms a URL into a homogeneous word graph based on word co-occurrence relationships. Then, it feeds the graph into the GCN model for graph embedding and obtains the classification results. Experiments show that GCNXSS achieves accuracy, precision, recall, F1-score, FNR, FPR, and prediction-time scores of 99.97%, 99.75%, 99.97%, 99.86%, 0.03%, 0.03%, and 0.0461 ms, respectively. Compared with existing methods, GCNXSS has lower FNR and FPR and stronger generalization ability.
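The first GCNXSS step described above, turning a URL into a word co-occurrence graph, can be sketched as follows. The tokenization rule and co-occurrence window are assumptions for illustration; the GCN embedding and classification stages are omitted entirely.

```python
import re
from collections import defaultdict

# Sketch of building a word co-occurrence graph from a URL/payload
# (assumed tokenizer and window size; the GCN stages are not shown).

def cooccurrence_graph(url: str, window: int = 2):
    """Return (nodes, weighted undirected edges) for the URL's tokens."""
    tokens = [t for t in re.split(r"[^A-Za-z0-9]+", url.lower()) if t]
    edges = defaultdict(int)
    for i, a in enumerate(tokens):
        # Each token co-occurs with the next (window - 1) tokens.
        for b in tokens[i + 1:i + window]:
            if a != b:
                edges[tuple(sorted((a, b)))] += 1
    return set(tokens), dict(edges)
```

For a payload such as `http://x.com/?q=<script>alert(1)</script>`, the tokens `script` and `alert` become adjacent graph nodes, and it is exactly such dependencies between payload units that the downstream GCN is meant to exploit.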

Parallel Implementations of Digital Focus Indices Based on Minimax Search Using Multi-Core Processors

  • HyungTae, Kim;Duk-Yeon, Lee;Dongwoon, Choi;Jaehyeon, Kang;Dong-Wook, Lee
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.542-558 / 2023
  • A digital focus index (DFI) is a value used to determine image focus in scientific apparatus and smart devices. Automatic focus (AF) is an iterative and time-consuming procedure; however, its processing time can be reduced using a graphics processing unit (GPU) or a multi-core processor (MCP). In this study, parallel architectures of a minimax search algorithm (MSA) are applied to two DFIs: the range algorithm (RA) and image contrast (CT). Both DFIs are histogram-based, but parallel computation of a histogram is conventionally inefficient because of bank conflicts in shared memory. The parallel architectures of RA and CT are therefore built on a parallel reduction for the MSA, performed through parallel pairwise comparison of image pixels, halving the array in every step; the array size finally decreases to one, and the minimax is determined at the last reduction. Kernels for the architectures are written with open-source software to keep them relatively platform-independent. The kernels were tested on a hexa-core PC and an embedded device using Lenna images of various sizes matching the resolutions of industrial cameras. The kernels' performance was investigated in terms of processing speed and computational acceleration; the maximum acceleration was 32.6× in the best case, and the MCP exhibited the higher performance.
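The halving reduction described above can be shown in serial form: each step compares pairs of elements and keeps their running (min, max), halving the array until one pair remains. In the paper's GPU/MCP kernels each step's comparisons run in parallel; the serial Python below is a sketch of the same tree-reduction logic, not the kernels themselves.

```python
# Serial sketch of the minimax tree reduction: pairwise comparisons
# halve the array each step until a single (min, max) pair remains.
# On a GPU or multi-core processor, each step runs in parallel.

def minimax_reduce(values):
    pairs = [(v, v) for v in values]          # (min, max) per element
    while len(pairs) > 1:
        if len(pairs) % 2:                    # odd length: carry last pair
            pairs.append(pairs[-1])
        pairs = [(min(a[0], b[0]), max(a[1], b[1]))
                 for a, b in zip(pairs[::2], pairs[1::2])]
    return pairs[0]                           # global (min, max)
```

This needs only log2(n) steps of pairwise work, which is why the reduction avoids the shared-memory bank conflicts that make a parallel histogram inefficient: each thread reads a disjoint pair instead of contending for histogram bins.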