• Title/Summary/Keyword: Internet Applications


Comparative Analysis of ViSCa Platform-based Mobile Payment Service with other Cases (스마트카드 가상화(ViSCa) 플랫폼 기반 모바일 결제 서비스 제안 및 타 사례와의 비교분석)

  • Lee, June-Yeop;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.163-178 / 2014
  • This study proposes "Virtualization of Smart Cards (ViSCa)", a security system that aims to provide a multi-device platform for deploying services that require a strong security protocol, both for access and authentication and for the execution of their applications, and compares the ViSCa platform-based mobile payment service with other similar cases. Today, the appearance of new ICT, the diffusion of new user devices (such as smartphones and tablet PCs) and the growth of the Internet penetration rate are creating many world-shaking services; yet in most of these applications private information has to be shared, which means that security breaches and illegal access to that information are real threats that have to be addressed. Mobile payment services, one class of these innovative services, face the same threats, because they often require user identification, an authentication procedure, and the sharing of confidential data; thus, an extra layer of security is needed in their communication and execution protocols. The ViSCa concept is a holistic, centrally managed security system that aims to provide a ubiquitous multi-device platform for deploying mobile payment services that demand a strong security protocol, both for access and authentication and for the execution of their applications. In this sense, ViSCa offers full interoperability and full access from any user device without any loss of security, and the concept prevents possible attacks by third parties, guaranteeing the confidentiality of personal data, bank accounts, and private financial information. The ViSCa concept is split into two phases: the execution of the user authentication protocol on the user device, and the cloud architecture that executes the secure application. Thus, secure service access is guaranteed at any time, anywhere, and through any device that supports the required security mechanisms. The security level is improved by using virtualization technology in the cloud: terminal virtualization is used to virtualize the smart card hardware, and the virtualized smart cards are managed as a whole through mobile cloud technology in the ViSCa platform-based mobile payment service. This entire process is referred to as Smart Card as a Service (SCaaS). The ViSCa platform-based mobile payment service virtualizes the smart card used as a payment means and loads it into the mobile cloud; the user authenticates through an application, logs on to the mobile cloud, and chooses one of the virtualized smart cards as a payment method. To define the scope of the comparison, we categorized the mobile payment services discussed in prior research by feature and service type. Both groups store credit card data on the mobile device and settle payments at offline merchants. Depending on where the electronic financial transaction data is stored, the services fall into two main types: the "App Method," which stores the data on a server connected to the application, and the "Mobile Card Method," which stores the financial transaction data in an Integrated Circuit (IC) chip built into the mobile device's secure element (SE). Drawing on prior research on the acceptance factors of mobile payment services and their market environment, we derived six key factors for the comparative analysis: economy, generality, security, convenience (ease of use), applicability, and efficiency. Within the chosen group, we compared and analyzed the selected cases against the ViSCa platform-based mobile payment service.

A Smoothing Data Cleaning based on Adaptive Window Sliding for Intelligent RFID Middleware Systems (지능적인 RFID 미들웨어 시스템을 위한 적응형 윈도우 슬라이딩 기반의 유연한 데이터 정제)

  • Shin, DongCheon;Oh, Dongok;Ryu, SeungWan;Park, Seikwon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.1-18 / 2014
  • Over the past years, RFID/SN has been an elementary technology in a variety of applications for ubiquitous environments, especially the Internet of Things. However, one of the obstacles to the widespread deployment of RFID technology is the inherent unreliability of the data streams produced by tag readers. In particular, false readings such as lost readings and mistaken readings need to be handled by RFID middleware systems, because false readings ultimately degrade the quality of application services through the dirty data delivered by the middleware. Consequently, to achieve a higher quality of service, an RFID middleware system is responsible for intelligently dealing with false readings so that clean data is delivered to applications in accordance with the tag reading environment. One popular technique for compensating for false readings is the sliding window filter. In a sliding window scheme, intelligently determining the optimal window size is a nontrivial but important task for RFID middleware systems that seek to reduce false readings, especially in mobile environments. In this paper, we propose a new adaptive RFID data cleaning scheme based on window sliding for a single tag, with the aim of reducing false readings through intelligent window adaptation. Unlike previous work based on a binomial sampling model, we introduce weighted averaging. Our insight starts from the need to differentiate past readings from current readings, since more recent readings may indicate tag transitions more accurately. Owing to the weighted averaging, our scheme is expected to adapt the window size dynamically and efficiently even for non-homogeneous reading patterns in mobile environments. In addition, we analyze the reading patterns within the window and the effects of shrinking the window so that more accurate and efficient decisions on window adaptation can be made. With our scheme, RFID middleware systems can be expected to provide applications with cleaner data and thus ensure a high quality of the intended services.
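A minimal sketch of the windowing idea described above: read cycles are weighted so that newer cycles count more, and the window is grown to smooth dropouts or shrunk to react to a departing tag. The class name, thresholds, and exponential weights are illustrative assumptions, not the authors' exact algorithm or parameters.

```python
from collections import deque

class AdaptiveWindowCleaner:
    """Smoothing filter for a single RFID tag based on weighted averaging."""

    def __init__(self, w_min=2, w_max=20, decay=0.8,
                 present_thresh=0.5, transition_thresh=0.2):
        self.w_min, self.w_max = w_min, w_max
        self.decay = decay                  # weight ratio between consecutive cycles
        self.present_thresh = present_thresh
        self.transition_thresh = transition_thresh
        self.window = deque(maxlen=w_max)   # raw 0/1 read observations
        self.size = w_min                   # current logical window size

    def _weighted_rate(self, cycles):
        """Weighted average of 0/1 observations, newest cycle weighted most."""
        if not cycles:
            return 0.0
        weights = [self.decay ** i for i in range(len(cycles))]  # index 0 = newest
        num = sum(w * obs for w, obs in zip(weights, reversed(cycles)))
        return num / sum(weights)

    def observe(self, read_ok):
        """Feed one read cycle (True = tag answered) and return the cleaned state."""
        self.window.append(1 if read_ok else 0)
        recent = list(self.window)[-self.size:]
        rate = self._weighted_rate(recent)

        if rate < self.transition_thresh:
            # Recent weighted evidence says the tag left: shrink to react quickly.
            self.size = max(self.w_min, self.size // 2)
            return False                    # report "absent"
        if rate < self.present_thresh:
            # Noisy but probably still present: widen the window to smooth dropouts.
            self.size = min(self.w_max, self.size + 1)
        return True                         # report "present"


# Illustrative stream: solid reads, a burst of dropped readings, then departure.
cleaner = AdaptiveWindowCleaner()
stream = [1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0]
print([cleaner.observe(bool(r)) for r in stream])
```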

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information / v.14 no.12 / pp.157-167 / 2009
  • Recently, leaks of confidential and personal information have been occurring on the Internet more frequently than ever before. Most such security incidents are caused by attacks on vulnerabilities in carelessly developed web applications. Attacks on web applications cannot be detected by existing firewalls and intrusion detection systems, and signature-based detection has only a limited capability to detect new threats. Therefore, much of the research on detecting attacks against web applications employs anomaly-based detection methods built on web traffic analysis. Such research faces three problems: how to analyze given web traffic accurately, the system performance needed to inspect the application payload of packets in order to detect application-layer attacks, and the maintenance and cost of the many newly installed network security devices. The UTM (Unified Threat Management) system was suggested as a solution intended to resolve all of these security problems at once, but it is not widely used because of its low efficiency and high cost. Moreover, the web filter, which performs one of the functions of a UTM system, cannot adequately detect the variety of recent, sophisticated attacks on web applications. To resolve these problems, web application firewalls are being studied as a new kind of network security system. Because such studies focus on speeding up packet processing with expensive dedicated hardware, the cost of deploying a web application firewall keeps rising, and current anomaly-based detection technologies that do not take the characteristics of the web application into account produce many false positives and false negatives. To reduce false positives and false negatives, this study proposes a real-time anomaly detection method based on analyzing the lengths of the parameter values contained in web clients' requests. It also designs and proposes a WAF (Web Application Firewall) that can be applied to a low-priced or legacy system and process application data without dedicated hardware, and it suggests a method for resolving the sluggish performance caused by copying packets into the application area for application data processing. Consequently, this study shows how to deploy an effective web application firewall at low cost, at a time when deploying yet another security system is considered a burden because of the many network security systems already in use.
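A minimal sketch of length-based anomaly detection in the spirit of the abstract above: per-parameter value lengths are learned during a training phase, and requests with unusually long values are flagged. The class, the Chebyshev-style threshold, and the sample requests are assumptions for illustration, not the paper's exact decision rule.

```python
import math
from collections import defaultdict

class ParamLengthModel:
    """Flag requests whose parameter values are far longer than usual."""

    def __init__(self, k=3.0):
        self.k = k                          # threshold in standard deviations
        self.stats = defaultdict(list)      # (url, param) -> observed value lengths

    def train(self, url, params):
        for name, value in params.items():
            self.stats[(url, name)].append(len(value))

    def is_anomalous(self, url, params):
        for name, value in params.items():
            lengths = self.stats.get((url, name))
            if not lengths:
                return True                 # unknown parameter: treat as suspicious
            mean = sum(lengths) / len(lengths)
            var = sum((x - mean) ** 2 for x in lengths) / len(lengths)
            # "+1" gives a little slack when the training lengths never varied.
            if len(value) > mean + self.k * math.sqrt(var) + 1:
                return True
        return False


model = ParamLengthModel()
for _ in range(100):
    model.train("/login", {"user": "alice", "pw": "secret12"})

print(model.is_anomalous("/login", {"user": "alice", "pw": "hunter2"}))          # False
print(model.is_anomalous("/login", {"user": "alice' OR '1'='1' --", "pw": "x"})) # True
```

Overly long values are a common symptom of SQL injection or buffer overflow payloads, which is why length is a cheap yet useful per-parameter feature for a low-cost WAF.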

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media built on highly interactive Web 2.0 applications has given consumers and companies very user-friendly means to communicate with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released on the Internet in real time. For that reason, many researchers and marketers regard social media content as a source of information for business analytics that can yield business insights, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found weaknesses in these methods: they are often technically complicated and not sufficiently user-friendly to support business decisions and planning. In this study, we formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from the initial data gathering stage to the final presentation. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as an open API, search tools, a DB-to-DB interface, or purchasing the content. The second phase is pre-processing, which generates useful material for meaningful analysis; if garbage data is not removed, the results of the social media analysis will not provide meaningful and useful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, in which the cleansed social media content is analyzed. The qualified data set includes not only user-generated content but also identifying information such as creation date, author name, user ID, content ID, hit counts, reviews or replies, favorites, and so on. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is used for reputation analysis, and there are further applications such as stock prediction, product recommendation, and sales forecasting. The last phase is the visualization and presentation of the analysis results. The major focus of this phase is to explain the results and help users comprehend their meaning; therefore, to the extent possible, the deliverables should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blog posts, forum posts, and news articles. After collecting the social media content, we built instant-noodle-specific language resources for data manipulation and analysis using natural language processing, and we classified the content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free software, namely the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-source R packages. With a swift glance, business actors can quickly detect the areas that are weak, strong, positive, negative, quiet, or loud. The heat map shows the movement of sentiment or volume as a category-by-time matrix in which the density of color indicates activity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers who need to understand the "big picture" quickly, since a tree map can present buzz volume and sentiment for a given period in a hierarchical structure. This case study offers real-world business insights from market sensing, demonstrating to practically minded business users how they can use these results for timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not only in the food industry but in other industries as well.
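A minimal sketch of the qualify/analyze/visualize phases described above, assuming collection has already produced raw posts. The paper works in Korean with R packages (TM, KoNLP, ggplot2, plyr); this Python version with a toy English lexicon and made-up posts only illustrates the phase structure, not the paper's actual resources or charts.

```python
import re
from collections import Counter

# Toy, hypothetical lexicon; the paper builds a domain-specific Korean lexicon.
POSITIVE = {"tasty", "great", "love"}
NEGATIVE = {"salty", "bad", "expensive"}

def qualify(posts):
    """Qualifying phase: strip markup/URLs and drop empty documents."""
    cleaned = (re.sub(r"<[^>]+>|http\S+", " ", p).lower().strip() for p in posts)
    return [p for p in cleaned if p]

def analyze(posts):
    """Analyzing phase: per-post polarity plus overall buzz/term counts."""
    polarity, terms = [], Counter()
    for p in posts:
        tokens = re.findall(r"[a-z']+", p)
        terms.update(tokens)
        score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
        polarity.append(score)
    return polarity, terms

def visualize(polarity, terms, top=5):
    """Visualizing phase: a text stand-in for the paper's graphs and word cloud."""
    print("buzz volume:", len(polarity))
    print("net sentiment:", sum(polarity))
    print("top terms:", terms.most_common(top))

posts = ["<p>This ramen is so tasty, love it! http://example.com</p>",
         "A bit salty and expensive for what you get."]
visualize(*analyze(qualify(posts)))
```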

A Novel Idle Mode Operation in IEEE 802.11 WLANs: Prototype Implementation and Performance Evaluation (IEEE 802.11 WLAN을 위한 Idle Mode Operation: Prototype 구현 및 성능 측정)

  • Jin, Sung-Geun;Han, Kwang-Hun;Choi, Sung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.2A / pp.152-161 / 2007
  • IEEE 802.11 Wireless Local Area Network (WLAN) has become a prevailing technology for broadband wireless Internet access, and new applications such as Voice over WLAN (VoWLAN) are fast emerging. For battery-powered VoWLAN devices, extending the standby time is a key concern for market acceptance, while today's 802.11 is not optimized for such operation. In this paper, we propose a novel Idle Mode operation comprising paging, idle handoff, and delayed handoff. Under the idle mode operation, a Mobile Host (MH) does not need to perform a handoff within a predefined Paging Area (PA); only when the MH enters a new PA is an idle handoff performed, with a minimal level of signaling. In the absence of such an idle mode operation, both IP paging and the Power Saving Mode (PSM) have so far been considered as alternatives, even though they are not efficient approaches. We implement the proposed scheme to prove its feasibility. The implemented prototype demonstrates that the proposed scheme outperforms the legacy alternatives with respect to energy consumption, thus extending the standby time.
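A minimal sketch of the handoff rule stated in the abstract above, under the assumption that access points advertise the identifier of the paging area they belong to. The class and beacon format are hypothetical illustrations, not the authors' protocol or prototype.

```python
class IdleMobileHost:
    """Idle mode sketch: signal only when the Paging Area (PA) changes."""

    def __init__(self, current_pa):
        self.current_pa = current_pa
        self.signaling_messages = 0

    def on_beacon(self, ap_id, advertised_pa):
        if advertised_pa == self.current_pa:
            # Same paging area: stay idle, no association, no signaling.
            return "stay idle"
        # New paging area: perform a lightweight idle handoff (re-register).
        self.current_pa = advertised_pa
        self.signaling_messages += 1
        return f"idle handoff to PA {advertised_pa} via AP {ap_id}"


mh = IdleMobileHost(current_pa=1)
for ap, pa in [("AP-3", 1), ("AP-4", 1), ("AP-9", 2)]:
    print(mh.on_beacon(ap, pa))
print("signaling messages:", mh.signaling_messages)   # 1, vs 3 for per-AP handoff
```

The point of the rule is exactly the energy saving the paper measures: an idle host moving among access points inside one paging area generates no handoff signaling at all.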

Image Contrast and Sunlight Readability Enhancement for Small-sized Mobile Display (소형 모바일 디스플레이의 영상 컨트라스트 및 야외시인성 개선 기법)

  • Chung, Jin-Young;Hossen, Monir;Choi, Woo-Young;Kim, Ki-Doo
    • Journal of IKEEE / v.13 no.4 / pp.116-124 / 2009
  • Recently, the CPU performance of the modem chipsets and multimedia processors in mobile phones has become as high as that of notebook PCs, which is why the mobile phone has emerged as a leading icon of convergence in consumer electronics. Various mobile phone applications such as DMB, digital cameras, video telephony, and full Internet browsing are now offered to consumers, and to meet these demands image quality has become increasingly important. A mobile phone is a portable device used both indoors and outdoors, so the deterioration of image quality under varying ambient light must be overcome. Furthermore, touch windows are now common on mobile display panels, and they cause contrast loss because of the low transmittance of the ITO film. This paper presents an image enhancement algorithm to be embedded in an image enhancement SoC. For contrast enhancement, we propose a clipped histogram stretching method that adapts to the input images, together with an S-shaped curve and a gain/offset method for static applications. For sunlight readability enhancement, the CIELCh color space is used to control the lightness and chroma components according to the value reported by the light sensor. Finally, the performance of the proposed algorithm is evaluated using the histogram, RGB pixel distribution, entropy, and dynamic range of the resulting images. We expect the proposed algorithm to be suitable for image enhancement in embedded SoC systems for small-sized mobile displays.
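A generic sketch of clipped histogram stretching, the contrast technique named above: the darkest and brightest few percent of pixels are saturated and the remaining range is stretched linearly, so the mapping adapts to each input image. The clip percentage and the NumPy formulation are assumptions for illustration, not the paper's exact SoC implementation.

```python
import numpy as np

def clipped_histogram_stretch(gray, clip_percent=1.0):
    """Contrast stretch of an 8-bit grayscale image after clipping histogram tails."""
    gray = np.asarray(gray, dtype=np.float32)
    lo = np.percentile(gray, clip_percent)          # lower clip point
    hi = np.percentile(gray, 100.0 - clip_percent)  # upper clip point
    if hi <= lo:                                    # flat image: nothing to stretch
        return gray.astype(np.uint8)
    stretched = (np.clip(gray, lo, hi) - lo) * (255.0 / (hi - lo))
    return stretched.astype(np.uint8)


# Low-contrast synthetic image occupying only gray levels 100..150.
img = np.random.randint(100, 151, size=(64, 64))
out = clipped_histogram_stretch(img, clip_percent=1.0)
print(img.min(), img.max(), "->", out.min(), out.max())   # range expands toward 0..255
```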


Importances of Smart Phone Attributes by Pursuit Benefits (추구편익에 따른 스마트폰 속성 중요도)

  • Kim, Mi-Ae;Joo, Young-Jin
    • The Journal of Society for e-Business Studies / v.20 no.1 / pp.99-115 / 2015
  • This study aims to classify the pursuit benefits of smart-phone users, to identify smart-phone market segments based on those benefits, and to analyze the relative importance of smart-phone attributes for each segment. We found that smart-phone users pursue a network benefit in addition to the two traditional benefits (the utilitarian benefit and the hedonic benefit). According to the levels of these three pursuit benefits, smart-phone users can be classified into four segments: an All Benefits cluster, a Utilitarian-Network Benefits cluster, a Hedonic-Network Benefits cluster, and a Non-Network Benefits cluster. We also verified that, across these four segments, there are significant differences in the relative importance of seven smart-phone attributes: handset price, handset brand, handset speed, applications, tariff, mobile Internet quality, and the number of users of the same service.
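The abstract does not state which segmentation method was used, so the sketch below is only a hypothetical illustration of how four benefit-based segments like those described could be derived: synthetic respondent scores on the three pursuit benefits are clustered with k-means. The data, the number of respondents, and the choice of k-means are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical survey data: each row is a respondent's scores (1-7 scale) on the
# utilitarian, hedonic, and network pursuit benefits.
rng = np.random.default_rng(0)
benefit_scores = np.vstack([
    rng.normal([6, 6, 6], 0.5, (50, 3)),   # "all benefits"-like respondents
    rng.normal([6, 3, 6], 0.5, (50, 3)),   # utilitarian-network
    rng.normal([3, 6, 6], 0.5, (50, 3)),   # hedonic-network
    rng.normal([5, 5, 2], 0.5, (50, 3)),   # non-network
]).clip(1, 7)

segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(benefit_scores)
for k in range(4):
    centroid = benefit_scores[segments == k].mean(axis=0).round(1)
    print(f"segment {k}: utilitarian/hedonic/network = {centroid}")
```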

Efficient Mining of Frequent Subgraph with Connectivity Constraint

  • Moon, Hyun-S.;Lee, Kwang-H.;Lee, Do-Heon
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.267-271 / 2005
  • The goal of data mining is to extract new and useful knowledge from large-scale datasets. As the amount of available data grows explosively, it has become vitally important to develop faster data mining algorithms for various types of data. Recently, interest in data mining algorithms that operate on graphs has increased; in particular, mining frequent patterns from structured data such as graphs has attracted many research groups. A graph is a highly adaptable representation scheme used in many domains, including chemistry, bioinformatics, and physics. For example, the chemical structure of a substance can be modelled by an undirected labelled graph in which each node corresponds to an atom and each edge to a chemical bond between atoms. The Internet can also be modelled as a directed graph in which each node corresponds to a web site and each edge to a hypertext link between web sites. Notably, in bioinformatics, various kinds of newly discovered data such as gene regulation networks or protein interaction networks can be modelled as graphs. There have been a number of attempts to extract useful knowledge from such graph-structured data, and one of the most powerful analysis tools is frequent subgraph analysis: recurring patterns in graph data can provide incomparable insight into that data. However, finding recurring subgraphs is computationally very expensive. At the core of the problem lie two challenging subproblems: 1) subgraph isomorphism and 2) enumeration of subgraphs. The former includes the subgraph isomorphism problem (does graph A contain graph B?) and the graph isomorphism problem (are graphs A and B the same?); even these simplified versions of the subgraph mining problem are known to be NP-complete or isomorphism-complete, and no polynomial-time algorithm is known so far. The latter is also difficult: without any constraint we would have to generate all 2^n subgraphs, where n is the number of vertices of the input graph. To find frequent subgraphs in larger graph databases, it is therefore essential to impose appropriate constraints on the subgraphs to be mined. Most current approaches focus on the frequency of a subgraph: the higher its frequency, the more attention it should receive. Recently, several algorithms that use level-by-level approaches to find frequent subgraphs have been developed. Some emerging applications suggest that other constraints, such as connectivity, can also be useful in mining subgraphs, since more strongly connected parts of a graph are more informative. If we restrict the set of subgraphs to be mined to more strongly connected parts, the computational complexity can be decreased significantly. In this paper, we present an efficient algorithm to mine frequent subgraphs that are more strongly connected. An experimental study shows that the algorithm scales to larger graphs with more than ten thousand vertices.
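A brute-force sketch of the two subproblems named above, connected-subgraph enumeration and subgraph isomorphism testing, on a toy three-graph database. It deliberately shows the exponential blow-up that the paper's connectivity constraint and efficient algorithm are designed to avoid; the helper names, the coarse duplicate filter, and the example database are assumptions, and networkx is used only for the isomorphism test.

```python
from itertools import combinations
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def connected_subgraphs(graph, max_size):
    """Yield the connected induced subgraphs of `graph` with up to `max_size` nodes."""
    for size in range(2, max_size + 1):
        for nodes in combinations(graph.nodes, size):
            candidate = graph.subgraph(nodes)
            if nx.is_connected(candidate):          # the connectivity constraint
                yield candidate

def support(pattern, database):
    """Number of database graphs containing a subgraph isomorphic to `pattern`."""
    return sum(GraphMatcher(g, pattern).subgraph_is_isomorphic() for g in database)

# Tiny illustrative database of three graphs; a triangle occurs in two of them.
database = [nx.cycle_graph(3), nx.complete_graph(4), nx.path_graph(4)]
min_support = 2

seen, frequent = set(), []
for pattern in connected_subgraphs(database[0], max_size=3):
    key = tuple(sorted(d for _, d in pattern.degree()))  # coarse duplicate filter
    if key in seen:
        continue
    seen.add(key)
    if support(pattern, database) >= min_support:
        frequent.append(pattern)

for p in frequent:
    print(sorted(p.edges))    # the single edge and the triangle are frequent
```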


An Improved Location Polling Algorithm for Location-Based Alert Services (위치기반 경보서비스를 위한 향상된 위치획득 알고리즘)

  • Song, Jin-Woo;Ahn, Byung-Ik;Lee, Kwang-Jo;Han, Jung-Suk;Yang, Sung-Bong
    • Journal of KIISE:Databases / v.37 no.1 / pp.22-32 / 2010
  • Location-based services have expanded rapidly in domestic and overseas markets thanks to technological advances and the increasing use of the wireless Internet, and various studies have been carried out on efficiently managing the location information of moving objects. A basic location-based alert service automatically sends an alert message when a user enters or leaves a specific location, and it is expected to become one of the most important location-based services. Location-based alert services require a location polling method to acquire the current locations of a large number of moving objects. However, a simple periodic location polling method causes severe system overload, because the system must update the location information of the moving objects ceaselessly. Moreover, most location polling algorithms for location-based alert services are not suitable for mobile users with dynamic, unsteady movement patterns. In this paper, we propose an improved location polling algorithm for location-based alert services that reduces the amount of location information acquired and therefore decreases the system load. Various experiments show that the proposed algorithm outperforms the other algorithms.
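A generic sketch of the adaptive-polling idea that motivates such work (not necessarily this paper's algorithm): an object cannot cross an alert-zone boundary sooner than its distance to the nearest boundary divided by its maximum speed, so the next poll can be deferred by roughly that long instead of polling on a fixed period. The function, zone format, and bounds are illustrative assumptions.

```python
import math

def next_poll_delay(position, zones, max_speed, min_delay=5.0, max_delay=300.0):
    """Seconds until the next location poll for one moving object.

    `zones` is a list of circular alert zones given as (cx, cy, radius),
    with positions in metres and speed in metres per second.
    """
    nearest = math.inf
    for cx, cy, radius in zones:
        d_center = math.hypot(position[0] - cx, position[1] - cy)
        nearest = min(nearest, abs(d_center - radius))  # distance to the boundary
    if not math.isfinite(nearest):
        return max_delay                                 # no zones: poll rarely
    return min(max_delay, max(min_delay, nearest / max_speed))


zones = [(0.0, 0.0, 100.0)]                 # one alert zone of radius 100 m
print(next_poll_delay((500.0, 0.0), zones, max_speed=10.0))   # far away: poll in ~40 s
print(next_poll_delay((105.0, 0.0), zones, max_speed=10.0))   # near the edge: poll soon
```

Deferring polls for objects far from every zone is what cuts the ceaseless location updates that overload a naive periodic poller.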

XML Web Services for Learning Contents Based on a Pedagogical Design Model (교수법적 설계 모델링에 기반한 학습 컨텐츠의 XML 웹 서비스 구축)

  • Shin, Haeng-Ja;Park, Kyung-Hwan
    • Journal of Korea Multimedia Society / v.7 no.8 / pp.1131-1144 / 2004
  • In this paper, we investigate a problem with e-learning systems in e-business environments and introduce a method for solving it. Specifically, the existing Web-hosted and ASP (Application Service Provider)-oriented service models make it difficult for different kinds of systems to cooperate and integrate. We therefore produce sharable and reusable learning objects by extracting units of reuse from pedagogical designs; we call these LIOs (Learning Item Objects). This modeling is used to construct XML Web services. The units of reuse drawn from pedagogical designs are tutorial, resource, case example, simulation, problem, test, discovery, and discussion, and they map onto the introduction, fact, try, quiz, test, link-more, and tell-more LIO learning objects. These typed LIOs are stored as metadata along with the information on the content location. Each LIO is designed from components and exposed through an XML Web service interface. These services are modular applications that use the standard SOAP (Simple Object Access Protocol), can be located on any computer over the Internet, and can publish, find, and bind to services, which guarantees interoperation and integration among different kinds of systems. As a result, the integration problem of e-learning systems in e-business environments is resolved, the understanding of learning objects based on pedagogical design is improved for learners and instructional designers, and educational organizations can expect lower costs when building e-learning systems.
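A minimal sketch of the kind of metadata the abstract describes (an LIO type plus the location of its content) serialized into a SOAP 1.1 envelope with the Python standard library. The field and element names are hypothetical illustrations, not the schema or WSDL actually used in the system.

```python
from dataclasses import dataclass
import xml.etree.ElementTree as ET

@dataclass
class LIO:
    """One Learning Item Object record: its type plus where the content lives."""
    lio_id: str
    lio_type: str      # e.g. introduction, fact, try, quiz, test, link-more, tell-more
    title: str
    content_url: str

def to_soap_body(lio: LIO) -> str:
    """Serialize an LIO as the body of a SOAP 1.1 envelope."""
    env = ET.Element("soap:Envelope",
                     {"xmlns:soap": "http://schemas.xmlsoap.org/soap/envelope/"})
    body = ET.SubElement(env, "soap:Body")
    item = ET.SubElement(body, "LearningItemObject", {"id": lio.lio_id})
    ET.SubElement(item, "Type").text = lio.lio_type
    ET.SubElement(item, "Title").text = lio.title
    ET.SubElement(item, "ContentLocation").text = lio.content_url
    return ET.tostring(env, encoding="unicode")

quiz = LIO("lio-042", "quiz", "HTML basics check", "http://example.com/lio/042")
print(to_soap_body(quiz))
```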
