• Title/Summary/Keyword: Client-Server System

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.19-28, 2012
  • Since the internet era, we have been moving toward a ubiquitous society, and there is growing interest in multimodal interaction technology, which lets an audience interact naturally with the computing environment at exhibition venues such as galleries, museums, and parks. There are also attempts to provide additional services based on audience location information, or to improve and extend the interaction between exhibits and audience by analyzing usage patterns. To provide multimodal interaction services at an exhibition, it is essential to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it obtains the real-time location of fast-moving subjects, making it a key technology in any field requiring location-tracking services. However, because GPS locates subjects via satellites, it cannot be used indoors, where satellite signals cannot be received. For this reason, indoor location tracking has been studied using very-short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings: the audience must carry an additional sensor device, and deployment becomes difficult and expensive as the density of the target area grows. In addition, a typical exhibition environment contains many obstacles to the network, which degrades system performance. Above all, interaction methods built on these older, device-based technologies cannot provide a natural service, and because such systems rely on sensor recognition, every user must carry a device, which limits the number of users who can use the system simultaneously. To make up for these shortcomings, this study proposes a technology that obtains accurate user location information through location mapping using smartphone Wi-Fi and a 3D camera. We use the signal strength of wireless LAN access points to build a low-cost indoor location tracking system: APs are cheaper than the devices used in other tracking techniques, and installing software on the user's mobile device turns it directly into a tracking-system device. For the 3D camera we use the Microsoft Kinect sensor, which can discriminate depth and human figures within its field of view, making it a low-cost way to extract users' body, vector, and acceleration information. We confirm the location of audience members using the cell ID obtained from the Wi-Fi signal. By using smartphones as the base device for the location service, we eliminate the need for additional tagging devices and provide an environment in which multiple users can receive the interaction service simultaneously. 3D cameras placed in each cell area obtain precise location and status information: connected to the Camera Client, they compute the mapping information aligned to each cell and extract the users' exact locations along with the audience's status and behavior patterns.
The location mapping technique of the Camera Client reduces the error rate of indoor location services, increases the accuracy of individual discrimination within an area by discriminating individuals from body information, and lays the foundation for multimodal interaction technology at exhibitions. The computed data and information enable users to receive appropriate interaction services through the main server.
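
As a rough illustration of the Wi-Fi cell-ID step described above, the Python sketch below resolves a user's cell by picking the access point with the strongest received signal. The cell names and RSSI values are invented for the example and are not from the paper.

```python
# Hypothetical sketch of cell resolution from Wi-Fi signal strength.
# AP/cell names and RSSI readings are illustrative assumptions.

from typing import Dict

def resolve_cell(rssi_by_ap: Dict[str, float]) -> str:
    """Return the cell ID of the access point with the strongest signal."""
    # RSSI is in negative dBm; the value closest to zero is the strongest.
    return max(rssi_by_ap, key=rssi_by_ap.get)

scan = {"cell-A": -71.0, "cell-B": -54.5, "cell-C": -80.2}
print(resolve_cell(scan))  # -> "cell-B"
```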

Design of a Crowd-Sourced Fingerprint Mapping and Localization System (군중-제공 신호지도 작성 및 위치 추적 시스템의 설계)

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering, v.2 no.9, pp.595-602, 2013
  • WiFi fingerprinting is well known as an effective localization technique for indoor environments. However, it requires a large set of pre-built fingerprint maps covering the entire space, and because of environmental changes these maps must be rebuilt or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping has attracted much interest from researchers: many volunteer users share the WiFi fingerprints they collect in a given environment, so the fingerprint maps can be kept up to date automatically. In most previous systems, however, individual users had to enter their positions manually to build their local fingerprint maps, and the systems had no principled mechanism for keeping the maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization (CMAL) system. The proposed system can not only automatically build and update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track their positions using the up-to-date maps. The CMAL system consists of multiple clients running on individual smartphones to collect fingerprints, and a central server that maintains a database of fingerprint maps. Each client contains a particle filter-based WiFi SLAM engine that tracks the smartphone user's position and builds a local fingerprint map. The server adopts a Gaussian interpolation-based error filtering algorithm to maintain the integrity of the fingerprint maps. Through various experiments, we show the high performance of our system.
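
To make the fingerprinting idea concrete, here is a minimal sketch of nearest-neighbor matching in RSSI space against a fingerprint map. The paper's particle filter-based SLAM engine and Gaussian interpolation filtering go well beyond this; all map entries and positions below are assumed values.

```python
# Minimal WiFi-fingerprinting sketch: locate a device by finding the stored
# fingerprint closest to the observed one in RSSI space. Illustrative only.

import math
from typing import Dict, List, Tuple

Fingerprint = Dict[str, float]                       # AP BSSID -> RSSI (dBm)
MapEntry = Tuple[Fingerprint, Tuple[float, float]]   # (fingerprint, (x, y))

def rssi_distance(a: Fingerprint, b: Fingerprint, missing: float = -100.0) -> float:
    """Euclidean distance over the union of observed APs; unseen APs get a floor value."""
    aps = set(a) | set(b)
    return math.sqrt(sum((a.get(ap, missing) - b.get(ap, missing)) ** 2 for ap in aps))

def localize(observed: Fingerprint, fp_map: List[MapEntry]) -> Tuple[float, float]:
    """Return the position attached to the closest stored fingerprint."""
    _, position = min(fp_map, key=lambda entry: rssi_distance(observed, entry[0]))
    return position

fp_map = [({"ap1": -40.0, "ap2": -70.0}, (0.0, 0.0)),
          ({"ap1": -65.0, "ap2": -45.0}, (5.0, 3.0))]
print(localize({"ap1": -62.0, "ap2": -48.0}, fp_map))  # -> (5.0, 3.0)
```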

Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol;Kim, Youngchul;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences, v.41 no.11, pp.1515-1527, 2016
  • With the growth of big data, machine learning, and cloud computing, storage that can hold large amounts of unstructured data has recently become increasingly important, and commodity-hardware-based distributed file systems such as MAHA-FS, GlusterFS, and the Ceph file system have received much attention for their scale-out capability and low cost. For data fault tolerance, most of these file systems initially used replication, but as storage grows to tens or hundreds of petabytes, the low space efficiency of replication has come to be seen as a problem. This paper applies an erasure-coding fault-tolerance policy to MAHA-FS for high space efficiency and introduces the VDelta technique to solve the data consistency problem. We compare the performance of two file systems with different I/O processing architectures, the server-centric MAHA-FS and the client-centric GlusterFS, and find that the erasure coding performance of MAHA-FS is better than that of GlusterFS.
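
The space-efficiency argument can be illustrated with the simplest possible erasure code, a single XOR parity block: k data blocks survive any one loss at (k+1)/k storage overhead, versus 2x or 3x for replication. Real systems such as MAHA-FS use stronger codes (e.g., Reed-Solomon) tolerating multiple losses; the block contents below are made up.

```python
# Toy erasure-coding sketch: one XOR parity block protects k data blocks
# against a single loss. Illustrative, not MAHA-FS code.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings into one parity block."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # k = 3 data blocks
parity = xor_blocks(data)            # 4 blocks stored for 3 blocks of data

# Recover a lost block by XORing the parity with the survivors.
lost_index = 1
survivors = [b for i, b in enumerate(data) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data[lost_index]
print(recovered)  # -> b"BBBB"
```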

A Efficient RSIP Address Translation Technique in Linux-based Intranet Environment (리눅스기반 인트라넷 환경에서 효율적인 RSIP주소 변환기법)

  • Lee, Youngtaek;Kim, Won;Jeon, Moon-Seok
    • Journal of the Korea Computer Industry Society, v.5 no.1, pp.39-48, 2004
  • An IP address shortage problem has arisen with the rapid spread of the Internet and the demand for new IP addresses. Address translation technologies such as NAT are widely used to solve this problem. NAT is a very useful IP address translation technique that allows two connected networks to use different, incompatible IP address schemes. However, NAT is difficult to use with applications that embed IP addresses in data payloads, or with encrypted IP packets that guarantee end-to-end security, such as IPsec. In addition to rewriting the source/destination IP address in each packet, NAT must recompute the IP checksum every time, which can considerably degrade overall system performance during address translation. RSIP is an alternative that remedies these disadvantages of NAT as well as the address shortage problem. Both NAT and RSIP divide networks into inside and outside addressing realms, but whereas NAT translates addresses between the internal and external networks, RSIP uses a borrowed external address for outside communication: the RSIP server temporarily assigns a routable public address to an RSIP client so that it can communicate with the public network outside the private network. In this paper, I analyze NAT and RSIP gateway systems and then propose a Linux-based RSIP gateway, following the IETF RSIP standard, for more efficient IP address translation in intranet environments.
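
A hedged sketch of the RSIP idea described above: rather than rewriting packets as NAT does, the server leases a routable public address to the client. The address pool and the assign/release API below are illustrative stand-ins; the real protocol messages are defined in the IETF RSIP specification (RFC 3103).

```python
# Toy RSIP-style address leasing: a server hands out public addresses from a
# pool instead of translating packets. Addresses and API are illustrative.

import ipaddress

class RsipServer:
    def __init__(self, public_pool):
        self.free = [ipaddress.ip_address(a) for a in public_pool]
        self.leases = {}  # private address -> leased public address

    def assign(self, private_addr):
        """Lease a public address to a client (roughly RSIP's ASSIGN exchange)."""
        if private_addr not in self.leases:
            self.leases[private_addr] = self.free.pop()
        return self.leases[private_addr]

    def release(self, private_addr):
        """Return a leased address to the pool (roughly RSIP's FREE exchange)."""
        self.free.append(self.leases.pop(private_addr))

server = RsipServer(["203.0.113.10", "203.0.113.11"])
print(server.assign("10.0.0.5"))  # client now sources packets from this address
server.release("10.0.0.5")
```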

A Study on Implementation of SVG for ENC Applications (전자해도 활용을 위한 SVG 변환 연구)

  • Oh, Se-Woong;Park, Jong-Min;Suh, Sang-Hyun
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, v.1, pp.133-138, 2006
  • Electronic Navigational Charts (ENCs) are official nautical charts, equivalent to paper charts with supplementary information. Although their main purpose is the safe navigation of ships, they also contain much information on coasts and seas that may interest ordinary people. However, there is no easy way to access them because of their specialized data format, access methods, and visualization. This paper proposes an implementation of SVG for accessing and serving ENCs. SVG (Scalable Vector Graphics) makes it possible to use vector graphics for map services in a basic internet browsing environment. The SVG implementation for ENC applications developed in this research needs no special server-side GIS mapping system and no extra client-side technology. It can be summarized as follows. Firstly, SVG carries spatial information in searchable form, so an SVG map can be combined with a search engine. Secondly, SVG can provide high-quality vector map graphics and interactivity without a special Internet GIS system, enabling services at very low cost. Thirdly, an SVG information service targeting maritime transportation can be used as a template, so it can be reused dynamically for other purposes such as traffic management and vessel monitoring. The many good characteristics of SVG for on-screen mapping, and the reusability of SVG documents, open a new era in the visualization of marine geographic information.
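
As a minimal illustration of serving vector map features as SVG with no server-side GIS, the sketch below emits an SVG polyline from a coordinate list that any browser can render. The coastline coordinates and styling are invented, not ENC data.

```python
# Generate a tiny SVG vector map fragment from coordinates. Illustrative only.

def polyline_to_svg(points, stroke="navy"):
    """Convert a coordinate list into an SVG <polyline> element."""
    pts = " ".join(f"{x},{y}" for x, y in points)
    return f'<polyline points="{pts}" fill="none" stroke="{stroke}"/>'

coastline = [(10, 80), (40, 60), (70, 65), (120, 30)]
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">\n'
    f"  {polyline_to_svg(coastline)}\n"
    "</svg>"
)
print(svg)  # save as an .svg file and open it in any browser
```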

A Study on Implementation of SVG for ENC Applications (전자해도 활용을 위한 SVG 변환 연구)

  • Oh, Se-Woong;Park, Jong-Min;Seo, Ki-Yeol;Suh, Sang-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering, v.11 no.10, pp.1930-1936, 2007
  • Electronic Navigational Charts (ENCs) are official nautical charts, equivalent to paper charts with supplementary information. Although their main purpose is the safe navigation of ships, they also contain much information on coasts and seas that may interest ordinary people. However, there is no easy way to access them because of their specialized data format, access methods, and visualization. This paper proposes an implementation of SVG for accessing and serving ENCs. SVG (Scalable Vector Graphics) makes it possible to use vector graphics for map services in a basic internet browsing environment. The SVG implementation for ENC applications developed in this research needs no special server-side GIS mapping system and no extra client-side technology. It can be summarized as follows. Firstly, SVG carries spatial information in searchable form, so an SVG map can be combined with a search engine. Secondly, SVG can provide high-quality vector map graphics and interactivity without a special Internet GIS system, enabling services at very low cost. Thirdly, an SVG information service targeting maritime transportation can be used as a template, so it can be reused dynamically for other purposes such as traffic management and vessel monitoring. The many good characteristics of SVG for on-screen mapping, and the reusability of SVG documents, open a new era in the visualization of marine geographic information.

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA (XMLA를 사용한 OLAP과 데이타 마이닝 분석이 가능한 리포팅 툴의 구현)

  • Choe, Jee-Woong;Kim, Myung-Ho
    • Journal of KIISE: Computing Practices and Letters, v.15 no.3, pp.154-166, 2009
  • Database query and reporting tools, OLAP tools, and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating, and analyzing data produced by business operations and gives enterprise users access to the results. Traditional reporting tools are good at creating sophisticated dynamic reports that include SQL query result sets, look like word-processor documents, and can be published to the Web, but their data sources are limited to RDBMSs. OLAP tools and data mining tools, on the other hand, each provide powerful information analysis functions in their own way, but their built-in visualization components are limited to tables and a few chart types. This paper therefore presents a system that integrates the three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools have only a query editor for generating SQL statements against an RDBMS; the reporting tool presented here can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests have been added. Traditional systems produce all documents on the server side, which lets reporting tools avoid regenerating a document when many clients request the same dynamic document; since our system instead targets a small number of users generating documents for data analysis, it generates documents on the client side, and it therefore includes a processing mechanism for handling large amounts of data despite the limited memory available to the report viewer on the client. The tool also has a data structure for integrating data from the three kinds of data sources into one document. Finally, most traditional BI front-end tools depend on a specific vendor's data source architecture; to overcome this, our system uses XMLA, a web service-based protocol, to access OLAP and data mining services from various vendors.
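
To show what accessing a data source via XMLA looks like on the wire, here is a hedged sketch of an XMLA Execute call: a SOAP message carrying an MDX statement, POSTed to an XMLA endpoint. The endpoint URL, catalog name, and MDX query are assumptions for illustration; the XMLA namespace and message shape follow the public specification.

```python
# Build (but do not send) an XMLA Execute request: SOAP over HTTP carrying MDX.
# Endpoint, catalog, and query are hypothetical.

import urllib.request

XMLA_EXECUTE = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command>
        <Statement>SELECT Measures.MEMBERS ON COLUMNS FROM [Sales]</Statement>
      </Command>
      <Properties>
        <PropertyList>
          <Catalog>SampleCatalog</Catalog>
          <Format>Multidimensional</Format>
        </PropertyList>
      </Properties>
    </Execute>
  </Body>
</Envelope>"""

req = urllib.request.Request(
    "http://olap.example.com/xmla",  # hypothetical XMLA endpoint
    data=XMLA_EXECUTE.encode("utf-8"),
    headers={"Content-Type": "text/xml",
             "SOAPAction": "urn:schemas-microsoft-com:xml-analysis:Execute"},
)
# response = urllib.request.urlopen(req)  # returns a SOAP envelope of result cells
```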

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for handling the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business, so a separate system is needed to gather, store, categorize, and analyze them. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze them. In this study, we therefore use cloud computing technology to build a cloud-based log processing system for unstructured log data that are hard to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment, so resources such as storage space and memory can be expanded flexibly under conditions such as storage extension or a rapid increase in log volume. Moreover, to overcome the limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for fast and reliable parallel-distributed processing of massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores copies of each block of the aggregated log data, the system can restore itself automatically and continue operating after a malfunction. Finally, by building a distributed database on the NoSQL-based MongoDB, the system processes unstructured log data effectively. Relational databases such as MySQL have complex, strict schemas that are inappropriate for unstructured log data and cannot easily expand across nodes when the stored data must be distributed as data volume increases rapidly. NoSQL databases forgo the complex computations that relational databases provide but expand easily through node dispersion as data grows; they are non-relational databases with a structure well suited to unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented; of these, the proposed system uses the representative document-oriented store MongoDB, which has a free schema structure. MongoDB was chosen because its flexible schema makes unstructured log data easy to process, it supports flexible node expansion when data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies them by type and distributes them to the MongoDB and MySQL modules. The log graph generator module renders the analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module by analysis time and log type, and presents them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module; log data aggregated per unit time are stored in the MongoDB module and plotted according to the user's various analysis conditions, and are also processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance tests over various chunk sizes.
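
A minimal sketch of the schema-free storage the system relies on, assuming pymongo is installed and a mongod instance is reachable: log records of different shapes go into one MongoDB collection and can be aggregated for the graph module. The connection URI, database name, and log fields are invented for the example.

```python
# Store heterogeneous log documents in one MongoDB collection and aggregate
# per-type counts. Connection details and fields are illustrative.

from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical server
logs = client["bank_logs"]["transactions"]

# Documents need not share a schema; MongoDB stores each one as-is.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "login", "user": "u1042"},
    {"ts": datetime.now(timezone.utc), "type": "transfer", "amount": 250000,
     "channel": "mobile", "latency_ms": 83},
])

# Aggregate per-type counts, e.g., for a monitoring graph.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```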

The Experimental Analysis of Integrated (Name/Property) Dynamic Binding Service Model for Wide-Area Objects Computing (광역 객체 컴퓨팅에서 통합(이름/속성) 기반의 동적 바인딩 서비스 모델의 실험분석)

  • Jeong, Chang-Won;Joo, Su-Chong
    • Journal of KIISE: Computer Systems and Theory, v.33 no.10, pp.746-758, 2006
  • Many objects in wide-area environments are replicated, categorized by their own names or properties. On a client's request, existing naming and trading services cannot bind to a replicated solver object of the same service type. For this reason, we present an integrated model that supports the selection of replicated objects and dynamic binding services in wide-area computing environments. The model provides not only location management for replicated objects but also an active binding service that selects the least-loaded object in the system, keeping the load balanced across systems. To this end, we designed a service plan and model that supports binding to solver objects with the replication property in wide-area computing environments. In this paper, we describe the test environment, analyze the performance of client/server binding procedures through the integrated binding service in a federation model, and verify that load balancing can be applied to our model. For the performance evaluation of the proposed wide-area integrated binding service federation model, we evaluated the integrated binding service of each domain, analyzed the processing of non-replicated objects under the federation model, and analyzed the performance of the federation model across domains in a wide-area environment. The execution results show that the federation model lowers the search cost on the physical tree structure of the network.
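
The active-binding selection described above can be sketched simply: among replicas offering the same service type, bind the client to the least-loaded one. Replica names and load figures below are illustrative assumptions.

```python
# Least-loaded replica selection, the core of an active binding service.
# Hosts and load values are made up for the example.

from dataclasses import dataclass
from typing import List

@dataclass
class Replica:
    host: str
    load: float  # e.g., current CPU or request-queue utilization

def bind_least_loaded(replicas: List[Replica]) -> Replica:
    """Select the replica reporting the lowest load."""
    return min(replicas, key=lambda r: r.load)

replicas = [Replica("node-a", 0.72), Replica("node-b", 0.31), Replica("node-c", 0.55)]
print(bind_least_loaded(replicas).host)  # -> "node-b"
```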

Performance Analysis of Cost-Effective Location and Service Management Schemes in LTE Networks (LTE 네트워크에서 비용효과적인 위치 및 서비스 관리 기법의 성능분석)

  • Lee, June-Hee;Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.12 no.6, pp.1-16, 2012
  • In this paper, we propose a cost-effective location and service management scheme for LTE (Long Term Evolution) networks, in which a per-user service proxy is created to serve as a gateway between the mobile user and all client-server applications that the user engages. The service proxy is always co-located with the mobile user's location database, so that whenever the location database moves during a location hand-off, a service hand-off also occurs to keep the proxy co-located with the database. This lets the proxy know the mobile user's location at all times, reducing the network communication cost of service delivery. We analyze four integrated location and service management schemes. Our results show that the centralized scheme performs best when the mobile user's SMR (service-to-mobility ratio) is low and υ (session-to-mobility ratio) is high, while the fully distributed scheme performs best when both SMR and υ are high. Under all other conditions, the dynamic anchor scheme is best, except when the service context transfer cost is high, in which case the static anchor scheme performs best. Through analytical results, we demonstrate that users with vastly different mobility and service patterns should adopt different integrated location and service management methods to optimize system performance.
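
The reported findings can be restated as a small decision rule over SMR and υ. The thresholds below are placeholders only; the paper derives the actual crossover points analytically.

```python
# Toy restatement of the abstract's findings: pick a location/service
# management scheme from SMR and υ. Thresholds are illustrative assumptions.

def best_scheme(smr: float, v: float, ctx_transfer_cost_high: bool,
                low: float = 1.0, high: float = 10.0) -> str:
    if smr < low and v > high:
        return "centralized"
    if smr > high and v > high:
        return "fully distributed"
    # In all other conditions the dynamic anchor scheme wins, unless the
    # service context transfer cost is high.
    return "static anchor" if ctx_transfer_cost_high else "dynamic anchor"

print(best_scheme(smr=0.5, v=20.0, ctx_transfer_cost_high=False))  # -> "centralized"
```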