• Title/Summary/Keyword: Multiple Client

Two Level Bin-Packing Algorithm for Data Allocation on Multiple Broadcast Channels (다중 방송 채널에 데이터 할당을 위한 두 단계 저장소-적재 알고리즘)

  • Kwon, Hyeok-Min
    • Journal of Korea Multimedia Society / v.14 no.9 / pp.1165-1174 / 2011
  • In data broadcasting systems, servers continuously disseminate data items through broadcast channels, and a mobile client only needs to wait for the data item of interest to appear on a broadcast channel. However, because broadcast channels are shared by a large set of data items, the expected delay in receiving a desired data item may increase. This paper explores the issue of designing a proper data allocation on multiple broadcast channels to minimize the average expected delay time over all data items, and proposes a new data allocation scheme named two-level bin-packing (TLBP). The paper first introduces the theoretical lower bound of the average expected delay and determines the bin capacity based on this value. TLBP partitions all data items into a number of groups using a bin-packing algorithm and allocates each group of data items to an individual channel. By employing the bin-packing algorithm in two steps, TLBP can reflect variations in access probabilities among data items allocated to the same channel in the broadcast schedule, and thus enhance performance. Simulations are performed to compare the performance of TLBP with three existing approaches. The results show that TLBP outperforms the others in terms of average expected delay time at a reasonable execution overhead.
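A minimal sketch of the bin-packing idea behind such an allocation: items are packed onto channels by access probability with a first-fit-decreasing heuristic, taking the bin capacity as an even split of total access probability. The item names, probabilities and capacity rule are illustrative assumptions, not the TLBP algorithm itself, which derives the capacity from the delay lower bound and applies bin-packing at a second level within each channel.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    access_prob: float  # access probability of the data item

def pack_items_to_channels(items, num_channels):
    """First-fit-decreasing packing of data items onto broadcast channels,
    using cumulative access probability as the bin 'load'. Illustrative
    stand-in only, not the authors' exact TLBP scheme."""
    capacity = sum(i.access_prob for i in items) / num_channels  # assumed capacity
    channels = [[] for _ in range(num_channels)]
    loads = [0.0] * num_channels
    for item in sorted(items, key=lambda i: i.access_prob, reverse=True):
        # place the item on the first channel whose load still fits the capacity;
        # fall back to the least-loaded channel if none fits
        for c in range(num_channels):
            if loads[c] + item.access_prob <= capacity + 1e-9:
                break
        else:
            c = loads.index(min(loads))
        channels[c].append(item)
        loads[c] += item.access_prob
    return channels

if __name__ == "__main__":
    demo = [Item(f"d{i}", p) for i, p in enumerate([0.30, 0.20, 0.15, 0.15, 0.10, 0.10])]
    for ch, group in enumerate(pack_items_to_channels(demo, 3)):
        print(ch, [(it.name, it.access_prob) for it in group])
```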

Effect of Relational Structure with Multiple Vendors on IT Outsourcing Performance: Transaction Cost Theory Perspective (복수 공급업체와의 관계구조가 정보기술 아웃소싱 성과에 미치는 영향: 거래비용 이론 관점)

  • Koo, Yunmo;Lee, Jae-Nam;Son, Insoo
    • Information Systems Review / v.18 no.1 / pp.177-197 / 2016
  • Information technology (IT) outsourcing is considered an effective strategy to manage and maintain organizational technologies in a rapidly changing business environment. In particular, to meet diverse market needs, many organizations that outsource their IT functions adopt a multi-vendor approach as their main outsourcing strategy. Although a few studies have been conducted on the multi-vendor approach, most previous work primarily emphasized conceptual arguments and normative prescriptions. In addition, scant attention has been directed toward the relational structure between the client and multiple vendors in the multi-vendor approach and its implications for outsourcing success. This study proposes a model from the transaction cost perspective by conceptualizing two dominant relational structures of the multi-vendor approach, namely, the single-vendor dominant model and the multi-vendor dominant model, and hypothesizing their relationships with two outsourcing outcomes, project success and user satisfaction. The proposed model is examined using data collected from 246 companies that have implemented multi-vendor outsourcing. As expected, the results indicate that the single-vendor dominant model has a more significant impact on project success, whereas the multi-vendor dominant model has a more significant impact on user satisfaction. The study concludes with theoretical implications and directions for future research.

Multi-threaded Web Crawling Design using Queues (큐를 이용한 다중스레드 방식의 웹 크롤링 설계)

  • Kim, Hyo-Jong;Lee, Jun-Yun;Shin, Seung-Soo
    • Journal of Convergence for Information Technology / v.7 no.2 / pp.43-51 / 2017
  • Background/Objectives: The purpose of this study is to design and implement a multi-threaded web crawler using queues that addresses the time delay of single-process crawling, the cost increase of parallel processing, and the waste of manpower, by utilizing multiple bots connected over a wide-area network. Methods/Statistical analysis: This study designs and analyzes applications that run on independent systems, based on a multi-threaded configuration using queues. Findings: We propose a multi-threaded web crawler design using queues. The throughput of web documents can be analyzed per client and per thread according to the given formula, and the efficiency and optimal number of clients can be determined by checking the efficiency of each thread. The proposed system is based on distributed processing: clients in independent environments retrieve web documents quickly and reliably using queues and threads. Application/Improvements: Rather than a web crawler design that targets a particular site, there is a need for a system that quickly and efficiently navigates and collects various web sites by applying queues and multiple threads to a general-purpose web crawler.
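As a rough illustration of the queue-and-thread structure described above, the sketch below runs a fixed pool of worker threads that pull URLs from a shared frontier queue and push fetched documents onto a result queue. The thread count, seed URL and the absence of link extraction are simplifying assumptions; this is not the authors' implementation.

```python
import queue
import threading
import urllib.request

NUM_THREADS = 4
url_queue = queue.Queue()        # frontier of URLs to fetch
results = queue.Queue()          # fetched documents handed back to the client
seen = set()
seen_lock = threading.Lock()

def worker():
    while True:
        url = url_queue.get()
        if url is None:          # sentinel: shut the worker down
            url_queue.task_done()
            break
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                results.put((url, resp.read()))
        except Exception as exc:
            results.put((url, exc))
        finally:
            url_queue.task_done()

def crawl(start_urls):
    threads = [threading.Thread(target=worker, daemon=True) for _ in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for url in start_urls:
        with seen_lock:
            if url not in seen:
                seen.add(url)
                url_queue.put(url)
    url_queue.join()             # wait until every queued URL is processed
    for _ in threads:
        url_queue.put(None)      # one sentinel per worker
    for t in threads:
        t.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return out

if __name__ == "__main__":
    for url, doc in crawl(["https://example.com"]):
        print(url, "error" if isinstance(doc, Exception) else f"{len(doc)} bytes")
```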

Design of a Crowd-Sourced Fingerprint Mapping and Localization System (군중-제공 신호지도 작성 및 위치 추적 시스템의 설계)

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.2 no.9 / pp.595-602 / 2013
  • WiFi fingerprinting is well known as an effective localization technique for indoor environments. However, this technique requires a large amount of pre-built fingerprint maps over the entire space, and due to environmental changes these maps have to be rebuilt or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping has attracted much interest from researchers. This approach lets many volunteer users share WiFi fingerprints collected in a given environment, so crowd-sourced fingerprinting can automatically keep fingerprint maps up to date. In most previous systems, however, individual users were asked to enter their positions manually to build their local fingerprint maps. Moreover, those systems do not have any principled mechanism to keep fingerprint maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization (CMAL) system. The proposed system can not only automatically build and update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track their positions using the up-to-date maps. The CMAL system consists of multiple clients running on individual smartphones to collect fingerprints and a central server that maintains a database of fingerprint maps. Each client contains a particle filter-based WiFi SLAM engine that tracks the smartphone user's position and builds a local fingerprint map. The server adopts a Gaussian interpolation-based error filtering algorithm to maintain the integrity of the fingerprint maps. Through various experiments, we show the high performance of our system.
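A minimal sketch of the kind of server-side error filtering described above: a newly submitted fingerprint is compared against a Gaussian-kernel-weighted interpolation of nearby stored fingerprints and flagged if it deviates too much. The kernel width, threshold and data layout are assumptions made for illustration, not the CMAL system's actual parameters or algorithm.

```python
import math

def interpolate_rssi(position, fingerprints, ap, sigma=2.0):
    """Gaussian-kernel-weighted RSSI estimate for one access point `ap` at
    `position`, from stored fingerprints [(x, y, {ap: rssi, ...}), ...]."""
    num, den = 0.0, 0.0
    for x, y, readings in fingerprints:
        if ap not in readings:
            continue
        d2 = (x - position[0]) ** 2 + (y - position[1]) ** 2
        w = math.exp(-d2 / (2 * sigma ** 2))
        num += w * readings[ap]
        den += w
    return num / den if den > 0 else None

def is_erroneous(candidate, fingerprints, threshold_db=10.0):
    """Flag a crowd-sourced fingerprint whose readings deviate from the
    interpolated map estimate by more than `threshold_db` on average."""
    x, y, readings = candidate
    diffs = []
    for ap, rssi in readings.items():
        est = interpolate_rssi((x, y), fingerprints, ap)
        if est is not None:
            diffs.append(abs(rssi - est))
    return bool(diffs) and sum(diffs) / len(diffs) > threshold_db

if __name__ == "__main__":
    stored = [(0, 0, {"ap1": -50}), (1, 0, {"ap1": -55}), (0, 1, {"ap1": -52})]
    print(is_erroneous((0.5, 0.5, {"ap1": -53}), stored))   # plausible reading
    print(is_erroneous((0.5, 0.5, {"ap1": -90}), stored))   # likely erroneous
```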

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters / v.13 no.5 / pp.293-299 / 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along inside a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects such as holes, curvatures and other potential causes of gas explosions. Two major data access patterns appear when an analyst accesses the pipeline signal data. The first is the sequential pattern, where an analyst reads the sensor data only once in a sequential fashion. The second is the repetitive pattern, where an analyst repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well for the sequential pattern, but not for the more dominant repetitive pattern. This problem becomes serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, that treats pipeline sensor data as multiple time-series and caches the time-series data efficiently on the client side. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of a signal cache line as the caching unit, which is a set of time-series signal data for a fixed distance. We also provide the various data structures used in T-Cache, including smart cursors, and its algorithms. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system without caching, indicating that the caching overhead of T-Cache is negligible.
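The sketch below illustrates the signal-cache-line idea in its simplest form: time-series samples are cached in fixed-distance chunks and evicted in LRU order, so repetitive reads over a fixed range hit memory instead of the server. The line length, capacity and fetch stub are assumptions; the real T-Cache additionally maintains smart cursors and the other structures described in the paper.

```python
from collections import OrderedDict

LINE_LENGTH = 100          # one cache line covers 100 distance units (assumed)
CAPACITY = 1000            # maximum number of cache lines kept in memory

class SignalLineCache:
    def __init__(self, fetch_from_server):
        self._fetch = fetch_from_server       # callable: line_index -> list of samples
        self._lines = OrderedDict()           # line_index -> samples, in LRU order

    def read_range(self, start, end):
        """Return all samples covering distances [start, end)."""
        samples = []
        for line_index in range(start // LINE_LENGTH, (end - 1) // LINE_LENGTH + 1):
            samples.extend(self._line(line_index))
        return samples

    def _line(self, line_index):
        if line_index in self._lines:
            self._lines.move_to_end(line_index)        # refresh LRU position
            return self._lines[line_index]
        data = self._fetch(line_index)                 # miss: one server/disk access
        self._lines[line_index] = data
        if len(self._lines) > CAPACITY:
            self._lines.popitem(last=False)            # evict least recently used line
        return data

if __name__ == "__main__":
    cache = SignalLineCache(lambda i: [f"sample@{i * LINE_LENGTH + k}" for k in range(LINE_LENGTH)])
    cache.read_range(0, 250)        # first pass: three misses, three fetches
    cache.read_range(0, 250)        # repetitive pattern: all hits, no server access
```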

Application of OGC WPS 2.0 to Geo-Spatial Web Services (공간정보 웹 서비스에서 OGC WPS 2.0 적용)

  • YOON, Goo-Seon;LEE, Ki-Won
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.16-28 / 2016
  • Advancing geo-spatial web technologies and their applications require compatibility and interoperability across heterogeneous browsers and platforms. Reducing the common or supporting components needed for web-based system development is also necessary. If properly understood and applied, OGC-based standards can serve as effective solutions to these problems. Thus, OGC standards are central to the design and development of web-based geo-spatial systems, and are particularly applicable to web services that contain data processing modules. However, adoption of OGC WPS 2.0 is at an early stage compared with other OGC standards; thus, this study describes a test implementation of a web-based geo-spatial processing system using OGC WPS 2.0, focused on asynchronous processing functionality. While a binary thresholding algorithm was tested in this system, further experiments with other processing modules can serve many types of processing requests from multiple users. The client side of the implemented system was based on open-source libraries such as jQuery and OpenLayers, and the server side, running on the Spring framework, also used open-source components such as the ZOO project and GeoServer. The results of geo-spatial image processing with this system imply further applicability and extensibility of OGC WPS 2.0 in user interfaces for practical applications.
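The asynchronous pattern that WPS 2.0 adds over earlier versions can be sketched as submit-then-poll: the client posts an Execute request in asynchronous mode, polls GetStatus with the returned job id, and finally retrieves the output with GetResult. The endpoint URL, the Execute body, and the exact XML element handling below are assumptions for illustration; a real deployment such as the paper's ZOO/GeoServer setup defines its own processes and inputs.

```python
import time
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

WPS_ENDPOINT = "http://localhost:8080/wps"     # hypothetical service endpoint
WPS_NS = {"wps": "http://www.opengis.net/wps/2.0"}

def submit_async(execute_xml: bytes) -> str:
    """POST a WPS 2.0 Execute request (async mode) and return the job id
    parsed from the StatusInfo response (element names assumed)."""
    req = urllib.request.Request(WPS_ENDPOINT, data=execute_xml,
                                 headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        root = ET.fromstring(resp.read())
    return root.findtext("wps:JobID", namespaces=WPS_NS)

def wait_for_result(job_id: str, poll_seconds: float = 2.0) -> bytes:
    """Poll GetStatus until the job succeeds, then download GetResult."""
    while True:
        status_url = WPS_ENDPOINT + "?" + urllib.parse.urlencode(
            {"service": "WPS", "version": "2.0.0", "request": "GetStatus", "jobId": job_id})
        with urllib.request.urlopen(status_url) as resp:
            status = ET.fromstring(resp.read()).findtext("wps:Status", namespaces=WPS_NS)
        if status == "Succeeded":
            break
        if status == "Failed":
            raise RuntimeError(f"WPS job {job_id} failed")
        time.sleep(poll_seconds)
    result_url = WPS_ENDPOINT + "?" + urllib.parse.urlencode(
        {"service": "WPS", "version": "2.0.0", "request": "GetResult", "jobId": job_id})
    with urllib.request.urlopen(result_url) as resp:
        return resp.read()
```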

An Algorithm for Managing Storage Space to Maximize the CPU Availability in VOD Systems (VOD 시스템에서 CPU 가용성을 최대화하는 저장공간관리 알고리즘)

  • Jung, Ji-Chan;Go, Jae-Doo;Song, Min-Seok;Sim, Jeong-Seop
    • Journal of KIISE:Computer Systems and Theory / v.36 no.3 / pp.140-148 / 2009
  • Recent advances in communication and multimedia technologies make it possible to provide video-on-demand (VOD) services, and people can access video servers over the Internet at any time using electronic devices such as PDAs, mobile phones and digital TVs. Each device has different processing capabilities, energy budgets, display sizes and network connectivity. To support such diverse devices, multiple versions of each video are needed to meet users' requests. In general, VOD servers cannot store all versions of every video due to storage limitations. When a device requests a stored version, the server can send the appropriate version immediately; when the requested version is not stored, the server first converts some stored version into the requested version and then sends it to the client. We call this conversion process transcoding. If transcoding occurs frequently, the CPU resource of the server becomes insufficient to respond to clients. Thus, to admit as many requests as possible, we need to maximize the CPU availability. In this paper, we propose a new algorithm that selects which versions to store on disk using a branch and bound technique to maximize the CPU availability. We also explore the impact of these storage management policies on streaming to heterogeneous users.
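The version-selection problem has the flavor of a knapsack: keep the versions whose storage pays off most in avoided transcoding CPU, subject to a disk budget. The sketch below is a generic branch-and-bound for that simplified formulation, with hypothetical sizes and savings; the paper's actual cost model additionally accounts for which stored version each request would be transcoded from.

```python
def select_versions(versions, capacity):
    """versions: list of (name, size, cpu_saving); returns best (saving, names)."""
    order = sorted(versions, key=lambda v: v[2] / v[1], reverse=True)
    best = [0.0, []]

    def bound(i, free, saving):
        # optimistic bound: fill remaining space fractionally with best ratios
        for name, size, gain in order[i:]:
            if size <= free:
                free -= size
                saving += gain
            else:
                return saving + gain * free / size
        return saving

    def branch(i, free, saving, chosen):
        if saving > best[0]:
            best[0], best[1] = saving, list(chosen)
        if i == len(order) or bound(i, free, saving) <= best[0]:
            return
        name, size, gain = order[i]
        if size <= free:                         # branch: keep this version
            chosen.append(name)
            branch(i + 1, free - size, saving + gain, chosen)
            chosen.pop()
        branch(i + 1, free, saving, chosen)      # branch: skip this version

    branch(0, capacity, 0.0, [])
    return best[0], best[1]

if __name__ == "__main__":
    # hypothetical versions: (name, storage units, CPU saved by storing it)
    demo = [("1080p", 8, 50), ("720p", 4, 40), ("480p", 2, 25), ("mobile", 1, 15)]
    print(select_versions(demo, capacity=10))
```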

The Emotional Intelligence Effects on Foreign LCs' Self-Efficacy and Job Stress (외국계 생명보험 설계사의 감성지능이 직무스트레스에 미치는 영향 : 자기효능감의 매개효과를 중심으로)

  • Jung, Kwang-Jin;Park, Sang-Beom
    • The Journal of Industrial Distribution & Business / v.9 no.5 / pp.93-104 / 2018
  • Purpose - This study investigates the relationship among emotional intelligence, self-efficacy and job stress of foreign life insurance consultants, focusing on the mediating effect of self-efficacy. Regarding job security, foreign life insurance companies in Korea generally impose more severe working conditions in terms of required contract performance, so their consultants are assumed to need higher levels of emotional intelligence and self-efficacy to meet those conditions; this study focuses on these aspects. Research design, data, and methodology - The research is based on questionnaires answered by foreign life insurance consultants; data were collected from a sample of 255 consultants working for a foreign-owned life insurance company. The questionnaire measures the consultants' levels of emotional intelligence, self-efficacy and job stress. The data are analyzed using Pearson's correlation coefficient, hierarchical multiple regression, descriptive statistics, t-tests, ANOVA, and the Durbin-Watson test. Results - The general characteristics of respondents include gender, age, marital status, education level, monthly income, career length, number of job changes, working days per week, calls per week, client meetings per week, contract regularity, contracts per month and cancelled contracts per year. The mean of emotional intelligence is 2.63, of self-efficacy 3.44, and of job stress 2.20. Emotional intelligence comprises self-emotion appraisal (3.93), others' emotion appraisal (3.78), regulation of emotion (3.29) and use of emotion (3.52). Self-efficacy comprises self-confidence (3.41), self-regulated efficacy (3.59) and preference for task difficulty (3.30). Job stress comprises job requirements (2.61), lack of job autonomy (1.99), conflict in personal relations (1.99), job instability (2.38), organizational system (2.19) and inappropriate compensation (2.07). There is a significant positive correlation between emotional intelligence and self-efficacy, and both are significantly negatively correlated with job stress. Self-efficacy is shown to mediate the relationship between emotional intelligence and job stress. Conclusions - To decrease the level of job stress, foreign life insurance companies should identify factors that improve the emotional intelligence and self-efficacy of their consultants, and develop appropriate plans that use the mediating role of self-efficacy between emotional intelligence and job stress.

Implementation of ATM/Internet Gateway System for Real Time Multimedia Service (실시간 멀티미디어 서비스를 위한 ATM/Internet 게이트웨이 시스템의 구현)

  • Han, Tae-Man;Jeong, You-Hyeon;Kim, Dong-Won
    • The KIPS Transactions:PartC / v.11C no.6 s.95 / pp.799-806 / 2004
  • A growing diversity of pervasive devices is gaining access to the Internet and other information sources. However, much of the rich multimedia content cannot be easily handled by client devices because of their limited communication, processing, storage and display capabilities. The integration of voice, data and video services has changed the goals of networking technologies: networks must be able to integrate various services and to support the QoS required by each of them. For these reasons, we developed EAGIS (Efficient ATM Gateway for real-time Internet Service) to provide seamless multimedia service between the ATM network and the Internet. EAGIS consists of an interworking unit, a content server, a transcoding server, and a service broker. In this paper, we design the architecture and the transcoding service scenario of EAGIS. When RTP is used for bidirectional communication, the transcoding time is derived from the RTCP timestamp; when HTTP is used for unidirectional communication, a self-timer is used. Using these reference times, the standard transcoding method can be applied according to the frame transmission rate and network traffic load, and our algorithm can also assure the QoS of multiple users' effective bandwidth.
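A very small sketch of the reference-time selection mentioned above: an RTP session takes its transcoding time reference from the RTCP timestamp it has recorded, while an HTTP session falls back to a local self-timer. The session structure and field names are hypothetical and only illustrate the dispatch, not the EAGIS implementation.

```python
import time

def reference_time(session):
    """Return the timestamp used to pace transcoding for this session."""
    if session.get("protocol") == "RTP":
        # assumed: the latest RTCP timestamp is tracked per session elsewhere
        return session["last_rtcp_timestamp"]
    # HTTP (unidirectional): fall back to a local self-timer
    return time.monotonic()

if __name__ == "__main__":
    print(reference_time({"protocol": "RTP", "last_rtcp_timestamp": 123456.0}))
    print(reference_time({"protocol": "HTTP"}))
```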

Segment-based Cache Replacement Policy in Transcoding Proxy (트랜스코딩 프록시에서 세그먼트 기반 캐쉬 교체 정책)

  • Park, Yoo-Hyun;Kim, Hag-Young;Kim, Kyong-Sok
    • The KIPS Transactions:PartA / v.15A no.1 / pp.53-60 / 2008
  • Streaming media accounts for a significant portion of today's Internet traffic. Like traditional web objects, rich media objects can benefit from proxy caching, but caching streaming media is more challenging than caching simple web objects because of its large size and high bandwidth requirements. To support the varying bandwidth requirements of heterogeneous ubiquitous devices, a transcoding proxy is usually needed, which not only adapts multimedia streams to clients by transcoding but also caches them for later use. A traditional proxy considers only a single version of each object when deciding whether to cache it. A transcoding proxy, however, has to evaluate the aggregate effect of caching multiple versions of the same object to determine an optimal set of cached objects. Recent work on multimedia caching often stores the initial parts of videos on the proxy to reduce playback latency and achieve better performance, and many studies manage content in segments for efficient storage management. In this paper, we define nine events of the transcoding proxy using four atomic events; according to these events, the transcoding proxy determines its next actions. We also propose a segment-based cache replacement policy for the transcoding proxy system. The performance results show that the proposed policy achieves a low start-up delay, a high byte-hit ratio, and less transcoded data.
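A generic sketch of a segment-based replacement policy in the spirit of this abstract: each cached unit is one segment of one version of a video, early segments are preferred because they hide start-up delay, and the least valuable segments are dropped first. The value function, sizes and popularity figures are illustrative assumptions, not the paper's policy.

```python
class SegmentCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.segments = {}          # (video, version, seg_no) -> (size, popularity)

    def _value(self, key):
        size, popularity = self.segments[key]
        seg_no = key[2]
        return popularity / (seg_no + 1) / size   # favor popular, early, small segments

    def admit(self, video, version, seg_no, size, popularity):
        """Try to cache a segment, evicting the least-valuable segments if needed."""
        key = (video, version, seg_no)
        self.segments[key] = (size, popularity)
        self.used += size
        while self.used > self.capacity:
            victim = min(self.segments, key=self._value)
            self.used -= self.segments.pop(victim)[0]
            if victim == key:                      # the new segment itself lost out
                return False
        return True

if __name__ == "__main__":
    cache = SegmentCache(capacity=10)
    for seg in range(5):
        cache.admit("news", "480p", seg, size=3, popularity=0.6)
    print(sorted(cache.segments))   # early segments survive; later ones did not fit
```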