• Title/Summary/Keyword: reward time


Conceptualizing Digital Shadow Work: Focused on Mandatory and Reward Related Issues (디지털그림자노동(Digital Shadow Work)의 개념화: 강제성과 대가성 이슈를 중심으로)

  • Bu, Shaoyang;Koh, Joon
    • The Journal of Information Systems / v.31 no.3 / pp.89-108 / 2022
  • Purpose The purpose of this study is to clarify the concepts of mandatoriness and reward that have come into focus in defining digital shadow work, and to explore how users in a shared-services environment view cost and coercion from the perspective of digital shadow work. Design/methodology/approach We conducted one-on-one interviews with four participants, each lasting about 25 minutes on average. Combining the literature review, stakeholder observation, and interviews on digital shadow work conducted so far, triangulation across multiple sources allows fairly objective results to be derived. Findings According to this preliminary study, each type of digital shadow work carries some reward, but time savings and service convenience are valued more than financial rewards. Demands perceived as unfair, once users weigh the difficulty of the required digital work against its expected benefit, can cause dissatisfaction with the service. Academic implications and future research directions are also discussed.

The Structural Relationship of Factors Impacting on e-Loyalty to MMORPG (MMORPG 이용자 충성도에 대한 영향요인간 구조적 관계)

  • Kim, Jung-Ho;Kim, Yoo-Jung;Kang, So-Ra
    • The Journal of the Korea Contents Association / v.10 no.12 / pp.274-289 / 2010
  • MMORPGs account for 25.9% of the domestic game market, and competition has become increasingly fierce. Game developers and publishers make every effort to build customer e-loyalty to sustain their competitiveness. This paper therefore examines the determinants of e-loyalty that reflect the features of MMORPGs, such as intensive, real-time interactivity. For this purpose, we selected interactivity, sense of community, reward, and fun as the key antecedents of e-loyalty, based on an extensive review of previous research on online games and Internet service usage. A total of 202 responses were used for the analysis, and the results are as follows. Interactivity, sense of community, and reward significantly influence fun, and fun in turn is positively related to e-loyalty. Interactivity and reward also have a positive influence on sense of community, and sense of community mediates the effect of interactivity and reward on fun.
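
As a rough illustration of the path structure described in this abstract (interactivity, sense of community, and reward influencing fun, which in turn drives e-loyalty), the following Python sketch estimates the two regression stages with statsmodels. The variable names, coefficients, and simulated data are hypothetical and are not the study's dataset or its exact structural model.

```python
# Hypothetical sketch of the mediation structure described above:
# interactivity, community, reward -> fun -> e_loyalty.
# Data are simulated; this is not the study's dataset or exact model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 202  # same sample size as the study; the values themselves are synthetic
interactivity = rng.normal(size=n)
reward = rng.normal(size=n)
community = 0.4 * interactivity + 0.3 * reward + rng.normal(scale=0.8, size=n)
fun = 0.3 * interactivity + 0.2 * reward + 0.4 * community + rng.normal(scale=0.8, size=n)
e_loyalty = 0.6 * fun + rng.normal(scale=0.8, size=n)

# Stage 1: antecedents (plus the mediator, sense of community) -> fun
X1 = sm.add_constant(np.column_stack([interactivity, reward, community]))
print("fun stage coefficients:", sm.OLS(fun, X1).fit().params)

# Stage 2: fun -> e-loyalty
X2 = sm.add_constant(fun)
print("e-loyalty stage coefficients:", sm.OLS(e_loyalty, X2).fit().params)
```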

IRIS Task Scheduling Algorithm Based on Task Selection Policies (태스크 선택정책에 기반을 둔 IRIS 태스크 스케줄링 알고리즘)

  • Shim, Jae-Hong;Choi, Kyung-Hee;Jung, Gi-Hyun
    • The KIPS Transactions:PartA / v.10A no.3 / pp.181-188 / 2003
  • We propose a heuristic on-line scheduling algorithm for IRIS (Increasing Reward with Increasing Service) tasks that has low computational complexity and produces a total reward close to that of previous on-line optimal algorithms. The previous on-line optimal algorithms for IRIS tasks schedule all tasks in the system to maximize total reward, so their complexity is too high to apply them to practical systems handling many tasks. The proposed algorithm does not schedule all tasks in the system, but only a constant number W of tasks chosen by a predefined task selection policy; it is thus built on selection policies that define which tasks are scheduled. We suggest two simple and intuitive selection policies and a generalized policy that integrates them. By narrowing the scheduling scope to the W selected tasks, the computational complexity of the proposed algorithm is reduced to O(Wn), and simulation results for various cases show that it is close to O(W) on average.
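
The following Python sketch illustrates the general idea of restricting scheduling to W selected tasks rather than all n tasks. The earliest-deadline selection policy and the concave logarithmic reward functions used here are assumptions for illustration only, not the selection policies or reward model defined in the paper.

```python
# Illustrative sketch (not the paper's algorithm): restrict scheduling to W tasks
# picked by a selection policy, then allocate service time greedily by marginal reward.
import heapq
import math

def select_tasks(tasks, W):
    """Assumed selection policy: keep the W tasks with the earliest deadlines."""
    return heapq.nsmallest(W, tasks, key=lambda t: t["deadline"])

def schedule(tasks, W, budget, step=0.1):
    """Greedily allocate 'budget' time units among the W selected tasks.

    Each task has a concave reward(t) function; the greedy rule gives the next
    time slice to the task with the largest marginal reward gain.
    """
    selected = select_tasks(tasks, W)
    alloc = {t["id"]: 0.0 for t in selected}
    remaining = budget
    while remaining > 1e-9:
        best = max(
            selected,
            key=lambda t: t["reward"](alloc[t["id"]] + step) - t["reward"](alloc[t["id"]]),
        )
        alloc[best["id"]] += step
        remaining -= step
    return alloc

# Example: IRIS-style concave rewards (more service -> more reward, diminishing returns).
tasks = [
    {"id": i, "deadline": d, "reward": (lambda c: (lambda x: c * math.log1p(x)))(c)}
    for i, (d, c) in enumerate([(5, 1.0), (3, 2.0), (8, 0.5), (2, 1.5)])
]
print(schedule(tasks, W=2, budget=4.0))
```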

Evaluating SR-Based Reinforcement Learning Algorithm Under the Highly Uncertain Decision Task (불확실성이 높은 의사결정 환경에서 SR 기반 강화학습 알고리즘의 성능 분석)

  • Kim, So Hyeon;Lee, Jee Hang
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.331-338 / 2022
  • Successor representation (SR) is a model of human reinforcement learning (RL) that mimics how hippocampal cells construct cognitive maps, and it uses these learned features to respond adaptively to frequent reward changes. In this paper, we evaluate the performance of SR in a context where changes in the latent variables of the environment trigger changes in the reward structure. As a benchmark, we adopt SR-Dyna, an integration of SR into the goal-driven Dyna RL algorithm, in a 2-stage Markov Decision Task (MDT) in which the latent variables (state-transition uncertainty and goal condition) can be manipulated intentionally. To investigate the characteristics of SR precisely, we ran the experiments while controlling each latent variable that affects the reward structure. The evaluation showed that SR-Dyna can learn to respond to reward changes associated with changes in the latent variables, but cannot learn rapidly in that situation. This highlights the need for more robust RL models that can quickly learn to respond to frequent changes in environments where the latent variables and the reward structure change at the same time.
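
To make the successor-representation idea concrete, here is a small Python sketch of a generic tabular SR learner: TD updates of an SR matrix M plus a separate reward-weight vector w, so that state values decompose as V = M·w and reward changes only require re-fitting w. It is not the SR-Dyna implementation or the 2-stage MDT used in the paper; the toy environment and parameters are assumptions.

```python
# Generic tabular successor representation (SR) sketch, not the paper's SR-Dyna code.
# M[s, :] approximates expected discounted future state occupancies from state s;
# values are recovered as V = M @ w, so reward changes only require updating w.
import numpy as np

n_states, gamma, alpha = 5, 0.95, 0.1
M = np.eye(n_states)          # SR matrix, initialized to the identity
w = np.zeros(n_states)        # learned reward weight per state

def sr_update(s, s_next, r):
    # TD update of the SR row for state s
    one_hot = np.eye(n_states)[s]
    td_error = one_hot + gamma * M[s_next] - M[s]
    M[s] += alpha * td_error
    # Simple incremental estimate of the reward obtained at s_next
    w[s_next] += alpha * (r - w[s_next])

def state_values():
    return M @ w

# Toy usage on a random walk; transitions and rewards here are arbitrary.
rng = np.random.default_rng(0)
s = 0
for _ in range(1000):
    s_next = rng.integers(n_states)
    r = 1.0 if s_next == n_states - 1 else 0.0
    sr_update(s, s_next, r)
    s = s_next
print(state_values())
```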

Localization and a Distributed Local Optimal Solution Algorithm for a Class of Multi-Agent Markov Decision Processes

  • Chang, Hyeong-Soo
    • International Journal of Control, Automation, and Systems / v.1 no.3 / pp.358-367 / 2003
  • We consider discrete-time factorial Markov Decision Processes (MDPs) in a multiple-decision-maker environment under the infinite-horizon average-reward criterion, with a general joint reward structure but a factorial joint state-transition structure. We introduce the "localization" concept, in which the global MDP is localized for each agent so that each agent only needs to consider a local MDP defined on its own state and action spaces. Based on this, we present a gradient-ascent-like iterative distributed algorithm that converges to a local optimal solution of the global MDP. The solution is an autonomous joint policy in that each agent's decision is based only on its local state.
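
For reference, the average-reward criterion and the factored transition structure the abstract refers to can be written as below; the notation is a standard formulation chosen here for illustration, not taken from the paper.

```latex
% Average reward of a joint policy \pi over agents i = 1, \dots, N (illustrative notation)
\[
\rho(\pi) = \lim_{T \to \infty} \frac{1}{T}\,
            \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T-1} R(s_t, a_t)\right],
\qquad
P(s' \mid s, a) = \prod_{i=1}^{N} P_i(s'_i \mid s_i, a_i).
\]
```

Here the joint reward R is general (not factored), while the joint transition factors across agents; localization lets agent i work with a local MDP over its own (s_i, a_i) only.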

The Effects of Urban Housewives' Environmental Knowledge and Family Resource Management Attitude on Family Resource Management Behavior (도시주부의 환경지식과 자원절약태도가 자원절약행동에 미치는 영향)

  • Hong Sang-Hee;Rhee Kyung-Hee;Kwak In-Suk
    • Journal of the Korean Home Economics Association / v.42 no.9 / pp.67-83 / 2004
  • The purposes of this study were (1) to analyze the effect of selected variables on urban housewives' family resource management attitude and behavior, and (2) to identify the causal effects of these variables on family resource management behavior. A sample of 641 housewives living in urban areas was selected. For the data analysis, one-way ANOVA, Duncan's multiple range test, t-tests, multiple regression, and path analysis were used. The major findings were as follows: 1. The housewives' level of family resource management behavior was lower than their attitude level. 2. The respondents' family resource management attitude and behavior were affected by the following independent variables: interest in environmental reports and newspapers, perception of time constraints, and perception of economic reward. 3. Family resource management attitude had the greatest causal effect on family resource management behavior.

Two-Dimensional POMDP-Based Opportunistic Spectrum Access in Time-Varying Environment with Fading Channels

  • Wang, Yumeng;Xu, Yuhua;Shen, Liang;Xu, Chenglong;Cheng, Yunpeng
    • Journal of Communications and Networks / v.16 no.2 / pp.217-226 / 2014
  • In this research, we study the problem of opportunistic spectrum access (OSA) in a time-varying environment with fading channels, where the channel state is characterized by both channel quality and the occupancy of primary users (PUs). First, a finite-state Markov channel model is introduced to represent a fading channel. Second, by jointly probing channel quality and exploring the activities of PUs, a two-dimensional partially observable Markov decision process framework is proposed for OSA. In addition, a greedy strategy is designed in which a secondary user (SU) selects the channel with the best expected data-transmission rate to maximize the instantaneous reward in the current slot. Compared with the optimal strategy, which also accounts for future reward, the greedy strategy offers low complexity and fairly good performance. Spectrum sensing errors, which cause collisions between a PU and an SU, are also discussed. Furthermore, we analyze the multiuser situation in which every SU adopts the proposed single-user strategy. Simulation results show that the proposed strategy attains a larger throughput than previous works under various parameter configurations.
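
A minimal Python sketch of the greedy selection rule described above: choose the channel with the highest expected instantaneous transmission reward given the current beliefs. The belief and rate models here are simplified assumptions, not the paper's full two-dimensional POMDP formulation.

```python
# Simplified sketch of the greedy channel-selection idea above; not the paper's model.
# Each channel has a belief that the PU is idle and an expected rate under its fading belief.
import numpy as np

def greedy_channel(belief_idle, expected_rate):
    """Pick the channel maximizing the expected instantaneous reward.

    belief_idle[k]   : current belief that channel k is free of primary users
    expected_rate[k] : expected data rate of channel k given its (fading) quality belief
    """
    expected_reward = belief_idle * expected_rate
    return int(np.argmax(expected_reward))

# Toy usage with made-up beliefs and rates for 4 channels.
belief_idle = np.array([0.9, 0.6, 0.75, 0.3])
expected_rate = np.array([1.2, 2.0, 1.5, 2.5])   # e.g., Mbps under the current fading belief
print("Sense/access channel:", greedy_channel(belief_idle, expected_rate))
```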

An Optimal Pricing Strategy in An M/M/1 Queueing System Based on Customer's Sojourn Time-Dependent Reward Level (고객의 체류시간의존 보상에 기반한 M/M/1 대기행렬 시스템에서의 최적 가격책정 전략)

  • Lee, Doo Ho
    • The Journal of the Korea Contents Association / v.16 no.7 / pp.146-153 / 2016
  • This work studies the equilibrium behavior of customers and the optimal pricing strategies of the server in a continuous-time M/M/1 queueing system. We consider two pricing models. The first is the ex-ante payment scheme, in which the server charges a flat price for all services; the second is the ex-post payment scheme, in which the server charges a price proportional to the time a customer spends in the system. In each pricing model, the departing customer receives a reward that is inversely proportional to his or her sojourn time. The server must make optimal pricing decisions to maximize its expected profit per unit time under each payment scheme. This work also investigates customers' equilibrium joining or balking behavior under the server's optimal pricing strategies. Numerical experiments are conducted to help the server select the better of the two pricing models.
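
The sketch below uses the standard M/M/1 mean sojourn time, 1/(mu - lambda), to compare server revenue rates under a flat (ex-ante) charge and a time-proportional (ex-post) charge. The price values are arbitrary, and the equilibrium joining/balking behavior analyzed in the paper, which would make the effective arrival rate price-dependent, is deliberately omitted from this illustration.

```python
# Illustrative M/M/1 numbers only; the profit expressions below are simplified
# assumptions, not the paper's equilibrium analysis.
lam, mu = 3.0, 5.0                 # arrival and service rates (requires lam < mu)
W = 1.0 / (mu - lam)               # standard M/M/1 mean sojourn time

flat_price = 0.8                   # ex-ante scheme: fixed charge per customer
rate_price = 1.5                   # ex-post scheme: charge per unit time in the system

profit_ex_ante = lam * flat_price            # revenue per unit time, flat fee
profit_ex_post = lam * rate_price * W        # revenue per unit time, time-proportional fee

print(f"mean sojourn time   : {W:.3f}")
print(f"ex-ante profit rate : {profit_ex_ante:.3f}")
print(f"ex-post profit rate : {profit_ex_post:.3f}")
```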

The Study about Problem in the course of Education of Special Guard (특수경비원 교육훈련실태 및 발전방안에 관한 연구)

  • Kang, Gil-Hoon
    • Korean Security Journal / no.6 / pp.291-326 / 2003
  • First, improvement of education and training conditions: training is strongly influenced by facilities and the surrounding environment, and according to the questionnaire, current conditions are very poor, so a dedicated training institute should be established as soon as possible. Second, improvement of training contents: in their actual work, special guards do not feel the need for curricula such as bayonet fencing or criminal law, so the training contents should be adjusted. Third, improvement of the training course: more than half of the respondents were satisfied to some degree with the lectures and instructors, but problems remained with time, communication, and content, which should be remedied. Fourth, adjustment of training time: 60% of respondents were dissatisfied with the amount of training for new duties, so on-the-job training needs to be reinforced and supervised and the training schedule restructured. Fifth, fairness of evaluation, reward, and punishment: 80% of respondents viewed reward and punishment negatively, so the educational institution should ensure that evaluation, reward, and punishment are administered fairly. Sixth, establishment of an institution for special guards: special guards should be trained by a specialized institution, but because educational programs, facilities, and environments are lacking, universities have taken the government's place in training them and the education still leaves much to be desired; to develop the private security industry, the government or a security association should establish schools for training and education and a comprehensive system for private security. Seventh, improvement of the training format: trainees must complete 80 hours of training, and over 75% of respondents wanted to lodge at training accommodations, so the system and format of the training need to be improved. Eighth, operation of training suited to job characteristics: within the 80 hours, common courses should be carried out together, while inspection and practice should be conducted separately depending on the class of trainee and the national facility being protected, which could improve the training. In sum, satisfaction with education and training needs to be raised by reflecting diverse opinions on programs, systems, and circumstances. This study has limitations because the survey covered only 132 people; future surveys should be collected nationwide, and harmony between the law and the institutional background should be pursued. To develop the special guard service and increase demand for it in the future, experts must be trained and the service expanded.

An optimal management policy for the surplus process with investments (재투자가 있는 잉여금 과정의 최적 운용정책)

  • Lim, Se-Jin;Choi, Seungkyoung;Lee, Eui-Yong
    • The Korean Journal of Applied Statistics / v.29 no.7 / pp.1165-1172 / 2016
  • In this paper, a surplus process with investments is introduced. Whenever the surplus reaches a target level V > 0, an amount S (0 ≤ S ≤ V) is invested in other business. After assigning three costs to the surplus process, namely a reward per unit amount invested, a penalty for the surplus becoming empty, and a keeping (opportunity) cost per unit amount of surplus per unit time, we obtain the long-run average cost per unit time of managing the surplus. We prove that there exists a unique value of S minimizing the long-run average cost per unit time for a given V, and that there exists a unique value of V minimizing it for a given S. Together, these two facts show that an optimal investment policy for the surplus exists when the surplus is managed over the long run.
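
The following Monte Carlo sketch estimates the long-run average cost of a (V, S) policy of the kind described above. The assumed surplus dynamics (constant premium income, Poisson claims with exponential sizes, restart at V/2 when the surplus empties) and all cost parameters are illustrative assumptions, not the paper's exact model.

```python
# Monte Carlo sketch of the (V, S) investment policy described above; the surplus
# dynamics and cost parameters below are illustrative assumptions only.
import numpy as np

def long_run_avg_cost(V, S, premium=1.0, claim_rate=0.8, mean_claim=1.0,
                      reward=0.5, penalty=10.0, keep_cost=0.05,
                      horizon=100_000.0, seed=0):
    rng = np.random.default_rng(seed)
    u, t, total = V / 2, 0.0, 0.0            # surplus level, clock, accumulated net cost
    while t < horizon:
        dt = rng.exponential(1.0 / claim_rate)                         # time to the next claim
        # Investment trigger(s): the surplus hits V before the claim arrives.
        while u < V and (V - u) / premium <= dt:
            hit = (V - u) / premium
            total += keep_cost * (u * hit + 0.5 * premium * hit ** 2)  # keeping cost up to the hit
            total -= reward * S                                        # reward for investing S
            u, dt, t = V - S, dt - hit, t + hit
        total += keep_cost * (u * dt + 0.5 * premium * dt ** 2)        # keeping cost until the claim
        u += premium * dt - rng.exponential(mean_claim)                # premium income minus claim
        t += dt
        if u <= 0:                                                     # surplus emptied
            total += penalty
            u = V / 2                                                  # assumed restart level
    return total / t

print(long_run_avg_cost(V=5.0, S=2.0))
```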